4 thoughts on “How Software Will Redefine Networking”

  1. I think some clarification is in order. If we see OpenFlow deployed, it is most likely to be used in the data center or campus rather than on the “Big I” Internet as such. The idea is very interesting, but I am a bit skeptical of eventual deployment on a wide scale. The primary idea is to use OpenFlow to TEST new network protocols, not necessarily to deploy them widely. The kicker is that OpenFlow was originally designed to let university researchers test network protocol enhancements, ignoring the fact that university researchers hardly ever ship real network protocol enhancements. This will be more useful to Google than Stanford.

    http://nanog.org/meetings/nanog50/abstracts.php?pt=MTY2OSZuYW5vZzUw&nm=nanog50 has three very good presentations on OpenFlow.
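
    For the curious, here is a minimal sketch of what “testing a protocol” through OpenFlow can look like, using the Ryu controller framework (the app name is illustrative, and an OpenFlow 1.3 switch is assumed): on handshake it installs a table-miss rule that punts unmatched packets to the controller, where experimental forwarding logic can run without touching switch firmware.

    ```python
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3


    class ExperimentApp(app_manager.RyuApp):
        """Illustrative app: claim a hook for experimental forwarding."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Lowest-priority match-all rule: any packet no other rule
            # claims is sent, unbuffered, to the controller, where the
            # experimental protocol logic decides what to do with it.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                              ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))
    ```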

    1. Daniel

      I made it pretty clear that the primary use case of OpenFlow is in data center, campus, and large enterprise environments. However, that doesn’t mean this won’t go in new directions.

      The point of ONF is to help foster an environment where smart folks like yourself can take OpenFlow in many new directions.

      Your skepticism is justified; many new technologies that come to the fore find a way to disappoint. What makes this intriguing, at least to me, is that it attacks the economics of networking as we know it while also evolving the Internet and our networks to meet today’s and tomorrow’s needs.

      Thank you for sharing the presentations. I appreciate the time you put into your comment.

  2. It is strange that OpenFlow is likened to the BIOS of a computer, because the BIOS is not typically modified or enhanced by the developer community. Typically the BIOS is written to a system’s hardware specs and programmed into the motherboard, and only the computer manufacturer builds and ships upgrades. If OpenFlow is truly the network disrupter, there must be a better way of enhancing and updating network equipment.

  3. From a technical perspective, coordinating the use of a shared resource within acceptable time horizons is difficult in a centralized manner, let alone a distributed one. Furthermore, although the concept, if developed appropriately, could be applied to provider networks maintained by telecoms, it would require a change in business model: the carrier’s primary responsibility becomes maintenance of the network, with limited management capabilities. Given the size of provider capital expenditures, it would be a very expensive proposition for Google and others to seek such capabilities, unless it were in private networks carved out for themselves, in which case carriers would simply be leasing bandwidth to Google et al.
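
     To make the coordination point concrete, here is a toy sketch (purely illustrative; no real controller API is used) of a centralized arbiter granting capacity on a shared link. Every flow set-up pays a round trip through a single serialized decision point, so set-up rate is capped by that arbiter no matter how fast the data plane is:

     ```python
     import queue
     import threading


     class CentralArbiter:
         """Single decision point granting capacity on one shared link."""

         def __init__(self, capacity_gbps):
             self.free = capacity_gbps      # bandwidth still unallocated
             self.requests = queue.Queue()  # every switch funnels in here
             threading.Thread(target=self._serve, daemon=True).start()

         def _serve(self):
             # Requests are handled strictly one at a time: consistency is
             # easy, but this loop is the throughput and latency bottleneck.
             while True:
                 gbps, reply = self.requests.get()
                 granted = gbps <= self.free
                 if granted:
                     self.free -= gbps
                 reply.put(granted)

         def reserve(self, gbps):
             reply = queue.Queue(maxsize=1)
             self.requests.put((gbps, reply))
             return reply.get()             # blocks for the full round trip


     arbiter = CentralArbiter(capacity_gbps=100)
     print(arbiter.reserve(40))   # True: 60 Gbps remain
     print(arbiter.reserve(70))   # False: would oversubscribe the link
     ```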
