Jonathan Heiliger, Facebook's Vice President of Technical Operations, has been a longtime proponent of shaking up the web infrastructure establishment. From storage to chips to servers, he has been a vocal champion of infrastructure hardware made to the specifications of large Internet companies like Facebook.
That is why he is super excited about the formation of the Open Networking Foundation (ONF), a not-for-profit industry group established to promote software-defined networking. ONF is backed by Internet giants Google, Yahoo and Facebook, along with Microsoft, Verizon and Deutsche Telekom. The foundation also has the support of numerous hardware makers, including Broadcom, Cisco, Dell, HP, IBM, Juniper Networks, Marvell, NEC, Netgear and VMware.
The first task of ONF will be to adopt and then lead the ongoing development of the OpenFlow standard (www.openflow.org) and encourage its adoption by freely licensing it to all member companies. ONF will then begin the process of defining global management interfaces. (ONF Press Release)
At the core of the ONF is OpenFlow, a standard that was the result of a joint research project between Stanford University and the University of California, Berkeley. In her article earlier this year, Stacey Higginbotham described the OpenFlow effort:
The idea behind the OpenFlow effort is that today’s network needs to be smarter and more flexible in order to handle and efficiently deliver more information. To do that, the fundamental idea is to separate the packet switching mechanisms and control functions. Users can freely develop and operate control middleware independently of the switching mechanism.
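To make that separation concrete, here is a toy sketch in Python. It is illustrative only, neither real OpenFlow messages nor any actual controller API: the switch is reduced to matching packets against a flow table, while a separate controller program decides which rules go into that table.

```python
# Toy model of the OpenFlow split: the switch only matches packets against
# a flow table, while a separate controller decides the rules. All names
# here are illustrative, not part of the OpenFlow spec.

class Switch:
    """Dumb forwarding element: it holds a flow table but no policy."""
    def __init__(self, controller):
        self.flow_table = []  # list of (match_fn, action) pairs
        self.controller = controller

    def handle_packet(self, pkt):
        for match_fn, action in self.flow_table:
            if match_fn(pkt):
                return action  # fast path: a rule is already installed
        # Table miss: punt the packet to the controller, which installs a rule.
        return self.controller.packet_in(self, pkt)

class Controller:
    """Control plane: all forwarding policy lives here, outside the switch."""
    def packet_in(self, switch, pkt):
        action = "forward:port2" if pkt["dst"] == "10.0.0.2" else "drop"
        dst = pkt["dst"]
        switch.flow_table.append((lambda p, d=dst: p["dst"] == d, action))
        return action

ctrl = Controller()
sw = Switch(ctrl)
print(sw.handle_packet({"dst": "10.0.0.2"}))  # miss -> controller installs rule
print(sw.handle_packet({"dst": "10.0.0.2"}))  # hit  -> handled by the switch alone
```

The point is the division of labor: swap in a different Controller class and the network's behavior changes without touching the Switch at all.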
One of ONF's big proponents is Nick McKeown, an ONF board member and professor at Stanford University. Speaking at our Structure conference in June 2010, he outlined his vision of software-defined networking as core to the future of the Internet.
McKeown pointed out that from big Internet companies to telecoms and data center operators, many are already experimenting with the idea of software-defined networking, which is why it made perfect sense for the various parties to come together and put all their momentum behind OpenFlow.
In a phone conversation earlier today, McKeown explained that at its crudest, OpenFlow is akin to the BIOS inside a personal computer: the firmware that talks to all the hardware elements and helps boot up the operating system (OS). On top of the BIOS sits the OS, and on top of the OS sit the applications. In the same way, OpenFlow can see all the network elements (switches, for instance) and work with a network operating system, which in turn can be used to build optimized applications.
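Extending McKeown's analogy in code, here is a hypothetical layering sketch; none of these classes or method names come from a real API. OpenFlow plays the role of the narrow "BIOS" interface to each switch, a network OS builds a global view on top of it, and applications are written against the network OS rather than against individual boxes.

```python
# Hypothetical layering sketch; none of these names are a real API.

class OpenFlowChannel:
    """The 'BIOS' layer: one uniform way to program any switch."""
    def __init__(self, switch_id):
        self.switch_id = switch_id

    def add_flow(self, match, action):
        print(f"switch {self.switch_id}: {match} -> {action}")

class NetworkOS:
    """The 'OS' layer: a global view of the network built from many channels."""
    def __init__(self, switch_ids):
        self.switches = {sid: OpenFlowChannel(sid) for sid in switch_ids}

    def program(self, switch_id, match, action):
        self.switches[switch_id].add_flow(match, action)

def routing_app(net_os):
    """The 'application' layer: written against the network OS, not the hardware."""
    # Assume a precomputed path s1 -> s2 for traffic to 10.0.0.0/24.
    for switch_id, port in [("s1", 2), ("s2", 1)]:
        net_os.program(switch_id, "dst=10.0.0.0/24", f"output:{port}")

routing_app(NetworkOS(["s1", "s2"]))
```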
Unlike the past, when enterprises were often the cutting-edge customers, today it is giants like Google and Facebook who are often the purveyors of cutting-edge technology and techniques. Their needs are poorly served by the hardware made and sold by companies like Cisco or Force10, mostly because those companies cannot make hardware optimized for the needs of a specific web company. In addition, their gear supports a whole range of standards, which adds overhead and slows down performance for a web company.
While networking technologies have also evolved in this time, the ONF believes that more rapid innovation is needed. SDN fulfills this need by enabling innovation in all kinds of networks through relatively simple software changes. SDN thus gives owners and operators of networks better control over their networks, allowing them to optimize network behavior to best serve their and their customers’ needs. (ONF Press Release)
"Now you can unleash the power of creative software writers at the network layer and use the networking infrastructure more effectively," said Facebook's Heiliger. So, for instance, Facebook could write its own network OS (and it is thinking about it) and then write software applications to take advantage of the network.
For instance, a Hadoop-based application could use the network in the wee hours of the morning to crunch data at a certain data center, depending on its geographical location and workload. During the day, that data center's network could be optimized for an altogether different application. "This opens up a Pandora's box of creativity," said Heiliger.
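A hedged sketch of what Heiliger's scenario could look like: a time-of-day policy that reprograms the network for bulk Hadoop traffic at night and latency-sensitive traffic during the day. The traffic classes, queue names and the install_flow hook below are all invented for illustration, not part of any real controller.

```python
from datetime import datetime

# Hypothetical time-of-day policy: the traffic classes, queue names and the
# install_flow hook are invented for illustration, not a real controller API.

POLICIES = {
    "night": {"hadoop-bulk": "queue:high-bandwidth", "web": "queue:default"},
    "day":   {"hadoop-bulk": "queue:background",     "web": "queue:low-latency"},
}

def current_policy(now=None):
    hour = (now or datetime.now()).hour
    return POLICIES["night"] if 1 <= hour < 5 else POLICIES["day"]

def apply_policy(install_flow, policy):
    """install_flow is whatever hook the controller exposes to push rules."""
    for traffic_class, action in policy.items():
        install_flow(match=f"class={traffic_class}", action=action)

# Stand-in for a real rule-installation hook: just print what would be pushed.
apply_policy(lambda match, action: print(match, "->", action), current_policy())
```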
OpenFlow has found favor in the academic community for a long time, but with the establishment of ONF, it seems we are looking at a bright commercial future for OpenFlow, one of the more disruptive developments in the world of networking technology.
Today, theoretically speaking, a giant Internet company like Google can buy networking silicon from Broadcom, build its own commodity switches and create its own network topology using OpenFlow. What costs millions of dollars today could be built for tens of thousands of dollars, and that is going to change the economics of the data center.
An apt analogy, to me, would be the arrival of x86 servers and their impact on Sun's E10K super servers. In time, the minnows ate away at the whale. I wouldn't be surprised if the same happens to the switching business first, and then to other elements of the networking ecosystem. An engineer familiar with the OpenFlow technology joked that OpenFlow frees you from the tyranny of the firmware providers. That is networking speak for Cisco, Juniper and Force10 Networks.
Urs Hoelzle, Google's Senior Vice President of Engineering and ONF President and Chairman of the Board, predicted that by the end of this year we are going to see the first OpenFlow-capable hardware come to market. In addition, he expects OpenFlow 1.1-based controllers for data centers to come to market as well. Martin Casado, who was one of the primary researchers on OpenFlow, has co-founded Nicira, which is building an OpenFlow controller for data centers, while Big Switch Networks, a stealth-mode company, is currently working on an OpenFlow controller for the enterprise market.
Hoelzle cautioned that I shouldn't get too far ahead of myself, because it will be a couple of years before the technology starts to make its way through the different elements of the networking hardware stack. When I asked him whether we could one day see OpenFlow make its way into our home networking gear (our home networks are getting increasingly complex, after all), Hoelzle said it won't be anytime soon, but it is within the realm of possibility.
I think some clarification is in order. If we see OpenFlow deployed, it is most likely to be used in the data center or on campus networks, rather than the "Big I" Internet as such. The idea is very interesting, but I am a bit skeptical of eventual wide-scale deployment. The primary idea is to use OpenFlow to TEST new network protocols, not necessarily to deploy them widely. The kicker is that OpenFlow was originally designed to let university researchers test network protocol enhancements, ignoring the fact that university researchers don't ever actually make any real network protocol enhancements. This will be more useful to Google than Stanford.
http://nanog.org/meetings/nanog50/abstracts.php?pt=MTY2OSZuYW5vZzUw&nm=nanog50 has three very good presentations on OpenFlow.
Daniel
I made it pretty clear that the primary use case of OpenFlow is in data center, campus and large enterprise environments. However, that doesn't mean this won't go in new directions.
The point of ONF is to help foster an environment where smart folks like yourself can take OpenFlow in many new directions.
Your skepticism is justified, for many new technologies that come to the fore find a way to disappoint. What makes this intriguing, at least to me, is that it actually attacks the economics of networking as we know it, while at the same time evolving the Internet and our networks to meet today's and tomorrow's needs.
Thank you for sharing the presentations. I appreciate the time you put into your comment.
It is strange to liken OpenFlow to the BIOS of a computer, because the BIOS is not typically modified or enhanced by the developer community. Typically, the BIOS is written to the hardware specs of a system and programmed into the system motherboard, and only the computer manufacturer deals with building and programming upgrades. If OpenFlow is truly the network disruptor, there must be a better way of enhancing and updating network equipment.
From a technical perspective, coordinating the use of a shared resource within acceptable time horizons is difficult in a centralized manner, let alone a distributed one. Furthermore, although the concept, if developed appropriately, could be applied to provider networks maintained by telecoms, it would require a change in business model: the carrier's primary responsibility becomes the maintenance of the network, with limited management capabilities. Given the size of provider capital expenditures, it would be a very expensive proposition for Google and others to seek such capabilities, unless it was in private networks carved out for themselves, in which case carriers are simply leasing bandwidth to Google et al.