Why Network Virtualization Is Important

7 thoughts on “Why Network Virtualization Is Important”

  1. These innovations will never have an impact on link capacity, but they may increase the opportunities for granular segmentation. At the metropolitan fiber level, fewer than 20% of carriers have upgraded their inter-facility links with DWDM transceiver cards. I think the link-level upgrades have to go in before carriers spend money on fancy virtualized routers. Most carrier network management apps already have underused virtual-circuit features.

  2. Alan, you raise a good point, but the JNPR and CSCO boxes are going into core MPLS nodes. So it’s not really an issue of DWDM capacity, because the locations where they’re being installed already have it. But your 20% number is right – there’s little need right now for a multi-chassis router in a typical 10,000-line central office.

  3. I don’t think the point here is additional link capacity or higher throughput. Virtualizing multiple high-density routers is all about lowering operational costs and building efficiency into future deployments. No one wants to configure thousands of ports individually – they want to configure a virtual network or path and have policy (QoS, traffic shaping, etc.) apply to all of the ports along it (the first sketch after the thread illustrates the idea). Just as server virtualization lets systems administrators manage lots of physical server resources, network virtualization should apply to lots of physical router resources. As a concept this is nothing terribly new, but with the word “virtualization” stuck onto it, it might get JNPR some good press 🙂

  4. My point was that these ‘new’ ways of subdividing ports and sub-networks don’t materially add to the existing network management features. If carriers haven’t upgraded their transceivers, there is no great urgency to have virtual ports. Gotta have one to take advantage of the other – unless, of course, the facility is way under-provisioned.

  5. Juniper’s new router might have some near-term issues in the cloud world. The 100G solution uses two 50GE processors – it works in principle, but there is a packet re-ordering problem they’ll have to solve (see the second sketch below). And it’s questionable whether the 100G solution will be standards-compliant – and if it isn’t, are you locked in to Juniper? 100G is going to be necessary in the cloud world, and vendors have to get this solved soon to make virtualization a working reality.
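
A minimal sketch of the per-path policy idea from comment 3, assuming a toy controller model: define QoS once on a virtual path and fan it out to every member port, instead of touching thousands of ports one by one. Every class and method name here is hypothetical, not any vendor’s actual API.

```python
# Illustrative only: a toy model of applying one policy to all ports
# along a virtual path, per the discussion in comment 3.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class QosPolicy:
    priority: int     # queueing priority class
    shape_mbps: int   # traffic-shaping rate in Mbps


@dataclass
class Port:
    name: str
    policy: Optional[QosPolicy] = None


@dataclass
class VirtualPath:
    """A named path spanning many physical ports."""
    name: str
    ports: list = field(default_factory=list)

    def apply_policy(self, policy: QosPolicy) -> None:
        # One call configures every member port consistently,
        # instead of per-port manual configuration.
        for port in self.ports:
            port.policy = policy


# Usage: one policy statement covers the whole 48-port path.
path = VirtualPath("vpn-customer-a", [Port(f"ge-0/0/{i}") for i in range(48)])
path.apply_policy(QosPolicy(priority=3, shape_mbps=500))
```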

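And a small sketch of the re-ordering concern from comment 5: striping packets round-robin across two 50G lanes can deliver packets of the same flow out of order, whereas hashing on the flow key pins each flow to one lane and preserves per-flow order. The functions and flow-key format below are illustrative assumptions, not Juniper’s actual forwarding design.

```python
# Illustrative only: why naive striping across two lanes can reorder
# packets of a flow, and how flow hashing avoids it.
import zlib


def round_robin_lane(packet_index: int, lanes: int = 2) -> int:
    # Naive striping: consecutive packets of the same flow land on
    # different lanes, and unequal lane latency can re-order them.
    return packet_index % lanes


def flow_hash_lane(src: str, dst: str, sport: int, dport: int,
                   lanes: int = 2) -> int:
    # Deterministic hash of the flow key: every packet of the flow
    # takes the same lane, so ordering within the flow is preserved.
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return zlib.crc32(key) % lanes


# One flow, four packets: round-robin alternates lanes, the hash does not.
flow = ("10.0.0.1", "10.0.0.2", 40000, 443)
print([round_robin_lane(i) for i in range(4)])    # [0, 1, 0, 1]
print([flow_hash_lane(*flow) for _ in range(4)])  # same lane every time
```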