
The Path to Network Virtualization

posted by Tracy Corbo   | February 7, 2012 | 0 Comments

On September 6, Nicira, a startup network virtualization company, unveiled its Network Virtualization Platform (NVP). NVP is a software-based system that creates a distributed virtual network infrastructure in cloud data centers, completely decoupled from and independent of physical network hardware. Nicira has announced that well-known entities such as AT&T, eBay, Fidelity Investments, NTT and Rackspace are already using the product. NVP software is delivered through a usage-based monthly subscription that scales per virtual network port, so customers pay only for what they use. NVP software has been commercially available since July 2011.
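The per-port, usage-based pricing described above reduces to simple arithmetic. The sketch below illustrates the model only; the rate is a hypothetical placeholder, since Nicira has not published list pricing.

```python
# Sketch of a usage-based, per-virtual-network-port subscription.
# The rate is a made-up illustration, not Nicira's actual pricing.

def monthly_cost(active_ports: int, rate_per_port: float) -> float:
    """Customers pay only for the virtual ports in use that month."""
    return active_ports * rate_per_port

# e.g. 500 active virtual ports at a hypothetical $1.50/port/month
print(monthly_cost(500, 1.50))  # 750.0
```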

OpenFlow and SDN

While server virtualization has decoupled the application from the underlying hardware, neither has been decoupled from the physical network. The networking layer has remained stubbornly physical, with little or no sign of moving away from that model. There are two sides to the argument: one holds that networking functionality is too highly specialized and requires purpose-built hardware and components to scale and deliver stability; the flip side is that the networking vendors do not want to give up their model and have made little or no effort to decouple the hardware from the network operating system software. These are not new arguments, but cloud has brought them to a head. Cloud computing is stymied as long as the networking layer remains in the physical plane. Oh sure, there are ways to work around it, and hybrid cloud solutions are clearly the near-term answer, but the sheer momentum of cloud, pushed along by the mobility movement, demands a change to the status quo. That change is coming in the form of Software Defined Networking, or SDN.

OpenFlow is step one in achieving SDN. It is a communications protocol that operates at Layer 2 and abstracts the forwarding plane away from the underlying network switch or router hardware, which makes it possible to work with equipment from multiple vendors. HP recently announced support for OpenFlow on a number of its networking switches. Unlike MPLS, OpenFlow is vendor neutral, with a goal of keeping it simple and effective. At its core, the goal is to remain faithful to the following concepts:

  • A separation of the data and control planes, with a well-defined, vendor-agnostic API/protocol between the two
  • A logically centralized control plane with an open API for network applications and services
  • Network slicing and virtualization to support experimentation at scale on a production network
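The first two concepts above can be sketched in a few lines: the switch (data plane) holds a flow table of match/action entries but makes no policy decisions; rules only arrive from a logically centralized controller over a control-plane API. This is a minimal illustrative model, not the actual OpenFlow wire protocol; the class and field names are assumptions for the sketch.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Match:
    """A small OpenFlow-style subset of Layer 2 match fields (illustrative)."""
    in_port: Optional[int] = None
    dl_dst: Optional[str] = None  # destination MAC address

    def hits(self, pkt: dict) -> bool:
        # A None field acts as a wildcard, as in an OpenFlow flow entry.
        return ((self.in_port is None or pkt["in_port"] == self.in_port)
                and (self.dl_dst is None or pkt["dl_dst"] == self.dl_dst))

@dataclass
class Switch:
    """Data plane: stores flow entries but makes no policy decisions itself."""
    table: list = field(default_factory=list)  # (match, out_port) pairs

    def install(self, match: Match, out_port: int) -> None:
        # In a real deployment this is driven by the centralized controller
        # over the control-plane protocol; the switch vendor does not matter
        # as long as the device speaks that protocol.
        self.table.append((match, out_port))

    def forward(self, pkt: dict) -> Optional[int]:
        for match, out_port in self.table:
            if match.hits(pkt):
                return out_port
        return None  # no entry; a real switch would punt to the controller

# The controller pushes a rule once; the switch then forwards on its own.
sw = Switch()
sw.install(Match(dl_dst="aa:bb:cc:dd:ee:ff"), out_port=3)
print(sw.forward({"in_port": 1, "dl_dst": "aa:bb:cc:dd:ee:ff"}))  # 3
```

The separation is the point: swap in a different controller, or switches from a different vendor, and neither side needs to know.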

Network virtualization vendors such as Embrane and Nicira are taking this to the next level, leveraging the abstraction away from the physical layer to build virtualized solutions that are no longer hampered by the underlying hardware layer of the network and, better yet, are vendor neutral by design. These early market entrants are largely working as orchestration solutions, linking networking resources to cloud services. The caveat here is: what role does the networking team play in this new model? If application teams can “point and click” to spin up new resources, including the underlying networking hardware that previously had to be manually configured and provisioned, could that result in an unbalanced or unstable network?
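To make the “point and click” concern above concrete, here is a hypothetical sketch of what such an orchestration request decomposes into: one call creates a logical switch and a port per VM, without any physical switch being reconfigured or the networking team being consulted. The class and method names are illustrative assumptions, not any vendor's real API.

```python
# Hypothetical orchestration sketch: an application team's single request
# creates purely logical network objects on top of the physical fabric.

import uuid

class VirtualNetworkOrchestrator:
    def __init__(self):
        self.networks = {}  # net_id -> logical network description

    def create_network(self, name, vm_names):
        """Create a logical switch plus one logical port per VM.

        No physical device is touched -- which is exactly the operational
        question raised above: who on the networking team sees this happen?
        """
        net_id = str(uuid.uuid4())
        self.networks[net_id] = {
            "name": name,
            "ports": {vm: str(uuid.uuid4()) for vm in vm_names},
        }
        return net_id

orch = VirtualNetworkOrchestrator()
net = orch.create_network("app-tier", ["web-01", "web-02"])
print(len(orch.networks[net]["ports"]))  # 2
```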

We are still in the early stages of SDN and network virtualization, but I would like to see more thought given to managing the underlying networking hardware in this model. It is easy to understand the need to simplify and improve the provisioning of resources down to the networking layer, but at the same time network stability cannot be compromised. Also, the more abstracted something is, the more difficult it is to troubleshoot. So, assuming network virtualization is by design meant to bypass and avoid bad network segments, what if a problem happens up at the virtualized layer of the network? Who owns the problem, and what tools do you use to resolve it? This feels like something in between middleware and a thin overlay network, and it would provide an interim solution until networking hardware can evolve.








