We ended 2013 with vendors of all stripes jumping on the SDN bandwagon, which has done more to confuse the issue than to solve any real-world business problems. So we must ask ourselves: what is SDN, really? At this point it could be anything from a dessert topping to a floor polish, depending on whom you ask. SDN was supposed to make managing, provisioning, and orchestrating the network easier. To date, it would appear that SDN has done more to exacerbate the schism that already existed between the physical and virtual components of the IT infrastructure. We still need physical infrastructure, so this is about finding a way for the physical and virtual components to work together in concert: network equipment vendors cannot ignore the virtual component, and virtual components must work in harmony with the physical elements. In EMA's latest research report, "Obstacles and Priorities on the Journey to the Software-Defined Data Center," customers said that they do not want to rip and replace their existing network infrastructure to gain greater programmability. What is needed is a transitional technology that can bridge the old and the new, and in the technology world that typically means some sort of middleware.

Progress – Moving Forward

On the plus side, SDN did a great deal to get the ball rolling. There is no question that the networking equipment vendors were dragging their feet, perhaps hoping all the hype would die down and go away. The truth of the matter is that the hype fueled and highlighted what has become a major issue in today's IT infrastructure: the physical network has failed to evolve and is now slowing down major initiatives such as cloud and the software-defined datacenter (SDDC). The noise around SDN got so loud that it forced all the network equipment vendors to address how they would make the networking layer more accessible. OpenFlow has emerged as at least a partial first step in adding some programmability to existing network infrastructures, and at this point Arista, Brocade, Cisco, Dell, HP, and Juniper offer either OpenFlow-enabled switches or at least some level of compatibility with the technology. We have even begun to see the emergence of viable white box networking solutions based on OpenFlow from startups such as Cumulus Networks and Pica8.
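To make "programmability" concrete: in the OpenFlow model, a controller pushes match/action flow rules down to switches instead of an engineer configuring each box by hand. The sketch below is illustrative only; it models a simplified flow table as plain Python data structures, with field names loosely borrowed from the OpenFlow 1.3 match fields. It is not a real controller API, just a picture of the match/action idea.

```python
# Illustrative sketch of OpenFlow-style match/action flow rules.
# Field names loosely follow OpenFlow 1.3 conventions; this is a
# data-structure-level illustration, not a real controller library.

def make_flow_rule(priority, match, actions):
    """Build a simplified flow rule: match packets, apply actions."""
    return {"priority": priority, "match": match, "actions": actions}

def select_rule(table, packet):
    """Return the highest-priority rule whose match fields all agree
    with the packet -- a rough sketch of a switch table lookup."""
    candidates = [
        r for r in table
        if all(packet.get(k) == v for k, v in r["match"].items())
    ]
    return max(candidates, key=lambda r: r["priority"], default=None)

# A controller might install rules like these on a switch:
table = [
    make_flow_rule(100, {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},
                   [("output", 2)]),          # forward known traffic
    make_flow_rule(10, {}, [("output", "CONTROLLER")]),  # table-miss
]

pkt = {"eth_type": 0x0800, "ipv4_dst": "10.0.0.5"}
rule = select_rule(table, pkt)
print(rule["actions"])  # the specific high-priority rule wins
```

The point is the separation of concerns: the rules are data, produced centrally and installed everywhere, rather than per-box configuration typed at a prompt.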

Contention – Too Much FUD – Too Many Controllers

It would be great if everyone could play nice, but sadly there are many points of contention already, and others looming on the horizon. Where to begin? The term SDN is so overhyped that it fails to conjure up any kind of clear and concise picture of how it will get us from the manual network processes we have today to the point where network programmability is as easy as point and click. SDN is quickly becoming a moniker for "me too" wannabes. Because of this, there is no single approach to achieving network programmability, and there are vendors inside and outside the networking space claiming to have SDN-based technology. There has also been a plethora of SDN-related "open source" projects. The original OpenFlow project has been archived and all new development has moved to the ONF working group, and not everyone came along for the ride, so there are OpenFlow 1.0-enabled solutions (from the original 1.0 spec) as well as OpenFlow 1.x versions (with 1.3 being the latest) on the market. There is also the OpenDaylight Project, a Linux Foundation initiative for which IBM is a strong advocate and participant. OpenContrail is an open source project based on the Juniper Contrail solution. And while you will see multiple vendor names appear on various open source projects, it would appear that their level of participation varies. Then you have Cisco, the last to the party, with its Application Centric Infrastructure (Cisco ACI) and its own Application Policy Infrastructure Controller (Cisco APIC), which will support OpenFlow. So while OpenFlow shows the most promise of providing common ground between different equipment vendors, it would appear that not all OpenFlow support is created equal. And with all these groups building their own controllers and accompanying interfaces, there will be no single standard interface – northbound, southbound, east, or west.
The ONF group recently set up a working group (NBI WG) to develop an open set of northbound interfaces, but these will be specific to the OpenFlow 1.x spec and, meanwhile, vendors are off building their own northbound interfaces into their controller of choice.

Pitfalls – Major Technology Shifts Take Years, Even Decades

Change is hard and major technology shifts can take decades, but there are some things that I personally think need to change, both in how network equipment is designed and in how network engineers work with it. Some network engineers love the CLI, but the CLI is the bane of automation. We must be able to automate key network processes such as configuration changes, and that cannot happen if the CLI is the tool of choice: the CLI is a tool for manual, individual device configuration, and it does not readily lend itself to automated processes. Network equipment vendors love proprietary network fabrics wedded to their hardware designs, but this approach can be a double-edged sword. On the one hand, it may create some feature/function benefits that are very vendor-specific, but at the same time it can make it harder to separate software from hardware logic. It can also make it more difficult for third-party interfaces to take full advantage of all the underlying feature sets. Finally, APIs are not enough. Interoperability must happen through common shared protocols like OpenFlow, not proprietary APIs. APIs require upkeep on the part of both the vendor and the customer, and can quickly result in "brittle" architectures, where backward-compatibility concerns turn into major barriers to product upgrades.
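To illustrate why the CLI resists automation while structured, programmatic approaches do not, consider the sketch below. The device names and the config format are invented for illustration (no real vendor syntax or API is implied): one structured definition of intent is rendered into a configuration for every device in a fleet, a task that is repetitive and error-prone when typed one box at a time at a CLI prompt.

```python
# Hypothetical sketch: rendering VLAN configuration for a fleet of
# switches from structured data. Device names and the config template
# are invented for illustration; in practice the output would be pushed
# via a controller or device API rather than typed at each CLI.

VLAN_TEMPLATE = "vlan {vlan_id}\n name {name}\n"

def render_config(device, vlans):
    """Render one device's VLAN stanzas from a list of dicts."""
    body = "".join(VLAN_TEMPLATE.format(**v) for v in vlans)
    return f"! config for {device}\n{body}"

devices = ["edge-sw-01", "edge-sw-02", "edge-sw-03"]  # hypothetical names
vlans = [
    {"vlan_id": 10, "name": "servers"},
    {"vlan_id": 20, "name": "storage"},
]

# One definition of intent, applied uniformly to every device:
configs = {d: render_config(d, vlans) for d in devices}
print(configs["edge-sw-01"])
```

The design point: the intent lives in data, so adding a VLAN means editing one list, not logging into N switches, and the change is reviewable and repeatable in a way a CLI session is not.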

The Short-Term Answer Is Not About SDN – It's About Gateways

SDN-enabled this and that will continue to roll out in the coming year, but don't look to SDN to solve the most immediate issue at hand: how do we make the physical and virtual infrastructures visible to both network and datacenter operations teams? How do we share data and ensure that resources are not over- or under-provisioned? The answer looks to be along the lines of what is provided within VMware's NSX Network Gateway Services. In the latter part of 2013, Arista, Brocade, Cumulus Networks, Dell, HP Networking, and Juniper signed up to provide integration between VMware NSX and their networking equipment. If all goes as planned (or at least as promised), we are talking about a common view of shared information across both virtual and network management platforms. And I would say that this is the most interesting and exciting news to come out of all the SDN hype. Today we have a real-world problem around managing and troubleshooting virtual components in the network. Oh, don't get me wrong – the networking layer needs to evolve – but if you ask me (and I am biased), getting better visibility trumps programmability any day. Let the SDN wars rage on, but give me visibility and manageability across my physical and virtual elements in the network sooner rather than later.
