I was on a call listening to a Riverbed customer explain a dilemma they had faced and how Riverbed helped them solve a difficult performance issue. What was really interesting was how the problem came about. The company, a large global utility provider based in Europe, decided to move a major application to a cloud-based SaaS service. The good news was that it took the day-to-day maintenance of the software out of their hands. The bad news was that performance was poor. The company ran into significant performance issues because of the way the cloud service was designed. He referred to it as a “monolithic cloud application,” meaning that the design forced all traffic to travel to the cloud provider’s single fixed datacenter in the United States. The short version of the story, minus all the gory details, is that they ended up backhauling the traffic to their own datacenter, where they could make use of their MPLS services and Riverbed’s acceleration solutions to fix the problem. This left me scratching my head, because it certainly sounded a bit counterproductive to their overall objective.
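To see why a single far-away datacenter hurts a chatty application, and why backhauling onto an accelerated MPLS path can actually come out ahead, here is a rough back-of-the-envelope sketch. The RTT values and round-trip counts are illustrative assumptions on my part, not measurements from the customer in question.

```python
# Rough illustration: response time of a "chatty" application is dominated by
# round trips multiplied by RTT, not by raw bandwidth.
# All numbers below are illustrative assumptions, not measured values.

def response_time_ms(rtt_ms, round_trips, server_time_ms=50):
    """Approximate user-perceived response time for one transaction."""
    return rtt_ms * round_trips + server_time_ms

# Direct path: European branch office to a single US cloud datacenter.
direct_rtt = 140           # assumed transatlantic RTT in ms
chatty_round_trips = 30    # assumed round trips per application transaction

# Backhauled path: branch -> European datacenter over MPLS, where a WAN
# accelerator cuts round trips (protocol optimization, caching); the long
# transatlantic hop is still paid, but far less often.
mpls_rtt = 30              # assumed branch-to-datacenter RTT in ms
optimized_round_trips = 4  # assumed after acceleration/de-duplication
transatlantic_hop = 140    # one remaining long-haul round trip

direct = response_time_ms(direct_rtt, chatty_round_trips)
backhauled = response_time_ms(mpls_rtt, optimized_round_trips) + transatlantic_hop

print(f"Direct to US cloud:       ~{direct} ms per transaction")
print(f"Backhauled + accelerated: ~{backhauled} ms per transaction")
```

With those assumed numbers the direct path lands in the multi-second range while the backhauled, accelerated path stays in the hundreds of milliseconds, which is why the "counterintuitive" design made sense for them.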

In our rush to embrace virtualization across all aspects of the IT infrastructure and stuff everything into the cloud, our performance, visibility, and management issues do not magically go away. As a matter of fact, things get even more difficult, because IT departments have now ceded control of, and lost visibility into, pieces of the infrastructure, the application, and the network connectivity. Cloud and virtualization are not the panacea for all that ails IT. They are pieces of the next generation of computing design and infrastructure. They will become components of the overall design, but we are a far cry from the zero-IT-footprint datacenter. The reality is that we are building hybrid environments that include both physical and virtual elements. Some components are more readily virtualized than others, and that can be wholly dependent on a particular deployment scenario. For example, when there is no option for on-premises equipment, a virtualized solution might be appropriate, while at the same time an IT department might purchase a large, scalable hardware solution to place in the datacenter. There is no right or wrong here. We are seeing both WAN optimization and ADC vendors embrace the virtual paradigm. F5 has recently made two announcements along these lines: a new subscription-based billing model for AWS and a new licensing model that combines physical and virtual components, enabling customers to pick and choose, mix and match, what works best for them.

In all the zeal to latch onto something new and promising, it is sometimes easy to toss out the good with the bad. Enterprise IT departments need to figure out how to best utilize their physical, virtual, and cloud-based resources in light of their particular deployment needs, not based on industry hype. We are just now hitting the reality-check wall with cloud computing: cloud may not be the best option in some deployment situations. What we have are more choices, but that does not negate the need to apply logic and best practices to the network model and design. For example, if you are a globally distributed company with end users scattered across the planet, then a cloud-based application that resides in a single site in one country is going to have performance problems for some users. We already know that; it is a well-established fact. The question is whether those problems can be mitigated without losing sight of the original operational objectives. In many cases the answer is yes, and we might be surprised how often we reach back into our bag of traditional performance optimization tricks to make it so. New does not always mean better. We have more choices; the trick is to choose wisely.
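As a concrete reminder of why distance to a single cloud site still matters, here is a minimal sketch of the classic TCP window-over-RTT throughput ceiling. The window size and RTT figures are assumed, illustrative values, not data from any particular provider.

```python
# Minimal sketch: single-connection TCP throughput is bounded by the receive
# window divided by the round-trip time, so the same application behaves very
# differently depending on how far away the lone cloud site is.
# Window size and RTTs below are assumed, illustrative values.

WINDOW_BYTES = 64 * 1024  # a common default window without window scaling

def max_throughput_mbps(rtt_ms, window_bytes=WINDOW_BYTES):
    """Upper bound on one TCP connection's throughput: window / RTT."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1_000_000

for site, rtt in [("same region", 20), ("cross-continent", 90), ("intercontinental", 180)]:
    print(f"{site:>16}: RTT {rtt:3d} ms -> at most {max_throughput_mbps(rtt):5.1f} Mbps per connection")
```

This is exactly the kind of limit the traditional toolbox (window scaling, protocol proxies, data reduction) was built to attack, which is why those tricks keep resurfacing in cloud designs.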
