Thanksgiving is a time of gathering and community, and for many of us a primary task, beyond the obvious food and football traditions, will be to reach out and connect with extended family who can’t be with us in person.  For my family, it will be Skype video chat when possible and phone calls when not.  But I must say, despite my best efforts at configuration, I am unfulfilled by the Skype video chat experience.  Now don’t get me wrong – it’s better than a phone call, but my tech experiences (as well as my inner geek) make me believe with certainty that we can do much, much better…

Let’s take the example of telepresence suites. They are real-life Star Trek Enterprise bridge view screens – life-sized, high-def, and audio-perfect. If this technology were to become ubiquitous, it would fundamentally change the way humans interact.  Just think of the opportunities – set up a telepresence system at one end of your dining room so you could include your distant cousins or grandparents at the table.  It’s an intriguing idea that may come true someday for the consumer market, but we are much closer to this reality within the enterprise.  And in both cases, the true revolution is being driven not by high-end suites, but by devices that everyone has or soon will have – front-facing cameras on smartphones and tablets.

As much as I am enthralled with this technology, I also understand the technical challenges facing those who wish to deploy it.  From an enterprise network planning and operations perspective, as well as from a service quality assurance perspective, the challenge can be summarized in one word: volume. While a single telepresence suite can consume multiple gigabits of bandwidth, their relatively small total numbers mean that planning and segmenting networks around them is workable. But when every endpoint in a large, distributed network may simultaneously demand continuous, high-priority, real-time videoconferencing at minimum rates of 100–200 kbps, the challenge quickly becomes daunting.

Consider the case of the international investment bank I interviewed as part of my recently published research report, Videoconferencing Impact on Network Management. The company has deployed over 10,000 high-definition desktop video devices, and expects to double that number over the next 12–18 months. If 5% of those endpoints are active, the total aggregate bandwidth impact remains at a comfortable level of 100 Mbps or less. But on a busy day, when 20% or 30% of those endpoints are active at any one time, that footprint can rapidly approach a full gigabit per second. And remember, this is no longer a dedicated network – this traffic must travel a shared network that is also carrying critical business applications and data feeds. See how much fun this is going to be?
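The arithmetic behind those figures can be sketched in a few lines. This is a back-of-the-envelope illustration, not the bank’s actual planning model: the 200 kbps per-stream rate is an assumption drawn from the range mentioned earlier, and the endpoint counts and concurrency levels mirror the scenarios above.

```python
# Back-of-the-envelope sketch of aggregate desktop-video bandwidth.
# Assumption: a nominal 200 kbps per active stream (illustrative, not the
# bank's real planning number).

def aggregate_mbps(endpoints: int, active_fraction: float,
                   kbps_per_stream: int = 200) -> float:
    """Total bandwidth in Mbps when a fraction of endpoints are in active calls."""
    active_streams = endpoints * active_fraction
    return active_streams * kbps_per_stream / 1000.0  # kbps -> Mbps

# Today's deployment vs. the planned doubling, at light and heavy concurrency.
for endpoints in (10_000, 20_000):
    for pct in (0.05, 0.20, 0.30):
        print(f"{endpoints:>6} endpoints, {pct:4.0%} active: "
              f"{aggregate_mbps(endpoints, pct):6.0f} Mbps")
```

At 5% concurrency the 10,000-endpoint fleet lands right at 100 Mbps, and once the deployment doubles, a 30% busy day pushes well past a gigabit – which is exactly why the shared-network math gets uncomfortable.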

That same investment bank has been working on this problem for years, and firmly believes it has a plan to make it work. First, videoconferencing from the desktop is assigned one half of the high-priority network traffic queue (the other half is for VoIP), which is in turn accorded 30% of planned network traffic to and from distributed (WAN-connected) locations. This is the standard configuration for most sites; however, adjustments are made depending on the constituent composition of any particular site, either by shifting the mix between video and VoIP or by increasing the size of the high-priority queue. The bank’s Voice & Video team is populated with personnel who mostly came out of the networking organization. Consequently, they not only have a keen understanding of the impact this traffic will have on the network, but they have also maintained a close working relationship with the network team, so monitoring can be shared and adjustments made quickly and smoothly.
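To make the queue math concrete, here is a minimal sketch of how those allocations compose. The 30% high-priority share and the 50/50 video/VoIP split come from the bank’s description above; the 100 Mbps link size is an invented example, and real deployments would enforce this via router QoS policy, not application code.

```python
# Sketch of the described QoS bandwidth split on a hypothetical WAN link.
# The 30% high-priority share and the 50/50 video/VoIP split are from the
# article; the link size passed in is an invented example.

def qos_split(link_mbps: float, high_priority_share: float = 0.30,
              video_fraction_of_hp: float = 0.50) -> dict:
    """Return the planned Mbps allocation for each traffic class."""
    high_priority = link_mbps * high_priority_share
    video = high_priority * video_fraction_of_hp
    return {
        "video": video,
        "voip": high_priority - video,
        "best_effort": link_mbps - high_priority,  # everything else shares this
    }

print(qos_split(100.0))  # a 100 Mbps site link under the standard policy
```

On that hypothetical 100 Mbps link, desktop video gets 15 Mbps – enough for roughly 75 concurrent 200 kbps streams – which shows why per-site adjustments to the mix or the queue size matter so much.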

One other interesting finding was the set of experiences shared around what most often causes videoconferencing quality to suffer. Our respondents told us that the most common root cause was operator error – not all that surprising, given that videoconferencing systems are still in the early phase of broad adoption. But beyond that, the next most common causes of problems were (in order):

  1. Network traffic congestion in the WAN
  2. Network traffic congestion in the LAN
  3. Network latency in the WAN
  4. Network latency in the LAN

These four problems clustered together as more common than a wide range of other potential sources, including endpoint device health, network device configuration, endpoint device configuration, network device health, and even videoconferencing system health. Quite clearly, the greatest challenge facing successful deployment of videoconferencing (beyond end-user training) is the integrity of the network. What surprised me was that LAN congestion and latency were ranked so closely behind WAN congestion and latency as root causes – the latter would naturally be expected to be stress points, given bandwidth constraints and inherently higher latency contributions. This means that network planners and operators must pay just as close attention to QoS policies in the LAN as they do in the WAN.

Network optimization also plays a role here. Another practitioner I interviewed related a story that illustrates the point. His shop had deployed WAN optimization controllers (WOCs) to help with QoS policy enforcement and data compression across the WAN. When those WOCs temporarily lost their QoS policy rules during a software upgrade, the team immediately began seeing videoconferencing quality issues, and the issues disappeared as soon as the team restored the QoS rules to the WOCs.  The experience was a powerful lesson in just how much videoconferencing relies on QoS.

There’s more, of course.  One area of emerging focus for desktop videoconferencing is how it will work (or not) with hosted desktop technology such as VDI. More on that in a future post, but for now, please enjoy your holiday…

Happy Thanksgiving!

