This is where Enterprise Network Availability Monitoring Systems (ENAMS) come into play. ENAMS are software products used by network operators and managers to keep a constant watch over their networks. For those who still have a Network Operations Center (NOC), this would be the platform used by operators for live monitoring. EMA has been following these technologies for its entire history, and recently published an updated landscape report on major ENAMS solutions. In that report, 17 products from 16 vendors were reviewed against a set of functional and ownership/operational cost metrics that EMA has developed based on market research and practitioner dialogue. Among the products reviewed are stalwarts of the NOC such as HP NNMi (née OpenView), CA Spectrum, and Ipswitch WhatsUp Gold, as well as many others.
Some of the key features and functionality emphasized within the study include:
- Discovery: In order to manage a network, you have to understand what comprises that network and how those elements are connected and dependent upon one another. This can be done manually, but automated discovery can make it much easier to keep up with change.
- Alarm management: The most basic output of an ENAMS is the running list of alerts and alarms, either sent by network devices or interpreted and raised by the ENAMS itself. These form the basis of investigations and work tasks for NOC personnel.
- Fault isolation and troubleshooting: When a problem does arise, these features help in diagnosing and determining root causes. Some are automated; others are ideally organized to facilitate rapid manual diagnosis.
The most recent edition of the study reflects a vibrant and dynamic range of available choices for those seeking effective, scalable, enterprise-class network monitoring solutions. As with most EMA Radar Reports, the solutions fall along a continuum: at one end are those designed for ultra-large, complex managed environments, with loads of bells and whistles (but also a high cost of acquisition and ownership); at the other are those designed to cost-effectively meet the needs of small to medium-sized networks. This is a longstanding and mature sector, so feature completeness is high across the board compared to other areas of management technology. Even the lesser-known ENAMS challengers are coming to the table with significant capabilities. That is good news, because it keeps all of the vendors moving forward, keeping pace with new technologies to be managed and new techniques for continuous improvement in efficiency and effectiveness.
First off, there weren’t a lot of net new things being introduced, other than some roadmaps shared under NDA and a truly cool multi-station mobile work experience demo. But that’s OK. Too often, tech companies feel compelled to forever be pushing “something new,” when in reality IT pros need vendors to stick to existing storylines, execute, and deliver. I’d count Citrix among the latter.
Citrix has positioned itself in the technology arena as the essential glue between applications, infrastructure, and the end user/customer. Its solutions are mostly aimed at solving problems for the enterprise; however, a growing number are applicable to commercial/external use cases.
What I found most interesting about the overall Citrix strategy is that it aligns quite nicely with the high-level vision that we at EMA endorse. That vision starts with recognizing that the end goal of all IT investments and initiatives is to improve human and business outcomes. The Citrix vision then adds the next layer of detail by recognizing that three core elements must come together to make this possible – IT infrastructure, applications, and the end user device. Oh – and security has to be everywhere. Citrix refers to this as its “Any app, Any cloud, Any device” (and I’d add “securely”) strategy for the most flexible and effective IT-empowered working environments possible.
The EMA vision for IT transformation includes all of this, as well as a healthy dose of service management, automation, and orchestration to tie it all together. We also pay attention to another layer of technology and best practices regarding business outcomes, in the form of Business Intelligence (BI).
While Citrix does have some offerings that fall into the categories of applications (totaling $500M in SaaS) and infrastructure (such as the CloudPlatform, XenServer, NetScaler, and CloudBridge products), those remain complementary storylines to its primary solutions in the third element – the end point where workers ultimately access applications.
While I will leave detailed commentary re: endpoint delivery technologies to my esteemed colleague, Steve Brasen, I have to mention one of the key announcements made at this particular event. Citrix announced its “Software Defined Workplace” solution initiative. Citrix needed some way to be assertive around the whole “Software-Defined” industry trend. Here’s a link to the press release, as well as a link to an interview on the topic that Network World posted with CEO Mark Templeton.
In my view, this isn’t really anything new or different – rather, it’s a nice marketing label that sums up what Citrix would like to be known for: secure, mobile access to the apps and services people need to do their work, from any device. The company plans to further expand its solutions for achieving flexibility in virtual workspaces, server-based apps, mobility, and virtual desktops. It would have done this anyway, regardless of the new label, but the label does fit the current trend and lingo. And frankly, there isn’t any other company out there today that can bring all of the pieces to bear to the same degree and completeness as Citrix.
But there is one other detail behind the SDWorkplace that I will address directly. None of it works unless you are connected via a highly available, high-performing network. Citrix knows this, and has made investments in improving network delivery of its workplace-enabling technologies via NetScaler and CloudBridge. Some of the technologies included in ByteMobile will be helpful here, as well as those acquired along with Framehawk in January 2014.
Besides these control-side approaches, visibility is hugely important, and Citrix has made progress here with its NetScaler Insight Center products, which are available as features included when you buy NetScaler appliances. There’s not a ton of public info out there about the Insight products, although an example PDF for the HDX Insight module can be found here. There is also a module for Web traffic insights. Both leverage AppFlow technology implemented within NetScaler and CloudBridge.
Overall, I’m glad to see Citrix driving the conversation towards human/business outcomes. We need that context to frame the rest of what we IT geek types busy ourselves with on a day-to-day basis. New tech is cool, for sure, but will it really, truly help us do a better job of making IT’s internal and external customers more productive and efficient? That’s the key question that we need to keep asking ourselves.
But as we all know, with so many elements needing to come together and function properly in order for an IT end user or customer to receive a quality experience, there are many places from which problems can spring. It could be the network, sure, but the problem could lie within servers, storage, security systems, end user devices, or the design of one of the many pieces of software that comprise today’s application/service architectures.
A story heard during a recent conversation with a networking consultant* serves as a great example. A manufacturer with 20 locations spread around the globe was struggling to understand why its business-critical application was so slow, and the IT teams were at each other’s throats trying to assign blame. The consultant brought in Riverbed ARX (formerly OPNET AppResponse Xpert, now known as Riverbed SteelCentral AppResponse) – an application-centric, network-based performance monitoring system – to see what was going on as the application crossed the network during real user activity. They quickly found several issues. First, it became immediately clear that data was moving from clients directly into the NetApp storage system, rather than going through the application front end as intended in the application design. This initial discovery was dismissed by the application support manager, who pointed to a whiteboard drawing of his developers’ design flowchart. As is so often the case, “as designed” is not always the same thing as “as implemented.”
The actual facts, as monitored by the ARX platform, showed that each user session was throwing off hundreds of TCP connections to the database as a flurry of window size negotiations took place between the Windows 7 clients and the NetApp server. Because NetApp has its own implementation of CIFS, this was resulting in massive, repeated fragmentation issues. Another issue that became clear was one very slow SQL statement, which was taking far longer than all other database queries. With a quick screenshot of the SQL statement in hand, the application architect had a fix in almost no time.
So what was the problem here? Was it the network? Well, yes – in some ways – the window size negotiations were a big problem, resulting in fragmentation and inefficient network delivery. But it was also the application, and the storage system, and the end client systems. Each contributed to the issue in some way. Having the full picture, clearly presented for all to see, allowed the IT team to get to work on fixing the problems instead of pointing fingers. In fact, this was the first time, in the CIO’s memory, that his network and application teams had ever agreed on anything!
The bottom line is this – application performance issues could start anywhere, and oftentimes are a combination of factors and interactions. The sooner every member of the IT team embraces this reality, the better. Everyone wins when the team focuses on the end goal and works together, using a common set of data to guide the analysis process. In our example here, and in so many more that I hear from networking pros in shops large and small, application-aware monitoring from the network perspective is an excellent starting point, providing visibility that can guide efficient and effective triage and diagnostics. So think about moving away from “blaming the network” and instead try “turning to the network” in order to deal with those difficult issues. The results can be amazing.
*Author’s note: The “networking consultant” was a representative of Edgeworx Solutions, Inc., which sponsored this blog post. Regardless of sponsorship support, this topic is one that I have been following, researching, and advocating around for many years. Edgeworx Solutions has over 15 years of experience in the fields of NPM and APM for telco, enterprise, and government infrastructures. There are other services partners out there too, of course, and if you decide to seek such support, you will want to make sure those you consider can address all the gaps you might have from a resource, expertise, and action perspective.
Deploying IP Voice (VoIP) is a big, important project for most organizations, involving a significant investment in new equipment, cost-justified on promises of increased flexibility and lower cost of operations. But such returns can remain elusive – particularly for those who don’t take the time to understand how VoIP works and the ways in which the IT organization, and in particular the networking team, must prepare to ensure acceptable VoIP quality and performance.
The typical story that I’ve heard in dialogues with IT pros goes something like the following. VoIP technology is brought into the lab for testing and works very well. A pilot is rolled out and despite a few snafus, is also generally successful. Production rollout goes ahead, and troubles begin to snowball. Dropped calls, poor call quality, frustrated users, and many, many hours spent trying to figure out what went wrong.
As it turns out, the old maxim “an ounce of prevention is worth a pound of cure” applies here. For VoIP, an ounce of prevention in the form of a network readiness assessment can save many pounds of support effort down the road.
First, a quick word on how VoIP works from the network perspective (and why it sometimes doesn’t) will be helpful here. VoIP uses a unique set of network protocols – some for setting up calls, such as SIP or SCCP, and others for the actual call session, such as RTP. VoIP success can be stymied if any of those protocols are not properly transmitted between call initiator, call server, and ultimately call receiver. During a call, VoIP quality can suffer from three network-related impairments – packet loss, high latency, and high jitter (variation in packet delay). All three of those conditions can arise for a number of reasons, but most commonly result from simple network congestion.
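To make those three impairments concrete, here is a minimal sketch (in Python, not tied to any particular product) of how a test probe might derive loss, latency, and jitter from per-packet send/receive timestamps; the smoothed jitter estimator follows the RFC 3550 approach used by RTP endpoints.

```python
def voip_impairments(sent, received):
    """Compute the three VoIP impairments from probe timestamps.

    sent/received: dicts mapping RTP sequence number -> timestamp in
    seconds. A packet present in `sent` but absent from `received` is
    counted as lost. Assumes reasonably synchronized clocks.
    """
    loss_pct = 100.0 * sum(1 for seq in sent if seq not in received) / len(sent)

    # One-way transit time for each delivered packet.
    transit = {seq: received[seq] - sent[seq] for seq in sent if seq in received}
    latency_ms = 1000.0 * sum(transit.values()) / len(transit)

    # RFC 3550-style interarrival jitter: an exponentially smoothed
    # average of the variation in transit time between adjacent packets.
    jitter_s, seqs = 0.0, sorted(transit)
    for prev, cur in zip(seqs, seqs[1:]):
        jitter_s += (abs(transit[cur] - transit[prev]) - jitter_s) / 16.0

    return loss_pct, latency_ms, jitter_s * 1000.0
```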
With those thoughts in mind, we can plan our ounce of prevention! Assessing VoIP readiness starts with establishing visibility and taking inventory. Visibility is key, because before you can make sure your network is ready for VoIP, you first need to understand what is on your network and how your network is working. Things that you should look for at this stage include:
- Applications that are most active in the network: Pay particular attention to those that are using large amounts of bandwidth on a sustained basis, such as streaming audio or video, big file transfers, and data backups.
- Areas of high utilization: Congestion is the enemy of voice quality, so it’s essential to recognize where VoIP might be choked out. Start with WAN links, where bandwidth is usually already constrained, but don’t forget to assess the LAN as well, which can have congestion issues of its own.
- Presence of any existing IP-based voice or video traffic: You may have VoIP on your network and not even be aware of it! For instance, Skype and Google Voice are present on many networks – particularly those that allow BYOD – and Microsoft Lync is often present as well, since it is now a bundled option in MS Office suites as well as MS cloud and web-based offerings. Recognizing these products is important in understanding and planning the user side of the rollout, while also helping to reveal how the network is currently handling VoIP traffic.
A very helpful approach to gauging readiness is to set up VoIP traffic generators to produce synthetic, simulated calls from various points in the network. This capability is available within the IP SLA feature set offered on Cisco networking gear, or can be accomplished using software test agents or handheld test devices. First, this answers the protocol deliverability question, ensuring that VoIP control and session packets can indeed get from point A to point B. Test sessions are then measured for quality, commonly using a combined metric known as Mean Opinion Score (MOS), which takes into account loss, latency, and jitter. A MOS of 4 to 5 is good, 3 to 4 is marginal, and anything below 3 will be poor quality, unacceptable to most callers.
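For the curious, a commonly circulated simplification of the ITU-T G.107 E-model shows how loss, latency, and jitter roll up into a single MOS estimate. Real test tools use codec-specific impairment tables, so treat this sketch as illustrative only:

```python
def approx_mos(latency_ms, jitter_ms, loss_pct):
    """Rough MOS estimate via a simplified E-model R-factor calculation."""
    # Jitter is weighted heavily because it forces de-jitter buffering delay.
    effective_latency = latency_ms + 2.0 * jitter_ms + 10.0
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120.0) / 10.0
    r = max(0.0, min(100.0, r - 2.5 * loss_pct))   # penalize packet loss
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

print(approx_mos(80, 10, 0.5))    # ~4.3: good call quality
print(approx_mos(400, 60, 5.0))   # ~2.0: unacceptable to most callers
```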
Finally, and importantly, optimizing the network for long-term VoIP success will almost certainly require the use of network Quality of Service (QoS) policies, which mark specific types of traffic for high-priority delivery by the network. Typically, VoIP is assigned a higher priority than, for instance, backup traffic or non-business web browsing, helping to assure that VoIP packets get processed more quickly, resulting in fewer drops, lower latency, and less jitter. A further step can be taken by reserving a specific portion of network bandwidth exclusively for high-priority traffic such as VoIP and IP videoconferencing. While these steps come later in the process, understanding existing network QoS settings, if any, and how well those policies are being complied with across the network is an essential part of the readiness assessment.
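On the endpoint side, QoS marking boils down to setting the Differentiated Services Code Point (DSCP) in each packet’s IP header. A minimal sketch, assuming a Linux host (Windows typically requires policy-based marking instead): tagging a UDP socket with DSCP EF, the class conventionally used for voice.

```python
import socket

DSCP_EF = 46   # Expedited Forwarding: the conventional per-hop behavior for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the upper six bits of the legacy IP TOS byte, hence the shift.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
sock.sendto(b"rtp-payload-placeholder", ("192.0.2.10", 5004))  # test address

# The marking is only a request: switches and routers must be configured to
# trust and honor it, and untrusted network edges commonly re-mark traffic.
```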
So – how do you get started? You will need some specialized tools and training. You can try to make do with the tools you have, or you can buy, install, and learn new tools. That won’t be a quick process, but if you choose that route, you will be ready for whatever comes down the road. Another option, and one that many shops choose, is to hire the expertise from an external services provider. This approach removes the learning-curve barrier and avoids one-time tool purchases.
While there are lots of helping hands out there, it’s important to realize that because VoIP readiness cuts across voice technology and network infrastructure, you may not want to rely solely upon your VoIP or network products supplier, as they may not fully grasp or respect the bigger picture that is your overall IT environment. For many, a better choice is to engage an independent third-party services partner that is fully versed in both.
An example of such a services partner is Edgeworx, who offers the EDGE VoIP Readiness Assessment service. The EDGE service includes network analysis, network design evaluation, and network configuration optimization in support of fully preparing the network for VoIP success. There are other services partners out there too, of course, and if you go this route you will want to make sure that the partners you consider can fill all the gaps that you might have from a resource, expertise, and action perspective.
Whether you choose to take on a VoIP readiness assessment on your own, bring in a partner, or some combination thereof, be prepared for another pleasant surprise. The optimization and configuration measures taken as a result will also improve overall network performance and resilience, meaning that other applications will perform better as well. So in the end, this ounce of prevention may actually be worth more than a pound of cure.
Network management is admittedly a small portion of the story here. But it is an important part, and it was always included in the laundry list of technologies that must be present and performing well for all of these cloud initiatives to be successful. A couple of key announcements and statements caught my attention as relevant specifically to IBM’s network management strategy.
1. Analytics. Advanced data analysis and analytics have become increasingly important as a capstone to all operations monitoring solutions, including network monitoring, whether for event collection, device availability, or performance. IBM announced its Netcool Operations Insight product, a successor to its well-known Netcool/OMNIbus event management platform. See also the EMA whitepaper on this new solution. What is significant here is that IBM has added its SmartCloud Analytics Log Analysis as an integral module shipped with the OMNIbus event engine. While the user interface consoles are still not fully meshed, contextual log analysis now becomes a simple right-click drill-down from alarm and event views, accelerating investigative workflows. These new capabilities are also expected to be present soon in the IBM Netcool Network Management bundle, so that networking pros will have access to this advanced functionality as well.
2. SaaS. Also announced was IBM Service Engage – a brand-new set of management SaaS offerings hosted on IBM’s SoftLayer cloud platform. A full EMA Product Brief on Service Engage can be found here. While network management is not part of the initial SaaS offerings, it is on the roadmap, and we could see it by the end of 2014. Service Engage represents a major shift in the way IBM delivers its management tools. In the past, IBM solutions typically required significant professional services engagement and extended time periods for deployment, and product upgrade cycles were measured in months or even years. With the Service Engage SaaS model, deployment has been radically streamlined, to the point of becoming nearly trivial, and product updates can be delivered in weeks or even days. As an example, the design objective for one of the initial offerings, systems monitoring (a la IBM Tivoli Manager), is to complete deployment and be up and running in under five minutes. This is a revolutionary departure from the time and effort required for IBM’s traditional on-premises model, and represents the most viable approach IBM has ever had for successfully meeting the needs of mid-tier shops.
3. SDN. I had a chance to sit in on a session that addressed supporting SDN as a part of cloud orchestration. A representative from Juniper presented the basic elements of the Contrail controller and virtual network overlay architecture. An IBM speaker talked about IBM SmartCloud Orchestrator and how networking can be connected into the platform using OpenStack. This was not news – it was first announced last September – but I did have an opportunity to talk with the speakers afterwards and was informed that there are a number of pilot deployments currently underway. In my recent joint research with Torsten Volk on challenges facing those trying to move to SDDC/SDE, networking was identified as the top pain point. I don’t take that as a negative – I take it as an indication that networking is simply the least comfortable and least well understood when it comes to software defined infrastructure, orchestration, and automation. When indeed there are production-side deployments of integrated SDN under cloud orchestration, we will be one huge step closer to truly automated environments.
4. The API Economy. There was an “ah ha” moment for me at this conference. When Robert LeBlanc, SVP of IBM Software Group, held a main stage session on the BlueMix development platform, he invited Jeff Lawson, CEO of Twilio, to come on stage and demonstrate how simple it was to invoke Twilio’s cloud unified communications service features using the BlueMix environment. Jeff proceeded to compose live code on stage in front of 10,000 people, banging out 20+ lines in a few minutes to add an automated text messaging function to a sample website. It was a short routine, one that could easily include some errors and create undesired results, but also short enough that the relative risk was fairly low. My first thought was “well, that’s just fine if you’re an experienced coder.” Then I realized – this is where we are all headed, even in the realm of network management. As the API economy slowly but steadily invades every aspect of IT, writing code in this way becomes the glueware. As a result, functionality that would have taken weeks or months to develop before platforms like BlueMix existed was completed and functional in just a few minutes. And that has me thinking a little about the possibilities for truly agile IT and just how rapidly the managed environment will be changing in the future…
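For flavor, the Twilio side of such a demo really is roughly this small. This is a hedged sketch using Twilio’s Python helper library as currently documented; the on-stage demo ran in the BlueMix environment, and the credentials and phone numbers here are placeholders.

```python
from twilio.rest import Client

# Placeholders: a real account SID, auth token, and Twilio-provisioned
# phone number are required.
client = Client("ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "your_auth_token")

message = client.messages.create(
    to="+15558675309",        # recipient (placeholder)
    from_="+15017122661",     # Twilio number (placeholder)
    body="Thanks for signing up! Reply STOP to opt out.",
)
print(message.sid)  # unique ID for tracking delivery status
```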
What continues to impress me about IBM is the comprehensive and rich nature of its portfolio – particularly if you are looking beyond network management. Yes, they are still missing bits and pieces here and there (packet-based network performance monitoring, for example) but the vast majority of needs and capabilities are covered. Time will tell, but the new Service Engage SaaS platform could be a turning point for IBM, allowing them to overcome the cost and complexity of deployment/maintenance that has for so long kept them off short lists outside of the world’s largest organizations. Regardless, my opinion is that IBM still has a strong network management pulse, and one that may quicken as 2014 unfolds.
Based on the research that I’ve done in the past year, as well as that which I plan to do in the coming year, and the many conversations I’ve had with technologists and practitioners alike, I’d like to offer up the following five resolutions for network engineering, management, and operations professionals.
1. Get the spoons ready for ‘API Soup’. My colleague, Tracy Corbo, and I have a running joke about all of the times that we hear about the many great and wonderful APIs that are becoming a standard part of network and infrastructure equipment as well as management systems. Our term of endearment for this trend is “API Soup,” and what it means is that in order to get everything out of the latest networking technologies (this includes all forms of SDN), it will be necessary to engage directly in programming activities. This means it’s time to bone up on scripting and programming skills and to learn what it means for an API to be RESTful (see the first sketch after this list). Fortunately, you have some time before this becomes a pressing need – most of these new products and technologies are still in the early stages of adoption – but they are certainly cropping up everywhere you look, and their ubiquity is a “when” and not an “if” in our book. Best get a head start…
2. Lasso the change maverick. There’s nothing worse than getting that call in the middle of the night (or the middle of the day) informing you that the network is down. Time and time again, we hear about network outages that are traceable to changes intentionally made to network device configurations, which had unanticipated or unintended side effects. Even the best pre-deployment testing plans cannot fully anticipate production environment variability, so how can you reduce operational risk? If you do nothing else, make sure that you have in place a network change and configuration management (NCCM) system that can capture known good configurations and restore them rapidly if and when a crisis occurs (see the second sketch after this list). Better yet, start moving towards configuration enforcement, reducing your use of the CLI, and leveraging automation wherever possible.
3. Take a developer out for lunch. One of last year’s resolutions was to take a SysAdmin to lunch. Hopefully you are still doing that on a regular basis. If not, fall back, regroup, and get to know the SysAdmin group. If you’re good on that, it’s time to move up a notch and get to know the IT development team. As infrastructure continues to become more and more virtualized and Operations evolves into an internal cloud service provider, one of your most important customers will be the application development team. Getting to know them a bit will come in very handy when working with them to determine true capacity and performance needs for the new and upgraded applications they are rolling out at an ever-increasing clip.
4. Shine a bright light on virtualization. Last year, I recommended that network managers find a way to establish visibility inside hypervisors, so that network connectivity and performance could be better understood across virtual networking components. The problem is now rising beyond and outside the hypervisors, with the increasing use of distributed virtual switching and virtual overlay networks. While virtual overlays are essentially encapsulations, you need to be able to peer inside them to understand exactly which end points are communicating and for what purpose (see the third sketch after this list). This is becoming particularly urgent as certain hypervisor companies turn up the heat on their virtual network solutions, promising that virtual system administrators no longer need to deal with the networking team in order to establish cross-infrastructure connectivity.
5. Aspire to become a performance advocate. For over 10 years running, I’ve advocated the network viewpoint (be it physical or virtual) as a superior and essential source of performance information, both for monitoring and for troubleshooting. The last two years have seen a resurgence in the value placed upon the role that networks play, as IT infrastructure and application environments become increasingly integrated, virtualized, dynamic, and automated. There has never been a better opportunity than now to step forward and share the information about application performance and end-user experience that can be gathered from the network perspective. Go forth and gather data, and share it liberally. This can only come to good.
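A few illustrative sketches to go with resolutions 1, 2, and 4. All three are in Python and all are hedged: the endpoint paths, field names, and helper functions are hypothetical stand-ins, not any particular vendor’s API.

First, “API Soup” in practice – polling a network controller’s RESTful API for interface state:

```python
import requests

BASE = "https://controller.example.com/api/v1"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

resp = requests.get(f"{BASE}/interfaces", headers=HEADERS, timeout=10)
resp.raise_for_status()
for intf in resp.json()["interfaces"]:  # assumed response shape
    if intf["oper_status"] != "up":
        print(f"{intf['device']} {intf['name']}: {intf['oper_status']}")
```

Second, the kernel of the NCCM idea – diffing a device’s running configuration against an archived “known good” copy so drift can be flagged (and, in a real system, rolled back):

```python
import difflib

def fetch_running_config(device):
    """Hypothetical stand-in: in practice, retrieve the live config via
    SSH (e.g., Netmiko), SNMP, or a vendor API."""
    raise NotImplementedError

def config_drift(golden_path, device):
    with open(golden_path) as f:
        golden = f.read().splitlines()
    running = fetch_running_config(device).splitlines()
    # Any non-empty diff is a candidate for an alert and an automated restore.
    return list(difflib.unified_diff(golden, running,
                                     fromfile="golden", tofile="running",
                                     lineterm=""))
```

Third, peering inside an overlay – decapsulating VXLAN frames from a capture file to see which inner endpoints are actually talking (assumes a Scapy version that ships the VXLAN layer, bound to UDP/4789):

```python
from scapy.all import rdpcap, IP, VXLAN

for pkt in rdpcap("overlay.pcap"):  # capture filename is a placeholder
    if VXLAN in pkt and IP in pkt[VXLAN]:
        inner = pkt[VXLAN][IP]
        print(f"VNI {pkt[VXLAN].vni}: outer {pkt[IP].src} -> {pkt[IP].dst}, "
              f"inner {inner.src} -> {inner.dst}")
```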
Best wishes, good luck, live long & prosper, may the force be with you, and have a Happy New Year. See you around the network in 2014!
At first, Big Data was about offline, unstructured business data, in larger volumes than normal, managed, accessed, and mined in new and interesting ways using specialized database technologies and new analysis tools. The term “Big Data” is really quite silly, if you think about it. Haven’t there always been really big accumulations of data, even if they were in relatively limited clusters, buried deep inside data centers on fleets of storage arrays? Just think how much data the NSA must have on hand.
Because the term “Big Data” is so non-specific, many people have been confused by its connotations. The most natural reading has been to simply assume it is a reference to the sheer volume of data. By this measure, many management technologies, particularly performance management tools that generate and analyze gigabytes of metrics and measurements each day, would be Big Data applications.
It doesn’t help that many vendors have added to the confusion by twisting Big Data to their own marketing needs. For instance, I had one vendor try to tell me that the growing volume of data-oriented mobile services was a Big Data problem. Puh-leeze!
What is more important, in my view, is the nature of the approach to data collection, management, and analysis. Big Data means large accumulations of structured or unstructured data, but it also means using certain specific technologies to hold that data – most notably Hadoop. And by this measure, network management vendors are starting to make noises about building direct Hadoop interfaces. Ok – so that is valid Big Data.
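To ground that, here’s a hedged sketch of the sort of job a direct Hadoop interface would enable: a Hadoop Streaming mapper/reducer pair (plain Python over stdin/stdout) that totals bytes per source host from exported flow records. The one-record-per-line “src,dst,bytes” input format is an assumption for illustration.

```python
#!/usr/bin/env python
import sys

def mapper():
    # Emit "source-host <TAB> bytes" for each flow record on stdin.
    for line in sys.stdin:
        src, _dst, nbytes = line.strip().split(",")
        print(f"{src}\t{nbytes}")

def reducer():
    # Hadoop Streaming delivers mapper output sorted by key, so bytes
    # per host can be totaled with a simple running sum.
    current, total = None, 0
    for line in sys.stdin:
        host, nbytes = line.strip().split("\t")
        if host != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = host, 0
        total += int(nbytes)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Invoked via the standard hadoop-streaming jar with this script as both the mapper and the reducer, the same per-host rollup scales from a lab capture to years of retained flow data.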
The other area where Big Data is increasing in relevance is analytics. My colleagues who cover Big Data from a Business Intelligence perspective know this area really well; see their recent research on Operationalizing Big Data. There are a whole host of products on the market that are designed to crunch those Big Data sets and find insights and actionable information. Some of these very same approaches and technologies could be applied to network management (and infrastructure management, more broadly) in support of capacity planning and long-range service quality assessment. In fact, in the service provider sector, this is already the case, with a growing number of applications, such as customer experience management.
What has not been properly dealt with, in my view, is the need for real-time Big Data analytics, for instance in support of automated IT and security operations. All of the data we can collect from and via networks has offline value, but also real-time value, if you can just elicit the important, actionable needles from the haystacks of metrics and measurements. This is the field of performance analytics, and it’s a fast-growing portion of performance management solutions as well as security monitoring and enforcement. Tom Nolle opined on this to some degree recently for SearchNetworking, but none of his use case examples are unique, in my view, and all can be addressed with tools on hand today. Some real-time vendors are already using Big Data approaches as part of their product architectures – such as what is being done by SevOne. And more announcements are coming.
I’d be the first to cheer if the term Big Data just plain went away, as the world begins to realize that this is nothing new. We’ve seen it before and solved it before – we are just doing it now in new and different ways. But that’s not how tech marketing and tech sales work, so I’ll keep trying to provide some sanity amidst the hype, through dialogues with practitioners and marketers, including a new research project that EMA will be launching shortly. Until then, resistance is futile. See you in the collective.
The growing complexity of today’s modern IT infrastructures is causing many smaller enterprises to hit the wall of manageability with their monitoring approaches. Many are using a disparate mix of commercial and open source tools that are not integrated, too often resulting in a fragmented understanding of how infrastructure components are working together (or not). Too much time is spent keeping the tools in sync versus using the tools themselves. A better answer is to shift towards a single, integrated/unified solution that provides uniform performance monitoring across networks, servers, storage, and applications.
CA Nimsoft Monitor (and thus Nimsoft Monitor Snap) has been architected for just such purposes – delivering performance and availability monitoring across all IT technology domains. It uses a unified approach, meaning that each set of features is delivered as a module that connects into a single platform, rather than an integrated approach, whereby multiple independent products share data and information. Unified systems are almost always simpler to deploy and administer over time. The CA Nimsoft Monitor solution has found substantial success worldwide, with more than 1,500 deployments in enterprises and hundreds more as a platform used by MSPs.
With Snap, CA Technologies seeks to make the unified solution available to a broader slice of enterprises. While CA Nimsoft Monitor Snap does not offer all of the features of the full premium (paid) version, most of those not included won’t be greatly missed in a 30-device managed environment. Plus, anyone who needs to test-drive the fully featured version can still do so via a 30-day free trial.
While there is no dedicated support for the free version, the Snap Central support community is an alternative that offers much more than simple, static FAQs. A vendor-moderated resource such as Snap Central provides a two-way channel of communication, where product adopters get to interact not just with CA Technologies experts, but also with other Snap implementers.
CA Technologies still needs to work out the wrinkles in how Snap users will transition from free to paid versions of the product. EMA also believes that CA Technologies may need to reconsider the 30-device limit, as it could prevent system users from gauging the value of the solution in a real production environment.
The release of a free version of CA Nimsoft Monitor is a great opportunity for smaller IT shops to get a taste of a truly unified management product that can help bridge visibility gaps and grow with future needs. EMA also looks forward to the ongoing evolution of Snap Central, since it is a great way to build expertise and enrich the value received by anyone deciding to give the product a try.
The EMCWorld 2013 event started first, kicking off with the news that EMC had built, trialed, and released a new virtualization layer called ViPR (pronounced “viper”). The name does actually stand for something, but it’s so abstract that it’s not worthy of mention (plus I was sworn to secrecy). Viper sounds cool, so ViPR it is. But seriously – this is not just a rebranding of storage virtualization. It is a true software-defined architecture, with a services layer, a controller layer, northbound APIs, and a promise of true multi-vendor support on the southbound side. Now let’s be honest here – this is an EMC-centric story – there was nary a whisper on standardization, though there were fervent promises of multi-vendor support. But storage does indeed need to come along to the programmability party if VMware’s grander vision of the Software Defined Data Center (SDDC) is ever truly going to take flight. For EMC’s part, they did it right – built it out, found early adopter customers (CSC, UBS), and saw it through into production deployments. So the solution is real, not just vaporware (or should that be ViPRware?).
A little off to the side of this big event was a parallel release by the Infrastructure Management team that will be of interest to network and infrastructure managers. This latest Service Assurance Suite release included a new consolidated GUI built using the thin HTML5 frameworks originating from acquiree Watch4net. There’s also a very cool new Smarts feature called Watchlist, which consolidates availability, performance, and configuration management data to present health, notifications, change history, compliance status, and impact for devices, groups of elements – even service groupings. This is great progress towards business-aligned operations, and a clear value-add for anyone using the EMC management platform.
I was asked to host a BoF session on Converged Operations Management as part of the conference program. Not surprisingly, the largest contingent was made up of storage managers, but there were also many virtual systems managers. Network managers were scarce. While a significant number of attendees indicated that they were trying to move towards converged, cross-domain operations, two key frustrations quickly emerged in the conversation. First, many expressed serious challenges in finding qualified cross-domain generalists to staff the function. The two most practical answers were to hire junior folks who had training in one technology domain and put them through cross-domain OJT, and to look for folks who have had to wear multiple hats (by necessity) while working in smaller IT shops. The second challenge was that there didn’t seem to be any consensus on what tools could deliver the operational views necessary for supporting a converged team. Some had built their own, many were focused on vCenter Operations, and some were using a mishmash of disconnected tools. Clearly we can do better, as an industry, and make sure that management tools are not a barrier to this important evolutionary progression.
As always, the time spent talking with management product experts and real-world practitioners is invaluable in keeping my own research well grounded. But next year, if these two events happen in parallel, maybe I’ll hire a telepresence robot, like the ones you see being used for telemedicine or remote learning. Maybe cloning will have been perfected. A Harry Potter-esque time turner could do the trick. But most likely, I’ll just wear out another pair of shoes.
First up, I talked to the CA AppLogic team. Now, AppLogic is not where you might expect to find network management, as it is presented externally as a cloud resource manager that virtualizes IT infrastructure in the service of flexibility and agility. For most, this means focusing on compute resources along with direct-attached storage, but AppLogic also recognizes and addresses some essentials regarding network path connectivity. As it turns out, AppLogic does its own network resource discovery, and builds and maintains an internal model of these resources as part of its virtual resource pool. AppLogic then apportions connectivity, including expected network bandwidth and prioritization needs, as part of deploying and administering workloads via what amounts to a virtual overlay network. AppLogic does not touch any network devices to do this – it just directs and manages the capacity it finds and believes is available across the network.
I also spent time digging into the CA APM story. There is an interesting technological parallel that I’ve been tracking between the use of packet capture and analysis within app-aware network performance management (ANPM) tools and within traditional application performance management (APM). See EMA’s ANPM Radar Report 2013 for more on that, but suffice it to say that a number of management tools vendors are bringing together APM and ANPM techniques in addressing market needs for application performance visibility. CA offers Customer Experience Manager (CEM), a component of the CA APM (née Wily) solution that is designed to measure customer experiences by analyzing packet streams going into and out of a front-end web server. CA recently expanded the scope of its APM monitoring and diagnostics by integrating the CA Application Delivery Analysis (ADA) appliance. ADA is deployed at various points around the network, commonly between tiers in the data center, at datacenter ingress/egress, and even at remote sites, to provide a more comprehensive view of application traffic flow. Now interestingly, while CEM has Wily heritage, ADA began its life as the NetQoS SuperAgent product, which was aimed squarely not at application support but rather at network engineers (a la ANPM). So this is a case where network management technology has been dual-purposed to support APM directly.
Finally, I spent some time getting an update on the CA Service Operations Insight (SOI) product. SOI is an operations bridge that takes inputs from element and domain managers, relates them via service modeling, and displays health information on a service-by-service basis. Connectors are also provided for CMDB and ServiceDesk. SOI includes pre-verified plug-in adapters for Spectrum and Performance Center, as well as the APM suite (and much more). What is interesting here is that while many technology vendors have tried to build service awareness into their network management tools, CA has taken a different route, applying service awareness at the operations bridge layer. I talked to three CA customers who were using this combination, in all cases bringing CA Spectrum and CA APM data into SOI, and all said it was both easy to deploy and really helpful as a means of elevating each data silo into a common, integrated view. In all three cases, the result has eliminated finger-pointing among traditionally siloed teams when things go sideways.
Net net, this is all good for the future of network management. These points of integration bring the network viewpoint directly into its predestined role – a critical component of truly systemic, service-oriented planning and operations.