Based on a recent discussion with VMware, here are the arguments:
So what this comes down to is VMware accepting the fact that there will be open IaaS platforms out there. VMware has recognized that vilifying these platforms would be much more costly than embracing OpenStack. Of course, none of this addresses the cost concerns that come with hosting OpenStack on the vSphere platform, and I strongly believe that eventually there will be no justification left for paying large sums of money each year for a proprietary hypervisor. But until then, VMware is doing the right thing by embracing the unavoidable and capturing additional business in the process.
By definition, these vendors have the highest credibility when it comes to truly abstracting storage management from the underlying hardware, as they simply do not sell hardware. Without this conflict of interest, each one of these vendors has a significant incentive to change the storage game for the benefit of the customer. This means providing as many of the features and benefits of Software Defined Storage as possible (see part 1 of this series).
By definition, when a hardware vendor offers software that commoditizes its own range of traditional storage hardware, customers have to read the fine print. However, each of the products below deserves a close look and a comparison of features and economics.
These are really software vendors, using mostly commodity hardware, including Flash arrays, as a delivery vehicle for SDS. This is a concession to the traditional way storage is purchased and mostly offers a turnkey experience. When considering the purchase of this type of storage appliance, it is essential to avoid creating technology islands that require separate management tools, processes and staff.
Storage today should be decoupled from hardware. There are compelling use cases for each type of storage, be it SAN, NAS, Flash, RAM or DAS. However, there is no reason at all to go further down the path of storage silos, where the NetApp guys, the EMC guys and the Flash guys are all separated, without much knowledge of each other or of application requirements.
Ultimately, applications will request the storage they require by sending API calls for virtual volumes of a certain performance level, cost, size, resiliency and location. SDS will translate these API calls into a set of instructions that automatically provisions the storage that comes closest to these application requirements.
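To make this idea more concrete, here is a minimal, purely illustrative Python sketch of how such a request might be matched against storage pools; the field names, pools and matching logic are hypothetical and do not represent any specific SDS product's API.

```python
# Illustrative sketch only -- fields and pools are hypothetical, not a vendor API.
from dataclasses import dataclass

@dataclass
class VolumeRequest:
    size_gb: int
    min_iops: int           # performance requirement
    max_cost_per_gb: float  # cost ceiling
    resiliency: str         # e.g. "mirrored"
    location: str           # e.g. "us-east-dc1"

# Storage pools the SDS layer federates, regardless of brand or tier.
POOLS = [
    {"name": "flash-tier", "iops": 50000, "cost_per_gb": 2.50, "location": "us-east-dc1"},
    {"name": "sas-tier",   "iops": 8000,  "cost_per_gb": 0.60, "location": "us-east-dc1"},
    {"name": "sata-tier",  "iops": 1500,  "cost_per_gb": 0.15, "location": "us-west-dc2"},
]

def provision(req: VolumeRequest) -> dict:
    """Pick the cheapest pool that still satisfies the application's requirements."""
    candidates = [p for p in POOLS
                  if p["iops"] >= req.min_iops
                  and p["cost_per_gb"] <= req.max_cost_per_gb
                  and p["location"] == req.location]
    if not candidates:
        raise RuntimeError("No pool satisfies the requested policy")
    best = min(candidates, key=lambda p: p["cost_per_gb"])
    return {"pool": best["name"], "size_gb": req.size_gb, "resiliency": req.resiliency}

print(provision(VolumeRequest(500, 5000, 1.00, "mirrored", "us-east-dc1")))
```

The application states only what it needs; which pool, array or tier ends up serving the volume is the SDS layer's decision.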
We are about to launch an EMA research project that will analyze the implications of big data analytics for systems management from the perspective of IT operations staff. Servers, networks, storage and the automation and management solutions that keep them running efficiently need to dramatically adjust to these new requirements.
The following questions will be answered:
What are the key infrastructure obstacles – internal and external – of big data deployments?
What are the perceived data center and cloud-related key pain points of big data deployment projects?
How to best integrate big data architectures with traditional data center and cloud infrastructure?
How to ensure end-to-end visibility across the big data capture, management and analysis process?
How to ensure performance, scalability, elasticity and manageability of enterprise big data environments?
How can big data co-exist with traditional application environments?
How can customers best take advantage of physical, virtual and cloud infrastructure for big data?
What are the roles of automation and analytics when creating and managing big data infrastructure?
What is the impact of Big Data on traditional IT roles, processes and data center silos?
How to ensure performance, security and SLA compliance of big data infrastructure?
Which key technologies and architectures are required to better benefit from big data projects?
What governance, automation and orchestration platforms are needed for optimal infrastructure management?
How can Big Data projects take optimal advantage of private and public cloud?
How does Big Data analytics help support enterprise IT management?
The result of this end-user research will be a clear-cut set of checklists for IT professionals to determine how IT operations should adjust to big data requirements in the short and long term. Vendors will have the opportunity to determine how their offerings align with customer requirements in the big data context.
Needless to say, I’m very much looking forward to getting started.
The following three questions are at the core of the SDS discussion:
Enterprise Management Associates (EMA) is about to launch a major research project shedding light on these three core challenges.
EMA believes that true SDS must exhibit the following core and “bonus” capabilities:
Why this Matters to Customers
SDS is a logical layer that federates all existing and future storage of any age, brand or type. This means that storage features are purchased separately from the underlying hardware. It does not mean that hardware is entirely commoditized, as performance and reliability characteristics still matter. SDS simply enables customers to avoid paying multiple times for the same advanced storage features – snapshots, clones, DR, backup, thin provisioning, compression, deduplication, etc. – and to use expensive storage tiers such as Flash or RAM more intelligently.
In my next post, I will provide a brief overview of the vendor landscape within the SDS arena.
First of all, it is important to mention the incredible excitement that Jim Frey (EMA VP for Network Management) and I came across when discussing this topic with practitioners. As expected, IT pros do not share one homogeneous definition of the SDDC, but we did identify a set of core priorities that everyone was burning to address in 2014:
1. Centralized management of servers, network and storage
2. Best-practice, repeatable configurations of application environments
3. Orchestration and automation for easy cross silo application deployment
At the end of the day, the SDDC is all about managing server, network and storage infrastructure – internal and within the public cloud – through a central set of management software. The SDDC introduces an additional layer of abstraction – right above the private and public cloud – that enables the policy-driven provisioning of application environments and ultimately “empowers” applications to define their own environments, based on performance, security, availability and further policy requirements.
“In a truly Software Defined Data Center, infrastructure is defined through centralized management software and by enterprise applications.”
In short, the concept of the SDDC is regarded as the “golden path” to coping with today’s exploding hunger for IT services. When asked for the key technologies organizations will invest in this year (2014), practitioners made it very clear that each of their investments is directly aimed at better serving business needs, without having the luxury of drastically increasing their data center OPEX and CAPEX:
Key Areas of Enterprise IT Investments in 2014
1. Capacity management: Seemingly an old and boring discipline, capacity management is staging a comeback in 2014. Really, we shouldn’t call it a “comeback” but a “metamorphosis”: from an ugly-duckling discipline, carried out by a small number of data center gearheads, to a truly critical one that is an essential part of every infrastructure and application planning, deployment and management decision. Within the massively heterogeneous environment of the SDDC, capacity management has to truly “understand” the specific requirements of each individual application workload. Therefore, I’m happy to declare 2014 the year of truly dynamic and application-aware capacity management.
2. Multi-virtualization and/or cloud management platform: In today’s data centers, we see more and more different hypervisor technologies for servers, storage and networking. At the same time, organizations have begun to adopt external cloud resources more aggressively. The availability of this “quick fix” leads to rapidly increasing infrastructure complexity, making SLA management, security and application performance assurance more difficult to achieve. Management platforms that span multiple hypervisors and clouds give organizations a single control point across this heterogeneous landscape.
3. Configuration Management: Configuration management is the third key investment area in 2014. This aligns closely with today’s hunger for rapid and consistent application deployment. Configuration management software helps bridge the traditional gap between developers and IT Operations (DevOps) by encapsulating application deployment, management and troubleshooting instructions within the application code, as the sketch below illustrates.
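The following Python snippet is a purely illustrative sketch of the declarative, idempotent pattern that configuration management tools share (it is not the syntax of any specific tool): a desired state that ships with the application, and a convergence step that derives only the actions still needed.

```python
# Purely illustrative sketch of the declarative idea behind configuration
# management tools; the schema and functions are hypothetical, not a real tool's API.

# Desired state ships alongside the application code instead of living in ad-hoc scripts.
DESIRED_STATE = {
    "packages": ["nginx", "openjdk-7-jre"],
    "services": {"nginx": "running"},
    "files": {"/etc/myapp/app.conf": "port=8080\nlog_level=INFO\n"},
}

def current_packages():
    # Placeholder: a real tool would query the package manager here.
    return {"openjdk-7-jre"}

def converge(desired: dict) -> list:
    """Compare desired state to actual state and return only the actions still needed.

    Running this repeatedly is safe (idempotent): once the system matches the
    description, no further actions are produced.
    """
    actions = []
    installed = current_packages()
    for pkg in desired["packages"]:
        if pkg not in installed:
            actions.append(f"install package {pkg}")
    for svc, state in desired["services"].items():
        actions.append(f"ensure service {svc} is {state}")
    for path, content in desired["files"].items():
        actions.append(f"write {len(content)} bytes to {path}")
    return actions

for action in converge(DESIRED_STATE):
    print(action)
```

Because the description is data rather than a one-off script, it can be versioned, reviewed and re-applied by developers and operations alike.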
For a true deep dive into the SDDC topic, with all of its bottlenecks, challenges, opportunities, business drivers, risks and other key considerations, please download the full report from our website.
Last but not least, I would like to thank all of our sponsors that made this report possible in the first place.
1. It’s all about the service. This is the core lesson to take away from the event. IBM has created a rating system for IT services, where the ones that are easy to consume and are architected for scalability and elasticity receive the highest score. Whenever new applications are created, developers should first explore how far they can get without any actual coding, just through combining existing services.
“The best code is the code you never write” – Jerry Cuomo, IBM Fellow and WebSphere CTO
2. OpenStack is IBM’s IaaS technology of choice. The company’s goal is to ultimately enable workload portability and interoperability, based on the OpenStack platform. However, it is important to note that, despite the company’s heavy OpenStack investments, IBM’s next-generation cloud could also work based on another IaaS framework, should OpenStack not turn out to be the winner of this race.
“It’s all about the abstraction of the service” – Dave Lindquist, IBM Fellow and Tivoli Software CTO
3. OASIS TOSCA is the key standard to enable workload portability and interoperability between clouds. IBM is working on making OpenStack Heat TOSCA compliant, so that any workload that is described through the TOSCA standard could run on any OpenStack cloud.
“Having standards is irrelevant unless you have a way to validate compliance” – Jason McGee, IBM Fellow, CTO PureApplication System
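To make the TOSCA takeaway more tangible, here is a simplified, hypothetical illustration of the kind of portable topology description the standard enables, sketched as a Python data structure with a toy ordering function; the element names are chosen for readability and are not actual TOSCA syntax.

```python
# Hypothetical, simplified example of the idea behind a TOSCA service template:
# a portable description of an application topology (nodes plus relationships)
# that any compliant orchestrator could translate into concrete infrastructure.
# This is NOT actual TOSCA syntax; real templates follow the OASIS specification.

service_template = {
    "description": "Two-tier web application",
    "node_templates": {
        "web_server": {
            "type": "Compute",
            "capabilities": {"num_cpus": 2, "mem_gb": 4},
            "requirements": [{"connects_to": "db_server"}],
        },
        "db_server": {
            "type": "Compute",
            "capabilities": {"num_cpus": 4, "mem_gb": 16},
            "artifacts": {"install": "scripts/install_mysql.sh"},
        },
    },
}

def nodes_in_deploy_order(template: dict) -> list:
    """Toy 'orchestrator': deploy nodes that others depend on first."""
    nodes = template["node_templates"]
    ordered, remaining = [], dict(nodes)
    while remaining:
        for name, spec in list(remaining.items()):
            deps = [list(r.values())[0] for r in spec.get("requirements", [])]
            if all(d in ordered for d in deps):
                ordered.append(name)
                del remaining[name]
    return ordered

print(nodes_in_deploy_order(service_template))  # ['db_server', 'web_server']
```

The point of the standard is that the same topology description, not the orchestrator, becomes the portable artifact.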
4. The main goal of any cloud is to be developer friendly. Ultimately, cloud is today’s new server and must provide developers with easy access to the services they need for scripting, authentication, data storage and so forth. Cloud Foundry offers a standardized platform for developers to access standard frameworks and services.
“To enable your developers, you have to meet them where they are” – Jerry Cuomo, IBM Fellow and WebSphere CTO
5. Analytics is key, not only for the business, but also for IT operations. IBM is heavily investing in adding analytics capabilities to pretty much all of its software solutions. Ultimately, any type of data – structured and unstructured – available to an enterprise can and should be analyzed and leveraged.
“A lot of technologies deal great with structured or unstructured information. The current and future challenge is bringing it all together” – Robert LeBlanc, Senior VP Middleware Software
6. System Patterns and Application Patterns are key for efficient workload deployment, operation and management. IBM is looking to gradually shift focus toward Application Patterns, as these offer a higher level of abstraction and therefore enable better agility when compared to System Patterns.
“Patterns understand the service context, while scripts are blind to their surroundings” – Jason McGee, IBM Fellow, CTO PureApplication System
7. Open standards are key at the IaaS, PaaS and SaaS level. IBM feels strongly about converging toward a core set of standards on each one of these levels for optimal support of the underlying storage, network and compute systems on the one hand and of developer centric services and application environments on the other.
“It’s all about the app” – Danny Sabbah, CTO and GM, Next Generation Platform
8. Design is essential for any type of software. IBM’s recently opened design center in Austin, TX, and the company’s hiring of large numbers of user experience experts demonstrate that IBM is serious about enhancing the user experience of existing and new software systems.
“It’s about designing a vase vs. creating a way to better enjoy flowers in your home” – Phil Gilbert, Head of IBM Design
9. The SoftLayer acquisition was all about software, namely IMS (Infrastructure Management System). The ability to provision network, storage and server resources independently of the presence of a hypervisor, as well as the performance and security of the triple network architecture, make SoftLayer a key puzzle piece, as well as the next deployment target for systems and application patterns, within the IBM strategy.
“SoftLayer is all about speed” - Jim Comfort, General Manager, GTS Cloud Development & Delivery
10. PureSystems is one of IBM’s growth areas and, like SoftLayer, it is a software play. Intelligent management software enables the consolidation and ongoing optimization of application workloads.
“Just turn it on and it works and learns” – Marie Wieck, General Manager, WebSphere Software
While these new features and capabilities are important, what really matters are two key facts that are more relevant than ever.
Every time a new technology is introduced to the data center, there has to be a business case that answers the seemingly age-old question of the CFO: “Why should I care?” Making the business case for OpenStack is indeed non-trivial, as there are many factors to consider and scenarios to plan for. Simply putting OpenStack out for developers to play with will not make them abandon Amazon Web Services. To be successful, OpenStack must be part of a comprehensive and application-centric IT strategy and should not even aim at fully replacing AWS. It is important to understand that OpenStack is only one destination for specific enterprise workloads and works best when embedded within the existing enterprise IT context. There are very interesting sessions at the OpenStack Summit that will demonstrate how early adopters have successfully integrated OpenStack with their DevOps processes based on Git, Gerrit, Jenkins and other standard development and test software.
Ultimately, the OpenStack vision comprises a much more application-centric list of tasks, such as configuration management, continuous deployment, and auto-healing and scaling of entire application environments. That said, it is essential to note that these capabilities will all remain basic and focused on integration with a wide range of enterprise IT management tools. At the end of the day, we must not forget that the vendors supporting OpenStack cannot be interested in OpenStack replacing their own IT management tools and hardware platforms. The reason these vendors support OpenStack lies in the common desire to make their software and hardware available to a broader customer base.
Participants in recent EMA research projects have repeatedly noticed that we classify OpenStack more as a standard than as an actual IaaS software platform. This is a deliberate decision by EMA, as we see the key value of OpenStack in creating a common standard that will help customers integrate currently existing silos in storage, network and compute. To achieve this integration, hardware and software vendors must agree to make their products accessible to the individual OpenStack modules (Nova, Cinder, Swift, Glance, Neutron, etc.). Ultimately, OpenStack delivers the ability to mix and match third-party solutions based on application requirements. Therefore, we regard OpenStack as a standard.
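As a small illustration of consuming that common interface, the sketch below uses the openstacksdk Python library, one of several OpenStack client options; the cloud, image, flavor and network names are placeholders, and the same call should work against any conforming OpenStack deployment.

```python
# Minimal sketch using openstacksdk; all names below are placeholders.
import openstack

# Credentials and region are read from a clouds.yaml entry or environment variables.
conn = openstack.connect(cloud="mycloud")  # "mycloud" is a hypothetical clouds.yaml entry

# Nova (compute) provisions the instance; Glance supplies the image,
# Neutron the network and Cinder any block storage behind the scenes.
server = conn.create_server(
    name="demo-instance",
    image="ubuntu-lts",        # placeholder image name
    flavor="m1.small",         # placeholder flavor name
    network="private-net",     # placeholder network name
    wait=True,
)

print(server.status)
```

The request describes the workload, not the underlying vendor gear; which storage array or switch fulfills it is decided by whichever drivers the cloud operator has plugged into the OpenStack modules.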
Finally, here are some questions that remain to be answered in the near future:
Today, the game is a different one. “It’s all about the app” is the mantra repeated by all major and minor vendors. This means that pushing the entire technology stack onto the helpless customer is not the solution. The successful vendor doesn’t simply understand this basic truth, but incorporates it, together with the “it’s all about the app” mantra, into its corporate strategy. Customers truly benefit when their software vendor offers solutions that are open, portable and interoperable, without – explicitly or implicitly – trying to convince them to rip and replace existing systems.
And this is the crux. The vendor with the most open and customer-centric strategy will ultimately be successful. Vendors that are burdened with legacy baggage and therefore feel the need to keep aggressively ousting their competitors, instead of offering solutions that play nicely within the existing customer environment, will ultimately wither. Customers increasingly understand that there cannot and should not be one winner in the “war of the stacks”. In fact, it shouldn’t even be a war. Each one of these stacks has vastly different characteristics in terms of cost, compliance, security, resiliency, performance and application support. This means that, ideally, enterprise IT will ultimately act as a service broker, offering business units the optimal cloud infrastructure for each individual workload.
Here’s a quick comparison chart showing that there is a place for each of the current competitors. To illustrate my point that each platform has its unique target use cases, I have limited this chart to three advantages and three downsides each.
| Platform | Advantages | Downsides |
| --- | --- | --- |
| VMware vCloud | Battle tested; comprehensive feature set; widest support amongst app vendors | |
| OpenStack | Wide integration with storage, network and compute technologies | Lacks enterprise features; difficult to deploy and configure |
| CloudStack | Supported by Citrix and friends; battle tested and scalable | Fewer server, network and storage devices supported |
| Amazon EC2 | Cheap for some workloads | Expensive for some workloads |
None of these platforms will go away anytime soon. One of them may ultimately take a larger market share than the others. However, this race is only just beginning. All the talk about current market share is very much a moot point, as customers are only now getting their bearings and production deployments of each of these stacks are still rare. The fact that OpenStack is the topic du jour for us analysts and journalists, as well as for many major vendors, should not be taken as an indication of who will ultimately be successful. In the end, it is safe to say, there will be a world of many clouds.
EMC ViPR: Announced earlier this year, EMC’s ViPR technology signals the beginning of a revolution in enterprise storage. While servers are mostly virtualized and most organizations are in the process of making concrete plans for how to create application-aware networks, storage lags behind significantly and is almost entirely blind to the applications depending on it.
With ViPR, EMC lays the groundwork for centrally managing storage resources of any type and brand, not only across one data center, but spanning multiple locations and public cloud offerings. Tying together storage through the ViPR management layer enables end customers and service providers to offer their application teams one single northbound API for provisioning and managing ALL corporate storage resources. Application-aware storage is one of the key requirements for the Software Defined Data Center, which is why you should take a good look at what EMC is up to with its ViPR software.
ServiceMesh: The Software Defined Data Center, following EMA’s definition of the term, spans across all network, compute and storage resources that are available to the organization. These resources can be located within or outside of the corporate data center and they can be delivered in IaaS, PaaS or SaaS format.
ServiceMesh offers a central governance layer that sits on top of physical, virtual and cloud resources and enforces policy compliance, security and cost efficiency. Instead of today’s common script-based solutions, which by definition are error prone and difficult to govern, ServiceMesh leverages a declarative approach and is therefore worth a look for any organization that has already adopted or is in the process of adopting private or public cloud environments. Follow this link to learn more about why ServiceMesh was an EMA Vendor to Watch in March of 2013.
Simplivity: Simplivity’s Omnicube appliances are hyper-converged building blocks consisting of all the components typically found in a traditional rack: servers, switches, shared storage, WAN acceleration appliances, SSD appliances, etc. Omnicubes make all of these resources available as shared pools that are managed through VMware vCenter. Policies are applied on a per-VM basis, without the VMware administrators having to worry about configuring storage, network and compute resources.
Simplivity’s VM-centric approach to hyper-converged infrastructure enables customers to simply add on more 2U Omnicube appliances, when resources run out. These appliances can be hosted in different geographic locations, enabling advanced disaster recovery and high availability capabilities. In short, Simplivity’s Omnicube delivers a Software Defined Data Center in a box that can also leverage public cloud resources and is therefore worth a close look when evaluating converged infrastructure options.
CloudPhysics: CloudPhysics applies a big data approach to IT operations management, where operations data is collected across CloudPhysics’ entire customer base. This data is used as context knowledge when evaluating customer infrastructure events, configuration items and performance metrics.
Leveraging collective knowledge for operations management enables customers to automatically learn from their peers, without any manual intervention or dependencies on critical operations staff members. EMA’s impact brief on the launch of CloudPhysics in August 2013 provides a lot more details regarding this intriguing new approach to IT operations management.
CiRBA: CiRBA addresses two of today’s central cloud challenges, which become even more relevant when building the Software Defined Data Center: “Where should I place my new application workloads?” and “How can I consolidate application environments in a secure and policy-compliant way without sacrificing performance?” These are questions that go largely unaddressed in today’s private and public cloud platforms, but will have to be tackled before cloud can take on mission-critical applications.
The new CiRBA reservation console, in combination with CiRBA’s existing control console, provides conclusive answers to these cloud capacity management challenges. Whether customers are considering OpenStack, vCloud, IBM SmartCloud or any other cloud solution, it is worth taking a close look at CiRBA. Specifically, if a cloud is deployed on converged or hyper-converged environments such as Vblock, Simplivity, FlexPod, HP CloudSystem or IBM PureSystems, there could be tremendous consolidation potential, which translates into better utilization of these high-dollar hardware platforms (see EMA’s Vendor to Watch report for more details). In short, take a look at CiRBA, as you will get more value out of your infrastructure today and in the future.
VMTurbo: VMTurbo aims at automatically optimizing resource allocation, based on the importance of a specific application. Mission critical applications are automatically enabled to self-provision more network, storage and compute capacity than lower tier apps. Resource re-allocation happens in near real-time, based on a set of compliance, policy and cost requirements.
VMTurbo has recently added management capabilities for NetApp storage arrays, as well as for Amazon Web Services, Microsoft Azure, vCloud, CloudStack and all common hypervisor platforms. Ultimately, VMTurbo aims at orchestrating the entire Software Defined Data Center –including public cloud resources– based on its analytics engine and market-like model of resource allocation (see EMA’s Vendor to Watch report for more details).
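To illustrate the general idea of market-style resource allocation (purely conceptual, and not VMTurbo’s actual engine), here is a toy Python sketch in which workloads with larger “budgets” can afford capacity on busier, “pricier” hosts, while lower-tier workloads are placed only where capacity is cheap.

```python
# Toy illustration of a market-style allocation model (conceptual only).
# Workloads "buy" capacity from hosts whose "price" rises with utilization,
# so busy hosts naturally shed lower-priority load.

HOSTS = {"host-a": {"capacity": 100, "used": 80},
         "host-b": {"capacity": 100, "used": 30}}

WORKLOADS = [
    {"name": "billing-db", "demand": 20, "budget": 5.0},   # mission critical: big budget
    {"name": "test-env",   "demand": 20, "budget": 1.0},   # low tier: small budget
]

def price(host: dict) -> float:
    """Price per unit rises sharply as a host approaches full utilization."""
    utilization = host["used"] / host["capacity"]
    return 1.0 / max(1e-6, 1.0 - utilization)

def place(workload: dict):
    """Place the workload on the cheapest host it can afford, if any."""
    affordable = [(name, price(h)) for name, h in HOSTS.items()
                  if h["capacity"] - h["used"] >= workload["demand"]
                  and price(h) * workload["demand"] / 100 <= workload["budget"]]
    if not affordable:
        return None
    best = min(affordable, key=lambda x: x[1])[0]
    HOSTS[best]["used"] += workload["demand"]
    return best

for w in WORKLOADS:
    print(w["name"], "->", place(w))
```

The appeal of this kind of model is that placement, scaling and eviction decisions all fall out of one pricing mechanism instead of a pile of per-resource threshold alerts.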
MetaCloud: Many organizations are intrigued by the potential freedom from hypervisor licensing fees that OpenStack could bring (see my previous post on the business case for OpenStack). In other words, what is really interesting about OpenStack is that it is best consumed with the free KVM hypervisor. However, the prospect of having to manage a new stack of open source cloud and virtualization technologies within the corporate data center has led many organizations to hesitate and limit their OpenStack deployments to small pilot environments.
The MetaCloud value proposition aims at eliminating these concerns, promising that customers can safely run many workloads – not all applications are supported on KVM – on the free KVM hypervisor, while MetaCloud remotely manages the environment. This includes upgrading OpenStack as new releases become available, as well as adding important features such as high availability and more flexible network management.
Pivotal: Probably the biggest startup in history – 1,250 employees, $1 billion in funding – Pivotal facilitates a truly disruptive approach to IT operations management by placing the application at the center of all efforts, aiming to enable customers to operate in a manner as nimble and agile as today’s poster children of DevOps: Google, Amazon and Facebook.
Pivotal constitutes a radical paradigm shift in enterprise IT, moving away from managing resources and toward a fully data-driven model of enterprise IT. Overcoming the traditional separation of software delivery and IT operations is Pivotal’s key mission. To make this happen, Pivotal leverages open technologies such as Spring, Cloud Foundry, Redis, RabbitMQ and others.
This year’s VMworld was fully focused on the Software Defined Data Center, comprising pooled network, storage and compute resources that are centrally configured and made available to applications via APIs. Each one of these eight vendors contributes to the vision of the Software Defined Data Center and therefore offers tremendous customer value.