Cisco has recognized this market opportunity and on July 30th, 2013 completed its $180 million acquisition of Composite Software, a leading data virtualization software and services company. There are many moving parts to these transactions, but perhaps the biggest area of interest is the opportunity this acquisition has for disrupting the data virtualization space. Data virtualization leverages query optimizers, database connections and additional technology to deliver the fastest possible speed to data. No matter how well optimized the solution, the network has always been a roadblock to greater success. Cisco brings a wealth of insight to this issue and will certainly add new functionality to Composite that allows it to better understand network traffic and perhaps control it to some extent. In doing so, Cisco and Composite will extend the capabilities of data virtualization beyond those of its competitors. I expect we will see significant activity from those competitors to strengthen their bonds with network partners in the coming months.
Moving forward, Cisco plans to operate Composite as an independent entity, keeping the entire executive team, R&D group, and sales and services organization. This demonstrates that Cisco intends to grow the revenue stream of its acquisition and to focus on further innovation from its technology.
I covered this event in greater detail in an EMA Impact Brief that you can download here.
The Circle Computer Group purchase will strengthen Syncsort’s functionality around Big Data and Hadoop, but more importantly, Circle’s DL/2 solution provides an engine that enables the migration of applications accessing IBM® Information Management System™ (IMS) to DB2® on z/OS® without requiring any application changes, resulting in greater flexibility of the data and significant cost savings. This mainframe data can be made available to Big Data platforms such as Apache Hadoop by using Syncsort’s DMX-h product line. Circle also provides similar capabilities for moving IBM Virtual Storage Access Method (VSAM) data.
The Circle acquisition is well suited for Syncsort and keeps the company close to its data roots. I have been a fan of Syncsort technology for years and I’m looking forward to what’s next for Lonne Jaffe and team.
At MicroStrategy World last week, the company presented its Visual Insight solution, delivered inside of v9.3, as well as its cloud data discovery solution, Express. Both are aimed at the self-driven data discovery market and will compete with Tableau Software, Tibco Spotfire, QlikTech and others that have helped to carve out this new and powerful BI segment.
The trend toward data discovery has been growing over the past couple of years, and while some of the companies listed above entered the market with purpose-built stand-alone solutions, the big stack players have entered the market as well. IBM/Cognos, Oracle, SAP and others have brought compelling solutions to market to help fend off this new competition to their traditional BI solutions and business models.
The MicroStrategy team made an interesting point during our briefing (thus the reason for this post). They have chosen to embed Visual Insight into their v9.3 platform product, creating a solid link to the data and the processes that are leveraged and managed within their standard BI platform. This tight relationship helps to reduce the Wild, Wild West effect that discovery tools can create when they are stand-alone islands within the enterprise, while still enabling the core value of a data discovery tool. This integration creates a more easily managed environment, perhaps more trustworthy data access and the opportunity to manage the landscape from a compliance and governance perspective.
So the question is: does integrating a data discovery solution within a wider, more managed environment or platform create a less effective tool for discovery or a stronger one? Does this shine a spotlight on a challenge that stand-alone solutions will face as the larger stack platforms enable data discovery and make it enterprise friendly?
I’d be pleased to hear your thoughts.
“Data is the new oil.” http://www.forbes.com/sites/perryrotella/2012/04/02/is-data-the-new-oil/
I agree with Ann, but to get value from crude oil it must be processed, and that is often what is lost in the buzz surrounding the Volume, Velocity and Variety (3Vs) attributes of Big Data requirements. The application of “complex workloads” is what turns the “crude oil” of Big Data into something consumable. After you have ingested data sources that may contain Big Data’s 3Vs, a suitable environment is required to process and leverage the data. Otherwise, all you have accomplished is the creation of a new form of long-term storage and another information silo. Addressing “complex workloads” allows Big Data to be integrated as an aspect of a wider enterprise environment, whether that environment is operational or analytical.
This brings about some interesting questions.
In each use case, a decision must be made as to which aspect of the Big Data environment is used to facilitate operational or analytical action. As Hadoop and other ingestion technologies master the “science” of getting Big Data into a platform, deciding where “complex workloads” are allocated and performed is going to be the “art” of gaining value from Big Data initiatives.
EMA has defined this intersection of Big Data platforms as the Hybrid Data Ecosystem.
The EMA Hybrid Data Ecosystem (see below) includes the following components: Operational Systems; Enterprise Data Warehouses (EDW) and Data Marts (DM); Analytical platforms (ADBMS); Hadoop, Key/Value, Graph data stores (NoSQL); and Cloud-based sources.
A more agile data integration approach that leverages lean integration principles needs to be implemented to ensure business executives and analysts can move at the speed required to make value-based decisions. Jared Hillam, EIM Practice Director for Intricity, LLC, and a recent guest on the Architect-to-Architect & Business Value Series I’m participating in, made the point that there are too many chefs in the data kitchen and that new tools need to be applied to consolidate processes and enable core teams to execute. I agree with Jared: the opportunity loss most enterprises experience because executives can’t access data is costing millions, and taking a faster, leaner approach is definitely necessary.
Data virtualization technology, when it leverages lean integration principles, brings an innovative solution to this problem. Data virtualization complements the data warehouse infrastructure while delivering repeatable success through reuse and by cutting the time between request and delivery of data. Data virtualization’s common access layer is especially well suited to rapid prototyping; it brings the end user into the loop early and engages them with the analyst to shorten requirements gathering and to reduce time, cost and risk. These platforms allow the analyst to profile data in real time, apply transformations and, with sophisticated platforms, enact data quality functions on the data as it is accessed by the data virtualization layer. Once these projects are complete, fellow analysts within the organization can reuse the processes and projects, creating further agile value in the system.
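To make the idea of a common access layer concrete, here is a minimal, vendor-neutral sketch in Python of what a “virtual view” does conceptually: it federates two physical sources at access time instead of copying the data into yet another mart. The source names, columns and file paths are hypothetical and do not represent any specific data virtualization product’s API.

```python
# Hypothetical illustration of a virtual view over two physical sources.
import sqlite3
import pandas as pd

def customer_orders_view(warehouse_path: str, orders_csv: str) -> pd.DataFrame:
    """Federate warehouse customer records with orders landed as flat files."""
    # Source 1: customer reference data living in the warehouse.
    with sqlite3.connect(warehouse_path) as conn:
        customers = pd.read_sql("SELECT customer_id, region FROM customers", conn)

    # Source 2: an extract from an operational system, landed as a CSV.
    orders = pd.read_csv(orders_csv)

    # A lightweight, reusable transformation applied in the access layer
    # rather than baked into another physical data mart.
    view = orders.merge(customers, on="customer_id", how="left")
    view["order_total"] = view["quantity"] * view["unit_price"]
    return view
```

The point of the sketch is the design choice, not the code itself: the join and the derived column are defined once in the access layer and can be reused by other analysts, rather than being rebuilt in each downstream mart or report.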
As data management ecosystems expand beyond traditional data warehouses to include cloud, Big Data and analytic platforms, the ability to apply lean integration principles and agile work processes will become an even greater value to the information supply chain. Data virtualization is a technology that enables this agility and shortens time to value for its users.
On June 26th I will be a guest on the Architect-to-Architect & Business Value Series discussing how to Achieve Business Intelligence Nirvana with Self-Service and Data Virtualization. I hope you will join us for the program.
Now is the time to examine your data strategy and to investigate how integrating these new technologies can transform your traditional, enterprise data warehouse-centric landscape into a flexible Hybrid Data Ecosystem that leverages the best platform to match your workload, data and business goals.
I’ll be covering this topic in detail during my keynote at the TDWI World Conference in Chicago on May 7th, 2012.
Philip Russom, Research Director of Data Management at TDWI, and Grant Parsamyan, Director of BI & Data Warehousing at eHarmony.com, got together last week on Informatica’s Architect-to-Architect & Business Value Roundtable Series to discuss how data virtualization delivers agility while complementing the investment you have made in your data warehouse technology. I’m participating in this series as well and will be hosting a session in June.
Data virtualization is an excellent way to leverage and add value to the investment you have made in your data warehouse infrastructure. Data landscapes are expanding quickly, and as they become more distributed, DV technology can act as the trusted layer of data access across these systems. In the 90s some people saw DV technology as a possible path to circumventing data warehouses; that’s not the case today, and I’m not aware of any vendor in the space that promotes this as a best practice. In the end, DV helps to secure the DW as a critical element in our data world and provides an agile way to leverage the data within it.
Philip made a great point early in the segment when he compared the importance of agile development with that of responsible data access and preparation. His position is that you can’t have one without the other, and I think he is right. Speedy application development is great, but skipping the proper steps to prepare the source data can eliminate the value you get from being agile.
As I mentioned above, adding agility to your data environment is critical, and Philip shared some research data that illustrates the problem: 41% of respondents indicated that a standard data adjustment, such as adding a new hierarchy, can take one to three months in a non-agile environment. Grant Parsamyan stated without hesitation that that time frame would never work for eHarmony.com.
Data virtualization plays a key role in addressing these challenges. Delivering a managed and trusted data layer speeds development, especially when the data virtualization and data integration people can work more directly with the business, building prototypes and managing processes. Companies that integrate data preparation tools into the process will see even greater gains in productivity, accuracy and use.
As always, I’m just scratching the surface on the ideas and practices Philip and Grant discussed, so follow the link to listen in for yourself. For more of my thoughts on this series, follow the link below.
Wayne shared a timeline of how data virtualization has matured over the past 20 years. In it he points out that data virtualization first came to the industry under the moniker of Virtual Data Warehousing (VDW) in the early ’90s and was quickly dismissed by the physical data warehouse purists. The technology was interesting but went against the grain of most common data management practices of the time. Data Federation came later and gave way to Enterprise Information Integration (EII) in the 2000s. The data virtualization technology we see in the market today started to get significant traction over the past five years or so. Why the history lesson? I think it’s important to point out that DV has been around a good long time and that the foundation of this technology is rooted in significant experience. Coupled with today’s more powerful computing platforms and networks, it’s a relevant and important technology, especially when it complements existing approaches for managing data.
Rob Myers, Manager of BI Architecture/EDW Solution Architect for HealthNow NY, was on the program and shared in-depth insights into how they’ve implemented DV technology. Data virtualization has gone somewhat viral at HealthNow: they have had success implementing an SOA-style data service for applications, they have leveraged their DV-hosted common data model and definitions to enable a stronger Master Data Management (MDM) foundation, and they have addressed rampant data mart spread by delivering the DV access layer.
A critical requirement for HealthNow was the ability to couple data federation functionality with traditional data integration features such as data quality, profiling, cleansing and ETL. Addressing data quality issues helped gain acceptance with IT and business stakeholders who badly needed a trusted data layer for applications within the company. Having the ability to switch between data federation and ETL has made the environment more agile and given HealthNow the flexibility to serve a wider variety of data requirements.
HealthNow used DV to take control of its data mart sprawl. When they began to implement the technology, over 30K data marts were spread across the company. Delivering a unified and trusted data layer enabled them to meet the needs of the users and gain control over an out-of-control data landscape.
HealthNow’s success is a great example of the flexibility and power a data virtualization solution can provide. Follow this link to replay this last program; Wayne and Rob go into greater detail than I’m able to address here. I’ll be hosting an upcoming session in the series, and in the meantime you can follow this link to sign up and listen in.
Previous posts in this series:
BI initiatives are growing in nearly all companies, while the supporting data landscapes grow and extend across distributed, heterogeneous sources that are not easily managed or accessed. During a recent web seminar titled “Why Agile BI, Why Now and How Data Virtualization can Help,” Informatica VP of Product Strategy David Lyle called this challenge the “integration hairball,” pointing out that serving the “speed of the business” demands agility and the removal of IT dependency. David makes a great point: IT has long been the gatekeeper of data and is often the reason for slow information access. As self-service BI solutions gain traction, sophisticated users are demanding an environment that promotes speed and agility.
There are multiple reasons why traditional environments lack agility and are challenged to deliver fast time to value or, more importantly, fast time to data. The panel on this webinar did an excellent job of highlighting several challenges that contribute to this issue and can be quickly addressed by a more agile and innovative environment powered by data virtualization.
1. There are too many components involved in the BI stack
2. Business and IT don’t see things the same way
3. BI is treated as any other enterprise application
4. Data is everywhere, not just in the EDW
5. It takes too long to deliver data / reports to business
Data virtualization offers a layer of managed data access that can overcome issues of speed. It delivers a common, logical and virtualized data abstraction and access layer that analysts can work with directly. The addition of data profiling and data quality functionality within the work process allows analysts to set data rules that IT can enforce, better serving the needs of self-service end users. Applying these features to federated data in a virtualized environment provides even greater agility to a BI system, especially since BI tools simply assume the availability of fresh, accurate and consistent data, which is seldom the case.
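As an illustration of the rule-setting described above, here is a small, hypothetical Python sketch of analyst-defined data quality rules that could be enforced at the virtual access layer before rows are served to self-service users. The rule names, columns and logic are illustrative assumptions, not any particular vendor’s feature set.

```python
# Hypothetical, analyst-defined data quality rules enforced at the access layer.
import pandas as pd

# Each rule maps a name to a check that returns True for rows that pass.
RULES = {
    "customer_id_present": lambda df: df["customer_id"].notna(),
    "order_total_non_negative": lambda df: df["order_total"] >= 0,
}

def apply_rules(view: pd.DataFrame) -> pd.DataFrame:
    """Serve only rows that pass every rule; report failures for data stewards."""
    passing = pd.Series(True, index=view.index)
    for name, rule in RULES.items():
        result = rule(view)
        print(f"{name}: {(~result).sum()} rows failed")  # visibility for IT/governance
        passing &= result
    return view[passing]
```

The design point is that the analyst declares the rules once, while IT enforces them consistently at the access layer for every consumer of the federated data.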
Data virtualization is a strategic tool for companies to add value and agility to their traditional data management environments. I’m participating in the series of web seminars hosted by Informatica that addresses this technology, so watch my blog for continued coverage as the series moves forward.
Watch last week’s Data Virtualization Architect-to-Architect Roundtable and follow the series here.
Big Data, Cloud, Social and Mobile business intelligence are all at the forefront of his strategy, and the company is making significant R&D investments to bring products to market that align with these initiatives. I attended the last MicroStrategy World event in Monte Carlo, and it’s clear the company has made strong progress on all fronts since then. Many companies in our space struggle to keep focus on one strategic direction, so it speaks well of MicroStrategy that it is able to execute on all of these.
Announcements were made in all product segments this week; EMA covered some of the cloud news here.
MicroStrategy is leading the industry with regard to social data for business intelligence and has moved beyond simple social monitoring of brands to deliver an application that leverages Facebook’s interest graph. With over 800 million members, Facebook represents the world’s largest database of demographic, network, interest and activity information. MicroStrategy Wisdom delivers consumer insight by leveraging these areas so companies can filter Facebook data by Page Likes, Gender, Relationships, Urbanicity, Education, Age and other dimensions. Additional EMA coverage here.
I recommend taking a test drive of the public version of Wisdom, available in the iTunes App Store. Presently the system is delivering data from 4,857,333 Facebook users. Social data analytics should be part of your 2012 business strategy.
Thank you to the Analyst Relations team at MicroStrategy for a valuable visit to the World Event. Special thanks to Jonathan Goldberg and Douglas Chope for their assistance and expertise.