When discussing big data, the focus is often on the volume of the data, the structure of the data, or the near-real-time analysis requirements of the data. We toss around buzzwords like Hadoop, structured vs. unstructured, and so on. However, what is often missed are the analysis goals of the big-data environment.

Peppers and Rogers, a customer-strategy consulting firm, recently featured a piece on the use of big data rather than its size or structure. While Peppers and Rogers does not say to ignore the characteristics of big-data sources, it focuses more on the analytical process than on the volume, variety, and velocity of big data.

I agree with Peppers and Rogers on this approach, and I would take the opinion one step further. Any successful big-data analytical environment should be conceived and constructed around how the data can impact a company's top line, bottom line, or both, rather than around how many petabytes of information will be stored.

How do you approach the requirements and planning for your big-data, or "traditional," analytical environment?

Post your comments below or ping me directly on Twitter at @JohnLMyers44.