The 2011 Verizon Data Breach Investigations Report Deserves a Closer Look
It’s that time of year again, when the Verizon RISK Team blossoms forth with its latest, much-awaited Data Breach Investigations Report – and with a number of new and (to some) astonishing findings, the 2011 version did not disappoint (you can get it here).
This year’s report, however, has some important subtleties that could easily get lost in some of the more eye-popping findings – of which the eye-poppingest is likely the dramatic drop in the number of records breached. Only 3.8 million records were confirmed stolen in the 2010 DBIR caseload (which, as in the previous year’s report, included cases from the US Secret Service as well as Verizon – with the addition this year of input on 30 cases from the Dutch National High Tech Crime Unit). This is down from 2008’s record high of nearly 361 million – a change of two orders of magnitude (“Compared to totals for the last few years, that’s basically a rounding error,” as the report’s authors put it).
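The “two orders of magnitude” characterization checks out arithmetically; a quick sketch using the figures cited above:

```python
import math

records_2008 = 361_000_000  # near-record high reported for 2008
records_2010 = 3_800_000    # confirmed stolen in the 2010 caseload

# Orders of magnitude of the drop = log base 10 of the ratio
drop = math.log10(records_2008 / records_2010)
print(round(drop, 2))  # 1.98, i.e. roughly two orders of magnitude
```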
At the same time, however, the caseload represented the highest number of breaches ever covered in a single year’s report: nearly 800 cases in 2010 – almost doubling, on its own, the cumulative caseload covered since 2004 (just over 900 cases made up the Verizon dataset from 2004 through 2009).
These two factors make for some interesting findings. For example:
- Smaller organizations predominate among this year’s victims. Over half of the caseload (436 cases) consists of organizations with between 11 and 100 employees.
- The top three industry groups in the caseload make up 87% of breaches. Of these, the hospitality industry (“mostly hotels and restaurants”) stands out (40%), followed by retail (25%) and financial services (22%).
These findings are revealing about the nature of breaches in these categories. As the report describes,
“This rise of breaches in the Hospitality and Retail sectors is one of those areas where we do suspect the numbers reflect trends broader than this immediate caseload. Typically, such organizations represent smaller, softer, and less reactive targets than, for instance, financial institutions. Criminals may be making a classic risk vs. reward decision and opting to “play it safe” in light of recent arrests and prosecutions following large-scale intrusions into Financial Services firms. Numerous smaller strikes on hotels, restaurants, and retailers represent a lower-risk alternative, and cybercriminals may be taking greater advantage of that option.”
Here, then, is an aspect of this year’s report that should be recognized before anyone makes too free with its analysis: the caseload calls out two overall categories of successful breaches – those that exploit what the authors term “targets of opportunity” versus those that seek out “targets of choice.” Yesterday, Josh Corman called out the implications of a caseload with two such pronounced distinctions. The measures that would better secure a large enterprise, for example, may not be the same as those that make sense for the smaller organizations that make up the bulk of this report, given how many of the latter fell to opportunistic exploits such as attacks against point-of-sale devices. A superficial read of the report runs the risk of conflating these distinctions and misinterpreting (or worse, misapplying) its findings.
This should alert organizations to pay closer attention to findings such as commonalities that seem to appear across the board. Once again (*sigh*), the report found that 96% of the breaches investigated (i.e. just about all of them) were “avoidable through simple or intermediate controls.” While this is (yet another) wake-up call to focus real effort on the basics, applying consistent access management to point-of-sale devices is likely very different from applying it to, say, distributed applications in the data center.
(That’s not to say that some enterprises have no relationship to the security of data among smaller organizations. As the report describes, many of the smaller organizations affected “were often small independent franchise locations of large organizations.” Such “parent” entities would be well advised to engage with their smaller representatives to help them reduce their exposure when at all possible.)
There is still more hidden in the subtleties of these two classes of victims. For one thing, the proportion of breaches defined as “opportunistic” (85%) versus those classified as “targeted” (17%) remained largely unchanged from previous years. This, however, also has a relationship to the overall drop in records breached:
“One finding that did constitute a significant change in 2010 was a sharp drop in the percentage of total records compromised from targeted attacks. They accounted for 21% of records compromised compared to 89% and 90% for 2009 and 2008, respectively. As with attack difficulty, this is mainly due to an absence of any mega-breaches in 2010, almost all of which have been targeted in nature. Instead, we saw more targeted attacks at specific types of data that aren’t typically stolen in bulk, like various types of sensitive organizational data and intellectual property.”
And this, in turn, may obscure another of the most significant subtleties of this year’s report: aside from the number of records breached, the absolute counts behind many of the data points have grown, even where their relative percentages have shifted from previous years – in many cases because of the sharp increase in the record number of cases investigated (more than 5x last year’s caseload alone; see the table at right for the authors’ own illustration of this point).
Consider, for example, that while sensitive organizational data, intellectual property and classified information still represent a small fraction of the data compromised in this year’s DBIR, the ratios remained similar to those of previous years. That constancy, set against a significantly larger caseload, translates into what the report described as “significant” growth in the sheer volume of data of this type breached:
“At a glance, this appears to concur with recent speculation that payment cards are passé and that IP is the new goal of cybercriminals. This may well be true, but it’s a little too early to dub it a trend based on case evidence alone. Then again, it is noteworthy that the number of breaches involving such data has never been higher in our caseload. It also should be noted that the real rate of theft for IP and classified information is likely higher than any sources (including ours) show. Since fraud detection (e.g., CPP [Common Point of Purchase]) is the most effective means of discovering a breach and since IP isn’t used for financial fraud, then it stands to reason that thieves could pilfer IP freely without being discovered. This is not a comforting thought, but we’ll leave you with it anyway.”
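The arithmetic behind that constancy is worth making concrete: a roughly constant proportion applied to a much larger caseload necessarily means growth in absolute counts. The share used below is hypothetical, chosen only to illustrate the caseload ratio described above (roughly 800 cases versus roughly 150 the year before):

```python
prior_cases, current_cases = 150, 800  # rough caseload sizes per the report
ip_share = 0.05  # hypothetical, roughly constant share of breaches involving IP

prior_ip = prior_cases * ip_share    # ~7.5 breaches involving IP
current_ip = current_cases * ip_share  # ~40 breaches involving IP

# Same ratio, far more breaches in absolute terms:
print(round(current_ip / prior_ip, 1))  # 5.3
```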
In other words, if the nature of exploits against targets of choice stands out from the sheer volume of opportunistic attacks, the nature of the data breached among chosen targets may be changing as well – and that change may be harder to discern, since thefts of records such as credit card or financial account data tend to become apparent through fraud detection in ways that IP theft does not.
This will undoubtedly lead to speculation about the nature of more determined adversaries, and about what current data tells us about the reality of this class of threat. Here too, however, the Verizon team gives us an excellent example of how to address the issue. The authors use the Verizon Incident Sharing (VERIS) framework to qualify each aspect of an investigated incident according to a number of parameters. This helps investigators avoid speculating about the nature of the adversary and focus instead on the evidence of the exploit – a much more objective approach than (to paraphrase one recent observation) “APT ate my homework.” The ability to see multiple aspects of individual data points and relate them to each other in new ways is an approach that will undoubtedly appeal to those who understand the value of data synthesis in a data-driven approach to security management.
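As a rough illustration of the idea – not the actual VERIS schema, whose field names and values differ from the hypothetical ones below – each event in an incident can be described along a few objective axes (loosely, agent, action, asset and attribute), so that analysts record evidence rather than speculation about the adversary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IncidentEvent:
    # Simplified, hypothetical axes loosely inspired by VERIS's
    # agent / action / asset / attribute classification.
    agent: str      # who acted, e.g. "external", "internal", "partner"
    action: str     # what they did, e.g. "hacking", "malware", "misuse"
    asset: str      # what was affected, e.g. "POS terminal", "database"
    attribute: str  # which security property, e.g. "confidentiality"

incident = [
    IncidentEvent("external", "hacking", "POS terminal", "confidentiality"),
    IncidentEvent("external", "malware", "POS terminal", "confidentiality"),
]

# Aggregate the recorded evidence instead of guessing at motive:
actions = sorted({e.action for e in incident})
print(actions)  # ['hacking', 'malware']
```

Because every event is scored on the same axes, findings from hundreds of cases can be compared and recombined – the data-synthesis quality noted above.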
The VERIS approach also enables a more consistent reconstruction of the sequence of events that leads to an incident, which helps organizations see that a breach is often the result of a chain of events – a chain that, in some cases, may be broken at any link. There is a direct parallel here with proven approaches to risk management outside of IT. In commercial aviation, understanding the chain of seemingly innocuous events that leads to a disaster has helped aviation professionals see how and where to address the weakest links, reducing the number of serious incidents – not always through new technology, but also through practices such as the “sterile cockpit” rule, which bars non-essential conversation and activity during critical phases of flight below a certain altitude.
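The chain-of-events idea sketches naturally: the breach succeeds only if every link holds, so a control that breaks any single link stops the whole incident. The link names below are hypothetical, picked to resemble an opportunistic point-of-sale compromise:

```python
# Hypothetical links in an opportunistic POS breach:
chain = [
    "default vendor password in place",
    "remote access exposed to the internet",
    "keylogger installed",
    "card data exfiltrated",
]

def breach_succeeds(chain, blocked):
    """The attack completes only if no link in the chain is blocked."""
    return not any(link in blocked for link in chain)

print(breach_succeeds(chain, blocked=set()))                    # True
print(breach_succeeds(chain, blocked={"keylogger installed"}))  # False
```

The point is where to invest: any one of the simple or intermediate controls the report mentions, applied at any link, is enough to break this particular chain.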
These examples should give readers some idea why this year’s DBIR deserves more than a cursory evaluation, if organizations of all kinds are to make the most of the valuable insight its sometimes surprising findings offer.