When Breaches Affect a Lot More than the Victim: How Much Security Is Enough?
Another report of a breach at a technology vendor on which much of the industry depends for security and trust: this time, Verisign.
The most immediate concern about this incident was that the attacks in question occurred in 2010, yet were not widely known until Reuters discovered the disclosure in a filing the company made under new SEC guidelines on reporting security breaches. According to the Reuters report:
The Verisign attacks were revealed in a quarterly U.S. Securities and Exchange Commission filing in October that followed new guidelines on reporting security breaches to investors. It was the most striking disclosure to emerge in a review by Reuters of more than 2,000 documents mentioning breach risks since the SEC guidance was published.
Learning of such an incident at a company like Verisign in this way, and after so long, is troubling enough, as others have already noted. But there is another aspect of this case, and others like it, that is just as troubling if not more so:
Incidents in the last year, from RSA and Symantec to Comodo, DigiNotar and now Verisign, highlight how attackers are going after the infrastructure and tools that secure information assets and assure the functioning of information systems worldwide. This is no news to anyone whose focus is protecting those assets, of course. The issue is that successful attacks on these organizations can have an impact far beyond the individual victim company alone.
This raises a significant problem for risk management at these organizations: the total impact extends well beyond the individual company’s own assets. How does any organization manage risk in proportion to impact when the scale of that impact dwarfs the victim?
To say that “well, such companies should always have been aware of this fact or stayed out of the business” may well be true, but it is also unrealistic. This is not to excuse poor practices. But we must also consider how we ourselves created the demand for such products and services, and how what seemed to work yesterday looks a lot more vulnerable today. The global network we have now was built on assumptions that today seem quaintly naïve. We will not be able to un-build that edifice any time soon.
It’s bad enough that organizations are already too reluctant to disclose, or are likely to argue the materiality of a breach in the first place. Some are undoubtedly concerned that disclosure will be followed by the worst-case scenario of the business going under or being sold in a fire sale – but that outcome is hardly universal. TJX’s stock recovered its price within a few months, and it’s not clear exactly what impact the breach had on the stock price in any event (which dampened enthusiasm for correlating stock price with breach impact). Others will be reluctant to disclose in order to avoid giving attackers further intelligence – a questionable argument, depending on the nature of what could be disclosed, considering that attackers a) have already succeeded in penetrating the organization, and b) have the advantage to begin with: they can focus on selected exposures, while businesses must stretch limited resources to protect as many of their most significant exposures as they can.
Many more businesses, however, may fear that disclosure means a hit not only to confidence but also to profitability, if it means devoting more resources to managing information risk exposures. But forcing that investment is exactly the effect mandatory disclosure is intended to have. Too many businesses already tend to place profitability and self-interest ahead of defending against exposures that, if exploited, could affect the wider public – let alone disclosing when the measures they do take have failed.
The bigger problem for such organizations is defining a scale for an “appropriate” response in light of this wider impact. Undoubtedly, many will hide behind this difficulty to justify limited investment – but the question remains regardless: how much is enough in these cases?
Which raises an even touchier question: if “enough” is too much for any one organization, must they be helped to manage the risks that expose us all? It’s a short hop from a business complaining that “‘enough’ is too much” to its being willing to accept what some would undoubtedly see as corporate welfare in the name of better security for all.
And how does one measure “enough” in such a case? Is regulation the answer? Just raising the question would set off a powder keg on any side of the issue one would care to take. Myself, I have always viewed regulation as something like trying to correct for astigmatism. Rotate the lens one way, and you have great focus in one direction – but not in another. Rotate it the other way and you’ve solved that problem – but created a new one in its place.
Regulation is something like that, as the state of Massachusetts found a few years ago when it first tried to introduce data protection rules. Make a rule flexible enough that businesses can define their own risk priorities and manage accordingly, and the rule has no teeth. Go the other direction and make it highly prescriptive, however, and you may force organizations to comply in painful detail with requirements that may already be irrelevant or obsolete by the time the rule is in place. (I’ll defer even raising the question of the role of audit.)
So what is the right answer to this wider-impact problem – or is there a good one at all? How can those responsible for the welfare of many more of us than just themselves or their customers do a better job of protecting all of us without risking their own viability? Or would it really take that much for them to do a better job? More than a few incidents suggest that simply being more responsible with capabilities already available to any similar organization might have been adequate in a given case. But how much of that is Monday-morning quarterbacking, when we don’t know what the organization is really up against on a day-to-day basis?
I also wonder just how tolerant we are willing to become when breaches are disclosed. We say disclosure is the approach we favor, but we are also quick to point out where victims could have done better – and we’re not always generous in tone, particularly with a “security” company that “should have known better.” That criticism is undoubtedly deserved in more than a few cases. But if we acknowledge, as many of us in infosec already do, that most of us have probably already been penetrated or exposed in some way and just aren’t aware of it yet, how willing will we be to recognize this in others, acknowledge that it happens, and learn from the facts of a case? Or does that also mean admitting that we are all a lot more vulnerable than we are ready to concede?
I’d be interested in your views on any of these questions.