By now it is widely known that Charlie Miller, famous for his research into the security of Apple products, has been evicted from the Apple developer program for violating the terms of the developer agreement, according to an Apple email (reproduced in this CNET article). The prevailing opinions on this matter seem to gravitate toward two schools of thought:
- One maintains that Apple responded to an independent security researcher with an unusually heavy hand. True, the app in question had been in the store since September, but Miller says that Apple has known about the issue since October 14. It seems that it wasn’t until that fact was made public that Apple gave him the boot. “I don’t think they’ve ever done this to another researcher. Then again, no researcher has ever looked into the security of their App Store. And after this, I imagine no other ones ever will. That is the really bad news from their decision,” said Miller as quoted by CNET. Blogger Brian Krebs tweeted that there are “three words to describe Apple on security: Opaque, capricious and arbitrary.” Said Kaspersky Lab researcher Roel Schouwenberg in this HuffPost article, “It doesn’t make sense to me. Apple has tried to reach out to the security community. This move seems really counterproductive.”
- The opposite view holds that, in this case, the terms of the Developer Program license agreement were violated, and that giving Miller the boot was warranted. “By allowing the application to remain in the App Store, Miller’s good word is the only thing separating him from a common criminal, from Apple’s perspective at least,” said Jonathan Zdziarski in a published quote:
“Zdziarski said Miller could have proved his point while making his app unavailable for download or by pulling his app from the store immediately after it was approved, ‘rather than give the some 100 million iOS users a chance to download and install this malware.’ Zdziarski said a hacker with bad intentions could have hijacked Miller’s app to attack iPhone and iPad users.”
This morning, I voiced the opinion that Apple must work with the independent security research community, and that Apple’s response in this case was not the right way to go. Without the work of independent third parties, vendors and developers may remain unaware of important issues that merit a response. So what was wrong with giving Miller the shove?
Richard Bejtlich may have said it best in this tweet:
“Apple needs a separate security researcher agreement. [Charlie Miller] routinely breaks regular TOS, so why not explicitly let him?”
If Charlie is in violation of the developer terms of service, why not construct an agreement that enables independent researchers to contribute to improving security? It’s not as though Charlie is an unknown quantity. Why shouldn’t there be a program for working with recognized researchers who have demonstrated their willingness to identify issues and help vendors resolve them – particularly when some aren’t even charging for their services?
As Schouwenberg noted above, Apple has sought to make progress in working more closely with the security community. But a more formal approach raises the question of just how much latitude would – or could – be provided for in such a program. For example, one issue that often dogs penetration testing is the realism of the assessment. If too many constraints are placed on the ability to explore production systems or critical underlying dependencies, just how realistic is the test to begin with? These areas are often ruled out of scope in a pen test – but they may be the very assets that attackers target, precisely because of their potential impact. How can their security be determined most realistically without placing sensitive assets at risk? Is pointing out evidence of potential exposure enough in every case?
How would such concerns translate to an evaluation of something like the Apple App Store? According to the HuffPost article, “Miller said he was trying to demonstrate a flaw in Apple’s process for reviewing new apps. If he hadn’t introduced the app to the App store, no one would have believed that Apple would accept an app that could infect mobile devices, he said.”
Is it necessary to demonstrate that an insecure app can get through to the App Store? The sheer number of applications that seek an App Store berth alone suggests that some almost certainly pose a risk. Helping Apple to better address this volume, however, is very probably a worthwhile goal. Some who feel that Apple did the right thing are even more concerned, as Adrian Kingsley-Hughes put it, “that Apple didn’t spot what this app was doing in the first place. Miller had to talk about it before Apple realized what was going on.”
But is it necessary to expose the store’s customers to a security risk in order to prove a point? Or for that matter, did Charlie’s app actually expose customers to such risk? It was limited in what it could do, but its capability was clearly intended to demonstrate what could get through.
If you deal with these issues of realism and independent research in your organization, I’d be interested in your experience.