Tuesday 13 August 2013

Whiter-than-white hats, malware, penalty and repentance*

I was recently contacted by a journalist researching a story about ‘hackers’ quitting the dark side (and virus writing in particular) for the bright(-er) side. He cited this set of examples – 7 Hackers Who Got Legit Jobs From Their Exploits – and also mentioned Mike Ellison (formerly known as Stormbringer and Black Wolf, among other handles), who in 1997 contributed a well-written and thought-provoking paper to Virus Bulletin, arguing that the antivirus industry should move beyond stereotypes of virus writers as socially inept, ethically challenged, and programmatically incompetent, and engage with them in dialogue and even consider employing (reformed) virus writers. Ellison had himself written viruses for his own research purposes but apparently he never released them into the wild, though he published code in various forums. His farewell to virus writing and his reasons for quitting the practice are quoted in full by Kurt Wismer here.
As it happens, I was at that presentation and discussion (it was the year of my own first paper for Virus Bulletin), and exchanged some emails with him subsequently. Not that I would have been persuaded enough to offer him a job, even if I had actually been working in the security industry at that time. (I was handling anti-virus, among other chores, for a medical research organization.) Still, he made some good points in his presentation and our later email exchange, and certainly came over as one of the more ethically grounded and technically knowledgeable (ex-)virus writers with whom I had contact at that time. The way Joel Deane recalled it, many of the audience members outside the AV industry were convinced enough to react positively to the idea of an anti-virus product from a company that employed former virus writers, but to the best of my knowledge, Ellison was never offered a job in the AV industry.
I was later approached by a publisher wanting someone to complete a book on which someone called Michael Ellison had apparently done much of the initial work. In fact, at least one other researcher received a similar approach, but as far as I know, no-one from the industry took up the offer. I can’t be sure it was the same Michael Ellison, as I no longer have any of that correspondence, but it’s relevant to my present train of thought either way.
The book plan as presented to me was largely along the lines of writing a demonstration virus, then writing the module to detect and disinfect it. It’s unlikely that any mainstream AV researcher at the time would have been prepared to publish self-replicating code. I was even more concerned, though, that readers of such a book would be misled into thinking they’d learned far more about virus and antivirus technology than they really had: that approach seems to me seriously flawed. In the end, a reader would finish up with an outsider’s view of how anti-malware technology is written, based on examples of quasi-malicious code that weren’t really typical of the real thing. (Suddenly I’m reminded of the Rosenthal utilities…) However, the publisher refused to consider any modification to the plan, and I don’t think the book was ever finished. (It’s because Ellison had already said that he wouldn’t be writing any more viruses that I wonder if it might have been a different Michael Ellison.)
There have been other virus writers who were reportedly offered jobs in or related to the security industry on the strength of their presumed programming skills. The PC Mag slide deck cited above has some recent instances, but any rehabilitation of such individuals has not usually been into the mainstream AV industry. There have been whispers about virus writers employed directly by the AV industry – Ellison mentioned in his paper that some of his VX peers were ‘perhaps’ working in AV, somewhat undermining his own core propositions, but he didn’t name names, and I’m not about to. (I realize that there are still people out there who believe that we write all the malware, but we don’t.)
I recall one semi-reformed virus writer who claimed that his security clearance in the UK was higher than mine, but I can’t remember his handle. The fact is, though, that vendors in the core virus detection industry would have avoided (publicly at least) employing ex-VXers for several reasons: suspicion of inadequate ethical development; conviction that writing malware is far from proof of technical skill; suspicion that virus writers are less likely to have the discipline necessary to work in a team or to have good coding habits, and may be reluctant to be trained in more ‘appropriate’ ways of working; but probably most of all because other companies would use it to undermine their credibility. In fact, I’ve always felt that the AV industry would have made its case much better to most outsiders in a variety of contexts – such as the issue of whether it’s a good idea to test security products using newly-created malware – by focusing on the technical arguments against writing ‘good’ viruses (and employing virus writers) rather than simply contending that writing self-replicating code is intrinsically unethical.
However, that was then and this is now, you’ll be surprised to hear. Viruses have not completely vanished, and they sometimes have much more impact than their generally low volumes would suggest. Malware culture, though, has changed drastically in the 21st century: malware is now almost entirely profit-driven. If security vendors don’t trust hobbyist virus writers enough to employ them, they certainly aren’t likely to put any trust at all in people whose expertise is in banking Trojans, phishing, and so on. Moving away from vendors with core expertise in malware detection, security firms in general are more likely to employ hackers – whatever you may understand by the term – with interests and presumed expertise in areas other than malware creation, such as vulnerability research.
The core issue is that a very high percentage of the (new) malware we see nowadays is criminally malicious, so the individual has already chosen to cross a line, and it’s not so easy to walk back over that line once you’ve broken the law. There were, of course, individuals at the time of Ellison’s experimentation whose motives were plainly malicious (usually destructive rather than financial). Conversely, there are still individuals whose experimentation is inspired by curiosity and a desire for peer approval rather than real malice or intended criminality. But we generally expect even precocious teens to have enough sense of social responsibility – and of mens rea – not to write phishing Trojans.
There are plenty of recent examples of people who’ve cooperated with law enforcement after capture, but far fewer of anyone known to have experienced some form of moral epiphany rather than making a desperate attempt at an exit strategy. My guess is that the curious/experimental mindset is more to be found in areas like vulnerability research, most of whose practitioners wouldn’t regard themselves as criminals, though most of them probably hope to make some money out of what they do. That isn’t criminal in itself, of course – I expect to make some money out of at least some of what I do! – but it is different to old-school (whitehat) hacking for the sake of experimentation. The gangs behind most malware are usually focused on efficiency, ROI and ‘good enough to work’ coding, though we see evidence of some slightly more blue-sky research. However, cooperation from within those groups is most likely to be inspired by self-preservation.
By the way, if you think you’ve seen the feature graphic before, it’s extracted from the cartoon here. If you haven’t seen it, it might amuse you for a nanosecond or two.
*”…And he spoke through his cloak, most deep and distinguished
And handed out strongly, for penalty and repentance
William Zanzinger with a six-month sentence…”
(Bob Dylan, ‘The Lonesome Death of Hattie Carroll’)

David Harley CITP FBCS CISSP
ESET Senior Research Fellow
