What caused the Heartbleed Bug that endangered the privacy of millions of web users this week? On one level, it looks like a simple case of human error. A software developer from Germany contributed code to the popular OpenSSL software that contained a basic but easy-to-overlook mistake. The OpenSSL developer who approved the change didn't notice the issue either, and (if the NSA is telling the truth) neither did anyone else for more than two years.
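The mistake itself was a missing bounds check: the "heartbeat" handler echoed back a payload whose length was taken from the request itself, without ever checking it against the number of bytes actually received. Here is a simplified sketch of that class of bug in C; the structure and function names are invented for illustration and are not OpenSSL's actual code:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical record structure, for illustration only. */
struct record {
    unsigned char *data;  /* two length bytes followed by the payload */
    size_t length;        /* bytes actually received on the wire */
};

unsigned char *buggy_heartbeat(const struct record *r) {
    /* Read the 16-bit payload length the sender claims to have sent. */
    size_t payload = ((size_t)r->data[0] << 8) | r->data[1];

    unsigned char *reply = malloc(payload);
    if (reply == NULL)
        return NULL;

    /* BUG: copies `payload` bytes even if the record only contained a
     * handful. The rest of the reply is filled from adjacent heap
     * memory, which may hold keys, passwords, or session cookies. */
    memcpy(reply, r->data + 2, payload);
    return reply;
}
```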
It's hard to blame those guys. OpenSSL is an open source project. As the Wall Street Journal describes it, the project is "managed by four core European programmers, only one of whom counts it as his full-time job." The OpenSSL Foundation had a budget of less than $1 million in 2013.
That's shocking. Software like OpenSSL increasingly serves as the foundation of the American economy. Cleaning up the mess from the Heartbleed bug will cost millions of dollars in the United States alone. In a society that spends billions of dollars developing software, we should be spending more trying to keep it secure. If we don't do something about that, we're doomed to see problems like Heartbleed crop up over and over again.
Why security flaws are different from other bugs
Computer security is a classic collective action problem. We all benefit from efforts to improve software security, but most organizations don't make it a priority. For most of us, it's economically rational to free-ride on others' computer security efforts.
Of course, you might ask why this argument doesn't apply to open source software in general. If free-riding is a problem, why do free programs like OpenSSL exist in the first place?
To paraphrase a famous essay by free software developer Eric Raymond, free software is driven by individuals and organizations scratching a "personal itch." Most of the time, people fix bugs or add features to free software projects because as users of the software they will benefit from the improvement.
If a large company has come to depend on a free software project, it makes economic sense to pay engineers to make changes that will benefit them. And once an improvement has been developed, it's easier to contribute the improvement back to the core project so that it will be included in future versions of the software. That creates a virtuous circle: as software gets better, more companies use it, which leads to more improvements, which leads to even better software.
So why didn't this virtuous circle allow OpenSSL to catch the Heartbleed problem sooner? The problem is that security vulnerabilities aren't like other bugs. Most bugs crop up naturally as people use the software. The most common and harmful bugs are the ones that get noticed and fixed first.
But security flaws don't come up naturally. They only surface when someone deliberately goes looking for them. And that can happen one of two ways. If security researchers find a security bug first, it can be quickly patched before much harm is done. If malicious hackers find a bug first, it can be exploited to catastrophic effect.
So the usual open source model of waiting for users to report and fix bugs as they discover them doesn't work for security problems. To find security bugs before the bad guys do, people have to be actively looking for them. And while many IT workers understand the importance of this kind of security auditing, it's much harder to convince management to devote resources to fixing theoretical security bugs when there are always more immediate non-security bugs requiring attention.
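This kind of active searching can be partly automated. Fuzz testing, which fires large numbers of malformed inputs at a program and checks that basic invariants still hold, is one common way researchers hunt for memory bugs of the Heartbleed variety. Here is a toy harness in C, with an invented bytes_to_copy function standing in for a real protocol handler:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy function under test: how many bytes would the handler copy for a
 * message claiming `claimed` payload bytes inside a record that really
 * holds `actual` bytes? This version has the Heartbleed-style bug: it
 * trusts the claimed length outright. */
static size_t bytes_to_copy(size_t claimed, size_t actual) {
    (void)actual;
    return claimed;
}

int main(void) {
    srand(1234);
    for (int i = 0; i < 1000000; i++) {
        size_t actual  = (size_t)(rand() % 64);     /* bytes really received */
        size_t claimed = (size_t)(rand() % 65536);  /* sender-chosen length */

        /* Invariant: never copy more bytes than were received. */
        assert(bytes_to_copy(claimed, actual) <= actual);
    }
    printf("no invariant violations\n");
    return 0;
}
```

Run against the buggy version, the assertion fails almost immediately, which is the point: a machine can surface in seconds a flaw that ordinary use of the software would never trigger.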
We need better funding for security research
We'll only get secure software if we have people actively finding and fixing security flaws. As a society we do have some people like that — there are computer security researchers in both academia and the private sector. The Heartbleed bug itself was discovered by security researchers employed at private companies: Google and a security research company called Codenomicon.

But it shouldn't have taken two years for someone to notice the gaping hole in OpenSSL's heartbeat function. Given how widely OpenSSL is used, there ought to be multiple people auditing every line of code added to the software, so that mistakes can be caught and corrected before the software is widely deployed.
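The repair, for what it's worth, is tiny: a few lines of bounds checking that a dedicated reviewer or auditor could have insisted on. Continuing the simplified sketch from above (again with invented names rather than OpenSSL's actual patch, though the real fix takes the same approach of checking the claimed length against the received record):

```c
#include <stdlib.h>
#include <string.h>

/* Same toy record structure as in the earlier sketch. */
struct record {
    unsigned char *data;  /* two length bytes followed by the payload */
    size_t length;        /* bytes actually received on the wire */
};

unsigned char *fixed_heartbeat(const struct record *r) {
    if (r->length < 2)
        return NULL;  /* too short to even hold a length field */

    size_t payload = ((size_t)r->data[0] << 8) | r->data[1];

    /* The check the buggy version was missing: refuse to echo back
     * more bytes than the record actually contained. */
    if (2 + payload > r->length)
        return NULL;  /* silently discard the bogus request */

    unsigned char *reply = malloc(payload);
    if (reply == NULL)
        return NULL;
    memcpy(reply, r->data + 2, payload);
    return reply;
}
```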
There are several ways this could happen. First and foremost, the core OpenSSL project should be better funded. Governments, foundations, and large corporations that use the software should all be chipping in money to offer the core OpenSSL team full-time jobs and to help them hire additional programmers to support their work.
Second, more resources should be devoted to independent security audits of popular open source software. That could take the form of grants to academic security researchers, free-standing non-profit organizations, or even a government agency devoted to finding security problems. Indeed, these different types of institutions are likely to have different strengths and weaknesses, so a combination of all three is likely to work best.
In recent years, there has been a heated debate over "cybersecurity." There have been proposals to establish federal security standards, license security professionals, and perhaps even give the president the power to seize control of sensitive networks. There are good reasons to be skeptical of such proposals, because regulating private-sector security practices could easily do more harm than good.
But better funding for security research is something everyone should be able to agree on. And Heartbleed shows it's long overdue.