Wednesday, 7 May 2014

Heartbleed, Open Source and Open Sores --- Dwayne Melancon (Tripwire)


Now that things are settling down after Heartbleed, I find myself thinking back on some of the conversations I've had about OpenSSL and open source software over the past couple of weeks. There is a persistent misconception that open source software is automatically trustworthy because it is open and more transparent than proprietary (aka closed source) software.
This is clearly a case of "necessary, but not sufficient."  Yes, there is plenty of opportunity (and maybe even motive) for people to review open source software, but that doesn't mean anyone actually expends the effort to do so.
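
To make that concrete, here is a minimal C sketch of the class of bug behind Heartbleed. It is illustrative only (the struct and function names are invented, and OpenSSL's real heartbeat code is more involved): the broken handler trusts a peer-supplied length field instead of checking it against the bytes that actually arrived.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative sketch of the Heartbleed class of bug (CVE-2014-0160);
     * simplified, not OpenSSL's actual code. */
    struct heartbeat {
        unsigned short claimed_len;   /* attacker-controlled length field */
        const unsigned char *payload; /* the bytes actually received      */
        size_t actual_len;            /* how many bytes really arrived    */
    };

    /* Broken: echoes claimed_len bytes, reading past the real payload
     * into adjacent heap memory (keys, cookies, whatever is there). */
    unsigned char *echo_broken(const struct heartbeat *hb) {
        unsigned char *resp = malloc(hb->claimed_len);
        if (resp != NULL)
            memcpy(resp, hb->payload, hb->claimed_len); /* over-read */
        return resp;
    }

    /* Fixed, in the shape of the eventual patch: silently discard any
     * heartbeat whose claimed length exceeds what actually arrived. */
    unsigned char *echo_fixed(const struct heartbeat *hb) {
        if (hb->claimed_len > hb->actual_len)
            return NULL;
        unsigned char *resp = malloc(hb->claimed_len);
        if (resp != NULL)
            memcpy(resp, hb->payload, hb->claimed_len);
        return resp;
    }

    int main(void) {
        unsigned char record[] = "hat"; /* 3 real bytes on the wire */
        struct heartbeat hb = { 65535, record, sizeof(record) - 1 };
        /* echo_broken(&hb) would copy ~64KB from beyond `record`;
         * echo_fixed(&hb) refuses the malformed request instead. */
        printf("fixed handler accepts it? %s\n",
               echo_fixed(&hb) ? "yes" : "no");
        return 0;
    }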
This flaw sat out in the open, yet went unnoticed for a couple of years.  The issue is that programmers are human and sometimes make mistakes that go unnoticed.  This is not just an open source problem, by the way: Apple's OS software recently shipped with a major security flaw in its SSL/TLS verification code (the infamous "goto fail" bug).  That bug made it into Apple's shipping code in spite of a rigorous testing process and a large QA budget.
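
For reference, Apple's bug had roughly this shape. This is a simplified, compilable sketch of the pattern, with a stub standing in for the real hash and signature steps; it is not the verbatim SecureTransport code.

    #include <stdio.h>

    /* Simplified sketch of the "goto fail" pattern (CVE-2014-1266);
     * check_step() stands in for the real hash/signature checks. */
    static int check_step(int ok) { return ok ? 0 : -1; }

    int verify_signature(int step1_ok, int step2_ok, int final_ok) {
        int err;
        if ((err = check_step(step1_ok)) != 0)
            goto fail;
            goto fail;  /* duplicated line: runs unconditionally, so
                           every check below here is skipped */
        if ((err = check_step(step2_ok)) != 0)
            goto fail;
        if ((err = check_step(final_ok)) != 0)
            goto fail;

    fail:
        /* err still holds 0 from the first successful check, so a
         * bad signature "verifies" even though it was never tested. */
        return err;
    }

    int main(void) {
        /* Final check is bad, yet verification reports success (0): */
        printf("verify -> %d\n", verify_signature(1, 0, 0));
        return 0;
    }

The compiler accepts this without complaint (modern warnings about unreachable code or misleading indentation will flag it), which is part of why a single stray line could survive review and testing.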

Trust, But Verify

These two issues underscore an old mantra, often applied to security: Trust, but verify.  What does that mean to us? Here are a few of the things I took away from these incidents:
  • Trust is not a control, and hope is not a strategy.  If the "stuff" you're securing is important to you or your organization, don't rely on someone else's statement that it is secure.  You may be able to build enough confidence by studying their test plans and procedures and scrutinizing their test results.  If that doesn't satisfy you, spend time testing it yourself, and validate that the code or component you're using is secure against the most common or most concerning threats you expect to face.
  • Design with resilience in mind.  Assume that any component can fail or suddenly become inadequate. Build your security in a way that lets you swap out components without superhuman effort, and understand the dependencies between components (a minimal sketch follows this list).
  • Show your work, and leverage others.  Document your assumptions, your test processes, and so on, and share them with your team.  This increases the odds that someone will notice things you'd miss if you did everything yourself.
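
To illustrate the resilience point from the list above, here is a minimal sketch. The interface and names are hypothetical, not any real library's API; the idea is that callers depend on a small contract rather than on one TLS implementation, so a broken component can be swapped without rewriting them.

    #include <stddef.h>

    /* Hypothetical provider interface: application code depends on
     * this struct, not on any particular TLS library. */
    struct tls_provider {
        const char *name;
        int (*handshake)(void *conn);
        int (*send)(void *conn, const void *buf, size_t len);
    };

    /* Callers never name a concrete library; they take the interface. */
    int secure_send(const struct tls_provider *tls, void *conn,
                    const void *buf, size_t len) {
        if (tls->handshake(conn) != 0)
            return -1;
        return tls->send(conn, buf, len);
    }

    /* One concrete provider; a replacement only needs the same shape. */
    static int stub_handshake(void *conn) { (void)conn; return 0; }
    static int stub_send(void *conn, const void *buf, size_t len) {
        (void)conn; (void)buf; (void)len; return 0;
    }
    const struct tls_provider stub_tls = { "stub", stub_handshake, stub_send };

    int main(void) {
        return secure_send(&stub_tls, NULL, "hi", 2);
    }

Swapping in a different provider then happens in one place, where the struct is filled in, and the dependency between your application and its crypto component stays explicit.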
It’s also important to remember not to get so caught up in the minutiae that you miss something big.  Which leads me to…

Zoom Out

When you’re too close to something, it can be easy to lose perspective or miss flaws in the big picture. Beyond the principles above, I also encourage security teams to zoom out and look at the overall system of security — not just the individual components.
If you zoom out so you can consider not only the components, but the interactions between them and the overall flow of information through your system, you can often discover flaws in assumptions, data flow, handoffs between functions, and other issues that can come back and bite you later.
The need to consider the overall system of security is another manifestation of "Trust, but verify."  Some of the recent high-profile breaches were at least partially attributable to organizations that didn't identify weaknesses at a macro scale, or that didn't properly safeguard handoffs from one process, team, or application to another.
What have you learned?  What have I missed?  Please share – we can all get better by sharing what we’ve discovered.
