Wednesday, September 24, 2014

Software Security - Hackable Even When It's Secure

On a recent call, one of the smartest technical folks I know said something that made me reach for a notepad so I could capture the idea for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even when they've managed to avoid the key issues.

Let me dig into this a little deeper, because the thought merits the discussion.

Think about what you go through when you're testing a web application. I can speak to this type of activity since it was something I focused on for a significant portion of my professional career. Essentially, the whole problem comes down to being able to define what the word secure means. Many of the organizations I've witnessed first-hand stand up a software security program over the years follow the standard OWASP Top 10. It's relatively easy to understand, it's fairly well maintained, and it's relatively easy to test software against. It's hard to argue that the OWASP Top 10 isn't the de facto standard for determining whether a piece of software is secure.

Herein lies the problem. As many of you who do software security testing can attest, without at least a structured framework (aka a checklist) to test against, the testing process becomes never-ending. I don't know about you, but I've never had the luxury of taking all the time I needed; everything always needed to go live yesterday, and I or my team was always the speed bump on the way to production readiness. So we first settled on making sure none of the OWASP Top 10 were present in the software/applications we tested. Since this created an unreal number of bugs, we narrowed the scope down to just the OWASP Top 2: if we could eliminate injection and cross-site scripting, the applications would be significantly more secure, and everything would be better.
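
To give a feel for what that "Top 2" testing actually looks for, here's a minimal sketch of the cross-site scripting case. The class and method names are mine, purely for illustration, and a real application would lean on a maintained encoder library rather than hand-rolling one:

```java
public class GreetingPage {

    // Vulnerable: untrusted input is dropped straight into the HTML, so a
    // "name" of  <script>alert(1)</script>  executes in the victim's browser.
    static String renderUnsafe(String name) {
        return "<p>Hello, " + name + "!</p>";
    }

    // Safer: HTML-significant characters are encoded before reaching the page.
    static String renderSafe(String name) {
        return "<p>Hello, " + escapeHtml(name) + "!</p>";
    }

    // Minimal encoder for illustration only; in practice use something like
    // the OWASP Java Encoder instead of rolling your own.
    static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(renderUnsafe("<script>alert(1)</script>")); // payload survives
        System.out.println(renderSafe("<script>alert(1)</script>"));   // payload neutralized
    }
}
```

Injection works the same way: untrusted input crosses a trust boundary without being neutralized first. These are exactly the bugs a checklist-driven test is good at catching.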

Then another issue surfaced. After all that testing and box-checking, when we were fairly sure the application didn't have remote file includes, cross-site scripting (XSS), SQL injection, or any of that other critical stuff - we allowed the app to go live, and it quickly got hacked. The issue this caused for us was one not only of credibility but also of confusion. How could the app have none of those critical vulnerabilities and still get easily hacked?!

Now back to the issue at hand.

The fact is that even when you've managed to avoid all the common programming mistakes and well-known vulnerabilities, you can still produce a vulnerable application. Look at what eBay is going through right now. Even though there may not be any XSS or SQLi in their code, they still have issues allowing people to take over accounts. Why? Because there is more to securing an application than making sure there aren't any coding mistakes. Fully removing the OWASP Top 10 from all your code bases (good luck with that!) may make your applications safer than they are now - but it won't make them secure. And therein lies the problem.

When you hand your application over to someone who is going to test it for code issues like the OWASP Top 10, and only that, you're going to miss massive bugs that may still lurk in your code. Heartbleed, anyone? Maybe there is a logic flaw in your code. Maybe there is a procedural mistake that allows someone to bypass a critical security mechanism. Maybe you've forgotten to remove your QA testing user from your production code. The thing is, you may not actually know if you only test for app security issues with traditional or even emerging tools. Static analysis? Nope. Dynamic analysis? Nope. Manual code review? Maybe.
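
To make the logic-flaw case concrete, here's a hedged sketch of the kind of bug no injection-focused scan will flag. The service and field names are hypothetical; the point is that the code is syntactically clean - no SQLi, no XSS - and still broken:

```java
import java.util.HashMap;
import java.util.Map;

public class AccountService {

    private final Map<String, String> recoveryEmailByAccountId = new HashMap<>();

    // Logic flaw: the caller is authenticated, but the handler never checks
    // that the caller actually OWNS the account being modified. Any logged-in
    // user can change the recovery email on any account id they can guess,
    // then trigger a password reset and take the account over.
    public void updateRecoveryEmail(String callerUserId, String targetAccountId, String newEmail) {
        // The missing authorization check a scanner will never notice:
        // if (!targetAccountId.equals(callerUserId)) throw new SecurityException("not your account");
        recoveryEmailByAccountId.put(targetAccountId, newEmail);
    }
}
```

Every line of that parses fine, compiles fine, and contains nothing a payload-based tool can trip over. The vulnerability is in what the code doesn't do.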

The ugly truth is that unless you have someone who understands not only what the code should do under normal conditions but also what it should never do, you will continue to have applications with security issues. This is why automated scanners fail. This is why static analysis tools fail. This is why penetration testers can still fail - unless they're thinking outside the code and in terms of application functionality and performance.
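
One way to capture "what it should never do" is an abuse-case test. Continuing the hypothetical AccountService sketch above (using JUnit 4, and still purely illustrative):

```java
import org.junit.Test;

public class AccountServiceAbuseTest {

    // Functional tests prove updateRecoveryEmail works for the account owner.
    // This abuse-case test encodes the thing the system must NEVER do: let
    // one user change another user's recovery email. Run against the flawed
    // handler above, it fails - which is exactly the point. The missing
    // "never" becomes a red build instead of a breach.
    @Test(expected = SecurityException.class)
    public void nonOwnerCannotChangeRecoveryEmail() {
        AccountService service = new AccountService();
        service.updateRecoveryEmail("attacker-123", "victim-456", "attacker@evil.example");
    }
}
```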

The reality is that for those applications that simply can't be allowed to fail, you not only need them tested by some brilliant security and development minds, but also by someone who understands that beautiful combination of software development, security, and application business processes and design. Someone who looks at your application and says: "You know what would be interesting?"...

In my mind this goes a long way toward explaining why there are so many failing software security programs out there in the enterprise. We seem to be checking all the right boxes, testing for all the right things, and still coming up short. Maybe it's because the structural integrity hasn't been validated by the demolitions expert.

Test your applications and software. Go beyond what everyone tells you to check, and look deep into the business processes to understand how entire mechanisms can be abused or bypassed outright. That's how we're going to get a step closer to having better, safer, more secure code.

1 comment:

jwilliams said...

Rabbit - the way you write it, it sounds like you consider security to be binary: either you have it or you don't. I think of it more like an argument. There's some history in "assurance arguments" if you care. The argument should make some claims about security and then provide some evidence that those claims are satisfied. It's essentially a model of what we expect our security to look like.

For example, a really weak expected security model (like yours) makes weak claims that are not structured according to what is important to the business - like the OWASP T10. Even weaker might be an expected security model of "we ran some tool and came up clean." In both of these cases, the definition of "secure" is left to an external party who has no idea about your business or what matters to you.

A better expected security model would probably identify a relatively small set of high-level business security concerns -- like data protection, control over infrastructure, etc. The model would articulate a strategy for meeting these concerns with a *set* of integrated defenses, including defense in depth and incident response. And the model would also include the details of implementing that strategy, along with evidence that the defenses actually work and are used in all the right places. This is actually not that hard to generate because, given a clear vision of what you want to create, development organizations are really damn good at creating things.

Stronger expected security models, by the way, specify the *positive* definition of how the system should behave -- like always using PreparedStatement -- rather than a *negative* approach that attempts to send every known form of SQL injection to see if a system is susceptible.
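
As a concrete illustration of that positive/negative distinction (my sketch, not Jeff's): "all queries go through PreparedStatement" is a property you can verify directly in the code, rather than something you probe for by throwing attack payloads at a running system. The repository and query here are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderRepository {

    // Positive model: every query is a PreparedStatement with bound
    // parameters. Checking this is a code review or static check ("no
    // string-concatenated SQL anywhere"), not an attempt to enumerate
    // every known injection payload against the running application.
    public ResultSet findOrdersForCustomer(Connection conn, String customerId) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT id, total FROM orders WHERE customer_id = ?");
        stmt.setString(1, customerId);
        return stmt.executeQuery();
    }
}
```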

So, with regard to your title, "secure" doesn't equate to "not hackable." Some have argued, and it may be true, that everything is hackable. But security is the state of being protected against the stuff we know about. We might be vulnerable to things we do not yet know about and cannot predict -- but how can we defend against these? The best we can do is to not get burned by things we should have known. And for the record, virtually ALL the breaches have been things we knew about (or should have). In some cases, like Target, they were even specifically told of the problem.

In any case, we need to continue exploring, learning, and evolving our expected security model. And doing the work to verify that we actually implement it.

--Jeff
