Friday, June 20, 2008

Application Security - Logic Flaws

Web Application Security Scanners are great tools, in my opinion, and they are getting better and better at finding a wealth of flaws in applications - but here is one perfect example of why humans are still required. This is a real-world example - obviously I can't reveal the client, but if you know me then you've heard this story and you know exactly who I'm talking about... my point, though, is that the company this happened at is irrelevant. The real issue is the problem itself, and how it was detected. The example shows how a human being using a black-box scanning tool was able to find a logic flaw in an application that would have been catastrophic if exploited in production... a combination of technology and people, with a sprinkle of process, saved the day - sort of.

Imagine the following scene... A web-based application, heavily reliant on database connectivity, is built in J2EE and about to be released to production. The security team, as typically happens, has to "certify" the application code as 'secure' before it goes live. Keep in mind that the application was just load-tested with 10,000 concurrent users and breezed through that testing.
Security now gets the application and runs a black-box scanner (it doesn't matter which vendor) against it. The application "halts" after just 10 requests. By halts I mean it becomes non-responsive - effectively dead. Naturally the test is repeated two or three times, restarting the app server each time, and the exact same result comes back: 10 requests sent, the app stops responding... effectively it's dead.

At this point the project manager starts to panic, and the blame-game ensues. Clearly it's the security team's "fault" for breaking the application. Once this idiotic argument gets slapped down, the rationalization begins - "well, it worked perfectly with 10,000 users, what are you doing differently (besides launching attacks)?" A few days go by, the tests are repeated several times, but the result is always the same. The app server is restarted and it sings perfectly until the black-box scanner sends 10 bad-data requests, at which point it falls over.

After a week of this, the security analyst asks to step in and analyze the logs to try to help. By now the project is behind schedule for release and people are starting to get very upset. A look through the logs from around the time the scan was run turns up some very strange "connection pooling" errors from the app server. Basically, the connection pool is being exhausted, and the app just stops working, waiting... indefinitely.

The moral of the story: after a week of the developers trying to figure it out, it took the security analyst five minutes of looking at it to isolate the validator function and laugh, because the solution was painfully obvious.

Here's the pseudo-code... enjoy figuring this one out - feel free to post replies...

MainDataValidatorFunction ()
{
    open DBConn
    if dataIsValid
        process request
        close DBConn
        return (1)
    else
        return (0)
}
OOPS?
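
To make the pseudo-code a little more concrete, here's roughly what that pattern looks like in Java against a JDBC connection pool. Every class and method name below is illustrative - this is a sketch of the logic above, not the client's actual code.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class MainDataValidator {

    private final DataSource pool; // container-managed JDBC connection pool

    public MainDataValidator(DataSource pool) {
        this.pool = pool;
    }

    // Mirrors the pseudo-code: open DBConn, validate, process, close, return.
    public int validateAndProcess(Request request) throws SQLException {
        Connection conn = pool.getConnection();   // open DBConn
        if (isValid(request)) {
            process(request, conn);               // process request
            conn.close();                         // close DBConn (hand it back to the pool)
            return 1;
        } else {
            return 0;
        }
    }

    private boolean isValid(Request request) { return request != null; } // stand-in validation rules

    private void process(Request request, Connection conn) { /* business logic */ }

    // Stand-in for whatever request object the application actually uses.
    public static class Request { }
}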

2 comments:

Stephan Wehner said...

I guess for the not dataIsValid case the close DBConn call is missing.

Where I currently work there is a healthy no-blame culture: run into a problem, find the cause, solve it. Really quite nice. Of course, sometimes "find cause" is tricky.

In general, the "painfully obvious" is often hidden in a mass of code.

Eugene said...

This is why one ought to use try {} catch {} finally{}!
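
For anyone who wants it spelled out, here's roughly what the fix looks like using the try/finally idiom Eugene mentions, applied to the validator sketched above - again, illustrative names only, not the actual application code.

    public int validateAndProcess(Request request) throws SQLException {
        Connection conn = pool.getConnection();   // open DBConn
        try {
            if (isValid(request)) {
                process(request, conn);           // process request
                return 1;
            }
            return 0;
        } finally {
            conn.close();   // close DBConn on every path, valid data or not
        }
    }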
