Saturday, April 17, 2010

The Validation Fallacy

So lately I've been reading, writing, and thinking a lot about the security of web applications.  One of the themes that has surfaced in almost every conversation is the idea of validation.  What do I mean?  I have been hearing from security managers and application testers alike that they measure their success (or the success of their web application security programs) in different ways - but they all center on one thing: vulnerabilities.

Interesting that, after all these years of preaching by not only me but many others in the Information Security field, we're still measuring by the number of vulnerabilities.  Forget the term vulnerability ...and I mean that in all seriousness.  Just use "security defect" - it's a much more powerful term.  Besides ... why do you even care how many vulnerabilities (OK, security defects) you find?

The validation fallacy is the belief that the value (or success) of a security program lies in the number of security defects you point out or uncover.  So if the value of your program isn't in the number of bugs, how do you judge success or failure?

There are a few different metrics I can suggest that will give you significantly more mileage.

First - and the one I currently use most - is Defects over Cycles (DoC).  The DoC metric counts the number of defects over the span of several cycles of development of the same application.  If the defect count isn't decreasing over the life of an application then, as we like to say, you're doing something wrong.  The first time you run a security program you're going to come up with a mountain of defects.  More importantly, you're not going to fix all of them the first time around.  Success should be measured over time, as the defects start to drop from one cycle to the next.
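To make that concrete, here's a minimal sketch of tracking DoC.  It assumes you do nothing fancier than record the defect count at the end of each development cycle; the cycle names and counts are made-up examples, not real data:

# A minimal sketch of the Defects over Cycles (DoC) idea: given the
# defect counts recorded at the end of each development cycle of the
# same application, check whether the trend is heading downward.
# The cycle names and counts are illustrative assumptions.

defects_per_cycle = {
    "cycle-1": 42,   # first run: the "mountain of defects"
    "cycle-2": 27,
    "cycle-3": 19,
    "cycle-4": 11,
}

counts = list(defects_per_cycle.values())

# Success here is a downward trend over time, not any single absolute number.
trend_is_down = all(later <= earlier
                    for earlier, later in zip(counts, counts[1:]))

print("DoC per cycle:", counts)
print("Trending down?", "yes" if trend_is_down else "no - you're doing something wrong")

The point of the sketch is that the comparison is release-over-release on the same application - a single cycle's count tells you almost nothing on its own.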

A sub-metric here, which is critical, is the Recurring Defect Rate (RDR).  The RDR is the measure of the defects that recur from one cycle (or release) to the next - defects that are identified, closed, and then re-appear on the next release.  I would consider this one of the primary measures of success for a security program.  The reason I think the RDR is so critical is that it reflects much more than your ability to find bugs.  Overall, the goal of any good security program is to not only decrease risk but to also drive education and the adoption of more secure practices throughout the enterprise.  If your developers continue to make the same mistakes over and over ...again, "you're doing something wrong".
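Again as a rough sketch - and assuming each defect carries a stable identifier (say, the vulnerability class plus its location) so the same issue can be recognized when it comes back - the RDR can be computed as the fraction of previously closed defects that show up again in the next release.  The identifiers and the exact formula here are illustrative assumptions, not a standard definition:

# A minimal sketch of the Recurring Defect Rate (RDR), assuming each
# defect has a stable ID so a re-appearing issue can be matched to the
# one that was closed. IDs and the rate formula are assumptions.

closed_last_release = {"sqli-login", "xss-search", "csrf-profile"}
found_this_release = {"xss-search", "sqli-login", "idor-orders"}

# Defects that were closed last release and re-appeared this release.
recurring = closed_last_release & found_this_release

# Fraction of previously closed defects that came back.
rdr = len(recurring) / len(closed_last_release)

print(f"Recurring defects: {sorted(recurring)}")
print(f"RDR: {rdr:.0%}")  # here 67% - the same mistakes keep coming back

A high RDR with a falling DoC is still a failing grade: the total may be shrinking, but the education piece of the program clearly isn't landing.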

Validation of your security program shouldn't come from the number of vulnerabilities you can put on a report.  Your validation should come from the pervasiveness of the secure mindset throughout the company, from developer to program manager to senior management.

That is true validation.
