This morning, one of the few people who genuinely understand application and software security, Jeremiah Grossman of White Hat, dropped an interesting tweet. Plenty of intelligent people replied, and what looked like an interesting debate began to unfold.
Then Dan Cornell said something interesting, which got me thinking.
"@jeremiahg What is lacking is likelihood-of-exploit and cost-of-exploit data. I suppose stuff like Verizon #DBIR is helping w/ that"

What struck me is the "likelihood-of-exploit" portion of his tweet. Now, I feel like the industry is as desperate for a standardized equation as the next guy, but "likelihood-of-exploit" is a red herring if I ever saw one.
Why, you ask? Because this type of calculation inherently assumes that adversaries are static, and they are not.
Jeremiah further points out that, through profiling, we know certain attacker classes act in certain ways (call them TTPs) and that, as he says [paraphrasing], "dumb bots won't exploit business logic flaws". While that may be true right now, we know for certain that human adversaries adapt to defenses. Once we, the defenders, settle on a risk formula that assumes attacker class X operates in mode Y and is unlikely to exploit something like business logic or other complex flaws, attackers will notice our weakness and adapt to target exactly that class of vulnerabilities.
This is the cat-and-mouse game we play with our adversaries. New tools are developed constantly that let even the most basic attacker mount sophisticated attacks against our defenses. Furthermore, it's naive to think that any given piece of infrastructure will only ever be attacked by certain classes of attackers or adversaries, at least in my humble opinion.
During the conversation stream, Dan flipped the question back to me: since I was arguing that this component of the formula wouldn't work, did I have an alternative? Sadly, I don't. I would simply recommend that we take "likelihood-of-exploit" out of the equation.
When building such a risk equation, I believe having good figures for asset value and fix cost lets you balance the two against each other. If the fix cost is higher than the asset value, you know right off that the fix is going to be a very hard sell.
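As a minimal sketch of that balancing act (every figure and threshold below is hypothetical, purely for illustration), the asset-value-versus-fix-cost comparison might look like:

```python
# Toy fix/no-fix signal from the two figures we can actually estimate:
# what the asset is worth and what the fix costs. The 10% "cheap fix"
# threshold is an arbitrary illustrative assumption, not a standard.

def fix_recommendation(asset_value: float, fix_cost: float) -> str:
    """Return a rough fix/no-fix signal by comparing two knowable figures."""
    if fix_cost > asset_value:
        # Spending more than the asset is worth is a very hard sell.
        return "no-fix (cost exceeds asset value)"
    if fix_cost < 0.1 * asset_value:
        return "fix now (cheap relative to what is at stake)"
    return "fix later (weigh against other priorities)"

print(fix_recommendation(asset_value=100_000, fix_cost=150_000))
print(fix_recommendation(asset_value=100_000, fix_cost=5_000))
```

Note what is deliberately absent: no likelihood term, just the two quantities a business can actually put numbers on.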
Here's what I believe: I believe in simplicity. By that logic, the more variables you add to the equation, the more complex it becomes and the further you drift from the answer you're trying to reach. Things like "opportunity cost", as Dan Cornell points out, are tricky at best and black magic on a typical Monday, and for anything else we add, we're just taking a best guess if we're honest with ourselves.
One last thing ... cost-of-exploit. What I can tell you about cost-of-exploit is that, at one point in time, we know (roughly) the dollar, ruble, or euro value that goes into that column. But that is a snapshot of a single point in time. The cost of exploit, I believe, drops dramatically over time, yet we typically don't know the point of origin. What is the cost of exploiting a specific SQL injection vulnerability in a web app? Probably pretty low, given how many SQLi tools have been released over the years, and how many automated scanners and how-to-for-idiots guides there are. The cost of exploiting a business logic flaw could potentially be very high, but how in the world does anyone, with a straight face, attempt to convince someone they know that value?
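To make the "drops dramatically over time" point concrete, here is a toy decay model. The exponential shape, the starting cost, and the half-life are all assumptions invented for illustration; as the paragraph above says, in practice we rarely even know the point of origin.

```python
import math

# Hypothetical model: exploit cost halves every `half_life_days` as
# tooling spreads. The shape and all parameters are illustrative
# assumptions, not measured data.

def exploit_cost(initial_cost: float, half_life_days: float,
                 days_elapsed: float) -> float:
    """Cost decays exponentially from when the technique first appears."""
    return initial_cost * math.exp(-math.log(2) * days_elapsed / half_life_days)

# A hand-rolled SQLi exploit might start expensive, then collapse in
# price once point-and-click tools ship.
for days in (0, 90, 180, 360):
    print(days, round(exploit_cost(10_000, 90, days), 2))
```

The snapshot you record on day zero tells you almost nothing about the cost an attacker faces a year later, which is the core problem with putting cost-of-exploit into a static equation.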
In the end, I recommend simplicity over complexity when it comes to calculating risk and answering fix/no-fix questions. How do you determine, quantitatively, via some risk equation, that a specific vulnerability should be fixed now, later, or not at all? I wish I had a concrete, sure-fire answer that was data-backed and mathematically sound. I don't, and I don't think such a unicorn exists. If it does, please leave a comment, and I will be thrilled to be wrong.
In the meantime, I defer to qualitative analysis to help me answer that question... until something better comes along.