Wednesday, January 29, 2014

Where risk calculations fall apart [again]

I suspect this may upset some people who believe these kinds of calculations are possible, or who are even performing them today - to those folks I apologize in advance, but this is merely my opinion.

This morning, one of the few people who actually understand application/software security, Jeremiah Grossman of WhiteHat Security, dropped an interesting tweet. Lots of intelligent people replied, and what looked like a worthwhile debate began to unfold.

Then Dan Cornell said something interesting, which got me thinking.

"@jeremiahg What is lacking is likelihood-of-exploit and cost-of-exploit data. I suppose stuff like Verizon #DBIR is helping w/ that"
What struck me is one phrase in his tweet: "likelihood-of-exploit." Now, I'm as desperate for a standardized risk equation as the next guy in this industry, but "likelihood-of-exploit" is a red herring if I ever saw one.

Why, you ask? Because this type of calculation inherently fails to account for the fact that adversaries are not static.

Jeremiah further points out that, through profiling, we know certain attacker classes act in certain ways (call them TTPs) and that, as he says [paraphrasing], "dumb bots won't exploit business logic flaws." While that may be true right now, we know for certain that human adversaries adapt to defenses. Once we - the defenders - settle on a risk formula that assumes attacker class X operates in mode Y and is unlikely to exploit something like business logic or other complex flaws, attackers will notice that blind spot and adapt to attack exactly that class of vulnerabilities.

This is the cat-and-mouse game we play with our adversaries. Of course, new tools are constantly being developed that help even the most basic attacker leverage sophisticated attacks against our defenses. Furthermore, it's naive to think that any given piece of infrastructure will only ever be attacked by certain attackers/adversaries - at least in my humble opinion.

So.........

During the conversation stream Dan flipped the question back to me: since I was basically saying that formula component wouldn't work, did I have an alternative? Sadly, I don't. I would simply recommend that we take "likelihood-of-exploit" out of the equation.

When building such a risk equation, I believe having good figures for asset value and fix cost lets you balance the two against each other. If the fix cost is higher than the asset value, you know right away that justifying the fix is going to be very difficult.
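
To make that concrete, here is a minimal triage sketch in Python - my own illustration, not a formula from the thread - assuming the only two figures we trust are asset value and fix cost. The function name and sample numbers are hypothetical.

```python
# Minimal fix/no-fix triage sketch: compare what the fix costs against what
# the asset is worth. Names and figures are illustrative assumptions only.

def worth_fixing(asset_value, fix_cost):
    """Flag a fix as hard to justify when it costs more than the asset it protects."""
    if fix_cost > asset_value:
        return "hard to justify: fix costs more than the asset is worth"
    return "justifiable: fix costs less than the asset it protects"

print(worth_fixing(asset_value=50_000, fix_cost=75_000))  # hard to justify
print(worth_fixing(asset_value=50_000, fix_cost=5_000))   # justifiable
```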

Here's what I believe - I believe in simplicity. By that logic, the more variables you add to the equation, the more complex it becomes and the further you drift from the answer you're trying to reach. Things like "opportunity cost", as Dan Cornell points out, are tricky at best and black magic on a typical Monday; with anything else we add, we're just taking a best guess if we're honest with ourselves.
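
To see how quickly those guesses compound, here's a toy Monte Carlo sketch - entirely my own illustration, nothing from the thread - assuming every factor in the equation is a best guess that can be off by up to 50% in either direction, and the "risk" is their product.

```python
import math
import random

# Toy illustration (my assumption): each input to the risk equation is a
# best guess that may be off by +/-50% of its true value, normalized to 1.0.

def guessed_factor():
    """One estimated input, off by up to 50% either way from a true value of 1.0."""
    return random.uniform(0.5, 1.5)

def error_spread(num_factors, trials=100_000):
    """Worst-case drift of the product of guessed factors from the truth (1.0)."""
    products = [
        math.prod(guessed_factor() for _ in range(num_factors))
        for _ in range(trials)
    ]
    return min(products), max(products)

for n in (1, 3, 6):
    lo, hi = error_spread(n)
    print(f"{n} guessed factor(s): estimate ranges from {lo:.2f}x to {hi:.2f}x of truth")
```

With one guessed factor the estimate stays within about half to 1.5 times the truth; with six, it can land anywhere from a tiny fraction to more than ten times the real value - which is the point about added variables pulling you further from the answer.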

One last thing ... cost-of-exploit. What I can tell you about cost-of-exploit is that, at one point in time, we know (roughly) the dollar, ruble, or euro value that goes into that column. However, that figure captures only a very small sliver of a point in time. The cost of exploit, I believe, drops dramatically over time, and we typically don't know the point of origin. What is the cost of exploiting a specific SQL injection vulnerability in a web app? Probably pretty low, given how many SQLi tools have been released over the years, and how many automated scanners and how-to-for-idiots guides there are. The cost of exploiting a business logic flaw could potentially be very high - but how in the world does anyone, with a straight face, attempt to convince someone they know that value?
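
For illustration only, here's a toy decay model of that point - my assumption, not an established formula: once public tooling circulates, exploit cost falls off fast, but the starting cost, the half-life, and the origin point are exactly the unknowns nobody can honestly supply.

```python
# Toy model (my assumption, not an industry formula): exploit cost halves
# every half_life_days once tooling circulates. All numbers are made up.

def exploit_cost(initial_cost, half_life_days, days_since_tooling):
    """Estimated exploit cost after tooling has circulated for the given days."""
    return initial_cost * 0.5 ** (days_since_tooling / half_life_days)

# SQLi: years of free tools and scanners drive the cost toward zero...
print(f"${exploit_cost(10_000, half_life_days=30, days_since_tooling=365):,.2f}")
# ...while for a business logic flaw there is no generic tooling, so nobody
# can pick initial_cost or half_life_days with a straight face.
```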

In the end, I recommend simplicity over complexity when it comes to calculating risk and answering fix/no-fix questions. How do you determine, quantitatively via some risk equation, whether a specific vulnerability should be fixed now, later, or not at all? I wish I had a concrete, sure-fire answer that was data-backed and mathematically sound. I don't, and I don't think such a unicorn exists. If it does, please leave a comment, and I will be thrilled to be wrong.

In the meantime, I defer to qualitative analysis to help me answer that question... until something better comes along.

2 comments:

Alex Hutton said...

"Why you ask? I think this type of calculation inherently fails to understand that adversaries are not static."

Nor are they completely random and chaotic. No, like every other human population, they behave according to patterns. Thus the DBIR reference.

The whole "intelligent attacker" meme is dead. If it were real, then the DBIR and DLDB data sets would show little patterning rather than significant patterns, Mandiant appliances would be useless, and cyber kill chains would be fantasy wishes.

There are many more areas in this post that align with neither current theory nor current practice. I would encourage you to have broader discussions about the subject at hand.


dre said...

Yep! You're wrong! Don't ask me how I know, but I know -- and I know that you know that I know.

As with all things, though, risk models (like all models) don't represent reality perfectly. When I am able to provide likelihood-of-exploit and similar calculations, you can assume that the findings produced will have an R-squared of 0.95. Sufficient to provide properly calculated cyber insurance, which is the goal. Yes, there will be black swans (e.g., stolen RSA seed tokens), but a risk model can be adjusted for those one-off events. Lastly, this isn't about prediction, but I don't want to say too much and ruin the surprise.
