Wednesday, November 26, 2014

The Absolute Worst Case - 2 Examples of Security's Black Swans

You know that saying "It just got real"? If you're an employee of Sony Pictures - it just got real. In a very, very bad way. There are reports that the entire Sony Pictures infrastructure is down - computers, network, VPN and all - and that there is no estimated time to restore in sight.

There are reports that highly sensitive information is being held for "ransom", if you can call it that, by the attackers. There is even some reporting that someone representing the attackers has contacted the tech media and disclosed that they were able to infiltrate so completely because they had insider help. In other words, the barbarians were literally inside the castle walls.


If you work in enterprise security I don't need to explain to you how bad this is, or how thoroughly this type of compromise breaks every single contingency plan most companies (outside the government and defense space) have in place. This compromise, an "IT matter" as Sony Pictures' PR calls it, is epic levels of bad.

Definition of Black Swan event, for clarity:
"The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight."
Source: Wikipedia, https://en.wikipedia.org/wiki/Black_swan_theory

You can read some fantastic reporting on the issue here:

Although I truly do not envy those poor souls in Enterprise Security over at Sony Pictures, it's the broader implications of this kind of attack that seriously concern me. This isn't the first time we've seen this type of attack - where the attackers (allegedly) had complete and total access to the infrastructure of the enterprise. It won't be the last time. So can we learn something here, and take it with us going forward? I think we can, if we're willing to pay attention.

I'd like to pose a few hypothetical scenarios here, given the lesson we're learning again from this unfortunate case - and look at what can or should be done to avoid being, to put it mildly, thoroughly screwed.

Case- Insider Threat / Rogue Insider
Insider threats are the stuff of myth in much of enterprise security. We hear a lot about how dangerous they can be, but it's rare that someone actually comes forward with a first-hand account. If this incident truly involves an insider threat (a rogue employee aiding an outside attacker), it will be a case used for years to illustrate the point.

Insiders hold a special place in the nightmares of enterprise security professionals, mostly because most of our defenses are positioned at our borders - so when someone who has access and is a trusted insider goes rogue, we have very little recourse. This is the continuing problem we face as defenders - the M&M paradigm: hard outer shell, soft chewy middle.

A lifetime ago, when I was leading enterprise security engineering, our team had discussions about how we were going to protect ourselves against this type of threat. We knew we had malicious insiders in many places with deep access and deeper pockets - so rooting them out wasn't going to work. If you can't keep them out, then what's the next line of prevention? Maybe it's a little bit of 1990s technology: segmentation of network assets, separation of duties, and tight identity and access management controls. Beyond that, we profile people's behaviors and look to build operational baselines - I know this is much easier said than done, no need to repeat it here.
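To make the separation-of-duties piece a bit more concrete, here's a minimal sketch (in Python) of the kind of check an access-request workflow might run. All of the names, teams, and the ChangeRequest structure are hypothetical - this illustrates the control, it isn't anyone's actual implementation.

    # Minimal sketch of a separation-of-duties check: the person who requests a
    # privileged change cannot also be the one who approves it, and approval
    # should cross a team boundary. Names, teams, and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ChangeRequest:
        requester: str
        approver: str
        requester_team: str
        approver_team: str
        target_asset: str

    def violates_separation_of_duties(req: ChangeRequest) -> bool:
        """Return True if the request breaks basic separation-of-duties rules."""
        if req.requester == req.approver:
            return True   # self-approval is never allowed
        if req.requester_team == req.approver_team:
            return True   # approval must come from outside the requesting team
        return False

    if __name__ == "__main__":
        req = ChangeRequest("alice", "alice", "infra-eng", "infra-eng", "hr-file-share")
        print(violates_separation_of_duties(req))  # True - flag for review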

So what happens when prevention fails, often catastrophically and publicly? We turn to detection and response. Failure to prevent isn't total failure - it's a failure at one step in the kill chain, forcing us to move to the next step down. Detection, swift and silent, is the next big key. Again, if you don't know what normal looks like, you will never know what an abnormal deviation is - I hope that's intuitive. I've never known an attacker who gets caught by an IPS signature - mainly because there is no such thing. So again, what does detection look like? I think it comes down to detecting deviations (even subtle ones) in the behavioral patterns of humans and/or systems. I don't think you need to spend a million dollars to do it. Maybe it's enough to use Marcus Ranum's "never-seen-before" idea: take key assets and build access tables for who accesses them, how frequently, and when. Then look for net-new access (even if it's legal/allowed) and investigate. Sure, you may technically have access to that HR share, but you really shouldn't be accessing it, and under normal conditions you wouldn't.
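Here's a rough sketch of what that "never-seen-before" check could look like in practice. It assumes you already have access events as simple records (user, asset, time) - the log format, file name, and asset names are made up for illustration, and a real deployment would sit on top of whatever your log pipeline actually produces.

    # Rough sketch of the "never-seen-before" idea applied to access logs:
    # keep a baseline of (user, asset) pairs already observed, and flag any
    # net-new combination for investigation - even if the access was allowed.
    import json
    from pathlib import Path

    BASELINE_FILE = Path("access_baseline.json")

    def load_baseline() -> set[tuple[str, str]]:
        if BASELINE_FILE.exists():
            return {tuple(pair) for pair in json.loads(BASELINE_FILE.read_text())}
        return set()

    def save_baseline(baseline: set[tuple[str, str]]) -> None:
        BASELINE_FILE.write_text(json.dumps(sorted(baseline)))

    def check_access_events(events: list[dict]) -> list[dict]:
        """Return events where a user touched an asset they've never touched before."""
        baseline = load_baseline()
        never_seen = []
        for event in events:
            key = (event["user"], event["asset"])
            if key not in baseline:
                never_seen.append(event)   # investigate, even if access was "legal"
                baseline.add(key)          # seen once; don't re-alert forever
        save_baseline(baseline)
        return never_seen

    if __name__ == "__main__":
        todays_events = [
            {"user": "jdoe", "asset": "hr-share", "time": "2014-11-26T09:14:00"},
        ]
        for alert in check_access_events(todays_events):
            print("never-seen-before access:", alert)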

But what if the things you're stealing as an insider are the things you work with and have access to every day? Then we focus on exfiltration (further down the kill chain). How does data leave your environment? Can you prevent people from taking data out of your network, or at least catch them when they try? I'm fairly confident the answer is no as a general proposition - but if you can identify and tag, at some meta-level, the things that are critical, really critical, to your organization, maybe you can catch them when they're trying to leave the infrastructure without permission. I don't know the answer here, mainly because one answer isn't going to solve all of the problems out there - it's a "well, it depends" answer based on your company's profile.
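One crude way to "tag" critical data, sketched below, is to fingerprint the documents the business says it cannot lose and check outbound payloads against those fingerprints at the egress point. This is purely illustrative - the file names are invented, and exact-match hashing only catches whole-file copies, which is why real DLP tooling does far fuzzier matching.

    # Hypothetical sketch: the "tag" is a SHA-256 fingerprint of each file
    # marked critical; an egress monitor checks outbound payloads against the
    # set. Paths and the monitoring hook are illustrative, not a real DLP API.
    import hashlib

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Built ahead of time from the documents the business cannot afford to lose.
    CRITICAL_FINGERPRINTS = {
        fingerprint(b"...contents of unreleased-film-budget.xlsx..."),
        fingerprint(b"...contents of executive-compensation.pdf..."),
    }

    def outbound_payload_is_critical(payload: bytes) -> bool:
        """Flag an outbound transfer if it matches a known critical document."""
        return fingerprint(payload) in CRITICAL_FINGERPRINTS

    if __name__ == "__main__":
        leaving_the_network = b"...contents of unreleased-film-budget.xlsx..."
        if outbound_payload_is_critical(leaving_the_network):
            print("ALERT: critical document leaving the network without permission")

The obvious weakness is that changing a single byte defeats an exact hash - which is exactly why this is a sketch of the idea, not a recommendation.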

I can tell you this, though: insider threats are a model case for kill chain analysis.

Recovering from an insider is a little more difficult, particularly when you don't know who they are. Insiders can burrow deep and stay hidden for a long time - sometimes going completely undiscovered. This means that if you're fairly sure you've been compromised by a malicious insider but can't identify the attacker, you're in for a rough time trying to figure out what state to restore to. Do you restore your network/infrastructure to 2 days ago? 2 weeks ago? 2 months ago? The answer is uncertain until you find and profile the attacker - and once you do, you're likely to discover that you can't trust much of your infrastructure telemetry if the attacker was well hidden. Covering their tracks is something "advanced" adversaries are good at.

The things to think about here are two-fold. First, you need to identify the attack and attribute it to someone, or some group - post-haste, at yesterday speeds. You need to know who they are so you can start tracing their steps and figure out what they did, when they did it, and the extent of the potential damage. If you can't figure this out quickly, getting the infrastructure back to a working state may not do you any good, because it could still be compromised in that state - or it could leave you open to another run at compromise further down the line, when you believe you've removed the threat.

Second, you need to restore services and bring back the business. Today many companies simply cease to exist without IT - if you want to degrade or destroy a company, take away its ability to network and communicate. The battle of service restoration versus security analysis will be bitter, and as the CISO you'll probably lose. Restore services, then figure out what's going on - maybe in parallel, maybe not - but that first step is almost universal, with the notable exception of a few industry segments where being secure is as critical as being online.


Case- Compromised Core Infrastructure
Nothing says you're about to have a bad day like the source of a major attack on your enterprise coming through your endpoint management infrastructure. This starts to feel a lot like an insider threat - although it doesn't necessarily have to be. I can't even imagine the horror of finding out that your endpoint patching and software delivery platform has been re-purposed to deliver malware to all of your endpoints and that it has been the focal point of your adversary's operations. If you can't trust your core infrastructure - what can you trust?

Perhaps trust is the wrong way to look at it, as my friend Steve Ragan pointed out. So what then?

Within the enterprise framework there has to be some piece of infrastructure that is trusted. Maybe it's a system that stays physically offline (powered off?) until it's critically needed, with alternate credentials and operational parameters. Maybe it's a recovery platform you have a known-good hash of, so you can quickly validate you're working with the genuine article. Maybe it's something else - but you have to have something to trust.
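For the known-good-hash idea, the check itself is simple; the hard part is keeping the reference hash somewhere the attacker can't touch (printed out, stored offline). A minimal sketch, with a placeholder file name and digest:

    # Validate a recovery image against a known-good hash recorded (and stored
    # offline) when the image was built. File name and digest are placeholders.
    import hashlib

    KNOWN_GOOD_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    def sha256_of_file(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        actual = sha256_of_file("recovery-platform.iso")
        if actual == KNOWN_GOOD_SHA256:
            print("Recovery image matches the known-good hash - safe to proceed.")
        else:
            print("MISMATCH - do not trust this image:", actual)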

If your core infrastructure is compromised, I think you're looking at one of two options. Option A is restoring your systems to a questionable state (not obviously compromised, and usable) and working backwards to find the intruder. Option B is shutting everything down, re-deploying, and starting from scratch. Option B may very well sound like the more security-sound option - until you factor in the actual data. Nothing says your data can't be compromised too... it's not just about Windows credentials. Maybe some of your company's top-secret documents are PDFs. Maybe the attacker was clever and trojaned all of your PDFs such that as soon as one is opened, the compromise starts all over again.

I seriously doubt that would be detected, because it's likely custom-written code and will evade all but the most sophisticated (dare I say "next-gen") detection tools.

My suggestion here? Start with the innermost critical components of your infrastructure, audit and reset credentials, and work your way out in concentric rings until you get to components you can actually get by without. This exercise should keep your operations teams busy for a while, and maybe you can even get a parallel incident response investigation going in the meantime. On the plus side, this gives you a tiny window within which to start building things better from the ashes. Or maybe not, since you'll be going at light speed plus 1 mph. This is, however, the only advice that makes sense. It's also the only advice I can give you that I have actually tried myself - and as painful as it sounds, believe me when I say that in real life it's significantly worse.
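As a toy illustration of working outward in concentric rings, here's one way to order the credential-reset work. The ring assignments below are hypothetical - every environment will slice this differently - but the point is that the plan starts at the core and only reaches the expendable stuff last.

    # Toy illustration of the "concentric rings" approach: order assets by how
    # close they sit to the core, then audit and reset credentials ring by
    # ring, innermost first. Ring assignments are hypothetical.
    from collections import defaultdict

    ASSET_RINGS = {
        "domain-controllers": 0,           # innermost: identity itself
        "pki-certificate-authority": 0,
        "endpoint-management": 1,          # patching / software delivery
        "core-network-devices": 1,
        "email-and-collaboration": 2,
        "file-shares": 2,
        "printers": 3,                     # outermost: things you can live without
    }

    def reset_plan(assets: dict[str, int]) -> list[tuple[int, list[str]]]:
        """Group assets by ring, innermost first, to drive the credential resets."""
        rings = defaultdict(list)
        for asset, ring in assets.items():
            rings[ring].append(asset)
        return [(ring, sorted(rings[ring])) for ring in sorted(rings)]

    if __name__ == "__main__":
        for ring, assets in reset_plan(ASSET_RINGS):
            print(f"Ring {ring}: audit and reset credentials for {', '.join(assets)}")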


Before this post gets too long (or have we long since crossed that bridge?) I think it's safe to say that very few of you reading this are operationally prepared to handle this type of incident - where you've either got a malicious insider who has gone undetected and wreaked utter chaos, or core infrastructure compromised by an outsider, or both if you've won the crap lottery. That's a problem, because this is our black swan. This is our version of hijacked planes flying into buildings. We know it's a possibility, but none of us have the resources to prepare, and let's face it - we have bigger problems. Except that these incidents are real. The Black Swan is real. It happens. Now what?

Does this adjust your world view, or the risk model for your organization, somehow? If so, in what way? Will you start taking the insider threat more seriously as a result? Why or why not... and how? By my unscientific calculation, maybe 0.05% of companies out there have the capital and resources to pull off recovering from one of these Black Swan events with anything even resembling success. The rest of us in the enterprise? What do we do when the worst case happens?

I'm curious how you see things. Leave a comment here, or take the conversation to Twitter with the hashtag #DtSR - let's talk about it. I think we can learn something from the horrendous situation Sony Pictures is living through right now - let's not waste a teachable moment for everyone, collectively, to get even a tiny bit better.
