Monday, December 15, 2014

When the Press Aids the Enemy

Let's start with this - freedom of the press is a critical part of any free society and, more importantly, of a democratically governed one.

But that being said, I can't help but think there are times when the actions of the media aid the enemy. This is a touchy subject so I'll keep it concise and just make a few points that stick in my mind.

First, it's pretty hard to dispute that the media chases ever-more sensational headlines, truth be damned, to get clicks and drive traffic to their publication. Whether it's digital or actual ink-on-paper, sensationalism sells - there's no arguing with that.

What troubles me is that, as in the war on terrorism, the enemy succeeds in its mission when the media creates hysteria and fear. This much should be clear. The media tend to feed into this pretty regularly, and we see it in some of the most sensational headlines from stories that should be told in fact, not fantasy.

Saturday, December 13, 2014

Sony Pictures - Lessons From a Real Worst-Case Scenario

There is a lot of junk floating around on the Internet and in the media regarding the Sony Pictures breach. Who did it? What were the motives? These are all being violently discussed in the Twitter-sphere and elsewhere, and if you happen to read the articles and blogs being churned out by the media your head is probably spinning right now.
While I don't think we (the public) generally know enough to be able to talk about the breach with any certainty yet - and perhaps we never will - there is a critical point here which I think is being missed.

What is the lesson the public should take away from the breach, and subsequent consequences?

Tuesday, December 2, 2014

Is Bigger Budget an Adequate Measure of Security Efficacy?

Bigger budgets - the envy of security professionals and the scourge of CISOs the world over. While we'd all like bigger budgets to make security better within our organizations, getting more money to spend isn't necessarily a harbinger of goodness to come.

Monday, December 1, 2014

When Your Marquee Client Gets Hacked

There are people who will tell you that all PR is good PR. In my years in security I have seen both sides of that debate prove true. Lately though, particularly for security companies selling into the enterprise, this may be a double-edged sword that cuts deep.

Look at any reputable (and some not-so-much) security vendor's website and you'll notice there's always a page that gives you all the different logos of the companies who use their products. Most times the vendor pays dearly for that either through deep discounts, or some other concessions just to be able to use the reference. Generally this works to the vendor's advantage because seeing Vendor X used by your peers means that perhaps it's a good idea to give them a look.

Except, maybe, when those peers are getting hammered for being a data breach victim.

Wednesday, November 26, 2014

The Absolute Worst Case - 2 Examples of Security's Black Swans

You know that saying "It just got real"? If you're an employee of Sony Pictures - it just got real. In a very, very bad way. There are reports that the entire Sony Pictures infrastructure is down - computers, network, VPN and all - and that there's no estimated time to restore (ETR) in sight.

There are reports that highly sensitive information is being held for "ransom", if you can call it that, by the attackers. There is even some reporting that someone representing the attackers has contacted the tech media and disclosed that the way they were able to infiltrate so completely was through insider help. In other words, the barbarians were literally inside the castle walls.

Wednesday, November 5, 2014

SIEM 3.0 - Continuing to Deliver on Failed Promises

SIEM - Security Information and Event Management - has been a product category for many, many years now, and virtually every organization out there has bought into the promise of what SIEM will bring. Since the term was coined in 2005, the security industry has largely struggled to deliver on all the promises the product family made.

Friday, October 31, 2014

Having Fun with Password Self-Reset Mechanisms

You know what makes me crazy? Security people who don't understand how crappy attempts to push security policy actually drive security (in the real world) lower. Sometimes, and this makes it a little bit less bad, it's not security people that are responsible but well-meaning developers, project managers, or others who simply don't understand.

The quintessential example of this phenomenon is the password self-service reset functionality built into many websites. It's almost 2015, and the other day I was forced to register for a website - I can't really tell you why they needed me to set up a username and password - and I couldn't do what I needed to without an unfortunate string of events that all but guaranteed I would end up upset.
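For contrast, here's a minimal sketch of a less harmful self-service reset flow - a single-use, time-limited random token delivered out of band, instead of guessable "security questions". All names and the 15-minute TTL here are illustrative assumptions, not a prescription.

```python
import secrets
import time

TOKEN_TTL_SECONDS = 15 * 60  # illustrative: tokens expire after 15 minutes

_pending = {}  # token -> (username, issued_at); a real system would persist this

def issue_reset_token(username: str) -> str:
    # ~256 bits of entropy, URL-safe; emailed to the account owner,
    # never displayed on the page or derived from "mother's maiden name"
    token = secrets.token_urlsafe(32)
    _pending[token] = (username, time.time())
    return token

def redeem_reset_token(token: str):
    entry = _pending.pop(token, None)  # single-use: removed on first redemption
    if entry is None:
        return None                    # unknown or already-used token
    username, issued_at = entry
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return None                    # expired
    return username  # caller may now let this user set a new password
```

The point isn't the specific code - it's that the reset path must not be weaker than the password it resets.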

Tuesday, October 21, 2014

The Other Side of Breach Hysteria

In a world where everyone is trying to sell you something, security is certainly no exception. But separating the hype from the truth can easily turn into a full time job if you're not careful.

With all the recent retail data breaches, it would appear as though the sky is falling in large chunks right on top of us. Every big-name retailer, and even some of the smaller ones, are being hacked, and their precious card data is being whisked away to be sold to miscreants and criminals.

Now enter the sales and marketing pitches. After every breach it would seem our mailboxes fill up with subject lines such as-
"Learn how not to be the next , read how our latest gizmo will keep you secure!"
I don't know about you, but the snake-oil pitch is starting to get old. While it's clear that the average buyer is getting the message about data breaches and hackers - I believe there are two other aspects of this which aren't talked about enough.

Saturday, October 11, 2014

Security Lessons from Complex, Dynamic Environments

Security is hard.

Check that- security is relatively hard in static environments, but when you take on a dynamic company environment security becomes unpossible. I'm injecting a bit of humor here because you're going to need a chuckle before you read this.

Some of us in the security industry live in what's referred to as a static environment. Low rate of change (low entropy) means that you can implement a security control or measure and leave it there, knowing that it'll be just as effective today as tomorrow or the day after. Of course, this takes into account the rate at which effectiveness of security tools degrades, and understanding whether things were effective in the first place. It also means that you don't have to worry about things like a new system showing up on the network very often or a new route to the Internet. And when these do happen, you can be relatively sure something is wrong.

Early on in my career I worked for a technical recruiting firm. Computers were just a tool and companies having websites was a novelty. The ancient Novell NetWare 3.11 systems had not seen a reboot in literally half a decade but nothing was broken so everything just kept running and slowly accumulating inches of dust in the back room. When I worked there we modernized to NT 3.51 (don't laugh, I'm dating myself here) and built an IIS-based web page for external consumption. That place was a low entropy environment. We changed out server equipment never, and workstations every 5 years. If all of a sudden something new showed up in the 30 node network, I'd immediately suspect something was amiss. At the time, nothing that exciting ever happened.

Fast forward a few years and I'm working for a financial start-up. It's the early 2000's and this company is the polar opposite of a static company. We have at least 1 new server coming online a day, and typically 5-10 new IP addresses showing up that no one can identify. We get by because we have one thing going for us: the on-ramp to the Internet. We have a single T1 which connects us to the rest of the world. We drop in a firewall and an IDS (an early Snort version, I think, plus a SonicWall firewall). When that changed and our employees started to go mobile - and thus needed VPN access - things got a little hairy.

Fast forward another few years and I'm working at one of the world's largest companies, on arguably one of the most complex networks mankind has ever seen. Forget trying to understand or know everything - we're struggling to keep track of the few things we DO know. Heck, we spent 4 weeks NMap'ing our own IP subnets (and accidentally causing a minor crisis, oops) to find all the NT4 systems when support finally - and seriously, for real this time - ran out.
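A sweep like that usually ends up scripted. Here's a rough sketch of parsing nmap's "greppable" (-oG) output into a host inventory; the sample lines below are invented for illustration, and real -oG output can carry extra fields depending on the scan options used.

```python
import re

# Modeled on nmap's -oG "greppable" line format: Host: <ip> (<name>) Status: <state>
HOST_RE = re.compile(r"^Host:\s+(\S+)\s+\(([^)]*)\)\s+Status:\s+(\w+)")

def parse_greppable(output: str) -> list:
    """Extract a list of host records from greppable nmap output."""
    hosts = []
    for line in output.splitlines():
        m = HOST_RE.match(line)
        if m:
            ip, name, status = m.groups()
            hosts.append({"ip": ip, "name": name or None, "status": status})
    return hosts

# Invented sample output for illustration
sample = """# Nmap scan initiated
Host: 10.1.2.3 (nt4-fileserver.corp.local) Status: Up
Host: 10.1.2.9 () Status: Down
"""
up = [h for h in parse_greppable(sample) if h["status"] == "Up"]
```

Four weeks of that across a global address space, and you start to appreciate why asset inventory is a discipline, not a one-time project.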

Now let's look at security in the context of this article (and reported breach) - Let me highlight a few key quotes for you-
"The event was complicated by the fact that the company had undergone corporate acquisitions, which introduced more network connections, and consequently a wider attack surface. The firm had more than 100 entry and exit points to the Internet."
You may chuckle at that, but I bet you have pretty close to this at your organization. Sure, maybe the ingress/egress points you control are few, and well protected, but it's the ones you don't know about which will hurt you. Therein lies the big problem - the disconnect between business risk and information security ("cyber") risk. If information security isn't part of the fabric of your business, and part of the core of the business decision-making process, you're going to continue to fail big, or suffer death by a thousand papercuts.

While not necessarily as sexy as that APT Defender Super Deluxe Edition v2.0 box your vendor is trying to sell you, network and system configuration management, change management and asset management are things you absolutely must get right, and must be involved in as a security professional for your enterprise. The alternative is you have total chaos wherein you're trying to plug each new issue as you find out about it, while the business has long forgotten about the project and has moved on. This sort of asynchronous approach is brutal in both human effort and capital expenditure.
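To make that concrete, here's a toy sketch of the kind of check solid asset and change management enables: diffing a fresh network scan against the approved inventory to surface drift in both directions. The data shapes and addresses are hypothetical.

```python
def diff_inventory(approved: set, observed: set) -> dict:
    """Compare the approved asset list against what a scan actually saw."""
    return {
        "unknown": sorted(observed - approved),  # on the wire, not on the books
        "missing": sorted(approved - observed),  # on the books, not responding
    }

# Hypothetical example data
approved = {"10.0.0.5", "10.0.0.6", "10.0.0.7"}
observed = {"10.0.0.5", "10.0.0.7", "10.0.0.99"}

drift = diff_inventory(approved, observed)
# drift["unknown"] flags the host nobody told security about;
# drift["missing"] flags the one that quietly went dark
```

Trivial code - but it only works if the "approved" side actually exists and is kept current, which is exactly the unglamorous discipline being argued for here.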

Now let's focus on another interesting quote from the article. Everyone likes to offer advice to breach victims, as if they have any clue what they're saying. This one is a gem-
"Going forward, “rearchitecting the network is the best approach to ensure that the company has a consistent security posture across its wide enterprise," officials advised."
What sort of half-baked advice is that?! Those of you who have worked incidents in your careers, have you ever told someone that the best thing to do with your super-complex network is to totally rearchitect it? How quickly would you get thrown out of a 2nd story window if you did? While this advice sounds sane to the person who's saying it - and likely has never had to follow the advice - can you imagine being given the task of completely rearchitecting a large, complex network in-place? I've seen it done. Once. And it took super-human effort, an army of consultants, more outages than I'd care to admit, and it was still cobbled together in some places for "legacy support".

Anyway, somewhere in this was a point about how large, complex networks and dynamic environments are doomed to security failure unless security is elevated to the business level and becomes an executive priority. I recognize that not every company will be able to do this because it won't fit their operating and risk models - but if that's the case you have to prepare for the fallout. In the cases where risk models say security is a business-level issue you have a chance to "get it right"; this means you have to give a solid effort and align to business, and so on.

Security is hard, folks.

Monday, October 6, 2014

To Reform and Institutionalize Research for Public Safety (and Security)

On October 3rd, 2014, a petition appeared online titled "Unlocking public access to research on software safety through DMCA and CFAA reform". I encourage you to go read the text of the petition yourself.

While I believe that, on the whole, the CFAA and more urgently the DMCA need dramatic reforms if not to be flat-out dumped, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts, if you follow our Down the Security Rabbithole Podcast series, and I would likely throw a party if the DMCA were repealed tomorrow - but unlocking "research" broadly is dangerous.

Wednesday, September 24, 2014

Software Security - Hackable Even When It's Secure

On a recent call, one of the smartest technical folks I can name said something that made me reach for a notepad, to take the idea down for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even if they've managed to avoid some of the key issues.

Let me explain this a little deeper, because this thought merits such a discussion.

Friday, September 5, 2014

Managing Security in a Highly Decentralized Business Model

Information Security leadership has been, and will likely continue to be, part politicking, part sales, part marketing, and part security. As anyone who has been a security leader or CISO can attest, issuing edicts to the business is as easy as it is fruitless - getting positive results in all but the most strictly regulated environments is nearly impossible. In highly centralized organizations, at least, the CISO stands a chance, since the organization likely has common goals, processes, and capital spending models. When you get to an organization that operates in a highly distributed and decentralized manner, the task of keeping pace on security grows to epic proportions.

Wednesday, August 20, 2014

The Indelicate Balance Between "Keep it Working" and "Keep It Safe"

Security professionals continue to fool themselves into believing we walk a delicate balance between keeping the business functional, and keeping it safe (secure). This is, in the belief of many people including me, a lie. There is no delicate balance. The notion of being able to balance these on a teeter-totter looks like this:

Guess which one the 'safe and secure' is? Exactly.

An interesting conversation (warning: profanity, not so safe for office) happened earlier today. And as per the usual, someone very smart and seasoned in the enterprise side of defense made the point clear.

The bottom line is this:
  You can't ever cross the line into 'breaking business stuff' because you likely never get the chance again.

Each time the pendulum swings into the "secure" side of the spectrum it stays only for a tiny fraction of time, and we as security professionals have to work very hard to make it stick, or it swings back the other way... quickly.

So the question then is, how do we "make it stick"?

Simple! We demonstrate the business value of good security (aka keeping the enterprise safe). Of course, there are few things that are more simple than this, including tightrope walking the Grand Canyon, being an astronaut, and nuclear physics. Whoops, hyperbole ran away with me there for a moment, sorry. Back to reality.

So the key is to make security sticky. You need to align security to something the business can get behind. Hence, business value is so important to measure. But if you're still stuck reporting useless metrics - like how many port scans your firewall blocked, or how many SQL Injection instances your Software Security program identified - you're miles away from demonstrating business value.

This brings me back to KPIs, and the development of data points which strongly align to business/enterprise goals. All of this is predicated on someone in the security organization (or everyone?) being alert and aware to what the business is trying to accomplish at the board/strategic level. Does your organization have this type of awareness and knowledge? Are you leveraging it?
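As an illustration of the difference, here's a minimal sketch (with invented records) of a KPI that speaks in business terms - mean days to remediate findings, bucketed by the business criticality of the affected asset - rather than raw counts of blocked port scans.

```python
from statistics import mean

# Invented remediation records for illustration
findings = [
    {"asset_criticality": "revenue-critical", "days_to_fix": 3},
    {"asset_criticality": "revenue-critical", "days_to_fix": 11},
    {"asset_criticality": "internal-only",    "days_to_fix": 40},
]

def mttr_by_criticality(findings: list) -> dict:
    """Mean time-to-remediate, grouped by how much the business cares."""
    buckets = {}
    for f in findings:
        buckets.setdefault(f["asset_criticality"], []).append(f["days_to_fix"])
    return {tier: mean(days) for tier, days in buckets.items()}

kpi = mttr_by_criticality(findings)
# "We fix issues on revenue-critical systems in a week on average" is a
# sentence a board understands; "we blocked 40,000 port scans" is not.
```

The hard part, of course, isn't the arithmetic - it's knowing which assets are revenue-critical in the first place, which loops right back to business alignment.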

I can tell you that if you're not, the picture above will continue to be your fate... from yesterday to today and on into the future.

Wednesday, August 13, 2014

Getting in Our Own Way

The security community has this widely-understood reputation for self-destruction. This is not to say that other communities of professionals don't have this issue, but I don't know if the negative impact potential is as great. Clearly I'm not an expert in all fields, so I'll just call this a hunch based on unscientific gut feeling.

What I do see, though, much like with the efforts of the "I am the Cavalry" movement, which sent an open letter to the auto industry, is resentment and dissent without much backing. In an industry which still has more questions than answers - and it gets worse every day - when someone stands up with a possible effort towards pushing a solution, you quickly become a lightning rod for nay-sayers. Why is that?

One of my colleagues, a veteran CISO, has a potential answer - one which, for the record, I'm uncomfortable with. He surmises that the collective "we" (as in the security community) aren't actually interested in solving problems, because the real solutions require "soft skills" like personality and business savvy in addition to technical acumen. It turns out that taking the time to understand the problem, and attempting to solve it (or at least move the ball forward), is very hard. With the plethora of security problems in nearly everything that has electricity flowing to it, it's near-trivial to find bugs. Some of these bugs are severe; some of them are the same 'ol, same 'ol SQL injection and buffer overflows which we identified over a decade ago but still haven't solved. So finding problems isn't rocket science - actually presenting real, workable solutions is the trick. This is just my humble opinion based on my time in the enterprise and in consulting.

I once worked for a CISO who told his team that he didn't want to hear about more problems until we had a proposed solution. Furthermore, I'm all for constructive criticism to help contribute to the solution - but don't attack the person or the proposed solution just to do it. Don't be that person.

I think it may have been Jeff Moss who I heard say it - "Put up or shut up"... so give me your solution idea, or stop whining that things are broken.

Friday, August 8, 2014

Why Your Enterprise Most Likely Doesn't Have a Zero-Day Problem

It should come as no surprise that at Black Hat 2014 this week there were an enormous number of invaluable conversations, as always. We talked about attacks, exploits and exploitation techniques, as well as defenses basic and exotic. A few of these conversations ended up in the same place, logically, and have led me to conclude that the majority of enterprises out there don't have a zero-day problem. Let me explain...

It should by now be clear, if you're a security professional, that the average enterprise struggles with even the most basic security hygiene. This of course makes life difficult when we start to pile on cross-silo dependencies - for example, configuration management - for security effectiveness. While I certainly don't mean to imply that no enterprise can do the basics, I have yet to meet a CISO who is comfortable with the fundamentals of asset, configuration and user management at enterprise scale and in a timely fashion.

That being said, I further submit that zero-day attacks and exploits are an advanced level of attack, typically reserved for targeted organizations whose significant security capabilities mandate that advanced level of effort. Basically, if you've got your fundamentals right, you're doing good block-and-tackle security, and your users are well educated to be skeptical of links and files sent to them, then the determined attacker will be forced to turn to exploiting as-yet unknown and unpatched weaknesses in your software to get through your defenses. The truth is, I have come to believe, that the vast majority of enterprises just don't have their act together enough to merit that level of effort from the attacker.

From what I know, burning a zero-day exploit is a non-trivial matter for an attacker. Zero-days, while still fairly plentiful, have a cost associated with them, and an attacker will use one only once he or she has exhausted the typical, and often easy, methods of breaching your security. There are simply too many options further down the chain. You need look no further than a conversation with David Kennedy of TrustedSec, who makes it clear exploits aren't required to break in. All that's required, in still far too many instances, is sending someone in the organization a malicious link or a malicious file, and they'll open the door and show you their closely-guarded intellectual property... and probably hold the door for you as you walk out with it. Yes, indeed, it is that simple to defeat corporate security, with mind-boggling results.
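The economics here can be sketched as a toy decision model: the attacker works through options cheapest-first and only burns a zero-day when everything cheaper is blocked. The option names and cost units below are made up purely for illustration.

```python
# Hypothetical attack options with made-up relative costs to the attacker
ATTACK_OPTIONS = [
    ("phishing link", 1),
    ("malicious attachment", 2),
    ("credential stuffing", 5),
    ("zero-day exploit", 100),
]

def cheapest_viable_attack(blocked: set) -> str:
    """Pick the cheapest option the defender hasn't taken off the table."""
    for name, _cost in sorted(ATTACK_OPTIONS, key=lambda o: o[1]):
        if name not in blocked:
            return name
    return "give up"

# Average enterprise: the basics aren't covered, so the cheap path works
# and the zero-day stays in the attacker's pocket.
typical = cheapest_viable_attack(blocked=set())

# Hardened (or highly targeted) enterprise: only now does the
# expensive option become worth spending.
hardened = cheapest_viable_attack(
    blocked={"phishing link", "malicious attachment", "credential stuffing"}
)
```

The model is deliberately crude, but it captures the post's core claim: zero-days are a last resort, and most defenses never force the attacker anywhere near that point.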

So why burn a zero-day? Attackers typically won't unless they've encountered roadblocks in other avenues. Since PowerShell is installed on every new Windows PC, it's the perfect tool to use to execute an attack, legitimately, on a target host. All the user has to do is let you in...and we all know that most users will still click on the lure of a dancing bear or the promise of nude photos of their favorite celebrity.

So while your enterprise security organization may actually encounter some malware with zero-day exploits in them, they likely aren't targeted at your organization. The problem your average enterprise has is poor fundamentals - leaving you open to all manner of exploit and penetration without the use of any more advanced techniques than "asking the user for permission". So why would an attacker burn a precious zero-day against you? They likely wouldn't. Unless, you know, you're a target.

Friday, August 1, 2014

Security on a Weak IT Foundation

The interesting question of maturity

Earlier this week, Bill Burns asked me this question...
"can a security team have a higher level of maturity than the IT team that handles its operational tasks?"
It's an interesting question, and one that certainly requires some level of thought. My off-the-top-of-my-head response was - well... no. This is clearly a "lowest common denominator" problem.

The more I thought about it, the more this seemed like an obvious answer - a CMMI level 2 IT organization was never going to support a CMMI level 3-5 security organization. That should seem rather obvious. But the more I thought about this, the more I think that a CMMI level 2 IT organization can't support anything but an n-1 security organization. Let me explain my thinking here-

Weak foundations, weak security

It should be rather obvious that a weak foundation cannot support a tall, strong structure. You simply don't have the stuff it takes to hold it all up, from a building perspective.

In the IT world, if you have weak operational IT practices, you'll never get anything better than weak security practices. For example, let's look at how IT views and assesses assets on the corporate network. If IT can't tell you every asset on the corporate network right now in an on-demand manner, with troves of accurate meta-data then you can't possibly expect to build a strong security operations program on top of that. Security needs foundational things such as the ability to know what's on the network and loads of meta-data about each asset in order to make decisions on the risks these assets pose.

Decomposing that even further to the most simple blocks - if IT doesn't know what's most critical to the business in terms of supporting function, security has absolutely zero chance of successfully crafting a defensive response strategy or operational plan. If an asset is suspected of being malicious or compromised (an IP address, for example), meta-data is needed to decide whether the alert could potentially be a false-positive, or whether it even warrants a response (maybe it's just some lab machine which can simply be turned off). As kids we learned from G.I. Joe that knowing is half the battle - and not knowing means you're lost.
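Here's a minimal sketch of what that meta-data-driven triage might look like in practice; the fields and the decision rule are illustrative assumptions, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    ip: str
    owner: str
    business_function: str  # what the asset supports for the business
    environment: str        # e.g. "production", "lab"

def triage(alerted_ip: str, inventory: dict) -> str:
    """Decide a response for an alert using asset meta-data (or its absence)."""
    asset = inventory.get(alerted_ip)
    if asset is None:
        return "escalate: unknown asset"       # not knowing means you're lost
    if asset.environment == "lab":
        return "contain: isolate lab machine"  # low blast radius - just turn it off
    return f"respond: notify {asset.owner} ({asset.business_function})"

# Hypothetical inventory
inventory = {
    "10.2.0.4": Asset("10.2.0.4", "payments-team", "card processing", "production"),
    "10.9.9.9": Asset("10.9.9.9", "research", "malware lab", "lab"),
}
```

Every branch of that function depends on data only IT can supply - which is the whole point of the weak-foundations argument.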

Weak foundations, weaker security

In an effort to try to understand this more, my line of thinking leads me to believe that organizations with a particular CMMI score when it comes to general IT, can only support an n-1 CMMI score for security maturity.

The reason I believe this is that security operations, by their very nature, cross many IT silos and require well-thought-out and precisely executed workflows and communication to function well. When you cross team boundaries, silos and responsibilities, things inherently break down at least a little - thus diminishing what you can build on top of them. Like the great pyramids - the higher you build, the more you have to stack inward. Security - at least in my narrow view - sits right at the top of the IT ladder, making it fairly difficult to do well if the base of IT operations is shaky.


The long and short of it is this - if your enterprise has poor IT hygiene, and ranks low on the CMMI scale - focus security effort and resources on helping IT level up before you start to drop in expensive and complicated security kit. In essence, flashy boxes or solutions won't do you much good when you try to operationalize them on top of poorly functioning IT infrastructure, processes and methodologies.

Saturday, July 26, 2014

Ad-Hoc Security's Surprisingly Negative Residual Effect

Security is fraught with the ad-hoc approach. Some would argue that the very nature of what we do in the Information Security industry necessitates a level of ad-hoc-ness and that to try and get away from it entirely is foolish.

CISOs are challenged with this very thing, every hour of every day. Threats pop up that they aren't prepared for, and present an imminent danger to the business, so they must react. These reactions are necessary to keep the business operational, no one will argue that, but it is when they have a residual effect on the enterprise that we run into problems.

It's the old snowflake rolling down the mountain analogy... sort of.

How it starts

Since no security program I'm aware of has managed to account for all the threats it will encounter, let's take any one of them as an example. The threat may be some semi-custom malware which targets a particular piece of software in their industry vertical, or it may simply be something as common as a banking trojan. The CISO realizes that they simply don't have the supporting infrastructure to mitigate or help in remediation of the threat - so off to the ad-hoc bin we go.

There are, in general, three possible courses of action which follow.

First, the ever-popular "we'll write some code" option. Many CISOs have access to some amazing security talent, and thus the ability to whip up a custom-coded solution which takes care of the issue. Quite common. I'm not even saying this is a bad option! If you've got the talent, why not utilize it to its full potential?

Second, the almost-as-popular "hire an army of consultants" option. External consultants descend on your enterprise and identify, contain, and work to mitigate the current threat. Your hope is that they document their work, and maybe leave behind some clues as to what was done, why, and how you can repeat the procedure in the future.

Now for the most popular option, unfortunately, if the issue is big enough: the "let's buy a box" option. CISOs who feel overwhelmed look to their partners, and oftentimes the analysts, to provide them with options. Not surprisingly, much of the time the 'solution' comes in a nice 2U rack-mountable appliance with a yearly maintenance contract.

With the threat, at least temporarily, addressed, it's on to the next big issue. Playing whack-a-mole is the modus operandi for all too many in security leadership... and it's not a commentary on their effectiveness or abilities, it's just simply the way it is.

Once you've moved on from the previous problem what we have left is what is commonly referred to as a "one-off".


Entirely too many networks are simply littered with "one-offs" - solutions which once served some point purpose but have either been forgotten, fallen out of maintenance or support, or simply no longer serve the greater mission of the enterprise security organization. So many of these "one-offs" don't integrate well, aren't interoperable, or don't scale... or worse, they're simply not manageable at the level your organization needs.

The problem with ad-hoc security measures is that we tend to create too many one-offs like this. Databases getting ripped off through the web apps? Drop in a WAF (Web Application Firewall). PCI requires you to log? Drop in a low-cost SIEM solution. Having difficulty managing the JAVA runtime in your environment ... err ...let's leave that one alone for now. You get the idea.

One of the biggest transgressors in this space is the Identity and Access Management tools in an enterprise. Since the problem is so challenging, enterprises tend to use multiple tools to solve niche, and timely, issues. What's left over is a patchwork of several different IAM tools, identity stores, and rights-management consoles.

The real problem with ad-hoc

The real problem with ad-hoc isn't that there are way too many devices, servers, systems, and tools to keep updated and functional. Yes, this is definitely a problem, but not the problem, in my opinion. The biggest problem is one of resources - and we're talking about people here. Human beings need to sleep, eat lunch, hang out at the water cooler and take bio breaks. Humans who spend their time trying to make a few tools play nice are wasting a lot of it...

The challenge of ad-hoc security is that you end up leaving behind a wake of poorly operationalized hardware, software and processes. This turns into a black hole for your people's time, and I don't have to tell you that this creates opportunities for attackers.

The realization

The unfortunate end-result of ad-hoc security, then, is decreased security. You're not really reducing risk over the long-haul but rather increasing it, due to the increased complexity, resource drain, and low levels of inter-operability. It makes perfect sense then that CISOs who don't take a pre-planned approach feel like they're forever on a hamster-wheel and are never getting anywhere in spite of superhuman efforts.

The better approach

Many of you CISOs and security leaders have already discovered and are implementing program-based security measures. You start by defining a business-aligned security strategy, which pre-plans the 'big picture' approach you will take. You set out the high-level guidance, and set timelines and try to manage projects with the understanding that things come up - but you can be ready for them.

This doesn't mean you suddenly stop tactical security measures - you just try to avoid ad-hoc situations which have you dropping in processes and technologies which don't fit in with your long-term goals and strategy. This isn't entirely difficult, but takes having that strategy first!

As always, I look forward to your replies, comments, suggestions and experiences.

Monday, July 21, 2014

Tackling 3rd Party Risk Assessments Through a 3rd Party

In the enterprise, sometimes absurd is the order of the day.

Earlier this week I ended up in a conversation with a colleague about 3rd party risk. We started talking about the kinds of challenges his organization faces, and what he, as the leader of its 3rd party risk program, is up against. As it turns out, when the organization set out to tackle 3rd party risk, a slight miscalculation was made. Long story short, his group has more than 100 vendors to manage in terms of 3rd party risk. That's 100+ vendors that interact with the network, the data, the applications, the people, and the facilities his enterprise has.

His team is staffed by a whopping 3 people, including him. To put this into perspective: given roughly 250 business days a year, each analyst needs to complete on the order of 50 reviews, which leaves a maximum of about 5 days per 3rd party. Of course, we're not counting vacation days, sick days, or snow days. We're also not counting travel to/from sites to actually do investigative work, or the time it takes to do an analysis, debrief, or any of that.
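The staffing math above is worth sanity-checking with a quick back-of-the-envelope sketch. Note the 150-vendor figure is my assumption, just to make "over 100" concrete; the 250 business days is a rough US figure that generously ignores PTO, travel, and write-up time:

```python
# Back-of-the-envelope capacity check for a 3rd party risk team.
# Assumed figures: 150 vendors (illustrative), 3 analysts, 250
# business days/year with no PTO, travel, or reporting overhead.

VENDORS = 150
ANALYSTS = 3
BUSINESS_DAYS = 250

reviews_per_analyst = VENDORS / ANALYSTS
days_per_review = BUSINESS_DAYS / reviews_per_analyst

print(f"{reviews_per_analyst:.0f} reviews per analyst per year")
print(f"{days_per_review:.1f} business days per review, at best")
```

Under those (charitable) assumptions each analyst gets 5 business days per vendor, end to end - and every real-world deduction from that 250-day figure only shrinks the window further.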

This started to unravel in my mind, pretty quickly. I pressed my colleague for an answer to how he could possibly achieve any measure of compliance and completeness, to which he answered: "We outsource the evidence gathering to a 3rd party".

My head exploded.

I'm not saying it doesn't make sense, or that there are many real alternatives - but you have to appreciate how crazy this sounds. They've outsourced the fact-finding portion of 3rd party risk assessments... to a 3rd party. BOOM.

The truth is that there was a lot he was doing behind the scenes which made this a little easier to swallow. For example, a standard questionnaire was built on a framework they developed and approved internally, which minimized the amount of 'thinking' a 3rd party assessor had to do. Each category of required controls had a gradient on which the 3rd party being assessed was graded, and there was really very little room for interpretation. Mostly.

If you think about it, I'm confident there are many, many enterprises out there with this minor challenge. Every enterprise does business with dozens - often hundreds - of 3rd parties to varying degrees. From your outsourced payroll provider, to the company that shreds your documents once a week, to the company that sends the administrative assistant who sits at their desk, answers calls, and surfs Facebook all day. Every enterprise has a vast number of 3rd parties which need to be assessed - and risks identified.

While I'm definitely not crazy enough to think companies should only handle this with internal, trusted employees, I'm not completely convinced hiring out to a 3rd party is that fantastic of an idea either. There is so much to consider. For example, if that 3rd party assessor misses something, are they liable, or does that fall to your company? Ultimately in the court of public opinion - this is a trick question. The answer is always you.

I suppose the long and short of it is that enterprises have little choice but to use a 3rd party to help them manage 3rd party risk. But then the only question is - do they assess that 3rd party which will be doing the 3rd party risk assessments for unnecessary risk? It's enough to make your head spin, I know it gave me a headache just thinking about it.

What do you think the mature 3rd party risk assessment looks like? Do you have leading practices you could share? Contact me as I'd like to share them with our peers, and others who are struggling with this task right now.

Thursday, July 10, 2014

Compliance and Security Seals from a Different Perspective

Compliance attestations. Quality seals like “Hacker Safe!” All of these things bother most security people I know because, to us, they provide very little tangible insight into the security of anything. Or do they? I saw a reply to my blog post on compliance vs. security which made an interesting point. A point, I dare say, I had not really put front-of-mind but probably should have.

Ron Parker was of course correct… and he touched on a much bigger point that this comment was a part of. Much of the time, compliance attestations and security badges (aka “security seals”) on websites aren’t done for the sake of making the website or product actually more secure… they’re done to assure the customer that the site or entity is worthy of their trust and business. This is contrary to conventional thinking in the security community.

Saturday, July 5, 2014

Critical Infrastructure as the Next "Cyber War"

I'm tired of reading headlines that say stuff like "It's [cyber] the next war!" because not only are they spreading FUD (fear, uncertainty, doubt) but if this was really the case we [as Americans] would already have "lost".

One of the things the FUD-sters like to ballyhoo about is the nation's critical infrastructure and how our power plants, water treatment facilities and chemical processing plants will be [or already are] targets for foreign nation states in a sneaky digital assault. News flash - this has been going on for some time, and while it's crystal clear to anyone paying attention that the nation's critical infrastructure is in a seriously neglected state when it comes to security - this likely isn't America's biggest problem.

Thursday, July 3, 2014

Harmonizing Compliance and Security for the Enterprise - The Introduction

Pursuit of compliance in the enterprise is proving to be a staggeringly bad security investment, if you ask nearly any enterprise security professional. And yet, we continue to see companies who get breached fall back on the same press releases: "We were PCI-DSS compliant! It's not our fault we were breached!"

I ask myself why, every time it happens. I still don't have a good answer.

Monday, June 16, 2014

Choosing the Right Entry Point for a Software Security Program

The topic of software security, or AppSec, has once again cropped up recently in my travels and conversations so I thought it would be prudent to address that here on the blog. As someone responsible for software security in an enterprise, Fred was being given a small pool of money and a chance to plan, design, and implement a software security program. The big question on Fred's mind then, was where to start.

As we talked through the options, and I discussed some of the mistakes I've made and have witnessed others make, I advised Fred to be cautious. One of the most damaging things you can do wrong when starting a software security program from scratch is starting in the wrong part of your Software Development Lifecycle (SDLC). This is exacerbated by the fact that many organizations have more than one software development lifecycle, so picking the wrong starting block is quickly amplified.

Sunday, June 15, 2014

Getting Wrapped Around the CISO Reporting Structure Axle

CISOs are in the limelight right now as the parade of data breaches marches on. One of the big topics is the issue of reporting structures. To whom should the CISO report? Should the senior Information Security leader be a company officer? All valid questions, and more.

Tuesday, June 3, 2014

In Defense of Reactive Security

Warning: This post contains a Sun Tzu quote...

Let's start here:
You're driving down the street, minding your own business and doing the speed limit. Both hands are on the wheel, no cell phone in sight, radio turned down to a moderate level, and you're generally driving like the books tell you to. As you approach the intersection where your light is green you take a quick glance to your left, then to your right. All is right, and you have the clear go-ahead. Now as you come into the intersection a child on a skateboard dives into the street in front of you...
In your mind, right now, you've slammed the brakes and are laying on the horn, right?

Every one of us reacts to our environment, it's how we survive. And yet - when you say "reactive" security today you get looks from people like that's a dirty word. Why is that? Much like other circumstances where perfectly reasonable terms and ideas get hijacked ... I blame marketing.

A responsible enterprise security program plans for as many possible negative scenarios as possible and accounts for them in advance (called being pro-active) and then reacts as conditions in the environment change (called being reactive). One without the other simply makes no sense, and yet all the marketing literature has CISOs thinking that being reactive is somehow bad.

It would appear that in the quest to invent new problems for the many 'solutions' out there, the term reactive has been ascribed some meaning I'm not familiar with. To clarify - both reactive and pro-active security measures are required - in harmony.

There's this interesting quote from Sun Tzu that applies here, mostly-
Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.
Pro-active security is better known as strategy. This is all the planning a security leader will do based on a survey of their current resources, capabilities, technology and environment - and if you're lucky, maybe based on history as well. Being pro-active is a great idea; in fact, it's absolutely essential. Anyone who's ever tried to paint a room, or lay tile, or heck even sleep-train children will acknowledge that without a proper plan you may get half-way in before you realize you're lost. There is a divergence from Sun Tzu's quote here, in Information Security. Strategy without tactics, in our industry, is certain failure. I don't mean the type of failure where you get hacked or breached - I mean the type of failure where you get hacked or breached and you find out 9 months later because someone reports it to you... or the media calls your PR officer and asks for a quote on the giant breach you've experienced.

Reactive security is better known as tactics. You need tactics. Your organization and your strategy are nothing without tactics. The principal reason is that sometimes, just sometimes, those bad guys/gals that we all plan for get creative and adjust their behaviors. Sometimes the markets shift, and business climates and technologies change. Sometimes a vulnerability is found in something you consider core to your security - like SSL, for example - and you have to adjust quickly and decisively. Reactive or tactical security is something you can indeed plan for, but only up to a point... and you have to give yourself and your security program enough flexibility to be able to adapt and adjust.

From experience, one of the biggest issues to date [that I've come across in my clients' and my own experience] in security programs is that they become inflexible, unable to adapt to their changing environments. Once a security strategy is laid out, funding is set, and projects are launched, everything is set in stone. Should needs change, should adversaries surface we didn't account for, or should new technologies or methods simply arise - we're left with a shrug of the shoulders and "Well, the budget for this year is set, we can plan for that next year" - which is absolutely insane.

So I give the CISOs whom I advise 3 simple rules to go by:

  1. Develop a strong plan, which has clear goals and has the ability to be flexible when needed
  2. Develop a tactical capability to pivot on-the-fly as needs, environments, and adversaries change
  3. Expect to have to adapt either or both of those

Tuesday, May 27, 2014

Hacking the Registry to Keep WindowsXP Updating - A Bad, Bad Idea

When WindowsXP support officially expired a while back, I wrote a blog post titled "The Great WindowsXP Cataclysm" which talked about some of the reasons organizations had for staying on the antiquated operating system. Some of those reasons were valid, especially if you were running a Point-of-Sale (POS) terminal system based on WindowsXP Service Pack 3, called "Windows Embedded POSReady 2009". According to this Microsoft lifecycle support site, this POSReady system runs embedded WindowsXP and is supported until April 9th, 2019.

Leave it up to the security community to figure out that a simple registry key which identifies the POSReady 2009 operating system could be hacked into the registry of a WindowsXP machine to keep it getting updates. Well... sort of. This is where it gets weird. Read this ZDNet article with Microsoft's response carefully... and notice that while they admit this will update WindowsXP systems, there is a string of caveats that should make you think twice.
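For the curious, the circulating hack was reportedly as simple as importing a one-key .reg file along these lines. I'm reproducing the widely reported key from memory, so treat the exact path as unverified - and, to be clear, I'm showing it only to illustrate how trivial the trick is, not endorsing it:

```reg
Windows Registry Editor Version 5.00

; Reportedly tricks Windows Update into treating a WindowsXP SP3
; machine as a POSReady 2009 embedded device. Path unverified;
; shown only to illustrate how trivial the hack is.
[HKEY_LOCAL_MACHINE\SYSTEM\WOW6432Node\Microsoft\Windows NT\CurrentVersion\PosReady]
"Installed"=dword:00000001
```

That's it - one key, and Windows Update starts offering patches built and tested for a different product.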

It's important to acknowledge that this hack (and that's all this really is) essentially tricks the update service into thinking your OS is a point-of-sale WindowsXP embedded device. The essential question, which Microsoft hints at, is just how different is WindowsXP from WindowsXP Embedded? The answer is - quite a bit, actually. Check out this paper on the difference between WindowsXP Professional and WindowsXP Embedded and decide for yourself if you're willing to take that risk. Architecturally, the two operating systems are close - obviously, since they're both based on the same kernel. But once you start getting into the add-ons and run-time environment options, Professional and Embedded start to look dramatically different - in my opinion. This means that if you start applying patches and bits meant for the embedded operating system onto your corporate desktops, at the very least the results would be unpredictable...

So let's summarize my thoughts here.
  • some organizations are still on WinXP on the corporate desktop (and elsewhere, obviously)
  • for those that haven't migrated, excuses are critical... not necessarily valid, but critical
  • a quick registry hack is available which tricks Windows Update into pushing patches and updates meant for a variation of your WindowsXP operating system onto your machine(s)
The hack is a bad idea for the following reasons:
  • potentially de-stabilizes your WindowsXP operating system
  • necessitates significantly more testing to ensure compatibility
  • quite obviously breaks your software agreement
  • could potentially get you into a CFAA or other legal situation
Essentially, my thoughts are this - if you're resorting to hacking the registry to get patches which are meant for an OS similar to yours onto your machine for security - you've got a big, big problem. The energy you're expending, and potential hazards you're creating on top of system stability and unknown security issues ...should get you fired. Immediately.

Folks - this isn't a viable work-around to keep WindowsXP alive. It's a bad, bad idea.

Tuesday, May 13, 2014

Enterprise Security Tools Vendors' Big Problem

I’ve spent a substantial amount of time helping organizations justify enterprise security software purchases, specifically software security testing tools, in my career. Over that time it was common to run up against the question of “Why buy an enterprise tool when we can use open source?” The answer at the time, which was almost 5 years ago at this point, was that the maturity of the open-source tools wasn’t there. That argument, in recent years, has all but vanished, and enterprise security vendors which have open-source or inexpensive alternatives are in trouble.

The most immediate effect I’m seeing is in the testing tools space. Penetration testing tools are the natural target for erosion of market share by open source, simply because these types of tools are generated in large volumes by testers as they work through engagements, and it’s natural for people to want to share. Take for example the most popular penetration testing platform, hands-down, right now – Metasploit. HD Moore’s community-sourced platform has grown into a monster, with code commits in volumes that would make even the most diligent enterprise security software shop blush. Now – not all this code is clean and brilliantly functional, but it does the job and, most of all, it’s on the leading edge of innovation, as ideas can quickly be pushed to the code base, and taken up and built upon by members of the community. A brilliant approach that a closed-source enterprise software shop will struggle to match. Tools in this space with price tags north of $10k or $20k are struggling, for the very simple reason that even though the open-source equivalent isn't necessarily as polished and pretty – in many cases it’s technically more advanced, more up-to-date, and, as a result of being community driven... more innovative.

I don’t think this means certain death for the big closed-source enterprise security vendors though. It would appear that there is still a customer demand for the tools that these shops release – but it’s a matter of understanding your target market. As I see it there are at least three classes of users: specialist, expert, and master ranked by the person’s abilities.

At the specialist level you’re typically not getting someone who can write their own complex scripts or exploit code. In fact they’re generally just looking for a blunt implement to do the job… with no requirement to do it exceptionally well. In software security these people manifested as analysts handed a list of duties, one of which just happened to be testing the organization’s web applications. These people weren't testers by skill or trade, nor were they particularly knowledgeable about penetration testing, or even coding web applications. They needed to do a job, and that job was to test the app with the least amount of pain. They would look at low-cost or open-source alternatives which weren't particularly polished and required lots of manual work/scripting and buckets of knowledge, and (generally) understood that this wasn't going to work for them. They sought out the enterprise software which gave them push-button results and did some of the thinking for them. No assembly required, batteries included.

The next rung up the ladder was always tricky – because the expert class was just smart enough to be dangerous to themselves. What I mean is that these people knew a little bit about software development, they knew the basics of security, and they were willing to tinker. They still had a job to get done, and the enterprise software path would have been more productive in many of these cases, but either because they had a passion and desire to learn, or because they simply wanted to see if they could do it, they would compare open-source to closed-source tools (apples and pumpkins) without fully understanding that these were different things. This user was tricky because they oftentimes felt that the open-source alternatives were “good enough” even though their skills weren't good enough to produce value from the tools they were using. This, as you can guess, produced scary results.

The third category was the master class. I met only a handful of these types in my time in that supporting role. These people had no desire or need for enterprise-built security testing tools… the tools were too constraining, too slow-moving, and just not good enough. They could run into a problem the enterprise software couldn't tackle and effectively code their own way around it… which allowed them to move faster, be more agile, and adapt better to the changing nature of the target. I stress that there were very few of these people out there… most fell into the expert class. Master class is typically achieved after years of experience immersed in technology and focused work. Generalists rarely get to this level of skill – and it’s really the enterprise’s fault for pushing people to perform 3-4 different roles on average, when only 1-2 of them really fall into the person’s wheelhouse.

“Horses for courses” one of my colleagues would say. Make sure you’re advising intelligently so that the right person has the right tool in their hand for their skill set, and the job at hand.

The interesting thing, I think, is that as more security professionals flood the market, these dynamics are being skewed. I believe the distribution between specialist, expert and master is dramatically changing. A few years ago I would have said we had 50% specialist, 40% expert, and 10% master class out there. Today’s security industry is becoming more community-enabled, driving that distribution further to the right. So now I think we’re closer to 30% specialist, 50% expert, 20% master, and continuing to shift towards master at a good pace. This is both good for the overall state of security, and bad for enterprise security software companies which relied on the specialist and expert classes as target buyers.

The good news is that enterprise security software isn't dead, it’s just being forced to re-evaluate its value propositions. Enterprise software shops need to re-focus their efforts to engage the community more, make their tools more open (one of the biggest gripes ever is proprietary closed formats of data) and extensible and start to think about lowering their costs… or maybe shift their licensing models to the as-a-Service model. I’m not saying I have the answers, but these shifts are happening and they’re not asking for permission. Enterprise security software shops, if they wish to stay relevant, need to address the shifting sands of skills, needs, and community. Or they can become irrelevant …

Tuesday, April 29, 2014

Penetration Testing Does Not a Secure Enterprise Make

Now that the dust has somewhat settled from the Target breach, and the subsequent lawsuit madness is hopefully over, I feel like it's safe to write about this topic - as much as it ever is with a touchy subject. Much of the writing, rhetoric, and finger-pointing around that breach centered on the fact that a 3rd party was hired to 'find faults' in the 1st party, and the 3rd party apparently failed to do so. Or... something like that.

I wish I could say that this is the first time I've heard a confused understanding of what penetration testing is, but it's not. I also wish I could say that the purpose, limitations, and actual best-use of penetration testing is well understood amongst enterprises - but again - it's not.

Penetration testing as TVM

An organization I'm familiar with basically used penetration testing by a 3rd party as a stand-in for a TVM (threat & vulnerability management) program. The internal security team's toolset and ability to identify weaknesses were weak, and the CISO believed the best way to identify the threats his team should focus on, in order to best position his defenses, was to be regularly penetration tested. So, four times a year his organization would undergo a structured, scoped and time-boxed penetration test which - of course - Information Security was ready and prepared for.

I'm sure you can already pick out the few glaring issues with this approach, but it continues to disturb me that the defensive posture of an enterprise is allowed to be determined by the testing capability and talent of another organization. Not to take anything away from the company that is currently charged with the penetration testing contract - because I have no reason to doubt their talents - but it's foolish to think that they'll find "all the issues", or even the most important ones. While I think penetration testing is important to identify the things that are glaring, and obvious from a complete outsider's perspective - it should be in no way (in this blogger's humble opinion) authoritative on what you should consider important. Third-party penetration testing does not replace a threat and vulnerability management program, period, end of story. It just can't.

There are too many variables here. The thing that's most important to understand is that penetration testing, in the way it's implemented by CISOs, is ultimately too limited, and has very little chance of being holistic enough. Penetration testing will definitely identify some externally visible, exploitable vulnerabilities if you hire a good crew. Otherwise you'll get what you pay for: the output of a Nessus scan copied and pasted into a PDF. The problem here is that you need a more complete picture. There are nuances. Different testers look for different things, take different approaches, and will likely get different results. You need a consistent, repeatable, and continuous approach to identifying your vulnerabilities, supplemented by penetration testing. You simply can't swap out a TVM program for even regular penetration testing. It won't work.

Penetration testing leads to security

An organization, any organization, cannot simply test itself secure. That's as insane as an auto manufacturer crashing cars until they stop failing crash tests. You still have to actually fix the issues! And we all know how that goes. How many of you have stories where you go out and test one of your clients, only to discover that nothing, or barely anything, has been 'fixed' from the last round of testing?

While penetration testing, done right, is definitely a good way to identify exploitable, visible security issues in your enterprise, it's not going to make you more secure unless you do something about the problems. Therein lies the challenge... too many CISOs are looking for someone to come in, find nothing wrong, and move on. We call this complying with penetration testing requirements - not security.

Good security leads to good security. Whether you're hiring outside firms to perform penetration testing or not. There is no substitute for sound strategy, executed well and with purpose and executive leadership's backing.

What's the point then?

You may think I'm down on penetration testing at this point. You're wrong. I think there is a time and place for what is one of the most important validation activities a security program can perform. I stress that this is a validation activity - once you've shored up your issues, you seek to validate your posture with a good, thorough test.

For those enterprise CISOs who are building or optimizing their security program, penetration testing is a validation exercise. First and foremost, you need to know what your high-value assets are. There is no substitute for this, and neither penetration testing nor a crystal ball will help you here. Identification of critical assets is a primary activity of any security program, and everything you do will build from that point. Next, make sure you've built a solid TVM infrastructure, with good policies and practices. Ensure you have a workable definition of critical, and of how you make go/no-go decisions when it comes to remediating, deferring a fix, or simply accepting a risk. Then make sure you have the necessary backing to ensure that you can execute when it's time. Once you've done all that, and you're sure you've done enough internal test-fix rounds, have someone perform a thorough penetration test on your organization to show you all the things you've missed or simply not thought about. It's amazing how many times someone can get at a high-value target through what we perceive as a low-value asset...

Lastly, don't get too mad at your 3rd party penetration testing organization for failing to identify the avenue of infiltration that caused your big breach. There are a lot of factors that go into what is considered a 'good' penetration test - and many of the failings fall on the shoulders of the client...but that's a discussion for another time.

Wednesday, April 23, 2014

Best Practices - The Only Thing Worse Than Compliance

There's only 1 thing worse than hearing a CISO talk about their organization's culture of compliance. There is only 1 thing which makes compliance sound like a worthy goal. There is only 1 thing that makes my skin crawl when someone in an enterprise security role discusses it.

That one thing is hearing a CISO proudly announce his organization's adherence to "best practice". What does that even mean?! Let's ask Wikipedia-
"A best practice is a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark."
That definition makes sense. Except... in security, every organization is just different enough that what works great for one enterprise will likely fail miserably for another. I know everyone believes their enterprise is a special snowflake, and ultimately none of them are that different - but there are enough differences that the term best practice is almost meaningless, in my humble opinion.

What makes this whole matter worse is that while the security community laments that enterprises are largely made up of static defenses, their adversaries are highly dynamic. Make no mistake - your adversaries don't follow "best practice" when it comes to penetrating your weak defenses. They scout you, identify a weakness, and exploit it. Done.

Sunday, April 6, 2014

The Great WindowsXP Cataclysm - Part 1

This post is cross-posted to my HP Corp blog as well at http://hp.com/go/white-rabbit
The end is nigh!

Let me start off this two-part series by saying that I survived the first time this happened. If you've been around IT a long time you may remember an operating system called WindowsNT 4.0 - and I was there when it finally, for real this time, truly and for sure went end-of-life. I think there are many parallels between what happened then and where we are today.

Saturday, March 29, 2014

Analyzing the Target Breach "Kill Chain Analysis" Report

-- If you haven't read it yet, the document "A "Kill Chain" Analysis of the 2013 Target Data Breach" is a must-read for anyone in the role of enterprise [cyber] defense.

I've been studying recent breaches through the looking glass of the "Lockheed Martin Kill Chain". If you'd like a primer on the importance and background of the kill chain methodology you should read Rodrigo Bijou's fantastic analysis. The LM kill chain methodology for examination and defense from an attack is actually quite brilliant. It's not necessarily revolutionary - but enterprise security professionals now have a structured and documented way of trying to thwart attackers, and learn from breaches. So it's fair to say that this is something everyone in defense (and oddly enough, offense) should know like the back of their hand.

Tuesday, March 25, 2014

Attribution - The 10 Ton Elephant in the Room

First let me tell you why I'm writing this post, and why I hesitated to write it in the first place. I am not a full-time threat or security researcher - let me just get that out of the way. I'm fully aware I'm not qualified for the in-depth attribution conversation, which I'll leave to the experts, but there are many things that still fall into my wheelhouse. So here is a semi-organized collection of my thoughts on this specific topic of attribution in cyber.

The current discussion on the DailyDave list regarding the APT1 report Mandiant put out (one year on) is seriously boiling my bunny(tm).

Monday, March 10, 2014

Here a box. There a box. Everywhere a breach. Notes from RSA 2014

TL;DR - More of the same, and security is still a 1U 'solution' that fails every time, eventually.

Hey everyone, I’m writing you from the settled dust of RSA Conference 2014. In typical fashion I made grandiose plans to meet up with people I’d not seen in years, and meet people I only knew by a handle over Twitter or some other online forum … and it all went to hell. Best laid plans and all that, right? Every year RSA Conference is the same. You show up in San Francisco and hit the ground in a fast sprint. Although I don’t feel like I was sprinting so much as the ground underneath me was moving so fast I could only keep up by running my hardest. Analogies aside, I ended up with a talk, a panel and some booth time, and of course time with our often very interesting client base. Then I made the mistake of walking the showroom floors. That’s right, there’s an s at the end of that word, because there were in fact two sides of Moscone this year that were used for exhibition.

Sunday, February 16, 2014

Entry level hiring in InfoSec - the comedy of errors

I have a good friend who is trying to get work as entry level InfoSec talent. He's a distinguished army vet, a family man, and genuinely the kind of person I'd love to live next door to. He's never worked specifically in Information Security, but he can talk processes, tools, and technologies, and I feel like he's one of those rare people who get it when it comes to making relevant policy decisions for enterprise security.

I bring him up because the guy can't seem to get a break.

You see, he doesn't have any real InfoSec experience to speak of, and while he's doing the certifications thing and, as I've already said, knows his stuff - it's a weird world out there. I started asking around my circles, and the conclusion I'm reaching is that hiring at the lower levels of the Information Security talent spectrum is an absolute train wreck.


It seems that every entry level gig I've been able to dig up that would be even remotely worthwhile (for loose definitions of worthwhile) requires ~2 years of experience and a CISSP. Say what?

He told me the other day that in an otherwise promising interview process he was asked about specific flags for tools like NMAP and others ... Say what?

So let me get this straight: to get an entry level job you have to already have 2+ years of relevant work experience and the ~5 years of practical experience required for a CISSP? What definition of entry level does that match? Certainly not one I'm aware of.

What this industry is doing is effectively filtering out those who are eager to provide fresh perspectives and alternative viewpoints from the outside, at a time when we are absolutely desperate for exactly that. I talked to a director of DFIR at a global financial services firm who has actually stopped hiring people with infosec backgrounds and started hiring accountants and similar graduates right out of college. As it happens, he needs people who can do forensic accounting and DFIR work - his reasoning being that you can teach the tools and techniques needed to be a good response analyst, but you absolutely cannot fake that external perspective.

So why the hell is this happening? Myopia... new song, same lyrics as before.

Hiring managers who have no clue what they actually need look for 'penetration testers' and people who know the specific technologies they're currently using thinking this makes a good employee. Wrong. You should never hire someone based on whether they're intimately familiar with the details of your current setup - hell I would have failed many of these job interviews! What you should be looking for is someone who says "yes, I'm familiar with that tool, it does x, y, z, and the way to figure out the detailed command line switches is flag --h (or whatever)" ...

Bottom line - you need people who can learn and are smart enough to know when to go look something up in an intelligent way. "I don't know that answer, but it'll take me 10 seconds to get it" should be more than adequate... but it's not, and these jobs are going to people stuck in that same rut we have a problem with now. People who do the same job, day in and day out, same technologies, same principles, and never think outside their little boxes. This is such a recipe for failure I can't even begin to express it here... just look around at your peers in the industry and you should see many examples of it.
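That "look it up in 10 seconds" habit is literally a one-liner at the shell. A minimal sketch - nmap is the tool from the interview story, grep merely stands in as a tool that's guaranteed to be installed, and the flags searched for are illustrative:

```shell
# Nobody memorizes flag tables; you query the tool's own help text.
# Example: which flag makes grep search directories recursively?
grep --help | grep -i "recursive"

# The identical habit applied to the interview question about nmap
# (commented out in case nmap isn't on the PATH):
#   nmap --help | grep -i "scan"
#   man nmap        # full option reference
```

The candidate who can demonstrate this lookup reflex is worth more than one who happens to have memorized your current toolchain.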

/Rant over ... but seriously this is nuts.

On a serious note, if someone out there is looking for a strong analytic mind, someone who questions and has that special drive to be an InfoSec revolutionary while supporting and bettering your processes today... let me know, I'd love to help out a friend.

Saturday, February 1, 2014

Guest Post: Follow up to "Where Risk Calculations Fall Apart [Again]"

In a previous post, "Where Risk Calculations Fall Apart [Again]", I made the argument that a complex formula variable in a risk calculation like "likelihood-of-exploit" is at best undesirable and at worst detrimental, if not nonsensical. I posted the blog link to Twitter and, as expected, debate struck up. I think I'm going to write another follow-up on this because there still seems to be some confusion about what I am arguing ... I appreciate all the replies and discussion so far. I even received an email from a colleague who agreed with my viewpoint and had put together a very comprehensive reply that couldn't fit into the comments section, so instead here it is in its entirety ... I encourage you to read Heath's lengthy, but extremely well-thought-out, reply.

Wednesday, January 29, 2014

Where risk calculations fall apart [again]

I suspect this may upset some people who believe these types of things are possible, or are even performing such actions today - and to those folks I apologize in advance but this is merely my opinion.

This morning, one of the few people who actually understand application/software security, Jeremiah Grossman of White Hat, dropped an interesting tweet. Lots of intelligent people replied, and what seemed like an interesting debate was unfolding.

Then Dan Cornell said something interesting, which got me thinking.

Monday, January 13, 2014

On withdrawing your [RSA Conference] talk in protest

By now the news has settled a bit in people's brains, that RSA (the company) was allegedly paid by the NSA some $10M to weaken encryption. Reuters broke the story with this quote:
"Documents leaked by former NSA contractor Edward Snowden show that the NSA created and promulgated a flawed formula for generating random numbers to create a "back door" in encryption products, the New York Times reported in September."
Enough about the alleged wrongdoings of an encryption company and our own National Security Agency. Whether they did it, or they didn't, needs to be vetted in public, and RSA not denying the allegations is making this issue even more interesting. But let's talk about some of the fallout in the security community.

What has become interesting is the slow trickle of #InfoSec echo chamber big-shots 'cancelling their talk' at RSA. Now, I'm not criticizing anyone's moral imperative ... but if you're cancelling your talk/training/etc. long after many of the attendees have purchased their tickets and scheduled their attendance - who are you really hurting? This is a sticking point with me. If you're going to take a stand against RSA's alleged malfeasance, then you should do it in a way that creates the least amount of collateral damage, and cancelling your talk or training is, in my personal opinion, a poor choice.

So, here are a few things you could do instead of cancelling your appearance and screwing over attendees:

  1. Make a T-shirt that says "RSA has violated our trust" and wear it during your talk
  2. Take 2 minutes at the start of your talk, and discuss the issue you're taking with RSA's alleged behavior
  3. Blog about the issue and publicize it
  4. Change your talk, without telling the organizers, to be about the damage that their alleged wrongdoing has caused
  5. Speak at the conference, but refuse to give RSA any positive press
  6. Speak at Security BSides SF and draw attention to the issue
  7. Make a sign and stand outside the RSA Conference venue in protest
  8. Refuse to buy/use/endorse RSA products/services
  9. Urge others to refuse to buy/use/endorse RSA products/services
  10. Work with the industry to identify and flag uses of the weakened crypto component in software packages - as a vulnerability finding
..there are, of course, many more ways to protest. You don't need to hurt the attendees in the process, and I think that's exactly what cancelling your talk and refusing to speak does in the end.
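For what it's worth, item 10 could start out as nothing more than a grep triage. A minimal sketch, assuming an ERE-capable grep; the identifier pattern and the sample source line are fabrications for illustration, since the actual symbol names referencing Dual_EC_DRBG vary from library to library:

```shell
# Crude triage for source references to the Dual_EC_DRBG generator
# (the RNG at issue). This only flags candidates for human review;
# it is not a detector, and the pattern below is an assumption.
PATTERN='dual_?ec(_drbg)?'

# Demonstration against a fabricated source line:
echo 'rng = dual_ec_drbg_init(curve_p256);' | grep -iE "$PATTERN"

# Against a real source tree you would run something like:
#   grep -rniE "$PATTERN" /path/to/vendor/source
```

Even a rough pass like this gives the "flag it as a vulnerability finding" idea a concrete starting point.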

My $0.1999 ...if you disagree or believe I'm wrong - use the comments section or catch me on Twitter.