Wednesday, September 24, 2014

Software Security - Hackable Even When It's Secure

On a recent call, one of the smartest technical folks I can name said something that made me reach for a notepad, to take the idea down for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even if they've managed to avoid some of the key issues.

Let me explain this a little deeper, because this thought merits such a discussion.

Think about what you go through if you're testing a web application. I can speak to this type of activity since it was my focus for a significant portion of my professional career. Essentially the whole problem breaks down to being able to define what the word secure means. Many of the organizations I've first-hand witnessed stand up a software security program over the years follow the standard OWASP Top 10. It's relatively easy to understand, it's fairly well maintained, and it's relatively easy to test software against. It's hard to argue that the OWASP Top 10 isn't the de facto standard for determining whether a piece of software is secure or not.

Herein lies the problem. As many of you who do software security testing can attest, without at least a structured framework (aka checklist) to test against, the testing process becomes never-ending. I don't know about you, but I've never had the luxury of taking all the time I needed; everything always needed to go live yesterday, and my team and I were always the speed bump on the way to production readiness. So we first settled on making sure none of the OWASP Top 10 were present in the software/applications we tested. Since this surfaced an unreal number of bugs, we narrowed scope down to just the OWASP Top 2. If we could eliminate injection and cross-site scripting, the applications would be significantly more secure, and everything would be better.
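Since the paragraph above is about eliminating injection and cross-site scripting specifically, here's a minimal sketch of the two standard fixes - parameterized queries and output encoding. The schema and names are invented purely for illustration:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, display_name TEXT)")
conn.execute("INSERT INTO users (name, display_name) VALUES (?, ?)",
             ("mallory", "<script>alert(1)</script>"))

def find_user(username):
    # Parameterized query: input is bound as data, never concatenated
    # into the SQL string, which closes the classic injection hole.
    cur = conn.execute("SELECT id, display_name FROM users WHERE name = ?",
                       (username,))
    return cur.fetchone()

def render_greeting(display_name):
    # Encoding on output neutralizes stored/reflected XSS: markup in
    # the stored name renders as text instead of executing as HTML.
    return "<p>Hello, %s!</p>" % html.escape(display_name)

row = find_user("mallory' OR '1'='1")   # injection attempt matches nothing
safe = render_greeting("<script>alert(1)</script>")
```

Two functions, two bug classes gone - which is exactly why scoping down to the "Top 2" felt so tractable at the time.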

Another issue, then. After all that testing and box-checking, when we were fairly sure the application didn't have remote file includes, cross-site scripting (XSS), SQL injection or any of that other critical stuff, we allowed the app to go live - and it quickly got hacked. The issue this caused for us was not only one of credibility, but also of confusion. How could the app not have any of those critical vulnerabilities but still get easily hacked?!

Now back to the issue at hand.

The fact is that even when you've managed to avoid all the common programming mistakes and well-known vulnerabilities, you can still produce a vulnerable application. Look at what eBay is going through right now. Even though there may not be any XSS or SQLi in their code, they still have issues allowing people to take over accounts. Why? Because there is more to securing an application than making sure there aren't any coding mistakes. Fully removing the OWASP Top 10 (good luck with that!) from all your code bases may make your applications safer than they are now - but it won't make them secure. And therein lies the problem.

When you hand your application over to someone who is going to test it for code issues like the OWASP Top 10, and only that, you're going to miss massive bugs that may still lurk in your code. Heartbleed anyone? Maybe there is a logic flaw in your code. Maybe there is a procedural mistake that allows for someone to bypass a critical security mechanism. Maybe you've forgotten to remove your QA testing user from your production code. Thing is, you may not actually know if you just test it for app security issues with traditional or even emerging tools. Static analysis? Nope. Dynamic analysis? Nope. Manual code review? Maybe.

The ugly truth is that unless you have someone who not only understands what the code should do under normal conditions - but also what it should never do, you will continue to have applications with security issues. This is why automated scanners fail. This is why static analysis tools fail. This is why penetration testers can still fail - unless they're thinking outside the code and thinking in terms of application functionality and performance.
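To make the "what it should never do" point concrete, here's a toy sketch - every name and price is invented - of a business-logic flaw that contains no injection and no XSS, yet hands an attacker a discount no scanner would flag:

```python
def checkout_total(cart):
    # Every field is syntactically "clean" - no injection, no XSS -
    # yet the logic blindly trusts the client-supplied quantity.
    return sum(item["price"] * item["qty"] for item in cart)

# A scanner sees nothing wrong; a thinking tester asks "what should
# this never do?" and sends a negative quantity:
cart = [
    {"sku": "TV-4K",     "price": 899.00, "qty": 1},
    {"sku": "GIFT-CARD", "price": 100.00, "qty": -8},  # refund abuse
]
total = checkout_total(cart)   # an $899 TV rings up at $99

def checkout_total_fixed(cart):
    # The fix is a business rule, not an encoding routine - which is
    # why only someone who understands the application's purpose
    # would have written (or tested for) it.
    if any(item["qty"] < 1 or item["price"] < 0 for item in cart):
        raise ValueError("rejected: quantity/price outside business rules")
    return sum(item["price"] * item["qty"] for item in cart)
```

No tool that only hunts coding mistakes would ever raise its hand here; the flaw exists only relative to what the business intended.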

The reality is that for applications that simply can't be allowed to fail, you not only need them tested by some brilliant security and development minds, but also by someone who understands that beautiful combination of software development, security, and application business processes and design. Someone who looks at your application and says: "You know what would be interesting?"...

In my mind this goes a long way toward explaining why there are so many failing software security programs out there in the enterprise. We seem to be checking all the right boxes, testing for all the right things, and still coming up short. Maybe it's because the structural integrity hasn't been validated by the demolitions expert.

Test your applications and software. Go beyond what everyone tells you to check and look deep into the business processes to understand how entire mechanisms can be abused or bypassed entirely. That's how we're going to get a step closer to having better, safer, more secure code.

Friday, September 5, 2014

Managing Security in a Highly Decentralized Business Model

Information Security leadership has been, and will likely continue to be, part politicking, part sales, part marketing, and part security. As anyone who has been a security leader or CISO can attest, issuing edicts to the business is as easy as it is fruitless - getting positive results in all but the most strictly regulated environments is nearly impossible. In highly centralized organizations, at least, the CISO stands a chance, since the organization likely has common goals, processes, and capital spending models. When you get to an organization that operates in a highly distributed and decentralized manner, the task of keeping pace on security grows to epic proportions.

As I was performing a recent ISO 27002 controls audit against one of these highly decentralized organizations, the magnitude of their challenge really hit me. While the specific industry isn't relevant to this example, I can simply say that they are in the business of making, testing and selling stuff. Parts of their business make things. Parts of their business test things. And parts of their business sell the things the other parts make and test, for various use-cases. Some of the business is heavily regulated. Some of the business isn't regulated at all. All of the enterprise is connected via a single network, with centralized IT services, applications and management. I could stop right here and you'd understand why this is nearly impossible to make universally applicable.

Bad Math

What makes this even more difficult for the security organization is that their core team is just 0.04% of the overall company staff. Their full staff complement, including recently hired members, is less than 5% of the total IT staff count. The security device-to-staffer ratio is horrible, their budget is insignificant, and for all intents and purposes the security function is relatively new compared to the rest of the enterprise. I'm not a statistician, or particularly good at math, but even I know those numbers don't work out well.

Diversity Challenges

Security in the enterprise is largely about building and operationalizing repeatable patterns of process and methodology to achieve scale. This works well even in very large, but very centralized and uniform, enterprises. The problem is that when you get into enterprises that are extremely diverse in business practices, technologies, goals and compliance initiatives, repeatable patterns fail to scale well, since you end up building a new, unique set for every different piece of the organization.

In this situation the only chance enterprise security has is local representation from inside the business. In my experience, though, you're not going to find many security experts within these businesses, which often have "an IT guy/gal" or three. The situation just keeps getting worse.

Think about this - from an operating-platforms perspective you may have some OS/2, lots of UNIX variants, Mac OS, Windows from WinNT 4.0 through Windows 8.1, and then some device-specific platforms like VxWorks. If you're lucky all you have is Ethernet (Category 5/6) cabling and nothing else... Now add specialized programs, PLCs, Industrial Control Systems (ICS), and it gets messy fast.

At this point it almost doesn't matter how many security resources you have, the only way you'll scale is automation.

The Catch-22

Sometimes things become a chicken-vs-egg problem. In order to scale with fewer resources, your security organization clearly needs more automation. The problem is that more automation tends to create the need for more security resources to manage it (you don't actually believe the marketing or sales hype that these things manage themselves, do you?). Either way - you don't have the people to do this.

Bad, Meet Worse

Where things go from bad to untenable is when business alignment and co-operation aren't ideal. As in real life, not all business units will be friendly or even want to deal with "corporate". In that case you're not only facing the impossible challenge of addressing the business's security issues, but you're fighting politics as well. Sometimes you just cannot win.

If you factor in that generally security isn't the most loved part of the IT organization because of its history of being "the no people" you quickly realize that the deck is heavily stacked against you. There are certainly ample opportunities to trip on your own untied shoelaces and fall flat on your face. The key to not doing this lies in a multi-step process which includes assessment, prioritization, buy-in, and effective operationalization.

Steering the Titanic by Committee

As the CISO or security leader of a highly decentralized enterprise you're not going to get many easy wins. You're probably not going to do a very good job at preventing and preempting that next breach. Heck, you may not even be able to detect or respond in a timely fashion. But the key to not failing as hard is to not go it alone. Even if you have a centralized security team of 100+, you're still going to fall prey to these same challenges. You need support from the various edge-cases in your enterprise structure. You need help from your corporate counterparts, and from your outliers.

Cooperatively working towards better security is hard. It may be an order of magnitude harder than anything else you can do from a central control model - but if that's the only operating model you have available to you then it's time to make lemonade. In the next few posts I'll try to apply some of the lessons learned and recommendations from a series of these types of engagements. Maybe some of them will help you make better lemonade. Or figure out when it's time to move to a new lemonade stand.

Wednesday, August 20, 2014

The Indelicate Balance Between "Keep it Working" and "Keep It Safe"

Security professionals continue to fool ourselves into believing we walk a delicate balance between keeping the business functional and keeping it safe (secure). This is, in the belief of many people including me, a lie. There is no delicate balance. The notion of being able to balance these on a teeter-totter looks like this:

Guess which one the 'safe and secure' is? Exactly.

An interesting conversation (warning: profanity, not so safe for office) happened earlier today. And as per the usual, someone very smart and seasoned in the enterprise side of defense made the point clear.

The bottom line is this:
  You can't ever cross the line into 'breaking business stuff' because you likely never get the chance again.

Each time the pendulum swings into the "secure" side of the spectrum it stays only for a tiny fraction of time, and we as security professionals have to work very hard to make it stick, or it swings back the other way... quickly.

So the question then is, how do we "make it stick"?

Simple! We demonstrate the business value of good security (aka keeping the enterprise safe). Of course, there are few things simpler than this - among them tightrope walking the Grand Canyon, being an astronaut, and nuclear physics. Whoops, hyperbole ran away with me there for a moment, sorry. Back to reality.

So the key is to make security sticky. You need to align security to something the business can get behind. Hence, business value is so important to measure. But if you're still stuck reporting useless metrics - like how many port scans your firewall blocked, or how many SQL Injection instances your Software Security program identified - you're miles away from demonstrating business value.

This brings me back to KPIs, and the development of data points which strongly align to business/enterprise goals. All of this is predicated on someone in the security organization (or everyone?) being alert and aware to what the business is trying to accomplish at the board/strategic level. Does your organization have this type of awareness and knowledge? Are you leveraging it?
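As a purely hypothetical illustration - every system name and dollar figure below is invented - a business-aligned data point might weight patch latency by the revenue each system carries, instead of counting blocked scans:

```python
blocked_port_scans = 1_240_000   # the classic "useless" metric

# Hypothetical alternative: time-to-patch on revenue-carrying systems,
# weighted by what an hour of outage on each one costs the business.
revenue_systems = {
    "order-api": {"days_to_patch": 3,  "hourly_revenue": 40_000},
    "payments":  {"days_to_patch": 2,  "hourly_revenue": 55_000},
    "marketing": {"days_to_patch": 21, "hourly_revenue": 500},
}

def exposure_weight(sys):
    # Crude proxy for business risk: how long a hole stayed open,
    # scaled by the revenue exposed while it was open.
    return sys["days_to_patch"] * sys["hourly_revenue"]

worst = max(revenue_systems, key=lambda k: exposure_weight(revenue_systems[k]))
# The board hears "our order pipeline carried the largest
# revenue-weighted exposure window" - not a port-scan count.
```

The arithmetic is trivial; the hard part, as the paragraph above says, is knowing which systems and which dollar figures matter at the board level.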

I can tell you that if you're not, the picture above will continue to be your fate... from yesterday to today and on into the future.

Wednesday, August 13, 2014

Getting in Our Own Way

The security community has this widely-understood reputation for self-destruction. This is not to say that other communities of professionals don't have this issue, but I don't know if the negative impact potential is as great. Clearly I'm not an expert in all fields, so I'll just call this a hunch based on unscientific gut feeling.

What I do see, though, much like with the efforts of the "I am the Cavalry" movement which has sent an open letter to the auto industry, is resentment and dissent without much backing. In an industry which still has more questions than answers - and it gets worse every day - when someone stands up with a possible effort towards pushing a solution, you quickly become a lightning rod for nay-sayers. Why is that?

One of my colleagues, a veteran CISO, has a potential answer - which for the record I'm uncomfortable with. He surmises that the collective "we" (as in the security community) aren't actually interested in solving problems, because the real solutions require "soft skills like personality" and business savvy in addition to technical acumen. It turns out that taking the time to understand a problem, and attempting to solve it (or at least move the ball forward), is very hard. With the plethora of security problems in nearly everything that has electricity flowing to it, it's near-trivial to find bugs. Some of these bugs are severe; some of them are the same 'ol, same 'ol SQL injection and buffer overflows we identified over a decade ago but still haven't solved. So finding problems isn't rocket science - actually presenting real, workable solutions is the trick. This is just my humble opinion based on my time in the enterprise and in consulting.

I once worked for a CISO who told his team that he didn't want to hear about more problems until we had a proposed solution. Furthermore, I'm all for constructive criticism to help contribute to the solution - but don't attack the person or the proposed solution just to do it. Don't be that person.

I think it may have been Jeff Moss that I heard say it - "Put up or shut up"... so give me your solution idea, or stop whining that things are broken.

Friday, August 8, 2014

Why Your Enterprise Most Likely Doesn't Have a Zero-Day Problem

It should come as no surprise that at Black Hat 2014 this week there was an enormous number of invaluable conversations, as always. We talked about attacks, exploits and exploitation techniques, as well as defenses basic and exotic. A few of these conversations ended up in the same place, logically, and have led me to conclude that the majority of enterprises out there don't have a zero-day problem. Let me explain...

It should by now be clear, if you're a security professional, that the average enterprise struggles with even the most basic security hygiene. This of course makes life difficult when we start to pile on cross-silo dependencies - for example configuration management - for security effectiveness. While I certainly don't mean to imply that no enterprise can do the basics, I have yet to meet a CISO who is comfortable with the fundamentals of asset, configuration and user management on an enterprise scale and in a timely fashion.

That being said, I further submit that zero-day attacks and exploits are an advanced level of attack, typically reserved for targeted organizations whose significant levels of security capability mandate that advanced level of effort. Basically, if you've got your fundamentals right, you're doing good block-and-tackle security, and your users are well educated to be skeptical of links and things sent to them, then the determined attacker will be forced to turn to exploiting as-yet-unknown and unpatched weaknesses in your software to get through your defenses. The truth is, I have come to believe, that the vast majority of enterprises just don't have their act together enough to merit that level of effort from the attacker.

From what I know, burning a zero-day exploit is a non-trivial matter for an attacker. Zero-days, while still fairly plentiful, have a cost associated with them, and an attacker will use one only once he or she has exhausted the typical, and often easy, methods of breaching your security. There are simply too many options further down the chain. You need look no further than a conversation with David Kennedy of TrustedSec, who makes it clear exploits aren't required to break in. All that's required, in still far too many instances, is sending someone in the organization a malicious link or a malicious file, and they'll open the door and show you their closely-guarded intellectual property... and probably hold the door for you as you walk out with it. Yes, it is indeed that simple to defeat corporate security, with mind-boggling results.

So why burn a zero-day? Attackers typically won't unless they've encountered roadblocks in other avenues. Since PowerShell is installed on every new Windows PC, it's the perfect tool to use to execute an attack, legitimately, on a target host. All the user has to do is let you in...and we all know that most users will still click on the lure of a dancing bear or the promise of nude photos of their favorite celebrity.

So while your enterprise security organization may actually encounter some malware with zero-day exploits in them, they likely aren't targeted at your organization. The problem your average enterprise has is poor fundamentals - leaving you open to all manner of exploit and penetration without the use of any more advanced techniques than "asking the user for permission". So why would an attacker burn a precious zero-day against you? They likely wouldn't. Unless, you know, you're a target.

Friday, August 1, 2014

Security on a Weak IT Foundation

The interesting question of maturity

Earlier this week, Bill Burns asked me this question...
"can a security team have a higher level of maturity than the IT team that handles its operational tasks?"
It's an interesting question, and one that certainly requires some level of thought. My off-the-top-of-my-head response was - well... no. This is clearly a "lowest common denominator" problem.

The more I thought about it, the more this seemed like an obvious answer - a CMMI level 2 IT organization was never going to support a CMMI level 3-5 security organization. That should seem rather obvious. But the more I thought about this, the more I think that a CMMI level 2 IT organization can't support anything but an n-1 security organization. Let me explain my thinking here-

Weak foundations, weak security

It should be rather obvious that a weak foundation cannot support a tall, strong structure. You simply don't have the stuff it takes to hold it all up, from a building perspective.

In the IT world, if you have weak operational IT practices, you'll never get anything better than weak security practices. For example, let's look at how IT views and assesses assets on the corporate network. If IT can't tell you, on demand, every asset on the corporate network right now, with troves of accurate meta-data, then you can't possibly expect to build a strong security operations program on top of that. Security needs foundational capabilities such as knowing what's on the network - with loads of meta-data about each asset - in order to make decisions about the risks those assets pose.

Decomposing that even further, to the simplest blocks - if IT doesn't know what's most critical to the business in terms of supporting function, security has absolutely zero chance of successfully crafting a defensive response strategy or operational plan. If an asset (an IP address, for example) is suspected of being malicious or compromised, meta-data is needed to decide whether the alert is potentially a false positive, or whether it even warrants a response (maybe it's just some lab machine which can simply be turned off). When we were kids, G.I. Joe taught us that knowing was half the battle - and not knowing means you're lost.
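A tiny sketch of that triage decision - the inventory, fields and rules below are entirely hypothetical - shows how meta-data turns a bare IP address into an actual decision:

```python
# Hypothetical asset inventory keyed by IP, with the meta-data
# security needs to triage an alert.
inventory = {
    "10.1.4.22": {"owner": "payments", "env": "prod", "criticality": "high"},
    "10.9.8.77": {"owner": "qa-lab",   "env": "lab",  "criticality": "low"},
}

def triage(alert_ip):
    asset = inventory.get(alert_ip)
    if asset is None:
        # No meta-data at all: security is flying blind.
        return "unknown-asset: investigate the inventory gap first"
    if asset["env"] == "lab":
        return "low priority: isolate or power off the lab machine"
    if asset["criticality"] == "high":
        return "page the on-call responder for %s" % asset["owner"]
    return "queue for standard review"

print(triage("10.1.4.22"))  # page the on-call responder for payments
```

Without the lookup table - that is, without the IT foundation - every branch of this function collapses into "unknown asset", which is exactly the point.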

Weak foundations, weaker security

In an effort to try to understand this more, my line of thinking leads me to believe that organizations with a particular CMMI score when it comes to general IT, can only support an n-1 CMMI score for security maturity.

The reason I believe this is that security operations, by their very nature, cross many IT silos and require well-thought-out and precisely executed workflows and communication to function well. When you cross team boundaries, silos and responsibilities these inherently break down even a little - thus diminishing what you can build on top of them. Like the great pyramids - the higher you build the more you have to stack inward. Security - at least in my narrow view - is sitting right at the top of the IT ladder, thus making it fairly difficult to do well if the base of the IT operations is shaky.


The long and short of it is this - if your enterprise has poor IT hygiene, and ranks low on the CMMI scale - focus security effort and resources on helping IT level up before you start to drop in expensive and complicated security kit. In essence, flashy boxes or solutions won't do you much good when you try to operationalize them on top of poorly functioning IT infrastructure, processes and methodologies.

Saturday, July 26, 2014

Ad-Hoc Security's Surprisingly Negative Residual Effect

Security is fraught with the ad-hoc approach. Some would argue that the very nature of what we do in the Information Security industry necessitates a level of ad-hoc-ness and that to try and get away from it entirely is foolish.

CISOs are challenged with this very thing, every hour of every day. Threats pop up that they aren't prepared for, and present an imminent danger to the business, so they must react. These reactions are necessary to keep the business operational, no one will argue that, but it is when they have a residual effect on the enterprise that we run into problems.

It's the old snowflake rolling down the mountain analogy... sort of.

How it starts

Since no security program I'm aware of has managed to account for all the threats it will encounter, let's take any one of them as an example. The threat may be some semi-custom malware which targets a particular piece of software in their industry vertical, or it may simply be something as common as a banking trojan. The CISO realizes that they simply don't have the supporting infrastructure to mitigate or help in remediation of the threat - so off to the ad-hoc bin we go.

There are, in general, three possible courses of action which follow.

First, the ever-popular "we'll write some code" option. Many CISOs have access to some amazing security talent, and thus the ability to whip up a custom-coded solution which takes care of the issue. Quite common. I'm not even saying this is a bad option! If you've got the talent, why not utilize it to its full potential?

Second, the almost-as-popular "hire an army of consultants" option. External consultants descend on your enterprise and identify, contain, and work to mitigate the current threat. Your hope is that they document their work, and maybe leave behind some clues as to what was done, why, and how you can repeat the procedure in the future.

Now for the most popular option, unfortunately, if the issue is big enough: the "let's buy a box" option. CISOs who feel overwhelmed look to their partners, and often to the analysts, to provide them with options. Not surprisingly, much of the time the 'solution' comes in a nice 2U rack-mountable appliance, with a yearly maintenance contract.

With the threat, at least temporarily, addressed, it's on to the next big issue. Playing whack-a-mole is the modus operandi for all too many in security leadership... and it's not a commentary on their effectiveness or abilities, it's just simply the way it is.

Once you've moved on from the previous problem what we have left is what is commonly referred to as a "one-off".


Entirely too many networks are simply littered with "one-offs" - solutions which once served some point purpose but have since been forgotten, fallen out of maintenance or support, or simply no longer serve the greater mission of the enterprise security organization. So many of these "one-offs" don't integrate well, aren't interoperable, or don't scale... or worse, they're simply not manageable at the level your organization needs.

The problem with ad-hoc security measures is that we tend to create too many one-offs like this. Databases getting ripped off through the web apps? Drop in a WAF (Web Application Firewall). PCI requires you to log? Drop in a low-cost SIEM solution. Having difficulty managing the Java runtime in your environment... err... let's leave that one alone for now. You get the idea.

One of the biggest transgressors in this space is the Identity and Access Management tools in an enterprise. Since the problem is so challenging, enterprises tend to use multiple tools to solve niche, and timely, issues. What's left over is a patchwork of several different IAM tools, identity stores, and rights-management consoles.

The real problem with ad-hoc

The real problem with ad-hoc isn't that there are way too many devices, servers, systems, and tools to keep updated and functional. Yes, this is definitely a problem, but not the problem, in my opinion. The biggest problem is one of resources - and by resources we're talking about people. Human beings need to sleep, eat lunch, hang out at the water cooler and take bio breaks. Humans who spend their time trying to make a few tools play nice are really wasting a lot of time...

The challenge of ad-hoc security is that you end up leaving behind a wake of poorly operationalized hardware, software and processes. This turns into a black hole for your people's time, and I don't have to tell you that this creates opportunities for attackers.

The realization

The unfortunate end-result of ad-hoc security, then, is decreased security. You're not really reducing risk over the long-haul but rather increasing it, due to the increased complexity, resource drain, and low levels of inter-operability. It makes perfect sense then that CISOs who don't take a pre-planned approach feel like they're forever on a hamster-wheel and are never getting anywhere in spite of superhuman efforts.

The better approach

Many of you CISOs and security leaders have already discovered and are implementing program-based security measures. You start by defining a business-aligned security strategy, which pre-plans the 'big picture' approach you will take. You set out the high-level guidance, set timelines, and manage projects with the understanding that things will come up - but you can be ready for them.

This doesn't mean you suddenly stop tactical security measures - you just try to avoid ad-hoc situations which have you dropping in processes and technologies which don't fit in with your long-term goals and strategy. This isn't entirely difficult, but takes having that strategy first!

As always, I look forward to your replies, comments, suggestions and experiences.

Monday, July 21, 2014

Tackling 3rd Party Risk Assessments Through a 3rd Party

In the enterprise, sometimes absurd is the order of the day.

Earlier this week I ended up in a conversation with a colleague about 3rd party risk. We started talking about the kinds of challenges his organization faces, and what he, as the leader of the 3rd party risk program, is up against. As it turns out, when the organization set out to tackle 3rd party risk, a slight mis-calculation was made. Long story short, his group has more than 100 vendors to manage in terms of 3rd party risk. That's 100+ vendors that interact with the network, the data, the applications, the people, and the facilities his enterprise has.

His team is staffed by a whopping 3 people, including him. To put this into perspective: with more than 100 vendors and 3 analysts, each analyst owns at least 33 or so reviews a year. Given roughly 250 business days a year, that leaves a maximum of about 7 working days per 3rd party. Of course, we're not counting vacation days, sick days, or snow days. We're also not counting travel to/from sites to actually do investigative work, or the time it takes to do an analysis, debrief, or any of that.
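The back-of-the-envelope math is worth writing down (taking 100 vendors as the floor for "more than 100"):

```python
# Back-of-the-envelope capacity math from the conversation above.
vendors       = 100    # the floor; the real number is higher
analysts      = 3
business_days = 250    # per year, ignoring vacation/sick/travel

reviews_per_analyst = vendors / analysts                  # ~33.3 each
days_per_vendor     = business_days / reviews_per_analyst # ~7.5 days
```

And 7.5 days is the best case: every additional vendor, sick day or travel day only shrinks that window.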

This started to unravel in my mind, pretty quickly. I pressed my colleague for an answer to how he could possibly achieve any measure of compliance and completeness, to which he answered: "We outsource the evidence gathering to a 3rd party".

My head exploded.

I'm not saying it doesn't make sense, or that there are very many real alternatives - but you have to know how crazy this sounds. They've outsourced the fact-finding portion of 3rd party risk assessments to a 3rd party. BOOM

The truth is that there is a lot he was doing behind the scenes which made this a little easier to swallow. For example, a standard questionnaire was developed, based on a framework they built and approved internally, which minimized the amount of 'thinking' a 3rd party assessor had to do. Each category of required controls had a gradient on which the 3rd party being assessed was graded, and there was really very little room for interpretation. Mostly.
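A sketch of what such a graded questionnaire might reduce to - the scale, categories and answers below are all invented:

```python
# Hypothetical "graded gradient" questionnaire: each control category
# is scored on a fixed maturity scale, so the outsourced assessor
# collects evidence rather than interpreting it.
SCALE = {"none": 0, "partial": 1, "documented": 2, "audited": 3}

def score_vendor(answers):
    # answers maps control category -> observed maturity on the scale;
    # the result is normalized to 0.0-1.0.
    points = [SCALE[v] for v in answers.values()]
    return sum(points) / (len(points) * max(SCALE.values()))

answers = {
    "access-management": "documented",
    "data-handling":     "partial",
    "physical-security": "audited",
    "incident-response": "none",
}
risk_score = 1.0 - score_vendor(answers)   # higher = riskier
```

The fixed scale is what keeps a hundred different assessors producing comparable numbers - the "very little room for interpretation" from the paragraph above.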

If you think about it, I'm confident there are many, many enterprises out there with this minor challenge. Every enterprise does business with at least dozens - on average, hundreds - of 3rd parties, to varying degrees. From your outsourced payroll provider, to the company that shreds your documents once a week, to the company that sends the administrative assistant who sits at the desk, answers calls and surfs Facebook all day. Every enterprise has a vast number of 3rd parties which need to be assessed - and risks identified.

While I'm definitely not crazy enough to think companies should only handle this with internal, trusted employees, I'm not completely convinced hiring out to a 3rd party is that fantastic of an idea either. There is so much to consider. For example, if that 3rd party assessor misses something, are they liable, or does that fall to your company? Ultimately in the court of public opinion - this is a trick question. The answer is always you.

I suppose the long and short of it is that enterprises have little choice but to use a 3rd party to help them manage 3rd party risk. But then the only question is - do they assess that 3rd party which will be doing the 3rd party risk assessments for unnecessary risk? It's enough to make your head spin, I know it gave me a headache just thinking about it.

What do you think the mature 3rd party risk assessment looks like? Do you have leading practices you could share? Contact me as I'd like to share them with our peers, and others who are struggling with this task right now.

Thursday, July 10, 2014

Compliance and Security Seals from a Different Perspective

Compliance attestations. Quality seals like "Hacker Safe!" All of these things bother most security people I know because, to us, they provide very little tangible insight into the security of anything. Or do they? I saw a reply to my blog post on compliance vs. security which made an interesting point - a point, I dare say, I had not really put front-of-mind but probably should have.

Ron Parker was of course correct... and he touched on a much bigger point that this comment was a part of. Much of the time, compliance attestations and 'security badges' (aka "security seals") on websites aren't done for the sake of making the website or product actually more secure - they're done to assure the customer that the site or entity is worthy of their trust and business. This runs contrary to conventional thinking in the security community.

Saturday, July 5, 2014

Critical Infrastructure as the Next "Cyber War"

I'm tired of reading headlines that say stuff like "It's [cyber] the next war!" because not only are they spreading FUD (fear, uncertainty, doubt), but if that were really the case we [as Americans] would already have "lost".

One of the things the FUD-sters like to ballyhoo about is the nation's critical infrastructure and how our power plants, water treatment facilities and chemical processing plants will be [or already are] targets for foreign nation states in a sneaky digital assault. News flash - this has been going on for some time, and while it's crystal clear to anyone paying attention that the nation's critical infrastructure is in a seriously neglected state when it comes to security - this likely isn't America's biggest problem.

Thursday, July 3, 2014

Harmonizing Compliance and Security for the Enterprise - The Introduction

Pursuit of compliance in the enterprise is proving to be a staggeringly bad security investment, if you ask nearly any enterprise security professional. And yet, we continue to see companies who get breached fall back on the same press releases: "We were PCI-DSS compliant! It's not our fault we were breached!"

I ask myself why, every time it happens. I still don't have a good answer.

Monday, June 16, 2014

Choosing the Right Entry Point for a Software Security Program

The topic of software security, or AppSec, has once again cropped up in my recent travels and conversations, so I thought it would be prudent to address it here on the blog. A contact of mine - call him Fred - responsible for software security in an enterprise, was being given a small pool of money and a chance to plan, design, and implement a software security program. The big question on Fred's mind, then, was where to start.

As we talked through the options, and I shared some of the mistakes I've made and watched others make, I advised Fred to be cautious. One of the most damaging mistakes you can make when starting a software security program from scratch is starting in the wrong part of your Software Development Lifecycle (SDLC). This is exacerbated by the fact that many organizations have more than one software development lifecycle, so picking the wrong starting block is quickly amplified.

Sunday, June 15, 2014

Getting Wrapped Around the CISO Reporting Structure Axle

CISOs are in the limelight right now as the parade of data breaches marches on. One of the big topics is the issue of reporting structures. Where should the CISO report? Should the senior information security leader be a company officer? All valid questions, and more.

Tuesday, June 3, 2014

In Defense of Reactive Security

Warning: This post contains a Sun Tzu quote...

Let's start here:
You're driving down the street, minding your own business and doing the speed limit. Both hands are on the wheel, no cell phone in sight, radio turned down to a moderate level, and you're generally driving like the books tell you to. As you approach the intersection where your light is green you take a quick glance to your left, then to your right. All is right, and you have the clear go-ahead. Now as you come into the intersection a child on a skateboard dives into the street in front of you...
In your mind, right now, you've slammed the brakes and are laying on the horn, right?

Every one of us reacts to our environment, it's how we survive. And yet - when you say "reactive" security today you get looks from people like that's a dirty word. Why is that? Much like other circumstances where perfectly reasonable terms and ideas get hijacked ... I blame marketing.

A responsible enterprise security program plans for as many negative scenarios as possible and accounts for them in advance (called being pro-active), then reacts as conditions in the environment change (called being reactive). One without the other simply makes no sense, and yet all the marketing literature has CISOs thinking that being reactive is somehow bad.

It would appear that in the quest to invent new problems for the many 'solutions' out there, the term reactive has been ascribed some meaning I'm not familiar with. To clarify - both reactive and pro-active security measures are required - in harmony.

There's an interesting quote from Sun Tzu that applies here, mostly:
Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.
Pro-active security is better known as strategy. This is all the planning a security leader does based on a survey of their current resources, capabilities, technology and environment - and, if you're lucky, maybe based on history as well. Being pro-active is a great idea; in fact, it's absolutely essential. Anyone who's ever tried to paint a room, lay tile, or heck, even sleep-train children will acknowledge that without a proper plan you may get half-way in before you realize you're lost. There is a divergence from Sun Tzu's quote here, in Information Security. Strategy without tactics, in our industry, is certain failure. I don't mean the type of failure where you get hacked or breached - I mean the type where you get hacked or breached and find out 9 months later because someone reports it to you... or the media calls your PR officer and asks for a quote on the giant breach you've experienced.

Reactive security is better known as tactics. You need tactics. Your organization, and your strategy, is nothing without tactics. The principal reason is that sometimes, just sometimes, the bad guys/gals we all plan for get creative and adjust their behaviors. Sometimes markets shift, and business climates and technologies change. Sometimes a vulnerability is found in something you consider core to your security - SSL, for example - and you have to adjust quickly and decisively. Reactive, or tactical, security is something you can indeed plan for, but only so far... and you have to give yourself and your security program enough flexibility to adapt and adjust.

From experience, one of the biggest issues to date [that I've come across in my clients and personal experience] is that security programs become inflexible, unable to adapt to their changing environments. Once a security strategy is laid out, funding is set, and projects are launched, everything is set in stone. Should needs change, should adversaries surface that we didn't account for, or should new technologies or methods arise - we're left with a shrug of the shoulders and "Well, the budget for this year is set, we can plan for that next year" - which is absolutely insane.

So I give the CISOs whom I advise 3 simple rules to go by:

  1. Develop a strong plan, which has clear goals and has the ability to be flexible when needed
  2. Develop a tactical capability to pivot on-the-fly as needs, environments, and adversaries change
  3. Expect to have to adapt either or both of those

Tuesday, May 27, 2014

Hacking the Registry to Keep WindowsXP Updating - A Bad, Bad Idea

When WindowsXP support officially expired a while back, I wrote a blog post titled "The Great WindowsXP Cataclysm" which talked about some of the reasons organizations had for staying on the antiquated operating system. Some of those reasons were valid, especially if you were running a Point-of-Sale (POS) terminal system based on the WindowsXP Service Pack 3 variant called "Windows Embedded POSReady 2009". According to Microsoft's lifecycle support site, this POSReady system runs embedded WindowsXP and is supported until April 9th, 2019.

Leave it to the security community to figure out that a simple registry key identifying the POSReady 2009 operating system could be added to the registry of a WindowsXP machine to keep it getting updates. Well... sort of. This is where it gets weird. Read the ZDNet article with Microsoft's response carefully... and notice that while they admit this will update WindowsXP systems, there is a string of caveats that should make you think twice.

It's important to acknowledge that this hack (and that's all this really is) essentially tricks the update service into thinking your OS is a point-of-sale WindowsXP embedded device. The essential question, which Microsoft hints at, is just how different is WindowsXP from WindowsXP Embedded? The answer is - quite a bit, actually. Check out the paper on the differences between WindowsXP Professional and WindowsXP Embedded and decide for yourself whether you're willing to take that risk. Architecturally the two operating systems are close, obviously, since they're both based off the same kernel. But once you start getting into the add-ons and run-time environment options, Professional and Embedded start to look dramatically different - in my opinion. This means that if you start applying patches and bits meant for the embedded operating system onto your corporate desktops, at the very least the results will be unpredictable...
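If you're an admin who wants to know whether anyone in your fleet has quietly applied this hack, you don't need to run anything on the box itself - you can scan exported registry hives. Here's a minimal sketch that checks a `.reg` export for the POSReady marker key; the key path below is the one quoted in press coverage of the hack at the time, so verify it against your own exports before relying on this.

```python
# Minimal sketch: scan an exported registry (.reg) file for the POSReady
# marker key that the circulated hack adds. The key path is as reported
# in contemporary press coverage - verify before relying on it.
import re

POSREADY_KEY = (
    r"HKEY_LOCAL_MACHINE\SYSTEM\WOW6432Node\Microsoft"
    r"\Windows NT\CurrentVersion\SystemInfo"
)

def has_posready_hack(reg_text: str) -> bool:
    """Return True if the export contains the POSReady key with Installed=1."""
    in_key = False
    for line in reg_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            # Entering a new key section; remember whether it's the marker key.
            in_key = line.strip("[]").lower() == POSREADY_KEY.lower()
        elif in_key and re.match(r'"Installed"\s*=\s*dword:0*1$', line):
            return True
    return False

sample = """Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\\SYSTEM\\WOW6432Node\\Microsoft\\Windows NT\\CurrentVersion\\SystemInfo]
"Installed"=dword:00000001
"""
print(has_posready_hack(sample))  # → True
```

Finding this key on a corporate desktop is a strong signal that the machine is masquerading as an embedded POS device - which, for all the reasons below, is a conversation you want to have sooner rather than later.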

So let's summarize my thoughts here.
  • some organizations are still on WinXP on the corporate desktop (and elsewhere, obviously)
  • for those that haven't migrated, excuses are critical... not necessarily valid, but critical
  • a quick registry hack is available which tricks Windows Update into pushing patches and updates meant for a variation of your WindowsXP operating system onto your machine(s)
The hack is a bad idea for the following reasons:
  • potentially de-stabilizes your WindowsXP operating system
  • necessitates significantly more testing to ensure compatibility
  • quite obviously breaks your software license agreement
  • could potentially get you into a CFAA or other legal situation
Essentially, my thoughts are this - if you're resorting to hacking the registry to pull down patches meant for an OS merely similar to yours in the name of security, you've got a big, big problem. The energy you're expending, and the hazards you're creating on top of system instability and unknown security issues... should get you fired. Immediately.

Folks - this isn't a viable work-around to keep WindowsXP alive. It's a bad, bad idea.