Tuesday, October 21, 2014

The Other Side of Breach Hysteria

In a world where everyone is trying to sell you something, security is certainly no exception. But separating the hype from the truth can easily turn into a full-time job if you're not careful.

With all the recent retail data breaches, it would appear as though the sky is falling in large chunks right on top of us. Big-name retailers, and even some of the smaller ones, are being hacked, and their precious card data is being whisked away to be sold to miscreants and criminals.

Now enter the sales and marketing pitches. After every breach, it would seem, our mailboxes fill up with subject lines such as-
"Learn how not to be the next ____ - read how our latest gizmo will keep you secure!"
I don't know about you, but the snake-oil pitch is starting to get old. While it's clear that the average buyer is getting the message about data breaches and hackers - I believe there are two other aspects of this which aren't talked about enough.

First, there is the notion of "breach fatigue". If you read the news headlines you would think that everyone's bank account would be empty by now, and that everyone in the United States would have been the victim of identity theft. But they haven't been. Or at least they haven't been impacted directly. This leads to the Chicken Little problem.

You see, for years many security professionals cried that security incidents did not receive enough attention. Then the media took notice, and sensationalized the heck out of incidents to an almost rock-star fervor. The issue is that people are starting to grow weary of the "Oh no! Hackers are going to steal everything I have!" talk. Every incident is the biggest there has ever been. Every incident is hackers pillaging and stealing countless credit card records and identities. The average person doesn't quite know what to make of this, so they have little choice but to mentally assume the worst. Then - over time - the worst never comes. Sure, some are impacted directly, but there is this thing called zero fraud liability (in the case of card fraud) which means they are impacted barely enough to notice, because their banks make it all right. More on this in a minute.

We as humans have a shocking ability to develop a tolerance to almost anything. Data breach hysteria is no exception. I've now seen and heard people around televisions (at airports, for example, where I happen to be rather frequently) say things like "Oh well, more hackers, I keep hearing about these hackers and it never seems to make a difference." Make no mistake, this is bad.

You see, the other side of the awareness hill - which we are rapidly approaching - is apathy. And it's the kind of apathy that is difficult to recover from: we pushed through the first wave of apathy into awareness, then into hysteria, and that hysteria leads into a much stronger strain of apathy. That, I believe, is where we'll be stuck.

If I'm honest, I'm sick and tired of all the hype surrounding data breaches. They happen every day of every week, and yet we keep acting like we're shocked that Retailer X or Company Y was breached. Why are we still even shocked? Many are starting to lose the ability to be shocked - even though the number of records breached and the scale of the intrusions are reaching absurd proportions.

The second point I'd like to make is around the notion of individual impact. Many people simply say "this still doesn't impact me" because of a wonderful thing like zero fraud liability. Those three words have single-handedly destroyed the common person's ability to care about their credit card being stolen. After you've had your card cloned, or stolen online, and had charges show up - you panic. Once you realize your bank has been kind enough to put the funds back, or roll back the fraudulent charges, you realize you have a safety net. Now these horrible, terrible, catastrophic breaches aren't so horrible, terrible and catastrophic. Now they're the bank's problem.

Every time the bank covers a case of credit card fraud under zero fraud liability (and let's face it, most cards and banks have this today), the cardholder's apathy toward these mega-breaches grows. I believe this is true. I also believe there is little we can do about it. Actually, I'm not sure there is anything that needs to be done about it. Maybe things are just the way they're going to be.

There is a great phrase someone once used that I'm going to paraphrase and borrow here - things are as bad as the free market will support. If I may adapt this to security - the security of your organization is as good (or bad) as your business and your customers will support.

Think about that.

Saturday, October 11, 2014

Security Lessons from Complex, Dynamic Environments

Security is hard.

Check that - security is relatively hard in static environments, but when you take on a dynamic company environment, security becomes unpossible. I'm injecting a bit of humor here because you're going to need a chuckle before you read this.

Some of us in the security industry live in what's referred to as a static environment. A low rate of change (low entropy) means that you can implement a security control or measure and leave it there, knowing it'll be just as effective today as tomorrow or the day after. Of course, this assumes you account for the rate at which the effectiveness of security tools degrades, and that you understood whether things were effective in the first place. It also means that you don't have to worry very often about things like a new system showing up on the network, or a new route to the Internet. And when these do happen, you can be relatively sure something is wrong.
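Here's a minimal sketch of that low-entropy assumption at work, in Python. The baseline set and the "discovered" hosts are hypothetical stand-ins for whatever inventory and discovery source you actually have:

    # A minimal sketch: in a static network, anything not on the
    # known-asset baseline deserves an alarm. Addresses are made up.
    known_assets = {"10.0.0.1", "10.0.0.5", "10.0.0.9"}

    def audit(discovered_ips):
        """Return hosts that have no business being on a static network."""
        return set(discovered_ips) - known_assets

    todays_scan = ["10.0.0.1", "10.0.0.5", "10.0.0.9", "10.0.0.77"]
    for ip in sorted(audit(todays_scan)):
        print(f"ALERT: unexpected host {ip} - in this environment, that "
              "almost certainly means something is wrong")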

Early on in my career I worked for a technical recruiting firm. Computers were just a tool, and a company having a website was a novelty. The ancient Novell NetWare 3.11 systems had not seen a reboot in literally half a decade, but nothing was broken, so everything just kept running and slowly accumulating inches of dust in the back room. While I worked there we modernized to NT 3.51 (don't laugh, I'm dating myself here) and built an IIS-based web page for external consumption. That place was a low-entropy environment. We changed out server equipment never, and workstations every 5 years. If something new suddenly showed up on the 30-node network, I'd immediately suspect something was amiss. At the time, nothing that exciting ever happened.

Fast forward a few years and I'm working for a financial start-up. It's the early 2000s and this company is the polar opposite of a static company. We have at least one new server coming online a day, and typically 5-10 new IP addresses showing up that no one can identify. We get by because we have one thing going for us: the on-ramp to the Internet. We have a single T1 which connects us to the rest of the world, so we drop in a firewall and an IDS (an early version of Snort, I think, plus a SonicWall firewall). When that changed - when our employees started to go mobile, and thus VPN in - things got a little hairy.

Fast forward another few years and I'm working at one of the world's largest companies, on arguably one of the most complex networks mankind has ever seen. Forget trying to understand or know everything - we're struggling to keep track of the few things we DO know. Heck, we spent four weeks Nmap'ing our own IP subnets (and accidentally causing a minor crisis, oops) to find all the NT4 systems when support finally - and seriously, for real this time - ran out.
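For flavor, here's roughly what that kind of sweep looks like - a minimal Python sketch, assuming nmap is installed, the subnets are hypothetical stand-ins, and you're authorized to scan them (OS detection generally needs root):

    # Sweep subnets with nmap's OS detection and flag hosts that still
    # fingerprint as Windows NT. Scan only networks you own.
    import subprocess

    SUBNETS = ["10.10.0.0/24", "10.10.1.0/24"]

    def find_legacy_hosts(subnet):
        """Return IPs whose grepable nmap output mentions Windows NT."""
        result = subprocess.run(
            ["nmap", "-O", "--osscan-guess", "-oG", "-", subnet],
            capture_output=True, text=True)
        return [line.split()[1]                      # the IP field
                for line in result.stdout.splitlines()
                if line.startswith("Host:") and "Windows NT" in line]

    for net in SUBNETS:
        for ip in find_legacy_hosts(net):
            print(f"possible NT4 host: {ip}")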

Now let's look at security in the context of this article (and reported breach) - http://www.nextgov.com/cybersecurity/2014/10/dhs-attackers-hacked-critical-manufacturing-firm-months/96317/. Let me highlight a few key quotes for you-
"The event was complicated by the fact that the company had undergone corporate acquisitions, which introduced more network connections, and consequently a wider attack surface. The firm had more than 100 entry and exit points to the Internet."
You may chuckle at that, but I bet you have pretty close to this at your organization. Sure, maybe the ingress/egress points you control are few and well protected, but it's the ones you don't know about that will hurt you. Therein lies the big problem: the disconnect between business risk and information security ("cyber") risk. If information security isn't part of the fabric of your business, and part of the core of the business decision-making process, you're going to continue to fail big, or suffer death by a thousand papercuts.

While not necessarily as sexy as that APT Defender Super Deluxe Edition v2.0 box your vendor is trying to sell you, network and system configuration management, change management and asset management are things you absolutely must get right, and must be involved in as a security professional for your enterprise. The alternative is total chaos, wherein you're trying to plug each new issue as you find out about it, while the business has long since forgotten about the project and moved on. This sort of asynchronous approach is brutal in both human effort and capital expenditure.

Now let's focus on another interesting quote from the article. Everyone likes to offer advice to breach victims, as if they have any clue what they're saying. This one is a gem-
"Going forward, “rearchitecting the network is the best approach to ensure that the company has a consistent security posture across its wide enterprise," officials advised."
What sort of half-baked advice is that?! Those of you who have worked incidents in your careers - have you ever told someone that the best thing to do with their super-complex network is to totally rearchitect it? How quickly would you get thrown out of a second-story window if you did? While this advice sounds sane to the person saying it - who likely has never had to follow it - can you imagine being given the task of completely rearchitecting a large, complex network in place? I've seen it done. Once. It took super-human effort, an army of consultants, more outages than I'd care to admit, and it was still cobbled together in some places for "legacy support".

Anyway, somewhere in this was a point: large, complex, dynamic environments are doomed to security failure unless security is elevated to the business level and becomes an executive priority. I recognize that not every company will be able to do this because it won't fit their operating and risk models - but if that's the case, you have to prepare for the fallout. In the cases where the risk model says security is a business-level issue, you have a chance to "get it right"; that means giving a solid effort, aligning to the business, and so on.

Security is hard, folks.

Monday, October 6, 2014

To Reform and Institutionalize Research for Public Safety (and Security)

On October 3rd, 2014 a petition appeared on the Petitions.WhiteHouse.gov website titled "Unlocking public access to research on software safety through DMCA and CFAA reform". I encourage you to go read the text of the petition yourself.

While I believe that, on the whole, the CFAA and (more urgently) the DMCA need dramatic reform, if not to be flat-out dumped, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts, if you follow our Down the Security Rabbithole Podcast series, and I would likely throw a party if the DMCA were repealed tomorrow - but unlocking "research" this broadly is dangerous.

Wednesday, September 24, 2014

Software Security - Hackable Even When It's Secure

On a recent call, one of the smartest technical folks I can name said something that made me reach for a notepad, to take the idea down for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even if they've managed to avoid some of the key issues.

Let me explain this a little deeper, because this thought merits such a discussion.

Friday, September 5, 2014

Managing Security in a Highly Decentralized Business Model

Information Security leadership has been, and will likely continue to be, part politicking, part sales, part marketing, and part security. As anyone who has been a security leader or CISO can attest, issuing edicts to the business is as easy as it is fruitless - getting positive results in all but the most strictly regulated environments is nearly impossible. In highly centralized organizations the CISO at least stands a chance, since the organization likely has common goals, processes, and capital spending models. When you get to an organization that operates in a highly distributed and decentralized manner, the task of keeping security apace grows to epic proportions.

Wednesday, August 20, 2014

The Indelicate Balance Between "Keep it Working" and "Keep It Safe"

Security professionals continue to fool themselves into believing we walk a delicate balance between keeping the business functional and keeping it safe (secure). This is, in the belief of many people (me included), a lie. There is no delicate balance. The notion of being able to balance these on a teeter-totter looks like this:

Guess which one the 'safe and secure' is? Exactly.

An interesting conversation (warning: profanity, not so safe for the office) happened earlier today. And, as per usual, someone very smart and seasoned on the enterprise side of defense made the point clear.

The bottom line is this:
  You can't ever cross the line into 'breaking business stuff' because you likely never get the chance again.

Each time the pendulum swings into the "secure" side of the spectrum it stays only for a tiny fraction of time, and we as security professionals have to work very hard to make it stick, or it swings back the other way... quickly.

So the question then is, how do we "make it stick"?

Simple! We demonstrate the business value of good security (aka keeping the enterprise safe). Of course, there are few things simpler than this - tightrope walking the Grand Canyon, being an astronaut, nuclear physics. Whoops, hyperbole ran away with me there for a moment, sorry. Back to reality.

So the key is to make security sticky. You need to align security to something the business can get behind - hence why business value is so important to measure. But if you're still stuck reporting useless metrics - like how many port scans your firewall blocked, or how many SQL injection instances your software security program identified - you're miles away from demonstrating business value.

This brings me back to KPIs, and the development of data points which strongly align to business/enterprise goals. All of this is predicated on someone in the security organization (or everyone?) being alert and aware of what the business is trying to accomplish at the board/strategic level. Does your organization have this type of awareness and knowledge? Are you leveraging it?
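To make that concrete, here's a minimal sketch of the difference in Python - the apps, figures, and the 30-day "business commitment" are all hypothetical illustrations, not anyone's real numbers:

    # Raw activity counts - the kind of metric boards don't care about.
    port_scans_blocked = 1240553
    sqli_findings = 312

    # A business-aligned KPI: how quickly do we close serious flaws in
    # the applications that actually generate revenue? (made-up data)
    revenue_app_findings = [
        {"app": "checkout", "days_open": 12, "severity": "high"},
        {"app": "checkout", "days_open": 45, "severity": "high"},
        {"app": "loyalty",  "days_open": 7,  "severity": "medium"},
    ]

    high = [f["days_open"] for f in revenue_app_findings
            if f["severity"] == "high"]
    mttr_high = sum(high) / len(high)

    print(f"Port scans blocked: {port_scans_blocked} (so what?)")
    print(f"Mean time to remediate high-severity flaws in revenue apps: "
          f"{mttr_high:.1f} days against a 30-day business commitment")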

I can tell you that if you're not, the picture above will continue to be your fate... from yesterday to today and on into the future.

Wednesday, August 13, 2014

Getting in Our Own Way

The security community has a widely-understood reputation for self-destruction. This is not to say that other communities of professionals don't have this issue, but I don't know that their negative impact potential is as great. Clearly I'm not an expert in all fields, so I'll just call this a hunch based on an unscientific gut feeling.

What I do see, though - much like with the efforts of the "I am the Cavalry" movement, which has sent an open letter to the auto industry via Change.org - is resentment and dissent without much backing. In an industry which still has more questions than answers - and it gets worse every day - when someone stands up with a possible effort towards a solution, they quickly become a lightning rod for nay-sayers. Why is that?

One of my colleagues, a veteran CISO, has a potential answer - one which, for the record, I'm uncomfortable with. He surmises that the collective "we" (as in, the security community) aren't actually interested in solving problems, because the real solutions require "soft skills like personality" and business savvy in addition to technical acumen. It turns out that taking the time to understand a problem and attempt to solve it (or at least move the ball forward) is very hard. With the plethora of security problems in nearly everything that has electricity flowing to it, it's near-trivial to find bugs. Some of these bugs are severe; some are the same ol' SQL injection and buffer overflows we identified over a decade ago but still haven't solved. So finding problems isn't rocket science - actually presenting real, workable solutions is the trick. This is just my humble opinion, based on my time in the enterprise and in consulting.

I once worked for a CISO who told his team that he didn't want to hear about more problems until we had a proposed solution. Furthermore, I'm all for constructive criticism that helps contribute to the solution - but don't attack the person, or the proposed solution, just to do it. Don't be that person.

I think it may have been Jeff Moss I heard say it - "Put up or shut up"... so give me your solution idea, or stop whining that things are broken.

Friday, August 8, 2014

Why Your Enterprise Most Likely Doesn't Have a Zero-Day Problem

It should come as no surprise that at Black Hat 2014 this week there was an enormous number of invaluable conversations, as always. We talked about attacks, exploits and exploitation techniques, as well as defenses basic and exotic. A few of these conversations ended up in the same place, logically, and have led me to conclude that the majority of enterprises out there don't have a zero-day problem. Let me explain...

It should by now be clear, if you're a security professional, that the average enterprise struggles with even the most basic security hygiene. This of course makes life difficult when we start to pile on cross-silo dependencies - for example, configuration management - for security effectiveness. While I certainly don't mean to imply that no enterprise can do the basics, I have yet to meet a CISO who is comfortable with the fundamentals of asset, configuration and user management on an enterprise scale and in a timely fashion.

That being said, I further submit that zero-day attacks and exploits are an advanced level of attack, typically reserved for targeted organizations whose significant security capability mandates that level of effort. Basically, if you've got your fundamentals right, you're doing good block-and-tackle security, and your users are well educated to be skeptical of links and things sent to them, the determined attacker will be forced to turn to exploiting as-yet-unknown and unpatched weaknesses in your software to get through your defenses. The truth, I have come to believe, is that the vast majority of enterprises just don't have their act together enough to merit that level of effort from an attacker.

From what I know, an attacker burning a zero-day exploit is a non-trivial matter. Zero-days, while still fairly plentiful, have a cost associated with them, and an attacker will use one only once he or she has exhausted the typical, and often easy, methods of breaching your security. There are simply too many options further down the chain. You need look no further than a conversation with David Kennedy of TrustedSec, who makes it clear exploits aren't required to break in. All that's required, in still far too many instances, is sending someone in the organization a malicious link or a malicious file, and they'll open the door and show you their closely-guarded intellectual property... and probably hold the door for you as you walk out with it. Yes, it is indeed that simple to defeat corporate security, with brain-boggling results.

So why burn a zero-day? Attackers typically won't unless they've hit roadblocks in other avenues. Since PowerShell is installed on every new Windows PC, it's the perfect tool for executing an attack, legitimately, on a target host. All the user has to do is let you in... and we all know that most users will still click on the lure of a dancing bear or the promise of nude photos of their favorite celebrity.

So while your enterprise security organization may actually encounter some malware with zero-day exploits in it, that malware likely isn't targeted at your organization. The problem your average enterprise has is poor fundamentals - leaving you open to all manner of exploitation and penetration without the use of any technique more advanced than "asking the user for permission". So why would an attacker burn a precious zero-day against you? They likely wouldn't. Unless, you know, you're a target.

Friday, August 1, 2014

Security on a Weak IT Foundation

The interesting question of maturity

Earlier this week, Bill Burns asked me this question...
"can a security team have a higher level of maturity than the IT team that handles its operational tasks?"
It's an interesting question, and one that certainly requires some level of thought. My off-the-top-of-my-head response was - well... no. This is clearly a "lowest common denominator" problem.

The more I thought about it, the more this seemed like an obvious answer - a CMMI level 2 IT organization was never going to support a CMMI level 3-5 security organization. That should seem rather obvious. But the more I thought about it, the more I came to think that an IT organization at a given CMMI level can't support anything better than an n-1 security organization. Let me explain my thinking here-


Weak foundations, weak security

It should be rather obvious that a weak foundation cannot support a tall, strong structure. You simply don't have the stuff it takes to hold it all up, from a building perspective.

In the IT world, if you have weak operational IT practices, you'll never get anything better than weak security practices. For example, let's look at how IT views and assesses assets on the corporate network. If IT can't tell you, on demand, every asset on the corporate network right now - with troves of accurate meta-data - then you can't possibly expect to build a strong security operations program on top of that. Security needs foundational things, such as knowing what's on the network, and loads of meta-data about each asset, in order to make decisions about the risks those assets pose.

Decomposing that even further, to the simplest blocks: if IT doesn't know what's most critical to the business in terms of supporting function, security has absolutely zero chance of successfully crafting a defensive response strategy or operational plan. If an asset (an IP address, for example) is suspected of being malicious or compromised, meta-data is needed to decide whether the alert could be a false-positive, or whether it even warrants a response (maybe it's just some lab machine which can simply be turned off). As a kid, G.I. Joe taught us that knowing is half the battle - and not knowing means you're lost.
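Here's a minimal sketch of that triage logic in Python - the inventory, its fields, and the rules are all hypothetical stand-ins for a real CMDB:

    # Meta-data-driven alert triage over a toy asset inventory.
    ASSET_INVENTORY = {
        "10.1.4.22":  {"owner": "payments team", "environment": "production",
                       "business_critical": True},
        "10.9.8.100": {"owner": "R&D",           "environment": "lab",
                       "business_critical": False},
    }

    def triage(alert_ip):
        """Decide what an alert on this IP warrants, based on meta-data."""
        asset = ASSET_INVENTORY.get(alert_ip)
        if asset is None:
            # No meta-data at all - the weak-foundation case in this post
            return "unknown asset: can't assess risk, investigate from scratch"
        if asset["environment"] == "lab":
            return "low priority: lab machine, consider just powering it off"
        if asset["business_critical"]:
            return "high priority: engage IR and notify " + asset["owner"]
        return "normal priority: standard response workflow"

    print(triage("10.1.4.22"))   # high priority...
    print(triage("172.16.0.9"))  # unknown asset...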


Weak foundations, weaker security

In an effort to understand this more, my line of thinking leads me to believe that an organization with a particular CMMI score for general IT can only support an n-1 CMMI score for security maturity.
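Written down as a formula - my own shorthand, to be clear, not anything official from the CMMI folks:

    \text{CMMI}_{\text{security}} \leq \text{CMMI}_{\text{IT}} - 1

That is, security maturity tops out one level below the maturity of the IT organization it sits on.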

The reason I believe this is that security operations, by their very nature, cross many IT silos and require well-thought-out, precisely executed workflows and communication to function well. When you cross team boundaries, silos and responsibilities, these workflows inherently break down at least a little - thus diminishing what you can build on top of them. Like the great pyramids - the higher you build, the more you have to stack inward. Security - at least in my narrow view - sits right at the top of the IT ladder, which makes it fairly difficult to do well if the base of IT operations is shaky.


TL;DR

The long and short of it is this: if your enterprise has poor IT hygiene and ranks low on the CMMI scale, focus security effort and resources on helping IT level up before you start dropping in expensive and complicated security kit. In essence, flashy boxes and solutions won't do you much good when you try to operationalize them on top of poorly functioning IT infrastructure, processes and methodologies.

Saturday, July 26, 2014

Ad-Hoc Security's Surprisingly Negative Residual Effect

Security is fraught with the ad-hoc approach. Some would argue that the very nature of what we do in the Information Security industry necessitates a level of ad-hoc-ness and that to try and get away from it entirely is foolish.

CISOs are challenged with this very thing, every hour of every day. Threats they aren't prepared for pop up and present an imminent danger to the business, so they must react. These reactions are necessary to keep the business operational - no one will argue that - but it's when they have a residual effect on the enterprise that we run into problems.

It's the old snowball rolling down the mountain analogy... sort of.


How it starts

Since no security program I'm aware of has managed to account for all the threats it will encounter, let's take any one of them as an example. The threat may be some semi-custom malware which targets a particular piece of software in the company's industry vertical, or it may be something as common as a banking trojan. The CISO realizes they simply don't have the supporting infrastructure to mitigate the threat or help in its remediation - so off to the ad-hoc bin we go.

There are, in general, three possible courses of action which follow.

First, the ever-popular "we'll write some code" option. Many CISOs have access to some amazing security talent, and thus the ability to whip up a custom-coded solution which takes care of the issue. Quite common. I'm not even saying this is a bad option! If you've got the talent, why not utilize it to its full potential?

Second, the almost-as-popular "hire an army of consultants" option. External consultants descend on your enterprise and identify, contain, and work to mitigate the current threat. Your hope is that they document their work, and maybe leave behind some clues as to what was done, why, and how you can repeat the procedure in the future.

Now for the most popular option, unfortunately, if the issue is big enough: the "let's buy a box" option. CISOs who feel overwhelmed look to their partners, and oftentimes the analysts, to provide them with options. Not surprisingly, much of the time the 'solution' comes in a nice 2U rack-mountable appliance, with a yearly maintenance contract.

With the threat addressed, at least temporarily, it's on to the next big issue. Playing whack-a-mole is the modus operandi for all too many in security leadership... and that's not a commentary on their effectiveness or abilities; it's simply the way it is.

Once you've moved on from the previous problem, what's left behind is commonly referred to as a "one-off".


"One-offs"

Entirely too many networks are simply littered with "one-offs": solutions which once served some point purpose, but have either been forgotten, fallen out of maintenance or support, or simply no longer serve the greater mission of the enterprise security organization. Many of these "one-offs" don't integrate well, aren't interoperable, or don't scale... or worse, they're simply not manageable at the level your organization needs.

The problem with ad-hoc security measures is that we tend to create too many one-offs like this. Databases getting ripped off through the web apps? Drop in a WAF (Web Application Firewall). PCI requires you to log? Drop in a low-cost SIEM solution. Having difficulty managing the Java runtime in your environment... err... let's leave that one alone for now. You get the idea.

One of the biggest transgressors in this space is Identity and Access Management (IAM) tooling in the enterprise. Since the problem is so challenging, enterprises tend to use multiple tools to solve niche, and timely, issues. What's left over is a patchwork of several different IAM tools, identity stores, and rights-management consoles.


The real problem with ad-hoc

The real problem with ad-hoc isn't that there are way too many devices, servers, systems, and tools to keep updated and functional. Yes, this is definitely a problem, but not the problem, in my opinion. The biggest problem is one of resources - and by resources we're talking about people here. Human beings need to sleep, eat lunch, hang out at the water cooler and take bio breaks. Humans who spend their time trying to make a few tools play nice are wasting a lot of that time...

The challenge of ad-hoc security is that you end up leaving behind a wake of poorly operationalized hardware, software and processes. This turns into a black hole for your people's time, and I don't have to tell you that this creates opportunities for attackers.


The realization

The unfortunate end-result of ad-hoc security, then, is decreased security. You're not really reducing risk over the long haul but rather increasing it, due to the added complexity, resource drain, and low levels of interoperability. It makes perfect sense, then, that CISOs who don't take a pre-planned approach feel like they're forever on a hamster wheel, never getting anywhere in spite of superhuman efforts.


The better approach

Many of you CISOs and security leaders have already discovered, and are implementing, program-based security measures. You start by defining a business-aligned security strategy, which pre-plans the 'big picture' approach you will take. You set out the high-level guidance, set timelines, and manage projects with the understanding that things will come up - but you can be ready for them.

This doesn't mean you suddenly stop tactical security measures - you just try to avoid ad-hoc situations which have you dropping in processes and technologies that don't fit your long-term goals and strategy. This isn't terribly difficult, but it takes having that strategy first!


As always, I look forward to your replies, comments, suggestions and experiences.

Monday, July 21, 2014

Tackling 3rd Party Risk Assessments Through a 3rd Party

In the enterprise, sometimes absurd is the order of the day.

Earlier this week I ended up in a conversation with a colleague about 3rd party risk. We started talking about the kinds of challenges his organization faced, and what he, as the leader of the 3rd party risk program, is up against. As it turns out, when the organization set out to tackle 3rd party risk, a slight miscalculation was made. Long story short, his group has more than 100 vendors to manage in terms of 3rd party risk. That's 100+ vendors that interact with the network, the data, the applications, the people, and the facilities his enterprise has.

His team is staffed by a whopping 3 people, including him. To put this into perspective: given that there are roughly 250 business days in a year, his team needs to complete around 50 reviews per analyst, which means each analyst can spend a maximum of about 5 days per 3rd party. Of course, we're not counting vacation days, sick days, or snow days. We're also not counting travel to/from sites to actually do investigative work, or the time it takes to do analysis, debriefs, or any of that.
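The back-of-the-envelope math, as a quick sanity check (all figures approximate, taken straight from the paragraph above):

    # Review capacity per analyst, before any real-world overhead.
    business_days_per_year = 250
    reviews_per_analyst = 50

    days_per_review = business_days_per_year / reviews_per_analyst
    print(f"{days_per_review:.0f} business days per 3rd-party review, "
          "before vacations, travel, analysis, or debriefs")  # 5 days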

This started to unravel in my mind pretty quickly. I pressed my colleague for an answer as to how he could possibly achieve any measure of compliance and completeness, to which he answered: "We outsource the evidence gathering to a 3rd party".

My head exploded.

I'm not saying it doesn't make sense, or that there are very many real alternatives - but you have to admit how crazy this sounds. They've outsourced the fact-finding portion of 3rd party risk assessments... to a 3rd party. BOOM.

The truth is that there was a lot he was doing behind the scenes which made this a little easier to swallow. For example, a standard questionnaire was developed, based on a framework they built and approved internally, which minimized the amount of 'thinking' a 3rd party assessor had to do. Each category of required controls had a gradient on which the 3rd party being assessed was graded, and there was really very little room for interpretation. Mostly.
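Something like this, as a minimal Python sketch - the framework, categories, scale, and passing floor are all hypothetical:

    # A graded questionnaire: the assessor records a 0-3 score per
    # control category, leaving little room for interpretation.
    GRADIENT = {0: "absent", 1: "ad-hoc", 2: "documented",
                3: "enforced and audited"}

    vendor_scores = {
        "access control":    3,
        "data handling":     2,
        "incident response": 1,
        "physical security": 3,
    }

    def summarize(scores, passing_floor=2):
        """Flag categories graded below the internally approved floor."""
        gaps = {c: GRADIENT[g] for c, g in scores.items()
                if g < passing_floor}
        overall = sum(scores.values()) / len(scores)
        return overall, gaps

    overall, gaps = summarize(vendor_scores)
    print(f"overall grade: {overall:.2f} / 3")
    print("gaps requiring follow-up:", gaps or "none")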

If you think about it, I'm confident there are many, many enterprises out there with this "minor" challenge. Every enterprise does business with dozens - on average, hundreds - of 3rd parties, to varying degrees: from your outsourced payroll provider, to the company that shreds your documents once a week, to the company that sends the administrative assistant who sits at their desk, answers calls and surfs Facebook all day. Every enterprise has a vast number of 3rd parties which need to be assessed - and risks identified.

While I'm definitely not crazy enough to think companies should handle this only with internal, trusted employees, I'm not completely convinced hiring out to a 3rd party is that fantastic an idea either. There is so much to consider. For example, if that 3rd party assessor misses something, are they liable, or does that fall to your company? In the court of public opinion this is ultimately a trick question - the answer is always you.

I suppose the long and short of it is that enterprises have little choice but to use a 3rd party to help them manage 3rd party risk. But then the only question is: do they assess the 3rd party which will be doing the 3rd party risk assessments for unnecessary risk? It's enough to make your head spin. I know it gave me a headache just thinking about it.

What do you think the mature 3rd party risk assessment looks like? Do you have leading practices you could share? Contact me as I'd like to share them with our peers, and others who are struggling with this task right now.

Thursday, July 10, 2014

Compliance and Security Seals from a Different Perspective


Compliance attestations. Quality seals like "Hacker Safe!" All of these things bother most security people I know because, to us, they provide very little tangible insight into the security of anything. Or do they? I saw a reply to my blog post on compliance vs. security which made an interesting point. A point, I dare say, I had not really put front-of-mind, but probably should have.

Ron Parker was of course correct... and he touched on a much bigger point that this comment was a part of. Much of the time, compliance attestations and security badges (aka "security seals") on websites aren't done for the sake of making the website or product actually more secure... they're done to assure the customer that the site or entity is worthy of their trust and business. This is contrary to conventional thinking in the security community.

Saturday, July 5, 2014

Critical Infrastructure as the Next "Cyber War"

I'm tired of reading headlines that say things like "It's [cyber] the next war!" - not only are they spreading FUD (fear, uncertainty, doubt), but if this were really the case, we [as Americans] would already have "lost".

One of the things the FUD-sters like to ballyhoo about is the nation's critical infrastructure, and how our power plants, water treatment facilities and chemical processing plants will be [or already are] targets for foreign nation-states in a sneaky digital assault. News flash: this has been going on for some time, and while it's crystal clear to anyone paying attention that the nation's critical infrastructure is in a seriously neglected state when it comes to security, this likely isn't America's biggest problem.

Thursday, July 3, 2014

Harmonizing Compliance and Security for the Enterprise - The Introduction

Pursuit of compliance in the enterprise is proving to be a staggeringly bad security investment, if you ask nearly any enterprise security professional. And yet we continue to see companies that get breached fall back on the same press release: "We were PCI-DSS compliant! It's not our fault we were breached!"

I ask myself why, every time it happens. I still don't have a good answer.

Monday, June 16, 2014

Choosing the Right Entry Point for a Software Security Program

The topic of software security, or AppSec, has once again cropped up recently in my travels and conversations, so I thought it would be prudent to address it here on the blog. Fred, someone responsible for software security in an enterprise, was being given a small pool of money and a chance to plan, design, and implement a software security program. The big question on Fred's mind, then, was where to start.

As we talked through the options, and I shared some of the mistakes I've made and have witnessed others make, I tried to advise Fred to be cautious. One of the most consequential mistakes one can make when starting a software security program from scratch is starting in the wrong part of the Software Development Lifecycle (SDLC). This is exacerbated by the fact that many organizations have more than one software development lifecycle, so picking the wrong starting block is quickly amplified.
