Friday, October 31, 2008

Risk Rating - When Is Critical Not?

I apologize if you're reading this twice - but it's important enough I wanted to publicize as much as possible.

Over on my other HP blog, I posted an article on Risk Ratings, and the notion that critical isn't always critical - but "tools" just don't do risk ratings justice. I'm looking for feedback, serious thoughts, and volunteers to help me take this research to the next level.

Please read:
Following the White Rabbit:

Thank you.

Tuesday, October 28, 2008

Framework for Realistically Addressing IT Risk (Security Issues)

Better security is often the result of a poorly-timed disaster... -me
Man is an impulsive creature... We tend to try and solve problems before we fully understand their nature.

Once you accept that as truth, you can begin to realize why IT Security and Risk Management is in such a sorry state, and why we're perpetually bailing water from a sinking ship. Risk is a difficult concept to understand, I get that; it took me arguably 8 years into my IT career to fully grasp that risk is a "gray" area and never binary. For many "IT Security" practitioners I've worked with over the years, this is where things go south.

After thinking about the causes of poor risk mitigation and security practices in today's business world, I've channeled my efforts into developing a way to lay out the problem-solving process so that it makes sense, and can get us closer to the zero-horizon than we've previously been able to get. Here's what I have been able to come up with... keep in mind it's still a work-in-progress, but I'm putting it out there to solicit responses and maybe help me refine this process. Think of it as a... practical guide to security/risk problems.

These are the steps that I feel one (or many) should go through to resolve any clear and present danger facing an IT Security/Risk group...

  1. Admit there is a problem - Take your head out of the sand, admit there are issues that need to be addressed and begin to try and gather the "big picture" around the existence of these issues. Just admitting there is a problem is the first step, but often the hardest.
  2. Implement a tactical stop-gap - Stop the bleeding; forget trying to wrap your head fully around the problem... just find a way to stop the bleeding short-term while you work to resolve long-term.
  3. Understand the nature of the problem - Now that you've got the wound triaged, look deeper and wider into the actual nature of the issue. Look beyond IT, "think outside the box", ask for other input from people who may have a different perspective.
  4. Admit the resultant risk will never be zero (full resolution) - You will never bring the risk equation all the way down to zero; never going to happen. I think it's paramount that those attempting to mitigate the risk understand this.
  5. Resolve to work towards a realistic strategic solution - Forget the perfect Utopia-like resolution where everything is perfect (see step 4)... set realistic goals for mitigation, and resolve to get there in a sane manner. Put this on paper, tack it somewhere everyone will see it.
  6. Provide real effort to resolve the problem holistically - In order to resolve a problem dealing with real-world risks, real-world efforts must be made. Think beyond your walls, identify all possible permutations of this risk and provide effort to resolve this holistic problem. This costs time, money, and resources. Be prepared for those costs, and allocate them in advance or you'll doom yourself to failure.
  7. Implement the strategic resolution in good-faith - Once there is a resolution it'll take real effort (see #6) by your business to implement this resolution. Make sure you have solid backing from the business... not just IT.
  8. Continue to provide feedback for the future - Risk is never solved with a point-in-time approach. Risk evolves, morphs, and changes the rules just when you think you're safe. You must continue to re-visit to make sure yesterday's strategic resolution still works today.
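To make the ordering concrete, the steps above can be sketched as a simple checklist that refuses to let you skip ahead. This is purely my own illustrative structure (class and method names are hypothetical, not part of any formal framework):

```python
# A minimal sketch of the risk-response steps above as an ordered checklist.
# Names are illustrative only, not part of any formal methodology.
from dataclasses import dataclass, field

@dataclass
class RiskResponse:
    steps = [
        "Admit there is a problem",
        "Implement a tactical stop-gap",
        "Understand the nature of the problem",
        "Admit the resultant risk will never be zero",
        "Resolve to work towards a realistic strategic solution",
        "Provide real effort to resolve the problem holistically",
        "Implement the strategic resolution in good faith",
        "Continue to provide feedback for the future",
    ]
    completed: set = field(default_factory=set)

    def complete(self, step_index: int) -> None:
        # Enforce ordering: you can't jump to strategy while triage is open.
        if any(i not in self.completed for i in range(step_index)):
            raise ValueError("earlier steps are still open")
        self.completed.add(step_index)

    def next_step(self) -> str:
        for i, name in enumerate(self.steps):
            if i not in self.completed:
                return name
        # Step 8 never really closes: risk evolves, so the loop restarts.
        return "Re-visit: does yesterday's resolution still work today?"
```

The point of the ordering check is step 2 versus step 6: stop the bleeding first, and don't let anyone declare the strategic fix done while the tactical wound is still open.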

There you have it. Hopefully this ground-work will help build a more solid foundation for risk-related problem solving.

I welcome your input, feedback, criticism and everything else you may have.
Just be constructive.

Monday, October 27, 2008

T-Mobile Android Has a Vulnerability

Stop the presses. Call your mother. Google Android Mobile (a la T-Mobile G1) has a security issue you say?

Good news first...
  • Google wrote Android with security in mind (we'd like to think) and its applications run within an isolated "sandbox" type environment
  • So... trojan'ing the browser (which is WebKit-based) means you don't get access to the entire system
  • The wording from the body that discovered the flaw (Independent Security Evaluators, ISE) indicates that there is an existing fix for the flaw (which exists in one of the many open-source packages used)
Now the bad news...
Since most people use the browser on their phone for nearly everything, this means that you can't trust the browser in your phone - thus defeating a vast majority of the functionality people crave.
Perhaps it was the rush to market, or perhaps it was the lack of attention to security - I won't speculate; what I can tell you is that it's obvious security researchers couldn't wait to find a flaw in Android... it sure didn't take long.
Does it mean that you shouldn't buy one? Probably not.
Does it mean that Google's security is bad? Probably not.
Does this mean that Android is just another piece of consumer-ized gadgetry? Absolutely.

Friday, October 24, 2008

FDIC Pushes Back ID Theft Red Flags Rule Enforcement

That's right.

As reported in an article posted today, the FDIC is delaying enforcement of a rule that has been on the books for quite some time because entities covered by this regulation aren't in compliance yet. Although the FDIC initially published notice of this rule on November 9th, 2007 (enforcing the Fair and Accurate Credit Transactions Act of 2003), and the rule went into effect January 1st, 2008, with compliance required by November 1st, 2008 - this is now being pushed back 6 months because the "we didn't know we needed to comply, give us more time" argument was thrown down. How absolutely irresponsible!
"...FTC observers saw that many industry segments were unaware of the
compliance date..."

Isn't that a little ridiculous? The FDIC attempts to explain itself here, in this release... I understand that it's a good practice to give affected parties ample time to comply before bringing down the hammer (I would say 11 months is fair, wouldn't you?). And according to some of the analysts closer to this issue than I am, this rule-enforcement broadens the set of entities covered under the 2003 regulation. Even so, I still can't see why a reasonable regulations and compliance officer wouldn't have figured this out.

I will admit that this goes beyond banks to credit unions, car dealers, and public utilities - basically anyone that handles your credit/personal information. I will further take the stance that this regulation falls under the "It's about da** time" argument, and delaying enforcement is irresponsible at best, and criminally negligent at worst.

Let's analyze what this regulation requires - for those that aren't familiar with it...
"In designing its Program, a financial institution or creditor may incorporate,
as appropriate, its existing policies, procedures, and other arrangements that
control reasonably foreseeable risks to customers or to the safety and soundness
of the financial institution or creditor from identity theft." (Source here)

  • This regulation requires an institution to establish a "Red Flag Program" to have a written policy for detecting identity theft/fraud via "red flag" activities (high-risk activities) which is then enforced within the institution
  • This program is based on the institution's experience with identity theft (from past incidents?) - which is an interesting requirement...
  • The Program framework requires the use of historic data on identity theft to be pro-active in preventing new and mitigating existing identity theft and fraud
  • More information on the framework and requirements of the program available here.
  • The actual regulation language available here.
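To make the "written policy of red flags" idea concrete, here's a toy sketch of what rule-based red-flag detection might look like. The rule names, field names, and patterns below are my own illustrative inventions, not language from the regulation:

```python
# A toy illustration of rule-based "red flag" detection. Every rule and
# field name here is a hypothetical example, not taken from the regulation.
def red_flags(activity: dict) -> list:
    flags = []
    # An address change immediately followed by a replacement-card request
    # is a classic identity-theft pattern.
    if activity.get("address_changed") and activity.get("new_card_requested"):
        flags.append("address change + new card request")
    # This is where the "historic data" requirement bites: identifiers seen
    # in the institution's own prior incidents get flagged automatically.
    if activity.get("ssn") in activity.get("known_compromised_ssns", set()):
        flags.append("identifier seen in prior incident")
    # Documents that appear altered or forged.
    if activity.get("id_document_suspect"):
        flags.append("suspicious identification document")
    return flags
```

The real Program obviously involves far more than a handful of boolean checks, but the shape is the same: enumerate the high-risk patterns your institution has actually seen, write them down, and check every transaction against the list.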

Some soapbox commentary:

{Steps on soapbox}
If you are an institution which typically deals (or has dealt in the past) with identity theft or identity-fraud-related activities... it boggles the mind that you would not have a program of "Red Flags" to identify when/how this is happening. I suspect this is a sad commentary on the state of identity theft... it's running so rampant that there are now specific regulations from the Federal Government (FDIC) which are forcing businesses and institutions to implement programs to identify and prevent identity theft and credit fraud. I believe it is a further sad commentary that the FDIC has "relaxed" the enforcement date for businesses based (no doubt) on some lobbying efforts from groups which simply don't feel like complying. Look folks, programs like this don't cost incredible amounts of money to implement. They should be fundamental to all business models, not just banks and credit card companies and retailers.

I firmly believe that institutions which do *not* have these types of programs (are non-compliant) after the May 1st enforcement date, and which have incidents of identity theft and fraud, should be fined and sued for negligence by anyone who has their identity compromised through these entities. It's black and white here... there is no gray area. Ordinarily I wouldn't say this, but you're either compliant or not. You are either responsible with people's information and have a program in place to detect and root out identity theft and fraud - or you're negligent and should be severely punished.
{steps off soapbox}

Sunday, October 19, 2008

Quantum Crypto - Schneier Commentary in Wired

While ordinarily I have to admit I find some of Bruce's stuff a bit... harsh and pointy, I read his recent commentary on Quantum Cryptography in Wired and found myself nodding my head in agreement.

I don't think it's a secret that I tend to be a realist when it comes to security; I often find myself arguing against the concept of "piling on" when there are much weaker links in the chain. Bruce's assertion is that the extra security quantum crypto gains us (the assurance that no one is listening in) is great, but we have bigger problems. Well, no kidding!

I can't remember exactly whom I was talking to about this at OWASP '08, but I'm fairly certain it was RSnake. His assertion, referencing MITM (man-in-the-middle) attacks, was that encrypting/signing stuff is inherently broken for most applications. Interesting, huh? I include my PGP key in my signature on my personal email - but how do you really know it's coming from me and it wasn't altered along the way? Did I give it to you in person, and did you verify it was really me? See, this builds upon the interesting basic question of how much trust you have in any given system. Do you trust the PGP key-maintenance system? And if you do, why? Think it over for a minute.
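The out-of-band verification at the heart of that trust question boils down to comparing fingerprints over a channel the attacker can't touch. A rough sketch (note the simplification: this hashes the raw key bytes directly, whereas real PGP fingerprints are computed over a canonical key-packet encoding per the OpenPGP spec - but the trust argument is identical):

```python
# Simplified sketch of fingerprint-based key verification. Real PGP
# fingerprints hash a canonical key-packet encoding, not raw bytes;
# the function names here are illustrative.
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    return hashlib.sha1(key_bytes).hexdigest()

def key_is_trusted(received_key: bytes, out_of_band_fingerprint: str) -> bool:
    # The reference fingerprint must arrive over a channel a MITM cannot
    # alter: read aloud over the phone, printed on a business card,
    # verified in person. Comparing a key against a fingerprint fetched
    # over the SAME channel as the key proves nothing.
    return fingerprint(received_key) == out_of_band_fingerprint
```

This is exactly why "my PGP key is in my email signature" buys so little on its own: an attacker who can swap the key can swap the in-band fingerprint right along with it.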

Cryptography really depends on the mechanism of distribution of the key(s), and how "trusted" that mechanism is. Within the ranks of the DoD, I imagine but don't have any first-hand knowledge, they've probably built their own key management system that is ~100% trusted (or darn near 100%). But I digress.

Quantum crypto is a wonderful theoretical concept - but another one of those things that has very little real application beyond academia. Bummer... neat idea though.

Sunday, October 12, 2008

ClickJacking - A Perspective Problem

While ClickJacking is the latest apocalyptic threat in IT Security, I wanted to point out something yet again, as I did back when Dan Kaminsky reported his DNS flaw and it became cataclysmic for its 15 minutes of fame.

I've been reading interviews, insights, write-ups and blogs on ClickJacking, and I've had so many discussions with some of you that my head spins trying to remember it all. But something I saw a couple of days (weeks, maybe?) ago has stayed with me, so I looked it back up and wanted to briefly talk about it.

This quote from Jeremiah Grossman is disturbing.
"Recently we [Grossman & RSnake] have been told that it's been known by the browser vendors since 2002." [CGI Security interview, 10/5/08]

Why is this disturbing, do you ask? Think about it. If this statement isn't stretching the truth (and I haven't found Jeremiah to be a sensationalist), then this has been an open, the-sky-is-falling-drop-everything issue for ~6 years. Not 6 days or 6 months, but YEARS. So the question we have to ask ourselves [but already know the answer to] is: why in the world is this still an issue in 2008?

I'd love to know a few things:
  • Why did we [security professionals] not freak out about this in 2002?
  • Why haven't IE7+ and Firefox (at least?) resolved this issue for good?
  • Why hasn't the standards body [the W3] taken this up as a standards issue?
The answer is simple, so painfully simple. Functionality wins over "vulnerability" every time.
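For context on what a browser-or-standards fix could look like: at the time of writing the only real defense was JavaScript "frame-busting" in each page, but a declarative server-side header doing the same job was later standardized as X-Frame-Options (it shipped with IE8 in 2009, after this post). A sketch of serving it, using Python's standard http.server purely for illustration:

```python
# Sketch of the later declarative defense: refusing to be framed at all.
# X-Frame-Options post-dates this post (IE8, 2009); shown for illustration.
from http.server import BaseHTTPRequestHandler

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Not frameable.</body></html>"
        self.send_response(200)
        # DENY tells compliant browsers never to render this page inside a
        # frame, which defeats the transparent-iframe overlay that
        # clickjacking relies on.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet
```

Of course, a header only helps once browsers agree to honor it - which is exactly the standards-body question above.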

Now, if you'll excuse me I'm going to go cancel my Internet connection, put a sledge-hammer to my computers and walk around aimlessly.

EDIT: Sun. Oct 12, 2:02pm CDT

I just read Jeremiah's comment, and then started reading the link he posted to the Bugzilla thread on the bug Jesse Ruderman first filed in 2002, and Robert O'Callahan's (from Mozilla) continued stance against those views. I think it is important for everyone interested in security to read that thread to really understand what we [security professionals] are up against in the world of technology. Understandably, functionality has always been, and will always be, the antithesis of security.

There is a much, much deeper conversation to be had here. If any of you are going to InfoSec World in Orlando in March, I'd like to get a "thought group" on this topic together. Email me directly and we'll put it together. I'm not saying we're going to solve anything - but maybe we can come up with a better way to think this through as a community.

Friday, October 10, 2008

Closing thoughts for a Friday

Hey folks - just some closing thoughts for a Friday. Hope everyone's had a decent week, and by now you've got a cold one in hand. Here are some thoughts I had as this week tails off into another weekend.
  1. Has anyone paid attention to the sheer stupidity of public services lately with regard to data loss/theft? I mean, seriously! I have a Google Alerts "as it happens" notification set up for "security breach" +data, and if you haven't paid attention, there has been an absolutely stupefying, overwhelming number of data breaches that involve our government or its entities in some way. Foreign governments, schools, social services - all losing laptops, getting hacked, and the toll is mounting. Last count we're somewhere past 2MM+ records lost in the past few weeks. When will the carnage stop? (More on this in a future post as I have some serious research to do. If you'd like to help, ping me directly.)
  2. Cloud computing... so I was talking to a colleague and friend over at PureWire, and he is absolutely religiously convinced that in a short period of time (and I quote) ... "Everyone will be doing it [in-the-cloud security], it's inevitable". I tend to disagree, in fact - I think "In the Cloud" security is a bit of a scary proposition - but I'm hoping to have a 20-questions type of interview posted here on this blog with the folks that are running the gears over at PureWire.
  3. I'm finally going to get around to posting that interview I did with a "semi-ethical DarkSEO dude" in the next few weeks, when things settle down at the ranch a little. It's been sitting on my desktop, and everyone's been killing me to publish it - problem is - it's huge (10+ pages of good info, I think). Does anyone know where I can post it? I'll post part of the interview to the blog here as a teaser, and the rest to a site somewhere, as a PDF/paper. Your suggestions are welcome.

Data Security in Financial Crisis

If you've not looked up from your screen in a while - there is a major world-wide recession underway. When you look around and see a company like Lehman Brothers basically out of business, the first instinct is to panic because the financial markets are clearly crumbling.

Rich Mogull's write-up entitled "Impact of the Economic Crisis on Security" was definitely worth a read, if you've not gotten a chance to read it yet. After you read Rich's blog entry... think about this: what's happening with all the data that's being "liquidated"? Scary, isn't it? Those behemoths of Wall Street hold terabytes of information - PII (personally identifiable information) of all types.

Once again, I think we're right to rant and rave about how those CEOs should be behind bars, or worse - but let's consider the data. The data, or what's happening (or going to happen) to it, is what's scaring me to death. I actually have data, my personal information, at some of those failed firms all over the place. When they're liquidated, or parted off and sold... is there a governing body somewhere that's keeping track and making sure disks are wiped clean, digitally shredded, so they can't be used in fraud or identity theft? All the government oversight we're proposing today, and the $700Bn (that number boggles my mind) "bailout", and not a single mention of information management anywhere in there.

I think there is a much deeper crisis here than just collapsing financials - because like it or not, that ship will list and right itself eventually (likely at the expense of you and me, the taxpayers). But the data that's mishandled, lost, stolen and forgotten about... who's going to bail ME out when my identity is stolen as a result?

Anyway... thought I'd just share what's on my mind. Feel free to reply, comment and rant with me.