Monday, September 20, 2010

Data Breaches - Who Really Loses

It's unfortunate that when a data breach happens, the real losers are often people who had no hand in the matter whatsoever.  In fact, in a case like that of the Lucile Salter Packard Children's Hospital at Stanford University, the real losers are likely patients who had nothing to do with this data breach.

When information is lost, the first instinct is often to fine, fine, and fine again the institutions we find negligent in either securing their patients' data or reporting the breaches.  The problem comes when the fines actually start hitting, and you come to realize who's really paying them.  I'm all for levying large fines against institutions that negligently lose my patient health records, but is it really in my interest to fine an institution large sums when the costs will most likely just be passed back to me as the patient?

Think about it.  Really think about who's paying the costs of the fines levied against hospitals, doctors and other practices when patient data walks out the door with a computer, as it did in this case.  This $250,000 fine isn't coming out of the hospital administrator's salary.  It's probably not coming out of the pool of money paid to the hospital's top administrative team as a yearly performance bonus.  Nope, it likely gets absorbed as an operating cost and passed on, through higher rates or some other crap, to the patients who end up there looking for care.

Let's forget the Lucile Packard Hospital case and take any medical establishment that has data breach issues.  Ask yourself who makes the decision to skimp on security, and then who gets trotted out to face the media as the scapegoat.  It's interesting that I've never seen these fines come with a clause that says something to the effect of "fine must be paid out of the hospital administrator's salary."  Of course, that will never happen with the amount of money the medical industry spends lobbying our dear members of the government...

By the way, let's go back to this Children's Hospital for a second.  If you read the article I reference, you could almost be convinced the hospital did everything right, including launching its own investigation and determining that the patient information was in no way compromised, etc., etc., etc. ...(wait ...what?).  The incident centers on an employee who used a computer that had access to patient information (so data access is computer-based, not user-based ...interesting access model, wouldn't you say?) and was allowed to walk off premises with that computer (how does something like this happen in real life?)... and they're surprised that the computer was not recoverable?!

There are two stellar quotes in the article I referenced... one from Susan Flanaga, RN, COO, which reads:
"The privacy and security safeguards we employ are some of the most advanced technologies and controls available to hospitals today."
I chuckled when I read that.  These supposedly advanced safeguards couldn't stop someone from taking home a computer they should never have been able to walk out with?  The other awesome quote is this one:
"Even though the investigation revealed that no patients were harmed and apparently no patient information was compromised..."
Wait, did he [Ed Kopetsky, the CIO] determine that?  Since they could not recover the computer, how exactly did they know that none of the information was compromised?  Isn't that the whole point?

I'm sure they could have been using full-disk encryption, combined with software that prevents the machine from booting off-site, combined with an automatic self-destruct program ...but then the story would have been much less exciting and the fine probably wouldn't have happened.  Right?
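To make the "won't boot off-site" idea above concrete, here's a minimal sketch of one way such a control can work: the disk-unlock key never lives on the laptop, it is fetched from an on-premises key server at boot, so a stolen machine stays sealed.  Everything here (the server name, port, and toy protocol) is hypothetical and purely for illustration.

```python
# Hypothetical sketch: refuse to unlock an encrypted volume unless an
# on-premises key server answers. All names/ports are made up.
import socket

KEY_SERVER = ("keyserver.internal", 8443)  # hypothetical on-site key escrow

def fetch_disk_key(timeout=2.0):
    """Return the disk-unlock key from the on-site server, or None off-site."""
    try:
        with socket.create_connection(KEY_SERVER, timeout=timeout) as s:
            s.sendall(b"UNLOCK-REQUEST\n")   # toy protocol, not a real one
            return s.recv(64) or None
    except OSError:
        # DNS failure, timeout, refused connection: we're not on-premises
        return None

def boot():
    key = fetch_disk_key()
    if key is None:
        raise SystemExit("Refusing to mount encrypted volume off-site.")
    # ... hand `key` to the full-disk-encryption layer here ...
```

A real deployment would pair something like this with actual FDE (e.g. network-bound disk encryption), but even this sketch shows the principle: the secret that unlocks the data never leaves the building with the hardware.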

Oh well, I guess the costs get passed on to the patients, they throw another "agent" onto every one of the machines or have every employee sign yet another affidavit saying they won't steal data, and life goes on...

Sunday, September 12, 2010

100 Years of Credit Monitoring

[steps on soapbox]

I don't know if you've noticed, and you probably have, but there have been a lot of data breaches lately.  Every single silly one of them works just like this:

  1. Company is negligent* with customers' data
  2. Company gets breached
  3. Company tries to sweep the incident under the rug
  4. Company gets caught/noticed/outed
  5. Company sends "Sorry" letters and 1 year's worth of credit monitoring to customers
Now, if you have gotten one of these "We're [not really] sorry" letters, you have probably found comfort in the fact that the company that just lost your data to an attacker who will use it against you is going to pay for credit monitoring for you.

Probably not, though, since you've already gotten 4-5 other letters like this in the past year or so, and you've already got all the credit monitoring you could possibly need, want, or even stand.  See, there is a key point here that is lost on most people who happily accept this resolution and move on.  The attacker who just took your data will use it for their own financial gain.  Period.  End of story.  Full stop.  These bad people don't raid databases and mass-compromise millions of machines because it's fun (although admittedly it can be, not that I would know) but because your pain is their gain.  I hope that's crystal clear.

So this leads me to the next question my mind logically jumps to: what if you sustain monetary or personal damages from one of these many data breaches?  It's usually next to impossible to prove which of the many, many breaches your data was a part of, but even if you can ...what then?

Well, there are a few options you have:

  1. Hope you've bought identity theft insurance and you can get your life on track
  2. Hope your bank gives back all the money that was stolen (unless you're a business this is actually still fairly likely)
  3. Cry
  4. Sue someone
  5. Be like 99% of the victims and do nothing...
So then.  We've got a bit of a problem.  Namely - you the consumer are screwed.

Here are several sad facts we're facing in the immediate future (if you've not already experienced these):

  1. You will get several "We're [not really] sorry" letters from organizations that have your private data, many of which you should never have given it to
  2. You will have your identity compromised, and receive bills or collection notices for items you never actually purchased (well, "you" did, but you know what I mean)
  3. These same organizations will not improve their overall security; many of them see data breaches as a calculated financial risk and are willing to just deal with them
  4. The same organizations will continue to be industry-regulation compliant (*cough* PCI DSS *cough*) and hide behind that when you try to legislate against them
So then... you have 100 concurrent years of credit monitoring, no one to pay for the actual damages that poor security of your data causes you (that money is the criminal's now), leaving you stuck with the bill, and nothing changes.

I really wish someone would pass a bill making the "victim" (an interesting word for the organization that just made you the victim) of a data breach financially and legally responsible for how the breach affects each and every person in its compromised pool.  Of course there are difficulties proving that your troubles came from any specific breach, etc., etc., etc., but at least this type of action would start to put the fear of God into these irresponsible organizations... and then I woke up, right?

[steps off soapbox]

Friday, September 3, 2010

Ambition Over Intelligence - Twitter, OAuth, and Wrong

If you're using Twitter, and most of you are, you've probably had your client break in the last day or few.  If you haven't, it's because your client is either written by the folks over at Twitter themselves, or you've updated your client very, very recently.

If you do a web search for "hacked twitter account" you'll get thousands upon thousands of entries.  Most of them are from celebrities crying that their Twitter account was hacked when in fact someone guessed or deduced their lame password and used it to post even more insane things (or less insane?) than the celebrity would post themselves.  At any rate ... all this craziness about hacked accounts has no doubt prompted Twitter to do something to increase security.

Unfortunately, as with many things so far in its short life, Twitter got it wrong.

The Ars Technica piece here [Titled: "Compromising Twitter's OAuth Security System"] probably says it much better than I can, so I urge you to go read this brilliant piece of technical writing.  Ryan does a masterful job breaking down the issues with OAuth, the problems Twitter has with its specific implementation, and some of the reasons why hacking Twitter "consumer keys" will be a hobby for bored school-kids for the foreseeable future.  I will, however, add my own commentary as I always do.

By the way, Ryan also wrote an OAuth primer (dealing with OAuth and OAuth WRAP) which you should probably read if you haven't already... it explains some of the OAuth details and behind-the-curtains issues that make it a flawed setup from the word go.  Seriously, mega-kudos Ryan, great chunk of writing there.

So as the title of the post says, ambition got the better of Twitter, it seems.  While I'm ordinarily on the other end of this conversation, urging technology to leave the laggards behind, a platform like Twitter that is socially rooted in its 3rd-party applications will suffer for its ambition, unfortunately.  Choosing to pull the trigger and disable basic authentication was a big move, but rolling out their own version of OAuth (papering over some of OAuth's inherent holes) is a big mistake.

You see, we're back to a functionality vs. security conversation.  What do you really care about?  Do you want your social medium to be explosively adopted by virtually any 3rd party... or do you want to provide the illusion of better security?  A tough call, right?

Twitter's biggest misstep, in my humble opinion, is threatening to invalidate secret consumer keys once they're discovered and published.  I think embedded keys are a major flaw in OAuth to begin with, but completely invalidating keys that are embedded in shipped software is worse, particularly when it invites a very interesting effect: developers knocking each others' products out of Twitter's good graces by publishing their keys.  Can you imagine the carnage?
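To see why a "secret" consumer key shipped inside a desktop client can never stay secret, consider a minimal sketch of OAuth 1.0a HMAC-SHA1 request signing: the client must hold the consumer secret in order to sign every request, so anyone with a copy of the binary can pull it out.  The key values below are made up for illustration; only the signing mechanics follow the spec.

```python
# Sketch of OAuth 1.0a HMAC-SHA1 signing (RFC 5849 style). The consumer
# secret has to be present on the client to compute the signature, which
# is exactly why embedded consumer keys are extractable.
import base64
import hashlib
import hmac
from urllib.parse import quote

CONSUMER_SECRET = "embedded-in-the-binary"   # hypothetical; ships with the app
TOKEN_SECRET = "per-user-token-secret"       # hypothetical

def sign(method, url, params, consumer_secret, token_secret):
    """Compute the oauth_signature for a request."""
    # 1. Percent-encode and sort the parameters into the signature base string.
    encoded = sorted((quote(k, safe=""), quote(str(v), safe=""))
                     for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in encoded)
    base = "&".join(quote(s, safe="") for s in (method.upper(), url, param_str))
    # 2. The signing key concatenates both secrets -- so the consumer
    #    secret must live on (and is recoverable from) the client.
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

Since every legitimate client performs this computation, "protecting" the consumer secret in a distributed binary amounts to obfuscation at best, and invalidating a published key punishes the app's users, not the person who extracted it.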

I think it'll be interesting to see what transpires.  I'm just angry I guess that my 2 favorite Twitter clients haven't worked (and still don't work today...although I guess I need to blame the app developers more than Twitter, right?) and it's making me cranky.

Oh well ...maybe I'll actually be productive and be forcibly social, in real life.