Tuesday, March 31, 2009

New Fraud Circulating: Oprah Winfrey Millionaire Contest

The IC3 (Internet Crime Complaint Center) is circulating an Intelligence Note about a new fraud scheme.

The short version is that predators are targeting people who are willing to give away their information for a chance to get on the Oprah show in April, and become a millionaire.

Full text is copied below... if you've received one of these emails you're encouraged to file a complaint at www.ic3.gov so these people can be tracked down and brought to justice.

"The IC3 has been alerted to the circulation of a fraudulent e-mail, purportedly from the "The Oprah Winfrey Show", notifying recipients of their nomination for the "Oprah Millionaire Contest Show." To participate, recipients are requested to mail their contact information such as full name, address, telephone number, and e-mail address. Verified contestants are then required to purchase airfare and a ticket to attend "The Oprah Winfrey Show," as well as complete a forthcoming contest form containing personal questions. The contestants are then promised a seat for "The Oprah Winfrey Show" in April and asked to provide their responses to the personal questions for a chance to win a million dollars.

Consumers always need to be alert to unsolicited e-mails. Do not open unsolicited e-mails or click on any embedded links, as they may contain viruses or malware. Providing your personally identifiable information will compromise your identity!"

Sunday, March 29, 2009

Hackers Refocus on Users

A warning to owners and operators of high-traffic web sites - you're under attack.

While that isn't really a revelation, this might be - attackers are after your users.

Because I've been following the compromises of thousands of web sites over the past several months, it has become clear that hackers are refocusing their attack strategy on the user.  The technique may be the same (SQL Injection is still the most popular way to execute) but rather than focusing on simply ripping off your database, defacing your site, or shutting you down, the recent shift in strategy has attackers injecting scripts and links that point to malware-infection sites they control; the focus is ever more keenly on infecting the user with some sort of malware.  The bolder attackers skip the middle step and inject the malware directly into poorly-written sites, infecting users straight off.

Recent disclosures of 0-day... scratch that... unpatched root-level defects in Internet Explorer 8 and FireFox [and rumored 0-days in other browsers] could be helping attackers silently plant nasty little bugs in the browsers of average-Joe users.  That foothold then serves as a lasting attack against the user... likely stealing information such as usernames and passwords, commonly used links, credit card numbers and other valuable data.

The worrisome shift becomes glaringly evident when one does a quick Google-search to discover over 57,000 hits in the last month alone!  That's an incredible number of articles written on these types of compromises - with many high-profile sites being "hacked" in this manner.

High-profile sites like Peugeot, various overseas embassies, the USAID website, BusinessWeek and many, many others have recently been turned into malware-distribution engines via some sort of script injection attack - most often injecting a hidden iframe into the window and delivering the malicious payload.
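To make that pattern concrete, here is a minimal sketch of the kind of check a site owner could run against their own pages to spot a telltale hidden iframe. This is standard-library Python with a placeholder URL; the zero-size/display:none attributes it looks for are common conventions in these injections, not taken from any one incident.

# Minimal sketch: flag suspicious hidden iframes in pages you own.
# The URL below is a placeholder; point it at your own site.
import re
import urllib.request

HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]*(?:width\s*=\s*["\']?0|height\s*=\s*["\']?0|display\s*:\s*none)[^>]*>',
    re.IGNORECASE)

def check_page(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    hits = HIDDEN_IFRAME.findall(html)
    for tag in hits:
        print("Possible injected iframe on", url, "->", tag[:120])
    return bool(hits)

if __name__ == "__main__":
    check_page("http://www.example.com/")  # replace with a page you own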

A perfect example is the timely, Easter-themed attack uncovered by CyberInsecure.com... which targets search-engine results for Easter...

This trend has focused on the users, I believe, for at least two reasons.  First, effective end-user protection mechanisms simply don't exist to combat custom-written malware that mutates and adapts quickly.  Second, and perhaps more importantly, the value of attacking an end-user is growing.  As computers are used for more and more tasks they will continue to house more and more highly sensitive information about their users... and attackers will continue to target and harvest that information for their own malicious gain.

The lesson to take away here is two-fold.  Protecting your site is more important than ever; but you already knew that.  The second, and perhaps discouraging, thing is that there is no foreseeable solution to this issue if you're an end-user.  Most of these delivery mechanisms rely on attacking your browser or injecting script via JavaScript (the most common route), and that functionality is typically needed to make the site usable.

What's a user to do?  Here's a checklist for end-users.  It won't guarantee your safety but it'll help.
  1. Never install anything off the web that you don't know for a fact is benign
  2. Your best-bet for browsers is still FireFox 3
  3. While it's not simple, using NoScript for FireFox is a *must* on sites you don't trust...
  4. Never click buttons on pop-ups... use ALT+F4 to close windows

Good luck users - it's up to you to pressure site owners to adopt better security practices by telling them you take your security seriously.

Stay tuned... I'm putting together a collaborative list of "How to Spot Evil on the Web"...

Saturday, March 28, 2009

Buffalo NAS & Internet File Sharing

If you've ever wondered what the people over at Buffalo NAS are thinking about... check out this flash-based video about their awesome web-based file-sharing product:  http://www.buffalo-technology.com/files/LinkSystem_Flash_04.swf.

From their page (here)...
"Buffalo's unique Web Access feature allows LinkStation Live users to share their pictures, music or other files with friends and family through any ordinary Web browser.
You don't need to install any software and neither do those with whom you want to share your files."
2 things wrong with this... 
  1. Isn't this file-sharing?  The potentially illegal kind?
  2. Remote access to your system (NAS) without having to install any software?  I hope they at least do serious authentication and encryption!  And it's a good thing browsers are secure enough to keep your system safe!
"BuffaloNAS is the portal site that is responsible for establishing a peer-to-peer connection between Buffalo NAS (Network Attached Storage) servers, such as the LinkStation Live and external users."
So if I understand this correctly, this portal site run by Buffalo NAS gains access to your computer, and enforces share permissions over the web.  That sounds secure.
"For example, if you have a LinkStation Live at your home or office, you can configure the integrated Web Access server so that certain shares on your system become available to users on the Internet."
What can I say... making it idiot-proof to share your company's documents over the web is a great idea, right?
"The configuration is a simple process. If you have an UPnP enabled router (most all recent routers support UPnP) you don't even need to configure anything on your firewall. All you have to do is enable the Web Access server at your local LinkStation.
Don't forget to set your access permissions in the "Folder Setup" section of the main menu.
Then, you merely need to enter a name (i.e. BaldEagle) and key (i.e. 12345) and wait for the acknowledgement from BuffalNAS.com. If no one has picked your name already, you are set to go."
Someone should conduct a security audit of this service!  How many users out there do you suppose you could guess the name of?  Further - there are no requirements for complexity on the passwords or anything!

There's even a Quick Start page too... to get you going quickly.  My favorite feature is to allow anonymous access to web-shared folders on the NAS.

Interestingly enough, you could build a quick script to exploit this service (or at least gather some great intel) in about 1 minute or less.  As an example, I just typed in something obvious such as https://buffalonas.com/steve and got the following: first an alert that the site's SSL certificate wasn't trusted... then, once I accepted that, a prompt for a username and password.  I stopped there...
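To give you an idea of how little effort that script would take, here's a rough Python sketch. It assumes the https://buffalonas.com/<name> URL pattern described above, and it's meant for checking whether a share name you own is publicly reachable - nothing more.

# Sketch: check whether a given BuffaloNAS share name answers publicly.
# Assumes the https://buffalonas.com/<name> scheme described in the post.
import ssl
import urllib.request

def name_is_live(name):
    url = "https://buffalonas.com/" + name
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # mirrors clicking through the untrusted-cert warning
    ctx.verify_mode = ssl.CERT_NONE
    try:
        resp = urllib.request.urlopen(url, timeout=10, context=ctx)
        return resp.getcode() < 400  # any page back means the name exists
    except Exception:
        return False

print(name_is_live("steve"))  # the example name from above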

I'm sorry - but this is just irresponsible on Buffalo's part.  Allowing access to a NAS system over the Internet, and advertising it as being as simple as they do - it's just irresponsible.

Weaponized Malware - Your Protection

Has anyone noticed that malware is being weaponized at an alarming and increasing rate lately?  What's worse, the dollar amounts being stolen as a result of these malware payloads [often incorrectly referred to as viruses] are being reported on more and more... and the victims are starting to pile up.

To see the far-reaching effects of [potentially] custom/purpose-written malware check out this story out of Carl Junction, MO.
"School superintendent Phil Cook said Monday that agent Gayle Warrener took the payroll department computer, which was believed to have been compromised during the theft. A computer virus that struck on Feb. 26 allowed someone to access the district account.

The $196,000 was electronically transferred to a number of banks nationwide in increments of about $8,000."

Obviously, the FBI has its hands full... and they're certainly investigating these incidents but in yet another "forest for the trees" move I think a point is being missed.

As the carnage mounts I can't help but look at my work laptop and wonder.  I've got anti{spam|virus|adware|spyware} agents all over the place sucking down my CPU cycles like I drink Mt. Dew... and I have to wonder: is any of it really keeping my laptop safe?  Symantec, McAfee, and many others continue to sell desktop protection products - but are they keeping your computer any safer than the operating system would natively?  I won't argue that point because the answer is obviously yes... but to what extent?  Is leaving your computer's safety and security to a piece of "anti-virus software" a smart idea in 2009?  I think that question is best answered by looking at what some of these anti-malware companies are charging for their products.  Symantec's product is FREE if you get the 100% rebate, and others have been selling for a similar price... does that mean these companies value their products at $0?

Think about this the next time you go to check that compliance report box for anti-virus software... "Check! We're secure..." isn't a valid answer.

Thursday, March 26, 2009

"Name That [RIA] Talk!"

Current Top 3 - Submissions close end-of-day today (Friday)
not in any particular order

  • RIA: Rich Insecure Application - Jim Manico
  • A Laugh RIAt: Security in Rich Internet Applications - Quine
  • OMG! pwn RIA via API - Lisa Hall
----------

Hello everyone - I'm once again at a creative impasse.  I've been trying all day to figure out a catchy, creative, and most importantly short title for my next talk, which is scheduled for the OWASP Canada trip I'm making in 2 weeks, as well as a submission to OWASP '09 (cat's outta da bag, whoops)...

Anyway, here are the rules for naming the talk... I will select the 3 best then let people vote on here and twitter... winner gets an honorable mention in the talk (and an adult beverage at next meet-up).
  • Must reference hacking (somehow)
  • Must reference RIA (Rich Internet Applications) as a whole...
  • Must be relatively short
  • Must be humorous
Submit them as comments here in the blog, or via twitter DMs... top 3 chosen SOON!

URL Shortening Services - the Bigger Picture

Now that I've had a chance to let the events of the TinyURL URL shortening issue soak in and run their course, I wanted to write this post about a much bigger picture than we (the collective) seemed to focus on.

There was more at play than a serious information disclosure and vulnerabilities that would allow relatively simple compromise of that service - and I think most everyone (myself included) got caught up in the microcosm of the moment and completely missed the proverbial forest for the trees. Allow me to explain.

There are a large number of these URL shortening services available - Google it... at last count there were at least 10, with about half of those in public use all over the place. Some of them (TinyURL, bit.ly, zi.ma, is.gd, etc.) are embedded in twitter applications (those 140 characters make it hard to paste an entire URL properly!) and are used hundreds if not thousands of times a day.

Think about that.

If you have a service that several thousand people click through a day, whose sole purpose is creating obfuscated URL redirects - what could you do with that? Weaponizing this sort of service could have catastrophic results. At first glance you could do everything from passing clickers through some site you control to run up your hit-count... or you could point everyone to a site that distributes malware drive-by style, or you could simply sp00f legitimate sites and harvest credentials... the possibilities are endless.

Here we come to my point about the "big picture"... these types of services are likely extremely high value targets due to their weaponized yield... they have a higher responsibility to deliver unparalleled levels of security for their users. The problem is most people don't realize what they're clicking on, and couldn't tell you why it's important to worry about security on these services.

The responsible thing to do would be to penetration-test every single one of them, and once we find the breaks fix them and move on to the next... maybe create some standard for the way they function? I don't have a great answer aside from this plea - Please enforce some basic best-practice security measures... you're endangering the computers & browsers of many, many people who implicitly trust you for no good reason.
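On the user side of that plea, one small bit of self-defense is to look before you leap: resolve where a shortened link points without actually following it. Here is a minimal sketch using the Python standard library; the sample short URL is made up, and any service that answers with a normal 301/302 redirect should behave the same way.

# Sketch: preview a short URL's destination without following the redirect.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # refuse to chase the redirect

def expand(short_url):
    opener = urllib.request.build_opener(NoRedirect)
    try:
        opener.open(short_url, timeout=10)
    except urllib.error.HTTPError as err:   # the 301/302 surfaces here
        return err.headers.get("Location")
    return None

print(expand("http://tinyurl.com/example"))  # made-up short link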

Tuesday, March 24, 2009

Is Google a liability?

Google today is what "Uncle Gino" is to every mob family. He's that treasure-trove of information, some of it insider information, who will readily spill it to anyone smart enough to ask the right questions.

Think about it - Google indexes your sites and applications and uncovers things you may not have intentionally put there... things like error pages, configuration files, and other goodies that should be kept reasonably close to the vest. The problem is, once Google gets a hold of the information and indexes it - it's stored up in the cache and find-able for long, long periods of time... and it's virtually impossible to get rid of.

Finding nuggets of information a la Johnny's "Google Hacking DB" [GHDB] is almost trivial now that there are formulas for some very, very informative searches.

For example, what if you wanted to look for SQL database dumps... simply enter this string into Google, and away you go --> "# Dumping data for table". Here's a great example of juicy data (http://pandoramon.sourceforge.net/sql/pandora_db.sql).

Looking to dig into someone's WSFTP configuration file, which should not be on a public web server? Try this "intitle:index.of ws_ftp.ini" ... which will lead you to this beautiful catch (http://www.radicalempiricism.org/courriel/ws_ftp/WS_FTP.ini) and yes, those are weakly-encoded passwords in there!
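If you run a web site, the flip side of these dorks is to check your own server for the same leftovers before Google indexes them. A rough sketch, with purely illustrative paths:

# Sketch: probe your own site for files the dorks above commonly turn up.
# The path list is illustrative, not exhaustive; run it only against sites you own.
import urllib.request

LIKELY_LEAKS = ["/WS_FTP.ini", "/ws_ftp.ini", "/dump.sql", "/backup.sql", "/phpinfo.php"]

def audit(base_url):
    for path in LIKELY_LEAKS:
        url = base_url.rstrip("/") + path
        try:
            resp = urllib.request.urlopen(url, timeout=10)
            if resp.getcode() == 200:
                print("Exposed:", url)
        except Exception:
            pass  # 404s and connection errors just mean that path isn't exposed

audit("http://www.example.com")  # a site you are responsible for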

See, Google knows a lot about you and your sites and applications, and all one has to do is ask to get that information from Google. Gaming the Googleplex isn't that difficult, as has been proven over and over by black-hat SEO techniques which snatch clicks and distribute malware. Optimizing pages for maximum Google-bility has almost become an arms race between the black-hat SEO ninjas and the Google geniuses; one side is always trying to out-flank the other and Google's brilliant brains haven't figured out a way to effectively win this battle.

So let's recap... Google has a lot of information about you, some of which you'd rather Google not share; add to that the fact that the search giant can be gamed while retrieving search results and you have an interesting paradox. Google is essential to the 'net because of the information that it gathers and puts at your fingertips (and your potential customers' fingertips!); but on the other side of that coin is very real chaos...

Ask yourself... is Google a liability?

Monday, March 23, 2009

Reflections on 0-day disclosure

With the topic of no more free bug disclosures heating up and people taking sides I figured I would once again take the opportunity to point out some flaws in each side's argument...

First, to those claiming that because security researchers (legitimate, ethical ones like Alex, Charlie and Dino) are asking to get paid for the bugs they find, they are somehow being irresponsible and will somehow usher in the apocalypse... get over it. As the author here points out...
"Security vulnerabilities exist, they always have and they always will. Get over it. Bugs exist much longer than days as it takes most vendors months to fix anything and once you have reported the bug to a vendor — it is no longer a secret"
... that is absolutely right. Allow me to add to that brilliant statement by saying that those bugs are likely known by others... well before the date of their disclosure. A security bug's first effective date is the day the software is released, not the day that it's publicly announced. Not disclosing it isn't the same thing as using it for evil... don't equate the two.

Now, on to the other side. It's everyone's responsibility to push for better security. We're not going to get there by trying to get vendors to pony up cash rewards for disclosing their bugs to them - real people and systems are at real risk. You're [likely] not the only one who knows about this bug, so as a good guy you're ethically obligated to do the right thing and disclose to the vendor so it can be mitigated. If you're hoping to get paid handsomely [or at all] for finding security bugs - you're wearing the wrong-colored hat...

The bottom line? While researchers technically aren't doing anything wrong, and asking to get paid is a fair request - in a perfect world you would get compensated for your hard work... but you and I both know we don't live in that perfect world where research is compensated. So do the right thing and disclose the bug, or admit you'd rather be a black-hat.

Psst! ZDNet... your directory is showing...

You know how it's a good idea to turn directory listing off, pretty much universally?

http://i.zdnet.com/blogs/ Careful loading this, there are a *lot* of images in there!

While this is certainly a non-issue as far as security goes, it's a little whoops which once again reminds us that simple configuration errors, as abundant as they are, can appear anywhere. Just thought I'd post so anyone interested could go browse around ZDNet's jpg archives... some cool oldies in there!

Wednesday, March 18, 2009

FOX News Fail on Twitter

You have to admit, twitter is the social medium today. I even get my as-it-happens news from CNN, Fox, and other news sources (just to be balanced, of course); so you can imagine my chuckle when I saw this on my tweetdeck this morning.

I'm amused... apparently their MySQL connection failed :)

WHOOPS.

UPDATE:
So, as @mubix so aptly pointed out, this really isn't a FOX problem - it looks like a Twitter "oops"... after doing a quick search I found hundreds of these mysql_connect and other assorted MySQL errors all over the place. Sounds like there are some issues over at Twitter... or maybe something else?



~~ Z O M G ~~ --> 3/18/09 @ 2:41pm CDT
In case you're curious... this has devolved into a very serious configuration flaw - TinyURL's site has a major problem. As you can see, going to http://tinyurl.com/php.php (the PHP configuration display page) shows you way, way, waaaaay too much information.

Can someone please tell me WHY this exists?
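If you're wondering whether one of your own servers has the same problem, it's easy to script a check. The path and the markers below are just common phpinfo() defaults - nothing here is specific to TinyURL's setup:

# Sketch: detect an exposed phpinfo()-style page on a host you run.
# A non-empty result means the page is leaking environment details.
import urllib.request

MARKERS = ("PHP Version", "DOCUMENT_ROOT", "SERVER_SOFTWARE", "PATH")

def check_phpinfo(base_url, path="/php.php"):
    try:
        html = urllib.request.urlopen(base_url.rstrip("/") + path,
                                      timeout=10).read().decode("utf-8", "replace")
    except Exception:
        return []   # nothing answering at that path
    return [m for m in MARKERS if m in html]

print(check_phpinfo("http://www.example.com"))  # placeholder host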

----------
Thanks to Steve Ragan from The Tech Herald for this link... more data-mining type information, all publicly available, about TinyURL.com: http://www.robtex.com/dns/tinyurl.com.html

Woohoo! We got written up in The Register! Thanks Dan!

FINALLY - 3/18/09 @ 10:12pm CDT
... TinyURL finally got the sense to turn OFF the tinyurl.com/php.php page... FINALLY. I guess better late than never.

UPDATE - 3/19/09 @ 7:00am CDT
I received this email overnight from Kevin "Gilby" Gilbertson...

While our backup server was misconfigured to show the php errors, you
incorrectly concluded that we are running the webserver under a root
or administrator user.
--
Kevin "Gilby" Gilbertson
TinyURL.com, Founder
http://tinyurl.com

UPDATE - 3/19/09 @12:10pm CDT
Well... I can see some of you have gotten quite worked up about this issue... so let me address it as such.

  • First, thank you to Kevin Gilbertson (founder, TinyURL.com) for the constructive email exchange and removal of some of the dangerous content on the root site; and in addition to that Kevin has promised a full security review of his infrastructure which makes me (and should make you) feel better about the service. Progress is everything here; in the security profession we all understand that vulnerabilities are a fact of life - it's how you deal with them and how quickly you close them that separates the wheat from the chaff. Again, thanks to Kevin for being constructive - great job.
  • Next, regarding the "root" user question -
Web Server: Lighttpd/1.4.21
Sample Config page: http://redmine.lighttpd.net/repositories/entry/lighttpd/branche/lighttpd-1.4.x/doc/lighttpd.conf
... so from there we can discover that Lighttpd does *not* automatically drop to a non-privileged account when it's done starting up; instead it's fully configurable, like so:
188 # chroot() to directory (default: no chroot() )
189 #server.chroot = "/"
190
191 ## change uid to (default: don't care)
192 #server.username = "wwwrun"
193
194 ## change uid to (default: don't care)
195 #server.groupname = "wwwrun"
So while this in no way conclusively proves that this server is running as root:wheel, it also does not tell us that it is not... therefore the only thing that could prove this would be a "ps" from the CLI on the box itself (see the sketch after this list)...
  • Finally... Those lobbing nasty comments are missing the point. The TinyURL site/server had serious security issues, which I have since sent along to Kevin and also removed from the post above for the sake of being responsible... Kevin has done a good job closing these issues up and will be following up on what is left on his own. Disclosing the full PHP configuration is a serious security risk, and if you don't think so... I feel bad for your customers/employer. I will certainly welcome any healthy debate about the topic.
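For completeness, here is a rough Python wrapper around that "ps" check mentioned above. It assumes the web server process is named lighttpd and that you can run it on the box itself:

# Sketch: list the accounts the lighttpd processes are running under.
# 'lighttpd' is an assumption; substitute your web server's process name.
import subprocess

def webserver_users(process_name="lighttpd"):
    # -C selects by command name, -o user= prints only the owning account
    out = subprocess.run(["ps", "-C", process_name, "-o", "user="],
                         capture_output=True, text=True)
    return sorted(set(out.stdout.split()))

print(webserver_users())  # ['root'] would confirm the worry; ['www-data'] would not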
As a final thought... think of the "big picture" here. This service is utilized thousands (maybe more) of times per hour, and who knows how many times per day. If someone could simply take over this service... think of the carnage that could be done by redirecting every click to some drive-by malware site. Posting config like that may seem like a "tempest in a teacup" but I assure you there is a bigger picture here - and it is not pretty.


Thanks for reading, I appreciate the debate and welcome continued discourse!

Monday, March 16, 2009

KGB File Compressor - MALWARE

"KGBCompresor - 1GB into 10MB + Hacked Firefox-30% speed more" .... yea, it's malware

DO NOT DOWNLOAD
_http://darapid.com/downloadx/kgbcompresor-1gb-into-10mb-hacked-firefox-30-speed-more.html
DO NOT DOWNLOAD

I hate these fake free tools you can download on the 'net. The latest I've come across is the KGB Compressor tool, which touts that it can squeeze 1Gb into a 10Mb pack... right. The crazy part is that when you download it there are 2 MP3s in the bunch; I can only imagine what those are trojaned with...

Passing the binary through VirusTotal nets the following results:

File kgb_archiver_.rar received on 03.16.2009 06:07:40 (CET)
Antivirus | Version | Last Update | Result
a-squared | 4.0.0.101 | 2009.03.16 | -
AhnLab-V3 | 5.0.0.2 | 2009.03.16 | -
AntiVir | 7.9.0.114 | 2009.03.15 | -
Authentium | 5.1.0.4 | 2009.03.15 | -
Avast | 4.8.1335.0 | 2009.03.16 | -
AVG | 8.0.0.237 | 2009.03.15 | -
BitDefender | 7.2 | 2009.03.16 | -
CAT-QuickHeal | 10.00 | 2009.03.16 | -
ClamAV | 0.94.1 | 2009.03.16 | -
Comodo | 1057 | 2009.03.15 | -
DrWeb | 4.44.0.09170 | 2009.03.16 | -
eSafe | 7.0.17.0 | 2009.03.15 | Win32.Constructor.sl
eTrust-Vet | 31.6.6388 | 2009.03.09 | -
F-Prot | 4.4.4.56 | 2009.03.15 | -
F-Secure | 8.0.14470.0 | 2009.03.16 | -
Fortinet | 3.117.0.0 | 2009.03.16 | -
GData | 19 | 2009.03.16 | -
Ikarus | T3.1.1.45.0 | 2009.03.16 | -
K7AntiVirus | 7.10.671 | 2009.03.14 | Constructor.Win32.SlhBack
Kaspersky | 7.0.0.125 | 2009.03.16 | -
McAfee | 5554 | 2009.03.15 | -
McAfee+Artemis | 5554 | 2009.03.15 | -
McAfee-GW-Edition | 6.7.6 | 2009.03.16 | -
Microsoft | 1.4405 | 2009.03.15 | -
NOD32 | 3937 | 2009.03.15 | -
Norman | 6.00.06 | 2009.03.13 | -
nProtect | 2009.1.8.0 | 2009.03.16 | -
Panda | 10.0.0.10 | 2009.03.15 | -
PCTools | 4.4.2.0 | 2009.03.15 | -
Prevx1 | V2 | 2009.03.16 | -
Rising | 21.21.00.00 | 2009.03.16 | -
Sophos | 4.39.0 | 2009.03.16 | -
Sunbelt | 3.2.1858.2 | 2009.03.15 | -
Symantec | 1.4.4.12 | 2009.03.16 | -
TheHacker | 6.3.3.0.282 | 2009.03.16 | -
TrendMicro | 8.700.0.1004 | 2009.03.16 | -
VBA32 | 3.12.10.1 | 2009.03.15 | -
ViRobot | 2009.3.16.1649 | 2009.03.16 | -
VirusBuster | 4.6.5.0 | 2009.03.15 | -

Spread the word... another FAKE utility out there...

3/16/09 @12:41pm CDT -- UPDATE
VirScan.org Results
Scanned the file with VirScan.org (per @mubix's recommendation)... found NOTHING.
http://virscan.org/report/53e7315a0b1661a97922f7e3eb9b6622.html

CWSandbox Analysis
Nothing obvious jumps out at me... interesting.
http://www.cwsandbox.org/?page=report&analysisid=1237679&password=jbmpa
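If you want to compare a copy of the file against reports like these (the VirScan link above is keyed on the sample's MD5), hash it before doing anything else. A small sketch; the filename is the one shown in the scan output above:

# Sketch: hash a suspect download so it can be matched against published reports.
import hashlib

def file_hashes(path, chunk_size=1 << 20):
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

print(file_hashes("kgb_archiver_.rar"))  # the sample named in the scan above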

Saturday, March 14, 2009

It should *not* be this easy...

... to hack into a National Library, much less come back almost 2 days later only to discover the page is *still defaced* and the hacks are *still there*.

Not that I really want to promote this type of activity - but this one you have to see for yourself. I would love to say that this sort of thing can raise awareness, but it doesn't. It will cause temporary embarrassment, maybe a little bit of confusion, and then the system will be restored and people will forget about it in a few days. It's just sad, really.

Anyway - check out the Jade Crew hack here... http://jadecrew.org/blog/?p=34

Friday, March 13, 2009

"Swoopo" - How not to code a login page

With the amount of attention web applications (and particularly their authentication) have gotten over the past year or more, it would stand to reason that when a company builds an eCommerce site it would not fall prey to the same silly issues that have plagued its predecessors.

Not so over at Swoopo.com, no sir. See if you can figure out what's wrong with this series of shots.

First, let's try a random user name and see what happens...


Interesting. What about if we have a real username and a bad password?


My mouth was agape... really. I'm glad I decided to do a little security recon before I tried this site out; I can't imagine what the rest of their "security" is like.
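The giveaway, as the two shots imply, is that the site answers differently for an unknown username than for a known username with a bad password - a free username-enumeration oracle. The fix is to make the two failures indistinguishable. A minimal sketch of that idea (the user store and hashing below are placeholders, not Swoopo's code):

# Sketch: return one generic failure message whether the user exists or not.
# (A real implementation would use a salted, slow hash such as bcrypt.)
import hashlib
import hmac

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}  # demo data only
GENERIC_ERROR = "The username or password you entered is incorrect."

def login(username, password):
    stored = USERS.get(username)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # compare_digest keeps the comparison constant-time; comparing against a
    # dummy value when the user doesn't exist keeps the timing similar too
    ok = hmac.compare_digest(stored or supplied + "x", supplied)
    return "Welcome back!" if ok else GENERIC_ERROR

print(login("alice", "wrong"))      # generic error
print(login("nobody", "anything"))  # the very same generic error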

Thursday, March 12, 2009

InfoSec World 2009 - Slides Available

For those of you who've been asking... I promised, and am now delivering, the slides from the 2 InfoSec World 2009 talks I gave...
  1. Application Security Testing - Results You Need - v1.0: A high-level foundational (management-level) approach to web application security testing. An overview of what you want to know, and how you may want to go about it...
  2. Total Browser Pwnag3 - v1.0: [with Josh Abraham from Rapid7] This talk focused on the web browser itself and presented a series of attack vectors aimed at destroying any faith anyone has left in the "web browser" as a tool... and also included a sprinkling of never-before-demonstrated exploits... good stuff.
They are both posted here [www.slideshare.net/Rafallos]. Josh will be posting the video demos with voice-over very very soon... and I will link to it from here.

InfoSec 2009 - Karaoke Goodness (part 1)

First and foremost thanks to everyone who I got to meet and hang out with this week at InfoSec World 2009 - it was truly (as we've said all week) epic.

In case you missed it, you missed some absolutely insane, socially unacceptable, and at times mind-blowing parties. This post is just a very small glimpse into what happens when liquor, travel exhaustion, and anticipation of a speaking engagement collide... at a karaoke bar... in Orlando.

This is Mike Murray (@mmurray on twitter) doing his best rendition of Vanilla Ice's "Ice Ice Baby"... and before you ridicule, think of the cojones it took to not only get up there and do this... but to agree to be filmed and allow it to be posted. Mike, you are a madman.


Wednesday, March 11, 2009

Curse of the Cloud - So it begins

It's official... I'm a "cloud" cynic.

I've been trying to avoid commentary on "Cloud Security"... but apparently I can no longer hide. A colleague of mine, Joe Dibiase sent me a link to the article off BreitBart.com regarding the recent Google snafu which exposed some documents from their cloud repository. The issue isn't one of Google's ability to secure their gargantuan globally-distributed environment, and it's not even about the meager .05% (or so they claim) of private-turned-public documents that were exposed.

No, the over-arching issue here is that companies continue to think it's a good idea to stuff private (and in some cases extremely private, a la PII) data into "the cloud". Let me cut off the comment before it formulates in your brain: I am fully aware of the power of leveraged infrastructure, cost and efficiency savings and such. The problem doesn't lie in the theory, ladies and gentlemutts... it's the execution. Cloud computing, with its associated storage models, looks brilliant on paper, and if executed well it can be truly special. The problem I'm having as I look over these issues is that it all relies on people to execute well... fail.

Here's my break-down... feel free to criticize, comment... constructively.

POSITIVES
  1. Leveraged cost-model (shared infrastructure)
  2. Global diversification of infrastructure (fault tolerance)
  3. Decreased waste of processing cycles (less idle hardware all over the world?)
  4. Availability (not the same as fault-tolerance)
NEGATIVES
  1. Epic failure is one mouse click, shell script, or configuration error away
With all the positives, and the hoopla surrounding cloud computing generated by marketing machines (mostly as a result of cost savings in today's challenging environment), people are starting to jump on board the cloud computing bus before they actually think about what they're doing. Is the cloud computing principle a good idea? I believe it can be. Is there a possibility of cloud computing being applicable to nearly every aspect of our businesses in the near future? Yes, I believe so. Do I believe that the pro/con teeter-totter is tilting in the right direction? No.

Discuss.

Tuesday, March 10, 2009

HELP - Appropriate Social Protocol

Greetings from Disney's Coronado Springs Resort where InfoSec World 2009 is in full swing.

I need your help... now, I have to preface it by saying I've met a lot of people, and I am totally terrible with the whole "remembering names" thing unless we've interacted in person at least a few times... so that being said...
What is the appropriate social protocol when someone (who obviously knows you) comes up to you randomly and starts talking (no introduction, assuming you remember their name) and "catching up"?
Can someone with good social skills help me out here?

For the record, if I have ever forgotten your name I am so sorry... I am horrible with names (working on it) and it is not a reflection on you...

Friday, March 6, 2009

Why Do Businesses Buy Security?

Companies from small to enterprise-scale don't just want to run security programs or projects. Something drives them to do so. Over the past several years of helping extremely large enterprises and small businesses work through security programs and projects I've noticed that there really are 4 main drivers for this activity.
  1. Compliance - to comply with internal or external regulations
  2. Compelling Event - a response to an incident (typically a breach)
  3. Competitive Advantage - as a marketable advantage to their customers
  4. Due Diligence - demonstrate some effort
I personally find that they break out as so [rough unscientific numbers]:
  1. Compliance --> 50%
  2. Compelling Event --> 30%
  3. Competitive Advantage --> 5%
  4. Due Diligence --> 15%
[Compliance]
I've rambled and ranted for and against compliance in past conversations; and I think this illustrates my point even further. Now, I know these aren't scientifically accurate numbers but based on experience most of the customers I've dealt with over the most recent 12 months have been driven to purchasing products & services because PCI says so. They're not actually interested in better security, they just want to do the minimum amount of work that allows them to check the box, and move on.

[Compelling Event]
I have gotten many frantic, panicked calls over the last several months from people who read my blog, figure out where I work, and want to evaluate (as an example) web application security tools because they've had some incident they can't tell me about... but it's clear they're about to be audited or fined by some regulatory body and they must demonstrate they're trying to right their ship... like, yesterday. This rarely goes well because the intention here is to fix (as Arian Evans pointed out recently on the WASC mailing list) a single instance of what troubles them.

[Competitive Advantage]
I've had the pleasure of working with 1 (yes, 1... in 12 months...) company, which I cannot name, that has started a comprehensive security programme with the intent of using it as a marketable competitive advantage. Whether this will be a straw man dressed in fine clothes... that is yet to be seen.

[Due Diligence]
Doing due diligence work is tough. There is a fine line between being able to say "we've done something" and "we're confident we have mitigated our risks appropriately" - not surprisingly most companies go for the former. Due diligence is all about demonstrating that you've done "what is necessary and proper" - which sucks because it's always left to interpretation. Who gets to say that you've done enough?

In the end you've probably realized by now that there is no option #5 - "To Be More Secure". Maybe it's today's economic climate, maybe it's that we're still selling life insurance to a reckless youth, or maybe we simply can't measure our own success... I'm going to go with all of the above for a thousand please, Alex.

How *not* to take credit cards

I'm going straight to hell for this one, honestly - but it has to be posted. *sigh*...

Being a good Catholic I get one of these at least once a year (actually 2 of them a year, but anyway...) and I could not just let this go. I've spoken time and time again to people who feel they need to ask folks to put their credit card info (number, signature, expiration, etc.) - basically all you'd need for card-not-present fraud - out in the open. I know you're supposed to trust your fellow Church-goers, but this is a little bit silly, no?




Thursday, March 5, 2009

Chinese Hackers ... like mushrooms

I'm not sure if you've caught this from the 4X Security team blog, but apparently Chinese hacker unions are popping up like "mushrooms". Their story regarding the Russian Consulate is interesting enough on its own, so I won't re-iterate it here - but what I do find interesting is the rate at which the Chinese are starting to weaponize hacking.

Chinese hackers have long been known to perpetrate politically motivated web site defacements; which really isn't a big deal to be honest, but it appears as though the "cyber-violence" is starting to escalate at an alarming rate, and the groups of hackers (unions) are organizing faster and faster.

As an intelligent person one has to ask themselves if the Chinese government has anything to do with this... the answer is relatively simple. If you read this piece from last year, you can see that while hacking in China has evolved much like anywhere else, the Chinese government has brainwashed their own population into performing these acts of cyber-crime on their behalf. I suppose saying that Chinese hackers are popping up like mushrooms isn't far from the truth - because the Chinese government is keeping them in the dark, and feeding them shit.

The one odd thing is that, as organized and smart (I won't say intelligent, that's a different thing) as some of these hackers are, they can't seem to free-think or break through their government's "Great Firewall of China"... yet they can hack US and other websites hosted all over the world? Something doesn't add up.

Mushrooms.... indeed.

Wednesday, March 4, 2009

The Issue with Willful Negligence

As I wrote the other day in my analysis of the American Recovery and Reinvestment Act of 2009 (ARRA), I'm just disappointed with the wording of that piece of legislation.

You can always tell *who* wrote a piece of legislation (or rather, contributed to the writing of it) by looking at the language. You can tell this piece was written by people who have a vested interest in keeping themselves shielded from the nuisance of compliance violations just by reading the carefully crafted language used in the document. Essentially, in order to make any sort of fine or penalty count (a real cost that would be felt) there have to be more than 500 records lost and there has to be proven willful neglect.
‘‘(c) NONCOMPLIANCE DUE TO WILLFUL NEGLECT.— ‘‘(1) IN GENERAL.—A violation of a provision of this part due to willful neglect is a violation for which the Secretary is required to impose a penalty under subsection (a)(1)."
Has anyone ever tried to prove, beyond the shadow of a doubt, "willful neglect"? The term means that someone has to prove that the entity in question willingly did not take precautions to avoid the possible breach. All a company has to do is a ridiculously small amount of work, then justify not doing more, and magically it can be shielded from a willful-neglect finding.

This is incredible! How is anyone expected to be held accountable when there is enough wiggle room in the enforcement section of the compliance regulation to drive an 18-wheeler through?

I guess all I can say is that this is quite typical of the government's regulations. Look at SOX, or some of the other regulations that have come from the government - they're terrible! Any company with a half-decent lawyer will figure out a way to get around this... I can't wait for the first case to be brought up.

Monday, March 2, 2009

Analysis of the Stimulus Bill and Healthcare Privacy

There is a substantial problem out there with many of the regulations, compliance initiatives, and associated laws. They lack teeth. If you don't believe how bad this is, navigate over to the PCI Security Standards Council and do a quick search for the word "penalty" - I'll save you the effort because you'll get no results.

George Hulme published an interesting piece today regarding this very topic on the Health Information Trust Alliance (HiTrust) Central site... and while I don't doubt that the government means business, there are some problems right off the start...
  • The fines are $100 - $25,000 per violation for violations that are shown to have occurred "without knowledge"
  • The fines climb to $10,000 - $250,000 for willful neglect; and if the entities don't fix the problems the fines continue up to a cool $1.5MM
I already see some major issues with this rhetoric. And it's only rhetoric... I went to the source (and it wasn't easy, this thing is a monster) and found the exact text in case you're curious, here; start around page 164.

I'm pulling specific pieces out, below, for your [dis]pleasure. I recommend you read this on your own, as your mileage on my interpretations may vary - I am *clearly* not an authority in such matters.

First off, let's define the term breach, here off page 164:
"(1) BREACH.—The term ‘‘breach’’ means the unauthorized acquisition, access, use, or disclosure of protected health information which compromises the security, privacy, or integrity of protected health information maintained by or on behalf of a person. Such term does not include any unintentional acquisition, access, use, or disclosure of such information by an employee or agent of the covered entity or business associate involved if such acquisition, access, use, or disclosure, respectively, was made in good faith and within the course and scope of the employment or other contractual relationship of such employee or agent, respectively, with the covered entity or business associate and if such information is not further acquired, accessed, used, or disclosed by such employee or agent."
Immediately I am struck by the marked distinction between what is an unauthorized disclosure and what is not. If you notice, the definition of what it is is one sentence, while the definition of what it isn't is the rest of the long paragraph. Note the two interesting parts in there... particularly the "or other contractual relationship" bit, and "in good faith". It's all interesting legal double-speak.

There is also a revision provision on page 169 that forces a yearly review and guidance revision to the technical safeguards - this is good because it forces a yearly revisitation of what works, what doesn't (at least I hope that's the intent).
"ANNUAL GUIDANCE.—For the first year beginning after the date of the enactment of this Act and annually thereafter, the Secretary of Health and Human Services shall, in consultation with industry stakeholders, annually issue guidance on the most effective and appropriate technical safeguards..."

I was thrilled to see Section 4402 (Notification in the case of Breach) in this document as well - maybe we'll see a flood of medical breach notifications coming...

This section is quite interesting:
"(c) BREACHES TREATED AS DISCOVERED.—For purposes of this section, a breach shall be treated as discovered by a covered entity or by a business associate as of the first day on which such breach is known to such entity or associate, respectively, (including any person, other than the individual committing the breach, that is an employee, officer, or other agent of such entity or associate, respectively) or should reasonably have been known to such entity or associate (or person) to have occurred."
So... if a hacker reports the breach to authorities, after successfully stealing medical records, it isn't considered "discovered" until someone at the entity acknowledges it!?

The document then goes on to talk about timeliness of disclosure, saying that the entity has no more than 60 days to notify all those who were compromised, which isn't necessarily quick but at least there's a deadline for disclosure. And then there's this bit...

"(2) MEDIA NOTICE.—Notice shall be provided to prominent media outlets serving a State or jurisdiction, following the discovery of a breach described in subsection (a), if the unsecured protected health information of more than 500 residents of such State or jurisdiction is, or is reasonably believed to have been, accessed, acquired, or disclosed during such breach."
You have to know this will strike some fear into the hearts of medical records care-takers... everyone fears the media, and public disclosure of such a breach is often followed by public ridicule and nastiness. The entity must also notify the Secretary and post the breach to the Dept. of Health and Human Services website - so, more public shaming.

Page 181 also struck me as important because it identifies, for the first time that I have seen, that the data taken and kept should be the minimum required to accomplish a task... this is a giant leap forward and actually seeks to lay out that only the data sets that are absolutely needed may be used, whereas currently I see entities keeping way more information than they could possibly need. While I think this will be difficult to define, it is certainly a necessary first step.

Where it really gets good - or, as George would say, where the teeth are - is on page 196, section 4409. Reading through this section it appears quite clear that accidental disclosures, if due care is taken, are very lightly penalized, with as little as $100 per record; while willful-neglect fines start at $50,000 per record and top off at a cool $1.5MM... ouch. While the fines, coupled with the public shaming and disclosure laws, are generally a good deterrent - consider something else.

In the case where an entity has a very serious case of the HIPAA non-compliance blues, and can convince the auditor that they've done what is considered due care or due diligence ... while spending as little actual time, money and resources as possible - it may very well end up being less expensive to simply pay the resulting fines than to have actually good security protections in place. Even in the worst-case scenario where willful neglect is proven (and let's face it, that's nearly impossible without an internal whistle-blower) the maximum fine is only $1.5MM... while the costs of associated security technologies, manpower, and process improvement may run well into 10x that cost.

So at the end of the day we have to ask ourselves... while this law is a decent starting point - does it do enough to protect citizen's medical data? Or is this simply another display of hand-waving and rhetoric we've come to expect from government-sponsored compliance regulations?

Sorry, but I'm not impressed.