Friday, December 21, 2007

Who do you blame?





First - take a peek at this, get over the fact that I used IE7 to take the screenshot, and then think about this: if you're a random person browsing the web and you hit this site and are greeted by this error - who do you blame?




When it's time to lay blame, there are only a few candidates, namely:
  1. A lazy administrator
  2. Poorly written software
  3. Anonymous "bad guys"

Without a doubt, incidents like this only throw more kerosene on the flames of the old religious argument - Linux/Unix vs. Windows... Sadly, I would argue that administrators of modern websites face a trio of problems - workload, software quality, and bad guys. If you're an administrator you know what I'm talking about. You've got a hundred projects or tasks and only 20 hours in a day to get them done (you have to sleep some time). Of course, the quality of modern software isn't helping anyone do their jobs better... every time you turn around you hear of another bug that has to be patched, or some default mis-configuration that has to be changed to avoid exploits. Why can't we just get quality software that's stable, bug-free, and configured securely by default? The last factor, and perhaps the only predictable one, is the bad guys. I say predictable because as an administrator you can count on being attacked, guaranteed.

I guess it's easy enough to pick on this specific goof and say "See, that's what you get for running Microsoft's software". I'm not sure I agree - the problem is, I can just as easily mis-configure a LAMP (Linux, Apache, MySQL, PHP) application component and expose my server to attack. The lesson here is that anything can be misconfigured out of haste or laziness. Linux is more complex, so its administrators (typically) know more, because they have to be more knowledgeable to run the stuff. Windows is more point-and-click, so the generalization that MS admins are relatively low-skilled can hold at times. But are either of these generalizations "law"? Can you find an administrator of Microsoft systems who is just as good at securing their infrastructure as a skilled Linux/Unix admin? Of course. Conversely - can you find a Linux/Unix admin who is just as inept as a bad Windows admin? Definitely.
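To make the "anything can be misconfigured" point concrete, here's a minimal Python sketch - the URL is a placeholder and the checks are crude heuristics, not a real audit - that pokes a site for two classic signs of a hasty deployment: directory listing left enabled, and a server banner that leaks its exact version.

  import urllib.request

  # Hypothetical, minimal check: look for an auto-generated directory index and
  # a version-leaking Server header. The URL below is made up for illustration.
  def quick_default_check(url="http://www.example.com/"):
      with urllib.request.urlopen(url, timeout=10) as resp:
          body = resp.read(65536).decode("utf-8", errors="replace")
          server = resp.headers.get("Server", "")
      if "Index of /" in body:
          print("Directory listing appears to be enabled")
      if any(ch.isdigit() for ch in server):
          print("Server banner leaks a version string:", server)

  if __name__ == "__main__":
      quick_default_check()

It's not a vulnerability scan, just the kind of five-minute sanity check that would have caught the error page above before a random visitor did.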

So I come back to - who do you blame? Well, first, the site is owned by a security company, so a gaffe like this is inexcusable. Second, if you're going to host a site on the Internet, at least check your defaults, and hire a competent administrator...

Who do I blame? The admin - sorry - no matter how bad the software is, you're making a conscious choice to use it, and should therefore have a strategy to keep it secure and running. As an analogy: if you buy a car with a reputation for breaking down and an overly simplistic setup, and you don't keep the proper tools to keep it going, it's your own fault when you break down in the middle of the motorway...

Thursday, December 20, 2007

MioNet - Western Digital takes it online

Western Digital has a new product out called MioNet (you can read about MioNet here), which allows people who buy these massive external disk drives to "share" them out to the Internet, using Western Digital's secure MioNet application. There are complications, of course, as you would imagine... but are we inviting problems in? Let's take a look.

A new article appearing on PC World's site addresses this product lightly but, in my humble opinion, completely misses the point. The article criticizes the MioNet software for restricting "user rights" by employing an internal DRM mechanism to limit the sharing (between different users) of identified music/video file-types (list available here). Sure, MioNet blocks users from sharing media files (audio/video) between users, simply because it's next to impossible to verify digital rights. So in that light, if I buy an MP3 somewhere and try to share it with a friend to whom I've given access to my MioNet shares, it will be blocked by the system. By the same token, if I create some custom music or audio files which just happen to be in one of these blocked formats, I can't share them with another user, since MioNet has no way to verify that they have rights to the file.

Now, it's easy to complain and point the finger at Western Digital and say they're restricting people's rights to share files - but they are providing a service, and they don't wish to end up as the next hot-bed for illegal file-swapping, so they're taking precautions. You can still share your pictures; it's just multimedia files you can't share... I say get over it - or find another way to do this. It is a service, after all... no one's forcing you to use it. Someone commented on the article that they would refrain from purchasing WD products in the future and urged others to do the same... why? Because WD is trying to err on the side of caution and digital rights?

Anyway, as I said before, I think the article misses the point. Forget the illegal MP3/movie/etc. swapping that everyone's in a tizzy about... I wish someone would address the security and privacy part of this. After all, you're allowing your private files - which could contain financial information, personal legal records, or other personal information - to be shared to the Internet, bypass your firewall (which by now I'll assume you have...), and be held at the mercy of a 3rd party you're supposed to trust.

Even if Western Digital has a perfect application, with unbreakable (read: un-hackable) internals such that I can't bypass their access (AuthZ) controls... it still all hinges on a username/password combination for access to these files. Hackers and malware authors everywhere must be thrilled to read this. I can just imagine a whole new wave of malware looking to steal people's MioNet access credentials. I don't have the product installed, so I can't tell whether it requires "strong passwords", but I'm going to guess no.

A quick pro/con analysis of this new way of sharing files without uploading them to the general Internet looks like this...

Pro
  • Ability to access your files remotely (in case you forget something at home?)
  • Secure access to the system using only a login and password
  • No firewall configurations needed at home (the MioNet software does it auto-magically)
  • Share non-DRM files like pictures, documents, etc with friends, family or co-workers
  • Remote computer control and screen sharing
  • Remote monitoring of a web-cam you can set up with access credentials (monitor your computer's webcam from the office!)

Con

  • Remote access to your internal network files over the Internet (this doesn't even sound like a good idea)
  • Untested, unverified (or at least unpublished) system (MioNet) being trusted to guard your potentially private files
  • Notice that one of the "features" that WD touts is that this application can bypass your firewall, and you don't have to do anything to get it working (network back-door anyone?)
  • DRM-style filtering (although crude) limits your ability to share home-made movies of the kids or the dog with your in-laws

So there it is, and I think the fate of MioNet can be put quite simply: the positives (for most users) far outweigh the negatives, at least as your typical end-user will see it. Most users aren't as concerned with the cons as security professionals and paranoids are - they see all these great features, coupled with the fact that the system is "password protected", and they're sold. But there are clearly problems - or at least issues that need to be addressed to make this system more viable.

First - I would like to see a 3rd party certification that this product is "hacker tested" or at least source-code-reviewed to ensure any major and simple security defects are found and eradicated. Second, I would like to see some sort of "strong authentication" option for those users who want to share more than just photos (such as highly sensitive material like financial and personal documents). Aside from that, I think this product has some potential - and no - I don't think that the DRM'ish attempt to curb illegal file-sharing (albeit crude, I'll admit) should be removed.

Wednesday, December 19, 2007

Hijacking Google's ads...

The battlefield is changing; even the goals are now changing. In fact, the good vs. evil of the Internet world is changing so fast it's hard to keep up with what we good guys are supposed to protect.
A quick analysis of a recent article (here: Google advertisements hijacked by trojan) shows that we're now facing something entirely different. The hacks aren't targeting what you'd expect; this trojan hijacks keyword results to send the user to different sites that the attackers control. The goal here is anyone's guess, but there are a few main possibilities:
  • Deliver malware based on search keyword hijacks
  • Redirect traffic from unsuspecting end-users (and thereby make money off them)
  • Cause financial damage to a company (Google is the likely target here)
In this type of attack, keywords are the battleground. With advertising and site traffic being the "easy way to make effortless money," as a source put it, it is no wonder that malware has now been detected hijacking selected keywords and ad redirections.
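For what it's worth, the Qhost family of trojans is generally associated with tampering with the local hosts file so that ad and search domains resolve to servers the attacker controls. Here's a rough Python sketch of that idea - the watched domain list is illustrative, not a real indicator-of-compromise feed:

  import ipaddress

  # Flag any non-loopback hosts-file override of a few well-known ad/search
  # domains. The domains and paths here are examples, not an exhaustive list.
  WATCHED = {"google.com", "www.google.com", "pagead2.googlesyndication.com"}
  HOSTS_PATHS = [r"C:\Windows\System32\drivers\etc\hosts", "/etc/hosts"]

  def suspicious_host_entries():
      hits = []
      for path in HOSTS_PATHS:
          try:
              lines = open(path, encoding="utf-8", errors="replace").read().splitlines()
          except OSError:
              continue
          for line in lines:
              parts = line.split("#", 1)[0].split()
              if len(parts) < 2:
                  continue
              addr, names = parts[0], parts[1:]
              try:
                  if ipaddress.ip_address(addr).is_loopback:
                      continue  # harmless local redirect
              except ValueError:
                  continue
              hits.extend((name, addr) for name in names if name.lower() in WATCHED)
      return hits

  print(suspicious_host_entries())

Crude, yes - but it shows how small the footprint of an ad hijack can be on an infected machine.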
It's getting so that people can't even trust the ads they're being served up anymore... what's the world coming to?! Think about the bigger picture though - what happens when trust in some technology or technique starts to falter? Applying that to an advertising technique of a major Internet ad-placement vendor such as Google yields a very likely scenario if you subscribe to conspiracy theories... someone is trying to put Google out of business, or at least hurt their business by demonstrating that their system is vulnerable and ineffective for advertising.
Maybe I've stretched a little far, but the point is solid, I think. If you can't build a better mouse-trap, engineer mice that easily prove your competitor's product ineffective.

Genius. Or something far worse - either way, you just never know what will be next. You can be sure of one thing though: it'll involve some way for the criminal element to scam money from legitimate businesses with as little effort and up-front cost as possible. Money is the root of all the evil out there, folks, and if you still don't believe that... then you're simply not paying attention.

Read more about the Qhost.WU trojan here, at BitDefender's site.

SuckerWare: The cost of "Free Smileys" Paper Released

Hello readers - I'm writing this post to announce that the final revision of "SuckerWare - The Cost of a Free Smiley" is finally released on my site. I've done some research into the EULAs of some of the popular "addon" software, and the results appalled even me.

You can read about it here: http://www.ishackingyou.com/ - go to the Reading Room -> IsHackingYou_Publications section. Please leave any comments or reactions you have, as I love to hear what you think!

Thanks for reading, I hope you enjoy the paper.

Friday, December 14, 2007

Privacy Debate - Who really cares and why

I've been thinking about this a lot lately, and with the recent rash of articles about Google.com's search features storing IP and search query string information, and ASK.com's new ability to erase your information from their servers upon request, I had to simply stop and ask myself... do I care?

I've come to the conclusion that there are 3 different types of people out there when it comes to privacy and awareness. I'll give my categories and examples of each, and you can agree or disagree as you will - but I'll make my case for my system of classification anyway.

  1. The Concerned - This is the type of person who isn't necessarily doing anything wrong, or searching for anything like "Jihad" or "chemical weapons manual"; they're using the Internet for personal or business uses and other normal, everyday stuff, but still care that they are being tracked and monitored. This can be a healthy paranoia, meant to keep "big brother" from taking over our lives.
  2. The Clueless - Happily clicking and typing away this type of person hasn't a care in the world and often has that glazed-over look when someone starts talking about tracking and Internet monitoring. This type of user will generally not have a clue that they're being tracked, why they're being tracked, or how it can be used against them. This user can be a saint or sinner - but in the end doesn't know/care about privacy.
  3. The Paranoid - Whether they are doing anything wrong or not, this class of user will generally rebel against any type of "Big Brother" activity. Oftentimes this type of user will forgo special service offerings or targeted information simply because it provides someone, somewhere, with a hint into their private life. The paranoid are typically a closed-minded group and see any form of monitoring or information gathering as evil.
I've grouped these three categories together because I think this makes it easier to discuss the merits (or demerits) of each and show where the line between unfounded paranoia and healthy skepticism lies. Most people fall into category 2, the Clueless. My parents, and even some of my co-workers fall into this category. I would venture a guess that a lot of the community of writers that blast this or that company for data aggregation, collection and monitoring are in category 3. These category 3 folks have crossed the line to loony-town and often have very little realistic basis for their positions but will defend them to the death if cornered. Between those two lies category 1 where a healthy mix of paranoia and acceptance makes for intelligent discourse and reaction - I would hope that most of my readers are in category 1 or are striving to get themselves there.

There are dangers to being in either category 2 or 3, and without diving into a rant I'll quickly tell you why. Being clueless allows others to run your life and take over your privacy. You should be concerned about your privacy and know who is doing what with your personal information, history, records, etc. This information can let others track your every move, and potentially lead to a very Orwellian society, which you shouldn't want. Being overly paranoid forces you to question and see the negative side of every potential idea without being able to understand the positives. As a concrete example, speed cameras can clearly be a violation of personal liberties by tracking drivers, their speed, and where they come and go on the motorways. Looking at the other side of the coin, however, they were installed with the intention of locating and catching speeders, who are a danger to the millions of drivers on Britain's roads. The intelligent thing to do is analyze the risk vs. reward of the situation and simply look at the big picture. Are cameras so bad? Can they be limited in some way to only capture the information they absolutely need to snag speeders? Who controls the information, and how well is it guarded?

Look - I'm not saying that it's not OK to be paranoid - because I sure am some of the time - but let's be reasonable. The government shouldn't be tracking our reading habits because that can lead to something more sinister - but if I search for "radar detector" on Google's search engine - it should be OK for Google to send me targeted ads that could potentially save me money or help me track down the best product as long as that information is carefully guarded, policed, and disposed of.

Privacy, my friends, is a slippery slope, so rappel wisely: draw a line you're unwilling to cross, but be reasonable - then get paranoid when someone pushes you past it.

Friday, December 7, 2007

Those damn "smileys" are so EVIL

We all know there's no such thing as a free lunch - but check this out.

I've gotten annoyed by all the little "smiley" options out there, and how they all claim to be "AdWare/SpyWare free"... which is utter nonsense and we all know it. SO... with that in mind, I did a little investigating and am writing a feature about it on my site (http://www.ishackingyou.com/), but for now I'll leave you with this interesting tidbit from the SweetIM EULA/TOS... you think it over. (source: http://www.sweetim.com/):


In order to receive the benefits provided by the SweetIM Software, you hereby grant permission for the SweetIM Software to (i) utilize the processor and bandwidth of your computer (ii) use certain personal information that you have submitted to your instant messenger provider. You understand that the SweetIM Software will protect the privacy and integrity of your computer resources and communication and ensure the unobtrusive utilization of your computer resources to the greatest extent possible. The Software is exposed to various security issues, and should be regarded as unsecure. By accepting this Agreement, you acknowledge and accept that the Software, and any information you download or offer to share by means of the Software, may be exposed to unauthorized access, interception, corruption, damage or misuse, and should be regarded as insecure. You accept all responsibility for such security risks and any damage resulting therefrom.

OK, so the software will use my personal information and use my CPU for reasons not yet described... and it's acknowledged to be insecure - and I'm the one taking responsibility for that?! Hold on, did I read that right?

If this were on the package of some software you were about to buy and use for business... would you, in your right mind, install it? Yet I'm willing to bet it's installed in dozens of places on your network, and maybe in your home.

More on this soon.

Thursday, December 6, 2007

Regular shameless self-promotion

If you've not already done so, point over to my IsHackingYou.com site for more stuff to read and comment on.

I now return you to your regular reading, thanks!

ZDNet looks forward into the past... huh?

I was reading some email today from our friends over at ZDNet, and if you haven't caught their stuff lately - it's pretty good reading. Their blogs and news articles tend to have good coverage on the Microsoft side of the house, with Mary Jo Foley's "An unblinking eye on Microsoft [RSS link]" column... but then my eyes wandered over to this link, and I couldn't help myself. I'm sorry to rag on the subject a bit, but... what the hell?

This whitepaper titled "Where Online Hackers Are Headed in 2007: "Coming Soon" to a Website Near You (and Your Hard Drive)!" by Kevin Prince (Chief Security Officer for Perimeter eSecurity) from Feb 2007 is posted front-and-center on the Thursday, 12/6/07 ZDNet Must-Read News Alert email. It's in the section "White Papers from our partners". I looked at it, and thought for a second. Why am I getting this in December? And more importantly... did Kevin get it right?

Well, I can't tell you what ZDNet's motivation was for sending me this "must read" whitepaper from Feb '07 (maybe they're out of sponsors so they're re-hashing some of the old crap?), but I'll pull some points out of it for you to analyze and think over. [Sorry Kevin, I'm really not picking on you.]

For the most part, the first few sections hit the nail on the head in reference to history, and what the past few years have brought us in terms of attacks. Yes, the past used to be people attacking us at the desktop/server level with an outside-in attack... things have changed, and that is rightly pointed out. I love the sentence "Stopping new attack types demands strong security posture" uhmm... yea?

Here are the main points I think Kevin makes (Kevin, please reply if you feel I've mis-interpreted your paper).
  • Attacks for 2007 will move from exploiting vulns to social-engineering people into exploiting themselves -- check!
  • Attacks for 2007 will be browser-based -- check!
  • Malicious websites will lure users using SPAM, messaging and hijack-redirection -- check!
  • A layered approach will be required to reduce malware threats -- duh!
Kevin goes on to talk about some of the methods that'll be needed to stop aggressive malware. I'll break these down, and do a mini-analysis. If you'd like to read more, I'll be releasing a larger analysis of what it takes to stop malware these days on my site (http://www.ishackingyou.com) - check there for the "whitepaper" in a few days.
  • Intrusion Detection/Prevention: Old news! 2007 saw IDS/IPS become yesterday's technology. Yes, everyone should have this on the desktop by now and I realize few do but that doesn't mean it's the next big thing - in fact... IDS is the last old thing in my humble opinion. The buzz words for 2007 were "extrusion detection"...
  • URL Filtering: Yes - I have to agree there... this is a big frontier that we didn't address enough in 2007, but should have. I think that, stretching into 2008-2009, we as security professionals should be utilizing web filtering technology a lot more to save our desktops from attacks.
  • SPAM filtering: Obviously. The horse is dead, and we're still kicking it - SPAM rules the SMTP gateways, and I saw some statistic yesterday that the UK gets something like 50% of the world's SPAM? SPAM filtering should be done at every company, and if you're not going to do it yourself, hire someone to do it for you that's better at it... next!
  • Policies & PC Restrictions: I lumped these together even though Kevin kept them separate because they're essentially the same thing. You can't do one without the other... you should be restricting your users from hurting themselves... after all - there is still no patch for the ignorant end-user.
  • Gateway A/V: In 2007 I think we as security pros did more of it, but aren't utilizing the technology enough. I agree with Kevin, it should have been an initiative in 2007 - but we're still burning resources at the desktop doing this... why?
  • Vulnerability Scanning: Remember, if you're not scanning for vulnerabilities on your network and perimeter, someone else with bad intentions is (see the quick sketch after this list). I'll leave that one alone.
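To put a picture to that last bullet, here's a minimal Python sketch of a crude TCP connect sweep against a host you own. The target address is a placeholder, and a real assessment belongs with a proper scanner and proper permission - this just shows how little effort the other side needs.

  import socket

  # Crude "connect" sweep of a few common service ports on one host.
  COMMON_PORTS = [21, 22, 23, 25, 80, 110, 135, 139, 443, 445, 1433, 3306, 3389]

  def quick_scan(host="192.0.2.10", ports=COMMON_PORTS, timeout=0.5):
      open_ports = []
      for port in ports:
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
              s.settimeout(timeout)
              if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                  open_ports.append(port)
      return open_ports

  if __name__ == "__main__":
      print("Open ports:", quick_scan())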
So there you have it - for the most part, I think the paper (aside from stating way too much of the obvious) was on the mark. The sad fact is... it doesn't matter how many crystal ball papers like this our security managers and business leaders read... the messages will still likely go unheeded.

Good luck out there.

Monday, December 3, 2007

Psst! Hey buddy, wanna buy an 0-day vuln?

If you haven't lived in a cave for the last few months, you've undoubtedly heard about WabiSabiLabi, the self-proclaimed "eBay of vulnerabilities". Well, if you've been on the site since its August inception, or read any of the press on it... you know first-hand that it's pure farce.

I've seen some interesting articles on the site, most notably this great review of it at Darknet.org.uk; but I thought I'd think this through myself a little and see if I can glean something meaningful from the roaring sound of crickets chirping over at the site. (You've picked up on my sarcasm by now, obviously?)

First, let me do a quick break-down of the stuff that's available for purchase at the site right now.

  • 20 total vulnerabilities available
  • 45% Windows-based
  • 30% Linux-based
  • 25% web application-based
  • 2 vulnerabilities have been bid on
I'm not even going to get into the significance of the Windows versus Linux vulnerabilities, but I do want to point out that there is a significant number of web application vulnerabilities here, by percentage (even if they are rather weak-looking).

Let's face it, if eBay ran like this they would have been out of business on week 2. I'm absolutely amused with these guys who run this site. I think that the Darknet writer breaks it down with smashing pin-point accuracy when referring to the vulnerability market...
Perhaps they didn’t think the whole concept out. Most of the people that need these kind of exploits - have access to them. Those that code trade, those that don’t code steal and trade - those that have no skills..pick up the left overs.
Nail, meet head. One has to ask - what's the business model here? Are the folks at WabiSabiLabi marketing (or pandering) to the security companies? Perhaps to the BlackHats (as unlikely as that seems)? Maybe to some other crowd? What's your target market, WabiSabiLabi?

It's no great revelation that a site which puts "0Day vulnerabilities" up for auction is a bit of a strange animal. If you have an 0day vulnerability, why would you risk exposing it to the world, when you can clearly make much more money selling it underground? Perhaps I've stumbled upon something here... is this a marketplace for second-rate hacks who've found some mediocre defect in some code somewhere, have no contacts to sell it to the underground, and are looking to connect with people who want to buy? Perhaps this is the target market... so let me build a quick profile of the typical seller:
  • mediocre code-monkey
  • no contacts to really "sell" an 0day vulnerability to the underground
  • no ethics to use responsible disclosure to get it fixed through the vendor or OpenSource owner
Really? I can't even imagine what idiot would bid on one of these auctions... I'm going to make a mental stretch here, and shout at me if you think I'm wrong, but I'd say that the majority of the real, legitimately dangerous 0day stuff is sold or traded (or hoarded?) in the dark corners of the Internet, or in pubs and uneventful money-exchanges, where they laugh at the guys running WabiSabiLabi and go about their business.

Friday, November 30, 2007

Are you Sync'd?

I've been intrigued by the new Ford initiative to push Microsoft's "Sync" technology into their cars. On the surface, it sounds really cool, being able to voice-activate your music (iPod or compatible player), your cell phone for voice calls... that's pretty cool.

What bugs me though is the fact that there is very little information about how secure this all is. Given that this technology is built upon Microsoft's rock-solid security foundation, I can imagine that we're covered... right? I jest - but the reality is that we should have some security information for the Sync system. I mean, I don't want someone to be able to connect to my music player, or my car's phone, while I'm not paying attention!

I've looked over some of the available documentation, and there is surprisingly little on the "PIN" feature for the Bluetooth connectivity in Sync. The support page says that Sync generates a PIN, but it doesn't really tell you whether it's the same PIN all the time, or whether it's a one-time, system-generated "pseudo-random" PIN.
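Just to illustrate why the distinction matters, this little Python sketch contrasts a hard-coded pairing PIN with a fresh pseudo-random one. I have no visibility into what Sync actually does, so treat it purely as an illustration.

  import secrets

  # A fixed pairing PIN gives an attacker one unchanging target; a fresh
  # random PIN per pairing at least moves the target every time (though a
  # 4-digit space is still only 10,000 guesses).
  HARDCODED_PIN = "0000"  # the bad case

  def fresh_pairing_pin(digits=4):
      return str(secrets.randbelow(10 ** digits)).zfill(digits)

  print("Fixed PIN:", HARDCODED_PIN)
  print("Per-pairing PIN:", fresh_pairing_pin())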

Also - I'd love to delve deeper into the world of "Upgrading your Sync" (click here for more info). As the link says, you're supposed to be able to plug your USB device into the Sync and have it upgrade itself from the provided storage device. I'm guessing that a lot of time was NOT spent on securing that system from hackers - but what do I know?

I'm wondering whether someone will quickly crack the Sync interface and install custom apps, voices, etc. onto the thing - and whether something nasty may one day end up on the system. What if someone sends a maliciously-crafted text message that overflows and exploits an unchecked buffer in Sync? What if someone figures out how to steal or remotely inject entries into my phone book using Bluetooth? What if the media player has a fault and a craftily-coded MP3 exploits the system to corrupt my address book, or hijacks my cell phone to constantly dial those 1-900 numbers in the Caribbean?

I know one thing though... if they ever decide to plug that entertainment system into anything critical to the operation of the vehicle... I'm going to avoid it like the plague!

If you're interested, Sync's page has some FAQs around security... very skimpy though... (click here) - this quote is by far my favorite...

What if Sync gets a virus? Could that cause my car to malfunction?
The Sync platform is independent of a vehicle’s engine. Security is vitally important to the Ford Motor Company and Microsoft. Effective measures have been taken to protect Sync from viruses, and we do not share any information on our strategies or tactics.
---
You just can't make that kind of PR up, folks... they're taking great measures to protect your security, they're just not willing to release any of those details... just in case.

Wednesday, November 14, 2007

Silent cure, spreads disease (WSUS and Microsoft)

It's amazing - every few months, it seems, Microsoft gets called out on something that makes you just look and say... why? For example, today, all across the US, WSUS servers began breaking with the dreaded "unknown error" problem. I'm not going to beat a dead horse, since you can read about the issue in depth here, on SearchSecurity.com; instead, I'll once again take a slightly different angle.

Here's the short and ugly of it...

The problem was that on Sunday evening, Microsoft renamed a product category entry for Forefront to clarify the scope of updates that will be included in the future. Unfortunately, the company said, the category name that was used included the word Nitrogen in double quotes (appearing as "Nitrogen"). A double quote is a restricted character within WSUS, which created an error condition on the administration console.

This isn't the first time WSUS users have run into trouble. After Microsoft's May 2007 security updates, several users reported WSUS malfunctions.
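The root cause reads like a classic missing-input-validation story. Here's a hypothetical Python sketch of the safety net you'd want in front of a rename like that; the double quote is the only character actually named in the report, and the rest of the restricted set is assumed here purely for illustration.

  import re

  # Reject (or escape) restricted characters before a renamed category ever
  # reaches downstream consoles. The character set beyond the double quote is
  # an assumption for illustration, not WSUS's actual rule set.
  RESTRICTED = re.compile("[\"'<>&]")

  def safe_category_name(name):
      hits = RESTRICTED.findall(name)
      if hits:
          raise ValueError("category name contains restricted characters: %r" % hits)
      return name

  try:
      safe_category_name('Forefront "Nitrogen" updates')
  except ValueError as err:
      print("Rejected before publishing:", err)

Thirty seconds of validation on the publishing side would have spared every admin console downstream.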
Put yourself in the following situation, and then ask yourself what your answer would be to my hypothetical question.

You're running a very large enterprise, 200+ servers, 1,000+ workstations all spread about in different locations. You depend on Microsoft to patch their software and you're using the native WSUS software to keep your systems up-to-date.

Given the above, what's worse... an unreliable patching tool, or no patching tool at all? The reason I pose this question to you, my readers, is because it's appalling what things system admins have to put up with. We're almost a decade into the patching circus and still the cure is often the cause of the disease? What's worse, now the "cure" is often a silent update that breaks several major components in key systems. This presents a different security challenge than people normally see. Let's assume that on a good day, WSUS patches 100% of your systems in record time, delivering peace of mind to your organization in a matter of minutes. Let's also say that on a critical day, MS WSUS breaks, and some major super-critical patch that's coming out today cannot go out. Does it even matter that you have the patch in hand when your delivery mechanism is busted?
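One low-tech way to keep your own score, independent of whatever the WSUS console claims, is to ask a box directly which hotfixes it thinks it has. A quick Python sketch - it assumes a Windows host where the wmic utility is available, and the KB number is just a placeholder:

  import subprocess

  # Ask Windows directly which hotfixes are installed, bypassing the WSUS
  # console's own reporting. The KB number below is a placeholder.
  def hotfix_installed(kb="KB936181"):
      out = subprocess.run(
          ["wmic", "qfe", "get", "HotFixID"],
          capture_output=True, text=True, check=True
      ).stdout
      return kb.upper() in out.upper()

  if __name__ == "__main__":
      print("Patch present:", hotfix_installed())

It doesn't fix a broken delivery mechanism, but at least you're not trusting the same tool to both deploy the patch and tell you it deployed the patch.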

I would argue that once again, this is a clear case where separate tools must be decoupled. You can't have the company that builds the software also writing the software that patches that buggy software. That's a case of the fox guarding the henhouse in the worst way.

To Microsoft I say "Shame on you!" for at least two reasons.

  1. First, you've clearly managed to have poor quality control (really? this many years later?) and you've introduced a bug in a major, mission-critical piece of software which manages our Microsoft infrastructure
  2. Second, you've obviously managed to send out this magical update without alerting anyone, or even hinting that a new update to the updater was headed to our systems
  3. Third... you still haven't learned.

That's really all there is - just another blunder. Maybe we're hearing about things like this because MS is such a big target but isn't that the bane of being the biggest? You should by now know you have the biggest target painted on your forehead, and every spotlight is on you waiting for you to screw up. Well, Microsoft, thank you for not letting me down, once again.

To be fair, I run Gentoo Linux, which, over all the years I've been updating my systems, has never once broken itself...

Wednesday, November 7, 2007

The H1-B and security?

Allow me to explain.

This news story, getting front-(virtual)-page coverage on ComputerWorld, is extremely interesting and, at the same time, boring old news. For years now we've been debating how the H1-B visa takes jobs away from US-based workers. Whether you agree with that or not, let's address the issue from a different angle. I would like to turn your attention to how this affects security, and the viability of a company.

Let's assume for one minute that Company A (some big-pharma company) is hiring H1-B "contractors" from off-shore. Let's further assume that they're effectively vetting their employees, because they understand the value of knowing everything about the people they hire in order to stem insider security threats. Now, maybe I'm reaching a bit here, but here is how I see a company effectively vetting its employees:
  1. Employment history check (calling previous employers)
  2. Criminal background check
  3. Drug testing
  4. Credit check
  5. Extensive background check (military-grade) for those who work in security or super-secret labs making the next wonder-drug
Again, let's not look at whether these H1-B candidates take jobs away from US citizens; let's address security. All of the above checks can be run against a candidate from the United States, but how many of them can you effectively run against, for example, someone from China? You can only get the employment history that their contracting agency gives you, and you have to assume they're not just making things up (really, do you trust them?). You have essentially no way of doing a criminal background check (save for Interpol - and if someone shows up there, something is already very wrong). Maybe you can ask them to submit to a drug test, but they will certainly have no verifiable credit history, and no extensive background check will be available.

What does that mean to you, the hiring manager at Company A? It means you're hiring a threat. Period. It doesn't matter how you try to word-smith your contracts, the fact is you're hiring an unknown, and in security unknown equals threat. If you don't know what's in the box, odds are you're not stupid enough to allow it into your perimeter. But - the fact is, we allow contractors we haven't fully vetted into our environment all the time. We then give them access as system administrators, customer service representatives, database administrators, researchers, lab assistants and many, many other sensitive positions.

So let me summarize. The article is interesting in that it brings back up an old debate which has raged for years and will likely not be settled by you or me, but rather by greedy politicians who cede positions and votes to lobby groups headed up by Oracle, Microsoft, and other greed-based organizations whose goal is to hire the absolute cheapest labor, period. But that's not the point. The point is that we're allowing threats into our environments - nay, we're asking threats to come into our environments. That's a security issue, and one we need to raise.

Wednesday, October 31, 2007

Shameless self-promotion...

Have you checked out http://www.ishackingyou.com? No? What are you waiting for, an invitation? OK, fine, you're invited...

Thanks. Be safe!

Playing in a sandbox

I've been thinking about this for a while now. It's bothered me to the point where I can't help but write about it, suck it up, and do some research.

I'm talking about sandboxes. Not the kind your kids play in, but the kind you want to run something in when you don't trust its intentions. Say, for example, there's a website you want to visit that's not exactly super trustworthy. You really want to go grab the latest hacker-tool, or download some script, or some piece of knowledge - whatever. You know you're not going to use MSIE (doesn't matter what version), and you're not sure that even FireFox will protect you adequately against what this particular site may throw at you...

So what are you paranoid about? Remember you're not paranoid if they really ARE out to get you. And boy howdy are they. They being the "bad guys" out there, in cyberspace, trying to take over your computer, steal your credit card information, use your computer to attack the government of Canada, and any number of nasty things.

So you are now faced with two choices if you're the conventional user.
  • Option 1 is to simply forego visiting the site... not necessarily the best way to go - but at least it's safer
  • Option 2 is to take a chance and hope you're not infected with some trojan, BHO, or XSS'd out of your life savings.
  • But there's a secret Option 3! That's right - you can "sandbox" your browser and render it (or at least your computer) impervious to those nasty bugs out to get you (see the rough sketch after this list).
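As a taste of what Option 3 can look like, here's a rough Python sketch under some stated assumptions: a dedicated low-privilege local account named "sandbox" exists, sudo is set up to let you launch Firefox as that account, and a throwaway profile keeps the browsing state disposable. It's isolation on the cheap, not a true sandbox, and the URL is a placeholder.

  import os
  import subprocess
  import tempfile

  # Launch the browser as a separate low-privilege user with a disposable
  # profile, so a compromised session has less of "you" to steal or trash.
  def browse_untrusted(url):
      profile = tempfile.mkdtemp(prefix="throwaway-profile-")
      os.chmod(profile, 0o777)  # let the "sandbox" account write its profile here
      subprocess.run([
          "sudo", "-H", "-u", "sandbox",
          "firefox", "-no-remote", "-profile", profile, url,
      ])

  browse_untrusted("http://suspicious.example.com/")

Proper sandboxing products go much further than this (virtualizing the file system, registry, and network), which is exactly what the research below is meant to dig into.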

Right about now you're probably telling yourself that I'm some loony selling snake oil to a dying man. You're only half right. I'm doing some research in this area and will have some interesting findings soon. If you happen to know some vendors (big or small) who are selling or giving away tools to help in this endeavor - please contact me! If you're a vendor - contact me. I'll publish the results of my research in a few weeks and hopefully make us all a teeny bit safer.

I'm looking for products to review, and people to help test "real life" scenarios. Please let me know if you'd be willing to participate.

/Be safe!

Thursday, October 25, 2007

Airline misses the point... spins story

Have you heard anything about the Delta jet that left Chicago's Midway airport and, upon take-off, had its baggage door fly open and spill out some duffel bags? Yes, there could have been serious risk of loss of life, but I bet you didn't hear it that way. The story being carried on local news, and on national media (what little there is), sounds a little like this:

Jet fallout: Girl's dolls lost from plane

'RATHER TRAUMATIC' | Duffel bag fell out of open cargo door

October 25, 2007

Pat Telan regrets talking his 9-year-old daughter Abby Ann into checking a duffel bag containing her favorite dolls at the gate before they boarded an Atlanta-bound flight out of Midway Airport.

The bag fell out of the Delta Connection plane Sunday after a door in the cargo hold opened after takeoff. The plane landed safely, and no one on the ground was injured by the two pieces of luggage that fell out.

One of the bags was found and returned to its owner, but Abby's duffel is still missing. Now she is trying to cope with the loss of some of her closest friends.

What's wrong with this story as it's covered? This situation feels a little like the one in "Pulp Fiction" where the cleaner is called in to clean up a mess that the two main characters created. In this case, the mess being cleaned up and quietly swept under the rug is the fact that Delta's carelessness caused a cargo door to fling open during takeoff. How is this not being covered as a Delta blunder in security/maintenance/operations, rather than as a human-interest piece about a little girl losing her dolly?

There had to be some serious coverage of this issue somewhere... and here it is. This article mentions some interesting points about Delta's failure...
  • Airline inspectors had recently written up the plane, a 70-passenger Bombardier CRJ700, for deferred maintenance on a malfunctioning indicator light on the cargo door, the FAA said
  • The plane did not fly all the way to Atlanta, but rather turned around and landed safely
  • It is extremely rare for a latch to come off during flight, according to the FAA
  • The most frequent cause of latch failure, according to the FAA, is ground crew error

Perhaps we need to analyze the potential effect if a cargo bay flies open during takeoff?
  • Possible loss of cabin pressure leading to a crash and thus loss of life
  • Additional luggage 'fallout' which would rain down on traffic, homes, or unsuspecting people later in the flight
  • Plane flight de-stabilization due to the gaping hole in the bottom of the plane
You'll notice that none of the above ends well, and yet the story isn't being carried as a piece on airline safety and security - but rather as a piece about a little girl who lost her dolly. The cleanup man has done his job, apparently. Personally, I'm appalled.

Wednesday, October 24, 2007

Fraudsters evolve to stealing YOU

In the ever-evolving world of fraud, the bad guys adapt and change strategy as we security folks find ways to curb their effectiveness. This constant, ongoing struggle should be no surprise to anyone reading this.



What does strike me is the evolution of the tricks used to part Joe User from his personal information (PII). When this whole game started, tricksters and fraudsters were out to steal your credit card number, CVV/CVV2, and expiration date, along with your userID and password. The game has certainly changed, hasn't it?



Since we, the security community, with the help of the media (or hindrance, if you read the NY Times), have been beating people over the head with the idea that they shouldn't give this information away to just anyone, the fraudsters have had to get more clever in order to keep collecting it. What started out as campaigns of mass email to everyone, appearing to come from eBay, PayPal, and the like, has turned into a micro-targeted attack against people who have a very high likelihood not only of being customers of the organization being defrauded, but also of falling for the fraud.



Blasting two hundred million emails to everyone in the world worked well the first few times - but then people started hearing from the news man, the neighbor, the computer guy at the office, and their own mum that they shouldn't fall for those schemes. OK, problem solved, right? Wrong. Then came a slightly more targeted attack: customer lists were lifted from, say, CitiBank, and those customers' inboxes would then be peppered with "We're from CitiBank, click here to reset your password"... since these people really were customers, and their neighbor and mother didn't get the same email, the people who got them figured they were authentic and fell for the frauds. Millions of targeted attacks later, people have heard about this too, and for the most part ignore these messages when they hit their mailboxes.



Now you get something like this, an email in your mailbox that says what you've been hearing from all around - organizations of good repute will never send you email soliciting you to send them your personal account information. Sounds logical, right? But look at the bottom of the email! This is a classic confidence scam. Lure the person in with a confidence-builder so that they figure they can at least minimally trust you, then steal their information.


Amazing!



And now the really bad news... notice that the site (pasted here) doesn't even ask you for anything more than a username and password (which, in all fairness, should make you stop and think: wait a minute... wouldn't they want me to authenticate first, then give my information?) and the answers to some simple questions. But once you've answered these questions, the person can then become YOU. These aren't just ordinary questions, mind you. These are the questions that the more advanced "multi-factor authentication systems" such as RSA/Passmark and others utilize.



See the bigger picture now? You can change your password for Regions Bank's website, easy enough... but can you change your ...favorite food? what about your first dog's name? what about the high school you went to? your first car? Now you get it, hopefully. The attack is much deeper than just password theft - this is stealing some information about you which you CANNOT CHANGE, and which will be asked over and over in other "high security" sites and applications.



This type of theft, once it's happened to you, has NO defense. You can't go change your information... easily. Unless you now change your favorite color, college roommate's name, and street you grew up on in your head - and remember that you changed it - you're in for a world of hurt.



Look out, folks - fraudsters are getting smarter in their efforts to part people from their hard-earned money and their identities. This is scary stuff. We must spread the word and let people know this is happening... write about it, tell people about it, and make yourself heard, because we all pay the price from our own wallets.



Good luck out there, and be safe.

Wednesday, October 17, 2007

Storm [worm] SuperComputer

With the recent coverage coming out of the Washington Post blog posting on the Storm worm, I think there has been insufficient analysis of what the point of this absolutely massive super-computer will be. I won't re-present the facts to you, but if you haven't read it, I'll sum it up by saying that the Storm worm botnet is more powerful than all of the [previous] top 10 supercomputers... and that's saying quite a lot. The other major point is that this is the first time in history that the world's most powerful "computer" (or cluster, in this case) is not owned by a nation or a research organization - but by organized crime.

The analysis and breakdown of the raw power of the Storm worm is here, at the Full Disclosure archives.

I challenged a friend of mine who is also in the "security research" field to think about what in the world this type of supercomputing power could be used for, and, more importantly, why the apparent owners of this botnet/supercomputer are partitioning out this massive herd, using encryption keys for small blocks of bots.

I have a theory on both of these things, and I think the mainstream media analysis has been wrong so far. Jim Carr over at SCMagazine online quotes SecureWorks' Joe Stewart saying:
"This [partitioning and encrypting communications in smaller groups] effectively allows the storm author to segment the storm botnet into smaller networks [and] could be a precursor to selling storm to other spammers."
I don't agree with this at all. If you apply logic from a different angle, you have to realize that this botnet took lots of time and energy to build; ergo, you're going to go to extreme lengths to ensure that if someone manages to crack your communication scheme, they can't take over the whole lot. This, I strongly believe, is the point of encrypting communication to these botnet groups in small chunks, effectively partitioning herds of Storm bots. To make my point even clearer, here is what I think the herder(s) are doing.
  • Storm worm botnet will be used for obviously malicious intent
  • Establishing ciphers for encryption of small chunks of the herd at a time gives it resiliency against infiltration and full compromise
  • No one in their right mind would "sell" off a piece of such a masterpiece of raw power
  • It is more likely that the herder(s) will allow people to pay for time-slices on this massive supercomputer to do whatever it is they want to do, computationally...
This brings me to my next point. What in the world would someone want with this much absolute raw power? I think an interesting hint is given by Lawrence Baldwin, of myNetWatchman...
Baldwin said the raw power of the Storm botnet might be taken more seriously if it were more often used to take out large swaths of the Internet, or in attempting to crack some uber-complex type of encryption key used to secure electronic commerce transactions.
Fascinating. Cracking encryption keys is just one of the possibilities - here are some more:
  • On-the-fly cracking of some of the world's strongest encryption - remember, all security is based on the fact that we develop technologies that are computationally infeasible to break... not anymore?
  • DDoS at will - how will you stop a ~2.5Tb/sec onslaught? (assuming ~10 million bots at 256kb/sec DSL upload speeds, which is estimating low) What's worse is that it's not like you can block a certain netblock or router... these Storm bots live all over the Internet.
  • Rainbow tables anyone? - Sure, there are plenty of arguments that rainbow tables are useless now because any developer with a brain uses a salt... that is false, guaranteed. Even if people salt, salts can be predictable, and what's worse, you can compute the tables anyway, even if there are quadrillions of combinations; remember, you have 10 million CPUs at your disposal here (see the worked example after this list). Why not hash (SHA-1, SHA-256?) all permutations of credit card numbers, passwords, DOBs, SSNs, and other important pieces of data, so that when you steal a hashed table from a database, it's only a [short] matter of time before you can pull out the real information?
  • SPAM - yes, I acknowledge that this is a great vehicle for spam... but it's such a waste of power to spam from these botnets... maybe I'm naive
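Here's the worked example promised above: the same password hashed with two different random salts produces unrelated digests, so a single precomputed table no longer covers every account - the attacker needs a table (or a brute-force run) per salt, which is exactly the kind of grunt work 10 million stolen CPUs make affordable.

  import hashlib
  import os

  # Salted SHA-256: a fresh random salt per record means one rainbow table no
  # longer covers everyone - but per-salt brute force is just CPU time.
  def salted_sha256(password, salt=None):
      salt = salt if salt is not None else os.urandom(16)
      digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
      return salt.hex(), digest

  print(salted_sha256("hunter2"))
  print(salted_sha256("hunter2"))  # different salt, completely different hash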
That's just my analysis - because I think the people who have looked at this have missed the point a little bit. There is so much raw power here that I simply think human greed will prevent this thing from being partitioned out and sold off to high bidders... I don't see it happening; there is simply too much status, power, and prestige in being the one who controls 10 million slaves and the most powerful supercomputer on the planet.

God help us all.

So what did we expect?

With the major malware detection companies out there screaming bloody murder about how there are now several thousand new permutations of viruses, worms, and various malware every other week... we as security researchers and US patriots need to investigate what's going on. I find it my civic duty to apply some logic and a little research, and do some basic 2+2-style math, to come up with this analysis.

Let me lay out some facts, first, to make my following point more clear:
If this is all still obvious to you, then I applaud you for noticing the obvious. What I don't hear about in the papers, the digital media, or any security-related publications is how much these political tensions are directly impacting the IT security space via the malware business. Malware as a business is booming. Developers are writing re-usable code modules, programs, and scripts, selling them off or even offering them as SaaS (Software as a Service) in some cases. When investigators identify and attempt to track some of these software writers, they often realize that the writers are in countries politically unfriendly to the United States, and therefore cooperation is difficult if not impossible to come by. More often than not, requests for foreign support are ignored or given little attention.

Again, this may be completely obvious to you, and I will not disagree. What I am disgusted with is the following:
  • The US government is absolutely clueless on digital security and digital asset protection
  • The US government and most specifically the current administration (and some candidates for the 2008 Presidential race) are creating a hostile global climate for US-based interests when it comes to security
  • No one has (in my humble opinion) successfully been able to put these simple facts together and state them in an open forum, to the highest heads of state
To sum this up in a nutshell... the US is creating a hostile environment, which the digital security forces in the US will suffer the consequences of. Given the current degree of apathy and cluelessness by the United States government in digital security, this does not bode well for our national secrets and "secret/secured" systems. This bodes even worse for our economic situation as it relates to fraud, identity theft and resulting losses, and commerce disruption in general.

Unless the efforts to fix this obviously political time-bomb are started soon, we in IT Security are going to be pushing an even bigger boulder up the hill of our daily jobs... this one will be ticking.

Good luck out there.

Wednesday, October 10, 2007

Illinois DMV loophole enabling fraudsters!

This is a little off-topic on IT, but it's interesting and worth writing about in my opinion, so here it is.

I went to get my new Illinois driver's license (having recently moved back from Georgia) and found an interesting loophole in the system that even the employees there acknowledged was ridiculous. The requirements for getting a license in Illinois are as follows: a passport or birth certificate, a Social Security card, and a utility bill with your name and address on it. Given that I didn't want to have to drive home and get a utility bill, I asked if there was anything else I could do. The lady at the counter was quick to ask whether I had to register my vehicle as well - to which I said yes. In that case, she stated, I could simply get the auto registration changed first (since that didn't even require an ID!), and then come over to the license side and use the registration as a form of identification and address verification.

So, let me get this straight... to verify that I live where I say I live, and I am who I say I am (as a third factor), I can go register a car to an address/name/etc that I don't have to present proof for, and then come and use that as proof of address??

There is something seriously wrong here...

So you may be wondering - so what? How can you possibly exploit this? Well, allow me to explain. I can make up an address (as long as I can register some non-verified car) and bring that as my proof of residency at the address I'm claiming. This creates a situation where I can basically make up where I live and not have to really prove it in any way.

Yes, it's not like I can create a fake ID with this loophole... but in a way, I can. I can come in from out of state and get an Illinois driver's license without actually proving that I live in Illinois... doesn't that bother anyone? I asked around, and everyone I asked agrees it's an egregious fault... but no one cares enough (or has the power) to do anything about it.

That's a sad, sad state of affairs. I wonder how many other states have this provision? Maybe I can get a license in another state? If this kind of thing isn't a fraudster's dream... I don't know what is.

Spy Bugs -- 007 in real life?

The Washington Post is running a very interesting article on micro-bug-like spy devices. Rick Weiss, a staff writer at the Washington Post, wrote a piece which starts out as a science-fiction read and turns into a very real "what if?" as the article progresses. Despite the speculation, the denial of the existence of such technology, and the paranoia discussed in the article, it's an interesting read.

Of course, for those of us in the security field, this brings a whole new set of problems to light. Industrial espionage, intelligence gathering, and other forms of "security" work that may be more commonplace could already be benefiting from the types of technologies discussed in the article. While the article does address challenges such as fuel to power micro-spy devices, cross-winds, bird attacks, and other unavoidable mishaps, the implications are immense.

Why bother using spyware or other now-detectable forms of malware to infect a computer if you can simply employ a mosquito-sized "bug" camera to follow a victim around and record voice conversations, photos, and maybe even live video? Right now this all sounds like technology 007 would use, but remember that DARPA may have had precursor technology to this as long as 30 years ago! Where has the technology gone in the 30-plus years since? Of course, no comment from the three-letter agencies.

So should you be preparing your enterprise against micro-bugs? Chip-infused moths? Mosquito-borne surveillance? Probably not quite yet... unless, of course, you work in government!

Do your own research, and figure out what you believe... and if you find anything to share post it here.


Friday, October 5, 2007

DHS DDoSes itself... with email

I've never been a big supporter of the DHS's security initiatives, and even less so of the government's efforts to be "secure" (I mean, their track record alone speaks volumes), but this latest oops is too much. I guess I'm glad I'm not on their mailing list, because I wouldn't want to be spammed to hell now that some of those personal email addresses are out.
What's worse, you have to wonder who the original "reply-to-all" person was, or whether it was really a "user who un-checked a box..." somewhere in the mail server. What's particularly interesting to me is that the mailing list doesn't use traditional listserv or Majordomo distribution channels, and apparently uses some bungled Domino install at a contractor site.

Lovely. I'm sure their internal security is much better...

Read all about this snafu here on eWeek, in case you've missed it.

Thursday, September 20, 2007

The Hen House Has a Fox Guarding It!

I've been seeing these interesting issues crop up lately and it's bugging me, so I've decided to write about it. Well, OK, not so much write but more rant a little bit. Hey - everyone needs to vent sometimes.

What I'm particularly miffed about is this idea of self-assessments for policing policy. Here's a scenario:

Company A is PCI certified (yes, I'm beating the PCI horse again because it's so easy...) and demands that all its partners and vendors be the same. This is fine, and quite honestly very responsible. The problem is in the execution. Normally - when I want to validate something, I go do it, or pay someone to do it for me if I'm too busy. [This is where self-assessments drive me nuts.] Company A now sends a requirement to Partner B to be compliant with whatever PCI regulations exist, and sends along this questionnaire for Partner B to self-assess their company against requirements.

This drives me insane because I've seen requirements get "creatively answered" time after time... let me explain. A requirement may be that all PII (personally identifiable information) is encrypted at rest. To me that means database encryption, plus encryption anywhere else the data "sits" at rest. Partner B looks at their network and says, "Gee, we don't encrypt databases 100%, but yeah, we encrypt flat files and most of our databases, so sure, check!" Now, if I've got someone auditing Partner B, they fail. If Partner B self-assesses and checks the box, no one is there to call them on it.

But what's the problem, you say? The ownership of a breach lies squarely on Partner B when someone breaks into their systems and steals tons of data, right? This is where you pull out your piece of paper showing they self-assessed and say "but they said they were compliant!" That's nice in the legal world - but in the court of public opinion you're no less screwed than you would be if this were a gaffe on your own part.

Self-assessments are a joke because of how many times I've seen (and a few times I've been asked to participate in) "creative answers" to these self-assessment questions. When an auditor shows up it's black or white - you're either compliant or you're not. Of course, the topic of auditor subjectivity is another rant for another day.

My point is this, folks: if you're serious about the security of your partners, don't ask them to fill out questionnaires and assume they're being honest with you. People lie. And it gets worse when a company that's less than prepared to do business on a high-security plane is presented with a big opportunity. They will lie to you. Don't trust them. Have we lost this mentality? What happened to trust but verify?

Keeping a paranoid mentality, and assuming human nature holds and people lie, will save your assets - and just maybe it'll save that nice spot front and center in the Wall Street Journal for your competitor.

Good luck.

Wednesday, September 19, 2007

Policies and Enforcement in the Real World [or, how not to be a speedbump]

I've spent several years working for companies big and small, and have an interesting view on my past 10+ years of experience in IT. I've been lambasted in the recent past for being a bit of a cynic about policies that have no teeth - so I feel the need to explain my position a little bit. Just maybe you'll sympathize and "get me"...

There are 3 "tiers" of corporations out there with respect to policy as it relates to IT Security, in my humble opinion. They are as follows:

  • Anarchy (30%): Complete disarray. These companies may have a policy or two, but those policies are likely incoherent, poorly distributed, and broadly ignored. These are companies that grow too quickly, disregard their IT departments, and abuse their technologies - these folks have two options: move up a tier or brace for disaster. One of the two will happen.
  • Law (60%): This tier is where, I hope, most companies find themselves. You have policies, they may even be well-written, well-defined, cover all your base risks, and are distributed well amongst your population of employees and relevant users. Your problem is that you can't enforce these policies.
  • Order (10%): If you find yourself in this tier, consider yourself very lucky. Companies in this tier not only have well-documented, well-distributed policies - someone actually enforces them. The difference between having something and having something that's enforceable lives at this tier. This is a very select population of companies, and you'd be surprised how rare it is to find them among the "enterprise" class of companies.

My point - for the last several years, the buzzword in the IT Security world, at least as far as upper management is concerned, has been policy. Yes, we all have these "cool" technologies we implement, and some of us even draft up acceptable use documentation and guidelines; some of us call these policies and put them in bound and glossy folders and give them away to our users. That's all well and good, but what happens when someone challenges your policy? What happens when your policy says that non-company assets are not allowed to be plugged into your network, but a middle manager has an army of consultants who come in and insist on plugging in their laptops to do whatever it is they were commissioned to do? Ask yourself this question, and the oft-given answer is: the need to conduct business will win every time and the policy will be ignored (at least temporarily). So what is the point of the policy? Remember, that one "exception" will quickly become the accepted standard as general users watch the policy be subverted. So regardless of what your policy says, a few weeks later everyone has random visitors plugging their laptops into your network. What went wrong?

I would argue that the problem was not with your policy, but with the "teeth" behind it. What should have happened is that, rather than simply being told "you're being overridden in the name of business," the manager wanting to subvert policy should have needed to escalate the matter. The matter should have reached the CISO, or maybe even the CIO - and that's where you should have had your backing. The CIO should have heard the argument and made an assessment along these lines:

  • The policy exists for a reason and it's the accepted standard (law) in the company and cannot be subverted
  • An alternative is to provide the consultants with resources to use for the time they are on your network -OR-
  • The consultants should have their machines vetted by the security team, and an amendment is made to the policy to provision for this type of requirement going forward

What would your CIO do if faced with this situation? Do your policies have teeth? I will offer you (from my experience) one method of giving your policies teeth. I will dispense my formula to you, which you can use at will - I make no guarantees it will work for you, but I will tell you that it's worked for me in the past.

  • Never develop a policy in a vacuum; consider business requirements and non-IT factors (such as the consultant example above) that may complicate your execution of the policy you're writing
  • Write a policy that's straightforward - the shorter the better, honestly. Number your sections, lay it out logically, and proof-read it for logic problems and contradictions.
  • Give it to your grandmother, wife, or anyone who doesn't work in IT. Ask them if they understand it, and what challenges to productivity they can find in it (trust me on this one... this is a gem)
  • Give it to your marketing or communications team. If you don't have one of these, find a PR firm, pay a small fee, and have your policy "polished" for user-friendliness and effectiveness.
  • Receive approvals in steps - start with local managers, and work your way up to the CIO. When you reach your CIO's office, you can show how you've done your homework and that people below the CIO agree with your policy, that it's user-friendly, and that you "understand the business needs" while trying to enforce security policy
  • Ask for the ability to enforce the rules you've set forth. Have a conversation with your CIO - and this is a tough one - about how policies will be enforced. Ask what penalties there will be, what actions will be taken on escalations, etc. Do not let this go.
  • Market the policy to your users. Make it fun for people to read and understand what the policy is... have a contest to see who can answer the most questions correctly on a quiz whose answers are rooted in the policy. Offer some trinket or tchotchke to the users who do the best job - give your users an incentive to read your policy.
  • When a chance comes up to enforce the policy, always play the understanding role first and foremost. Never give the answer "well you can't do it, period, because the policy says so" - you'll lose the escalation to the CIO, almost guaranteed. Try to understand the need, offer suggestions, and document this for further escalation or compromise.
  • When you need to, pull out the big bat, and make sure you pick your battles wisely. Don't squabble and run to your CIO with every little violation - try and handle your own playground. You don't always want to run to mommy every time your pride is hurt.
  • Overall, enforce policy consistently, fairly, and with a gentle ear and a firm hand.

Good luck out there.

Wednesday, September 12, 2007

Quick ZFS nerd humor

As I was reading a little more about ZFS for a project today, I came across this Wikipedia entry, which I thought was humorous in a nerdy sort of way - and I figured you readers would enjoy if you haven't seen it yet. We've all heard the term "boil the ocean" but this takes it to a whole new level: (quoted directly from Wikipedia here...)

Project leader Bonwick said, "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans."[1] Later he clarified:

Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device. In particular, it has been shown that 1 kilogram of matter confined to 1 liter of space can perform at most 10^51 operations per second on at most 10^31 bits of information [see Seth Lloyd, "Ultimate physical limits to computation." Nature 406, 1047-1054 (2000)]. A fully populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E=mc², the rest energy of 136 billion kg is 1.2x10^28 J. The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg. Thus the energy required to boil the oceans is about 2.4x10^6 J/kg * 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.[5]
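
Out of nerdy curiosity, here's a quick back-of-the-envelope check of that arithmetic - just a throwaway Python sketch plugging in the same constants quoted above, nothing authoritative:

```python
# Rough check of the "boil the oceans" arithmetic quoted above.
# All constants are the ones cited in the Bonwick/Wikipedia quote.

BITS_PER_KG = 1e31               # Lloyd's limit: bits of information per kg of matter
C = 3e8                          # speed of light, m/s
OCEAN_MASS_KG = 1.4e21           # approximate mass of Earth's oceans
HEAT_TO_BOIL_J_PER_KG = 4.0e5    # ~4,000 J/kg per degree C * 100 degrees
LATENT_HEAT_J_PER_KG = 2.0e6     # latent heat of vaporization (rounded, per the quote)

pool_bits = 2 ** 140                     # fully populated 128-bit pool, in bits
min_mass_kg = pool_bits / BITS_PER_KG    # minimum mass needed just to hold the bits
rest_energy_j = min_mass_kg * C ** 2     # E = mc^2

boil_oceans_j = (HEAT_TO_BOIL_J_PER_KG + LATENT_HEAT_J_PER_KG) * OCEAN_MASS_KG

print(f"Minimum mass to hold the bits: {min_mass_kg:.3g} kg")    # ~1.4e11 kg (136+ billion)
print(f"Rest energy of that mass:      {rest_energy_j:.3g} J")   # ~1.3e28 J
print(f"Energy to boil the oceans:     {boil_oceans_j:.3g} J")   # ~3.4e27 J
```

Sure enough, the pool's rest energy comes out several times larger than the energy needed to boil the oceans. Nerdy, but satisfying.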

Tuesday, September 11, 2007

What I learned... Part 1

The lights are out, the vendors are gone, the guest speakers and panels are silent... the conference is over. Now it's time to share what I've learned in the hopes that it will benefit you, my readers, and make us all better people.


So without further ado, here's the first and most important thing I've learned - We're going about this security issue all wrong! Before you hit the comment button to tell me just how wrong I am, keep reading.

In the last 2 days I heard some amazing stories about how people have gotten bigger budgets, patched faster, tested better, and secured more. What I absolutely did not hear is how much risk was avoided, mitigated or eliminated. None. Process that.

What I can honestly say is that we as security professionals solve problems as we see them. Remember the old adage: if you're holding a hammer, everything looks like a nail? This is so true in IT Security. We approach everything with our IT backgrounds, with our security hats on. Things are black and white, right or wrong, we either have a good practice or we don't. There is no gray area. Wrong.

If there's one thing I think should have been driven home more than it was, it's that security is not black and white. It's all gray. It's not a matter of solving a problem. You're never going to "secure the end-user" or "secure a server"... unless you cut the power and every other cord, wipe the disks, eradicate all the IP, and encase the thing in concrete. What we as security professionals should be striving to do, in order to function as an extension of the business, is lower and mitigate risk. If you've heard this already and I'm preaching to the choir, you can stop reading. If this is a new idea to you and the first time you're hearing it, it's time to wake up. The last time we as security pros "solved" a problem was... well... when was it? Let me take a few examples and discuss the "measures we have taken" and how they have absolutely not solved the problem.

First up are viruses, on everyone's list since the early 90's.
  • The problem: viruses are constantly out there to destroy productivity, damage machines, steal data, or carry out other malicious activity
  • Our solution: anti-virus software on all laptops, desktops, and servers
  • Why we failed: quite honestly, viruses still exist and they still infect machines. Yes, they aren't as nasty as they were before we put detection and remediation software in place, but viruses have only morphed into different forms and continue to attack our systems. We haven't "eliminated the virus threat".
  • The risk-based approach/view: due to the introduction of virus mitigation software on our systems, viruses are a controllable threat that is now, for the most part, effectively minimized. The key is that the threat is minimized, not eliminated.
Based on this example we should be able to apply the same principle to any number of business-problems. Try not to be holding a hammer. Let me take another example, and walk us through a risk-based approach, with a real-world example.

Let's make up a country, call it Elbonia (from Dilbert land). Elbonia has a population which has not yet been exposed to credit cards and credit the way we have been in the US, and is thus an emerging market for credit products. Elbonia has a store chain, called Elbonia-Mart, which sells everything from watermelons to widgets, and wants to partner with a vendor to come in and offer credit so that Elbonia-Mart's customers can afford to purchase some of the bigger-ticket items, such as televisions, on credit. Keep in mind these facts: Elbonia has no credit reporting agencies, no good way to track identities the way the US Social Security system does (although that is arguably flawed), and Elbonia is ultra-low-cost when it comes to implementing solutions. You are now faced with a problem. Credit card applications are being processed on a computer which is set up on a 56k DSL VPN to the corporate DMZ for online credit decisioning; the terminal then prints out the application and a "temporary shopping pass" when a user is approved for credit. The problem is, these terminals are not up to date on personal firewall, virus protection, or patches, so they are out of compliance with the corporate standard the vendor (your employer) has set out. What do you do, what problem do you solve, and how?

If you're like a very large percentage of people in our industry, you immediately start to solve the issue of patches, personal firewall, and anti-virus updates on these low-bandwidth, relatively insecure PCs. You fail to do a proper risk assessment, and your efforts fail. I'll explain why, even if you do manage to get these machines "secured" to corporate standards without over-spending, you've still failed.

Re-read the section above where I laid out the parameters of Elbonia. There is no credit agency, there are no national identities. While the first thing we think of is to protect the terminal against identity theft and data theft, think again. Why would I, as an organized criminal, attack the terminal to steal someone's identity? I can just as easily make one up! Furthermore, why not just walk by and pick up the stack of printed applications (paper is our enemy) while the sales guy is distracted? These two methods immediately circumvent whatever digital security measures you've put in place, if you went that way. A proper risk assessment would have told you that (a) identity theft is not the real problem, and therefore not something to deal with in the immediate future, and (b) the paper should be eliminated quickly too! After doing a proper risk assessment, you may very well have simply decided to take the relatively simple step of eliminating or securing the paper copies, called the project done, and kept monitoring for other signs of malicious activity. You've just saved your business money - and what's better, you've proven to your business people that you understand what the real threat is.
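
To make that concrete, here's a back-of-the-napkin sketch of the kind of risk ranking I'm describing. The threats, likelihoods, and impacts below are numbers I made up for the Elbonia example - the point is the relative ranking, not the arithmetic:

```python
# Toy risk ranking for the Elbonia credit-terminal example.
# Likelihood and impact are rough 1-5 guesses, purely illustrative.

threats = [
    # (threat, likelihood, impact)
    ("Identity theft via a hacked terminal", 1, 2),   # no credit bureaus or national IDs, so stolen identities are worth little
    ("Theft of printed paper applications", 4, 4),    # paper sits in the open next to a distracted salesperson
    ("Malware knocking the terminal offline", 2, 3),  # annoying, but it only stalls new applications
]

# Simple risk score: likelihood x impact, highest first.
for threat, likelihood, impact in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
    print(f"risk={likelihood * impact:2d}  {threat}")
```

Even this crude exercise puts the pile of paper at the top of the list and the "patch the terminal" project near the bottom.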

In a nutshell, this is the most important take-away from this conference. We're solving problems without fully understanding the situations, without fully analyzing the situation from all sides, and in an "IT vacuum"... that is, without talking or understanding the business model and drivers behind these issues.

The lesson? Know the risks. Understand the full threat. Talk to your business and understand that security is not black and white... it's [your] gray matter that makes it work.

Monday, September 10, 2007

Update: Exciting stuff

Hello readers... I'm attending an event this week (Monday and Tuesday) called "The Security Standard", with lots of executive panels and speakers on IT Security from all over the industry. Lots of vendors as well, so I'll be posting lots of interesting new topics... here's a taste to get Pavlov's theory going...

Upcoming Topics
  • Toothless Regulations
  • The Politics of Data Breaches
  • The Fraud-sniffing Dogs of War
  • Compliance Nonsense
  • Auditors - Friend or Foe?
  • ...and more as the conference unravels! Stay tuned.

Friday, September 7, 2007

Applying real-life principles to technology

It's amazing how much we in technology don't learn from our counterparts in the physical world. Let us take, for example, banks. The physical bank is built to withstand significant attack. Even if we look back a hundred years, banks were built with vaults, gates, and other protection mechanisms. So ask yourself why we don't apply the same principles to securing our 'digital' assets... our data stores. I'm going to analyze, for everyone's benefit, the points the digital world can take from the physical. I'd dive into why we don't take the cue from the physical security folks - but that turns into a bit of a philosophical rant (which, I will admit, I will write about at least a little).

So let's take a would-be bank robber's view of a bank. Pick a bank, any bank really, and you'll notice that going from the outside in you will encounter the following things:
  1. proper site planning to keep the getaway (escape route) as complex and difficult as possible
  2. well fortified structure of the building
  3. surveillance cameras
  4. armed guards at the entry
  5. properly separated internal chamber of the building (separated entryway from bank main lobby)
  6. properly separated and secured teller stations (where the cash, at least some of it, is)
  7. properly separated back office
  8. silent alarm triggers strategically placed where the tellers can reach them without being spotted
  9. outer vault door locked by a key held by bank manager or designee
  10. highly fortified, hardened outer vault door
  11. separated inner vault (with all the trimmings such as hardened steel, etc)
  12. nondescript inner-vault chamber compartments
  13. tagged (booby trapped) loot in case it's stolen to help track it and identify the thieves
Now, I'm no banking security expert, so these thirteen (13) things are just ones that I have observed with the naked eye when in a bank, and from a tour I was privileged to get. That's a lot of theft deterrent! Are we applying the same principles to securing our digital assets? I can say with a high degree of accuracy that a vast majority of the corporations out there don't protect their digital assets nearly as well. Now, there may be reasons such as your data not being of as much value, or simply not knowing where the digital assets are which need to be protected -- but that's absolutely no reason to weasel out of it.
I will analyze the above thirteen (13) observed physical security measures of a bank, and parallel them with digital or 'cyber' security technologies. The numbers above will match up to the digital equivalent in the analysis below, so pay attention.

  1. system design to disallow easy exit with stolen data
    • a system which disallows servers from initiating communications to the Internet; if a cracker breaks into your server, forces a shell, and wants to FTP the stolen data somewhere off the system, your network firewall had better disallow that!
  2. hard outer perimeter
    • firewall rules limiting exposure of internal servers, networks, and users; properly configured access lists or firewall rules to allow only absolutely necessary exposure of services to the outside world
  3. digital surveillance at the perimeter
    • at least an IPS to filter for anomalous traffic patterns, or at the very least pattern-based filtering and alerting (with blocking) mechanisms... some way to notify the security operations team that something is wrong
  4. enforcement points at your outer points
    • IPS at your perimeter, NAC controls to keep just anyone from plugging in and rummaging around... these are the very very basics
  5. separation of services (a la DMZ, or zones)
    • (see #7)
  6. separation of services (see above)
    • (see #7)
  7. separation of services (see a pattern here?)
    • DMZs, containers, or whatever you'd like to call them; compartmentalize, segregate, and filter access into different segments or zones on your network. Servers accessible to users should be on a separate network segment from the servers administrators use for out-of-band management, which should in turn be kept separate from the router and switch networks... segregate, segregate, segregate
  8. internal processes to notify security of a potential issue before it becomes a breach
    • workflows, ticketing systems, "response team" phone numbers... anything which can give users who see suspicious activity on your network a simple way of contacting your security operations team
  9. separation of privileges on digital systems (least-privilege model, with master-key kept secret)
    • separate data administrators from data users; the database administrator should have no reason to read the user_full_info table in its entirety, only administer rights to that database, table, etc; take away all privileges which are not explicitly required to perform work function!
  10. separated server networks
    • separated (logically at least, physically and logically at best) subnets and containers for servers, away from subnets for general users; these of course should be firewalled and have an IDS/IPS at the gateway into and out of them
  11. separated server farms acting as data containers
    • containers for databases, file-servers, and other sensitive servers which are kept well guarded and away from being accessible by the general public via firewall rules and anomaly detection devices such as an IPS
  12. non-identifiably labeled asset tags
    • server labels which look more like USSRVWIN0001A and less like FINANCE_FILESERVER to keep potential thieves guessing and grasping while your detection mechanisms work their magic
  13. false data to trigger alerts if/when used outside the organization
    • fraud teams and credit bureaus create 'fake identities' which look perfectly real and indistinguishable from others in the system, and which trigger a fraud alert the moment they're used... brilliant! (a rough sketch of the idea follows this list)
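Since that last item is my favorite, here's a minimal sketch of what such a "honeytoken" tripwire might look like in practice. The planted records, the sample data, and the alert handling below are all made up for illustration - a real fraud team would seed these into the actual data stores and wire the alerts into their monitoring:

```python
# Honeytoken sketch: plant fake records that no legitimate process should ever touch,
# then raise an alert the moment one of them shows up anywhere - logs, extracts, query results.

# Hypothetical planted records - these values exist only as tripwires.
HONEYTOKENS = {
    "4111-0000-1111-2222": "canary-card-01",
    "jsmith.decoy@example.com": "canary-identity-07",
}

def scan_for_honeytokens(text, source):
    """Return an alert message for every honeytoken found in the given text."""
    alerts = []
    for token, label in HONEYTOKENS.items():
        if token in text:
            alerts.append(f"ALERT: honeytoken {label} observed in {source}")
    return alerts

# Example: scan a (made-up) outbound batch extract before it leaves the network.
sample_extract = "name=John Smith,card=4111-0000-1111-2222,limit=5000"
for alert in scan_for_honeytokens(sample_extract, "nightly rewards extract"):
    print(alert)  # in real life: page the security operations team instead of printing
```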
Now, the sad fact here is that as security professionals we haven't really learned much from the physical world (or haven't applied it, if you prefer) in the digital one. Why? There are a myriad of excuses, including poor budgets, poor management, and the inability to execute or effect change... but whatever your excuse, it's exactly that - an excuse. We as IT pros should learn from the physical world and how protected storage is done in 'real life'. If you don't like my bank vault example, use an old medieval castle - same concepts! You'll see that the same principles have been applied for hundreds of years in physical security. Let's hope it doesn't take us that long to figure it out in the IT world...

Good luck.

Friday, August 24, 2007

Define... "securely transferred"

Here's something for you folks out there to ponder, and I'll give my take on it as well, but first I want to pose the scenario -- and offer a chance to think it over and maybe reply publicly if you're daring...

Scenario:
You're a financial company, or rather, you work for one. You have a vested interest in protecting your clients' data; whether it's cardholder information, investor information, or banking information... it's all critical and sensitive. Now say you work with a 'partner' (or vendor) who will do something with some portion of your customer data records. To make it more concrete, let's assume this vendor provides outsourced "rewards redemption" on the line of credit cards you offer... "Pet Points", as an example. So if I own a "Pet Points" card you issue, and I want to redeem my points for a spa treatment for Fifi, I dial a 1-800 number and get your vendor-partner, whose CSRs use the data you have about me to let me redeem all those hard-earned "points". Of course, included in the data the vendor-partner has to have about me is my card number, expiration, home address, name, and maybe a few other morsels of information too. Now, this vendor-partner - let's call them Partner X for brevity - has to have this information sent over in a flat-file so they can load it into their system as a nightly batch job (standard for financial systems these days). This flat-file, as you would imagine, is brutal if it falls into the wrong hands, yet your partner tells you they only support "in-transit" encryption and that nothing like PGP is supported because "it is too complex and difficult to support". What do you do?

Allow me to break this down for you:
  • Sensitive cardholder information in a flat-file
  • Flat-file sent over to a 3rd party
  • Link is encrypted "edge to edge" (meaning, router to router, or firewall to firewall)
  • Flat-file encryption is not supported by your vendor
So ask yourself... "Self... what do I do?"

This is an egregious act of negligence. I'll tell you why. Feel free to disagree.
  • First, the argument that the data is "encrypted on the way over" is crap. PVCs, VPNs, even private copper (very rare) are still only part of the puzzle. That data is exposed the second it drops out of the encrypted tunnel
  • Next, how much do you trust your internal employees? If you're intelligent, the answer is very little; cardholder data should never be stored unencrypted, even on "internal" systems
  • Additionally, as the client, I have the right to tell my vendor how/where/when I want data secured -- if you're a vendor telling me you won't support something I feel is fundamental, I'll find someone who will
I think I'd like to take that first point a step further. Data in motion is typically encrypted, which is great. The problem is where that data is de-tunneled, or decrypted. Are the systems that handle the data in a DMZ? Are those systems on the internal network where they are accessible by all your employees? Can I "plug" wirelessly into your network and possibly see this data? If the answer to any of those questions is yes, then you have a problem. That data is not secure. So you see, it's great that the tunnel you have established encrypts data as it passes from my firewall to yours, but that system that receives the data... the one that was hacked a month ago by someone from Elbonia... that's still insecure. You're still facing, at the very least, the loss of my business, and at worst a lawsuit that'll put you out of business.
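
And for the record, encrypting the flat-file itself isn't rocket science. The post above mentions PGP; here's an even simpler symmetric-key sketch using Python's third-party cryptography library, just to show how little code at-rest encryption takes. The file names are made up, and in real life the key would be exchanged out-of-band and kept in a proper key-management system, never next to the data:

```python
# Minimal sketch: encrypt the flat-file itself before it ever leaves your network,
# so the data stays protected even after it drops out of the VPN tunnel.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Generated once and shared with the partner out-of-band - never stored with the data.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("rewards_extract.csv", "rb") as f:      # hypothetical flat-file name
    plaintext = f.read()

with open("rewards_extract.csv.enc", "wb") as f:  # this is the file that actually gets transferred
    f.write(fernet.encrypt(plaintext))

# The partner decrypts on their side with the same key:
#   plaintext = Fernet(key).decrypt(ciphertext)
```

If a vendor calls that "too complex and difficult to support", that tells you something about the vendor.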

Thursday, August 2, 2007

How do you help someone who won't help themselves?

So as I think about consumer protection, I recall an old parable my Sunday school teacher told us. I'll give you the abridged version. Basically, a guy was stuck on a rooftop in a flood and asked God for assistance. A few hours later a guy in a boat came by to rescue him, but he sent him away, saying God was going to rescue him. It happens two more times - another boat and then a helicopter - then the guy drowns and asks God why He didn't save him. God's reply is simple: I sent you two guys in boats and even a helicopter to rescue you... all you had to do was let me help you. The moral of the story: you need to let yourself be helped when you're in over your head, literally and figuratively. This is the reality we security professionals live in. Consumers won't let us protect them - so often the worst enemy of the consumer is... you guessed it... the consumer.


Many of you know exactly where I'm going with this. Consumers expect, nay, demand to be protected online when making purchases, reserving their vacation tickets, or buying grandma's birthday present, but it seems rare to find one willing to do anything about it themselves. I get marketing people in my ear every day telling me I can't make people use 'stronger' passwords because then they won't use the application or site. I can't make a partner site (which potentially has financials in it) require more than an email address as a UserID and your ZIP CODE as your password... require anything more and... get this... the consumer will go to a competitor who allows easier access. If you're reading this blog, and this sounds like a recent project you've heard me wail about... yes, I work with you :-) Some days I'm tempted to put my foot down and say "Fine, let them go to the competition; but when their accounts are empty because someone guessed their idiotically simple password - we can say we told you so!"
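
Just to show how low a bar we're even arguing about, here's a minimal sketch of the kind of server-side password sanity check I mean. The specific rules are illustrative assumptions, not any particular standard or anyone's production code:

```python
import re

def password_is_acceptable(password, user_id, zip_code):
    """Reject the trivially guessable stuff: short passwords, the user's own
    user ID or ZIP code, and passwords with no mix of letters and digits.
    The rules here are illustrative, not any particular standard."""
    if len(password) < 8:
        return False
    if password.lower() in (user_id.lower(), zip_code):
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    return has_letter and has_digit

# The "email + ZIP code" scheme described above fails instantly:
print(password_is_acceptable("60601", "fifi.owner@example.com", "60601"))             # False
print(password_is_acceptable("w4lks-in-the-park", "fifi.owner@example.com", "60601")) # True
```

Nothing fancy - and yet even this much is apparently a bridge too far.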

Before I get too far off on a rant about my marketing folks (sorry, you're such easy targets), I need to make my point. Consumers won't let us, as security professionals, protect them in the obvious ways. So we have to do things the sneaky way. We have to write filters, scripts, and other behind-the-scenes mechanisms that keep them safer without letting them know we're doing it. This drives me bonkers... what about you? Sure, one-time passwords via RSA token aren't the end-all, and can still be tricked via man-in-the-middle or skimming attacks (session riding), but at that point we would significantly up the ante - we would force the 'bad guys' to work that much harder for their stolen money.

So - I have to ask the consumer... what is wrong with you people? I feel like I can answer myself... complexity is bad, but there has to be a happy medium... somewhere. If any of you readers (however many I have) have ideas - let's discuss... maybe we can get an open forum going? I'd love to hear people from across the industry present ideas; maybe we can creatively solve this problem together? Maybe education is part of the answer, along with an industry-wide 'mandate' or (dare I say it) another compliance policy that requires something more 'complex' than a simple userID and password?

I think I can safely say, and not get too many blank stares, that the userID/password is dead for high-risk use. There has to be a better way, but unless consumers realize that this is a "takes two to tango" scenario... we're screwed.