Friday, November 30, 2007

Are you Sync'd?

I've been intrigued by the new Ford initiative to push Microsoft's "Sync" technology into their cars. On the surface it sounds really cool: being able to voice-activate your music (iPod or compatible player) and your cell phone for voice calls... that's pretty slick.

What bugs me, though, is the fact that there is very little information about how secure this all is. Given that this technology is built upon Microsoft's rock-solid security foundation, I can imagine that we're covered... right? I jest - but the reality is that we should have some security information for the Sync system. I mean, I don't want someone to be able to connect to my music player, or my car's phone, while I'm not paying attention!

I've looked over some of the available documentation, and there is surprisingly little on the "PIN" feature for the Bluetooth connectivity in Sync. The support page says that Sync generates a PIN, but it doesn't really tell you whether it's the same PIN all the time, or a one-time, system-generated "pseudo-random" PIN.
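The difference matters more than it might seem. Here's a minimal sketch - purely hypothetical, since Sync's actual PIN scheme isn't documented - contrasting a fixed pairing PIN with a fresh, unpredictable one generated per pairing attempt:

```python
import secrets

FIXED_PIN = "0000"  # worst case: the same PIN on every pairing, guessable by anyone nearby

def fixed_pin() -> str:
    """A static PIN: once an attacker learns it, it works forever."""
    return FIXED_PIN

def one_time_pin(digits: int = 6) -> str:
    """A cryptographically random PIN, regenerated for each pairing attempt."""
    return "".join(secrets.choice("0123456789") for _ in range(digits))

print(fixed_pin())     # identical on every call
print(one_time_pin())  # different (with overwhelming probability) on every call
```

If Sync's PIN is static, anyone who has ever seen it paired - a valet, a mechanic - can connect later. If it's generated per pairing, that window closes.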

Also - I'd love to delve deeper into the world of "Upgrading your Sync" (click here for more info). As the link provided says, you're supposed to be able to plug your USB device into the Sync system and have it upgrade itself from the provided storage device. I'm guessing that not much time was given to securing the system from hackers - but what do I know?

I'm wondering whether someone will quickly crack the Sync interface and install custom apps, voices, etc. onto that thing - and whether something nasty may one day end up on the system. What if someone sends a maliciously-crafted text message that overflows and exploits an unchecked buffer in Sync? What if someone figures out how to either steal or remotely inject stuff into my phone book using Bluetooth? What if the media player has a fault, and a craftily-coded MP3 exploits the system to corrupt my address book, or hijacks my cell phone to constantly dial those 1-900 numbers in the Caribbean?
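That "unchecked buffer" scenario is easy to simulate, even in a memory-safe language. The sketch below is entirely made up - the field layout, sizes, and function names are illustrative, not anything from Sync - but it shows the mechanics: a fixed-size message slot sitting next to a phone-book entry, and a copy routine that never checks length, so an oversized "text message" spills into the adjacent data:

```python
# Simulated in-memory layout: a 16-byte SMS slot followed immediately
# by a 16-byte phone-book entry, as if adjacent in memory.
SMS_SIZE = 16
memory = bytearray(b"\x00" * SMS_SIZE + b"555-0100 Mom    ")

def receive_sms_unchecked(payload: bytes) -> None:
    """Copies the payload into the SMS slot with NO length check - the 'overflow'."""
    memory[0:len(payload)] = payload  # happily writes past SMS_SIZE

def phonebook_entry() -> bytes:
    """Reads back whatever now occupies the phone-book region."""
    return bytes(memory[SMS_SIZE:])

receive_sms_unchecked(b"A" * 24)  # 8 bytes longer than the SMS buffer
print(phonebook_entry())          # the phone-book entry is now partly overwritten
```

In C, the overwritten bytes could just as easily be a return address instead of a contact - which is exactly how message-parsing overflows turn into code execution.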

I know one thing though... if they ever decide to plug that entertainment system into anything critical to the operation of the vehicle... I'm going to avoid it like the plague!

If you're interested, Sync's page has some FAQs around security... very skimpy though... (click here) - this quote is by far my favorite...

What if Sync gets a virus? Could that cause my car to malfunction?
The Sync platform is independent of a vehicle’s engine. Security is vitally important to the Ford Motor Company and Microsoft. Effective measures have been taken to protect Sync from viruses, and we do not share any information on our strategies or tactics.
You just can't make that kind of PR up, folks... they're taking great measures to protect your security, they're just not willing to release any of those details... just in case.

Wednesday, November 14, 2007

Silent cure, spreads disease (WSUS and Microsoft)

It's amazing: every few months, it seems, Microsoft gets called out on something that you just look at and say... why? For example, today all across the US, WSUS servers began breaking with the dreaded "unknown error" problem. I'm not going to beat a dead horse, since you can read about the issue in depth here, but rather I think I'll once again take a slightly different angle.

Here's the short and ugly of it...

The problem was that on Sunday evening, Microsoft renamed a product category entry for Forefront to clarify the scope of updates that will be included in the future. Unfortunately, the company said, the category name that was used included the word Nitrogen in double quotes (appearing as "Nitrogen"). A double quote is a restricted character within WSUS, which created an error condition on the administration console. This isn't the first time WSUS users have run into trouble. After May 2007 security updates, several users reported WSUS malfunctions.
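What stings is how cheap the fix would have been. A sketch of the kind of server-side check that could have caught the rename before it shipped - the restricted-character set and function name here are illustrative, not WSUS's actual internals:

```python
# Characters that (per the incident report) WSUS metadata cannot safely carry.
# The exact restricted set is an assumption; the double quote is the one that bit them.
RESTRICTED = set('"<>&')

def validate_category_name(name: str) -> str:
    """Reject a product-category name containing restricted characters
    before it is published to downstream servers."""
    bad = RESTRICTED.intersection(name)
    if bad:
        raise ValueError(f"restricted character(s) {sorted(bad)} in category name: {name!r}")
    return name

validate_category_name("Forefront Client Security")    # passes
# validate_category_name('Forefront "Nitrogen"')       # would raise ValueError
```

One validation pass on the publishing side, and thousands of admin consoles stay up. That it was missing says a lot about how much the update pipeline itself gets tested.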
Put yourself in the following situation, and then ask yourself what your answer would be to my hypothetical question.

You're running a very large enterprise, 200+ servers, 1,000+ workstations all spread about in different locations. You depend on Microsoft to patch their software and you're using the native WSUS software to keep your systems up-to-date.

Given the above, what's worse... an unreliable patching tool, or no patching tool at all? The reason I pose this question to you, my readers, is because it's appalling what system admins have to put up with. We're almost a decade into the patching circus, and still the cure is often the cause of the disease. What's worse, now the "cure" is often a silent update that breaks several major components in key systems. This presents a different security challenge than people normally see. Let's assume that on a good day, WSUS patches 100% of your systems in record time, delivering peace of mind to your organization in a matter of minutes. Let's also say that on a critical day, WSUS breaks, and some major, super-critical patch that's coming out today cannot go out. Does it even matter that you have the patch in hand when your delivery mechanism is busted?

I would argue that once again, this is a clear case where separate tools must be decoupled. You can't have the company that builds the software also writing the software that patches that buggy software. That's a case of the fox guarding the henhouse in the worst way.

To Microsoft I say "Shame on you!" for at least three reasons.

  1. First, you've clearly managed to have poor Quality Control (really? this many years later?) and you've introduced a bug in a major, mission-critical piece of software which manages our Microsoft infrastructure
  2. Second, you've obviously managed to send out this magical update without alerting anyone, or even hinting that a new update to the updater is on your systems
  3. Third... you still haven't learned.

That's really all there is - just another blunder. Maybe we're hearing about things like this because MS is such a big target but isn't that the bane of being the biggest? You should by now know you have the biggest target painted on your forehead, and every spotlight is on you waiting for you to screw up. Well, Microsoft, thank you for not letting me down, once again.

To be fair, I run Gentoo Linux, which, over the years that I've updated my systems, has never broken itself...

Wednesday, November 7, 2007

The H1-B and security?

Allow me to explain.

This news story, getting front-(virtual)-page coverage on ComputerWorld, is extremely interesting and, at the same time, boring old news. For years now, we've been debating how the H1-B visa is taking away jobs from US-based workers. Whether you agree with it or not, let's address the issue from a different angle. I would like to turn your attention to how this affects security, and the viability of a company.

Let's assume for one minute that Company A (some big-pharma company) is hiring H1-B "contractors" from off-shore. Let's further assume that they're effectively vetting their employees because they understand the value of knowing everything about the people you hire to help stem insider security threats. Now, maybe I'm reaching a bit here, but here is how I see a company effectively vetting their employees:
  1. Employment history check (calling previous employers)
  2. Criminal background check
  3. Drug testing
  4. Credit check
  5. Extensive background check (military-grade) for those who work in security or super-secret labs making the next wonder-drug
Again, let's not look at whether these H1-B candidates take away jobs from US citizens, but let's address security. All of the above checks can be run against a candidate from the United States, but how many of these things can you effectively check against, for example, someone from China? You can only get the employment history that their contracting agency gives you, and you have to assume they're not just making things up (really, do you trust them?). You also have absolutely no way of doing a criminal background check (save for Interpol - and if your candidate shows up there, you've got bigger problems). Maybe you can ask them to submit to a drug test, but they'll certainly have no verifiable credit history or extensive background check available.

What does that mean to you, the hiring manager at Company A? It means you're hiring a threat. Period. It doesn't matter how you try to word-smith your contracts, the fact is you're hiring an unknown, and in security unknown equals threat. If you don't know what's in the box, odds are you're not stupid enough to allow it into your perimeter. But - the fact is, we allow contractors we haven't fully vetted into our environment all the time. We then give them access as system administrators, customer service representatives, database administrators, researchers, lab assistants and many, many other sensitive positions.

So let me summarize. The article is interesting in that it brings back up an old debate which has raged on for years and will likely not be settled by you or me, but rather by greedy politicians who cede positions and votes to lobby groups headed up by Oracle, Microsoft, and other greed-based organizations whose goal is to hire the absolute cheapest labor, period. But that's not the point. The point is that we're allowing threats into our environments - nay, we're asking for threats to come into our environments. That's a security issue, and one we need to raise.