Wednesday, November 25, 2009

Open Wide: 2 Sides of Every Coin

Last month at the CSI: Annual 2009 conference, as a few of us sat around contemplating and discussing the finer points of InfoSec, an interesting topic came up.  I managed to stir up the "functional vs. secure" question again, and we went round and round on whether it would be better for the overall state of end-user security if updates were forced (much the way Google Chrome just auto-magically updates itself) and end-users could do nothing about it ... OR whether it's better to simply let people decide [for themselves] whether or not to update.  Both sides were argued (by InfoSec professionals, mind you), and I wanted to present the debate from both sides for your consideration ... and maybe get an idea of where some of you stand.

The debate fundamentally revolves around whether it is better to let users choose when to update their own software (for information security reasons) or to simply push updates onto them without giving them an option.

First let's look at the obvious answer ... OK, maybe not so obvious but at least it's the easy, top-of-the-mind answer right?  Let's talk pros and cons... Let's pretend we can force updates on end-users.

On the positive side of the coin, it's good for the overall state of security on the Internet when you can force connected systems to update buggy software ... right?  Imagine if, back when those network-borne worms were cruising and crushing Windows boxes, all those machines had self-patched [from the central source] with the click of a button back at Microsoft HQ.  That's a pretty rosy picture: all those exposed, vulnerable machines and unsuspecting end-users magically patched, no user intervention required.  When someone comes to me and tells me their machine is hosed up with some piece of malware, I'm always tempted to check how far behind they are on their Windows O/S updates.  Sure enough, 9 out of 10 people who come to me for help are months behind on their Windows patches ... at best.  Some have sadly never gotten the memo and continue to ignore the little red shield in the bottom-right corner of their screen begging them to update; odds are those machines have never been patched at all and are vulnerable to all sorts of things.  Now, we all know that a vulnerable machine is rarely an isolated thing.  There is always collateral damage when some Windows box gets nailed with yet another nasty bug.  Once you're infested, the machines around you tend to fall prey pretty quickly (as they're often just as out-of-date as your computer), and Heaven help us if you're connected to some corporate VPN or something important.  Schools, businesses, libraries, homes ... all fall victim to carelessness (or cluelessness, your pick, if it even matters) when it comes to leaving machines unpatched.  It would truly be awesome if every Internet-connected machine automatically grabbed and installed the latest updates as they became available (at some realistic interval, say every 6 hours) - and I'm willing to bet the incident count would drop substantially.
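To make that "realistic interval" idea concrete, here's a minimal sketch in Python of the one decision such a forced-update agent would make on each wake-up.  The function name and 6-hour constant are mine, purely for illustration; this isn't any real Windows Update API.

```python
from datetime import datetime, timedelta

CHECK_INTERVAL = timedelta(hours=6)  # the "every 6hrs or something" interval

def due_for_update_check(last_check: datetime, now: datetime) -> bool:
    """Return True when a forced update check should run again."""
    return now - last_check >= CHECK_INTERVAL

# A machine that last checked 7 hours ago is due; 2 hours ago is not.
now = datetime(2009, 11, 25, 12, 0)
print(due_for_update_check(now - timedelta(hours=7), now))  # True
print(due_for_update_check(now - timedelta(hours=2), now))  # False
```

The point of the sketch is how little logic is needed: everything hard about forced updates is policy and breakage, not code.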

Sure, problems would be immediately fixed up and the worms would die quickly ... but don't forget the side-effects.  The ugly truth is this: remember when you last applied that "super-critical" Windows patch and your super-critical business application stopped working?  Now imagine that on a massive scale.  For reasons beyond my comprehension, developers tend to exploit unintended functionality (otherwise known as defects) to make their programs work.  Thus, when the vendor comes along and patches a gaping hole that allowed crazy functionality ... you guessed it, the applications break and have to be re-engineered.  How many of these can you name off the top of your head?  I bet it's more than one.  In the real world, not all patches are deployable to our workstations, because they may just break something we can't live without.  It doesn't matter that the breakage is caused by a fix for a critical security issue the application is exploiting ... it only matters that the application cannot break, so the fix cannot be applied.  Without sufficient choice, a lot (and I mean a lot) of businesses would be in seriously hot water almost every Patch Tuesday.

So really, neither option comes out as the clear choice in any real-life setting.  While I would love to enforce updates on everyone I know who doesn't know how to use their computer properly ... the reality is it would break a lot of things people cannot function without.  The reality of security is that if you can't get your work done, it really doesn't matter whether you're secure or not.  So there has to be a happy middle somewhere, right?

What if, when you installed Windows, it asked what type of PC you were setting up and gave you the choice between "Home User" and "Enterprise User"?  In Home User mode it would ask if you're a computer expert, and if you answered no it would simply change all the internal settings to auto-update, no choice.  If you fancied yourself a computer genius, the O/S installer would ask whether you wanted forced updates or would simply like to be alerted of updates that you can then go install on your own.  In the enterprise/corporate world, of course, the choice would be made at the central control servers (maybe via an AD policy element).  This would allow a business to choose which model it wants to follow, although I highly suspect few would choose forced updates.
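For illustration only, those install-time questions boil down to a tiny decision table.  Here's one hypothetical way to sketch it; the profile names and mode strings are mine, not anything Windows actually exposes:

```python
def update_policy(profile: str, expert: bool = False, wants_forced: bool = True) -> str:
    """Map the hypothetical install-time answers to an update mode."""
    if profile == "enterprise":
        # Decided centrally (e.g. via an AD policy element), not per machine.
        return "central-policy"
    if not expert:
        # Home users who aren't experts get auto-updates, no choice.
        return "forced-auto-update"
    # Self-proclaimed experts pick: forced updates, or alerts only.
    return "forced-auto-update" if wants_forced else "notify-only"

print(update_policy("enterprise"))                             # central-policy
print(update_policy("home", expert=False))                     # forced-auto-update
print(update_policy("home", expert=True, wants_forced=False))  # notify-only
```

Notice the asymmetry: only the self-declared expert ever reaches a real choice, which is exactly the compromise being proposed.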

The real answer, for those of you living in today's reality, is that while we would all love to force updates on people ... it's simply not feasible.  Pushing updates may make everyone safer to some measurable degree, but it may also drop productivity and usability by about the same percentage, which drives us toward catastrophic failure.

What do you think?  Where do you stand?  Now is your chance to provide that sound argument for your beliefs and aspirations.  I look forward to reading your comments!  (when you post a comment please let me know if you do NOT want it published!)

Friday, November 20, 2009

Apple vs. Kaspersky - Functionality Wins

Let me set the backdrop for you like this... I just loaded a new machine with Windows 7, primarily to continue to use some of the "can't replace" Windows apps ... one of them being iTunes for my iPhone.  As far as installations go, everything went great!  I installed the OS first, then my stand-by anti-virus, Kaspersky (KAV) 2010 ... then I went and installed iTunes 9.  Everything was solid.

Once I got everything else I needed installed I started to re-load my iTunes library from the ginormous external drive I have ... and still, all was good.  Last thing I needed to do was re-download all my podcasts.

Now, let me remind you in case you've somehow managed to forget, how much I value functionality over security.  I don't.  I think the rate at which outrageously unnecessary functionality wins out over common-sense security is appalling.  Moving right along with the story ...

I bought a few songs via iTunes, downloaded them successfully, and started rocking out while the podcasts were supposed to download.  I read some email ... I "twittered" ... and read some blogs in my Google Reader.  I then went back to iTunes only to find that it had failed at downloading every ... single ... podcast.  Every single one had failed with error -3458.  Googling the error, I couldn't find anything coherent or relevant beyond iTunes 7 ... even some stuff that recommended I check permissions on folders.  But since iTunes had just installed itself on a new machine, and everything else was working (even downloading newly purchased music), I was baffled.

This is where my spidey sense kicked in and I thought ... "hrmm, what if Kaspersky is somehow causing this?"  What I did next was turn OFF (pause, as KAV calls it) the anti-virus client and try downloading the podcasts again.  The result?  You guessed it ... everything started downloading smoothly.

I was absolutely baffled.  Why in the world would downloading regular music work fine while downloading podcasts failed?  Totally baffling.  Without digging into a packet sniffer (which I had not yet installed on that machine), I emailed my go-to Kaspersky support guy, and Kenneth quickly responded (as he always does ... which makes me wonder if he sleeps?).  Anyway ... there was no internal knowledgebase hint at Kaspersky, but what he suggested was mind-boggling coming from a "security-oriented person".

Kenneth suggested I configure Kaspersky Antivirus to trust iTunes.exe and iTunesHelper.exe ... for no other reason than "it would probably work".  Did it?  Yea, sadly this solution works.

Now, we had a longer conversation about what's going on behind the scenes.  Apparently it has something to do with the way iTunes (thanks, Apple) keeps expanding what it actually does on your system ... and something about the way podcasts are downloaded goes beyond what the normal profile for an application allows ... thus podcasts fail to download unless you explicitly trust the iTunes binaries on your machine.

OK, so here's my problem.  First ... what the hell is Apple doing with iTunes that requires such a "constantly changing software profile" as Kaspersky support put it?!  I would really like to figure out what Apple's doing, and why they feel the need to change the program fingerprint "with every update" ... very interesting indeed.

Now, what has this taught me?  Once again, boys and girls ... functionality has run amok.  The answer, of course, is that if you want the cool things programs like iTunes do ... you have to take away the security controls.  I don't know about you, but explicitly trusting iTunes makes my skin crawl ... I really wish there were other alternatives for connecting to the iTunes online store.

I'm mad as hell folks ... mad as hell that functionality, over and over, and over ... continues to win over common-sense security controls.  I guess as long as cool widgets are built that even people like me can't seem to live without ... this will remain the status quo and there is no incentive to change.


Have you run into anything like this?  Have a feature vs security story to tell?  Either leave a comment or catch me on Twitter (@RafalLos) - I want to publish the best one out there!

Thursday, November 19, 2009

Bring on (the) KY

Hey everyone!

Just a quick note that tomorrow [Friday, November 20th, 2009] I will be speaking at the Louisville, KY chapter of ISACA (more info here on their homepage) on the topic of "Solving Problems That Don't Exist".  If you missed my ISACA eSymposium earlier in the year, and you happen to find yourself near Louisville tomorrow ... register and come by!

The initial talk via webcast picked up something like 1,600 folks so I welcome everyone to come by.  Bring friends, co-workers, your management ... maybe you'll learn something new or just spend a good lunch getting to hear what folks around you in similar industries are doing about this type of issue.  You DO need to reserve your space, so do so now if you haven't already!

If you want more information, or the slides ... or a seat please let me know!

Solving Problems That Don’t Exist
Building better security practices

Friday, November 20th 2009
11:30 – 12:00 Networking
12:00 – 1:00 Speaker

Waterfront Plaza (Directions)
10th Floor, East Tower
321 W Main Street
Louisville, KY

COST: $5 (Pizza will be served) – 1 Hour of CPE
Please RSVP to .
(Cash will be accepted at the door, but please RSVP)

Tuesday, November 17, 2009

OWASP 2009 (AppSecDC) Thoughts

I'm finally home and have a minute to write about the past week's OWASP AppSec DC 2009 conference.  And what a conference it was - far and away the best conference on information security of the year.  This includes the organization, the venue, the audience/attendees and the presenters.

I think some of my favorite presentations were Josh Abraham's 20-minute "Synergy! A world where tools communicate", Tom & Kevin's "Social Zombies: Your friends want to eat your brains", Chris Weber's two outstanding talks "Finding Hotspots" and "Unicode Transformations", and of course RSnake's "The 10 least likely and most dangerous people on the Internet".  If you missed any of those (or just want to re-visit them), the OWASP folks will be posting the videos and slides shortly, if they haven't already ... check here.

I think it needs to be said that the OWASP crowds are some of the more passionate folks around ... while there are still some zombies, like there were at CSI: Annual 2009, it's nowhere near as bad!  People actually participate, and I saw plenty of hallway discussions happening - and not just amongst the speakers either.  This was a great chance to combine ideas, pick people's brains and think about how to solve some of the problems plaguing application security.

Perhaps the most interesting presenter was Chris Weber with the "Unicode Transformations: finding elusive vulnerabilities" talk ... that was seriously fascinating.  I know I sat and stared as Chris demonstrated his mastery of the Unicode world and some of the ways of encoding, double-encoding and other tricks that even made my head spin.  I can't wait to dig into this topic more...

As always the OWASP projects were presented and updated, and I think the 3 that are on my personal watch-list (and should be on yours) are the ESAPI (Enterprise Security API) project, the OWASP O2 Platform, and the ESAPI Web App Firewall.  Some really big dents can be made in the general insecurity of web applications if these 3 are executed right, and deployed properly.

I'd like to thank everyone who attended my "When Web 2.0 Attacks!" talk, and if you have any questions, comments, discussions or other just want the slides you can always email me directly or leave a comment with your contact info!

See everyone at the next OWASP event!

Monday, November 9, 2009

The iPhone "worm"... SRSLY

Read carefully because I'm only going to say this once ... the "iPhone worm" everyone is buzzing about is possible because people jailbreak their phones and then do not change the root password from the default.  That's seriously asking for it.

At any rate, if you read up on the iPhone, infections like this are [at least currently] only possible on a jailbroken iPhone, thanks to the iPhone's inherent code-signing feature.

When ikee was interviewed over IRC for JD's blog, the virus writer had this interesting tidbit to say:

[09:05] {ikee} Secondly i was quite amazed by the number of people who didn't RTFM and change their default passwords.
[09:07] {JD} How far did you expect it to spread, exactly?
[09:08] {ikee} Well i didn't think that many people would have not changed their passwords I was expecting to see maybe 10~ or so people, at first I was not even going to add the replicate/worm code but it was a learning experience and i got a tad carried away :)

Well, there you have it.  Even ikee didn't expect that so many people would fail to "RTFM: Read the F*****g Manual" and neglect to change the default password.

Lesson here?

  1. RTFM
  2. Always know what you're doing when you apply any "hack"

Friday, November 6, 2009

Completely Missing the Point

You know what really grinds my gears?  "Writers" who publish articles on topics they clearly have no understanding of ... and that's magnified even further when they write for a publication (physical or digital) with a legitimately large reader base.

I write this after careful consideration of an article a good friend of mine sent me the other day ... which made me just "WTF" all over.  His email went something like this:

So, a colleague forwarded me the URL of a slate article.   (copied below)
It got me thinking, especially the complaint about drupal blocking javascript.  
  1. Business schools seem to be churning out NYT readers.
  2. NYT readers also probably read the Washington Post and Slate.
  3. These readers likely believe everything written.
  4. These readers as C* people (CIO, etc.) have the typical "superior", "I know all" attitude.
  5. They read articles like this and see it as gospel.
This is why web app security is difficult to explain to the higher-ups ... after all, the experts at Slate tell us that Javascript is a 14-year-old technology and we shouldn't be blocking it on our website!

So ... I thought about it some.  And now I will tell you why I think Chris Wilson needs to stop writing about technology ... at least until he's learned a little about it.

First off, I'm not an open-source bigot; in fact, I'm not for or against open or closed source ... each has its merits and its place in our very large technology world.  Second, I learned a long time ago that open-source people are their own special breed and, much like their closed-source counterparts, have their own unique quirks and nuances.  Lastly, I think this article is both inflammatory and misguided, and it misses the point entirely.  In fact, I think it's so misguided that I agree with my friend's thinking on how articles like this can actually lead to less understanding of security concepts!  But my ranting isn't going to make the point on its own, so let's analyze the article ... follow along, boys and girls.

  1. Chris must get paid for flowery language ... or his audience is just so much higher-brow than mine, because the first few paragraphs remind me of Bill Murray dropping C4 explosives into a gopher hole ... way, way over-done.  By the way, what "swing demographic" is he referring to?  I know many, many sites that are built on Drupal, and none of the administrators I know (personally, mind you) would call Drupal "pocked with political landmines".
  2. Drupal knows best: First off, I'm thrilled Drupal doesn't trust end-users (particularly novice admins) with the ability to drop JavaScript into where it doesn't belong.  I mean, gee Chris ... it's only JavaScript right?  What could possibly go wrong?  By the way, high fructose corn syrup is really, really bad for our children and is the leading cause of childhood obesity...
  3. Drupal is impenetrable: I have to give Chris points for his Dennis Miller-esque humor here ... although I think he meant to say INS (Immigration and Naturalization Service), not ICE (Immigration and Customs Enforcement) ... right?  Anywho ... Drupal's steep learning curve isn't a bad thing, kids ... it discourages the normal "click, click, click, I've got a site" mentality.  Holy crap, you have to know something to publish a website ... no way, Wayne!
  4. Drupal hates change: Nice dig on the farm bill ... I won't even dignify this point by rebutting it.
  5. Drupal is righteous: Yes, and they damn well should be ... they built the thing, and they know better than you how it runs and what the inner workings are.  I love the "Drupal doesn't break web sites. People with Drupal break web sites" line ... uhmm ... yea, so?  See point 4.
Alright, here's why I really think this article is worthy of the hall of shame, and why Chris needs to go back and actually do some research.  If he had, maybe by checking a public vulnerability database, he would discover that Drupal has had 264 vulnerabilities since tracking began ... and guess what: an overwhelming majority of those have been in add-on modules.  Drupal's core is actually, by my count (and someone please correct me if I've misjudged here), pretty well secured.

Anyway ... the comment about 14-year-old technology being blocked is the genius point here, from my reading.  For my money, it doesn't get any better than this:
"Should you, say, go completely rogue and try to add some Javascript in the body of a page—a 14-year-old technology that controls interactive components like buttons—the platform will have none of it."
That line demonstrates a clear contempt for the power of "14-year-old technology like JavaScript" ... which, by the way, remains one of the web's biggest vulnerabilities.

Some advice Chris ... think before you write ... and if you have no expertise - please don't make our jobs in InfoSec any harder by spreading stupidity in the ranks.

... hey, you were all thinking it, someone had to say it.