Last month at the CSI: Annual 2009 conference, as a few of us sat around contemplating and discussing the finer points of InfoSec, an interesting topic came up. I managed to stir up the "functional vs. secure" question again, and we went round and round on whether it would be better for the overall state of end-user security if updates were forced (much the way Google Chrome just auto-magically updates itself) and end-users could do nothing about it ... OR whether it's better to simply let people decide for themselves whether or not to update. Both sides were argued (by InfoSec professionals, mind you) and I wanted to present the debate from both sides for your consideration ... and maybe get an idea of where some of you stand.
The question fundamentally revolves around whether it is better to let users choose when to update their own computer software (for information security reasons) or to simply push updates on them without giving them an option.
First let's look at the obvious answer ... OK, maybe not so obvious, but at least it's the easy, top-of-mind answer, right? Let's talk pros and cons ... Let's pretend we can force updates on end-users.
On the positive side of the coin, it's good for the overall state of security on the Internet when you can force connected systems to update buggy software ... right? Imagine if, back when those network-borne worms were cruising and crushing Windows boxes, all those machines had patched themselves [from the central source] with the click of a button back at Microsoft HQ. That's a pretty rosy picture: all those exposed, vulnerable machines and unsuspecting end-users magically patched, no user intervention required. When someone comes to me and tells me their machine is hosed up with some piece of malware, I'm always tempted to check how far behind they are on their Windows O/S updates. Sure enough, 9 out of 10 people who come to me for help are months behind on their Windows patches ... at best. Some have sadly never gotten the memo and continue to ignore the little red shield in the bottom-right corner of their screen begging them to update; odds are they have never updated their machines at all and are vulnerable to all sorts of things. Now, we all know that a vulnerable machine is rarely an isolated thing. There is always collateral damage when some Windows box gets nailed with yet another nasty bug. Once you're infested, the machines around you tend to fall prey pretty quickly (as they're often just as out-of-date as your computer), and Heaven help us if you're connected to some corporate VPN or something important. Schools, businesses, libraries, homes ... all fall victim to carelessness (or cluelessness, your pick, if it even matters) when it comes to leaving machines unpatched. It would truly be awesome if every Internet-connected machine automatically grabbed and installed the latest updates as they became available (on some realistic interval, say every 6 hours) - and I'm willing to bet the incident count would drop substantially.
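Just to make the "realistic interval" idea concrete, here's a minimal sketch of the scheduling decision such an agent would make. This is a hypothetical helper, not any real Windows Update API - real update agents have their own (far more elaborate) scheduling logic:

```python
from datetime import datetime, timedelta

# The "every 6hrs or something" interval suggested above (an assumption,
# not a real product default).
CHECK_INTERVAL = timedelta(hours=6)

def due_for_update_check(last_check: datetime, now: datetime) -> bool:
    """Return True when enough time has passed that the machine
    should poll the central source for new patches."""
    return now - last_check >= CHECK_INTERVAL
```

An agent running this in a loop would phone home at most four times a day, which keeps load on the update servers predictable while still closing the window of exposure quickly.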
Sure, problems would all be immediately fixed and the worms would die quickly ... but don't forget the side-effects. The ugly truth is this - remember the last time you installed that "super-critical" Windows patch and your super-critical business application stopped working? Now imagine that on a massive scale. For reasons beyond my comprehension, developers tend to exploit unintended functionality - otherwise known as defects - to make their programs work. Thus, when the vendor comes along and patches the gaping hole that was allowing that crazy functionality ... you guessed it, the applications break and have to be re-engineered. How many of these can you name off the top of your head? I bet it's more than one. In the real world not all patches are deployable to our workstations, because some may break something we can't live without. It doesn't matter that the breakage is caused by a fix for a critical security issue the application is exploiting ... it only matters that the application cannot be allowed to break, so the fix cannot be applied. Without sufficient choice, a lot (and I mean a lot) of businesses would be in seriously hot water almost every Patch Tuesday.
So really, neither of these options comes out as the clear choice in any real-life setting. While I would love to enforce updates on everyone I know who doesn't know how to use their computer properly ... the reality is it would break a lot of things people cannot function without. And the reality of security is that if you can't get your work done, it really doesn't matter whether you're secure or not. So there has to be a happy medium somewhere, right?
What if ... when you installed Windows, it asked what type of PC you were installing and gave you the choice between "Home User" and "Enterprise User"? In Home User mode it would ask if you're a computer expert, and if you answered no it would simply set everything internally to auto-update - no choice. If you fancied yourself a computer genius, the O/S installer would ask whether you wanted forced updates or simply to be alerted of updates that you could then go install on your own. In the enterprise/corporate world, of course, the choice would be made at the central control servers (maybe via an AD policy element). This would then allow a business to choose which model it wants to follow, although I highly suspect few would choose forced updates.
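The installer logic described above boils down to a small decision table. Here's a sketch of it - the machine types and policy names ("forced", "notify", "central") are made up for illustration, not real Windows or Group Policy settings:

```python
def update_policy(machine_type: str, is_expert: bool = False,
                  expert_wants_forced: bool = False) -> str:
    """Map the hypothetical installer's questions to an update policy.

    machine_type: "home" or "enterprise" (the installer's first question).
    is_expert: the Home User mode "are you a computer expert?" answer.
    expert_wants_forced: the expert's choice between forced updates
    and update notifications.
    """
    if machine_type == "enterprise":
        return "central"   # decided at the central control servers (e.g. AD policy)
    if not is_expert:
        return "forced"    # non-expert home users: auto-update, no choice
    return "forced" if expert_wants_forced else "notify"
```

The design point is that the default path (non-expert home user) lands on forced updates, and opting out requires an explicit claim of expertise - the burden shifts to the person who thinks they know better.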
The real answer, for those of you living in today's reality, is that while we would all love to force updates on people ... it's simply not feasible to do so. Pushing updates may make everyone safer to some measurable degree, but it may also drop productivity and usability by about the same amount, which drives us toward catastrophic failure.
What do you think? Where do you stand? Now is your chance to provide a sound argument for your beliefs and aspirations. I look forward to reading your comments! (When you post a comment, please let me know if you do NOT want it published!)