Sunday, February 10, 2008

Breaking Vista: Psycho-analyzing Microsoft's mixed message

Much has been written of late about Vista security, specifically the WGA/Windows Activation features. To date there have been two good hacks that worked great if you had a non-legal [pirated] copy of Vista, as briefly discussed here, on ZDNet. With Vista SP1 hitting, those two well-known hacks will no longer work, and I can understand why that's happening. What I don't understand is the mixed message Microsoft is sending.

First, Microsoft announced that it would remove the feature in Windows Vista that would essentially [almost] completely disable the OS if it was not activated within the specified time period, and replace it with a system of "nags". That's odd. Now Microsoft is moving aggressively to disable the hacks that have had some people running illegally installed copies of Vista, which creates a whole new set of problems for some unsuspecting (or not-so-unsuspecting) folks. Interesting.

So why is Microsoft sending this mixed message? If you look at the big picture and recall that Microsoft removed the WGA validation requirement for IE7 installations on non-genuine copies of XP, things get even more "interesting".

I think I understand, though - it's not about keeping hackers at bay, or beating those black-hats. It's about the larger-volume issues. It's about the home users (who would otherwise be legit) who have been OEM'd bad copies of Vista and will now have to go spend the money to "buy" their legit versions. The IE7 bit is about market share in the browser wars - that's hopefully obvious.

So here's the big-picture analysis in a nutshell: Microsoft isn't after the "hackers" because it is basically (from what I can tell) admitting it won't beat them. This is a good move; playing the one-up game over what is, in the big picture, a very small number of licenses is a losing proposition. Instead, Microsoft is after those home users who have unknowingly obtained a non-legal copy of its newest operating system, and it is going to make them purchase a genuine license - but not at the price of shutting down their PC. MS wants to use the gentle prod versus the pointy stick... good move. I think their strategy is solid overall - let's see what happens next.

Friday, February 1, 2008

The Psychology of Patching...

I read an interesting article on ZDNet and thought I would add my own spin on it. Giving credit to the original source (http://blogs.zdnet.com/security/?p=843), I think Larry Dignan found an interesting topic. Why? Because there really is a psychology to patching, and it's not unique to Oracle, or Microsoft, or any particular software vendor. Here's my take.

Eric Maurice of Oracle brings up an interesting point about how patching is really a negative experience... Russian roulette, if you will, with your production environments. How many of us have had our production environment stop working, seemingly at random, due to an unintended consequence of a patch? Think about that for a second; if at least one instance pops right into your head, you know where Eric Maurice is coming from. What I don't think I agree with are Eric's two possible solutions. The first is to make patching mandatory, period. The second is to make it a measurable metric, which would hopefully bring about better patching entitlement... an interesting concept, but I don't agree that these are the only ways of looking at patching. In fact, I don't actually see patching as a 100% must-do. I'll pause a second while you clean the soft drink you just sprayed on your monitor...

You're wondering how, as a security veteran, I can say that patching isn't always critical. Well, it's precisely because I've been doing this for so long that I can say that. It's the exact same reason a critical vulnerability in a web application isn't always critical... you understand, right? Let me elaborate here, and maybe clear things up a little and hopefully dispel your thoughts of me as a lunatic.

First, I think of patching as a last line of defense. If your systems are so exposed that you have to rely solely on patching, you may have a problem to start with. This is a bit controversial, as a significant number of my colleagues would argue that a patched system should need no other defenses - but allow me to make my point. I'll make no argument that front-line systems must be patched, but then... those are likely more heavily defended by things like firewalls, IPSes, and so on ad nauseam. Patching is important here, definitely, but since these systems are likely clustered, you can usually patch and test one node or a small cluster without taking down the entire system at once. When you talk about things like databases and other critical pieces of infrastructure, you should have multiple layers of defense, so patches are only one piece of the overall defense strategy. Even an Internet-facing design, if it's done properly, has n+3 tiers (at least 3...) with mitigants in between the layers such as IPSes, firewalls, and possibly database front-ends (look into Imperva's database cloaking technology), on top of properly secured access rights, encryption, and everything else that makes up security. If all the other pieces are working right, then patching should be but a small component of the puzzle.

I'm not saying patching isn't important - don't get me wrong - but it shouldn't be as "all-important" as some people in our security realm would dictate. If you don't patch to the latest Oracle patch bundle, the sky won't fall, provided your application design is security-conscious.

I will admit I have ulterior motives here as well, ones that are not security-related. Keep in mind that in order for IT to be supported, since it is likely not a profit center for the company, the business must make money. If a business is to make money, downtime must be minimized and performance must be streamlined. If you throw "must-do-it-now" patching into that mix, you have a recipe for disaster - I can virtually guarantee that. I have spent many hours on calls trying to figure out, and then later explain, why the latest patch set for product X that my security team required is now impacting production systems - and the conversation never ended well. The CIO will always play the "it has to stay up and be functional" card over "it has to be 100% bulletproof". Always. It almost doesn't matter how fool-proof your strategy is for testing patches before they go into production; it's almost impossible (almost, because with a vast budget anything is possible) to duplicate every aspect of your production systems on a test system. Sometimes things happen that you can't even predict, and BAM! You're on an outage call. But I digress...

So, to my original point - patching should not be treated as a "thou shalt" directive in every case. Give your systems the proper front-line precautions and the burden on you will be significantly smaller when Microsoft, Oracle, or IBM releases the fix for that next critical bug.

Feedback, as always, is welcome!

How to tell your WebAppSec program has failed you...

Reference: http://thedailywtf.com/Articles/Not-Exactly-AJAX.aspx

A friend of mine often sends me links to this site and sometimes I read them, sometimes I don't. For some reason I read through this - and it immediately hit me. This Web App Sec program failed, big. You may be saying to yourself, "Self, it doesn't even look like there is a Web App Sec program here!" Bingo.

Not to state the obvious - or even to pick on the stupid - but let me point out the three deadly sins that this "programmer" committed.
  • First, he or she is building SQL statements on the fly. If reading about the latest web hacks and data compromises has taught you anything, it's that you never, ever, under penalty of death, construct SQL statements on the fly like this. Yes, there are special cases, but those have to be very carefully thought out, planned, and secured... and this is obviously not one of them. SQL injection, anyone? (There's a short sketch of the safer pattern right after this list.)
  • Second, the user ID and password are in the code - viewable at the client by doing a "view source" in whatever browser you're using. If you as a developer are still relying on "it's not there in plain text on the page, no one will see it", dig your head out of the sand and kiss your a** goodbye. That mentality became obsolete about 8 years ago, and anyone who codes this way should have their keyboard permanently taken away.
  • Lastly - you have, obviously, a database server out on the Internet. Forget SQL injection; how about firing up your favorite SQL front-end (for MS SQL Server, in this case) and making database calls directly? Let's just cut out the middleman, save ourselves the trouble of "hacking into" the database, and go straight into full database devastation mode. Incredible.
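To make the first two sins concrete, here's a minimal sketch of the safer, server-side pattern: parameterized queries and credentials that never leave the server. It uses Python's built-in sqlite3 module as a stand-in for whatever database actually sits behind the application, and the table, column, and environment-variable names are made up purely for illustration.

```python
import os
import sqlite3  # stand-in for the real database driver; the same pattern works with any DB-API module


def find_user(conn, username):
    """Look up a user with a parameterized query instead of string concatenation."""
    # The "?" placeholder lets the driver handle quoting and escaping,
    # which is what defeats classic SQL injection.
    cur = conn.execute(
        "SELECT id, display_name FROM users WHERE username = ?",
        (username,),
    )
    return cur.fetchone()


if __name__ == "__main__":
    # Connection details and credentials stay on the server, pulled from the
    # environment (or a vault) -- never embedded in anything sent to the client.
    db_path = os.environ.get("APP_DB_PATH", ":memory:")
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users "
        "(id INTEGER PRIMARY KEY, username TEXT, display_name TEXT)"
    )
    conn.execute(
        "INSERT INTO users (username, display_name) VALUES (?, ?)",
        ("alice", "Alice Example"),
    )
    # A classic injection attempt is treated as a literal username and finds nothing.
    print(find_user(conn, "alice' OR '1'='1"))  # -> None
    print(find_user(conn, "alice"))             # -> (1, 'Alice Example')
```

The point isn't the particular library - it's that the query text and the data are handed to the database separately, and the database connection (with its credentials) lives on the server, not in the page.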
So what have we learned by seeing this absolutely atrocious piece of code? Better yet, could it have been prevented? The answer is an emphatic yes - and here's how, using some very simple logic and process. In case you don't recall, the three components of any good solution are People, Process, and Technology (PPT) - the things that make solutions viable and workable. Here's how PPT could have saved this deadly piece of code:
  • People: (1) Trained developers rarely write stupid code like this. (2) Even if they do, having someone double-check, or "cross-check", their work will eliminate 99.9% of "stupid" coding errors.
  • Process: (1) A proper SDLC methodology would typically prevent a tragedy like this by providing a structured approach to development, with code reviews, cross-checks, and audits along the way to producing production applications. (2) A formalized process is repeatable, and thus reinforces good habits, intelligent thinking, and learning from your own mistakes and those of your peers.
  • Technology: Either a static-code or hybrid analysis tool (like Ounce Labs, Fortify SCA, or DevInspect) or an application security scanner (like HP's WebInspect or Watchfire AppScan) would have caught this egregious error in judgment. Plenty of other analysis and scanning software out there could have caught it as well. (A toy illustration of the idea follows this list.)
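For a sense of what the Technology piece automates, here's a toy Python check that flags two of the patterns from this example: SQL built by string concatenation and hard-coded passwords. This is purely illustrative - the check names and regexes are mine, and real products like the ones above use actual parsing and data-flow analysis rather than line-by-line pattern matching like this.

```python
import re
import sys

# Toy signatures, purely illustrative of the idea -- not how commercial analyzers work.
CHECKS = [
    ("SQL built by string concatenation",
     re.compile(r"""["'](SELECT|INSERT|UPDATE|DELETE)\b.*["']\s*\+""", re.IGNORECASE)),
    ("possible hard-coded credential",
     re.compile(r"""(password|passwd|pwd)\s*=\s*["'][^"']+["']""", re.IGNORECASE)),
]


def scan(path):
    """Print a warning for every line that matches one of the toy signatures."""
    findings = 0
    with open(path, encoding="utf-8", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            for label, pattern in CHECKS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {label}: {line.strip()}")
                    findings += 1
    return findings


if __name__ == "__main__":
    total = sum(scan(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)  # non-zero exit so a build or cross-check step can fail on findings
```

Run something like this against a source tree (e.g. `python toy_check.py src/*.asp` - the script name is hypothetical) and wire the exit code into the build so findings break it; that's the same spot in the process where the heavier tools above belong.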
What are we to learn from all this?
  1. Software developers are very seldom security-conscious, and will write bad code given the chance. (Remember, bad doesn't necessarily mean it doesn't work right; the application may work great with proper inputs and no malicious intent - but this software is easily breakable!)
  2. Developers need help. Get them trained, and give them resources like a formal development process and tools to check their code.
  3. Your bad code will be analyzed. Whether it's someone like me, someone at the DailyWTF, or someone trying to steal your data. It would be intelligent to make sure you or someone internal to your company is the first to find critical defects before they become catastrophic to your business.
Good luck.