Software testing, done right, involves three facets: functionality, performance, and security. The real problem for any tester or manager arises when these three components don't all make it into every testing cycle. It's akin to choosing which of your three children gets braces while the others simply get a toothbrush and a slap on the back.
Since I started looking at web application testing in depth just over two years ago, I've learned a great deal about testing cycles. While the concept may seem simple, the nuances can make your head spin!
In my mind, it all breaks down to three simple questions:
- When do we test?
- What do we test?
- How much of it do we test?
To get started, think about the real-world scenarios you encounter every day. Applications (and not just those written for the train wrecks we call web browsers) are released on a regular basis at your place of employment - I guarantee it. And if you don't know about those releases, you have an even bigger problem than the one I'm addressing here: a process problem worth raising with someone.
Think about how many applications your company delivers. Think about whether you follow an Agile or a traditional Waterfall methodology. Think about how long your release cycles are, how many people are involved, and what power you have to stop a poorly written application from going live.
Now, scroll back up and look at the three questions I've highlighted for you. How do you decide each of them? Who makes those decisions?
While your brain is still going, write your answers down and either post them here (anonymously or otherwise) or email them to me directly. I want real-life input from some of you - how you solve these problems - so that others can learn from your experiences as well as mine.
Thanks for reading... I look forward to your feedback!