The Software Testing Club recently put out an eBook called “99 Things You Can Do to Become a Better Tester”. Some of them are really general and vague. Some of them are remarkably specific.
My goal for the next few weeks is to take the “99 Things” book and see if I can put my own personal spin on each of them, and make a personal workshop out of each of the suggestions. 
Suggestion #99: Question the veracity of 1-98, and their validity in your context – Kinofrost
Heh, I think it’s somewhat appropriate that we close out this project (and land on #99) with a need to talk about context directly. Yes, I admit it: I consider myself an adherent and a practitioner who believes in and tries to follow the context-driven principles (included in the workshop below, btw 😉 ). Too often we talk about context-driven testing as though “it depends” solves all the problems. I’m going to do my best not to do that here. Instead, I want to give you some reasons why being aware of the context can better inform your testing than not being aware, or following a map to the letter.
Workshop #99: Take some time to apply the values and principles of context-driven testing, and call on them when determining if anything from these past 98 suggestions actually makes sense for what you are working on right now.

First, let’s start with what are considered to be the guiding principles of context-driven testing (these are from context-driven-testing.com):

– The value of any practice depends on its context.
– There are good practices in context, but there are no best practices.
– People, working together, are the most important part of any project’s context.
– Projects unfold over time in ways that are often not predictable.
– The product is a solution. If the problem isn’t solved, the product doesn’t work.
– Good software testing is a challenging intellectual process.
– Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

The most likely comparison you will hear involves the polar opposite ends of the spectrum, i.e. the difference between testing a medical device like a pacemaker or the control software for the space shuttle vs. a video game app for an iPhone. On the surface, this should feel obvious. The scope of a project like a medical device, and the repercussions of failure, are huge. The outcome is literally life or death for some people. An inexpensive video app or game hardly warrants the comparison.
With that in mind, let’s try something a little more direct in comparison. What level of testing should go into the actual control software for a pacemaker vs. a monitoring application for that pacemaker that resides on a computer? The pat answer doesn’t work as well any longer, but there is still a non-trivial question here. Are there differences in testing? The answer is yes. The pacemaker controller itself would still go through much greater levels of scrutiny than the monitoring software would. In the event of system failure, the monitoring system can be rebooted, or the program turned on or off, with no effect at all on the pacemaker itself. If the monitoring software did cause the pacemaker to malfunction, that would not only be seen as catastrophic, it would also be seen as intrusive (and inappropriate).
This opens up different vistas and, again, raises the question “how do we test each of these systems?”. The first consideration is that the pacemaker is a very limited device. It has a very specific and essential function. There’s less that can potentially go wrong, but its core responsibility has to be tested. In this case, the product absolutely has to work, or has to work at an astonishingly high level of confidence for those who will be using it. For them, this is not a math theorem; this is literally a matter of life or death. The monitoring software is just that: it monitors the actions of the pacemaker, and while that’s still important, it’s of far secondary importance compared to the actual device.
This brings us back to our past 99 examples. The advice I’ve given may work fine for your project(s), but in some cases it may not be wise to use the approaches I gave you. That is to be expected. I can’t pretend to know every variable you will need to deal with, and you may well say “well, that may be fine for your project, but my manager expects us to do…” and yes, there you go: that’s exactly why we tend not to spell things out in black and white in context-driven testing commentary. We need to look at our project first, our stakeholders next, and the needs of the project after that. If we are planning our testing strategy without first taking those three things into account, we are missing the whole point of context-driven testing.
Bottom Line:
In this last statement, I’m going to borrow from my own website’s “What it’s all about” section. In it, I share a quote from Seth Godin that says “Please stop waiting for a map. We reward those who draw maps, not those who follow them.” In this final post, I want to make sure that is the takeaway this whole project gives. It would be so easy to just look at the 99 Things, assign the ideas to our work, and be done with it. Instead, I’ve strived to apply my own world view, my own context, and write my own map in these posts. I may have succeeded, I may not have, but if there’s any one thing I want to ask of anyone who has followed along, it’s this: don’t follow any of these ideas too closely.
If these workshop ideas feel uncomfortable for what you are doing, don’t get frustrated. Instead, focus on why they feel uncomfortable. What is different in your case? What could be modified? What approaches should be dropped altogether? It’s entirely possible that there are better ways to do any and all of these suggestions than what I have spelled out here. I encourage you to find out for yourselves. I’ve drawn a map, but it may not be the best map for you. If it’s not, please sit down and draw your own map. The testing world is waiting to see where you will take it.