I've posted this blog in two parts. Part 1 focused on the issues affecting testers; this part addresses the concerns often raised by managers, specifically:

  • What if the assigned tester goes off sick?
  • How can we convince the project team Exploratory Testing is acceptable?
  • If we don’t have scripts, how can we supply test evidence for an audit?

A frequent concern raised by managers considering exploratory testing is “what happens when a tester is off sick and there is a deadline to meet?”
Often when I suggest test analysis brainstorming sessions (as mentioned in Part 1), people say they have no one to work with, as there is only one tester assigned to each stream. In reality, most managers have more than one person reporting to them, so in these teams we have implemented a 'you scratch my back, I'll scratch yours' approach: someone from a different stream gives time to join a brainstorming session, and we reciprocate when they need the assistance of extra brain power. If there truly is no one (not even a developer or BA), then the lack of test scripts isn't an issue, as there will be no one to pick up the testing anyway.

As mentioned in Part 1, team brainstorming sessions help design even better testing ideas, but they also solve the problem of who to turn to when a tester is out sick and a crucial deadline is looming. Those included in the brainstorming gain knowledge, so they are better able to pick up the area at short notice. They also have better risk awareness than someone trying to follow a test script with no prior knowledge of the project.

Often a major headache for test managers is convincing the project team that detailed test scripts are neither essential nor a productive use of testers' time. To be fair, some project teams don't care how we test as long as we don't mess up. But some projects use test scripts as a way to manage the test team: "Tell me how many scripts you plan to run each day so I can monitor your progress" (most common when the test teams are offshore), or they use a simple pass/fail count to attempt to judge the quality of the product under test. Finally, some review the written test scripts as a way to ensure the right aspects of the product are tested. As testers, I believe *we* have the responsibility to help the project team improve the way they engage with us, so they can get the best from our service. Enhanced communication is the key to this.
Our testers who use the analysis techniques mentioned in Part 1 are able to present a detailed understanding of the system, the associated risks and the recommended coverage very early in the project. This verbal communication is far more powerful than sending hundreds of detailed test scripts to be reviewed. The script-review approach inevitably results in miscommunication and re-work: scripts are frequently misread or, worse, don't get read at all, providing a false sense of security to the whole project. A walkthrough meeting serves the same purpose as scripts (ultimately assuring the project that the important risks and required coverage are being addressed by testing) but with less chance of a misunderstanding going unnoticed.
We have had some challenges getting the project team to listen on the first occasion, but it's worth the struggle. To date, every project team that has been walked through a mind-mapped test approach has been impressed with the effort taken to map out their system, and they are happy to continue engaging with us as they realise the learning opportunity it provides them too. Missed scope is also more easily spotted by presenting our analysis this way: it's far easier for developers and BAs to spot a missing branch or connection in a logic map than in a list of test scripts.
By proactively demonstrating our depth of system knowledge this way, our test teams have earned credibility and found the project teams less inclined to micro-manage them in counter-productive ways.

As mentioned in Part 1 of this post, when I talk about exploratory testing I do not mean no documentation. I work in an audited environment, often controlled by regulatory bodies, so we absolutely have to show the traceability and auditability of the testing performed. By placing the Session Based Test Management (SBTM) framework around exploratory testing, we provide the coverage traceability and test evidence required, but still allow testers the freedom to continue learning, designing and executing the most appropriate tests. Testers create charters to guide the mission of their exploratory testing, allowing managers to plan and report coverage. We use a tool (such as RapidReporter or OneNote) to aid note taking whilst testing. These session charters and test notes form the auditable test evidence required. They are also used as a learning aid… not to dictate how to test, but to inform how something was tested previously.
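To make the SBTM idea concrete, here is a minimal sketch of how a charter plus timestamped session notes can form an auditable trail. The names and structure are my own illustration, not the format of any particular tool; real tools such as RapidReporter record richer detail.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class SessionNote:
    timestamp: str
    text: str

@dataclass
class Charter:
    """One exploratory testing session: a mission, plus the notes taken while testing."""
    mission: str   # what this session sets out to explore
    area: str      # product area, used for coverage planning and reporting
    tester: str
    notes: List[SessionNote] = field(default_factory=list)

    def log(self, text: str) -> None:
        # Timestamped notes become the auditable record of what was actually done.
        stamp = datetime.now().isoformat(timespec="seconds")
        self.notes.append(SessionNote(stamp, text))

    def audit_summary(self) -> str:
        # A flat, human-readable trail suitable for handing to an auditor.
        header = f"Charter: {self.mission} | Area: {self.area} | Tester: {self.tester}"
        lines = [f"  {n.timestamp}  {n.text}" for n in self.notes]
        return "\n".join([header] + lines)

session = Charter(
    mission="Explore login error handling for lockout behaviour",
    area="Authentication",
    tester="A. Tester",
)
session.log("Tried 5 failed logins; account locked on 5th attempt as expected")
session.log("BUG: lockout message reveals whether the username exists")
print(session.audit_summary())
```

The point of the sketch is that the charter captures the mission (what a manager plans and reports against), while the notes capture what was actually tested and found, so the evidence is produced as a by-product of testing rather than written up front as scripts.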

Removing the detailed scripting part of the traditional test process frees up significant time for the test team to focus on the important goals of thoroughly understanding the system and actually performing tests. 
If you would like the freedom of exploratory testing but are still concerned about letting go of test scripts, ask yourself what you are gaining from them, and consider whether that could be replaced with something more efficient which still helps achieve the project goals.