At the TestBash in Cambridge,
Steve Green introduced an 8-layer model for Exploratory Testing.
Steve started with some quotes about Exploratory Testing that he hears from other people:
- Ad-hoc and random
- Unstructured and unplanned
- Just trying to break the system
- Don’t know what you’ve done
- Don’t know what you haven’t done
- Not repeatable
I encounter a number of these during my own work as well.
If you look at other professions, exploration is a highly valued activity, Steve explained. Sir Francis Drake, for example, certainly had a plan and knew where he had been, and people took him very seriously. Why is this different in testing?
In Steve’s early days, he started with a set of steps to test a system, and those steps eventually broke the whole system. By identifying the steps to reproduce the bugs they could find the root cause and see why it failed, but that took a lot of time back then. Over the past ten years our community has come to an understanding of Exploratory Testing that is highly structured in nature. This structure includes six building blocks:
- Inventory – what is there to test?
- Oracles – how do we know if it’s right?
- Test plan – a flexible outline of our work
- 8-layer testing model – a structured approach to exploration
- Reporting – minimal but sufficient; a concise management report that we can zoom into for more detailed information if necessary
- Management – ideally session-based (but I don’t dare to call it a best practice)
The 8-layer model is a framework, not a process. It helps us plan and control our testing. It also helps us to find bugs with the simplest sequence of events and the most “vanilla” data, making diagnosis and bug advocacy easier. The 8-layer model also provides a vocabulary for reporting test coverage.
Steve explained that there is an underlying paradigm: we do things only to the extent that it is useful to do so. That might mean documentation of test coverage and results that is minimal.
The 8-layer testing model consists of:
- Input constraint and data validation tests
- Input combination tests
- Control flow tests
- Data flow tests
- Stress tests
- Basic scenario tests
- Extended scenario tests
- Freestyle exploratory tests
(ICICCFDFSBSESFE doesn’t form a mnemonic, unfortunately.)
You can leave out some of the layers. For example, if your developers have earned credibility with you by doing decent unit testing, you can focus more on the later layers of the model.
In the first layer, we focus on input constraint and data validation tests. The TestObsessed heuristic cheat sheet provides some examples for these. Mandatory fields, maximum field lengths, and domain constraints like permitted characters and formatting rules are some examples that Steve mentioned. If there is a functional specification or data dictionary, we can compare the actual behavior with the intended behavior. If there isn’t, we progressively build our own data dictionary. Steve explained that they once used voice recognition software to feed data into the system. At that point they found out that there were no keyboard or mouse events involved, yet the software was still expected to work.
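To make that concrete, here is a minimal Python sketch of such layer-1 probes. The validate_username function and its rules (mandatory, at most 20 characters, ASCII letters and digits only) are stand-ins I made up for illustration; in practice the rules come from the specification, or from the data dictionary we build as we go.

```python
def validate_username(value):
    # Stand-in for the system under test, enforcing the assumed rules.
    return value != "" and len(value) <= 20 and value.isascii() and value.isalnum()

probes = [
    ("", False),             # mandatory field left empty
    ("a" * 20, True),        # exactly at the assumed maximum length
    ("a" * 21, False),       # one past the maximum (boundary)
    ("user1", True),         # plain "vanilla" input
    ("user name", False),    # character outside the permitted set
    ("üser", False),         # assumed formatting rule: ASCII only
]

for value, expected in probes:
    actual = validate_username(value)
    status = "ok" if actual == expected else "MISMATCH"
    print(f"{status}: {value!r} -> {actual} (expected {expected})")
```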
On layer two we start looking at combinations and how inputs interact with each other. We test relevant combinations where we know that inputs interact. We also look for undocumented interactions. Steve also pointed to the pairwise testing work of James Bach and Justin Hunter, which can help come up with a minimal set of combinations for a given set of inputs.
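To illustrate why that matters, here is a toy greedy all-pairs picker in Python. The parameters and their values are invented for the example, and real tools like James Bach’s ALLPAIRS or Justin Hunter’s Hexawise do this far more thoroughly; the point is merely that covering all pairs needs far fewer tests than covering all combinations.

```python
from itertools import product, combinations

params = {
    "browser": ["Firefox", "Chrome", "IE"],
    "account": ["guest", "member", "admin"],
    "payment": ["card", "invoice"],
}

names = list(params)
# Every pair of values across two different parameters that must be covered.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}

tests = []
for combo in product(*params.values()):
    row = list(zip(names, combo))
    new = {pair for pair in combinations(row, 2) if pair in uncovered}
    if new:              # keep the combination only if it covers new pairs
        tests.append(dict(row))
        uncovered -= new

print(f"{len(tests)} tests instead of "
      f"{len(list(product(*params.values())))} exhaustive combinations")
```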
In layer three we take a look at control flows through the system. These tests are aimed at the business logic in a structured manner. We identify all logical paths through the system, and the data required to force the system through those paths. Generally we use very “vanilla” data to avoid triggering bugs that are not related to the logic. We use unique data in every field where possible, constructed such that we can easily tell if it is corrupted, missing, or in the wrong place.
Layer four is closely related to layer three: here we test the data flow through the system. We push data in through the front end and identify where it goes as all the logical paths are exercised.
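Here is a minimal sketch of how I picture that unique, traceable “vanilla” data for layers three and four, assuming hypothetical submit_order and fetch_invoice stand-ins for the front end and a downstream output:

```python
def make_test_data(fields, run_id):
    # Embed the field name and a run id into every value, so any value
    # found downstream tells us exactly where it came from.
    return {field: f"{field}-{run_id}" for field in fields}

def submit_order(data):       # stand-in: the front end accepting our data
    return dict(data)

def fetch_invoice(order):     # stand-in: a downstream document
    return " | ".join(f"{k}={v}" for k, v in order.items())

data = make_test_data(["name", "street", "city", "reference"], run_id="r42")
invoice = fetch_invoice(submit_order(data))

for field, value in data.items():
    if value in invoice:
        print(f"traced: {field} -> present in invoice")
    else:
        print(f"LOST or corrupted on the way: {field} ({value})")
```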
In layer five we stress the system. Once we have identified where all the data goes, we push the maximum possible amount through each field and look for truncation or other forms of corruption.
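A quick sketch of that idea, with an assumed save_comment stand-in that silently truncates at 255 characters, a classic bug this layer is meant to catch:

```python
def save_comment(text, max_len=255):
    # Stand-in for the system under test: silently truncates on save.
    return text[:max_len]

for length in (254, 255, 256, 10_000):
    sent = "x" * length
    stored = save_comment(sent)
    if stored != sent:
        print(f"length {length}: truncated to {len(stored)} characters")
    else:
        print(f"length {length}: round-tripped intact")
```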
Steve’s sixth layer refers to basic scenario tests: individual functions are executed in sequences that replicate basic happy-path user behavior. In comparison, the extended scenarios in layer seven combine multiple basic scenarios and execute them in large numbers to simulate real user behavior over a longer period. That might mean that we repeat the same test many times, or the same set of scenarios in different sequences. He referred to James Whittaker’s book Exploratory Software Testing, whose tours expand on this concept in layer seven.
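A sketch of the layer-7 idea under my own assumptions: a handful of hypothetical basic scenarios from layer six, run many times in shuffled order to approximate real usage over time.

```python
import random

# Hypothetical basic scenarios; in practice each wraps real user actions.
def login(log):       log.append("login")
def browse(log):      log.append("browse")
def add_to_cart(log): log.append("add to cart")
def checkout(log):    log.append("checkout")

basic_scenarios = [login, browse, add_to_cart, checkout]

random.seed(42)           # a fixed seed keeps the sequence reproducible
log = []
for _ in range(100):      # "large numbers" of rounds, compressed in time
    for step in random.sample(basic_scenarios, k=len(basic_scenarios)):
        step(log)

print(f"executed {len(log)} steps; last round: {log[-4:]}")
```

The fixed seed matters: when a long shuffled run finally breaks something, we can replay exactly the same sequence while diagnosing it, which also makes bug advocacy easier.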
The eighth and final layer of the model uses “What if…” tests to investigate what a user of the system can do. We use our full knowledge of the system to do things like looking for race conditions, logging in multiple times concurrently on the same account, or editing URLs. He referred to “How to Break Web Software” by James Whittaker for examples of this.
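In that spirit, here is a tiny “what if” probe for a lost-update race, with a deliberately naive Account class standing in for the system under test:

```python
import threading

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        current = self.balance   # read...
        current += amount        # ...modify...
        self.balance = current   # ...write: not atomic, so race-prone

account = Account()
threads = [
    threading.Thread(target=lambda: [account.deposit(1) for _ in range(100_000)])
    for _ in range(2)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With a race, the final balance often ends up below the expected 200000.
print(f"expected 200000, got {account.balance}")
```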
Overall I think that the proposed model can help more traditional testers see the bridge between &§$%&-certification lessons and Exploratory approaches to software testing. I expect thinking testers to adapt it soon and find more creative ways to test. If you don’t do that as a tester, you’re probably sticking to the dogma that Alan Richardson referred to earlier today. In the end I wondered how the layers could map onto different mission types in session-based Exploratory Testing – but I leave that to the ambitious readers of my blog. 🙂