The last couple of days have seen some posts regarding the difference between testing and checking, as well as a comparison of machine checking and human checking.
The posts that I have seen so far are:
- James Bach and Michael Bolton: Testing and Checking Refined
- Iain McCowatt's blog in response to that post: Human and Machine Checking
This is my response to Iain's post. You may want to read the posts above before reading this one, or perhaps not.
First, let me say that I have been a big fan of the distinction between testing and checking since Michael Bolton first mentioned it to me at TWST (Toronto Workshop on Software Testing) about four years ago (I think). I am very grateful to James, Michael, and Iain for making their stances on this topic so clear in their blog posts.
There is a point I would really like to make, one I felt had been clear all along; but after talking with Michael Bolton last night, it is obvious to me that this point is not currently clear in the minds of all (or even most) testers.
When a “tester” (in this post I will use the term to refer to someone who is assigned to execute one or more manual scripts) starts to execute a script, they will not behave in the same way as a machine executing an automated script would. This should be quite obvious to anyone who stops for a moment to think about it. The machine can execute commands and compare the results much faster than a human, BUT it can only compare what it has been programmed to compare. Iain states this quite nicely in his summary:
Computers are wondrous things; they can reliably execute tasks with speed, precision and accuracy that are unthinkable in a human. But when it comes to checking, they can only answer questions that we have thought to program them to ask. When we attempt to substitute a machine check for a human check, we are throwing away the opportunity to discover information that only a human could uncover.
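To make that point concrete, here is a minimal sketch of what a machine check amounts to. All of the names here (FakeApi, login, status, token) are entirely hypothetical, not any real library; the point is simply that the check asserts exactly the comparisons it was given and passes happily even when something a human would spot instantly is wrong.

```python
# A minimal, hypothetical sketch of a "machine check" against a made-up login API.
class FakeApi:
    def login(self, user, password):
        class Response:
            status = 200
            token = "abc123"
            welcome = "W3lc0me, ali ce!!"   # visibly wrong, but never checked
        return Response()

def check_login(api):
    response = api.login(user="alice", password="correct-horse")
    # The machine compares only what it was programmed to compare...
    assert response.status == 200
    assert response.token is not None
    # ...and says nothing about the mangled welcome message above,
    # or about a 30-second delay, or a stray error in the log.

if __name__ == "__main__":
    check_login(FakeApi())
    print("PASS")   # the check passes even though something is clearly off
```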
Iain also mentions, very eloquently, that humans will always be able to do more than just check:
What a machine cannot do, and a human will struggle not to do, is to connect observations to value. When a human is engaged in checking this connection might be mediated through a decision rule: is this output of check a good result or a bad one? In this case we might say that the human’s attempt to check has succeeded but that at the point of evaluation the tester has stepped out from checking and is now testing. Alternatively, a human might connect observations to value in a way such that the checking rule is bypassed. As intuition kicks in and the tester experiences a revelation (“That’s not right!”) the attempt to check has failed in that the rule has not been applied, but never mind: the tester has found something interesting. Again, the tester has stepped out from checking and into testing.
The part that I did not see Iain mention (and this is the point I want to make) is that not all “testers” will notice much more than a machine would. I suggest that a tester will likely only notice more than what they are asked to check (i.e., more than a machine) IF they possess at least one of these traits:
- engaged,
- motivated,
- curious,
- experienced,
- observant,
- thoughtfully applying domain knowledge (I couldn’t think of a way to shrink this one down to a single word).
Having some of the traits above does not mean that the tester will necessarily notice something “outside” the script, but without any of these traits present during script execution I suggest there is little hope that the tester will notice anything “off script”.
I have an acquaintance who is the director of a large system test group at a telecom company (not my previous employer, Alcatel-Lucent). She wanted to assess the effectiveness of her manual test scripts, so she had over 1000 fault reports raised by manual testers analyzed for the trigger that led each tester to raise the bug. She found that over 70% of the fault reports raised over the past year had been raised by a tester noticing something wrong that was NOT specified in the script. Only 30% of the faults were triggered by following the script.
To me this is incredibly important information! If I were to replace all of those tests with automated tests, my fault-finding rate would drop by 70%. If I were to outsource my testing to the cheapest bidder, I predict my fault-finding rate would drop off dramatically, because the traits above would likely not be present (or not as strong) in the off-shore test team.
As I reflect on what I have been saying about testing vs. checking over the past few years, I realize I have been assuming that when I talk about “checking” I am talking either about unmotivated, disinterested manual testers with little domain knowledge OR about machine checking. Once you introduce a good manual tester with domain knowledge, you will find it very difficult to “turn off” the testing. To me, the thought of a good tester just “checking” is absurd.
Good testers will “test” whenever they are interacting with the system – whether they are following a script, a charter, or just galumphing. Bad testers will tend to “check” when they interact with the system – either because they don’t care enough or because they don’t have the required knowledge to “test”. Machines will “check” (at least for now – who knows what the future will bring?).