Recently, I was invited to deliver a talk at a local meetup group that focuses on automation in testing, the Software Test Automation Group. Since this group has a strong leaning towards the technical aspects of testing, I decided to explain the terms testing and checking and why understanding them matters. During the talk, I touched on heuristics and oracles as approaches to testing, and I could see the puzzled look on many faces. These terms were clearly new to many people in the room, and I am sure that for many others the word oracle meant nothing more than the company Oracle. I wasn’t surprised, though.
For many testers, heuristics and oracles are strange or new concepts. They find them pedantic, too philosophical and not practical for use in real situations (yes, I have heard these comments from testers as well as non-testers).
It takes some education and explanation about heuristics and oracles (and how we can use them as effective tools to amplify our testing) before people start to “get it”.
In its simplest definition, a heuristic is a rule of thumb that you apply to solve a problem. Wikipedia defines it as follows:
A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples of this method include using a rule of thumb, an educated guess, an intuitive judgment, stereotyping, profiling, or common sense.
Often enough, when we face a problem, we try some solution that may work. If you suddenly find your computer inoperable, what do you do? You may press the power button; if that doesn’t work, you may check the power cable; and if that also doesn’t work, you may try plugging it into a different power source, and so on. Most of us try a series of steps like these. They are not guaranteed to work, but we try them because experience tells us they often do.
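In code terms, a heuristic is simply a fallible step you try because it often works. As a toy illustration in Python (the recovery steps here are hypothetical stand-ins), you might apply candidate fixes in order until one succeeds:

```python
# Toy illustration: each step is fallible, but trying them in order
# often finds a satisfactory solution. The step functions are hypothetical.
def press_power_button() -> bool:
    return False  # pretend this didn't help

def check_power_cable() -> bool:
    return False  # pretend this didn't help either

def try_different_outlet() -> bool:
    return True   # pretend this fixed it

def revive_computer() -> bool:
    heuristics = [press_power_button, check_power_cable, try_different_outlet]
    for step in heuristics:
        if step():       # any heuristic may fail; that is expected
            return True  # good enough: stop searching
    return False         # none of our rules of thumb worked this time

print(revive_computer())  # True: the third rule of thumb worked
```

None of these steps is guaranteed to succeed; the point is that we try them anyway because they are cheap and they often pay off.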
When you test software, you use these heuristics either knowingly or unknowingly. Even when you have your test cases or scripts written down in minute detail, you don’t usually follow them exactly as-is. Have you noticed there are times while testing that something catches your eye that is not in your scripts, or not even in the requirements, and you say to yourself, “hang on, that doesn’t look right. Let me try this to see what happens”? You suspect a problem because heuristics (your knowledge, learning and experience) are guiding you.
What are oracles?
An oracle is a principle or mechanism by which we recognize a problem.
I often use the example of a calculator to explain oracles. Let’s consider adding two numbers on a basic digital calculator. Press 3, then the plus key, then 4, and then the equals key. What do you expect? You probably expect to see a 7 on the screen, because your previous experience with calculators, and with mathematics, has taught you that 7 is the right output.
Now what would you do if you saw that 7 appear upside down, or as 7.000, or on the left side of the screen instead of the right? You would probably suspect that there is a problem. And how do you know there is a problem? Your experience with calculators is your oracle. You are comparing the result with similar, comparable products and drawing on your familiarity with other calculators. In other words, you are seeking consistency, or applying “consistency heuristics”.
In our example, the oracle will fail if what we saw as problems are in fact features of the product.
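To show how an oracle can be turned into an automated check, here is a minimal sketch, assuming a hypothetical calculator_add() function exposed by the product under test. The oracle is our knowledge of arithmetic, encoded as an expected value:

```python
def calculator_add(a: float, b: float) -> float:
    """Hypothetical interface to the calculator under test; a stand-in here."""
    return a + b

# The oracle: our knowledge of arithmetic says 3 + 4 should be 7.
result = calculator_add(3, 4)
assert result == 7, f"expected 7, got {result}"

# Note the limits: this check encodes only one oracle (the arithmetic value).
# It would not notice the 7 rendered upside down or on the wrong side of
# the screen, problems that a human tester's richer oracles would catch.
```

The check is only as good as the oracle behind it; as noted above, it can mislead us when an apparent problem is actually an intended feature.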
James Bach and Michael Bolton devised a mnemonic called FEW HICCUPPS, where F represents Familiarity and the first C represents Comparable Products. These heuristics, like any other testing technique, are fallible and context-dependent, but they help us recognize problems. Several of them translate directly into automated checks, as the sketch after this list shows.
Familiarity. We expect the system to be inconsistent with patterns of familiar problems.
Explainability. We expect a system to be understandable to the degree that we can articulately explain its behaviour to ourselves and others.
World. We expect the product to be consistent with things that we know about or can observe in the world.
History. We expect the present version of the system to be consistent with past versions of it.
Image. We expect the system to be consistent with an image that the organization wants to project, with its brand, or with its reputation.
Comparable Products. We expect the system to be consistent with systems that are in some way comparable. This includes other products in the same product line; competitive products, services, or systems; or products that are not in the same category but which process the same data; or alternative processes or algorithms.
Claims. We consider that the system should be consistent with things important people say about it, whether in writing (references, specifications, design documents, manuals…) or in conversation (meetings, public announcements, lunchroom conversations…).
Users’ Desires. We believe that the system should be consistent with ideas about what reasonable users might want.
Product. We expect each element of the system (or product) to be consistent with comparable elements in the same system.
Purpose. We expect the system to be consistent with the explicit and implicit uses to which people might put it.
Statutes. We expect a system to be consistent with laws or regulations that are relevant to the product or its use.
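To make a couple of these concrete, here is a minimal sketch in Python, under stated assumptions: calculate() is a hypothetical function driving the product under test, golden_master.json is assumed to hold results recorded from a past version (the History oracle), and Python’s own arithmetic stands in as a trusted “comparable product” (the Comparable Products oracle):

```python
import json

def calculate(expression: str) -> float:
    """Hypothetical interface to the product under test; a stand-in here."""
    return float(eval(expression))

def check_against_history(golden_master_path: str) -> list[str]:
    """History oracle: flag expressions whose result differs from a past
    version. Assumes the file holds e.g. {"3+4": 7.0} recorded earlier."""
    with open(golden_master_path) as f:
        recorded = json.load(f)
    return [expr for expr, past in recorded.items() if calculate(expr) != past]

def check_against_reference(expressions: list[str]) -> list[str]:
    """Comparable Products oracle: flag disagreements with a trusted
    reference implementation (here, Python's own arithmetic)."""
    return [expr for expr in expressions if calculate(expr) != eval(expr)]

print(check_against_reference(["3+4", "2*6", "10-3"]))  # [] means no inconsistencies found
```

As stressed above, a flagged mismatch is a trigger for investigation, not proof of a bug: the golden master or the reference implementation may itself be wrong, or the difference may be an intended feature.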
Google Calendar has a logo and a search bar on the page. When I asked people at the meetup what they expect if I click on the logo, many said that they expect to navigate to the Google home page. Why? Because that is what most other websites do. So that behaviour became an oracle. An inconsistency in this behaviour might have prompted some of them to log a bug (and their bug report would have more credibility by referring to their oracle than just expressing an opinion).
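That expectation can itself be encoded as an automated check. Below is a minimal sketch using Playwright’s Python API; the URLs and the logo selector are hypothetical placeholders, and a real check against a product like Google Calendar would also need to handle authentication and redirects:

```python
from playwright.sync_api import sync_playwright

# Comparable Products oracle: on most websites, clicking the header logo
# navigates to the home page, so we expect this product to do the same.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/app/settings")  # hypothetical inner page
    page.click("header a.logo")                    # hypothetical logo selector
    assert page.url.rstrip("/") == "https://example.com", (
        "clicking the logo did not return to the home page"
    )
    browser.close()
```

Like the mnemonic itself, this check encodes an expectation borrowed from comparable products; if the product deliberately breaks the convention, it is the check, not the product, that needs updating.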
Learning more about heuristics and oracles can help you test better, and there are numerous resources available online. I often return to my usual source of knowledge, Michael Bolton’s blog: in particular, read the “Oracles from the Inside Out” series (parts 1-9) on the DevelopSense blog, along with his other posts on heuristics.
A useful tool that I have been using is the Test Heuristics Cheat Sheet created by Elisabeth Hendrickson et al. If you have come this far, I would urge you to learn, and then start using, the Heuristic Test Strategy Model created by James Bach.