The following article was published in the August 2015 issue of Testing Trapeze (http://www.testingcircus.com/testingtrapeze-2015-august-edition/).

Testing vs. Checking – Applied

The discussion about testing versus checking has been alive in the software testing industry ever since the distinction was first made by Michael Bolton in his thought-provoking talk at Agile 2009. Michael later expanded on his talk in this post. A relatively recent post by James Bach and Michael refined the distinction even further.

What prompted me to write this case study was the success I achieved in making the distinction between testing and checking clear to the stakeholders at my current organisation. These stakeholders include the Chief Information Officer (CIO), the Head of IT Infrastructure & Service Delivery, the Development team and the Release Manager among others.

Through this case study I have also demonstrated that the distinction between testing and checking is not ‘just semantics’ but can in fact have a direct, practical impact on how an organisation tests.
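
To make the distinction concrete before the story begins: a check is the application of an algorithmic decision rule to an observation of the product, something a machine (or a human following a script) can evaluate unattended, while testing is the human, exploratory activity that surrounds and informs such rules. Here is a minimal sketch of a check in Python; the function and values are hypothetical illustrations, not code from the project described below:

    # A "check": compare an observed output with a predicted, expected
    # result using a decision rule fixed in advance. No judgement is
    # involved at execution time, so a machine can run it unattended.
    # (Hypothetical example: not code from this case study.)

    def total_price(quantity: int, unit_price_cents: int) -> int:
        """Return the total price, in cents, for a line item."""
        return quantity * unit_price_cents

    def check_total_price() -> None:
        # Pass if the output equals the prediction, fail otherwise.
        assert total_price(3, 250) == 750

    if __name__ == "__main__":
        check_total_price()
        print("check passed")

Testing, by contrast, is the thinking around such checks: wondering what a quantity of zero or a negative price should do, noticing problems that no existing assertion anticipates, and deciding which new checks are worth writing at all.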

Background of this case study:

In 2014, I moved to Australia when I was hired by my current employer, a health regulation agency under Australian National Law. The management team at this organisation had identified areas in both the product and the process that they wanted to improve. The managers were also keen to bring about positive change by enhancing the capabilities of the testing team.

At the time, the testing team was struggling to cope with stakeholder demands because of a lack of structure and organisation. Team members were juggling different priorities set by different stakeholders, and they were often failing to manage them effectively. There was a lot of documentation, including a Test Strategy, an Automation Test Framework and a Performance Test Strategy. There were templates for test planning, test estimation and test reporting, as well as for other processes. The team’s KPIs were defined on metrics such as test execution percentage, test coverage and number of test scripts.

What I just described may appear to some of you to be a properly defined process. So what was the problem? Quoting Gerry Weinberg:

A problem is a difference between things as desired and things as perceived.

For our testing team, the so-called testing process model described above had created the perception among the stakeholders that the team was well organised and capable of responding to the high rate of change. So while the testing team perceived a lack of support from management, the stakeholders expected the testers to deliver the outcome they wanted. This expectation was a problem in itself because it was, in fact, a desire that did not match what was actually being delivered. While testers were supposed to test faster, they were also expected to write test plans and test cases merely to tick all the boxes of the testing process model. Another problem was that, due to bugs in the product, the business users had the impression that the testing team had not tested the product adequately before it was released for user testing.

I felt that there was a need for change: a change in the stakeholders’ perceptions, expectations and desires; a change in the testing team’s communication with others; and a change in the overall testing process. The team needed mentoring and coaching to understand that what was being considered testing was not testing; it was checking. Almost everyone in the group was making the reification error of presuming that test cases were testing.

I arranged multiple internal coaching sessions (with supporting reading materials) and also held regular brainstorming sessions over several months to help me transform the team. However, it seemed to me that there was a greater need to change the stakeholders’ perception of testing (and checking) if they were to accept our new processes. And due to the separation of duties across various teams, I also needed the engagement of the broader management team so that they could help me implement good testing practices and mandate the change.

Convincing the Managers:

My experience is that many senior managers do not know much about testing. Some of them may not know much about other areas either, such as development or architecture. But that does not mean that they do not care about these teams. They do care, because they know that without these teams they will not be able to deliver the products or services that they are accountable for. Sometimes the managers’ attitude may appear indifferent or uncaring, but that may simply be due to a lack of communication.

I consider myself lucky because my manager is a reasonable person. He is a good listener and supports his teams. When I explained to him the challenges that the test team was facing, he agreed that there was a need to make changes. He suggested that I prepare a roadmap for achieving these improvements within the next twelve months.

I explained the distinction between testing and checking to him using some examples. I also explained why this distinction was important and why focusing on checking alone presented risks. I demonstrated that risk with the example of an important batch process that our team had just finished testing. The batch affected hundreds of thousands of records and was considered critical to the business. Bugs had evaded the checks, but when a team member ran an exploratory testing (ET) session, they found a critical bug that had the potential to derail the whole process and directly affect clients. By using the distinction between testing and checking, I was able to highlight the weakness of our processes in a way that he could understand.

In order to execute my intended improvements, he asked me to arrange a discussion with the CIO. That discussion was a brief one. After I explained the terms to him, the CIO understood their impact and our objective and agreed with my suggestions. Indeed, he welcomed the change.

The next challenge was to communicate this message to everyone in the IT and the Project Management teams.

Communicating the change to the wider audience:

My thinking about testing has been shaped and greatly influenced by Michael Bolton and James Bach, whose work and websites have been a great source of knowledge and information for me.

In this case, I was aware of a letter that Michael and Rob Sabourin had written some time ago to help one of our testing colleagues. While the definitions and ideas behind the distinction have changed slightly since the letter was first written, it worked perfectly in my context with some slight modifications. For example, in my message to our stakeholders, I added that they were important to us because our work affected them in some way. I also added a reference to an overview session on ET and Session Based Test Management (SBTM) given by Lee Hawkins, which some of the email recipients had attended. I was concerned that there might be questions about whether we were planning to abolish automated checking and human checking, so I included specific references to those as well, which were not in the original letter. You can read the original version here.

Here is the email message that I sent to Developers, Architects, Business Analysts, Project Managers, the Infrastructure team, Testers and the IT Management team:
—————
Hi there,

You must be wondering why you are receiving this message about “Testing vs. Checking”. You may think that you are not a tester and do not need to know how testers deliver their work. But from the Testing and I.T. teams’ perspective, you are an important stakeholder, and whatever we do impacts your work in some way.

I recently explained the difference between testing and checking to both Graeme (the CIO) and Con (my boss). I am delighted to tell you that they not only understood this difference but also asked me to share it with you. We believe that it demonstrates the value that the I.T. team is able to extend to you as an important stakeholder.

Testing is not checking. Checking is a process of confirmation, validation, and verification in which we compare an output to a predicted, expected result. Testing is something more than that: a process of exploration, discovery, investigation, learning, and analysis with the goal of gathering information about risks, vulnerabilities, and threats to the value of the product. Our automated scripts are currently very effective, yet without supplementing these checks with “brain-engaged” human testing, we run the risk of serious problems in the field impacting our customers, and the consequent bad press that follows such critical events (reputational risk).

At [the workplace], much of our “testing” has been focused on checking. This has served us fairly well, but there are many important reasons for broadening the focus of our current approach. While checking is very important, it is vulnerable to the “pesticide paradox”: just as bacteria develop resistance to antibiotics, software bugs can evade detection by existing tests (checks), whether those are executed once or repeated over and over. In order to reduce our vulnerability to field issues and critical customer incidents, we must supplement our existing emphasis on scripted tests (both manual and automated) with an active search for new problems and new risks.

There are several strong reasons for integrating exploratory approaches into our current development and testing practices:

     Scripted tests are perceived to be important for compliance with [regulatory] requirements. They are focused on being repeatable and defensible. Mere compliance is insufficient—we need our products to work (better).
     Scripted checks take time and effort to design and prepare, whether they are run by machines or by humans. We should focus on reducing preparation cost wherever possible and reallocating that effort to more valuable pursuits.
     Scripted checks take far more time and effort to execute when performed by a human than when performed by a machine. For scripted checks, machine execution is recommended over human execution, allowing more time for both human interaction with the product, and consequent observation and evaluation.
     Exploratory tests take advantage of the human capacity for recognizing new risks and problems. Exploratory testing is highly credible and accountable when done well by trained testers. The findings of exploratory tests are rich, risk-focused, and value-centered, revealing far more knowledge about the system than simple pass/fail results.

The quality of exploratory testing is based upon the skill set and the mindset of the individual tester. A few months ago I invited Dr. Lee Hawkins to give an overview of the structures and disciplines of excellent exploratory testing and Session Based Test Management (SBTM). My team has greatly benefitted from it.

What about checking and automation, then? Are we going to stop doing those?
No. Checking and automation still have their place. We will:
     Continue with checking, alongside more extensive testing that uses better approaches to exploration, learning and investigation
     Still use automation scripts to ‘check’ for regressions

Practical Findings:
We have been very successful in finding serious issues in our products with a fairly small test team using exploratory approaches. For example, we recently found critical issues in the last CI release, as well as in the [Mobility] projects, within just a couple of days of exploratory testing, covering a workload that had previously taken many days using scripted, test-case-driven testing.

We have also been successful in providing the requested coverage with less effort and without disrupting our other commitments. The typical approach would have required at least twice the effort (or close to it), which would have delayed the project’s subsequent release. Our experience with exploratory testing has demonstrated improved flexibility and adaptability in responding to rapid changes in priorities.

I will be glad to answer any questions that you may have regarding this.

Best Regards,
Rajesh Mathur
———————————-

Impact of this communication:

After this message had been sent, I began receiving responses from managers of other divisions appreciating the effort and the success the testing team had achieved in such a short span of time. They had seen the testing team transform from a process-heavy team into a dynamic, context-driven one. They now see our team practising ET, pairing and SBTM alongside scripted checking. They notice us writing our own automation frameworks. They also see the testing team as trusted advisors, because we provide information that assists and guides their decisions, and they know that this information is the result of a cognitive process and careful inspection rather than the process-driven checks they received previously.

The testing team now has a roadmap that guides it forward, and where required the team changes direction to align with the business objectives. Team members are self-motivated to learn and share their knowledge; they work collaboratively and have trust and mutual respect. They challenge each other on complex tasks and coach others where required. Each team member is responsible for their own work and exercises discretion and judgement. There are no secrets in the team, and other teams observe these positive behaviours. The stakeholders are impressed that the team finds and communicates important bugs faster.

Some managers said that they had not previously been aware of the difference between testing and checking. Others said that the message enlightened them. One of the responses I received was from a manager in IT Security who had little interaction with the testing team. His response was:

Rajesh,

Very informative email as always and congrats on the success.

Testing team has played a pivotal role in the releases and will continue to do.

It is great to see comms coming out of the team as well which has definitely raised the profile of the team and makes us all realise how important the testing team is and the value it brings to the organisation.

Looking forward to reading more success stories and informative material.
———————–

It is obvious from the responses I received that it is not hard to make stakeholders aware of what good testing is if you make an effort to explain and demonstrate it to them. Some people criticise, question or complain whenever there is an effort to refine testing terminology. There are two categories of such people: those who do not truly understand testing as a cognitive task (let’s call them blindfolded testers) and those who prefer to keep it obscure to serve their own ends (let’s call them the snake-oil sellers of the testing industry).

In my experience, the blindfolded testers represent many testers outside the Context-Driven Testing community. For them, automation is important because they believe that automation can test, even though it can only assist with checking. These people also believe that merely memorising a glossary of terms is more important than the cognitive process that guides exploration and investigation while testing a product. Such testers like to follow the testing process models, templates and documentation that some international standard prescribes, instead of performing intelligent testing that fits their context.

The snake-oil sellers are those who either sell certifications to the first category of people or employ them in their test factories. They encourage the standardisation of testing because standardisation is easier to sell to their gullible clients. Any discussion about cognition in testing unsettles them because they do not understand good testing. Beware of these snake-oil sellers: they will discourage you from applying the distinction between testing and checking, undermining the value of testing.

Michael said in his initial post about this subject, “I can guarantee that people won’t adopt this distinction across the board, nor will they do it overnight. But I encourage you to consider the distinction, and to make it explicit when you can.”

It may take a while to bring people on board when you do something different, but it is not an impossible task. This case study proves that.

References and Suggested readings: