For almost a year now, those who follow this blog have heard me talk about *THE BOOK*: when it will be ready, when it will be available, and who worked on it. This book is special in that it is an anthology; each essay can be read on its own or in the context of the rest of the book. As a contributor, I think it’s a great title and a timely one. The point is, I’m already excited about the book, its premise, and the way it all came together. But outside of all that… what does the book say?

Over the next few weeks, I hope I’ll be able to answer that, and to do so I’m going back to the BOOK CLUB format I used last year for “How We Test Software at Microsoft“. Note, I’m not going to do a full in-depth synopsis of each chapter (hey, that’s what the book is for 😉 ), but I will give my thoughts as they relate to each chapter and area. Each chapter will be given its own space and entry. This entry covers Appendix D, and it is the final entry in this series.

Appendix D: Cost of Starting up a Test Team by Anne-Marie Charrett

For some organizations, it’s entirely possible that you don’t need a test team. You may have a culture of ownership of quality and your development team may be doing a very good job at being testers. It’s possible that your customer support team may be fulfilling the roles of testers (and I myself can state that many really excellent testers came out of the tech support ranks or spent significant time doing technical support; it helps train them to be customer focused and look at problems from their perspective).

However, what if that’s not enough? What if your development team and your technical support team aren’t able to handle all of the testing needed? What if you have decided you would like to see the quality and performance of your application increase? Are you sure that creating a test team is the solution to your problem?
Some might want to see that processes are followed and that quality issues are addressed. That’s all good, but will adding a test team confirm that the company’s policies are followed? It might, but then again, it might just add another group that doesn’t communicate or use the process. Before testing can be called on to fix a problem, it’s really helpful to determine what the problem actually is.

A cost-effective test team is one that meets your organization’s needs. Many times testers are brought in to solve a problem, but what they are addressing isn’t the real problem. Instead, they are addressing a symptom that points to a bigger issue. Why are there quality issues? What’s really the cause of them? Why do some companies need a test team while others seem to do well without one? Testing is not a simple commodity; it can’t just be “plugged in” and left to run. It’s strongly influenced by the culture and beliefs of a given company. A test team taken from one company and dropped into another will not perform exactly the same (even with all the same people on it). The company itself shapes the test team to its value system over time.

Cem Kaner describes software testing as:

“An empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test”

The term “stakeholder” means anyone who cares about the quality of the product. Stakeholders may be in any department (sales and marketing especially). Having these people or entities in mind as you test will inform the testing, what you do, and how you do it.

It’s possible you see the value of a test team but don’t have the budget for one at present. It would still be valuable to work out what you would need in the way of testing resources; after weighing the potential benefits against that expense (remember, testing doesn’t directly make money), you may decide that, down the road as the company grows, there is a benefit to developing and growing an internal test team.

“Why do you want a test team?”

A common reason why companies want a test team is that they believe the tester will be the enforcer of software quality. That’s a common perception, and frankly, it’s a dangerous one. Michael Bolton, in his talk “Two Futures of Software Testing”, explains:

“Although testers are called the quality gatekeepers…
• they don’t have control over the schedule
• they don’t have control over the budget
• they don’t have control over staffing
• they don’t have control over product scope
• they don’t have control over market conditions or contractual obligations”

Testers cannot enforce quality; that’s not their mandate, and even when it is their official mandate, they cannot practically follow through on it. A test team can, however, *influence* quality by doing the following:

• Testers can find bugs and inform developers
• Testers can identify risks that threaten the value of the product
• Testers can highlight visibility and structure issues within the team (they just can’t fix them)
• Testers add to the overall knowledge of the system

Setting a realistic expectation for what the test team can and cannot do is essential to their success.

So you’ve decided to take the plunge and create a test team. What will its make-up be? Do you want an in-house test team? Do you want an independent contract team that is off-site? How large do you want your test team to be? As I’ve said many times, my test team at my current company is dynamic. It has one dedicated resource (i.e. me), and at times we can call on others to help the process, often other people in our company in different roles, and especially our technical support people. As a dedicated and solo tester, I often sit with the developers and get to see what they see, and understand as much as possible their environments and challenges, so that I can help them meet their quality objectives.

Another approach is to contract with a company that has an external testing lab. They are hired to do the testing and to report back on the status of a project. There are benefits to this. The external organization has the equipment, tools, and experience to handle a variety of testing challenges that an in-house team might not have. They can also be used when they are needed, and when they are not, they are not part of your permanent payroll. The disadvantage is that they may not have as much familiarity with your organization and your expectations as an embedded tester would. There is also the cost of time delays in getting results back, reporting them, and then following up based on the information provided. This can become significant if the external test team is half a world away.

One of the challenges any test team will face is the balance between manual and automated testing. A quote I’m fond of (paraphrased and not attributed, sorry) is that “the human mind is brilliant, articulate, inquisitive and slow. The computer is inherently stupid, lacking in any ability to think for itself, but it is very fast. Put together, the abilities of both are limitless”. Manual testing and automated testing (or my much preferred phrase, “computer aided testing”) need to go hand in hand. Testing done entirely by hand may yield good results, but it may be too slow to be practical. Fully automated testing is enormously expensive to implement, and taking the human out of the equation may cause you to miss more bugs than you would catch manually. The point is, your test team will need both.
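To make the “computer aided testing” idea a little more concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: `parse_price` stands in for any small unit of real application code, and `run_smoke_checks` stands in for the kind of fast, repeatable verification a machine can grind through in milliseconds, freeing the human tester to spend their time on exploratory work that machines can’t do.

```python
# Hypothetical example: a tiny automated smoke check. The function under
# test (parse_price) is a stand-in for any small piece of real application
# code; the checks are the sort of repetitive verification a human would
# otherwise redo by hand before every exploratory session.

def parse_price(text):
    """Convert a user-entered price string like '$1,299.99' to a float."""
    return float(text.replace("$", "").replace(",", ""))

def run_smoke_checks():
    """Run a fixed set of quick checks and return any failures.

    The computer runs these the same way every time, very fast;
    the human decides what the results mean and where to explore next.
    """
    cases = {
        "$1,299.99": 1299.99,
        "0.50": 0.5,
        "$7": 7.0,
    }
    failures = []
    for raw, expected in cases.items():
        actual = parse_price(raw)
        if actual != expected:
            failures.append((raw, expected, actual))
    return failures

if __name__ == "__main__":
    problems = run_smoke_checks()
    print("all clear" if not problems else f"failures: {problems}")
```

The division of labor here is the point: the script catches regressions cheaply on every run, while the human tester investigates the inputs nobody thought to put in the `cases` table.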

When does it make sense to build a test team? Ask yourself and the stakeholders the following question:

“Is my company willing to take the risk for shipping the product as is?”

A test team can identify risk, but they can’t prevent it. Developers will need to fix the code, and project managers will need to allocate time and resources to fix problems. All parties need to realize that testing is not a panacea; testers can communicate the state of a product or service, but it’s the development team that ultimately fixes it.