The Software Testing Club recently put out an eBook called “99 Things You Can Do to Become a Better Tester”. Some of them are really general and vague. Some of them are remarkably specific.
My goal for the next few weeks is to take the “99 Things” book and see if I can put my own personal spin on each of the suggestions, and make a personal workshop out of each one.
Suggestion #79: Respect programmers, designers, product owner and others involved.
Suggestion #80: Earn respect back by doing a great job and learn how to communicate your results. – Erik Brickarp
Respect and reputation are a lot like a savings account. It can take forever to build up a strong balance, yet we always seem to spend it down awfully quickly. Our work relationships likewise involve saving and spending a currency, but to borrow a phrase from Stephen R. Covey, that currency is “Emotional Capital”. That’s an odd-sounding phrase, but it’s appropriate. Instead of a savings account holding cash, we have a savings account holding credibility and reputation.
Both our reputation and our credibility are built on personal interactions with other people. They build up over time, and they are entirely emotional. They also make for a very imperfect account, with a higher beta than even the most volatile junk bond. We can be unduly rewarded, and we can be unduly punished, because reputation and credibility are based on emotion… but they don’t have to be. Taking things from the emotional to the logical typically requires data, which means we have to quantify, in some way, what our credibility actually is.
Workshop #79 & #80: Practice the art of making the qualifiable quantifiable.
One of the trickiest aspects of testing is the fact that our “product” is ephemeral, at least in the literal sense. Programmers have code they can point to, or an application that works well or doesn’t. Features that are delivered can be quantified. Software testers have a slightly trickier time of it. Those who create automated test cases do have something that can be quantified: they have x number of tests that run and give a pass/fail. In short, they can be measured. Quantifiable things can be measured, apportioned and graded.
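For contrast, here is a minimal sketch, assuming Python and pytest (my choice for the example; nothing in the suggestion names a tool), of why automated checks count so easily: each one boils down to a discrete pass or fail.

```python
# A minimal, hypothetical pytest example. Each test function yields a
# discrete pass/fail result, so a suite of them can be counted and graded.
def add(a, b):
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_handles_negatives():
    assert add(-2, 3) == 1
```

Run pytest against that file and you get “2 passed”: a number that drops straight into a report.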
Software testers deal much more with the qualifiable, and that’s harder to put into a number. A joke I told for years back when I was a musician was about the difference between an audio engineer and an audio producer:
Audio Engineer: “Let’s roll off 3dB from the 200Hz range, and let’s set the panning for the microphones on channels 7-12 to 90° left, 45° left, 30° left, 30° right, 45° right and 90° right.”
Audio Producer: “Make it more ‘blue’.”
That’s hyperbole, of course, but it’s not far from the mark. The joke centers on the fact that the audio engineer deals with the actual knobs, patch points and faders. It’s all nuts and bolts. They deal with the mechanics of getting from point A to point B and making it sound clean. The producer, however, is focused on the overall performance and the overall sound as the listener will receive it. The listener couldn’t care less about the roll-off of the microphones or the angles they are set at, but they will appreciate that the singer’s voice oscillates slightly and has a “ghostly” tone. Can an engineer quantify “ghostly”? Yeah. Can the average listener? No, not unless they themselves have done time as audio engineers or spent a lot of time in a recording studio.
Software testers produce a lot of artifacts in the process of our work, but if we are not careful, we can become slaves to the process. I well remember living in the era of ISO 9000 compliance, and having to produce a huge up-front test plan, to IEEE spec, that would record, in the finest detail, every test step I would take and every parameter I would need to consider. I spent a lot of time writing them. I spent several review cycles getting them approved… and to this day I lament how much time I lost doing all of that. Why? Because we all knew, really, that those tests would never be executed as they were written, and that most of the time, when we found something interesting, it was via an avenue that wasn’t even in the test plan.
Today, I am much more interested in actions that actually affect the end product, and in the effect those actions have on our stakeholders and customers. Those goals do not align with “write a voluminous number of test cases” or “make and walk through a list of scripted test cases and mark off completed tests”. I have found that process to be a waste of time and effort, and not helpful in finding the really interesting bugs.
Yes, I see you all saying “OK, that’s great, Michael, but what can we provide that is quantifiable in the absence of what you are suggesting?”
I’m glad you asked.
1. As the suggestion says, treat developers with respect; do not treat them as adversaries. Blur lines where you can. Pair with them, test with them, talk through code design with them. Ask them questions, and quickly document what you do in these sessions. In short, be credible by presence.
2. Practice doing structured test explorations. Using a mind-mapping tool or a note-taking app like Rapid Reporter, walk through your discoveries. Note things that are interesting, and highlight areas that you think might be bugs. Upload the reports and examine them with the team. Attach them to stories. Use them as supporting evidence for your bugs. Use a screen capture/recording tool to record test steps that you can upload and review. In other words, use as solid a qualitative process as you can, and use the artifacts it creates to provide your quantifiable statistics (a rough sketch of tallying such notes follows this list).
3. Focus on the stakeholders’ concerns. Understand the overall business value of what you are doing. Spend some time learning what the executives and shareholders value, and work to address those areas. Perform regular risk assessments, and get an understanding of which areas are genuinely the highest risk. Prioritize around those areas (see the risk-scoring sketch at the end of this post).
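To make item 2 concrete, here is a rough sketch of what “making the qualifiable quantifiable” can look like. The tagging convention (BUG:, ISSUE:, QUESTION:) and the sample session are my own inventions for the example; Rapid Reporter and similar tools have their own note formats.

```python
from collections import Counter

# Hypothetical tags a tester might prefix session-note lines with.
TAGS = ("BUG", "ISSUE", "QUESTION")

def tally_session(notes: str) -> Counter:
    """Count tagged lines in one exploratory session's notes."""
    counts = Counter()
    for line in notes.splitlines():
        for tag in TAGS:
            if line.lstrip().startswith(tag + ":"):
                counts[tag] += 1
    return counts

# A made-up session report, in the hypothetical tagged format.
sample = """\
Charter: explore the checkout flow with bad card numbers
BUG: 16-digit card number with spaces is rejected outright
ISSUE: error text overlaps the Submit button at 800px wide
QUESTION: should AmEx cards be accepted in this market?
BUG: double-clicking Submit charges the card twice
"""

print(tally_session(sample))  # Counter({'BUG': 2, 'ISSUE': 1, 'QUESTION': 1})
```

The script is trivial by design. The value is in the convention: if every session’s notes use the same lightweight tags, you can report “two bugs, one layout issue and one open question this session” instead of “I explored for a while”.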
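And for item 3, one common (if simplistic) way to turn a risk assessment into a priority order is to score each area for likelihood and impact and multiply. The feature areas and scores below are invented for illustration; the point is the ranking, not the numbers.

```python
# Hypothetical feature areas, each scored 1-5 for likelihood of failure
# and for business impact; risk score = likelihood * impact.
areas = [
    ("checkout/payments", 4, 5),
    ("account signup",    3, 4),
    ("report exports",    2, 2),
]

# Sort by risk score, highest first, to get a testing priority order.
for name, likelihood, impact in sorted(
        areas, key=lambda a: a[1] * a[2], reverse=True):
    print(f"{name}: risk score {likelihood * impact}")
```

Test from the top of that list down, and say so when you report your coverage. That, too, is a number you can stand behind.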