It sounds crazy (maybe even arrogant), but the greatest epiphany I had at the recent “Let’s Test” conference in Runö, Sweden came from my own keynote — while I was on stage.  

I thought I was taking a “Critical Look at Best Practices”.  Two days earlier, I had hosted a workshop with 16 colleagues brainstorming, refining, and scrutinizing so-called “best practices” to not only see how they could never be “best” without some context, but to see how they could go terribly wrong in context. After all, this was a conference devoted to context-driven thinking. I was about to share the results, and knew it would be fun to see the reaction of the crowd.

The promise I had made was this:

“You may have heard some software development activities referred to as ‘best practices’, as in ‘it’s a best practice to write detailed specifications before programming starts’ or ‘it’s a best practice to write a failing unit test and make sure it passes before handing off to Test’. In this keynote, Jon Bach talks about the assumptions, risks, and considerations your colleagues think you should weigh before using or recommending any so-called ‘best’ practice. He’ll share results of an all-day workshop he hosted at Let’s Test the day before this talk, where he collaborated with attendees to examine several different notions of ‘best practices.’”

So, armed with the details from the workshop, I delivered my keynote.

“Here are 64 different so-called ‘best practices’ we came up with,” I said, showing the list over several slides, making sure not to say anything and to linger on each slide so it could be read fully by the crowd.  

  • Learn how to tell a good testing story
  • Use a test management system
  • Carefully choose testing tools
  • Vary your test techniques
  • Bug advocacy: learn how to sell bugs
  • Work smarter, not harder
  • Make it clear to PM how much it costs to test or deliver proper software
  • Have Dev test it first before it goes to Test
  • Don’t spend time doing things a machine can do better
  • Make sure test coverage is approved by a stakeholder
  • Plan the test environment and data needs early
  • Document enough detail to save effort
  • Work as a team and communicate
  • Design code to be maintainable
  • Use exploratory testing
  • Test early
  • Be able to explain the value you add
  • Work with Developers to know who tested what
  • Work with Dev to build in testability and logging
  • Weak or unclear requirements will cause problems
  • Prioritize
  • Always add time to estimates
  • Be aware of your assumptions
  • Certify the testers
  • Do research — self education is important
  • Become part of the community
  • Create clear exit / entry / stop criteria
  • Give yourself more time
  • You can always find bugs in your software
  • Be aware (observation vs. reference)
  • Treat bugs as something positive
  • Be aware that metrics can be dangerous
  • Be aware of pitfalls when communicating metrics
  • Talk the same language (“test”, “integration”)
  • Don’t let Dev verify bugs
  • Use Session-Based Test Management
  • Test software
  • Check fixed bugs against new versions
  • Ask questions
  • Send testers to Per Scholas
  • Never trust developers
  • Talk to each other
  • Don’t plan too much; execute as well
  • Get feedback often
  • Write clear descriptions of bugs
  • Use daily standups
  • Automate
  • Use prototypes
  • Communicate with end users
  • Consult a variety of stakeholders
  • Get good at using your product
  • Be aware of your emotional responses
  • Don’t be afraid of failing
  • Gain and apply experience
  • Look out for the unexpected
  • Assume specs are incomplete
  • Expect that you will never have time enough
  • Focus and de-focus
  • Think critically
  • Use checklists
  • Perform smoke tests first
  • Use a bug-tracking system
  • Discuss issues with Developers first before reporting
  • Try to be effective and efficient

I was about to show the top ten according to the workshop attendees, then share what happened when I told them to find how each one of those ten could go wrong.

But before I could do that, my brother James spoke from one of the front tables.

“These aren’t practices,” he said.

James is known for arguing, but whatever he had in mind to say next, I knew I could meet the challenge.

All I had to do to win this argument with him was to point to my definition slide: “Practice: the actual application or use of an idea, belief, or method as opposed to theories about such application or use.”  

I could shut him down by expressing that since anything on the list could be used, applied, and practiced, it was… well… a practice!

In fact, I could even poke fun at the word “used”: while any of them could be usable (i.e., able to be used), they might not all be useful (i.e., valuable or effective). In those few milliseconds, I felt ready for him.

“These are all vague notions of practices,” he said.

And right there on stage, I realized what he meant. If I followed through on showing the definition slide, I’d actually be making his point for him.

It was all in the definition… the PRECISION of the definition, the SEMANTICS of the definition, my PERCEPTION of the definition, the MEANING the word had for me, the INFERENCES drawn by the definition and ASSUMPTIONS inherent in the definition, and maybe dozens of other lingual and lexical aspects to consider when even attempting to describe any notion of a practice.

Checkmate, James.  

Immediately, I knew I had another keynote. I could take these same vague notions of “practices” and turn them into a workshop on how many nuances there are when we communicate with each other. Or more importantly (and correctly), how we only think we’re communicating with each other. We often don’t take the time to drill down on what we really mean. It’s time-consuming and exhausting to undergo such scrutiny, and in my experience, few people outside a testing conference have the patience for it.

I’m reminded of Michael Bolton’s incredible library of blog entries, in which he has written more than once about language.

Here is a recent gem from March:

“To me, when people say “it works”, they really mean:

Some aspect
of some feature
or some function
to meet some requirement
to some degree
based on some theory
and based on some observation
that some agent made
under some conditions
or maybe more.”

Yes, exactly. It is exhausting to talk in this way, but these nuances matter.

It reminded me of a blog post my brother wrote about the marketing of “two scoops of raisins” for Raisin Bran cereal, and what “two scoops” could really mean.

I could summarize this epiphany (and likely will in a future talk), but for now I’ll let Michael do that for me here with this excerpt:

“Just as we need tests that are specific to a given context, we need terms that are that way too. So instead of focusing training on memorizing glossary entries, let’s teach testers more about the relationships between words and ideas. Let’s challenge each other to ask better questions about the language we’re using, and how it might be fooling us.”

Yes.  And if I may say so, “well said”. I know what you mean.