This month’s Lean Coffee was hosted by us at Linguamatics. Here are some brief, aggregated comments and questions on topics covered by the group I was in.

How important is exploratory testing?

  • When interviewing tester candidates, many have never heard of it.
  • Is exploratory testing a discrete thing? Is it something that you are always doing?
  • For one participant, exploratory testing is done in-house; test cases/regression testing are outsourced to China.
  • Some people are prohibited from doing it by the company they work for.
  • Surely everybody goes outside the test scripts?
  • Is what goes on in an all-hands “bug bash” exploratory testing? 
  • Exploratory testing is testing that only humans can do.

How do you deal with a flaky legacy automation suite?

  • The suite described was complex in terms of coverage and environment, and failures in a given run are hard to diagnose as product, infrastructure, or test suite issues.
  • “Kill it with fire!”
  • Do you know whether it covers important cases? (It does.)
  • Are you getting value for the effort expended? (Yes, so far, in terms of personal understanding of the product and infrastructure.)
  • Flaky suites are not just bad because they fail, and we naturally want the suites to “be green”
  • … flaky suites are bad because they destroy confidence in the test infrastructure. They have negative value.

What starting strategies do you have for testing?

  • Isn’t “now” always the best time to start?
  • But can you think of any scenarios in which “now” is not the best time to start? (We could.)
  • You have to think of the opportunity cost.
  • How well you know the thing under test already can be a factor.
  • You can start researching before there is a product to test.
  • Do you look back over previous test efforts to review whether testing started at an appropriate time or in an appropriate way? (Occasionally. Usually we just move on to the next business priority.)
  • Shift testing as far left as you can, as a general rule
  • … but in practice most people haven’t got very far left of some software being already made.
  • Getting into design meetings can be highly valuable
  • … because questions about ideas can provoke change more efficiently than having to change software later.
  • When you question ideas you may need to provide stronger arguments because you have less (or no) tangible evidence
  • … because there’s no product yet.
  • Challenging ideas can shut thinking down. (So use softer approaches: “What might happen if …” rather than “That will never work if …”)
  • Start testing by looking for the value proposition.
  • Value to whom?
  • Value to the customer, but also to other stakeholders
  • … then look to see what risks there might be to that value, and explore them.

Death to Bug Advocacy

  • Andrew wrote a blog post, Death to Bug Advocacy, which generated a lot of heat on Twitter this week.
  • The thrust is that testers should not be in the business of aggressively persuading decision makers to take certain decisions, and that, for him, doing so oversteps the mark.
  • Bug advocacy isn’t universally considered to be that, however. (See e.g. the BBST Bug Advocacy course.) 
  • Sometimes people in other roles are passionate too
  • … and two passionate debaters can help to provide perspectives for decision makers.
  • Product owners (and others on the business side) have a different perspective.
  • We’ve all seen the reverse of Andrew’s criticism: a product owner or other key stakeholder prioritising the issue they’ve just found. (“I found it, so it must be important.”)