The second MEWT peer conference took place on the 13th September and, yes, I’m biased, but still – it rocked! A big thank-you is in order to Equal Experts for sponsoring the event, Richard Bradshaw for sourcing the venue and taking care of most of the organisational detail, and Bill Matthews for facilitating on the day.
Great job, guys!
My day started pretty early with a drive to the Attenborough Nature Centre in Nottingham. As a co-organiser of the event, I had a good feeling about the idea of holding the conference in a nature reserve, since it seemed to provide exactly the right kind of atmosphere for something entirely voluntary, but with quite a high bar in terms of the professionalism and content we expected from the participants. I was not disappointed.

The Attenborough Nature Centre – our MEWT venue for the day
With a general buzz of activity as folk continued to arrive, people set about unpacking their various electronics and note-taking implements. By about 9am more or less everyone had arrived, so it was time to submit our talks for the day and vote on the order in which we wanted to hear them.
Once done, and after a quick bacon-roll break, we made a start.
Somehow, my talk [Simon Knight – @sjpknight] had made it to the top of the list. Probably something to do with the bombastic nature of the title – Lessons Learned in Root Cause Analysis, From Air Crash Investigations! The intent of the talk was simply to make the point that regression failures are often symptoms of some other issue, and that as testers we should feel comfortable carrying out an investigation into the underlying problem, then taking the necessary steps either to resolve it ourselves or to communicate persuasively with the people who matter so that it gets dealt with. My slides can be found here if you want to find out more.
After me came Timothy Munn, aka @NottsMunster, with Regression Testing – You Don’t Have to be Mad to Work Here, But it Helps! He made some great points about regression testing being an opportunity to improve and expand our knowledge about our products, but also that the idea of regression testing – doing the same thing over and over again – is basically a road to madness, per the well-known Einstein quote.
Richard Bradshaw was up next with his Regression Testing – Rant, presenting flip-chart models of how he sees regression testing working on agile teams currently, and how he thinks it should work. One of the main things Richard tried to convey was that when we actually carry out our regression testing, what we learn is likely to undermine the results of our previous testing; he suggested instead that the focus of our testing should be detecting and investigating change.
Up next was Stephen Blower, from up North where bugs are apparently made of steel. His talk, Myths & Illusions of Software Testing, focused on common misconceptions about what regression testing actually is. Stephen set the bar high with his research efforts, supplying quotes from his current team and project along with further answers from a recent, public Skype chat on the same topic.
The final pre-lunch talk was from Bill Matthews, who fed our minds (if not our rumbling stomachs) with his assertion that we should “test for regressions” instead of carrying out regression testing in his presentation, How Do You Solve a Problem Like Regression Testing?
Did I say thanks to Equal Experts already? At lunchtime, lashings of ginger beer, sandwiches and cake were enjoyed on the balcony, courtesy of our sponsors and our hosts, the Attenborough Nature Centre.
Well ok, not the ginger beer. But it was a great lunch!
After the break, MEWTing continued with Mohinder Khosla [@mpkhosla] and The Minefield Analogy, and then a first-time MEWT talk from Dan Casely [AKA @Fishbowler] on Passive Regression Testing, starting with the immortal words: why bother? “You’re going to break it anyway!”
The premise of Dan’s talk was that his organisation had to take a pragmatic, risk-based approach to regression testing, due to the insurmountable mountain of technical debt facing him when he arrived as the first tester on the scene. After some consideration, he arrived at the view that what needed to be done was… nothing.
Neil Studd [@neilstudd] was next up with his talk Down the Rabbit Hole, elaborating on adventures in regression testing for companies with red logos. You can find his slides here. Only the names have been changed, to protect the innocent…
The theme of adventures and experiences in regression testing continued in the next couple of talks, from Luke Barfield [@lukebarfield83] and Paul Berry [@pmberry2007]: Regression Risks and The Bane of the Software Tester respectively. Luke’s talk found consensus with his assertion that “customers don’t care about regressions. They just care if there’s a bug.”
Amen to that, brother.
The penultimate talk was delivered by first-time MEWTer and budding testing speaker Ranjit Shringarpure [@ranjitsh]. His investigation – Mathematical Models for Regression Testing: Would They Help (in Making Regression Testing Cost Effective) – was one of the standout talks of the day for me, and provided plenty of material for further investigation into how regression testing might be made cheaper and more effective.
[Edit] Ranjit’s slides can now be found here.
Finally we had Adam Knight [@adampknight] talk to us about why “lack of progression is a regression” in his presentation Progression Testing. Again this was something of an experience report, drawing a parallel between Adam’s evolving family and residential requirements and the way the Rainstor test architecture has necessarily evolved in capability and complexity alongside product and business growth. It hammered home the point that if our testing doesn’t evolve to meet the demands of the marketplace, then the quality of the products on which we work will inevitably suffer. A fitting end to the day; his slides can be found here.
I think it’s probably fair to say that a great time was had by all, with lots learnt in the process. Some of the main takeaways from MEWT for me personally were:
- Using Root Cause Analysis patterns to investigate and ideally resolve the problems causing regressions – treating the cause instead of the symptoms
- Seeing the regression test phase (if there is one) as an opportunity to improve the product on which we’re working
- Re-defining regression testing as “change detection” (see the sketch after this list)
- Using regression tests to “see if what you knew to be true previously has or has not changed” and to “measure changes to existing functionality that don’t fall into the scope of intended development”
- To test for regressions, rather than regression test – and to be aware that regression testing carries with it opportunity cost (i.e. chews up time that could be used for other things)
- Not doing regression testing at all – replacing it with dogfooding and other measures
- Researching methods of improving regression testing efficiency and reducing regression testing cost
- Evolving my test approach alongside the organisation I’m working with and the product I’m working on
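To make the “change detection” framing a little more concrete, here’s a minimal sketch in Python (pytest-style). The get_invoice_total() function, the order data and the snapshot path are all hypothetical, invented purely for illustration; the point is that a golden-master style check records a baseline and then flags any subsequent change for investigation, rather than asserting a fixed pass/fail expectation.

```python
import json
from pathlib import Path

# Hypothetical location for the stored "golden" baseline.
SNAPSHOT = Path("snapshots/invoice_total.json")

def get_invoice_total(order):
    # Hypothetical function under test.
    return sum(item["price"] * item["qty"] for item in order["items"])

def test_invoice_total_unchanged():
    order = {"items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}]}
    current = get_invoice_total(order)
    if not SNAPSHOT.exists():
        # First run: record the current behaviour as the baseline.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(current))
        return
    baseline = json.loads(SNAPSHOT.read_text())
    # A failure here means "something changed", not necessarily "something
    # broke" – the detected change is the trigger for investigation.
    assert current == baseline, f"Behaviour changed: {baseline} -> {current}"
```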
No doubt there were many more, and I look forward to updating this and other posts on the MEWT site with further stories as they come in. (Hint!)