The 4th Midlands Exploratory Workshop on Testing (#MEWT) took place on the 10th of October.

Speakers and abstracts in order of presentation:

Ash Winter – A Coaching Model for Model Recognition

As part of my role, I often coach testers through the early part of their career. In this context I have noted a pattern in the application and interpretation of models. They are generated internally through various stimuli (learning, influence of others, organisational culture) and then applied subconsciously for the most part, until there is sufficient external scrutiny to recognise them. To this end, I have created a model of questions to help testers to elevate their internal models to a conscious level and begin to articulate them.

To this end I hope to articulate at MEWT:

  • Presentation of the model of questions to determine internal models in use, without introducing models explicitly.
  • Use of Bloom's Taxonomy to visualise a coachee's modelling paradigm and the steps towards modelling consciously.
  • Practical examples of using this model to assist early career consulting testers to cope with new client information saturation.

Slides for Ash’s talk can be found here.

Duncan Nisbet – The Single Source of Truth is a lie

In some development circles, the automated test suite serves as the single source of truth for the behaviour of the software.

I too held this belief until very recently when it was challenged in the ISST Skype forum.
The conversation I had in that forum helped me to realise several (obvious in hindsight) cognitive biases I had succumbed to & traps I had fallen into.

This experience report will outline how I came to hold my beliefs about the single source of truth & how they are now on their way to being altered.

I think in models, but on several occasions it has been demonstrated to me that I haven’t thought critically about those models.

I’m hoping this report & the subsequent conversation will really cement in me the idea that I need to think more critically about the models I choose to hold in high regard.

I also hope that the report & conversation might have some impact on the other attendees of MEWT.

Slides for Duncan’s talk can be found here.

Richard Bradshaw – Sigh, It’s That Pyramid Again

Richard Bradshaw’s new Automation in Testing Pyramid

Earlier on in my career, I used to follow this pyramid, encouraging tests lower and lower down it. I was all over this model. When my understanding of automation began to improve, I started to struggle with the model more and more.

I want to explore why, and discuss with the group what a new model could look like.

Ard Kramer – Old models as an excuse?

In my current assignment at a major insurance company we are in a full transformation from waterfall to scrum. In both waterfall and scrum software development, the testers cling to traditional phases of testing (as a model), such as functional acceptance testing and user acceptance testing. What’s in a name, or better, what’s in a model?

The major question I have, and the challenge I am facing, is that I want my testers in the scrum teams to develop their own (shared?) model(s). The only distinction made is between testing and acceptance, more or less as a way to separate (real) testing from checking.

Most of my testers in the scrum teams (whom I want to send to RST) are of the opinion that they are doing a good job, while I try to convince them that the tests they’re doing can, and even must, be done much better. This also means that they should be able to make their own models of what needs to be tested. I am coaching them in thinking more visually. These visualizations should lead to more session-based testing and test management.

I have a preference for models which are mostly visualizations of the landscape of the applications in scope for a change. Besides these kinds of models, the testers are going to need heuristics to test the different applications, and because different teams are developing the same application I think they must develop common heuristics. The heuristics and models should be the starting point for a test approach for the user stories, sprints, or even the software releases we are working with.

In my presentation of this case I want to present the situation I am dealing with at the moment and some ideas I have to entice my testers into making (common?) models and heuristics. I am very curious whether we can have a discussion about what approaches other MEWT testers use when applying a model or heuristic as a way of thinking and improving themselves and their colleagues. I would also like to talk about how changes in general can be accomplished, so that testers can become better testers using models and heuristics, and how they can use them during their test sessions. So, a presentation with more questions than information, but maybe the information can lead to an interesting model 😉

Three major points:

  • How to make a major shift towards (independent) thinking in models and heuristics
  • How to make a tester better using models and heuristics
  • How can models help to visualise the value of a change that testers are testing?

Slides for Ard’s talk can be found here.

Geir Gulbrandsen – Discovering My Models

Not having a lot of experience in thinking about my mental models, and even less discussing them, it probably goes without saying that I don’t have an example of “how we implemented a certain model we thought would be useful”. Instead I had to go through my career step by step and see if I could recognise the different types of models that were in play and how these were helpful or useful… or not. How did we benefit from these models, what problems did they make us susceptible to, and how can I learn from this to develop my own models?

Main points:

  • Just because you think about your thinking models doesn’t mean everybody else does.
  • Don’t assume everybody understands the same model in the same way.
  • Develop (or adapt) your own models/frameworks in order to truly own them.

Slides for Geir’s talk can be found here.

Mike Loundes – Chunking

Format goes along the lines of:

  • Some definitions of chunking
  • My interpretation

Then a look at how I applied this thinking to a team I was managing: the state of play when I started, a simplified example, and what was done using chunking. Finally, some thoughts on considerations for implementing chunking at different levels.

Slides for Mike’s talk can be found here.

Del Dewar – The Mobile Traffic Meta/Model

Data-driven testing is a relatively common phenomenon in software testing. This talk is an experience-based report about a data-driven approach that gave birth to a highly complex testing meta-model for a software product that was tasked with monitoring signalling messages in mobile networks.

The talk will explain the constituent parts of the meta-model and what made it so complex and will touch upon:

  • The theoretical challenges involved in creating and evolving the meta-model and how this could provide value to the business.
  • The physical, procedural and collaborative aspects that wowed people to begin with, but quickly became a crutch and an impediment to the testing team and the wider business.
  • How, in retrospect, we could have done things differently given the experience we amassed throughout the lifespan of the model.

Slides for Del’s talk can be found here.

John Stevenson – Model fatigue and how to break it

John Stevenson’s SCAMPER Mnemonic Used to Disrupt Stale Testing Models

Many of us are familiar with the various testing models from RST, such as FEW HICCUPPS, SFDPOT and others. Within my organization we use these models extensively for forming coverage maps and creating testing missions. To do this we use mind maps.

However, over the past year or so I have noticed that templates have appeared with the same details and the same type of thinking, or in some cases no thinking. Many, including myself, have followed the path of least resistance and used what others have done without engaging our creativity. This has in some cases led to biases. To try to resolve this, I have over the last 9–10 months introduced some creativity models to try to overcome what I have come to call model fatigue.

This experience report looks at these creativity models and discusses their successes and failures.

Main points:

  • Models can become stale
  • We suffer model fatigue by not revisiting our models
  • Some creativity models can help (SCAMPER/ThinkPAK)

MEWT 4 was sponsored by the Association for Software Testing.