The CAST 2016 Tutorials are offered on August 8th, at the CAST 2016 venue in the Harbour Centre.

Tutorials are sold as part of a package with CAST admission. There are four tutorial packages offered:

  1. Michael Bolton: Testopsies — Dissecting Your Testing (Full Day)
  2. Fiona Charles: Learning to Say No – and – James Bach: What Catalyzes Testing? Testability! (Half Day Each)
  3. Robert Sabourin: Just-in-Time Software Testing (Full Day)
  4. Richard Bradshaw and Mark Winteringham: Let’s Take Automated Checking Beyond WebDriver (Full Day)

Descriptions follow. To register for CAST 2016, click here.

Michael Bolton: Testopsies — Dissecting Your Testing


Have you ever studied testing?  Too few testers, so it seems, have read a book on testing or its underlying principles. Even fewer have studied testing by deliberately observing and evaluating it directly and systematically.

According to the Oxford English Dictionary, an autopsy is an “examination to discover the cause of death or the extent of disease”, ultimately derived from the Greek word “autoptes”, meaning “eyewitness”.  Doctors perform autopsies to learn about the human body and to discover how things might have gone wrong.  A testopsy—to use a word coined by James Bach—is an examination of testing work, performed by watching a testing session in action and evaluating it with the goal of sharpening observation and analysis of testing work.  Testopsies can help in training, assessment, and developing testing skill for novices and experienced testers alike.

In this one-day workshop, led by Michael Bolton, participants will learn from each other by preparing and performing a series of testopsies. The process begins with creating a coding system, mapping out the activities that testers perform and the skills and tactics they apply. Using the coding system to guide observation, participants will watch each other as they test software, record what happens, and then discuss the activity and refine the coding system.  Join us as we dissect our testing!


Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn’t realize they could solve.  He is the co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure.  Michael has 25 years of experience testing, developing, managing, and writing about software. For the last 15 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company’s flagship products and directed project and testing teams both in-house and around the world.

Contact Michael at michael@developsense.com, on Twitter @michaelbolton, or through his Web site, http://www.developsense.com.

Fiona Charles: Learning to Say No

Although we’d like to be able to say “yes”, there are times when saying “no” serves our projects, our teammates and our stakeholders best.

Testers can be subject to many conflicting or unreasonable demands. A manager may insist we work on several projects simultaneously, making it impossible for us to do good work on any of them. There may be enormous pressure to work long hours, which will jeopardize our health and the quality of our testing. Sometimes we’re expected to commit to something that we don’t know how to do. We can even find ourselves pressured to misrepresent our findings about the quality of the software.

Paradoxically, learning to say a good “no” enhances our ability to say a meaningful “yes”. If we can say “no” appropriately to demands we know to be wrong for us or for the project, then we can also say “yes” with whole-hearted commitment.

Saying “no” is not easy for anyone, but it is a skill that we can learn.

This half-day tutorial will consist primarily of experiential exercises and debriefs—as many as we have time for. Some volunteer participants will get to practice saying “no” to unreasonable demands. Everyone will have opportunities to observe the interactions, ask questions, discuss, and draw their own conclusions.

This session is intended for testing practitioners and managers at all levels of experience.

Learning Objectives

  • Why “no” can be a more positive answer than “yes” in certain contexts
  • How to recognize and resist the many tactics people can use to get us to say “yes”
  • How to say “no” when that is the right answer for us—simply, and with conviction, equilibrium and respect


I consult, write, speak, and conduct interactive workshops on software test management and testing, to help clients in large and small organizations address their unique software testing opportunities and risks.

With more than 30 years’ experience on challenging software development and integration projects in diverse business domains, I know that no single set of standard processes or “best practices” will work everywhere.

Good testing requires thinking.

Test Consulting

I work with managers and teams to understand the specific business challenges of software implementation in your organization, and identify which of them can be addressed by software testing and test management. I recommend practices that will best cover your needs in your circumstances, with a practical roadmap for achievable improvements to your testing and test management.

I work with each client to:

  • Assess testing capability in relation to software, project and business risks
  • Hire, motivate and develop effective test managers, testers and test teams
  • Implement pragmatic test management and testing practices designed to address your organization’s unique requirements

James Bach: What Catalyzes Testing? Testability!


Whether you are Agile or Waterfall, you want testability. Whether you release periodically or continuously, you want testability. Testability means how easily a product can be tested. In other words, do bugs hide from you, lurking deep in the folds of your technology? Or do they run out and surrender when you come by, wearing bright reflective vests? Developers need to know this, not just to help the testing process, but to improve debugging and maintenance and to eliminate irreproducible bugs. And testers need to know this, in order to make the case to developers and management that testability creates speed and enables agility.

We will first consider the big picture: a revised version of the Agile Testing Quadrants that shows how testability is a core element. Then we will delve into the five major dimensions of testability: project-related, value-related, subjective, intrinsic, and epistemic. Finally, we will deal with how to assess the testability of a product.

Learning Objectives

This tutorial is designed to help you talk about testability with confidence, and to analyze and report on it as well.


I started in this business as a programmer. I like programming. But I find the problems of software quality analysis and improvement more interesting than those of software production. For me, there’s something very compelling about the question “How do I know my work is good?” Indeed, how do I know anything is good? What does good mean? That’s why I got into SQA, in 1987.

Today, I work with project teams and individual engineers to help them plan SQA, change control, and testing processes that allow them to understand and control the risks of product failure. I also assist in product risk analysis, test design, and in the design and implementation of computer-supported testing. Most of my experience is with market-driven Silicon Valley software companies like Apple Computer and Borland, so the techniques I’ve gathered and developed are designed for use under conditions of compressed schedules, high rates of change, component-based technology and poor specification.

Robert Sabourin: Just-in-Time Software Testing – Powerful Tools for Fast-Changing Projects and Priorities


  • Test projects that have few or no written requirements
  • Conduct testing “triage” to find important bugs more quickly
  • Learn to plan and schedule testing in a dynamic, unpredictable world
  • Practice session-based exploratory testing to find show-stopper bugs and change the way you test
  • Gain the confidence you need to succeed

Dealing with Software Project Turbulence

Turbulent development projects experience almost daily requirements changes, user interface modifications, and the continual integration of new functions, features, and technologies. Keep your testing efforts on track while reacting to changing priorities, technologies, and user needs. This highly interactive workshop offers a unique set of tools to help you cope with—and perhaps even flourish in—what may seem to be a totally chaotic environment. Practice dynamic test planning and scheduling, test idea development, bug tracking, reporting, test triage, exploratory testing, and much more. 

Getting Ready for Almost Anything They Can Throw at You

Be ready for just about anything that can happen in a software testing project, such as one for a complex, customer-facing mobile, Web, e-commerce, or embedded application. Learn to identify, organize, and prioritize your testing “ideas.” Respond effectively to business, technological, organizational, and cultural changes to your testing projects. Create workflows to schedule testing tasks dynamically and adapt the testing focus as priorities change. Decide on purpose what not to test—not just because the clock ran out!

Real Techniques Proven in Real Projects

Just-In-Time Testing (JIT) approaches are successfully applied to many types of software projects—commercial off-the-shelf applications, agile and iterative development environments, mission-critical business systems, and just about any application type. Real examples demonstrate how JIT testing either replaces or complements more traditional approaches. Examples are drawn from insurance, banking, telecommunications, medical, and other industries. The course is packed with interactive exercises in which students work together in small groups to apply JIT testing concepts.

Who Should Attend

This course is appropriate for anyone who works in fast-paced development environments, including test engineers, test managers, developers, QA engineers, and all software managers.


Robert Sabourin has more than thirty-four years of management experience, leading teams of software development professionals. A well-respected member of the software engineering community, Robert has managed, trained, mentored, and coached thousands of top professionals in the field. He frequently speaks at conferences and writes on software engineering, SQA, testing, management, and internationalization. The author of I am a Bug!, the popular software testing children’s book, Robert is an adjunct professor of Software Engineering at McGill University.

Richard Bradshaw and Mark Winteringham:
Let’s Take Automated Checking Beyond WebDriver

The testing community is fixated on Automated GUI Checking.

The majority of automators opt for Selenium WebDriver. Selenium is a fantastic project, and WebDriver has a superb API. If I wanted to automate user journeys in the browser, I would turn to WebDriver. Unfortunately, WebDriver seems to be the default tool for much (if not all) of the automated checking teams do, regardless of context and what it is they are actually trying to check.

This can be problematic for multiple reasons. Primarily, these checks tend to be slow and brittle (though this depends on the skill level of the person creating them). Another is that, because they operate at the browser level, you almost always end up checking a lot more than you intend to. They’re not focused and targeted on a specific piece of functionality or behaviour.

It doesn’t have to be this way, though. Tools below the GUI have come a long way in recent years. There are endless JavaScript libraries available for automated checking of JavaScript. With more teams adopting APIs, there has been an increase in tools available for automated API checking. There have also been huge advancements in visual checking tools, which teams can also take advantage of.
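To make the idea of checking below the GUI concrete, here is a minimal sketch of an API-level check in Python. The `/api/users` endpoint, the `StubApi` server, and the `check_user_endpoint` function are all hypothetical, invented for illustration; the sketch spins up a tiny local stand-in server so it is self-contained, whereas in a real project the check would target your application’s actual API, with no browser involved.

```python
# A sketch of a focused API-level check, assuming a hypothetical
# /api/users endpoint. A local stub server stands in for the real
# application so the example runs on its own.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubApi(BaseHTTPRequestHandler):
    """Stand-in for the application under test."""
    def do_GET(self):
        if self.path == "/api/users/42":
            body = json.dumps({"id": 42, "name": "Ada"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example's output quiet

def check_user_endpoint(base_url):
    # The check targets one specific behaviour: the API returns
    # the expected user record. No page rendering is involved.
    with urllib.request.urlopen(f"{base_url}/api/users/42") as resp:
        assert resp.status == 200
        data = json.loads(resp.read())
    assert data["name"] == "Ada"
    return data

server = HTTPServer(("127.0.0.1", 0), StubApi)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = check_user_endpoint(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
```

Because a check like this talks to one endpoint directly, it exercises only the behaviour it names, which is exactly the kind of focus the browser-level equivalent tends to lack.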

In this technical hands-on tutorial, Richard and Mark will introduce attendees to these new tools and frameworks. We will work as one big automation team to move existing GUI WebDriver checks further down or up the stack. After examining what the original intention of each check was, and now having more exposure to new tools, could we rewrite it at a different level in the stack? Then we will reflect on the impact this has had on our automated checking, including whether the checks are more targeted or faster than before.

The experiential aspect of this tutorial is that it’s up to you (the attendees) where checks move to, if they move at all. We will be working as one big team, so there will be lots of discussion and learning from peers. If you’re interested in advancing your automated checking, come along.

Learning Objectives

Attendees in this workshop will get exposure to many new frameworks, tools, and libraries. They will learn that these new tools aren’t any more difficult than WebDriver. They will also see that working with WebDriver all this time has armed them with more programming skill than they may have realised, which in turn can help them improve their automated checking tools, improve the team’s approach to testing, improve quality, and really help the business.

Attendees will be tasked with reviewing an existing suite of automated checks, attempting to understand what their original purpose was—a useful skill when moving to a new team or trying to improve existing checks. They will be given hints and tips on how to do this. They will take part in multiple discussions with fellow attendees and experts from the field.


Richard Bradshaw is an experienced tester, consultant and generally a friendly guy. He shares his passion for testing through consulting, training and giving presentations on a variety of topics related to testing. He is a fan of automation that supports testing. With over 10 years’ testing experience, he has a lot of insights into the world of testing and software development. Richard is a very active member of the testing community, the founder of multiple meetups in the UK, and also one of the founding members of the Midlands Exploratory Workshop on Testing (MEWT). Richard blogs at thefriendlytester.co.uk and tweets as @FriendlyTester. He works for Friendly Testing, a provider of consultancy and training within testing. He can often be found in the bar, with a beer in hand, discussing testing.

Mark Winteringham

Mark is a freelance technical tester, testing coach and international speaker, presenting workshops and talks on technical testing techniques. He has worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and public sector, working with various web, mobile and desktop technologies.

Mark is an expert in technical testing and test automation and is a passionate advocate of risk based automation and automation in testing practices, which he regularly blogs about at mwtestconsultancy.co.uk. He is also the co-founder of the Software Testing Clinic in London, a regular workshop for new and junior testers to receive free mentoring and lessons in software testing.

Mark also has a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with Mark on Twitter: @2bittester