Track Sessions

Track Sessions take place on July 16-17, 2012

Lightsabers, Time Machines, & Other Automation Heuristics

Adam Goucher
Once you become familiar with automation, you realize there are a number of patterns that frequently occur across projects. By recognizing the patterns and naming them, it is possible to catalog, discuss, and leverage them across a variety of projects. Not “Best Practices,” these heuristics can help identify why automation roll-outs are unfolding in a certain way and can point to possible pitfalls along the road.

Misleading Validations: Be Aware of Acceptance Criteria

Anand Ramdeo
Millions of test cases executed thousands of times mean nothing when a catastrophic defect surfaces and threatens the value of the product. In software testing, what is not tested is more important than what has been tested. However, with the continuous adoption of agile and automation, focus has shifted to what has been tested. In this paper, I will explore how the validations we seek are affected by fallacies and biases, and why green is not good enough. This paper will also explore the limitations of automation to highlight that testing is much more than meeting acceptance criteria, and why test automation or meeting acceptance criteria should not become the goal. This paper is based on an excellent book by Nassim Nicholas Taleb, “The Black Swan,” and relates the limitations of validation to software testing. It will highlight that green on acceptance tests gives us important information, but risk probably exists beyond acceptance criteria.
Anand Ramdeo is principal test consultant and founder of Atlantis Software. He has been working in software development and testing for around 12 years. He has played various roles such as individual contributor, manager, facilitator, trainer and consultant in many waterfall, iterative, transition-to-agile and agile projects. Anand is a fan of open source test automation tools such as Selenium-WebDriver. He hosts workshops and provides training on test automation tools in London. He considers himself an exploratory tester and has used many tools in various languages and platforms to understand their limitations and use them wisely. Anand is a keen learner and continues to learn the craft of software testing by reading books, going to various testing conferences and informal meet-ups, and by participating in Weekend and Weeknight Testing events. He and his wife, Komal Joshi, love to share their knowledge and experience on their blog and on Twitter @testinggeek

Embracing Continuous Deployment

Andrew Prentice
Continuous deployment, the practice of releasing features and fixes as soon as they are ready (as opposed to batching them into releases), disrupts many traditional and Agile testing practices. At the same time, continuous deployment’s mix of software as a service, DevOps, split and multivariate testing, real-time analytics, multiple release channels, dark features, rapid customer validation and flexible timeframes (to name but a few) offers considerable opportunities to improve and innovate on how software is tested. This presentation will outline the challenges faced, the costs and mistakes incurred, and the benefits and successes realised by Atlassian, developers of software development tools, after moving from a quarterly release cycle to continuous deployment, from the QA team’s perspective.
Since joining Atlassian, makers of collaboration and software development tools such as JIRA & Confluence, four years ago as their first QA manager, Andrew has created the Atlassian QA team from scratch and implemented and improved testing practices across the company, including creating Bonfire, a session based testing tool that is now sold commercially. Prior to Atlassian, Andrew spent ten years working in testing and project management roles at large IT services and telecommunication companies in Australia and the United Kingdom.

Workshop: Thought Provoking Leadership

Anna Royzman
You can’t tell others what to think, but there are ways to engage people in activities and practices where they will discover it for themselves. In this interactive workshop, we will explore hands-on techniques and exercises that put people into situations where they want to ask questions and become motivated to develop a better understanding of the context. I will go over principles on how to translate the workshop experience to your own situation, and will discuss my “lessons learned”: overcoming challenges with this approach, what works and what doesn’t, your role (you don’t have to be a manager!), where and how to find opportunities, and how to prepare yourself for success. This session is not about leading a horse to water; thirsty horses will find their way to water. This session will teach you how to make them want to drink.
A context-driven scholar, professional tester for over 10 years and a thought leader, Anna is always on a quest for quality. Her passion is in discovering new techniques and creating environments that allow people to be most effective at what they do. Anna made her speaking debut at CAST 2011, Emerging Topics track.

So ‘waterfalls’ cannot be ‘agile’? Who says?

Bart Broekman
I thought you had to choose: either follow a strict linear development process with procedures, standards and the lot (rigid, but controlled), or follow an agile iterative process (lean and creative, but unpredictable and less in control). I found out I was wrong. This presentation tells the story of my experiences in a large government organization that decided to implement a huge Improvement Programme. The programme started with defining procedures and standards. But in the end I spent most of my time and effort doing almost the opposite, trying to make people “forget about the rules and standards for now” and basically preaching the Agile Manifesto. The organization needed a controlled linear method. And the method needed an agile, critical-thinking attitude to make it work. Controlled linear or agile iterative? For me that is not an ‘exclusive or’; it is YES to both.
Bart Broekman has been a software test practitioner for 25 years, fulfilling test assignments ranging from test automation to managing large test projects. Since 2008 he has been an independent test consultant. He is co-author of books on test automation, testing embedded software and the test method TMap-Next®. Bart is a regular speaker at international conferences. In the Netherlands, new standards, methods and techniques are rapidly and frequently ‘invented’. However, the concrete and useful application of these is still a problem for many organizations. Bart is especially appreciated for his ability to explain complex test concepts and to implement them in practical situations. His personal motto is “Software Testing in Practice”.

The Testing Dead

Ben Kelly
I’ve worked in places where I’d get up in the morning and pray for the zombie apocalypse to have happened overnight, just so I didn’t have to go to work. I realized some time later that the apocalypse had already happened, and I’d been working with them: The Testing Dead. The testing dead are the slaves to process; the ones who stop and mill around aimlessly when there is no documentation to tell them how to act. Knowledge passes through them unmolested, like bacon through a bar mitzvah. This presentation takes a lighthearted look at what is actually a pretty serious problem in the field of testing, zombie testers, and what can be done about it.
Ben Kelly is a software tester living and working in Tokyo, Japan. He has done stints in various industries including Internet statistics, insurance and, most recently, online language learning. When he’s not agitating lively discussion on other people’s blogs, he writes sporadically at and is available on Twitter @benjaminkelly.

Workshop: Thinking visually about testing problems

Bill Matthews
In my experience, testers frequently use diagrams but seldom create their own to explore, analyse and explain the testing they are doing, despite needing to solve problems that lend themselves well to this type of approach. This participatory workshop will introduce the idea of Thinking Visually and the use of images to enable us to investigate and think critically about problems, and also to improve our communication about their solutions. We’ll do this through a series of practical exercises designed to explore different problem domains and how we might represent them. I will also present some of the pictures I have used to tackle issues such as:

  • Explaining a test strategy
  • Selecting test cases in a complex data domain
  • Explaining the link between testing and risk mitigation

No artistic talent required – if you can draw boxes, arrows and stick figures you can use this approach!

Bill Matthews has been involved in testing for over 17 years working in various fields covering banking, transport, government and scientific systems.  In 1998 he became a freelance test consultant and founded Target Testing. Bill now spends much of his time managing and delivering testing projects for clients, coaching their staff and encouraging companies to rethink how they approach testing. He has recently started blogging at

Workshop: Giving a tester feedback based on the session log

Carsten Feilberg
Learn to give feedback to your testers: praising them for their good testing, structure and thinking, and helping them learn from mistakes, see opportunities they missed, and let their skillset grow. Exploratory testers, especially new ones, need guidance in what and how much to put in their session logs. This exercise is designed to provide you, in fact all attendees, with a learning opportunity, by running through the drill of digesting a few session logs, then discussing in a group what feedback to give and how, and finally staging a role play, trying it out for real: giving feedback to a real human being. This conference is a perfect laboratory for such an exercise; a safe room where we can learn from our mistakes (and triumphs), and as an extra bonus we get to discuss it in a facilitated way afterwards.
Carsten Feilberg has been testing and managing testing for more than a decade, working on various projects covering the fields of insurance, pensions, public administration, retail and other back-office systems, as well as a couple of websites. With more than 18 years as a consultant in IT, his experience ranges from one-person do-it-all projects to being delivery and test manager on a 70+ system migration project involving almost 100 people. He is also a PSL graduate, blogger and presenter at conferences, and a strong advocate for context-driven testing. He lives and works in Denmark as a consultant at House of Test.

Foundations of Facilitation and the Tester’s Environment

Chris Blain & Ben Kelly
How do you, as a leader of testers, facilitate your team (and the people your team serves) in such a way as to allow them to solve their own problems? How do you create an environment in which testers have the freedom to add real value, especially when those that testers serve have a vested interest in keeping testing a low-paid, low-skill undertaking? This session aims to tackle these questions and, drawing on the experience of the two presenters as well as that of noted facilitators, offer possible solutions to the problems these questions reveal.
Chris Blain is a consultant with more than fifteen years of experience working in software development on projects ranging from embedded systems to web applications. He is a former board member of the Pacific Northwest Software Quality Conference, and recently started speaking at conferences such as CAST. You can follow Chris on Twitter as @chris_blain. Ben Kelly is a software tester living and working in Tokyo, Japan. He has done stints in various industries including Internet statistics, insurance and, most recently, online language learning. When he’s not agitating lively discussion on other people’s blogs, he writes sporadically at and is available on Twitter @benjaminkelly.

You are a scientist – Embracing the scientific method in software testing

Christin Wiedemann
A software tester is nothing less than a scientific researcher, using all his or her intelligence, imagination and creativity to gain empirical information about the software under test. In this talk, I will give a brief historical overview of the birth and use of the scientific method, drawing parallels to the evolution of testing, and try to show that good software testing adheres to the principles of the scientific method. I will also talk about how truly understanding and embracing the scientific method will make us better and more credible testers. The talk will focus on the philosophy of science, and how I think it can, and should, inspire a corresponding philosophy of testing. I will discuss empirical falsifiability and the differences between science and non-science, and how these translate to testing. I will also talk about how peer review is used in science and how it can add value to testing.
Changing careers after eleven years as an astroparticle physicist, Christin Wiedemann brings her logical and analytical problem-solving skills into the world of testing. Four years down the road, she is still eager to learn and to find new ways to test more efficiently. In her roles as tester, test lead, trainer and speaker, Christin uses her scientific background and pedagogic abilities to continually develop her own skills and those of others. Christin is constantly trying new approaches and is keen to share her experiences. In October 2011 Christin relocated from Stockholm, Sweden, to Vancouver, Canada, where she works for Professional Quality Assurance Ltd. (PQA).

Who are your customers? – Contextualizing testing with personae

Curtis Stuehrenberg
In his book “The Practice of Management”, Peter Drucker stated, “There is only one valid definition of a business purpose: to create a customer.” This is as true for your business as it is for any other, but who are your customers? Do you know? Are you designing your products for an abstract job title or an idealized marketing demographic to which you have no relation or context? Since your customers are people, wouldn’t it be better to treat them as such and design your products for actual people? Join me as we explore one technique for modeling your customers as real people: building out personae. Personae model specific customers as fleshed-out human beings which, if done correctly, can enhance and frame your work like nothing else I’ve encountered. We’ll have a short introduction and then jump right into practicing this powerful technique, so bring your thorniest user problems.
Curtis Stuehrenberg is a classically trained baritone and unsuccessful stage actor who stumbled into software testing when a friend pulled him, kicking and screaming, onto a project at Microsoft that would one day become Secure Business Server. The team wisely shunted him into the build and test lab where they assumed he would do the least harm. They were fortunately mistaken. Soon he was stalking the halls, causing fear and anger in developers and architects alike for having the effrontery to break “his” builds. Thirteen years later, he has mellowed somewhat and enjoys a challenging, rewarding, and at times successful career helping companies and teams walk the fine wire between craftsmanship and value. In what passes for his free time, he writes a little, leads the odd discussion, and argues passionately about subjects most of the world could care little about until things go wrong.

Helping Thinking Testers Think

Geordie Keitt
You know it when you see it, right? That spark of intelligence in a tester’s eye, the flash of big-picture thinking in an incisive question or critical find. Why can’t they think like that all the time? A key challenge for any manager is to engage your team at their highest cognitive level: to fan the flames of your test team’s brains. Elliott Jaques’ models of the relationship between cognitive processing and work performance provide a useful framework for thinking about how to max out your team’s intellectual horsepower. We will go over the four logical patterns of thinking and how they manifest and recur, how the levels of thought affect your testers’ experience of context, and what you can do to set their context so that their thinking (and yours) is as valuable as possible.
Geordie Keitt has been testing software full-time since 1995. He heard James Bach’s keynote speech at the ASQ conference in New Orleans in 2000 and responded viscerally to James’ call for best efforts, not best practices. He apprenticed under James and Jon Bach in 2001. He was one of the first testers to implement Context-Driven Testing and Session-Based Test Management in the federal government sector, testing spectrum auction software at the Federal Communications Commission in 2003-2004. For several years he has tested Critical Chain project management software for the good folks at ProChain Solutions, Inc.

Workshop: Turning Offshore Teams into Thinking Testers

Gerie Owen
Are you experiencing difficulty and frustration managing offshore project teams? Are project tasks taking longer to complete, and results not as expected, because your teams are executing without “thinking”? The unique aspects of offshore teams, such as multiple time zones, unclear expectations, and language and cultural differences, add to the challenge of creating a cohesive, “thinking” team. Developing “thinking” offshore teams involves creating an environment where testers feel empowered to go above and beyond the plan, question freely, and try different ways of testing without fear of failure. This workshop provides the tools for creating a “thinking” environment throughout the project. I’ll share experiences in managing offshore test teams and explore ways of providing explicit direction and making expectations clear while promoting a “thinking” environment. Learn how to choose the most effective means of communication based on the situation, how to motivate team members, and how to develop an innovative, flexible “thinking” team.
As a Quality Assurance Consultant, Gerie Owen specializes in developing and managing offshore test teams.  She has implemented the offshore model, developed, trained and mentored new teams from their inception.  Gerie manages large, complex projects involving multiple applications, coordinates test teams across multiple time zones and delivers high quality projects on time and within budget.  Gerie’s most successful project team wrote, automated and executed over 80,000 test cases for two suites of web applications, allowing only one defect to escape into production.  With over 25 years of experience, she enjoys training and mentoring new Quality Assurance leads.  Gerie has held quality assurance roles at Metlife, Inc. and The Computer Merchant, Ltd. and recently joined NSTAR Inc.

Collaboration without Chaos

Griffin Jones
Some software testing over-values the efficient mechanical execution of tasks and fidelity to the collective wisdom embodied in organizational processes. “Procedural over-specification” works, to a point. But is there a more effective model that leverages the knowledge and creativity of the people doing the task, yet exerts reliable control in a different way? Yes, “collaboration without chaos” is possible and worth the effort to attempt. Griffin shares and dissects his team’s testing control model, showing the prescriptive and discretionary parts, and how “orient” is its beating heart. Explore some archetypes of control and the values they are oriented on. Through a group exercise, practice the creation of a collaborative test. Learn from his experience how to apply and effectively explain this testing control model to management, customers, or regulators. Leave with a more sophisticated model of collaborative control that can make your testing more valuable.
The owner of Congruent Compliance, Griffin Jones provides consulting services on context-driven software testing and regulatory compliance to companies in regulated industries. Recently he was the director of quality and regulatory compliance at iCardiac Technologies, which provides core lab services for the pharmaceutical industry to evaluate the therapeutic efficacy or safety of potential new drugs. Griffin was responsible for all matters relating to quality and regulatory compliance for an FDA-compliant quality system, including frequently presenting verification and validation (testing) results to external regulatory auditors. Griffin was previously a product quality lead for eighteen years at Eastman Kodak. He can be reached at

Changing the context: How a bank changes its software development methodology

Huib Schoots
This is a story about testing at Rabobank International. It is an experience report on working in an environment that changes to agile and where testers are trying to implement context-driven testing.
One year ago all testers took the Rapid Software Testing class. This kicked off change within the whole group of testers. This presentation shares our experience on the struggle of implementing agile testing and working with things we learned from Rapid Software Testing. It will zoom in on things like using mind maps, test plans based on heuristics, dashboards, exploratory testing, etc. This talk describes what worked for us and what didn’t and how we made it work.
The domain has several completely different business lines, from large transactional systems to web-based systems. This talk tries to find the common factor in the changes across the different teams: what were the context factors that made stuff work (and not)?
Huib has 15 years of experience in IT and software testing. After studying Business Informatics he became a developer. Soon he discovered that development was not his cup of tea and that software testing is fun. Huib has experience in various roles such as tester, test coordinator, test manager, trainer and coach, but also in project management. He is currently team manager of testing at Rabobank International. He tries to share his passion for testing with others through coaching, training and giving presentations on different test subjects. Huib sees himself as a context-driven tester. He is curious, passionate and has (unsuccessfully) attempted to read everything ever published on software testing. He is a board member of TestNet, the association of testers in the Netherlands. He is a member of DEWT (Dutch Exploratory Workshop on Testing), a student at the Miagi-Do School of Software Testing, and maintains a blog on

Doctor Doctor, Give me the news

Iain McCowatt
When medical professionals determine treatments, they must weigh a variety of factors such as symptoms, history, allergies, etc. But how do they know what factors are relevant? Whether treatment X will kill or cure their patients? As context-driven testers, we frequently face the same problem: that of deciding whether a practice will help or harm our projects. Such decisions deserve careful consideration. In medicine, this is addressed through the use of indications and contraindications: heuristics that describe contexts within which a treatment is advisable or not. Through example and discussion, this session will explore the relevance of such an approach to the context-driven selection of testing practices:

  • Might active discussion about indications and contraindications improve our understanding of the contexts under which a practice is a good idea?
  • Would such a framework help testers to think through their selection decisions?
  • Or might it serve to dangerously limit their thinking?
Iain did his first testing in 1996 when, because he was available, he was volunteered to manage UAT for a call center management system. For some reason, testing didn’t stick the first time around. Someone must have been trying to tell him something, though: a couple of years later he got a sideways move into a system testing role. This time, he got “the bug”. Iain currently works with CGI in Atlantic Canada: by day as a program test manager in the banking industry, and by night blogging at

Mobile Testing: To Boldly Go…

Jean Ann Harrison
Testers ask how to test mobile device applications as they gravitate towards embedded testing. Hardware and firmware awareness is becoming necessary as mobile software becomes more complex. Testers are now required to design tests that incorporate hardware and firmware conditions to verify software behavior. Some areas to consider:

  • How is the software behavior affected as a device heats up while charging?
  • What effect does battery charge level have on wireless communications?
  • What software controls CPU speed based on device temperature?

Jean Ann will share real examples of thought processes for designing software tests on a mobile device. Learn how to formulate heuristic oracles which boldly go into a new world of software test design. Exercises in how to come up with test cases, as well as a couple of ninja tricks, will be included to help with efficiency in documenting these tests.
Jean Ann has been in the Software Testing and Quality Assurance field for over 12 years including 4 years working within a Regulatory Environment.   Her niche is system integration testing, specifically on mobile medical devices.   Jean Ann has worked in multi-tiered system environments involving client/server, web application, and standalone software applications.   Maintaining an active presence in the software testing community, Jean Ann has gained inspiration from many authors and practitioners.  She continues to combine her practical experiences with interacting on software quality and testing forums, and attending training classes and conferences.

Standards and Thinking: Do standards make rules to be broken or should you ignore them?

Jon Hagar
The ISO/IEEE 29119 software testing standard is now under development and will become a world industry standard for software testing. It covers test concepts, process, documentation and techniques. I find myself in a position of doing something that some will see as at odds with the context-driven test community. In the company I work for, standards are a fact of regulated and international business life. So I have a like-hate relationship with things like standards. This presentation, in a debate format, will examine that dichotomy, considering:

  • ISO/IEEE 29119 as a basis for making progress as a profession, providing definitions and ideals which must be subjected to “scientific process”.
  • Standards are “rear-looking” and not state of the art, but context-based ideals should be presented in them.
  • Even in the presence of standards, a thinking tester is needed.
  • Standards need to be “tolerable” and ethical.
Jon Hagar is a systems-software engineer and test consultant supporting software product integrity, verification, and validation, with a specialization in embedded/mobile software systems. Jon has worked in software engineering, particularly testing, for over thirty years. Embedded projects he has supported include: control systems (avionics and auto), spacecraft, mobile-smart devices, and ground systems (IT), as well as attack testing of new smart phones (class/book in work). He has managed and built an embedded test lab with test automation. He teaches classes at the professional and college level. Jon publishes regularly, with over 50 presentations and parts of 3 books on software testing, verification, validation, Agile, product integrity and assessment, systems engineering, and quality assurance. Jon is lead editor/author of the ISO 29119 software testing standard, a model-based test standard, and IEEE 1012 V&V plans.

Workshop: Brainstorming for Testers

Karen N. Johnson
Join this interactive workshop on brainstorming for testers. Learn exercises you can use for situations when you are the sole tester or otherwise working alone, as well as exercises to use when you are working in a team. This session particularly focuses on brainstorming to overcome these types of challenges:

1. How do you clear your mind when you are overloaded? Stressed? Learn ways to decompress and regain your focus.
2. How do I find inspiration when none of my work seems interesting? Thinking testers realize when they are in a rut and need to shake things up. Where do you find new ideas when you are stuck?
3. How do I host a tester’s brainstorming session? Brainstorming sessions sound fun, but being creative and being willing to share ideas takes an atmosphere of trust. How do you build an environment that enables trust and brainstorming?
Karen N. Johnson is a software test consultant. She is a frequent speaker at conferences. Karen is a contributing author to the book Beautiful Testing, published by O’Reilly. She has published numerous articles and blogs about her experiences with software testing. She is the co-founder of the WREST workshop; more information on WREST can be found at: Visit her website at:

Exploratory Performance:  peeling an onion or a dog chasing its tail?

Mark Tomlinson
When it comes to working in a performance team, real-time collaboration and dynamic adaptation of testing objectives and test activities are a fact of life. This results in a blurring of the titles of performance tester and performance engineer; a very natural outcome of these successful methods for conducting effective performance work. This lecture will cover the parallels between the interdisciplinary thinking required for exploratory and context-driven testing and the dominant approaches to performance testing. We will review enhancements to performance testing techniques and methods as learned from state-of-the-art exploratory testing ideology and practice. The lecture will also explore the barriers to driving performance testing into agile software development and design activities using exploratory test techniques. The lecture includes audience participation, humorous jokes and free take-home exercises.
Mark Tomlinson’s career began in 1992 with a comprehensive two-year test for a life-critical transportation system, a project which captured his aptitude for software testing and test automation. That first test project sought to prevent trains from running into each other — and Mark has metaphorically been preventing “train wrecks” for his customers for the past 20 years. For the majority of Mark’s career he has worked for companies in a strategic role and used the leading products for performance testing, profiling and measurement.  He worked for 6 years at Microsoft as a performance consultant and engineer in the Microsoft Services Labs, in the Enterprise Engineering Center, and in the SQL Server labs. His efforts focused on the performance of next-generation Microsoft products as part of a customer’s mission-critical operations. In 2008, as the LoadRunner Product Manager at Hewlett Packard, Mark delivered leading innovations for performance testing and engineering.

Workshop: Beyond Testing

Markus Gaertner
Software testers and test managers benefit greatly from skills that go well beyond testing skills alone. Soft skills for working in a team are necessary. Participants will have the opportunity to learn about, practice and observe three different soft skills that will help them in their day-to-day work. Systems thinking engages the tester in a holistic viewpoint; team building and empathy help testers and managers overcome their patterns of behavior; communication and transactional analysis help each of us communicate more clearly. In this workshop participants will learn about these three pillars of better collaboration by applying them. They will exchange their thoughts and war stories: when they applied systems thinking in the past, which difficult communications they faced, and how they can deal with them. The participants will work together in groups and exchange their experiences with each other. Markus will introduce the concepts and guide the participants to learn from each other.
Markus Gaertner works as an Agile tester, trainer, coach and consultant with it-agile GmbH, Hamburg, Germany. Markus founded the German Agile Testing and Exploratory workshop in 2011, is one of the founders of the European chapter in Weekend Testing, a black-belt instructor in the Miagi-Do school of Software Testing, contributes to the Agile Alliance FTT-Patterns writing community as well as the Software Craftsmanship movement. Markus regularly presents at Agile and testing conferences all over the globe, as well as dedicating himself to writing about testing, foremost in an Agile context.

Workshop: Enough Talk Already – Let’s Get Testing!

Nancy Kelln
Software testing conferences tend to talk a lot about testing, but do you ever wonder “Where is the testing?” This session will explore how testers test and allow participants to experience hands-on, exploratory testing with an embedded software product. (No computer, laptop or iPad required!) As we test this product we will explore:

  • What testing should be (and what it shouldn’t)
  • How testers perform value-add, amazing software testing
  • How to adapt your software testing approach to an exploratory testing mindset

Attendees can expect to experience:

  • Discussion about the testing process and what great testing looks like
  • Hands-on testing of a product (no computer required!)
  • Excitement and passion for the software testing craft
An independent consultant with 13 years of diverse IT experience, Nancy enjoys working with teams that are implementing or enhancing their testing practices and provides adaptive testing approaches to both Agile and traditional testing teams. She has coached test teams in various environments and facilitated numerous local and international workshops and presentations. A co-founder of POST, Nancy is an active member of the Calgary Software Quality Discussion Group, the Association for Software Testing, and the Software Test Professionals organization. Nancy and her family live in Airdrie, Alberta, Canada. Connect with Nancy on Twitter @nkelln.

Interviewing for Success: Field-Tested Techniques to Identify Thinking Testers

Paul Holland & John Hazel
Selecting the right hire to complement your test team is challenging and risky. A resume helps to filter skills and experience, but the interview is the real opportunity to assess critical thinking skills and fit. Unfortunately, too many interviews miss the mark, spot-checking resume content instead of assessing critical thinking and problem-solving abilities. This talk presents a suite of non-traditional, field-tested interview questions that give the interviewer insight into the thinking brain of potential testers. Assessing a candidate’s approach to specific problems brings into focus his or her affinity for dealing with a wide range of testing situations. Another benefit of this interviewing approach is the aggregate experiential data; the responses provide a taxonomy for how the applicant thinks (“digger”, “analyzer”, etc.). The interviewer can then determine whether the applicant’s “style” would complement that of the team and whether that style suits the job-specific challenges.
Paul Holland is a test manager at Alcatel-Lucent. He has 17 years of experience in software testing, focusing on automation, embedded systems, exploratory testing and improving testing techniques. He led the creation and worldwide deployment of a new automation environment for the Access division of Alcatel-Lucent. Paul was on the Board of Directors of the Association for Software Testing for 3 years and on their Executive Committee for 2 years. Paul has been consulting and delivering training at Alcatel-Lucent sites on three continents for the past 5 years. In addition, he has delivered training to other large companies such as HP, RIM and General Dynamics. He has facilitated many software testing peer workshops hosted by many different companies, including Microsoft and Google.

John Hazel leads the Customer Focused Test team for Alcatel-Lucent’s Wireline Access portfolio. With over 20 years in telecommunications hardware and software development, he has spent the last 12 years in system testing and management. His focus is on working with test organizations to adapt existing practices toward a value-added, risk-based approach regardless of the development environment. Most recently he led the transformation of the ALU Access team from requirements-oriented product test to customer-focused system test using exploratory techniques and direct engagement with the end customer. John is also an Executive Member and Chair of the Education Outreach Committee of the Professional Engineers Ontario Ottawa Chapter, partnering with schools and other organizations to unlock the inner scientific curiosity and creativity of elementary and university students.

Moneyball and the Science of Building Great Testing Teams

Peter Varhol
Moneyball is about baseball. But it’s also about breaking down accepted preconceptions and finding new ways to look at individual skills and how they mesh as a team. Sometimes the characteristics that we believe the team needs aren’t all that important in assessing and improving quality. Moneyball is also about people deceiving themselves, believing something to be true because they think they experienced it. Some of the team’s accepted practices may have less of an impact on quality than we would like. This presentation examines how to use data to tell the right story about our state of quality and our success in shipping high-quality applications. It examines whether our preconceptions are supported by facts, and identifies characteristics for building a high-performance testing team. It applies the Moneyball approach to testing and quality to give teams a new way to evaluate capabilities and software to deliver the highest quality possible.
Peter Varhol is a well-known writer and speaker on software and technology topics, having authored dozens of articles and spoken at a number of industry conferences and webcasts. He has advanced degrees in computer science, applied mathematics, and psychology, and is currently solutions evangelist at Seapine Software. His past roles include technology journalist, software product manager, software developer and tester, and university professor.

Right vs. Right: Ethical Issues for Software Testers

Scott Allman
Of course you know Right from Wrong. You would never think of faking a test report. But what about Right vs. Right? We will look at a host of ethical issues facing software testers and develop a simple framework to help us when our moral compass is all atwitter. Testers are by their very nature decision makers. They must decide among alternatives, often without realizing either their options or the process by which they make their choices. Even though testers make choices, often they cannot formulate the principles guiding those choices, nor are they prepared to defend them. Our ethical dilemmas arise because we are trusted to create and communicate truthful information. But all too often we find ourselves uneasy. Is the problem technical, political, legal or ethical? We will work through common ethical issues that arise in software testing.
Scott Allman’s daily work as a QA/Test manager inspires his writings and presentations about software testing. A software developer since the late 1960s, he has a career spanning universities, startups, aerospace, consulting, and big corporations, working on four continents. He has a BA and an MA in philosophy. He is a long-time member of SQuAD, Software Quality Assurance of Denver, Colorado, USA.

Developers Exploratory Testing – Raising the Bar

Sigge Birgisson
In our company, it is common practice to perform Developers Exploratory Testing (DET) sessions. The cool thing is that this way of performing higher-level testing has actually become accepted by our developers, and they really enjoy it. In my current work of developing our organization-wide practices for quality, I have taken a deep dive into how DET is carried out on a regular basis. What I have seen is that DET is accepted and acknowledged as a valuable practice, but it is not really carried out to its full potential. There are many details and aspects of it to work on, especially regarding reporting and follow-up. This talk gathers my learnings from coaching many of our different development teams in their DET sessions. I will describe the basics of DET briefly and then dig deeper into certain aspects where more or less coaching is needed. I will talk about the involvement of the whole team testing together, which gives a lot of value back to the project, such as findings revealing the need for discussions in certain areas. I will also talk about the common understanding of current quality status gained not only by the development team but also by stakeholders who are closely involved in the test sessions.
Sigge Birgisson is a software testing consultant at Jayway. He is a dedicated tester with very strong feelings for the Agile values and principles, always keeping the user in mind when carefully testing a product without wasting resources on unnecessary things. Sigge also believes in effective communication as one of the strongest tools in the tester’s toolbox, used in all aspects of testing the product at hand. Sigge has been involved in many different types of projects, mostly within an Agile setting, with close cooperation with developers as a key to success. As a speaker, Sigge has held several presentations and facilitated workshops internally at Jayway, as well as presentations on software testing for students at the university. He also attended SWET2 and SWET3. To keep up with new testing practices, Sigge is an active blogger and follows many discussions on software testing and Agile practices on Twitter.

Change the Way You Approach Change

Tony Bruce
We all have things we want to change, whether at work or in our personal lives, and change is hard, in some cases seemingly impossible. This will be a chance for people to get together and discuss change, looking at the psychology behind it and focusing on case studies, research and personal experiences. Using the framework of ‘Direct the rider, motivate the elephant, and shape the path’, we’ll discuss its general use, which we’ve all experienced and most likely not realised, and look at how we can utilise it in our own lives. Breaking down the framework, we’ll look at aspects such as:

  • Finding the bright spots.
  • Shrinking the change.
  • Tweaking the environment.

We’ll also look at why some people object to good ideas and learn to recognise how we might be able to overcome their objections.

What do you want to change? Let’s get started.

Tony Bruce is a professional, experienced tester who is constantly learning and teaching. He is based in London and has worked in industries ranging from media to finance, in teams of various kinds and sizes. He believes the testing community is a very friendly and encouraging one and wants to do anything he can to help keep building it. He has an accent which is 1 part Aussie, 1 part English and 1 part American.

Testing Deliberately

Wade Wachs
Henry David Thoreau penned, “The mass of men lead lives of quiet desperation.” Desperate testers can be seen doing very peculiar tasks such as seeking certifications, creating mountains of testing documentation, or performing mindless checking. Thoreau continues, “I went to the woods because I wished to live deliberately… I wanted to live deep and suck out all the marrow of life.” As we test deliberately, we too can ‘suck the marrow’ from our testing to provide as much value as possible to our employers, while finding an escape from the quiet desperation many testers face. Join Wade as he shares a story of a desperate testing team that was able to start testing deliberately through the use of two key principles: Expectation Management and Test Coaching.
Wade Wachs has been officially labeled as a tester since 2009 when he transitioned from 6 years in various support roles.  Over the last few years Wade has helped two different testing teams implement exploratory testing with great success.  Wade is also active on Twitter and writes on his blog when he gets the time.

Workshop: Thinking About Testing as a Service

Lynn McKee
Have you ever been frustrated with how the role of testing is viewed by your project team? Do you struggle to understand the project team’s expectations of your testing? Do you find yourself feeling that your testing could provide greater value to the project? Tackling these differences often begins with thinking about the service you would like your testing to provide, in contrast with the expectations of your stakeholders. How can you bridge that gap and align perspectives? This workshop will:

  • Discuss how to explore the needs and expectations of your stakeholders;
  • Review common misconceptions about the value of testing and why stakeholders are stuck believing those expectations;
  • Share ideas on how to assess context and determine the services your testing will provide;
  • Examine influencing project team perspectives and expectations through advocacy and relationship management.
Lynn McKee is an independent consultant with 17 years of experience in the software industry and a passion for helping organizations, teams and individuals deliver valuable software. Lynn is an advocate of the software quality management practices espoused by Jerry Weinberg and provides consulting on software management, leadership and testing. Lynn is active within the software testing community, speaking at conferences, writing articles, and contributing to blogs and forums. She is also a co-founder of Weekend Testers Americas and the Calgary Perspectives on Software Testing Workshop. You can reach Lynn online.

Validity & Software Metrics

Nawwar Kabbani & Cem Kaner
This talk is largely tutorial in nature, focused on the validity of software-related metrics. We will treat software metrics as human performance measures, ask what makes a measure valid, and answer with an inventory of many types of validity (construct validity, predictive validity, internal validity, face validity, etc.). We are carrying concepts over from the social science literature and will explicitly apply them to software metrics (something we haven’t seen done explicitly before). The goal of this talk is to make these concepts more understandable to the software quality community, in order to increase the frequency with which they are discussed and applied.
We will also spend a little time on some other metrics-related topics: It is easy to say that measures that are invalid are too dangerous to use and therefore we should not use software metrics. But rejecting the idea of software-related measurement is like rejecting the idea of software test automation. The rejection is so impractical that it damages (and should damage) the credibility of the rejecter. People will rely on metrics whether they are valid or not. Executives will demand concise summary information about cost, timeliness, staff productivity, product value (etc.) and managers will give it to them. So what can we do to help those managers give better summaries to the executives? Improving the validity of our measures is a slow process. While we work on that, we have the old-standby tools for the interim: balanced scorecards, scored inventories of tasks or attributes, error bars, etc. These are hardly perfect, but they have some value.
Nawwar Kabbani is a Ph.D. student in Computer Science at Florida Institute of Technology. Nawwar’s research interests include testing financial models and financial software, SOA testing, security testing, software metrics, and test automation. He is a Fulbright scholar. He holds an M.Sc. degree in software engineering from Florida Tech, a Master in Informatics from Institut National des Sciences Appliquées (INSA) de Lyon, France, and a B.Eng. in software engineering from the University of Aleppo, Syria.

Cem Kaner is a Professor of Software Engineering at the Florida Institute of Technology. He is the senior author of books on software testing and software consumer protection. His current interests are in software engineering education, theory and practice of measurement, high-volume test automation, and research methods in quantitative finance.

Workshop: Exploring the Dynamics of Describing

Henrik Andersson
Some say that a picture says more than a thousand words, but does it take a thousand words to describe a picture? Testers use descriptions as input: requirements, design documentation, user stories, code. We also describe our findings and observations to our stakeholders: bugs, the status of a system, strategies and tactics.
This workshop explores the dynamics of describing and of receiving descriptions. We start with a simulation that we later build a discussion around. You will be divided into smaller teams; each team consists of two groups. One group receives an assignment that they describe in writing for the other group to carry out. From the outcome you have the possibility to iterate and improve. The following debrief focuses on the interactions between groups, the constraints of the written word, and the different ways we express and interpret what we read.
This session will have a 50/50 ratio between exercise and debrief.
Henrik Andersson is the founder of House of Test, a context-driven testing consultancy and outsourcing company based in Sweden, Denmark and China. He provides leadership and management consulting in the fields of testing and Agile development. He tests, coaches, consults, speaks, writes, manages and thinks about software testing and problem solving.
Henrik has presented at the past three CASTs. He is a PSL graduate and an AYE attendee.
Twitter: @henkeandersson