WHET 4 (2007) in Seattle, WA
Focusing topic: Boundary Testing
- Henrik Andersson
- Ross Collard
- Tim Coulter
- James Bach
- Jon Bach
- Scott Barber
- David Gilbert
- Dawn Haynes
- Doug Hoffman
- Paul Holland
- Karen Johnson
- Cem Kaner
- Michael Kelly
- Sam Kalman
- Rob Sabourin
- Keith Stobie
Call For Participants
Workshop on Heuristic & Exploratory Techniques (WHET 3)
May 19-21, 2006
There is no charge to attend this workshop. Participation will be limited to 12 (at most, 20) people, selected on the basis of applications to attend. See the notes on HOW TO APPLY below.
Co-Hosts: Cem Kaner & James Bach
Facilitator: Scott Barber
Twenty-three years ago, Kaner coined the phrase “exploratory testing” as a contrast to traditional, scripted software testing. Since then, the approach has been alternately praised and vilified. Competing mythologies have developed about the risks, benefits (even the morality) of exploration. And — some people have gotten pretty good at it.
Over the years, the concept has evolved and been refined. We currently define software testing as a process of technical investigation, an empirical study of the product under test with the goal of exposing quality-related information about it. (Validation and verification both fit within this definition.) We define EXPLORATORY testing as an ongoing interaction of three activities: learning, test design, and test execution. Throughout the testing project, the explorer researches the product, its stakeholders and their objectives, its market, competition, capabilities, platform, the products it interacts with, and risks associated with any of these. Based on this information, the explorer designs tests, runs them, and learns more about the product and how it should be tested.
Notice that what is different between exploratory and more traditional scripted testing is not the collection of techniques used. It is the cognitive involvement of the tester. In a scripted testing situation, the tester runs a test that applies a test technique because a test planner previously decided that this was the right type of test and this was the appropriate test data. In an exploratory situation, the tester is the test planner. There might be a testing strategist, a coach who guides the tester, but the tester picks the technique to use now based on her assessment at this time of what will most likely yield the most valuable information. The explorer might assess risks and develop a plan in advance, might prepare data in advance, but at every moment of testing, the explorer is responsible for deciding whether the plan for today’s work is still the right plan, whether and how to use the data, what new information is needed, and what is the best way to get it.
WORKING WITHIN THE SCOPE OF THIS DEFINITION, we think it’s time for a task analysis. What do skilled exploratory testers do? How can testers get better at these tasks?
Participants in this meeting will be expected to come prepared, not just with ideas and experience reports about exploratory work, but also with some background reading and thinking about task analysis. We specifically recommend Jonassen, Tessmer & Hannum’s TASK ANALYSIS METHODS FOR INSTRUCTIONAL DESIGN.
To achieve common ground for this meeting, we ask that all participants read Jonassen et al. before coming to the meeting, and write some notes, ready to be shared with the other participants, that apply ideas from the book to testing or that discuss how the analyses described in this book can be applied to improve our understanding of testing. If you are not willing to commit to reading this book and thinking about its applications before coming to the meeting, please do not apply to come.
There are many other interesting and relevant books that might give you related ideas or help you understand some of the ideas in Jonassen et al. A few examples are Gause & Weinberg’s EXPLORING REQUIREMENTS: QUALITY BEFORE DESIGN, Schraagen, Chipman & Shalin’s COGNITIVE TASK ANALYSIS, Hackos & Redish’s USER AND TASK ANALYSIS FOR INTERFACE DESIGN, Cooper & Reimann’s ABOUT FACE 2.0: THE ESSENTIALS OF INTERACTION DESIGN, Annett & Stanton’s TASK ANALYSIS, and Carroll’s SCENARIO-BASED DESIGN: ENVISIONING WORK AND TECHNOLOGY IN SYSTEM DEVELOPMENT. Familiarity with some of these books, especially familiarity accompanied by thinking about how what’s in the book could help us understand how to examine what testers do, what challenges testers face, what skills testers develop, what knowledge testers seek (etc.), will be useful in the meeting. But we will run the meeting on the assumption that you have read Jonassen.
The meeting will include some brainstorming sessions and experience reports describing how tasks get done. We will probably do some small group activities, such as interviews to bring out the details and texture of a particular experience, skill, or training experience. We will probably also spend some time in discussion of the nature of task analysis, but not more than half a day of the 2.5-day agenda.
This is a meeting in the LAWST (Los Altos Workshops in Software Testing) tradition. Discussions will be facilitated, the sequence of presentations will evolve as the meeting progresses, our emphasis will be on application, and we will particularly value detailed reports of actual experiences. Presentations are subject to discussion, which might be very brief or very long. A presenter who captures the imagination of the meeting in a 45-minute presentation might face as much as a day of questions, arguments, counter-examples, supporting examples and demonstrations, and other comments on the ideas raised in that presentation. Discussions rarely last this long, but we don’t shift topic while there is still strong group energy around the current topic. We would rather spend enough time on a few things to learn their lessons well than cover a broad agenda at the expense of useful depth.
INTELLECTUAL PROPERTY AGREEMENT
We expect to be able to share the work developed for and in this meeting. Material that we create at the meeting or that was created in preparation for the meeting will be reusable by all participants in the meeting.
We recommend the Jameson Inn in Palm Bay. Their prices are reasonable, the rooms are OK, and the Internet connections are good. Melbourne’s Courtyard Marriott is also conveniently located.
The meeting itself will probably be at Florida Tech (we are still making final arrangements).
Friday, May 19, we will officially start at 2 p.m. There are some strong sessions at the STAR conference in Orlando Friday morning, and we chose not to conflict with these. Participants who are not coming to STAR are welcome to hang out Friday morning, perhaps at Kaner’s house, perhaps at one of the local coffee shops. We will work Friday from 2 to 7.
Saturday and Sunday, we will start informally at 8 (breakfast and chat), calling the meeting to order promptly at 9. We will finish Saturday at 6, Sunday at 4 (which allows time to get to the nearby Melbourne Airport for the 6 p.m. flight to Delta’s Atlanta hub).
Participants and guest(s) are welcome to join us for (pay-for-your-own) dinner on Friday, Saturday, and Sunday nights. We will supply lunch on Saturday and Sunday.
HOW TO APPLY
We cannot accept more than 20 people and will probably be happiest with about 12.
There is no financial charge to attend. The way you pay for your attendance is by offering valuable experiences and insights at the meeting, in a collegial and supportive way.
If you want to attend the meeting, send an electronic message to Cem Kaner <firstname.lastname@example.org> and James Bach (James@satisfice.com) that briefly describes your software testing background, your experience with or knowledge of exploratory testing and of task analysis, why you want to attend this meeting, and what you think you can offer to it. Please also explicitly promise that you will read Jonassen et al. and prepare notes that apply the book in preparation for the meeting.
We would like to balance attendance among five groups:
– Consultants and senior thinkers in the field
– Educators (academic or commercial trainers) who have experience in teaching complex cognitive skills
– Analysts (business analysts, requirements analysts, people who have done task analyses and relied on what they got)
– Test managers
– Individual testers
Some groups will probably be under-represented, but we will apply a bias toward diversity in our selection.
When you apply, please tell us in your note which group(s) we should see you as belonging to (and perhaps tell us that you really belong to a sixth group that we forgot to make explicit).
Please send in your application, no more than one or a few pages of text (but you can link to your websites or published papers or other published material), by April 15.