CASTx17 – Tutorials


Tutorial: Testing APIs from Imagination to Implementation

You have heard of Web services and APIs. You know how to use Postman. You have even created a few API tests. But throughout your testing, do you consider how API design affects usability, extensibility, performance, and other risk areas of your application? Or do you and the business leave it to those more technical to decide?

In this tutorial, we will discuss how to analyse and define APIs with both technical and business-facing team members. You will also be introduced to, and trial, a set of tools and skills to help test APIs from the requirements phase through to implementation. As a group, we will explore:

  • Why business expectations matter in how you develop your API and how to explore and question architectural decisions
  • How to expand your test design to deeply explore an API
  • How to model an existing API platform, and how that model can drive the design and automation of a suite of checks

Many argue that the future of software is not new creations, but instead creating new value by building connections between existing solutions both internally and across organisations. To take advantage of these opportunities we will need to value APIs in a whole new light. Come join us in building that broader understanding of APIs and how they relate to your business and end user goals.
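As a flavour of the kind of automated checks driven by a model of an API, here is a minimal sketch. The endpoint shape and field names are invented for illustration and are not part of any real API covered in the tutorial; the idea is simply that a declared model of the expected response can generate checks automatically.

```python
# A simple model of an endpoint's expected response: field name -> type.
# The fields here are hypothetical examples.
EXPECTED_SHAPE = {
    "id": int,
    "name": str,
    "active": bool,
}

def check_response(payload: dict, shape: dict) -> list:
    """Return a list of mismatches between a response payload and the model."""
    problems = []
    for field, expected_type in shape.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return problems

# A response that matches the model produces no problems;
# one that drifts from it is flagged.
good = {"id": 7, "name": "widget", "active": True}
bad = {"id": "7", "name": "widget"}

print(check_response(good, EXPECTED_SHAPE))  # []
print(check_response(bad, EXPECTED_SHAPE))
```

In practice the payload would come from a real HTTP call and the model might be richer (value ranges, nested objects, status codes), but the same model-driven pattern applies.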

Abby Bangser


Abby Bangser has had the opportunity to work in a variety of domains, countries, and team dynamics over her tenure as a consultant with ThoughtWorks. While the challenges of each domain and tech stack have varied, she has always found that the key to any quality product is the collaboration between the technical and business teams.

Over the last couple of years Abby has had the opportunity to join fantastic line-ups of speakers at Agile2016, European Testing Conference, TestBash Brighton & Philly, Nordic Testing Days, and the Grace Hopper Conference. In addition, she has begun blogging and is fairly active on Twitter at @a_bangser.

Mark Winteringham


Mark is a technical test manager, testing coach and international speaker, presenting workshops and talks on technical testing techniques. He has worked on award-winning projects across a wide variety of technology sectors, including broadcast, digital, financial and public sector, working with various Web, mobile and desktop technologies.

Mark is an expert in technical testing and test automation and is a passionate advocate of risk-based automation and automation-in-testing practices, which he regularly blogs about. He is also the co-founder of the Software Testing Clinic in London, a regular workshop where new and junior testers receive free mentoring and lessons in software testing.

Mark also has a keen interest in various technologies, regularly developing new apps and Internet of Things devices. You can get in touch with Mark on Twitter: @2bittester.

Tutorial – Dissecting Your Testing

Have you ever studied testing? Too few testers, so it seems, have read a book on testing or its underlying principles. Even fewer have studied testing by deliberately observing and evaluating it directly and systematically.

According to the Oxford English Dictionary, an autopsy is an “examination to discover the cause of death or the extent of disease”, ultimately derived from the Greek word “autoptes”, meaning “eyewitness”. Doctors perform autopsies to learn about the human body and to discover how things might have gone wrong. A testopsy, to use a word coined by James Bach, is an examination of testing work, performed by watching a testing session in action and evaluating it with the goal of sharpening observation and analysis of testing work. Testopsies can help in training, assessment, and developing testing skill for novices and experienced testers alike.

In this one-day workshop, led by Michael Bolton, participants will learn from each other by preparing and performing a series of testopsies. The process begins with creating a coding system, mapping out the activities that testers perform and the skills and tactics they apply. Using the coding system to guide observation, participants will watch each other as they test software, record what happens, and then discuss the activity and refine the coding system. Join us as we dissect our testing!

Michael Bolton

Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn’t realize they could solve. He is the co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Michael has 25 years of experience testing, developing, managing, and writing about software. For the last 18 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company’s flagship products and directed project and testing teams both in-house and around the world.

Contact Michael on Twitter at @michaelbolton, or through his Web site.

Tutorial – Introduction to Capacity Engineering

I am planning four blocks that cover, in my opinion, the most important aspects of capacity engineering:

1. Capacity 101: (50 minutes, 10 min break)
– goals of capacity engineering
– different methods of managing capacity in use today
– advantages and disadvantages of different approaches

2. Monitoring 101: (two 50-minute sessions, 10 min break with each)
– introduction to why monitoring matters
– war stories where monitoring either saved the day or its absence caused disasters
– what should be the goals of good monitoring
– discussion of how we could go about accomplishing those goals

– discussion of what we did at FB
– describing three daemons running on all servers
– showing what can be built on top of these
– demonstration of diagnosing problems using these daemons

3. Performance 101: (two 50-minute sessions, 10 min break with each)
– description of the performance space: terminology and usage
– what types of problems can be found
– what types of problems are most common/prevalent in code today

– performance testing (why it matters)
– discussion of tools for performance testing
– benchmarking done right

4. Putting it all together (two 50-minute sessions, the second one completely interactive)
– you just got your first performance job: where do you start?
– how do you develop reasonable approaches for a place that never had capacity work done?
– learning from past approaches (history will teach you a lot!)
– understanding risks and safety buffers
– alternative approaches to handling “unused” capacity
– Q and A
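The "risks and safety buffers" item above lends itself to back-of-the-envelope arithmetic. The sketch below shows one common way to estimate server counts from peak load and a safety buffer; all numbers are invented for illustration, and real capacity work relies on measured data and monitoring rather than guesses.

```python
import math

# Hypothetical inputs for a capacity estimate.
peak_rps = 12_000        # observed peak requests per second
per_server_rps = 400     # sustainable throughput of one server
safety_buffer = 1.5      # headroom for growth, failures, and spikes

# Provision for peak load times the buffer, rounded up to whole servers.
servers_needed = math.ceil(peak_rps * safety_buffer / per_server_rps)
print(servers_needed)  # 45
```

The interesting engineering questions are in choosing the buffer: too small and a traffic spike or rack failure causes an outage; too large and "unused" capacity sits idle, which is exactly the trade-off block 4 discusses.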

Goranka Bjedov


Known for her technical achievements and her refreshing wit, Goranka Bjedov works as a capacity engineer at Facebook, where she is responsible for making sure there are enough servers to handle everything Facebook users want to post, upload, find or otherwise engage with. Most of her time is spent analyzing performance and assessing risks. Her industry career has also included performance engineering positions at Google, Network Appliance, and AT&T Labs. She is a frequent keynote speaker at performance-related conferences and workshops.

Prior to joining industry, Goranka was a tenured faculty member in the Schools of Engineering at Purdue University, teaching mostly programming classes and conducting research in large-scale computer parallelism. She co-authored two textbooks and numerous papers, and was the publication chair for the Frontiers in Education conference for a decade. In that role, she co-wrote the software for the first complete Web-based conference proceedings production in 1995. She served on the Anita Borg Scholarship committee while at Google, and was the 2014 Grace Hopper Celebration Software Engineering Track Co-Chair and the 2015 Grace Hopper Celebration Career Track Co-Chair.

To register for CASTx17, please click here.