Test Automation in Context: What Might Help? 

Let’s think about when, where, why, and how to apply automation effectively as part of a complete testing approach or strategy. We’re looking for practitioner experience and constructive advice to help people succeed. Of course, automation doesn’t replace thinking testers, but how might it support and aid broader testing efforts?

How would you describe the automated checks and exploration that have helped you? Which aspects of your context had the greatest effect on the success (or failure!) of your efforts? Which hopes and expectations for test automation are reasonable and achievable? What lessons have you learned from attempting to use tools to support your testing?

Selected talks will be repeated for an online audience shortly after CAST.

More content will be announced as we approach CAST.

Monday Tutorials

Tariq King Tutorial:

Testing AI and Machine Learning: A Holistic Strategy

Although there are several controversies and misunderstandings surrounding AI and machine learning (ML), one thing is apparent: people have quality concerns about the safety, reliability, and trustworthiness of these systems. Testing ML is challenging and requires a cross-section of skills and experience from areas such as mathematics, data science, software engineering, cyber-security, and operations. In this tutorial, Tariq King introduces you to a holistic strategy for testing AI and machine learning systems. You’ll start with AI/ML fundamentals and then dive into approaches for testing these systems both offline, prior to release, and online, post-deployment. You’ll engage with other participants to develop and execute a test plan for a live ML-based recommendation system, and experience the practical issues around testing AI first-hand. By the end of the session, you’ll be better prepared to help your organization build trustworthy machine learning systems.

Biography

Tariq King enjoys breaking things and is currently the Chief Scientist at test.ai, where he leads research and development of their core platform for AI-driven testing. He started his career in academia as a tenure-track professor and later transitioned into the software industry. Tariq previously held positions as a test architect, manager, director, and head of quality. He has published over 40 articles in peer-reviewed software testing books, journals, conferences, and workshops. He is a member of the ACM, IEEE Computer Society, and Association for Software Testing, and serves as a board member and keynote speaker for several international software engineering and testing conferences.

Tuesday Plenary Sessions

Ben Simo

Computer-Assisted Testing

Abstract to follow

Biography

Ben Simo is a software quality leader who has been helping make software better for over 30 years. Ben has applied his skills as a software tester, designer of testing tools, and quality coach in a variety of industries, including healthcare, finance, defense, education, marketing, ecommerce, and cloud services.

Ben approaches software testing as something much larger than verifying that explicit expectations are met. He believes that good testing includes observational and experimental investigation that enables people to make better decisions that lead to delivering better software. Although Ben believes that testing cannot be automated, automation plays an essential role in most of his testing work. 

Ben is a member of the Context-Driven school of testing and a former president of the Association for Software Testing.

Ben is perhaps best known as the “sort of skilled hacker” who uncovered numerous problems with the initial release of the US Government’s health insurance marketplace: Healthcare.gov.

Ben is currently exploring and creating tomorrow’s testing tools as a product researcher and product manager at Tricentis.

Rajni Hatti

Ethical Hacking for Testers

Security testing is often handled by a specialized team or a set of automated tools, but every tester should understand the basics of how malicious data can enter a system in order to prevent vulnerabilities from occurring in the first place. In this session, I will share a case study from a healthcare technology project where I led the QA initiative, and demonstrate the ethical hacking practices I used to find security flaws in contexts such as design, code review, testing, and product release. Specifically, I will show you the practices that worked to:

Determine the attack vectors most likely to occur based on the application functionality.
Determine what security tests are needed based on the likely attack vectors.
Create valuable tests to identify common security flaws.
Decide if and how a security testing tool can aid testing efforts.

You will learn how to be an ethical hacker in order to make your system more robust, and you will leave with a better understanding of how to incorporate a security mindset into your daily testing efforts.
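
To make one of those practices concrete, here is a minimal sketch, my own illustration rather than material from the talk, of probing an input field with common attack payloads from a Cypress test. The route (/search) and selectors (#q, #results) are hypothetical.

```typescript
/// <reference types="cypress" />

// Minimal sketch: feed common attack payloads into an input and assert
// that none of them come back as live markup. The route (/search) and
// selectors (#q, #results) are hypothetical, not from the talk.
const payloads: string[] = [
  "' OR '1'='1",               // classic SQL injection probe
  '<script>alert(1)</script>', // reflected XSS probe
  '../../../etc/passwd',       // path traversal probe
];

describe('input-handling security checks (sketch)', () => {
  payloads.forEach((payload) => {
    it(`escapes or rejects: ${payload}`, () => {
      cy.visit('/search');
      cy.get('#q').type(payload, { parseSpecialCharSequences: false });
      cy.get('form').submit();
      // Whatever the app does with the input, it must never echo it
      // back as an executable <script> element.
      cy.get('#results').find('script').should('not.exist');
    });
  });
});
```

Even a tiny parameterized check like this turns a likely attack vector into a repeatable test, which is the spirit of the practices listed above.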

Biography

Rajni Hatti has a Computer Science degree from Cornell University and has worked as a software professional for over 20 years in a variety of industries, including Telecom, Finance, Healthcare Technology, and Online Fraud Detection. She began her career as a software developer and gained a passion for testing, which led her to apply her technical skills as an automation engineer and her leadership skills as a manager and team coach. She then went on to start her own software consulting company, successfully driving projects from conception to deployment. Rajni currently works as a Lead Software Test Engineer for MaxMind, an industry-leading provider of IP intelligence and online fraud detection tools.

Greg Sypolt

Building a Better Tomorrow with Model-Based Testing

Let’s build a better tomorrow and a more equitable world.

We have all heard the call for change, and EVERFI is committed to answering it with educational courses and a quality-driven digital transformation. In a fast-paced agile development environment, it is challenging, if not impossible, to keep up with the hundreds of courses that require testing across our platforms. EVERFI is answering that challenge by introducing model-based testing.

Model-based testing is a software testing technique that helps you simplify and accelerate application development without jeopardizing quality. The system is broken down into smaller, manageable components, and models describe the system requirements and capture the expected behavior. Test generation algorithms then traverse this collection of models, which together represent the entire system, to generate Cypress test code from the visual flowchart models.
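
To make the idea concrete, here is a minimal sketch, my own illustration rather than EVERFI’s tooling, of how a small flow model can drive Cypress test generation. Every state name, selector, and route below is invented.

```typescript
/// <reference types="cypress" />

// Sketch: a course flow modeled as transitions between named states.
// A tiny generator walks the model and replays each edge as Cypress
// commands. All states, selectors, and routes are invented.
type State = 'start' | 'lesson' | 'quiz' | 'complete';

interface Transition {
  from: State;
  to: State;
  act: () => void; // the Cypress commands that drive this edge
}

const model: Transition[] = [
  { from: 'start',  to: 'lesson',   act: () => cy.get('[data-test=begin]').click() },
  { from: 'lesson', to: 'quiz',     act: () => cy.get('[data-test=next]').click() },
  { from: 'quiz',   to: 'complete', act: () => cy.get('[data-test=submit]').click() },
];

// Naive generator: follow transitions from a state until no edge remains.
function pathFrom(state: State, edges: Transition[]): Transition[] {
  const edge = edges.find((e) => e.from === state);
  return edge ? [edge, ...pathFrom(edge.to, edges)] : [];
}

describe('course flow (generated from the model)', () => {
  it('walks start -> complete', () => {
    cy.visit('/course/demo'); // hypothetical route
    for (const step of pathFrom('start', model)) {
      step.act();
      // Assumption for the sketch: each state maps to a URL segment.
      cy.url().should('include', step.to);
    }
  });
});
```

A production tool would also enumerate alternative and negative paths to cover every edge of the model, but even this toy version shows the division of labor: the model captures expected behavior, and the generator turns it into executable checks.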

Biography

As VP of Quality Assurance at EVERFI, Greg Sypolt enjoys speaking and writing about how others can rethink how they approach quality excellence. He has spent most of his career in quality assurance with an engineering mindset, gaining experience in quality tool development, automated testing, and DevOps. As a quality advocate, he believes delivering high-quality products is everyone’s responsibility.

Tariq King

Towards Better Software: How Testers Can Revolutionize AI and Machine Learning

You may have heard that software ate the world and AI is eating software.  However, if you’ve been paying attention, then you’ve probably realized that the world is filled with bad software. Many organizations struggle with meeting their quality goals and keeping testing-related costs contained. Software is indeed revolutionizing the world, but the world is also paying a revolutionary price for bad software. So where do AI and machine learning (ML) fit in? Are AI/ML breakthroughs just new ways of filling the world with bad software? Or do they offer a path towards better software? 

Tariq King believes that although ML is bringing valuable improvements and capabilities to software, it is unlikely for such advances to be successful without integrating testing and quality engineering into AI/ML workstreams. Join Tariq as he highlights the parallels and differences between ML engineering and software testing, and explains why he is convinced that testing professionals hold the keys for the transition to AI/ML system development. However, before testers can revolutionize AI/ML, they must first become agents of change through innovation. For the first time, Tariq will share his own journey and transformation, and discuss how he is helping other individuals, teams, and organizations on the road to AI. 

Biography

Tariq King enjoys breaking things and is currently the Chief Scientist at test.ai, where he leads research and development of their core platform for AI-driven testing. He started his career in academia as a tenure-track professor and later transitioned into the software industry. Tariq previously held positions as a test architect, manager, director, and head of quality. He has published over 40 articles in peer-reviewed software testing books, journals, conferences, and workshops. He is a member of the ACM, IEEE Computer Society, and Association for Software Testing, and serves as a board member and keynote speaker for several international software engineering and testing conferences.

Laurie Sirois

Quality Isn’t Funny

How can software testing professionals help their organizations take Quality even more seriously? Many of the challenges testers face are consistent across industries. This humorous, memoir-style talk will shed light on such serious topics as:

Elevating quality professionals to first-class citizen status.
Getting buy-in on “shift-left” testing practices and a true continuous-improvement culture.
Adding value by identifying risk, representing the customer, honing requirements, and planning testing efforts earlier in business discussions.
Sustaining progress by dropping enough of our defenses to take ownership to the next (organizational) level.

Humor doesn’t always have a place in preventing contentious escalations, but having data to back your recommendations and decisions is only one step in gaining support. Data doesn’t tell stories, people do: humor during stressful times can build trust and the strong relationships needed to get serious issues addressed sooner. Similarly, data doesn’t make decisions, people do. Leveraging self-awareness and emotional-intelligence tools can help us reach consensus sooner and underscore the seriousness of our daily profession without taking the fun out of our careers.

“The Life You Save May Be Your Own” ~Flannery O’Connor

Biography

Laurie Sirois is currently the Director of Quality Assurance at PTC Inc. and has over 25 years of experience in QA leadership roles, as well as Product Owner and Producer roles, at small, mid-sized, and global companies. She specializes in building and evolving QA teams, processes, and automation initiatives that close the gap between business and customer needs.

Laurie holds an active CMSQ (Certified Manager of Software Quality, QAI) certification, having been the top-scoring participant globally as of 2015. She has also earned her CPCU (Chartered Property Casualty Underwriter, The Institutes) and holds active CSPO and CSM designations from the Scrum Alliance. She champions Quality as a core component of enterprise risk management, and her favorite motto is “fail fast, succeed faster.”

Jack Taylor

When to Say No to Automation

I work for a large FinTech company in which the higher-ups base their lives on scorecards and eye-grabbing headlines. “The Travel Team now have 100% Automation and have completed their implementation of DevOps!” or “We are currently at 75% automation but we hope to have 100% across all applications by the end of Q2!” are typical phrases you’ll hear. But is this culture counterproductive when it comes to testing?

We all know that automation is a massive part of what we do, and for the most part it greatly enhances our test coverage, reduces workload and risk, and increases confidence in our software. However, there are times when automation isn’t required to achieve the greatest test efficiency, and we find ourselves implementing it simply because we’ve been told to.

For example, your director wants to tell his boss that he has 100% automation coverage on all applications, but what is the point of spending two weeks creating a regression suite for an application that is updated, on average, twice a year? Rather than building a large Selenium script base, a different approach, such as exploratory testing, might actually be more effective.
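
To put rough numbers on that trade-off, consider a back-of-envelope sketch; every figure below is invented for illustration.

```typescript
// Back-of-envelope check of the trade-off above; all numbers are invented.
const buildHours = 80;           // ~2 weeks to write the regression suite
const manualHoursPerRelease = 8; // one tester-day of manual regression
const releasesPerYear = 2;       // the app ships twice a year on average

const hoursSavedPerYear = manualHoursPerRelease * releasesPerYear; // 16
const yearsToBreakEven = buildHours / hoursSavedPerYear;           // 5

console.log(`The suite pays for itself after ~${yearsToBreakEven} years`);
// ...and that ignores script maintenance, which usually makes it worse.
```

On those made-up numbers the suite takes five years to pay for itself, which is exactly the kind of result that should prompt the question this talk asks.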

This talk will look at the scenarios in which automation might not be your best option, and how you can win over a leadership group typically focused on metrics and milestones.

Takeaways:

How to assess if automation is the best answer to your test problems

How to push back on leadership when they are obsessed with buzzwords and undesirable metrics

Different approaches to the standard regression methods

Biography

My name’s Jack and I am a 30-year-old from London/Brighton, currently working as a Senior Quality Engineer in financial technologies. I studied Multimedia and Digital Systems at Sussex University, and after graduating in 2011 I began my career in tech whilst continuing my postgraduate studies. It took me a while to find where I wanted to go, but once I fell onto the testing path I knew I had found the right track for me. I love engaging with the test community via any medium, whether it be Twitter, the Software Test Clinic, the MoT, or the Testers Network (a community of practice of more than 90 testers that I run at work). I am always keen to learn and to develop my understanding of our great craft, so please drop me a tweet anytime!