Blog
BREWT #2 Experience Report
The following is an Experience Report submitted by Zeger Van Hese, who convened a peer workshop / conference in December 2018 that was supported by the AST Grant Program. It was originally published on the BREWT Peer Conference website. This year we organised...
Webinar: API Testing: From Entry Level to a PhD in 40mins with Jason Ioannides
November 2018 Webinar: Join us on Tuesday, November 6th at 10:00am PST to learn about API Testing: From Entry Level to a PhD in 40mins! Have you seen a recent job posting for a Tester or QA Engineer? The majority of job descriptions have some requirement for API...
Webinar: The Well Architected Automation Framework with Adam Goucher
October 2018 Webinar: Test automation talks tend to focus on dealing with flaky tests, reducing runtime, etc., even though most of those problems have had credible solutions for 5+ years. What doesn't get discussed at automation events is where their automation is run and...
Webinar: Teaching and Coaching Exploratory Testing with Maaret Pyhäjärvi
September 2018 Webinar: How would you test this? This is the question that leads the way Maaret has been passing on what she's learned about testing: giving people real software problems to work on that illustrate important testing skills. There is a lot of talk...
Webinar: Testing the Right Path Well with Rob Sabourin
April 2018 Webinar: Join us on Monday, April 2nd at 10:00am PST for the AST's presentation of "Testing the Right Path Well" also known as A Context Driven Solution to a Death March Problem with Path-Based Test Design with Rob Sabourin. Software testing is hard....
Webinar: The (AB)use and Misuse of Test Automation with Alan Page
February 2018 Webinar: If you’re a tester who codes, chances are a big chunk of your job involves test automation - specifically writing tests that automate the user workflow. As a two-decade (and counting) veteran of software testing, Alan Page has seen a huge number...
Webinar: The Three Pillars of Expert Test Leadership: Driving Projects, Process and People with Anna Royzman
January 2018 Webinar: This webinar is designed for the modern test leader. Whether you are responsible for establishing a testing practice in your organization, managing test processes or people, defining testing strategies for your team, coaching...
AST Grant Report – NWEWT 2
This guest post is from Duncan Nisbet, describing a peer workshop he helped finance with a grant from the AST Grant Program. Earlier this year we ran the 2nd North West Exploratory Workshop on Testing (NWEWT). NWEWT follows the peer conference format, which enables...
From 6 to 60: Our Scaled Agile Testing Journey
Wow, we are down to the last presentation of CAST 2017! These past two days have flown by. The final session I chose to attend was Cathy Toth's talk about her organization's ten-year journey towards effective Agile practices. Cathy described the day in 2007 when her...
Reducing Risk when Changing Legacy Code
Tina Fletcher used clever imagery to explain legacy code: ancient architecture with modern buildings built over and on top of it. Legacy code is code you inherited, don't understand, or find difficult to change. It's also murky stuff to look...
Press Releases
AST and BCS Peer Conference Considers Whether the Public Should Care About Software Testing
“It’s our contention that most people don’t think very much about software development and software testing, despite software being deeply embedded in almost all aspects of our lives.”
SAN FRANCISCO, CALIF. AND LONDON, UNITED KINGDOM (PRWEB) JANUARY 22, 2021
The Association for Software Testing (AST) and The BCS Special Interest Group in Software Testing (SIGIST) have released a joint report that suggests the public should be deeply invested in the quality of software, if not necessarily the discipline of software testing. The public is constantly exposed to risks from poor software quality, including in life-critical contexts.
Software impacts our world in so many ways it is hardly worth enumerating, but the newest risks demand serious examination. Machine learning algorithms are exploding in use as their cost and difficulty of implementation plummet, yet by their very nature they prevent even the people implementing them from truly understanding their inner workings or predicting their outputs. Social media, driven by these algorithms, is fracturing our society, and we do not yet know how that story ends.
Data sets used to train these algorithms are both deliberately and inadvertently simplified and carelessly selected, baking in biases and blind spots. These algorithms are being aggressively married to audio and video surveillance in public spaces, workplaces, and educational settings. Vehicle automation may yet prove safer per mile than manual control in the aggregate, but that is little comfort when contemplating the rush to deploy fully automated vehicles on our streets, at sea, and in the air. Robotics and further automation built on machine learning outputs will introduce risks we do not yet fully appreciate.
The public must be able to trust that experts have exercised good judgement about where, what, how, why, and when to test; that this testing has been conducted by skilled and curious testers with sufficient subject matter expertise; and that testing results are properly communicated to and consumed by the decision makers who determine whether and when to release software. This testing must center users and the public, not just commercial considerations.
The report proposes three approaches for establishing public trust: push, publicise, and punish. Pushes are applied up front to influence behaviour during the development of a product; publication puts information into the public domain to help consumers ask the right kinds of questions; and punishments discourage undesirable behaviour and introduce additional practices intended to prevent similar problems in the future.
Regulations codifying testing process standards are usually proposed as counterweights to the commercial pressure to release software as soon as it appears to work correctly under expected conditions, at least for the most important use cases. Standards can fit into all three categories of push, publicise, and punish, but they can be difficult to apply broadly and may contribute to goal displacement by optimizing testing for producing evidence that prescribed activities were recorded, rather than for deep and thorough examination.
In the software testing community, there has been controversy over the ISO 29119 standard for software testing. The report notes that if a testing standard is expected to serve as a proxy for a product quality standard, it risks trying to drive software development from the back of the bus. Narrow standards, intelligently applied to specific subject matter or contexts, could be very helpful; broad standards applied without consideration of context can be unhelpful or worse.
The Association for Software Testing (AST) is an organization for and by professional software testers that is creating community, boosting careers, and promoting the science and craft of software testing. Learn more at https://associationforsoftwaretesting.org
The BCS Special Interest Group in Software Testing (SIGIST) is the software testing specialist group of the British Computer Society. SIGIST promotes the importance of testing, develops an awareness of good practices recognised within the industry, represents the interests of the Group’s members with other bodies, encourages research into testing, promotes high standards of professionalism and excellence within testing, and promotes diversity within the industry generally. Learn more at https://www.bcs.org/membership/member-communities/software-testing-specialist-group/