I had the opportunity to attend Noah Sussman’s tutorial on Continuous Automated Testing last week as part of CAST2014. It was a great tutorial, with most of the morning spent on the theory and concepts behind continuous automated testing, and the afternoon spent on hands-on exercises. Noah clearly understands the problems associated with test automation in an agile environment, and the solutions he presented show real depth of insight into those problems. Here are some of the main highlights and takeaways from the tutorial.

Key Concepts

  • Design Tools – QA and testing are design tools, and the purpose of software testing is to design systems that are deterministic
  • Efficiency-Thoroughness Trade-Offs (ETTO) – We do not always pick the best option; we pick the one that best meets the immediate needs
  • Ironies of automation – Automation makes things more complex and, while tools can make a process safer or faster, they cannot make it simpler
  • Hawthorne Effect – Productivity goes up, temporarily, when a new process or tool is introduced
  • Goodhart’s Law – Simplified for the tutorial, the law states that people will game the system. Period. (The full statement: when a measure becomes a target, it ceases to be a good measure.)
  • Diseconomies of scale – The opposite of economies of scale: past a certain size, each additional unit of a product or service costs more to produce
  • Conway’s Law – Simplified for the tutorial, the law states that software ends up looking like the organization that built it
  • Bikeshedding – It’s hard to build a complex, multipart system, but building a bike shed is easy, so organizations tend to spend too much time on trivial items

Automated Monitoring

In 2007, a Google engineer proposed that sufficiently advanced monitoring is indistinguishable from testing. This statement highlights the relationship between monitoring and testing, and we can certainly use advanced monitoring to help our testing efforts. For example, we can use statsd to instrument production code and gather high-volume data with little or no performance impact, since metrics are sent as fire-and-forget UDP packets (a sketch follows at the end of this section). The statement also raises the question of where monitoring ends and testing begins. Noah provided a list of four things we should be doing as part of our monitoring efforts:

  • We should monitor all things
  • We should build real-time dashboards
  • We should deploy continuously
  • We should fix production bugs on the fly 

We should do these four things while keeping in mind that monitoring provides visibility into implementation but has nothing to do with design, so it cannot replace QA and testing, which are design tools. Monitoring and testing are each necessary; only practiced together are they sufficient.
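
As a rough illustration of the kind of instrumentation statsd makes cheap, here is a minimal sketch using the Python statsd client. The daemon address is the statsd default; the metric names and the checkout handler are hypothetical, invented for the example.

    import statsd

    # Assumes a statsd daemon listening on localhost:8125 (the default).
    client = statsd.StatsClient(host="localhost", port=8125, prefix="myapp")

    def process(order):
        """Stand-in for the real production work."""
        return order

    def handle_checkout(order):
        """Hypothetical request handler instrumented with statsd."""
        # Time the critical section; the metric is shipped over
        # fire-and-forget UDP, so instrumentation never blocks the
        # production code path.
        with client.timer("checkout.duration"):
            process(order)
        # Count successful checkouts for a real-time dashboard to graph.
        client.incr("checkout.success")

    handle_checkout({"id": 42})

Because the client just fires UDP packets, a down or overloaded statsd daemon costs the application nothing worse than dropped metrics.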

The Problem of Abstraction

We use abstractions as a means of hiding information, and we layer them on top of the universe around us in an attempt to make things appear simpler than they really are. Eventually, however, we reach a point of complexity at which, even with multiple layers of abstraction hiding information from us, our brains cannot process any more. Joel Spolsky’s Law of Leaky Abstractions says that all non-trivial abstractions are, to some degree, leaky; when an abstraction leaks in software, the result is bugs, commonly at the points where abstractions integrate with each other.
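
A two-line Python example makes the leak concrete: decimal literals are an abstraction over binary IEEE 754 floating point, and the abstraction leaks exactly where the two representations disagree.

    # Decimal literals abstract over binary floating point (IEEE 754).
    # Neither 0.1 nor 0.2 has an exact binary representation, so the
    # abstraction leaks at the integration point: simple arithmetic.
    print(0.1 + 0.2)         # 0.30000000000000004, not 0.3
    print(0.1 + 0.2 == 0.3)  # False -- a classic source of subtle bugs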

Conway’s Game of Life

One approach to addressing the risk that leaky abstractions introduce into a system is to limit complexity. Limiting complexity was something Noah stressed several times, saying that “systems are safer if people keep the system under control” and “simple rules take you a lot further.” He also said that safety is derived from being able to predict system behavior, and he suggested Conway’s Game of Life, especially Golly, as a learning environment. Golly is a good tool for learning to predict system behavior because it can be used as a Read-Eval-Print loop (REPL): you predict the output of the next step, execute the step, and, since program state is maintained, toggle back and forth to understand and refine your predictions.
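
To make the predict-then-verify loop concrete outside of Golly, here is a minimal Python sketch of a single Game of Life generation (my own illustration, not Golly’s scripting API): write down your prediction for the next grid, run the step, and compare.

    from collections import Counter

    def life_step(live):
        """Advance one Game of Life generation.

        `live` is a set of (x, y) coordinates of live cells. Returning a
        new set keeps earlier states around, so predictions can be
        checked against actual behavior, step by step.
        """
        # Count the live neighbors of every cell adjacent to a live cell.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell lives next step with exactly 3 neighbors, or with 2 if
        # it was already alive (the standard B3/S23 rules).
        return {
            cell
            for cell, count in neighbor_counts.items()
            if count == 3 or (count == 2 and cell in live)
        }

    # Prediction: a "blinker" oscillates with period 2. Verify it.
    blinker = {(0, 1), (1, 1), (2, 1)}
    assert life_step(life_step(blinker)) == blinker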

Jenkins for Testing and Monitoring

The hands-on portion of the tutorial walked through setting up Jenkins on your local machine, taking it beyond a continuous integration tool to show how it can serve as a sort of “fancy cron” for scheduling test execution and other tasks. This is especially useful in continuous automated testing, since Jenkins can be set up as a Read-Eval-Print loop for rapid development of automation scripts. The tutorial went on to other useful aspects of Jenkins: appending /api to a URL to reach the built-in API documentation, pulling JSON from that API to build real-time dashboards, and using Jenkins as a database of historical test-execution records.
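
As a sketch of the URL trick, assuming a Jenkins instance at http://localhost:8080 with open read access and a hypothetical job named nightly-smoke-tests: appending /api/json to nearly any Jenkins page returns that page’s data as JSON, which a dashboard can poll.

    import json
    from urllib.request import urlopen

    # Hypothetical host and job name -- substitute your own.
    JENKINS = "http://localhost:8080"
    JOB = "nightly-smoke-tests"

    # Appending /api to a Jenkins URL shows its API documentation;
    # appending /api/json returns the same page as machine-readable JSON.
    url = f"{JENKINS}/job/{JOB}/lastBuild/api/json"
    with urlopen(url) as response:
        build = json.load(response)

    # A few fields are enough for a simple real-time dashboard tile.
    print(build["fullDisplayName"], build["result"], build["duration"])

Polling an endpoint like this for each job and rendering the results is all a first-cut dashboard needs, and the accumulated build records double as the historical database of test executions.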

Lightweight Automation

Automation in an agile environment is often too cumbersome, too brittle, and created too late in the sprint to be of any benefit to development in the current iteration. But it doesn’t have to be. Automated monitoring, REPLs, and simpler automated scripts can serve as a lightweight automation “framework” for continuous automated testing, without the overhead typically associated with traditional automation techniques. Implementing an automation strategy this way not only makes our automation efforts far more agile, it also lets us use automation as a design tool alongside our other testing activities.