Lee Hawkins: 

A Day in the Life of a Test Architect

Although I stumbled into testing in 1999, after migrating from the UK to Australia during a tech boom, I have since become a passionate member of the worldwide testing community and currently hold the title of Principal Test Architect. But what does that really mean? A test architect at (Company) provides technical leadership and strategic direction for testing, and I will describe what that means in my day-to-day work.

My position involves advocacy for great new testing ideas gleaned from the wider testing community, mentoring new testers and coaching testing teams in using context-appropriate approaches to their work. This leadership role extends beyond (Company) too so a typical day might include sharing knowledge with a meetup group, blogging on a testing topic or helping a new speaker with a conference proposal.

Join me to discover that testing is far from being a dead-end career and learn how you can become an active participant in your testing community.


Mike Hrycyk: 

Augmenting the Agile Team – A Testing Success Story

A few things have become undeniable facts of business over the past few years. Agile or its descendants will be how we manage projects. Teams will have to learn to be successful while distributed and remote. The testing role is evolving, both for the above reasons and because technology always means change. Independent testers, contract testers and members of testing service organizations all face daunting challenges in embedding themselves and producing successful outcomes in this new world. Remotely embedded independent testers often have trouble relating to a team they are not permanently part of or co-located with. It’s harder for teams to gel into a cohesive, productive unit without traditional face-to-face organic team alignment activities. While these challenges are not insurmountable, with time, effort and tools an augmenting strategy can be an alternative path to success. This talk will describe implementing an augmenting testing team, one that works in parallel with feature teams, handling the SIT regression cycle while the feature teams continue to work through new development. Tips will be given for a successful feature handoff, intra-team communication and troubleshooting some of the hurdles that will be encountered. An augmenting team solution can provide an alternate path to success, bypassing many of the common problems, and this talk can get you there.


Josh Gibbs: 

Black Swans Wear Hoodies

As testers, we know bugs are often found off the happy path, yet we design personas emphasizing positive experiences. There’s a darker path that can attract attackers to our software, and understanding the attackers’ motives can help us find security bugs that may have otherwise eluded us. I’ll share actual examples and guide you through a few exercises to help you think like an attacker.

Key Learnings:

  • Hackers are normal, everyday people.
  • Our design decisions influence the types of attacks our application will face.
  • Your corporate security team can be a great partner in test design.


Dmitry Vinnik: 

Building Tests to Build Websites

Technologies like Squarespace, Salesforce, WordPress and Wix are extremely popular with those who want to create a working website without requisite developer knowledge. In this talk, I will explore how Salesforce uses the Page Object Model pattern to test its Communities platform, which is used to develop websites for Salesforce users. Throughout the talk, we will explore how a multi-frame platform can be directly mapped to POM for Selenium WebDriver, and how the client-side code is developed to support this pattern. What makes these test frameworks both important and complex is that they need to work for both the platform and the websites it produces.
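For readers unfamiliar with the Page Object Model the abstract refers to, here is a minimal sketch. The page class, frame name and locator are hypothetical stand-ins, not Salesforce's actual framework, and a fake driver replaces Selenium WebDriver so the example is self-contained.

```python
# Minimal Page Object Model sketch. FakeDriver stands in for a Selenium
# WebDriver; a real page object would receive a selenium.webdriver instance.

class FakeDriver:
    """Stub with a WebDriver-like frame-switching and lookup interface."""
    def __init__(self, elements):
        self.elements = elements   # (frame, locator) -> element text
        self.frame = None

    def switch_to_frame(self, name):
        self.frame = name          # real code: driver.switch_to.frame(name)

    def find_element(self, locator):
        return self.elements[(self.frame, locator)]


class CommunityHomePage:
    """One page object per frame/page: locators live here, not in tests."""
    FRAME = "content-frame"        # hypothetical frame name
    TITLE = "css:.community-title" # hypothetical locator

    def __init__(self, driver):
        self.driver = driver

    def title_text(self):
        self.driver.switch_to_frame(self.FRAME)
        return self.driver.find_element(self.TITLE)


driver = FakeDriver({("content-frame", "css:.community-title"): "Welcome"})
page = CommunityHomePage(driver)
print(page.title_text())
```

The point of the pattern is that tests never touch locators or frame switching directly, so a multi-frame platform can change its markup without breaking every test, only the affected page object.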


Arun Kumar Dutta: 

Continuous Regression Performance Testing for Enduring in the Market

High application performance is no longer a luxury for your business; it is a standard expectation. Typically, performance testing happens just before production for a major release and is given little time, even when performance issues surface, because going live takes priority. Organizations are now concerned with the increasing rate of change in the market, where they must not only compete but also keep pace with change to meet time-to-market pressures. Performance testing has become crucial before any application goes live, whether for major or minor releases or for sprints in agile projects. It is time to make continuous regression performance testing mandatory for all projects from the early phases of the SDLC, to avoid big losses and reduce overall costs.

This presentation will explain the value continuous regression performance testing brings over conventional performance testing, why it is essential for enduring in the market, and what you need to keep in mind while making its implementation an ongoing process.


Guillermo Skrilec: 

Creating and Implementing a Mobile Testing Strategy

Mobile testing presents unique challenges. There are trade-offs that you need to consider and choices that you need to make regarding the mix of different techniques and methods that will be used in mobile app testing. Each testing method you consider will have associated pros and cons, and you will most likely find that no single testing method is completely satisfactory.

Instead, you will need to consider a testing strategy that combines different testing options, and as a whole provide the best overall testing result, balancing the trade-offs between cost, quality and time-to-market.

In this session, we examine various testing options for mobile applications while explaining the factors you need to consider when determining your testing strategy.

Finally, we will make some recommendations on how you can combine the various testing options to find the best overall strategy to fit your mobile applications.

This session is going to be a high-level discussion based on our experience defining a corporate mobile testing strategy, analyzed from the perspective of Balanced Scorecards (Financial, Customer, Internal process, and Organizational capacity).


Pekka Marjamäki: 

Crime Scene Investigator

Do you want to be the next Horatio Caine or Gil Grissom? Here’s a chance to try your skills as a Crime Scene Investigator! In this workshop, we’ll develop skills in idea creation, note taking, storytelling, scenario imagination, finding proof and problems, and out-of-the-box thinking. The skills learned here can be used in many areas of life: software testing, development and management, problem solving in various occupations, and even studying and research. This workshop takes the guidelines and best practices of real Crime Scene Investigation and puts them into a digestible form to be used by anyone. The workshop uses a heuristic approach to investigating a crime. Throughout the workshop we recap the lessons learned in each stage by letting the attendees present their own ideas, and we use coaching skills to delve deeper into the learning process and capture the essence of learning.


Frank Charlton: 

Data is Your Friend

As a tester I find myself spending more and more time digging up and analyzing data. Instead of going into a test run, status update or triage meeting blind, I can go in fully informed and confident.

  • Hmm, looks like we only have one user on iOS 8. I should probably spend my efforts testing more widely used OS versions.
  • Our beta pool seems to cover all our localized languages except German. Let’s do a careful pass on that.
  • We spent six months on this feature… and no one is using it. How can we encourage users to make the most of it?
  • Our crash rate on the latest beta drop is through the roof! Which check-ins made it into the build?
  • Comcast users have significantly higher error rates. Is there something special about their default router setup?
  • We are finding way too many bugs at this stage in the release. I think we may need to re-think that release date.
  • It looks like most of the code changed is in this part of the app. Rather than doing a full regression, let’s spend more of our time focusing on tests in that area.

Data can help us in all of these areas and more. During my session I’ll be discussing what I’ve learned during my time nerding out with data, and how you can use similar concepts during your work.


Raj Subramanian: 

Demystifying Mobile Testing - Quick Tours on Your Mobile App

As mobile devices, tools, operating systems and web technologies rapidly evolve, testers must quickly adapt their thinking in this changing domain. Testers often struggle to find important vulnerabilities and bugs in mobile applications due to lack of guidance, experience and the right resources. During my career in the mobile testing field, I’ve come across numerous bugs related to native mobile applications. Looking at these bugs, I started categorizing them, and have since come up with a mind map. This mind map helps to provide a quick tour of your mobile application and find vulnerabilities as quickly as possible (http://www.rajsubra.com/2015/01/16/native-app-testing-cheat-sheet-quick-tour/ ). This could be used for smoke testing, acceptance testing or even production testing after your application is live on the different app stores. This session will give attendees hands-on experience by using these mobile testing approaches in real applications to get quick feedback.


Matt Heusser: 

Explaining Testing with Exercises

A lot of test education is PowerPoint.

Yet we know that a disproportionate number of testers are tactile learners. We learn by doing, which is what makes many of us so good at exploration and discovery.

This class involves actual testing. Participants are immersed in a simulation that includes time pressure, uncertainty, and conditions of ambiguity with evolving requirements. After sharing our bugs, we reflect on what we have learned, discuss as a group, then provide enough instructors’ notes for others to run the exercise at their home office.

We’ve been working on this simulation for two years. We use it for job interviews and in training. This particular version of the exercise has never been run at a conference in its current form, though we prototyped the exercise at TestRetreat in 2016.


Cathy Toth: 

From 6 to 60: Our Scaled Agile Testing Journey

When the National Nuclear Security Administration (NNSA) Program Management Information System Generation 2 (G2) project launched in 2007, our executive sponsor stood in front of the project team and stated what we already knew: “I’m a very demanding customer. I know what I want. I wanted it yesterday. And I reserve the right to change my mind.” This talk will share our experiences in establishing testing as an integral part of the development process, how we focus testing on what matters and how we have come to think about processes.

Since the beginning of the G2 project, testers have come from one company and developers have come from at least three different companies, all working to serve customers located at DOE headquarters, about 500 miles away. How did we set up testing and testers to help the project succeed? First, it’s the relationships. Good relationships meant we needed to spend time with customers and our developer teammates. We needed to understand their different worlds and find ways to bridge them. Second, it’s what we set out to do. We defined our role on the team. We wanted to know whether the end product really solved the business problem, and we wanted the customer to be able to make good business decisions. Testing is not the gatekeeper; it’s a partner in the overall quality of the product and the end-user experience. Third, it’s about a process that lets people focus on what’s important and protects them from rash reactions under pressure. We learned that processes can be our friend. We devised processes to facilitate communication and build in transparency so we could deliver the highest-quality product for the longest sustained time. People are accountable to the process, and the process protects people. We can break the rules, but the process requires we do it with our eyes open. When the process stops working, we change it.

In 2007, the G2 project launched with one scrum team of six people serving one NNSA program: the Global Threat Reduction Initiative (GTRI). Over the past ten years, the project has grown into an enterprise solution developed by a team of over 60 people divided into six scrum teams serving six NNSA programs. The original application helped the GTRI program to grow from approximately $95M of nuclear non-proliferation projects to nearly $400M in less than three years without adding any additional federal staff. Today, the G2 system integrates existing Headquarters and Laboratory scope, schedule, more than $2B of annual budget and metrics information at the project level, creating a single repository of participating program data and providing execution-level modules for management oversight, data collection, analysis and tracking. Users can access the massive data set collected through GIS graphical representations, on-demand reports and business intelligence capabilities. Throughout the G2 project’s entire life, Acato has served as the sole independent test team. In 2010, the project was the first federal project to be recognized by the Project Management Institute as Project of the Year, and the Acato team’s approach to software quality was cited as a key reason for the award. The project was more recently recognized by the National Defense and Industrial Association (NDIA) for Enterprise Information with a 2015 Excellence in Information Award, and by NNSA with a 2015 Excellence Award from NA-50.

Our talk will share the history of this project, the challenges we faced and our current processes.


Chris Garcia, Márion Nepomuceno: 

Hands-On Web Application Testing

Get hands-on experience with the most widely used HTTP proxy for web application security testing. For this intermediate-level workshop, some HTTP/HTML knowledge is recommended.


Raj Subramanian, Carlene Wesemeyer: 

How Testers Can Become Effective Communicators

Have you ever been in a situation where…
• You find a bug, but the developer does not listen to what you have to say and ignores you?
• Your stakeholders ask you about the status of the project in spite of you reporting status periodically?
• You give the necessary status updates in standups and your team still cannot understand what you were talking about?
• You are ambushed by your project manager when you least expect it, commit to something out of fear or without thinking, and then realize you have made a major mistake you will long regret?
• You are managing a remote team outside the country and find it hard to collaborate, although you give them all the resources they need for proper communication and status updates?
• You are talking in a team meeting and many of the team members start looking at each other and reaching for their mobile phones and laptops without paying much attention to you?

If you answered YES to any of the questions above, this session is for you. Based on our real-world experience, attending other presentations and talking to various software practitioners, we have identified some of the components and behavioural patterns that can help address the questions above. Solutions are often simple, but getting to them is complicated. Come join us as we share our research and experiences pertaining to effective communication for software testers in the corporate world.


Chris Glaettli: 

How We Tested Gotthard Base Tunnel to Start Operation One Year Earlier

Automatic train protection has parts on the rolling stock as well as on the track, and it consists of both hardware and software. In this highly regulated field we must also comply with several standards, which the testing strategy had to consider. We applied some context-driven approaches to stay on the meaningful track. Five years into the nine-year project, the customer had to accelerate delivery by one year. Thanks to our laboratory we were able to speed things up. After lab testing, the focus switched to the field: to the tunnel itself. When you are on the locomotive performing the tests, a lot of processes and regulations must be taken into account. And, yes, in the end, it’s about teamwork.


Joel Tosi:

Intro to Chaos - Chaos Engineering in Your World

Chaos Engineering, which originated at Netflix, is more than just randomly breaking stuff. It is a means of learning about complex systems: a way of understanding business impact, a way of building resiliency and, ultimately, a way of learning.


Fahed Sider: 

Lessons Learned Journey in Software Testing: Try. Experiment. Fail. Try Again

Those who fail to learn from history are doomed to repeat it. Software development has changed a lot in the past 17 years. So has software testing. I will take you back ten years and journey to the present to learn from different software development processes and the challenges testers face: Waterfall, Mini-Waterfall and agile; manual, automation, exploratory and regression testing. We will also glimpse into the future to see how to build testing skills that help the team work together to build quality into our product.

Objectives of the presentation: how to grow your skills across the key areas of agile teams, agile testing and software testing, and the key lessons I learned and practiced from some of the most influential people in software testing: Lisa Crispin, Janet Gregory, James Bach, Elisabeth Hendrickson and Michael Bolton.


Sivakumar Anna: 

Mining JIRA Data for Defect Prediction

JIRA dashboards are the ubiquitous tool to keep track of projects and provide useful metrics for effective project management. However, this data also contains very useful insights that can lead to better allocation of testing and development resources for future releases of a product. This session will present an approach to using machine learning to glean predictive insights from JIRA defect data, walking through a scenario of how a dashboard tool can be integrated with JIRA. Using real-world examples, we will outline a specific scenario of how JIRA data can be moved from raw defect data to predictive analytics and follow it with actionable recommendations.

Attendees of this session will fully understand:
- Possibilities of JIRA as an effective tool for predictive analytics
- How JIRA can speed up testing and release cycles
- How to use JIRA for more efficient project management
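As a rough illustration of the raw-defects-to-prediction step (not the speaker's actual pipeline), one can aggregate JIRA defect counts per component and flag components whose defect rate is trending upward between releases. The record fields below are hypothetical stand-ins for what a JIRA export might contain.

```python
from collections import defaultdict

# Toy defect records as they might come from a JIRA export; the field
# names ("component", "release") are illustrative only.
defects = [
    {"component": "checkout", "release": "1.0"},
    {"component": "checkout", "release": "1.1"},
    {"component": "checkout", "release": "1.1"},
    {"component": "search",   "release": "1.0"},
]

def risk_scores(defects, current="1.1", previous="1.0"):
    """Naive predictor: a component whose defect count grew between the
    last two releases is flagged as higher-risk for the next one."""
    counts = defaultdict(lambda: defaultdict(int))
    for d in defects:
        counts[d["component"]][d["release"]] += 1
    return {
        comp: by_rel[current] - by_rel[previous]
        for comp, by_rel in counts.items()
    }

scores = risk_scores(defects)
print(scores)  # checkout trends up (+1), search trends down (-1)
```

A real machine-learning approach would replace this trend heuristic with a trained model over many more features (severity, component churn, reporter, fix time), but the data-wrangling shape is the same: raw JIRA issues in, per-component risk signal out.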


Tina Fletcher: 

Reducing Risk When Changing Legacy Code

Do you work on a large product that’s been around for a while? Are there dark corners and scary areas that no one wants to touch? Does everything seem interconnected in ways that no one fully understands? Do you have low, spotty or maybe even unknown automated test coverage? Do your colleagues talk about doing a “big re-write” or hiring an army of contractors to add tests in fragile areas? Does it feel tempting to attribute escaped defects to something like “there was no subject matter expert for the code we changed”?

If any of this sounds familiar, well, I’m afraid I don’t have all the answers. However, my belief that there are ways to make changing legacy code safer has led me to conduct several experiments to help determine what the most effective tactics might be. Interestingly, although I am a tester, none of the investigations I have undertaken so far involved doing any actual testing. So no matter what your role is, join me to hear what I’ve learned about good and bad ways to approach risk mitigation strategies such as code stewardship, team shadowing, knowledge management, test coverage analysis and culture change.


Galen Emery: 

Security and Testing: Why Red, Green, Deploy Matters More Than Ever

We get security into the pipeline by testing security, just like we write unit, integration, smoke and functional tests. Using the open-source InSpec testing language, we can bring these security controls into the testing pipeline and ensure that our build doesn’t ship unless the system maintains its security posture.

We do that by treating our security controls like integration tests. Does the system actually comply with the rule? By doing so, we can automate this type of testing and put it into our pipeline. Once it’s there, we can ensure that code doesn’t move forward until it clears these tests, eliminating a significant bottleneck to our velocity.
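A pipeline gate of the kind described might look like this sketch: it inspects a JSON compliance report and fails the stage on any failed control. The report shape shown is a simplified assumption, not InSpec's actual reporter schema, which nests results under profiles and controls.

```python
import json

# Simplified stand-in for a JSON compliance report; a real InSpec report
# nests results under profiles -> controls -> results.
report_json = """
{"controls": [
    {"id": "ssh-01", "status": "passed"},
    {"id": "ssh-02", "status": "failed"}
]}
"""

def gate(report_text):
    """Return the ids of failed controls; an empty list means the build
    may proceed past the security stage."""
    report = json.loads(report_text)
    return [c["id"] for c in report["controls"] if c["status"] != "passed"]

failures = gate(report_json)
if failures:
    print("security gate failed:", failures)
    # in a CI job this is where the stage would exit non-zero
```

The design choice worth noting is that the gate is just another test stage: the same red/green signal that blocks a functional regression also blocks a compliance regression.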

In this talk, I’ll go over why we need to build security into our code pipeline, what doing so gives us in velocity and security, and how we can generate reports with InSpec to please managers, auditors and security teams.


S.P. Schrijver: 

Session-Based Testing in an Agile Project - Sessions I Do During a Sprint

When we start a two-week sprint, I do not always begin directly with test case execution. The developers start their coding work; the tester starts with preparation. As a tester, I begin by analyzing the user stories: the work I can expect. The next step is to gather the information I need to come up with test ideas and a decent risk assessment. I document all this preparation work in session reports, which helps me define the test cases I can start with. During test execution, I observe what happens and write it down in my session reports. At the end of the sprint I summarize my work and give the stakeholders a recommendation on all the user stories we completed.

In my talk, I will elaborate on the work I do as a tester in a two-week sprint. All of it is based on my own experiences. My goal is to give insight into what I do and, I hope, to help people with their own struggles.


Mike Hrycyk: 

Technical Nomads: Stemming the Migration of Senior Talent

There has been a problem in software tech jobs for a long time now. What do you do with a technical resource when they have reached the senior level and want to continue growing? The only real answer to this question has long been that of joining the management track. Many technical resources, however, don’t want to be people managers, handling resourcing, project management or politics. Instead, they get bored and take the next interesting job to come along, where they spend a couple of years, get bored and move on again. We’ve been creating an upper echelon of Technical Nomads.

We’ve implemented a Guru Career Track in our organization, giving senior resources a way to continue growing and gathering the three career R’s: respect, responsibility and remuneration. This talk will discuss the implementation of the Guru Track and share some of the lessons we’ve learned getting there.


Andrea M Connell: 

Testing Through Time And Space: NASA’s Twenty-Year Mission to Saturn

NASA’s Cassini mission to Saturn launched in 1997, and has been orbiting the ringed planet continuously since arrival in 2004. Throughout this time, the Mission Sequencing Subsystem team at the Jet Propulsion Laboratory has developed software used to design and validate the spacecraft’s science activities. As we learn more about the Saturn system and as the spacecraft ages, software changes are needed. Automating tests for software that was initially developed before modern architecture and testing methodologies existed has posed many challenges. The limited-funding and risk-averse environment of a flagship planetary mission heightens these challenges. This talk will discuss the strategies taken and lessons learned from nearly two decades of flight.


Justin Harrison: 

What the Heck Do Performance Testers Really Do?

Most testers are very skilled at functional testing, and provide assurance that software works as designed. What they often are not equipped to do is validate that a system will work the same way with 1,000 or 10,000 users as it does with one. When system performance is bad, however, the effects can be severe and highly visible:

  • Customers unable to access services = lost revenue
  • Employees unable to perform their jobs = wasted resources
  • Broken data-transfer processes = corrupt data
  • Confusion and frustration = tarnished brand

Performance testing is essential for determining how scalable any software implementation is. The critical steps of an effective performance testing plan are as follows:

  • Forecast user load and usage patterns.
  • Identify technical risks inherent to system architecture or design.
  • Design an appropriate environment for load simulations.
  • Plan for realistic test data.
  • Employ the right automated tools for the job.

VPS has deep experience with performance testing, serving some of the largest agencies in the Federal government as well as commercial customers. We use a blend of adapted and proprietary tools to measure system performance and provide actionable results. Come visit with us and learn the basics of performance testing!


Alex Bauduin: 

Your Safety as a Boeing 777 Passenger is the Product of a "Big Gaming Rig"

Manufacturing and testing a full flight simulator poses several challenges: hard deadlines, highly regulated environment, high numbers of complex components, safety and legal concerns. Airplanes are manufactured at a faster pace — full flight simulators are no exception.

A fixed test plan cannot respond to the various incidents that will occur during the manufacturing, integration and testing of the device. Risk management and close collaboration with different departments are keys to success. Drastic changes in a simulator’s design enabled a review of, among other things, some of our practices and processes. The first step was to adopt a common policy covering the new generation of flight simulators: test the right thing, at the right place, at the right time, using the right tools. Participation in hardware design sessions allowed early insertion of testing points at the frontier of hardware and software. Moreover, root cause analysis was used to tackle recurrent issues, active participation in backlog grooming facilitated bug resolution, a semi-automatic testing system was deployed in manufacturing to distribute testing to appropriate locations, and a dashboard tracking both hardware and software test progress gave project management a full picture. Some tests were moved to smaller rigs (desktop simulation) instead of the full flight simulator to free up time, and testing was standardized and made more objective through test automation. Reuse of pieces of automated testing allowed the creation of an acceptance document in the form of a flight scenario matching aviation authorities’ certification requirements.