But what if you are already hitting your deadlines and have no issues found in Production or User Acceptance Testing? Why, then, should we put any effort into improving how we test?
This is a common discussion I have, especially amongst those who justifiably count themselves as experienced testers and managers. And it's a fair question… as long as you've thought about it critically!
I was a quietly confident test manager. I've worked in multiple industries. I work hard, have high standards and strive to make sure my project team are happy with the work my team and I do. But seeing James Bach solve a team's tricky testing problem within 10 minutes, mainly because he asked such perfectly targeted questions… that was my wake-up call. Yes, I had my years of testing experience, but until I discovered heuristics I didn't have a method to help me quickly anticipate risks and create potential solutions to problems, even when facing a brand new technology or business area. Because I was open to learning, I have gained (and keep gaining) new skills. I now have the confidence, when teams approach me for advice, that I can quickly get enough context to offer suggestions which could make their work lives easier and ensure they have the best chance of delivering excellent results to their project team.
I'm learning that the differentiator lies in attempting to understand what I don't yet know about a project. A common way to make testing more efficient is to recognise that the things testers often dismiss as SEPs (Someone Else's Problem) may actually be worth gathering more information on.
An example…
A team had worked on a project for 2 or 3 years, had excellent business knowledge and were confident in their test approach. The latest project had come in and they were busy planning the testing for the complex functional changes in the release. When I asked who was sponsoring this release (a question which comes from the M in the MIDTESTD heuristic checklist… Google it if you need to), they casually mentioned this release was technology-led; unusual for them, as most changes are driven by the business. This raised a flag for me, so I asked further questions. It then transpired that the underlying architecture of the application was being changed to a Service Oriented Architecture (SOA), but the test team had not reflected this in their planning, or even thought to mention it. I questioned more and was told the reason: “the dev team will test the architecture element. We were told the testers need only focus on testing the functionality changes & regression”.

I had all sorts of personal issues with this, as I have seen SOA projects be both a success and a nightmare based purely on how early the test team got involved… but in this context time was short, and it was too late to get that early involvement. So what could this team do to improve their testing, given that the remit had been set by the dev team and it really was too late to change it? They were already planning and would conduct good-quality functional testing, of that I had no doubt. But my heuristics told me they still seemed to be missing some vital elements… so I asked about the test environments. Were they available? The response: “oh, there's a team that always supplies our environments. We will get it 2 days before testing starts… and we'll probably lose a day at the start to teething problems, but it's the same every release and we always plan for that”.
I continued to question:
Me: “Is it the same team that has always supplied your environments? Any new members?”
TM: “No, it's the same team,” they replied confidently
BIG RED FLAG!!!!
Me: “So have the environments and dev team had any training on SOA?”
TM: “I don’t know”
Me: “Are they planning any additional time to prepare your new test environment?”
TM: “No idea, they promise to deliver and they always do”
Can you see the problem here…? Lulled into a comfortable cycle that had worked well for the last few years, the test manager had been unable to see the looming risk that this new architecture would bring. The ‘direction’ to focus on the functionality and not worry about the architecture compounded this temporary blindness to new risks. The technology was new to the dev team, and it was new to the support team. I suggested the test manager speak to the support team ASAP (the release was expected into Test in 2 weeks' time) and find out what was being done differently to prepare the new test environment. Did they have the hardware required? Were they already trained to deploy and support the new architecture? How confident were they? Could the test team help out with any early smoke testing?
Why would the test manager need to know this? Because, if I were planning this testing and did not get satisfactory answers to those questions, I would want to prove out / smoke test the environment earlier than usual in order to iron out the inevitable teething problems. If I could not do this, I would add extra days to the test schedule and raise a risk to the project team that testing may be impacted if there are any delays due to the new environment.

Unfortunately, the test manager did not follow up on this… the result: a 2-week delay to the start of testing. Did the project blame the test manager? No, they didn't. But it was still the test team who worked late nights and weekends to make the time up! Had the TM followed up and flagged this potential issue earlier, the delays and overtime may have been avoided… with the result of an on-time release to production, a smaller overtime bill and a less tired project team.

As testers we are not responsible for making the whole project run smoothly. However, I feel it is my duty to consider and raise risks around our dependencies as early as possible, to give the project team a fighting chance to mitigate them. SEPs can soon become Our Problems if we don't take an interest.
So how can we seek to improve?
Below are some pointers I consider when I am assessing improvement areas on a project that is already working well:
Ask for Feedback!
Have I ever directly asked the stakeholders whether they are happy with the work my team does? Taken their feedback on how I could improve my service to them? It may be that they wouldn't know good testing if it walked up and slapped them. When that's the case, I've often chosen to evaluate my team members based on the direct knowledge I have. But I realise that customers still have an opinion, and if they are ultimately paying for our service, it will protect my team's interests to know what that is… and I no longer assume they are happy because I don't get complaints. I ask directly. By getting feedback from multiple teams, I have seen patterns start to emerge. Let me address some of the common statements here:
Test Faster?
If they think testing should be done faster… maybe they are right! Or perhaps they are OK with the testing timescales?
We had a team who had worked well for years in a slick and organised release model. The TM felt there was no need to change, as the project always granted them enough time for test preparation; when they asked for more time, they were given it. However, having seen the benefits to other projects, the team recently started using Visual Test Models (they really are catching on in a HUGE way now in my organisation). They use the VTM for execution too, so they no longer write test cases for new features, and they conduct a leaner, more targeted regression test. The result: less time needed to test a release. Were the project team happy before? Yes, they were. But since the change they are even happier, because the time to production is faster and testing never has to ask for more time! Even when someone is happy with the work we do, we should still seek to improve on it. Any improvement that reduces costs and timescales (without impacting quality) will be well received by stakeholders, whether they were happy before or not.
Be more transparent?
One common complaint I hear from stakeholders is a lack of transparency. To be honest, this took us by surprise! We send test plans, we invite review of tests before we start, we send daily status reports and we join project calls. How, then, are we not being transparent?
I've come to realise that ‘lack of transparency’ often means ‘lack of control’. Only by being consulted will people feel they ‘have transparency’ into the testing on their projects. If the testing budget disappears each month but all they get back is “we need 10 days to test… here are some documents, tests and your results”, it may feel like a black box to them.
So we decided to be transparent when presenting estimates, giving multiple options to allow for the risk appetite of our customer (e.g. Option 1: test for 5 days and cover all high-, medium- and low-risk modules; Option 2: test for 3 days and cover high and medium; Option 3: take 1 day to test only high). If we only give a single estimate, we give our stakeholders no option to make an informed decision about risk. Supplying these options allows them some control. This has helped to change the ‘transparency issue’ feedback.
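As a rough illustration, the tiered options above could be captured in a simple structure like the minimal Python sketch below (the names and numbers are hypothetical, lifted from the example; this is just one way to lay the choices out, not a prescribed tool):

```python
# A minimal sketch of risk-tiered test estimates, so stakeholders can
# trade testing time against risk coverage. All values are illustrative.
ESTIMATE_OPTIONS = [
    {"option": 1, "days": 5, "covers": ["high", "medium", "low"]},
    {"option": 2, "days": 3, "covers": ["high", "medium"]},
    {"option": 3, "days": 1, "covers": ["high"]},
]

def present_options(options):
    """Print each option so the stakeholder can make an informed risk decision."""
    for opt in options:
        covered = ", ".join(opt["covers"])
        print(f"Option {opt['option']}: {opt['days']} day(s), covering {covered}-risk modules")

present_options(ESTIMATE_OPTIONS)
```

The point of the structure is simply that every estimate carries an explicit statement of what is and is not covered, so the decision about accepted risk sits with the customer rather than being hidden inside a single number.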
Become experts?
Other feedback has been that the test team don't understand the business / application. Often we know this is inaccurate: the team actually have excellent knowledge. The problem is that they haven't demonstrated it. So where we have teams who are not already recognised as experts, we find a way to show off that knowledge! We are creating more wikis containing business / product knowledge, and we make sure the devs and BAs have access so they can also use it. And if our test team genuinely don't have that expert knowledge, then we set about building it fast, again using heuristics, brainstorming and Visual Test Models.
Better still, the test teams who create a Visual Test Model and present it to their project find that no one will ever doubt their system knowledge again. Every team we present these to is impressed, as they have never seen the system mapped out in that way, and have rarely seen the test team contribute so actively to developing understanding of the application. Teams may expect test scripts to contain the testers' knowledge, but they rarely look at them. We now invest our time better by documenting the system in a way that a wider audience can review and understand. There are now BAs in our organisation using mind-maps for analysis and planning, and joining my training courses to understand how we are using these methods! We know for certain our improvements are appreciated when they are being imitated outside of testing!
So the moral of this blog post…?
Don’t be blinded by your previous successes. It is what’s in front of you that will trip you up, especially when you are looking behind you!