So far, we have two ways to predict project outcome:

First, by comparing the test effort to other projects and suggesting it is “between the size of project X and project Y”, thus giving us a range.

Second, by calculating test costs as a percentage of the development (or total project) effort, looking at the official schedule, and projecting our expected project length. If we’re smart, we also take slips into account when we work out that percentage.
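To make that second method concrete, here is a minimal sketch of the arithmetic in Python. The historical percentage, scheduled weeks, and slip factor are all made-up numbers for illustration, not data from any real project:

```python
# Percentage-of-effort method, sketched with hypothetical numbers.
# Assume test effort has historically run about 25% of development effort,
# and that past projects have slipped about 20% past the official schedule.

historical_test_pct = 0.25   # test effort / dev effort on past projects
scheduled_dev_weeks = 40     # development length from the official schedule
typical_slip_factor = 1.20   # past projects ran roughly 20% over schedule

projected_dev_weeks = scheduled_dev_weeks * typical_slip_factor
test_estimate_weeks = projected_dev_weeks * historical_test_pct

print(f"Projected dev length: {projected_dev_weeks:.0f} weeks")
print(f"Test effort estimate: {test_estimate_weeks:.0f} weeks")
```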

A third approach I can suggest is to predict the cycle time – the time to run through all the testing once. I find that teams are often good at predicting cycle time. The problem is that they predict everything will go right.

It turns out that things don’t go right.

Team members find defects. That means they have to stop, reproduce the issue, document the issue, and start over — that takes time. More than that, it takes mental energy; the tester has to “switch gears.” Plus, each defect found means a defect that needs to be verified at some later point. A large percentage of defects require conversation, triage, and additional mental effort.

Then there is the inevitable waiting for the build, waiting for the environment, the “one more thing I forgot.”

So each cycle estimate should be larger than the ideal cycle time – perhaps by 30 to 40%.

Then we need to predict the number of cycles based on previous projects. Four is usually a reasonable number to start with — of course, it depends on whether “code complete” means the code is actually complete or not. If “code complete” means “the first chunks of code big enough to hand to test are done”, you’ll need more cycles.

If you start to hear rhetoric about “making it up later” or “the specs took longer than we expected, but now that they are solid, development should go faster”, you’ll need more cycles.

(Hint: When folks plan to make it up later, that means the software is more complex, probably buggier, than the team expected. That means it’ll take more time to test than you’d hoped, not less.)
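Here is a small sketch of the cycle-based arithmetic, padding the ideal cycle and multiplying by the expected number of cycles. The ideal cycle length, overhead, and cycle count are invented for illustration:

```python
# Cycle-time method, sketched with hypothetical numbers.
# Pad the ideal cycle time by 30-40% for defects, waiting, and gear-switching,
# then multiply by the expected number of cycles.

ideal_cycle_days = 10        # "everything goes right" pass through all the testing
overhead = 0.35              # somewhere in the 30-40% range suggested above
cycles = 4                   # a reasonable starting point if code complete is real

padded_cycle_days = ideal_cycle_days * (1 + overhead)
test_estimate_days = padded_cycle_days * cycles

print(f"Padded cycle: {padded_cycle_days:.1f} days")
print(f"Total test estimate: {test_estimate_days:.0f} days")
```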


So now we have three different methods to come up with estimates. With these three measures we can do something called triangulation – where we average the three. (Or average the ranges, if you came up with ranges.)
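A minimal sketch of that averaging, assuming each method produced a (low, high) range; the numbers are hypothetical:

```python
# Triangulation as described above: average the three estimates,
# or average the range endpoints if each method produced a range.

estimates = [(10, 14), (11, 13), (8, 18)]  # (low, high) in weeks, one per method

low = sum(lo for lo, _ in estimates) / len(estimates)
high = sum(hi for _, hi in estimates) / len(estimates)

print(f"Triangulated range: {low:.1f} to {high:.1f} weeks")
```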

When the numbers come in, it’s human nature to want to throw out the outliers – the estimates that seem too big or too small.

I don’t recommend that. Instead, ask why the outliers are big or small. “What’s up with that?”

Only throw out the outlier if you can easily figure out why it is conceptually invalid. Otherwise, listen to the outlier.

Which brings up a problem — all the estimating techniques I’ve listed so far have a couple of major conceptual flaws. And I haven’t talked about iterative or incremental models yet.

They are just a start.

Still more to come.