Some of the most important testing questions I don’t see asked often enough are performance-related design questions:

1. How long do we keep trying for?
2. What do we do while we are trying?
3. When we fail, how long does that take?
4. How do we define fail?
5. How many ways can we fail?
6. What if we fail in another way?
7. Can we clearly and correctly define success and failure?
8. How does failure impact the user?
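To make the questions concrete, here is a minimal sketch of how they might become explicit decisions in a retry wrapper. Everything here is hypothetical — the function name, the deadline, the backoff numbers, and the choice of `ConnectionError` as "how we define fail" are illustrative, not from any real system.

```python
import time

def call_with_retry(operation, total_deadline_s=5.0, backoff_s=0.5):
    """Retry `operation` until it succeeds or the deadline expires.

    - "How long do we keep trying for?"     -> total_deadline_s
    - "What do we do while we are trying?"  -> sleep with backoff
    - "When we fail, how long does that take?" -> at most total_deadline_s
    - "How do we define fail?"              -> the exception we choose to catch
    """
    start = time.monotonic()
    delay = backoff_s
    while True:
        try:
            return operation()
        except ConnectionError as exc:  # "How many ways can we fail?"
            if time.monotonic() - start + delay > total_deadline_s:
                # Failure is a decision, not just an accident: give up
                # cleanly so the caller can degrade gracefully.
                raise TimeoutError("gave up after deadline") from exc
            time.sleep(delay)
            delay *= 2  # back off so we don't hammer a struggling server
```

The point is not the specific numbers but that each question has an answer written down somewhere a tester can argue with.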

I have two examples here: one of what I love to see, and one that really bothers me. The difference is subtle.

Happy Case:

In free Hotmail, which is supported by ads, when faced with a connection issue, I’m shown my email and an ad error. Good choice, Microsoft! The ads will give up and live to fight another day. They will accept failure to the ad server this once, still give me my email, and the next time I click, they get another chance.
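The asymmetry in that choice can be sketched in a few lines. This is not how Hotmail actually works — `fetch_email` and `fetch_ad` are made-up placeholders — it just shows the pattern of treating one dependency as required and the other as allowed to fail quietly.

```python
def render_inbox(fetch_email, fetch_ad):
    """Required content fails loudly; optional content fails quietly."""
    email = fetch_email()   # required: let this failure propagate
    try:
        ad = fetch_ad()     # optional: accept failure this once...
    except ConnectionError:
        ad = "(ad unavailable)"  # ...and get another chance next click
    return email, ad
```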

Also, while I’m pretty happy with this fail, it could be better. How else could it fail? When should they require success to show me my email?

Sad Case:
I’ve had to stop using Google Chat for a while. I often have many browsers and windows open at once. The combination of reading Gmail with a Google Chat window open, active, and polling for new updates is a performance battle to the death. I urge you to try this.

Setup: Firefox or Safari on the Mac, or IE on Windows
1. Read and respond to multiple Gmail messages in one browser tab or window.
2. At the same time, keep a Google Chat going with 5–10 minute lags between responses, keeping the window minimized and bringing it up periodically.
3. See how long you can tolerate it.

I was ready to kill within 20 minutes. How does this happen? Team silos, and far too little human scenario testing across applications. How can products known for speed be SO painfully slow when used together? It’s a subtle failure to let go, a persistence meant to avoid failure. Perfectionism hurts performance in some cases. Look for it. Think about when failure is the better option.