Mobile Testing

If you haven’t heard of it before, Weekend Testing brings testers together for a bustling Skype conversation on a testing-related topic.
You get all levels of experience and background, which fosters good conversation and insight, and it’s a great opportunity to exercise your testing brain. The session topic may also be something you don’t get a chance to explore in your day job.

This session centered on testing a mobile website.

Where are you using the site?

We are all aware that a mobile website can be viewed anywhere. What I hadn’t considered is the full range of considerations this draws into scope for testing. Michael described how, on train rides, he uses his phone mainly in landscape mode, with connectivity that may be limited or intermittent, and how that affects his experience using sites. So in that case you have multiple environmental considerations for testing, but you also have to consider that this usage pattern indicates this type of user may spend more time on the site. The longer a user is on the site, the more likely they are to use other features within it, like search and navigation controls. This was a great example of what Jean Ann (who was facilitating the session) suggested, which is:

When testing mobile anything, you will need to consider combining test conditions.

This also applies to shorter usage patterns. Users may need to read the site indoors or outdoors while on the move, switching between static content and video. Can the site handle these transitions?
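As a rough illustration of combining test conditions, the kinds of variables mentioned above can be crossed into a simple checklist. The specific condition values below are my own examples, not ones from the session:

```python
# A minimal sketch of combining mobile test conditions into a checklist.
# The condition values are illustrative examples, not from the session.
from itertools import product

orientations = ["portrait", "landscape"]
connectivity = ["wifi", "4G", "intermittent"]
content = ["static page", "video"]

# Every combination of the three condition lists becomes one test idea.
conditions = list(product(orientations, connectivity, content))
for orientation, network, media in conditions:
    print(f"Test: {orientation} / {network} / {media}")

print(f"{len(conditions)} combinations from 3 small lists")
```

Even three short lists produce a dozen combinations, which is the point of the heuristic: conditions that look trivial in isolation multiply quickly once you consider them together.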

Why are users on the site?

Mobile also means that user priorities vary wildly. Are they trying to pull a site up quickly to answer a question or solve an immediate need? We started the session looking at a site you might pull up to get a quick answer about today’s weather. This user may expect the site to understand their current location and use that context to streamline their experience. That’s a nice feature, but if a user wants information from outside their current location, the site needs to support that just as well.

The session shifted focus away from that site since it was not a pure mobile website but a mobile web app. We didn’t have a ton of time to get into specifics, but I think this difference, and what it means for a tester, might be a topic of interest for many testers.

The site we shifted to was CNN, and the difference in experience was drastic. I’d encourage you to pull up both sites on your mobile phone and take a look. CNN is immediately more usable than the previous site. Since it’s a news site, its scope is larger and it contains multiple child sites for more focused news. It makes sense that CNN might place a higher value on the mobile experience, since it seems more likely that a user may spend more time on the site and need to navigate to multiple stories and different sections.

How are they navigating?

This difference in experience brought up the topic of “train-ability”, meaning a site should help train its users on how to use and consume its content. Some trainers are good and some leave a lot to be desired, resulting in some less than ideal habits being developed.

Frequent use of pinch and zoom, or constant swiping on the screen of a mobile site, were discussed as signs of a site’s usability problems. Also, any time the site requires a user to break out of the website experience and use browser controls like the back button or address bar to navigate or refresh should be considered a possible red flag.
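One quick, automatable heuristic along these lines (my own assumption, not something discussed in the session) is checking whether a page even declares a viewport meta tag; pages without one typically render at desktop width on a phone and force exactly the pinch-and-zoom behavior described above:

```python
# A rough first-pass heuristic: a page without a viewport meta tag often
# forces pinch-and-zoom on phones. This check is my own assumption, not
# something from the session, and is only a coarse signal, not a verdict.
import re

def has_viewport_meta(html: str) -> bool:
    """Return True if the HTML declares a viewport meta tag."""
    pattern = re.compile(r'<meta[^>]+name=["\']viewport["\']', re.IGNORECASE)
    return bool(pattern.search(html))

# Hypothetical page snippets for illustration.
mobile_page = '<head><meta name="viewport" content="width=device-width"></head>'
legacy_page = '<head><title>Desktop only</title></head>'

print(has_viewport_meta(mobile_page))  # True
print(has_viewport_meta(legacy_page))  # False: a pinch-and-zoom red flag
```

A passing check obviously doesn’t make a site touch friendly, but a failing one is a cheap way to flag pages worth a closer manual look.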

Consistency across the experience also came up as we navigated through the CNN child sites. As a user you could feel that some sites were created or managed by different teams. This presented challenges since, in some cases, the paradigms you used to navigate to a page were not present to navigate back out of it.


A mobile site needs to be touch friendly. It doesn’t need to be super optimized, but there’s nothing worse than battling a site on your phone. My least favorite thing is trying to find where my finger can swipe to scroll without accidentally hitting an unexpected link or, worse, an ad.

One particularly interesting thing came up in the session regarding usability for testers: bias. It’s particularly relevant because most of us use mobile phones or other mobile devices all the time. This can lead to blind spots while testing, since it can be very hard to accurately portray how different users might actually be using the site.

Dawn brought up the use of crowd-sourced testing to try and prevent bias issues. It allows a broad spectrum of users to test, but we weren’t sure of the overall quality it might produce or how it could be successfully integrated into a team with dedicated testers. It’s something to be aware of because even with testing personas defined, you still need to be a good actor in order to accurately step into the shoes of that user.

For some of the physical issues relating to usability Michael shared a toolkit from the University of Cambridge that includes things like glasses to simulate color blindness and gloves for arthritis.

Takeaways

Study users – watch how people use their phones and imagine the effects on the site being tested. Use that insight to limit your bias and gain deeper insights.

Mobile sites need to first be functional; users will leave a site and find another rather than fight a bad experience.

Lastly, I thought a helpful oracle for mobile testing is comparing the usability of the desktop site to the mobile site. Even if the experience is different, can a user achieve comparable results on a mobile device?

You can also check out the Weekend Testing site for session details, including another experience report and the full chat transcript.