Getting started with remote moderated usability testing

Posted on January 1, 2016 by Josh Tong in Theory and practice, Tools and resources

Remote moderated usability testing can be an inexpensive way to gain insights about your users’ needs—even when users live in developing countries halfway around the world. Photo by HeatherLWilliams, CC BY-SA 2.0, with color correction.

Several people have asked how I conduct remote moderated usability testing for international nonprofits. I’d like to share my approach.

Here’s the three-sentence version:

I use Skype to share screens with users who work in developing countries. I ask users to complete a set of tasks, and I observe which steps give them trouble. We talk to each other while sharing screens so users can ask questions and give feedback in real time.

In this post, I’ll briefly describe the main aspects of remote moderated usability testing. I’ll suggest when to use this research method and which software to consider. I’ll also share my process for conducting the research and a few resources for further reading.

My hope is that more international and domestic nonprofits will begin experimenting with usability testing. It’s easy to get started:

  • You don’t need a budget for software or travel.
  • Most usability tests last only 15–30 minutes.
  • You can schedule the sessions at your participants’ convenience.
  • You don’t need to be a professional researcher to obtain useful results.

As Erika Hall says, “People who make design decisions at any level benefit from asking more and better questions.” The more you know about your users, the better you can meet their needs.

What is remote moderated usability testing?

Usability testing: In a usability test, a researcher gives a user a set of tasks and observes how the user attempts to complete each task. Because people often say one thing but do another, it’s important to observe users’ behavior rather than just ask for their opinions. Usability testing allows you to see which steps in a process are causing problems even if users can’t articulate the source of their problems.

Moderated: A moderated study is one in which the researcher talks to the user during the test. The researcher usually introduces the purpose of the activity and explains how it will proceed. Then the researcher asks the participant to perform a series of tasks while the researcher observes the participant’s screen, listens to the participant’s comments, and answers questions as they come up.

Remote: This means that the researcher and the participants do not meet at the same physical location for the study. Instead, they use software to share the participant’s screen. They may also use a phone or webcam to talk to each other during the usability test.

When to use it

This research method isn’t right for every study. I use it when I can answer “yes” to each of the following questions:

  • Do I need to gauge how well an existing design works for users? (In other words, do I need evaluative research instead of generative research?)
  • Do I need oral or visual feedback about complicated tasks?
  • Do I need qualitative insights rather than statistically significant quantitative results?
  • Would it be difficult to meet participants in person?

I consider doing remote moderated usability testing when I need to see firsthand how users in other countries would interact with an interface, information architecture, or task flow.

I try to test my work early and often. The sooner I discover problems, the easier it is to fix them.

Software for remote usability testing

To conduct remote moderated usability testing, you’ll need software that allows you to view a participant’s screen. You’ll also need a way to communicate with testers in real time. In addition, some software allows you to record the session so you can review it later or show it to stakeholders. (If you use this feature, remember to ask permission from your users first!)

Many programs can handle screen sharing, but I keep things simple by using Skype and a webcam. Skype is ubiquitous in the relief and development community, so most of my users are very comfortable with it. Other software has more features, but Skype is free.

My process for conducting the research

1. Review analytics

Analytics can help you identify underperforming parts of websites—the parts that users are overlooking or abandoning. These are natural candidates for usability testing.

In Google Analytics, event tracking and conversion funnels can give you data about how often users complete key tasks and subtasks.
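As a concrete illustration (not from the original post), this is roughly what an event hit looks like with Google's analytics.js library. The category, action, and file path are hypothetical, and the `ga` stub stands in for the real analytics.js loader so the snippet is self-contained:

```javascript
// Illustrative stub: on a live page, the analytics.js snippet defines ga().
// We record hits in an array here so the example runs on its own.
var hits = [];
function ga() { hits.push(Array.prototype.slice.call(arguments)); }

// Record a download of a (hypothetical) annual report as an event.
// Real analytics.js signature: ga('send', 'event', category, action, label)
ga('send', 'event', 'Downloads', 'click', '/reports/annual-report.pdf');
```

Once events like this are flowing in, you can count how often a key task (here, downloading a report) is completed, and build conversion funnels around it.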

This can be a useful starting point, but remember that analytics only tell you how often something happened, not why it happened or whether users were satisfied with the experience. Fortunately, usability testing can help to answer some of these questions.

2. Make a list of the top tasks to evaluate during the session

What do users need to do? It will depend on your industry and your users. Make a list of the top tasks that users need to accomplish, and then outline how users would interact with your organization to complete these tasks.

Ideally, this list of tasks should be based on previous research into which tasks users want to perform. Whenever possible, avoid making assumptions about what users want to do. Otherwise, you could end up testing things that don’t matter to users.

For more information about how to create a user-centered list of top tasks, see Gerry McGovern’s “What Really Matters: Focusing on Top Tasks.”

3. Draft a script

I use Steve Krug’s usability test script (PDF, 221 KB) as a template. On page 5, I add plenty of room under each task to define what I want to measure.

It’s possible to collect some quantitative data, such as success rate, time to complete the task, and number of attempts, but this isn’t the strength of moderated usability testing.

Instead, moderated usability testing is better for collecting rich qualitative data, since this method provides an opportunity to experience digital products and services from the perspective of a handful of your users. Qualitative data can take the form of users’ questions or comments, your observations about how users attempted to complete a task, or your insights about why users may be confused.
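If you do record the quantitative measures mentioned above (success rate, time on task, number of attempts), they are easy to tabulate. A minimal sketch in Python—the session records below are hypothetical, purely for illustration:

```python
# Hypothetical records from three sessions of the same task.
sessions = [
    {"task": "find report", "success": True,  "seconds": 95,  "attempts": 1},
    {"task": "find report", "success": False, "seconds": 240, "attempts": 3},
    {"task": "find report", "success": True,  "seconds": 130, "attempts": 2},
]

# Aggregate the three measures across sessions.
success_rate = sum(s["success"] for s in sessions) / len(sessions)
avg_seconds = sum(s["seconds"] for s in sessions) / len(sessions)
avg_attempts = sum(s["attempts"] for s in sessions) / len(sessions)

print(f"Success rate:  {success_rate:.0%}")   # → 67%
print(f"Avg. time:     {avg_seconds:.0f} s")  # → 155 s
print(f"Avg. attempts: {avg_attempts:.1f}")   # → 2.0
```

With only a handful of participants, treat numbers like these as rough signals to discuss alongside your qualitative notes, not as statistics.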

4. Test the script

Typically, I ask a colleague to do a dry run with me. This is the perfect opportunity to identify potential problems.

  • Are the instructions clear?
  • Is the script too long or too short?
  • Am I collecting the right amount and type of data?
  • Could one activity bias a participant’s behavior during the next activity?
  • Were there any surprises that reveal assumptions I should reconsider?

5. Invite users to participate

You only need a few participants. The Nielsen Norman Group has repeatedly shown that “the best results come from testing no more than 5 users” and from conducting periodic small tests rather than a single large test.
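The estimate behind that claim comes from Nielsen's published curve for the share of usability problems found by n users, 1 − (1 − L)^n, where L ≈ 31% is the proportion of problems a single user uncovers. A quick sketch:

```python
def problems_found(n, L=0.31):
    """Nielsen's estimate of the share of usability problems
    uncovered by n users, assuming each user finds L of them."""
    return 1 - (1 - L) ** n

for n in range(1, 6):
    print(f"{n} users: {problems_found(n):.0%} of problems found")
# Five users uncover roughly 84% of the problems—hence the advice
# to run several small tests rather than one large one.
```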

However, it’s important to select participants who are similar to your intended users. To find representative users, you might start by searching a list of your organization’s e-mail subscribers, social media followers, and CRM profiles. Sometimes colleagues will also be able to nominate representative users. Just make sure that you have clear criteria for screening participants.

For more advice about selecting participants, see usability.gov’s “Recruiting Usability Test Participants.”

6. Conduct the study

This is the fun part, and it tends to go smoothly when I’ve given enough attention to steps 1–5. However, there’s always a possibility that slow Internet connections and old technology could cause problems. (This is true for any remote usability test, but it’s especially important to remember when participants are in developing countries.)

Technical difficulties can be frustrating for participants, so I try to mention at the beginning of the session that we may experience problems with slow connections. I thank people in advance for their willingness to help, and I say that it’s absolutely fine if we need to end the session early. I want people to know that I respect their time and that I’ll let them off the hook if the technology misbehaves.

Most sessions should be no longer than 15–30 minutes. If a participant starts to sound fatigued, I consider ending the session early. After all, participants are doing me a big favor by participating in the study, so I want each session to be a good experience.

Other resources

For practical guidance about how to conduct usability testing, see Steve Krug’s Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems.

To learn more about conducting remote research, I recommend Nate Bolt and Tony Tulathimutte’s Remote Research: Real Users, Real Time, Real Research.

Two excellent resources about lean research are Just Enough Research by Erika Hall and Lean UX: Applying Lean Principles to Improve User Experience by Jeff Gothelf with Josh Seiden.

If you’d like to learn more about usability testing and twelve other practical research methods, see Observing the User Experience: A Practitioner’s Guide to User Research by Elizabeth Goodman, Mike Kuniavsky, and Andrea Moed.

Josh Tong is a content strategist in Washington, DC. He helps colleagues create powerful content through research, strategy, and implementation.

“Getting Started with Remote Moderated Usability Testing” by Josh Tong is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
