Analyze the Results


What data will you have?

At the end of the usability test, you will probably have several types of data.

  • Quantitative data, which might include:
      • success rates
      • time to complete tasks
      • pages visited
      • error rates
      • ratings on a satisfaction questionnaire
  • Qualitative data, which might include:
      • notes of your observations about the pathways participants took
      • notes about problems participants had
      • notes of what participants said as they worked
      • participants' answers to open-ended questions
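To make the two kinds of data concrete, here is a minimal sketch, in Python, of how one participant's attempt at one task could be recorded. Every field name and value is illustrative only, not part of any standard format.

    # One record per participant per task; field names and values are illustrative.
    record = {
        "participant": "P01",
        "task": "Find a clinical trial",
        # Quantitative measures
        "success": True,             # completed the task?
        "time_seconds": 142,         # time to complete the task
        "pages_visited": 6,
        "errors": 1,
        "satisfaction_rating": 4,    # e.g., a 1-5 questionnaire rating
        # Qualitative notes
        "pathway_notes": "Home > Research > back > Clinical Trials",
        "problems": ["Clicked on link to Research instead of Clinical Trials"],
        "quotes": ["I expected trials to be under Research."],
    }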



What do you do with quantitative data?

Consider using a spreadsheet for the quantitative data. A spreadsheet lets you quickly see patterns in the data and use formulas to calculate information that you might want to share, such as the following (one way to script these calculations is sketched after the list):

  • percentage of participants who succeeded or not at each task
  • average time to complete tasks
  • average number of pages visited in each task
  • frequency of specific problems
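If you would rather script these calculations than type spreadsheet formulas, a rough Python sketch follows. It assumes a CSV file (here called results.csv) with one row per participant per task and columns like those in the record above; the file name and column names are assumptions for illustration.

    import csv
    from collections import defaultdict

    # One row per participant per task; file name and column names are
    # assumptions for illustration.
    by_task = defaultdict(lambda: {"success": [], "time": [], "pages": []})

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            task = row["task"]
            by_task[task]["success"].append(row["success"].strip().lower() == "yes")
            by_task[task]["time"].append(float(row["time_seconds"]))
            by_task[task]["pages"].append(int(row["pages_visited"]))

    for task, measures in by_task.items():
        n = len(measures["success"])
        print(f"{task}:")
        print(f"  success rate: {100 * sum(measures['success']) / n:.0f}%")
        print(f"  average time to complete: {sum(measures['time']) / n:.0f} seconds")
        print(f"  average pages visited: {sum(measures['pages']) / n:.1f}")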

If you find that different participants had very different results, those differences may correlate with some of the demographic factors you captured through the screening questionnaire or through the participant profile questions that you asked at the beginning of the sessions. You may want to add columns to your spreadsheet for each participant's answers to demographic questions. Then you can sort by demographics and see if the other data correlate with specific demographic features.
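A sketch of that sorting step in code: group the rows by one demographic column (here an assumed column called age_group) and compare success rates across the groups. Everything here builds on the same assumed results.csv layout as the sketch above.

    import csv
    from collections import defaultdict

    # Compare success rates across one demographic feature.
    # "age_group" and the other column names are assumptions for illustration.
    by_group = defaultdict(lambda: [0, 0])   # group -> [successes, attempts]

    with open("results.csv", newline="") as f:
        for row in csv.DictReader(f):
            group = row["age_group"]
            by_group[group][1] += 1
            if row["success"].strip().lower() == "yes":
                by_group[group][0] += 1

    for group, (successes, attempts) in sorted(by_group.items()):
        print(f"{group}: {100 * successes / attempts:.0f}% success ({attempts} attempts)")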



What do you do with qualitative data?

Read through all the notes carefully, looking for patterns. In addition to the spreadsheet for quantitative data, you might open a second spreadsheet for problems. Each row would be the description of one problem, and each column would represent one of the participants. You can then put a "yes" or a 1 for each participant who had the problem.

Also have a column for notes/comments: as you go through the data, you will probably want to record reminders of what a participant did or insights for your report. Add another column to indicate which task/scenario the information in that row came from, so that you can report by task/scenario if that would be useful to the development team.
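The sketch below builds that kind of problem spreadsheet in Python: problems as rows, one column per participant, plus task and notes columns. The file name, participant labels, and example problem are assumptions for illustration.

    import csv

    # Problems as rows, one column per participant, plus task and notes columns.
    # Participant labels and the example problem are illustrative only.
    participants = ["P01", "P02", "P03", "P04", "P05"]

    problems = [
        {
            "problem": "Clicked on link to Research instead of Clinical Trials",
            "task": "Find a clinical trial",
            "had_problem": {"P01", "P03", "P04"},   # participants who hit it
            "notes": "The label 'Research' reads as broader than it is",
        },
        # ... one dictionary per observed problem
    ]

    with open("problems.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["problem", "task"] + participants + ["notes"])
        for p in problems:
            marks = [1 if pid in p["had_problem"] else "" for pid in participants]
            writer.writerow([p["problem"], p["task"]] + marks + [p["notes"]])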

When you begin to analyze the data, be as specific as you can about what the participant did. A good problem statement, like a good note, tells what the participant did and contrasts that with what the participant should have done.

Good problem statement: Clicked on link to Research instead of Clinical Trials.
Poor problem statement: Clicked on wrong link.
Poor problem statement: Was confused about links.

As you accumulate these problem statements, look for patterns, because in the end you will probably want to report on the patterns, with the specific statements as examples.
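One way to surface those patterns is to tally how many participants hit each problem. The sketch below reads the problems.csv layout from the earlier sketch (an assumed layout, not a standard one) and lists problems from most to least frequent.

    import csv

    # Tally how many participants hit each problem, using the problems.csv
    # layout sketched earlier (file name and column names are assumptions).
    with open("problems.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    participant_cols = [c for c in rows[0] if c not in ("problem", "task", "notes")]
    counts = [(sum(1 for c in participant_cols if row[c] == "1"), row["problem"])
              for row in rows]

    for count, problem in sorted(counts, reverse=True):
        print(f"{count} of {len(participant_cols)} participants: {problem}")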



How do you know which results are most important?

As you review the results, consider:

  • how "global" the problem is throughout the site
  • how severe (or serious) the problem is

Local versus global problems

The scenarios you use in a usability test are only sampling the site. You are not testing every possible pathway or reaching every page. What you learn may affect pathways and pages that you did not see in the test.

For example, suppose that in one scenario participants cannot find what they need because the page they come to is so dense with text that they do not want to read it. You could say that specific page has problems and needs to be fixed. But you should also consider how many other pages in the site are equally dense with text. If you had chosen a scenario that led to those pages, how likely is it that you would have gotten the same results? Your finding may have implications for many other pages: overly dense text may be a global problem within the site.

Levels of severity

Some problems do more than others to keep participants from completing the scenarios. Many groups rate the severity of the problems they identify on a three-point or four-point scale. One possible scale, used in the sketch after the list, would be:

  • Show stopper: If we don't fix this, users will not be able to complete the scenario.
  • Serious: Many users will be frustrated if we don't fix this; they may give up.
  • Minor: Users are annoyed, but this does not keep them from completing the scenario.
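A small sketch of how such a scale could be used to prioritize findings: rank by severity first and by how many participants were affected second. The severity labels follow the scale above; the example findings are illustrative only.

    # Order findings by severity first, then by how many participants hit them.
    # The severity labels follow the scale above; the findings are illustrative.
    SEVERITY_RANK = {"Show stopper": 0, "Serious": 1, "Minor": 2}

    findings = [
        {"problem": "Dense text on the results page discouraged reading",
         "severity": "Show stopper", "participants_affected": 6},
        {"problem": "Clicked on link to Research instead of Clinical Trials",
         "severity": "Serious", "participants_affected": 4},
        {"problem": "Did not notice the print link",
         "severity": "Minor", "participants_affected": 2},
    ]

    for finding in sorted(findings, key=lambda x: (SEVERITY_RANK[x["severity"]],
                                                   -x["participants_affected"])):
        print(f"[{finding['severity']}] {finding['problem']} "
              f"({finding['participants_affected']} participants affected)")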



Next steps

When you have analyzed the data, you are ready to Prepare the Usability Test Report.
