Comparing Results

Standing all by itself, a number does not convey much information about health care quality. People need some context for understanding what the number means and how to evaluate it. Is it high? Is it low? Is it good? Is it bad? In the early days of public reports on health care quality, some health plans tried to educate their members by releasing information on their own performance—but without a point of comparison, consumers did not know how to interpret the results. Since that time, many sponsors and researchers have invested a great deal of time and effort into figuring out how to help consumers understand what performance results mean.

This section discusses ways to handle two key tasks: offering a point of comparison and presenting comparative information.

Offer a Point of Comparison

Workbook Reminder: Question 24

To help consumers understand a piece of information, you have to offer them a way to compare the information to some other piece of data. This enables your audience to interpret the information and determine how it applies to their health care decisions. Comparative data also allow people to deal with a large amount of information because they can use it to narrow their choices, eliminate options, and make trade-offs.

There are four common strategies for helping people make comparisons:

Show Results for Multiple Organizations

The most straightforward way to provide some context for performance information is to present all the results together in the same table or chart. This simple approach allows your audience to see how each health care organization's scores compare to those of other organizations; it also means that readers must judge for themselves whether one score is really better than the others and whether it is objectively good.

This approach is particularly valuable when you don't know what an appropriate comparison point would be. In some cases, it is not appropriate to calculate and display an average or it is not possible to suggest a rate that is both ideal and achievable.


Show Results Compared to Average Performance

The most common way to provide context for scores is to show how they compare to the average of plans or providers in the area. This allows consumers to see an organization's performance relative to whatever else is available in the local market. It also encourages all participants to do well, since no one knows where they fall relative to the average until the measures are calculated.

The downside of this approach is that above-average performance may not necessarily be good; it could just be that nobody is performing well on that measure. Similarly, below-average performance may not really be that bad if all plans or providers are reporting high scores.

Example: Medicare Compare Web Site of the Centers for Medicare & Medicaid Services (PDF file, 15 KB; HTML)

 

Example: 2000 New Jersey HMO Performance Report: Compare Your Choices. On the Web site, select any category to see a comparison of plan scores to the State average.

 

Example: Comparing Texas HMOs 1998 (PDF file, 70 KB; HTML)
© Copyright 1998. Texas Office of Public Insurance Council. All Rights Reserved. Used with Permission.

 

Example: The Oregon Coalition of Health Care Purchasers' 1999 report, Health Plan Quality from the Consumer's Point of View (PDF file, 224 KB; HTML)
© Copyright 1999. Oregon Coalition of Health Care Purchasers (OCHCP). All Rights Reserved. Used with Permission.

 

Which Average?

Most sponsors like to limit their reporting to the plans or providers that are most relevant to their audience; this is also the approach preferred by many consumers, who can get confused by information about organizations that are not available to them.

But it is not always possible to use available organizations as the basis for an average. In some cases, an average score is simply not available. For example, satisfaction data may come from a statewide survey that doesn't lend itself to market-by-market breakdowns. Or you may not have enough data to calculate a meaningful average for comparison purposes. Consider a purchaser that offers employees a choice of three HMOs and a PPO. It's possible to calculate average scores based on the results of those four plans, but that figure would have little meaning. It would be more useful to show how the rates compare to State or regional averages so that consumers can tell which scores are really relatively good or bad.


Show Results Compared to a Benchmark or Goal

An alternative approach is to show how the score compares to an external standard that represents the ideal or best possible performance. The purpose is to demonstrate how good the score could have been, so that the consumer can assess the difference between an organization's actual result and what it could achieve.


Where You Find Benchmarks

Common benchmarks for individual measures include:

  • Public health objectives, with the most popular being those spelled out in a report called "Healthy People 2010," which was produced by the U.S. Department of Health and Human Services.
  • The top score of similar plans or provider groups in the state, the region, or even the country, which tells consumers what a health care organization could achieve.

Example: Developing Health Plan Performance Reports: Responding to BBA (PDF file, 162 KB; HTML), a report from RAND to the Health Care Financing Administration (now the Centers for Medicare & Medicaid Services). For the full report, select to Order Reports from RAND.
© Copyright 1999. RAND Health. All Rights Reserved. Used with Permission.

 

Example: The Madison Alliance QualityCounts™ report (PDF file, 759 KB; HTML).
© Copyright 2000. Employer Health Care Alliance Cooperative. All Rights Reserved. Used with Permission.

 

  • Goals negotiated by the sponsor and those being evaluated.

The bigger challenge is to identify a fair benchmark for summary scores, which are new in the marketplace. Here are some examples of approaches that are currently being proposed and tested:

The National CAHPS® Benchmarking Database (NCBD): To provide a point of comparison for the CAHPS® composites, the NCBD has established benchmarks at the 90th percentile, i.e., the benchmark is based on the scores achieved by the top 10 percent of health plans reporting to the database.
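For illustration only, here is a minimal sketch of how a percentile-based benchmark like this could be derived from a set of plan scores. The scores are invented and the use of Python's statistics module is our own choice; the NCBD's actual calculation may differ.

```python
# Hypothetical sketch: deriving a 90th-percentile benchmark from plan scores.
# The scores below are invented; the NCBD's actual methodology may differ.
import statistics

plan_scores = [62.0, 68.5, 71.0, 73.5, 74.0, 76.5, 78.0, 81.0, 84.5, 88.0]

# statistics.quantiles with n=10 returns the nine decile cut points;
# the last one is the 90th percentile of the distribution.
benchmark = statistics.quantiles(plan_scores, n=10)[-1]

print(f"90th-percentile benchmark: {benchmark:.1f}")
```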

Example: NCBD Template (PDF file, 34 KB; HTML) for a comparative graphic.
© Copyright 1999. Oregon Coalition of Health Care Purchasers (OCHCP). All Rights Reserved. Used with Permission.

 

RAND: For summary scores based on HEDIS® as well as CAHPS® measures, RAND has tested two different approaches in its work for the Health Care Financing Administration (HCFA), now the Centers for Medicare & Medicaid Services (CMS), and for the CARS project:

  • For HCFA (now CMS), RAND recommended comparing each plan to the "perfect plan," a fictional construct that would have the best score of any health plan in the country for every measure.
  • For the CARS project, RAND compared each plan's performance to that of the best performer in the Nation in that category.

Select to Order Reports from RAND that discuss these approaches.

FACCT: FACCT (the Foundation for Accountability) shows the summary scores for each of its categories relative to the score of the best performer in the State or region.


Two Ways to Use Benchmarks

Sponsors can use benchmarks for either of the following purposes, although the first one is much more common:

As a "gold standard". One use of external goals is to establish a gold standard that everyone can strive for. The benefit of this approach is that it raises the bar for health care organizations in the local marketplace, forcing them all to pay attention to their performance. The risk is that no one will achieve the goals, creating a negative perception among consumers of all their choices.

As a minimum standard. Another less common option is to use goals to set a floor, or a minimum standard that all participants must achieve. This ensures a basic level of quality (which can be reassuring to consumers), but provides few incentives for health care organizations to improve. Purchasers often use goals in this way as a management tool, rather than as a strategy to share with consumers.

Displaying Benchmarks

Sponsors typically display benchmarks as an explicit point of comparison. For example, the benchmark would be a data point in a table or graphic display that allows the reader to judge how closely the performance of the health care organizations approached the "gold standard."

But new methodologies are incorporating benchmarks into scores in an effort to produce results that convey some idea of how good the performance is. Both FACCT and RAND are testing ways to calculate scores that reflect performance relative to a standard for a given measure.

FACCT's approach: FACCT has proposed a methodology that uses benchmarks to transform actual results, which cannot be judged on their own, into scores that have their context "built-in." Its methodology would also make it harder for consumers to distinguish between plans that do not really differ in their performance.

Select for information on FACCT's Approach to Incorporating Benchmarks into Scores.

RAND's approach: In its report to HCFA, RAND proposed a method for calculating summary scores that reflect each plan's performance relative to a benchmark, where the benchmarks are the best observed performance for each measure. For each plan, RAND would divide the actual score on a measure by the benchmark score, creating a new score on a 0-100 scale (where 100 represents the benchmark).
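As a rough illustration of this kind of calculation (not RAND's actual code or data), the sketch below divides each plan's hypothetical rate by the best observed rate and rescales the result to a 0-100 scale.

```python
# Hypothetical sketch of a benchmark-relative score: divide each plan's
# actual rate by the best observed rate and rescale to 0-100.
actual_rates = {"Plan A": 71.0, "Plan B": 76.5, "Plan C": 84.0}

benchmark = max(actual_rates.values())  # best observed performance

relative_scores = {
    plan: round(rate / benchmark * 100)
    for plan, rate in actual_rates.items()
}

print(relative_scores)  # {'Plan A': 85, 'Plan B': 91, 'Plan C': 100}
```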

These two uses are not mutually exclusive. Both FACCT and RAND would calculate scores relative to a benchmark, then provide even more context by displaying those scores relative to the best in the market, in the region, or in the Nation.


What Consumers Think of This Approach

Consumers respond well to comparisons of quality scores to goals or benchmarks. Here's what we know about their reactions to specific approaches:

  • Comparisons to actual performance. Researchers have found that consumers generally like comparisons to actual, achievable performance (e.g., comparisons to the best results in the local market). This kind of information helps them see how easy it was to exceed the goal; they think more poorly of a plan that can't meet a standard that all its competitors can achieve.
  • Comparisons to negotiated goals. Consumers are not comfortable with benchmarks based on negotiated goals. If no plans meet the standard, they are left with the impression that none of their options are good; if all plans exceed the benchmark, they wonder if it was set high enough. Finally, in many cases, consumers do not understand the basis for a goal, which means that sponsors have to be prepared to explain and defend their decisions.
  • Comparisons to public health goals. In a 1996 study by NCQA of the use of Healthy People 2000 (now 2010) goals, researchers found that consumers questioned the public health standards and their relevance to them as individuals. (Source: Results of Consumer Interviews to Improve Health Plan Report Cards, National Committee for Quality Assurance, June 1996.) They may have a point. For example, public health goals are set for the entire population; they may not be ambitious enough for the segment of the population that is commercially insured. Also, public health goals and quality measures do not always have the same specifications. Since they may be based on different variables and calculations, they may not be comparable in an "apples-to-apples" way.
  • Comparisons to the performance of unavailable plans. It appears that consumers like to see benchmarks even if they are based on the performance of organizations that are not available to them (e.g., where the standard reflects the best performance of any plan in the Nation, region, or State). FACCT's testing with consumers, for example, suggests that they want to know how their options compare to the "best-in-class"—even if the best performers are not among their choices.

However, the use of benchmarks raises the possibility that consumers who are disappointed in the quality of their choices relative to the benchmark will question the purchasers' decisions. Sponsors that develop information for purchasers to use should recognize that employers may refuse to use data on organizations they do not offer in order to limit protests by their employees. One sponsor, the Maryland Health Care Commission, addressed this concern by putting all of its health plan data on a Web site, enabling local employers to customize the report as needed. Others have customized reports for specific employers.

On the other hand, sponsors should keep in mind that comparisons to the best performers, whether locally or Nationally, are an important tool for motivating health care organizations to improve their quality. Whether or not consumers are interested, health plans and providers are often spurred to action by data that shows how well others can provide the same services.

Rank Organizations Based on Their Results

A less common way to help people make comparisons is to rank organizations in order of performance, from best to worst. In theory, you could do this for each measure or category, but multiple lists would be unwieldy for consumers. This strategy is best reserved for major summary scores, so that consumers only need to look at one list.

The biggest problem with this approach is the complexity of devising a methodology for ranking the health care organizations. To determine who has the best performance, the sponsor must select and weight criteria in a way that is equitable and justifiable. Also, this strategy does not allow the consumer to determine which measures are most important.
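To see why the choice of weights matters, consider the hypothetical sketch below. The plans, categories, weights, and scores are invented; the point is simply that shifting weight from one category to another can change which plan ranks first.

```python
# Hypothetical sketch: a weighted composite ranking. The plans, categories,
# weights, and scores are invented for illustration only.
category_scores = {
    "Plan A": {"prevention": 82, "satisfaction": 70, "access": 75},
    "Plan B": {"prevention": 74, "satisfaction": 80, "access": 78},
}

weights = {"prevention": 0.5, "satisfaction": 0.3, "access": 0.2}

def composite(scores):
    """Weighted sum of a plan's category scores."""
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(category_scores,
                key=lambda plan: composite(category_scores[plan]),
                reverse=True)

print(ranked)  # ['Plan A', 'Plan B']; giving satisfaction more weight flips it
```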

Example: U.S. News & World Report Rankings (http://www.usnews.com/usnews/nycu/health/hehmohon.htm)

 


Present Comparative Information

Workbook Reminder: Question 25

Once you've chosen a point of comparison for your quality data, the next task is to decide how to present the comparative data in a way that people can grasp easily and quickly.


Why Presentation Is Important

The approach you choose for presenting data is important because it will determine how well consumers understand the data, how they interpret it, and whether and how they use it to make health care-related decisions.

Understanding the Data

It seems obvious to say that presentation can determine the extent to which a reader comprehends data on quality; few of us can easily find our way through large tables thick with numbers or complex graphs filled with unfamiliar terms. Yet many reports on health care quality offer data in a manner more appropriate for statisticians than the typical reader. And many sponsors fail to apply the most basic rules of good design and clear writing to make sure that consumers can understand their presentations of data. 

Interpreting and Using the Data

Presentation also plays a key role in determining how people interpret the data you share with them. One of the challenges for sponsors is to recognize that consumers are easily influenced by the ways in which you display information. Traditional economic models presume that consumers approach information about goods and services with an established set of preferences and values—that they know what is important to them, what they like, and what they want. However, researchers have found that health care consumers don't actually know what they want. In fact, consumers change, and even construct, their preferences with respect to health care in reaction to the content of the information as well as the way that it is presented to them. For instance, in testing with consumers, researchers noted that people would say that they valued one category of quality information over two others, but then gave more weight to the category for which they were shown more data.

Presentation also affects what consumers pay attention to. Consumers give greater weight, and sometimes even limit their focus, to information that is presented more clearly and seems more concrete than other data. For example, information about costs can be expressed in dollars per month, a concept that people can easily relate to and understand. Quality measures, on the other hand, tend to be abstract, vague, and unfamiliar; for example, few of us think about percentages of populations on a daily basis. Consumers will simply disregard information that might be quite valuable to them if it is too hard to interpret or poorly presented or explained.

For a list of articles on how presentation can affect how consumers respond to information, go to Comparing Results: Interpreting and Using the Data.


Presenting Information in Absolute Terms

In the context of report cards, "absolute" refers to a display of scores that are not explicitly compared to each other. This straightforward approach lets consumers draw their own conclusions.


Pros and Cons of the Absolute Approach

This strategy has its pluses and minuses.

The downside is the possibility that consumers will misinterpret the data. Displays of absolute scores are not self-explanatory. What do the scores mean? Do organizations with small differences in scores really perform differently? In testing, researchers have found that people vary in what they consider a meaningful difference in scores; some think any difference matters, while others pay more attention to a range of performance. Since presentations of absolute scores are meant to help consumers compare their options, you must give your readers some guidance.

The advantage is that presenting absolute information is simple for sponsors: all you have to do is array the scores in a table or graph. People are accustomed to seeing scores this way; they know to focus on the organizations with the highest scores. While there's a chance that consumers will misinterpret the information (which could happen, for instance, if the highest score is not necessarily the best score), absolute data can be less confusing to some audiences than displays of relative data, which usually rely on symbols to communicate differences between organizations. 

In most cases, absolute results are also useful for health care organizations because they provide a specific and concrete picture of their performance. This enables the purchasers and/or the health care organizations to use the information to establish improvement goals and assess their progress over time.

How to Portray Absolute Information

Absolute information is typically displayed in one of the following two ways:

Bar graphs

Bar graphs can portray one piece of data (e.g., the percentage of eligible female enrollees who received mammograms) or multiple pieces of data (e.g., the percentage of members who provided any of three possible responses to a survey question).


Benefits of bar graphs: Bar graphs tend to be the preferred vehicle for communicating absolute information because they combine a picture with numbers, with each reinforcing the other. This allows readers to focus on whatever they understand best. In addition, bar graphs enable you to convey some kinds of information more effectively than a table can. Suppose you want to show a breakdown of survey responses: the percentage who reported being very satisfied and the percentage who reported being satisfied. If you use a table, the reader would have to look at both responses, add them together, and try to keep track of the total number of positive responses for each organization. But if you use a stacked bar graph, the reader need only determine visually which bar is the longest.
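The sketch below illustrates the idea with invented survey data and the matplotlib plotting library; it is one possible way to build such a stacked bar, not a prescribed format.

```python
# Hypothetical sketch: a stacked horizontal bar lets the reader see the
# combined positive response without adding numbers from a table.
import matplotlib.pyplot as plt

plans = ["Plan A", "Plan B", "Plan C"]
very_satisfied = [38, 45, 30]   # percent of respondents (invented)
satisfied = [34, 30, 42]        # percent of respondents (invented)

fig, ax = plt.subplots()
ax.barh(plans, very_satisfied, label="Very satisfied")
ax.barh(plans, satisfied, left=very_satisfied, label="Satisfied")
ax.set_xlabel("Percent of survey respondents")
ax.legend()
plt.show()
```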

Disadvantages of bar graphs: The downside of bar graphs is that they can quickly become complicated once you move beyond a simple one-part bar. For instance, grouped bars (which are often used to show trends from year to year) are very hard for people to follow. Unless it is designed carefully and labeled clearly, this kind of chart is a challenge for readers trying to make comparisons.

Stacked, or split, bars with three or more segments are also a challenge for consumers, especially when it's not clear which part really matters. Moreover, the top and bottom ends of the bar tend to be highly correlated, i.e., organizations with a large number of very positive responses have low numbers of negative responses and vice versa. This suggests that it may not be necessary to provide all segments of a stacked bar.

Using bar graphs to present multiple pieces of data:

There are two ways to show several pieces of data in a single bar:

Provide the complete range of responses in a stacked bar.

Many researchers and sponsors believe that audiences should be able to see all of the segments of a bar graph, with the caveat that it is hard to interpret bars that contain more than three categories of responses.

One reason to display all of the responses is to avoid a negative interpretation that may be inappropriate. For instance, the Centers for Medicare & Medicaid Services (CMS), formerly the Health Care Financing Administration (HCFA), offers Medicare beneficiaries information on the disenrollment rates of Medicare + Choice plans, including the reasons for disenrollment. In response to the results of consumer tests, the agency decided to indicate the percentage of members who left the plan as well as the percentage who stayed—even though just one of those numbers should be adequate. One reason for this decision is to avoid any impression that CMS is being negative about its managed care plans. Another reason has to do with CMS's sensitivity to its audience; the agency recognizes that since Medicare beneficiaries tend to be risk averse, negative information has a greater impact on their decisions. By offering both positive and negative information, CMS is trying to counter its audience's natural inclination to focus on the potential for problems.

Example: Medicare Compare (PDF file, 242 KB; HTML) offers disenrollment rates for Medicare + Choice plans. For more information, go to http://www.medicare.gov.

 


Provide only one part of the bar.

Because it can be difficult to interpret a multi-response bar, some sponsors have been experimenting with showing their audience only one part of the response. For instance, CMS found that Medicare beneficiaries do not understand how to interpret divided bars. In 200 cognitive interviews, nearly 50 percent of beneficiaries were not able to select the "worst" plan based on the split bar graphs. As a result, CMS has decided to report only a truncated part of the distribution: the percentage of respondents who rated the plan a "10" on a scale of 1 to 10.

Example: Medicare Compare (PDF file, 384 KB; HTML) shows the percentage of survey respondents who rated their plan a 10. For more information, go to http://www.medicare.gov.

 

Unfortunately, the research on this issue doesn't point to a single best answer. Some focus group studies have indicated that people become suspicious when shown only the top response, questioning what the sponsors may be hiding. Specifically, in testing with fairly well-educated consumers, researchers learned that they wanted to see all segments of a bar. Evaluations with Medicaid recipients and Medicare beneficiaries, on the other hand, suggested a preference for a single bar that represents the positive end of the scale (e.g., the percent who rated the plan "excellent"). This emphasis on a single piece of data is easier for people to understand and more informative for a casual, quick reader.

Tables

Benefits of tables: While less appealing visually, tables can be a concise, convenient way to provide a great deal of information in a small space. You would need several pages of bar graphs to present all of the data that can be displayed in a one-page table. In addition to being a more efficient use of space, tables can make it easier for the reader to compare results across multiple measures or items. They are especially useful in situations where bars would be contrived or it would not be clear what the bars mean.

Disadvantages of tables: Tables can be less "user-friendly" than bar graphs, especially when they contain a great deal of data. In particular, consumers may have a hard time interpreting the data to determine which plans perform best, mostly because tables require the user to remember or add several numbers in order to make comparisons. Researchers refer to this as a problem with accuracy, which is considered the lowest level of cognitive tasks; that is, the least a consumer should be able to do is to pick out the best performer.

One way to improve tables is to limit numbers to one decimal place, which makes them easier for readers to skim; you can also use design techniques to make the columns and rows easier to follow.

For details on design considerations, go to Designing Your Report.

Example: The Pacific Business Group on Health's report cards on HealthScope.org present results in a table format. Select any report card on the site to see this format.
© Copyright 2001. The Pacific Business Group on Health. All Rights Reserved. Used with Permission.

 


Presenting Information in Relative Terms

Sponsors can help consumers interpret quality information by explicitly showing performance relative to something else, whether it is the average, an external standard, or simply all their other options. The purpose of this approach is to make it clear which organizations are performing relatively well and which aren't.


The Basic Steps

  • First step: Divide organizations into tiers
    Your first task is to use the organizations' scores (for categories or individual measures) as the basis for dividing them into tiers. The most basic way to do this is to simply create an even distribution by grouping together the top third, the middle third, and the bottom third. A more sophisticated approach would be to use statistical tests to divide the organizations into tiers that are truly different from each other. That is, the middle group could represent average performance, while the top group would include organizations whose performance was statistically better than average and the bottom group would have those that are statistically worse than average. In most cases, this results in an uneven distribution: that is, most organizations fall into the average tier.
  • Second step: Choose a method for representing tiers
    Your next task is to choose a way to represent those tiers for each organization. Sponsors generally use two approaches, symbols and words, either individually or in combination. A simplified sketch of both steps follows this list.
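The sketch below walks through both steps with invented scores and a plain three-way split; as noted above, many sponsors would substitute statistical tests for the even split, and the star labels are only one of many possible representations.

```python
# Simplified sketch: split plans into thirds by score, then represent each
# tier with stars and words. Scores are invented; real reports often use
# statistical tests instead of an even three-way split.
scores = {"Plan A": 68.0, "Plan B": 74.5, "Plan C": 79.0,
          "Plan D": 81.5, "Plan E": 85.0, "Plan F": 90.0}

labels = [("*", "Below average"), ("**", "Average"), ("***", "Above average")]

ranked = sorted(scores, key=scores.get)   # lowest to highest
tier_size = max(len(ranked) // 3, 1)      # size of each third

for position, plan in enumerate(ranked):
    stars, words = labels[min(position // tier_size, 2)]
    print(f"{plan}: {stars:<3} ({words})")
```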

Using Symbols

Symbols have become a popular way to provide information on the comparative performance of health plans and providers. They are often used to show performance relative to an average. Less commonly, they reflect performance relative to a benchmark. For instance, if an organization's performance approaches or exceeds a benchmark, a five-star result could indicate "excellent," not just "better than average." One prominent example of this approach is NCQA's new Health Plan Report Card, which awards each health plan a number of stars for each category based on a comparison of the plan's score to NCQA's accreditation standards.


Advantages of symbols

The primary advantage of symbols is that—when well designed and presented—they create a concise, visual picture. The reader can interpret a table with symbols much more quickly and easily than a table with numbers or a series of graphs. Report card developers also like symbols because they seem to reflect the uncertainty inherent in quality scores. Unlike bars and numbers, which imply more finely tuned scores, symbols don't communicate any more specificity than there really is.

Another major benefit is that they can convey real differences among plans. One of the strengths of symbols is that they offer a way to capture and communicate statistical analyses without overwhelming the reader. When you compare organizations, you need to be clear about which differences in performance are meaningful, that is, whether a score of 79 percent for a health plan is really different from the average score of 74 percent. Researchers refer to this issue as "statistical significance," i.e., they ask whether the difference between two scores is statistically significant. Since most consumers are not familiar or comfortable with concepts like this, many sponsors of quality measurement projects use symbols as a way to convey statistical significance without intimidating their audiences with unfamiliar terminology.
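As a rough illustration of the kind of test that lies behind such a judgment, the sketch below runs a one-sample z-test asking whether an observed rate differs from a reference rate by more than chance alone would explain. The sample size and rates are invented, and real projects may use different tests or confidence intervals.

```python
# Hypothetical sketch: a one-sample z-test comparing a plan's observed rate
# (79%) to a reference rate (74%). The sample size is invented; real
# projects may use other tests or confidence intervals.
from math import sqrt, erfc

plan_rate, reference_rate, n = 0.79, 0.74, 300

standard_error = sqrt(reference_rate * (1 - reference_rate) / n)
z = (plan_rate - reference_rate) / standard_error
p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")
# A small p-value suggests the difference is unlikely to be due to chance;
# a large one suggests the scores may not be meaningfully different.
```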

Select to learn How Symbols Reflect Real Differences.

Finally, on a practical note, symbol displays are more compact than bar charts with multiple categories, so they require less space to present the same information.

Problems with symbols

Generally speaking, symbols have three major weaknesses:

  • They can be misinterpreted.
    Consumers often misinterpret the relative nature of symbols. For instance, a consumer may assume that a health plan that receives one star delivers poor care. However, a plan with below-average performance could still be providing excellent care if the average in the market is very high. An appropriate analogy would be a student who gets C grades in a highly competitive school; that student may still be smarter and more knowledgeable than an A-student in a less competitive school. Similarly, a one-star plan in Minnesota, which is known for its high-quality plans, could be much better than a three-star plan somewhere else. The problem is that consumers don't always know to interpret symbols in this way.

    Another problem with misinterpretation arises when consumers don't see any symbols (or sets of symbols) that represent the highest or lowest level of performance. In those cases, they may presume that the highest one they see (such as four stars on a five-star scale) represents the best possible result, rather than recognize it for what it really means (in this example, above average). Or they may interpret the lowest score they see (such as three stars on a five-star scale) as being "bad" rather than what it really is (in this example, average).

  • They can be misused.
    For instance, researchers have found that consumers will add up the number of stars on a chart in order to identify the plan with the most stars, thus creating their own summary scores. While this has intuitive appeal, some experts argue that it is not an appropriate use of the symbols, which represent statistical differences rather than absolute performance. Others claim that consumers who do this do not reach the wrong conclusion about which plans perform best; however, by adding the stars, they are implicitly giving the same weight to each category, which may not reflect their personal values.
  • They can be hard to read, especially when the same symbols are presented in clusters of different sizes.
    Researchers have found that it is visually difficult for readers to differentiate between groups of symbols (e.g., four stars versus five stars). To address this problem, you can move the symbols close together to create visual blocks and left-justify the blocks of symbols so that it is easy to see which ones stick out furthest to the right. Also, symbols with points (e.g., triangles, stars) are visually distracting, which makes it hard to detect patterns. Finally, symbols—and especially icons—can be bewildering to consumers when they are very small on the page.

Pros and Cons of Specific Kinds of Symbols

For symbols to be effective, they should be easy to interpret. That is, you shouldn't have to rely on the ability of people to read a legend and remember it accurately. It should also be easy to skim the symbols to detect patterns in the relative performance of different organizations.

Common symbols include stars, circles, triangles, arrows, and boxes.

  • Stars
    Stars are currently the most widely used symbol in quality reports. Because children in America often receive stars for good behavior or good work, stars have a positive connotation in our culture. Even one star is often regarded as "good." For that reason, some sponsors believe that stars can be a useful symbol for conveying the message that all plans (or providers) offer good care, even though some are better than others. Conversely, they are not as useful for communicating poor performance.
    One concern about stars is that they are commonly used in ratings of restaurants, hotels, and movies. As a result, people are accustomed to interpreting them as absolute rather than relative assessments of quality, i.e., four stars is very, very good, rather than simply better than the average. Stars are also sometimes perceived as subjective (i.e., reflecting one person's personal opinion) rather than objective (i.e., based on a statistically valid sample of responses).

    Example: A Consumer's Guide to Medicaid Managed Care in New York City 2000 (PDF file, 429 KB; HTML) is one of several guides available for each region of the State.
    © Copyright 2000. New York State Department of Health. All Rights Reserved. Used with Permission.

     

  • Circles
    Thanks to Consumer Reports, circles are very familiar to many sponsors and somewhat familiar to consumers. (However, sponsors should note that you may not use the five-circle model of Consumer Reports because it is trademarked.) Unlike stars, circles don't carry any meaning on their own, so they are useful for communicating a full range of performance. In report cards developed for several states, NCQA uses different kinds of circles (e.g., empty, half-filled, filled) in summary charts to represent the performance of plans relative to the average. Some sponsors, including New York's Medicaid agency, use an empty circle to denote the poorest performance, reserving stars for higher levels.
    However, although they are familiar, most people do not know how to interpret circles. Researchers have also found that consumers have a hard time remembering the meaning of different kinds of circles. In cognitive tests, even strong readers had to look back repeatedly at the legend to see what the symbols meant.

    Example: Comparing the Quality of Maryland HMOs, 1999, from the Maryland Health Care Commission (PDF file, 177 KB; HTML). Select for a recent report.
    © Copyright 1999. Maryland Health Care Commission. All Rights Reserved. Used with Permission.

     

    Example"HMO or PPO: Are you in the right plan?" from Consumer Reports.

     

  • Triangles and Arrows
    Some sponsors have used triangles or arrows to point up and down to signify performance above or below the average. Others have used triangles in the same way as stars, in groups of up to three or five symbols.
    One problem with pointed symbols—and this includes stars as well—is that the points scatter your attention on the page. Also, when used to point up and down, the symbols distract the reader from skimming the page to detect a pattern.

    Example: HealthScope.org uses up and down triangles to indicate which scores are above or below average.
    © Copyright 2001. The Pacific Business Group on Health. All Rights Reserved. Used with Permission.

     

  • Boxes
    One of the newest symbols in quality reports is the box, which has several advantages relative to other symbols. You can use boxes like stars, where the number of boxes indicates the level of performance (i.e., more boxes means higher performance).
    The major advantage of boxes is their shape, whose graphic simplicity lends itself to easier pattern recognition than stars and triangles, which have points that distract the reader's eye and make it harder to visually scan for patterns. In consumer tests to compare boxes to pointed symbols, researchers found that the boxes facilitate comprehension of comparative information. First, unlike stars, boxes are neutral, with no association with an absolute level of performance; as a result, they don't have the same potential to be misleading. Another benefit is that boxes lend themselves to being perceived as a single form (i.e., a larger block), which makes it easier for the reader to see which groups of symbols are larger or smaller than others. The shape also lends itself to the use of color; when two or more boxes are presented together, they create the perception of a more concentrated block of color than you can get with triangles or stars. Finally, unlike half-filled circles and arrows pointing in various directions, boxes just need to be counted, which is a simple cognitive process. This supports the goal of requiring the fewest possible cognitive steps in order to get your message across to the reader.

    The downside of boxes is that they are bland and unfamiliar to consumers as symbols of quality. Also, in contrast to stars, there is nothing intuitive about them. But the fact that they have no meaning in themselves can be both a plus and a minus.

    Example: Buyers Health Care Action Group's Choice Plus 2000 Report (PDF file, 288 KB; HTML)
    © Copyright 1999. Buyers Health Care Action Group. All Rights Reserved. Used with Permission.

     

Other options include:

  • Letter grades: A, B, C, D, and F
    Letter grades can be an effective way to show performance relative to specific goals or benchmarks. The biggest appeal of this approach is that these symbols already have meaning for most Americans; a legend isn't necessary to explain what they mean. Unlike the other symbols, the letters are also different shapes (i.e., combinations of lines and curves), which makes it easier for people to perceive differences between them.
    The downside is that letter grades are not equally effective for everyone. First, they have bad connotations for those who didn't enjoy school. Second, grades are subject to cultural and linguistic factors: depending on where they grew up, people may have experienced different approaches to rating school performance, or they may have different ways of interpreting the meaning and impact of high or low grades. Finally, the use of letters as a symbol doesn't work for low-literate readers.

    Another problem is that health care organizations don't like the idea of being graded in this way; they are especially concerned about the public perception that would be created by a C-grade.

  • Unique symbols for each level
    Rather than using the same symbol in multiple iterations, you could use different symbols to represent different levels of performance. This approach has some appeal because it avoids the problems associated with repeating symbols. However, it faces several challenges:
    • The symbols must be intuitive and familiar; that is, people should easily understand and remember which symbol implies better performance and which implies worse performance. They should not have to refer repeatedly to the legend.
    • Interpretation of the symbols cannot be culturally dependent. For example, a thumb pointing up may not mean the same thing to every ethnic or racial group.
    • The symbols cannot rely on color to be understood. One reason for not depending on color is that the reader may be colorblind; another is that the symbols need to be understandable if the reader is looking at a black-and-white photocopy.
    • The presentation of symbols must enable the reader to detect patterns. This is hard with disparate symbols because the differences are distracting and the table often appears cluttered.
    • Finally, the symbols cannot be loaded with meaning that may not be fair to the health care organizations, especially given the uncertainties inherent in quality measurement. For example, a health plan is unlikely to react well to a public report that gives it red lights or stop signs.

Using Words

Words are another way to convey relative information. They offer several benefits:

  • They can be used to supplement graphics such as bar charts or symbols. Since people can interpret information differently, it is helpful to offer the information in more than one way.
  • Words can clarify graphics. For instance, three stars could mean "better than average" or "good." With words, there is less concern about misinterpretation.
  • Words can remind the reader about the meaning of symbols. This is especially important when you do not have a forced distribution, e.g., if all the health plans in a report receive either two or three stars on a five-star scale. Seeing three stars as the highest number on the page, the reader may forget that this number of stars doesn't indicate good performance.
  • Words can succinctly describe performance relative to a benchmark, e.g., best, very good, good, fair, poor.

Ironically, the problem with using words is that they are hard to read on the page. For the reader, there is too much information to look at. A visual picture has a greater impact once people understand what the graphic (star, circle, etc.) means. Also, people like an "at-a-glance" view that is easy to interpret. With symbols, for instance, they can look to see who has the most stars or the most red circles.
