Talking About Statistics

Workbook Reminder: Question 18

Quality measures, summary scores, symbols, and bar charts are all products of statistical calculations. But most people do not especially like to deal with statistics. Rather than use the data in charts and tables to make decisions, most consumers rely on stories and anecdotes. In fact, Robert Blendon, a noted researcher at Harvard, has suggested that as much as 70 percent of the population falls into this data-averse group.

Select for a list of articles on Talking About Statistics.

So how can you make all of the statistical information in a typical performance report as digestible as possible?

This section reviews:

  • What You Need to Know
  • What You Need to Explain
  • How to Provide an Explanation


What You Need to Know

Here are four things you need to know:

Statistical Decisions Matter

The statistical decisions that you and your vendor make will determine how people perceive the plans or provider groups that are the subject of your report. What you choose to do with the data you have can exaggerate variations in performance, draw attention to good performers, or make weak performers even less appealing. Consequently, you have a responsibility to be aware of the implications of your decisions and to take steps to ensure that their impact is fair to the health care organizations and consistent with your overall objectives.

To read about confidence intervals and statistical significance, select Two Statistical Concepts You Should Understand.

Statistics Can Be Misleading

Statistics can create inaccurate perceptions of health care organizations if you do not present them correctly and explain them clearly. Suppose a sponsor chooses to grade health plans on a curve (the way many of us were graded in school). It could decide to assign three stars to the top 20 percent, one star to the bottom 20 percent, and two stars to those who scored in between. Many consumers would conclude that a health plan with only one star performed poorly, even if all the plans actually performed well relative to objective standards. That is, consumers could easily form an inappropriately negative perception unless the sponsor is very careful about defining and applying the rating scheme.
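The curve-grading scheme described above can be sketched in a few lines of code. This is a hypothetical illustration, not any sponsor's actual method: the plan names, scores, and benchmark are invented, and only the 20-percent cutoffs come from the example in this section.

```python
def assign_stars(scores):
    """Assign 3 stars to the top 20 percent of plans, 1 star to the
    bottom 20 percent, and 2 stars to everyone in between."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1])  # worst to best
    n = len(ranked)
    cutoff = max(1, round(n * 0.20))  # how many plans fall in each tail
    stars = {}
    for i, (plan, _) in enumerate(ranked):
        if i < cutoff:
            stars[plan] = 1          # bottom 20 percent
        elif i >= n - cutoff:
            stars[plan] = 3          # top 20 percent
        else:
            stars[plan] = 2
    return stars

# Five invented plans that all clear a hypothetical 80-percent benchmark:
scores = {"Plan A": 91, "Plan B": 89, "Plan C": 88, "Plan D": 86, "Plan E": 85}
print(assign_stars(scores))
```

Even though every plan in this example beats the benchmark, the lowest scorer still receives one star, which is exactly the misleading impression the text warns about.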

Another problem stems from the statistical complexity of quality measures. Most measures are drawn from samples of the population; because they are not based on the entire relevant population, the scores for these measures have some uncertainty associated with them. (Statisticians refer to this as sampling error.) Consequently, the score should really be a range (e.g., 72 percent to 75 percent) rather than a single number (73.5 percent). However, we use single numbers to represent scores because they are much easier for people to understand.

Misinterpretation is inevitable if you present multiple scores without clarifying whether they are truly different from each other. To most readers, a score of 73.5 percent seems better than a score of 71 percent. But if the range for the latter score is really 69.5 percent to 72.5 percent, the two scores are not different from a statistical viewpoint.
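The idea of a score as a range rather than a single number can be made concrete with a short sketch. This is a simplified illustration using the normal approximation for a survey proportion; the sample sizes are invented, and real report cards may use different statistical tests. Overlapping intervals are a conservative signal that two scores may not truly differ.

```python
import math

def confidence_interval(p_hat, n, z=1.96):
    """95 percent confidence interval for a survey proportion,
    using the normal approximation to sampling error."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def ranges_overlap(a, b):
    """True when two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# The section's example scores of 73.5% and 71%, with invented sample sizes:
ci_a = confidence_interval(0.735, n=3000)
ci_b = confidence_interval(0.71, n=400)
print(f"Plan A: {ci_a[0]:.1%} to {ci_a[1]:.1%}")
print(f"Plan B: {ci_b[0]:.1%} to {ci_b[1]:.1%}")
print("May not truly differ:", ranges_overlap(ci_a, ci_b))
```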


People Don't Understand Statistics

Statistical concepts are foreign to most people, even those with advanced degrees. Many of us get through college without ever taking a class in statistics.

One lesson that sponsors have learned is that you cannot use complicated graphs to convey whether differences in performance are statistically significant (i.e., whether the scores are statistically different from each other). What you gain in accuracy, you lose in comprehension. This is the chief reason for the development of symbol displays that can capture statistical information.

In addition to problems interpreting graphs and charts, many people lack a basic familiarity with statistical terms. One tactic is to avoid such terms as much as possible, using plain English to express a concept. For example, rather than "margin of error," say "Plans that are less than 3 points apart may not have real differences."
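The plain-English tactic above can even be built into a report-generation script. This is a hypothetical sketch: the 3-point margin and the wording are taken from the example in this section, not from any standard, and the helper names are invented.

```python
def plain_english_note(margin_points):
    """Phrase a margin of error as a plain-English caveat instead of
    using the term 'margin of error' itself."""
    return (f"Plans that are less than {margin_points} points apart "
            "may not have real differences.")

def may_really_differ(score_a, score_b, margin_points=3):
    """Flag two scores as possibly different only when the gap is at
    least as large as the margin of error."""
    return abs(score_a - score_b) >= margin_points

print(plain_english_note(3))
print(may_really_differ(73.5, 71.0))   # a 2.5-point gap: prints False
```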

Another option is to define the term each time you use it. For instance, when the Health Care Financing Administration (HCFA), now the Centers for Medicare & Medicaid Services (CMS), tested information on disenrollment, it found that many beneficiaries did not understand averages or percentages.

It is ideal to provide definitions on the page where an unfamiliar term is used. However, this may not always be possible because of space constraints. In electronic documents, one way to address this problem is to provide highlighted links to definitions and explanations. Research is needed to determine the effectiveness of this approach and to identify other creative ways to present and explain statistical concepts in a way that people can easily grasp.

Consumers Don't Trust Statistics

In focus groups, NCQA learned that people don't trust statistics. They believe that you can use them to say whatever you want, which (unfortunately) has some truth to it. They also tend to think that the information can be manipulated by the health care organizations that provide much of the data.


What You Need to Explain

Here are two things you need to explain to your audience:

The Sponsors Can Be Trusted

Consumers are skeptical about health care quality data, especially if it appears to come from health plans. They are also suspicious of their employers, which have a financial stake in their decisions. To counter this distrust, you must position yourself as an objective, independent, and therefore credible source of information. Be sure to tell readers who you are and why they should trust you. In addition, make it clear that you have done everything possible to ensure that the information is as credible as possible. 

  • If you truly are an objective third party (i.e., neither a health care organization nor an employer), state that clearly in the report. Very briefly explain who you are in order to establish your independence and your credibility.
  • If you are an employer, reinforce your independence from the influence of health care organizations. Reassure your audience that this information is intended to help them make the health care decisions that are best for them, not to convince them to do something in the employer's interests (although those two things may coincide). Also, consider providing the information in collaboration with other purchasers or consumer organizations; this approach can remove some of the taint of self-interest.

The Information Is Credible

It is not enough to establish yourself as an objective, well-meaning source of information. You need to be very clear about the credibility of the data itself: that it was collected and analyzed in accordance with a well-tested and established methodology and that it was validated (if appropriate) through auditing. A brief discussion of the methodology can be reassuring to the reader.

Things to tell your audience include:

  • Where did the data come from? The medical records of providers? Surveys of patients? (If a survey, you may also want to indicate who conducted the survey, how many people were surveyed, and how those people were chosen.)
  • When was the data collected? (This suggests how relevant the information will be to the reader's current choices.)
  • Who collected the data? Was anything done to confirm its accuracy (e.g., was it audited)?
  • What organizations are covered in the report card?
  • What is their performance being compared to?

In general, consumers have little interest in knowing how scores were created or how items were grouped together. However, you could offer an explanation of this process in a Web environment for those who really want to know.

To learn more about explaining methodology, go to Where the Data Came From.


How to Provide an Explanation

Here are two tips for maximizing the value of your explanations to your audience:

Consumers Don't Want To Know Everything

Although they want reassurance, consumers don't really want to know a lot about the development of the information. Too much explanation—particularly about technical issues—can be overwhelming. The general rule is not to say any more than you have to. Ask yourself: what can be eliminated without being missed?

In report cards developed for several States, NCQA simply makes the point that the information is statistically valid and says little about the statistical tests it performs. To the extent that it explains these tests, it "buries" the material in footnotes or back pages so that it is available to those who want it without disturbing those who aren't interested.

Stash the Details in the Back

The material in the front of your report should communicate a clear, compelling message that engages the reader and stimulates interest in the information on quality. Technical details tend to be distracting and potentially intimidating. For that reason, experienced report card developers suggest that you keep only the most basic and important explanations in the front, moving all the other details to the back pages. Wherever it is, methodological and other technical information should be clearly labeled so that the reader can find it (or skip it) easily.

 

AHRQ  Advancing Excellence in Health Care