BRFSS Data Quality, Validity, and Reliability

Overview of Behavioral Risk Factor Surveillance System (BRFSS) 2004 Expert Panel Review and Recommendations 

In May 2002, the Behavioral Risk Factor Surveillance System (BRFSS) held its first expert panel review. The second review meeting was held in November 2004. At both meetings, approximately twenty leading survey statisticians, methodologists, and operational experts gathered to discuss the challenges facing the field of survey research and implications for the BRFSS. The goal of these meetings was to develop options and prioritize recommendations for maintaining data quality in the face of societal and technological changes, managing an increasingly complex surveillance system, and meeting the demand for more local-level data and expanded analysis capabilities.

The meetings began with 3–4 overview presentations on challenges facing survey research generally in the areas of statistics, operations, estimation, and system infrastructure. The larger working group was then divided into smaller discussion panels. The 2002 meeting panels focused on technological, methodological, and system challenges; the 2004 breakout groups were organized around statistical and operational issues. Each group held an open discussion and deliberation of the issues, then drafted recommendations for research and system improvements, which were presented at a plenary session. The larger working group then prioritized these recommendations by importance and timeframe.

The May 2002 BRFSS expert panel meeting generated 42 recommendations, of which 37 were implemented or are in progress (see Morbidity and Mortality Weekly Report, May 23, 2003, Vol. 52, No. RR-9). The November 2004 panel made 38 specific recommendations, including the use of more complex weighting and imputation procedures, experiments with incentives, multiple modes of data collection and alternative sampling frames, and assessment of processes for reaching language-isolated households (see the list of specific recommendations below). These recommendations will help shape BRFSS research and process-improvement activities for the next 2–3 years.

Participants at the 2004 meeting were Lina Balluz (CDC), Michael Battaglia (Abt Associates), Paul Biemer (University of North Carolina at Chapel Hill and RTI International), Stephen Blumberg (CDC), Barbara Bowman (CDC), J. Michael Brick (Westat), Donna Brogan (Emory University), Bonnie Davis (Public Health Institute), Michael Elliott (University of Pennsylvania School of Medicine), Amy Ferketich (Ohio State University), Earl Ford (CDC), Martin Frankel (Baruch College at CUNY and Abt Associates), William Garvin (CDC), Gary Gentry (Public Health Institute), Wayne Giles (CDC), Ziya Gizlice (North Carolina Department of Health and Human Services), Virginia Bales Harris (CDC), Ruth Jiles (CDC), William Kalsbeek (University of North Carolina at Chapel Hill), Jim Lepkowski (University of Michigan), Paul Levy (RTI International), Michael Link (CDC), Ali Mokdad (CDC), Cynthia Nelson (Northern Illinois University), Sarah Nusser (Iowa State University), Colm O’Muircheartaigh (National Opinion Research Center), Charlie Palit (University of Wisconsin-Madison), Nagi Salem (Minnesota Department of Health), Fritz Scheuren (National Opinion Research Center), and Donna Stroup (CDC).

2004 BRFSS Expert Panel Recommendations

High Priority / Short Timeframe:

  • State-specific nonresponse analysis: examine disposition codes (past and future) for use in response rate improvement and model-assisted weighting.
  • Nonresponse simulation analysis: use past years' BRFSS disposition codes to assess how simulated variations in response rates may affect estimates (i.e., simulate 30, 40, and 50 percent response rates based on call history information).
  • Evaluate call center “house effects” by estimating intra-interviewer correlation (inter-interviewer variance), which may be contributing to some of the cross-state variation in estimates.
  • Evaluate modifications to current weighting and poststratification adjustments: examine the coherence of weighting procedures across states; consider the effects of alternative weighting adjustments, such as race, education, geography, and telephone interruption; consider the use of raking and weight trimming; develop diagnostic testing of weights; and consider the use of model-assisted sampling (a minimal raking and trimming sketch follows this list).
  • Evaluate methods for imputing missing data: consider the use of imputation for weighting variables, other descriptive variables, and substantive variables.
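
To make the weighting recommendation above concrete, the following is a minimal sketch of raking (iterative proportional fitting) with weight trimming in Python. The margins, trimming cap, and variable names are illustrative assumptions, not BRFSS specifications.

    import numpy as np

    def rake(weights, factors, targets, n_iter=50, tol=1e-6):
        """Adjust weights so weighted margins match population targets.

        weights : initial design weights, shape (n,)
        factors : list of integer-coded arrays, one per raking dimension
        targets : list of dicts mapping category -> population total
        """
        w = weights.astype(float).copy()
        for _ in range(n_iter):
            max_change = 0.0
            for var, target in zip(factors, targets):
                for cat, total in target.items():
                    mask = var == cat
                    current = w[mask].sum()
                    if current > 0:
                        ratio = total / current
                        w[mask] *= ratio
                        max_change = max(max_change, abs(ratio - 1.0))
            if max_change < tol:
                break
        return w

    def trim_weights(w, cap=5.0):
        """Cap weights at cap * median and spread the excess proportionally;
        in practice raking is re-run after trimming so margins still hold."""
        limit = cap * np.median(w)
        excess = np.clip(w - limit, 0.0, None).sum()
        w = np.minimum(w, limit)
        return w * (1.0 + excess / w.sum())

    # Illustrative use: rake equal starting weights to sex and age margins.
    rng = np.random.default_rng(0)
    n = 1000
    sex = rng.integers(0, 2, n)   # 0 = male, 1 = female
    age = rng.integers(0, 3, n)   # three hypothetical age groups
    w = rake(np.ones(n), [sex, age],
             [{0: 490.0, 1: 510.0}, {0: 300.0, 1: 400.0, 2: 300.0}])
    w = trim_weights(w)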

High Priority / Longer Timeframe:

  • Continue alternative and mixed-mode studies looking at mode effects in telephone, mail, Web, and in-person interviewing; appropriateness of question wording for self-administered and interviewer-administered questionnaires; and utility as a refusal conversion tool.
  • Investigate a predominantly mail-mode BRFSS, focusing on evaluation of within-household selection techniques, use of substitution for nonrespondents, and follow-up of refusals and noncontacts in subsequent months.
  • Conduct an assessment of how Spanish-language interviewing is handled, including techniques for translation and assessment of cultural comparability with the English version. Also assess the cultural implications of contact attempts, such as trust and perceived legitimacy.
  • Assess the extent of language isolation: evaluate and, if necessary, modify disposition codes to capture more information about what language is potentially spoken, including whether the language is one other than English or Spanish, and whether the language is Spanish but a bilingual interviewer is not available at the time of contact.
  • Conduct feasibility, utility, and quality assessment of Language Line Services for on-the-phone translations into languages other than English or Spanish.
  • Nonresponse adjustment: collect basic demographic information (e.g., sex, age, race) about the selected but unavailable respondent from the proxy household member who answers the respondent selection section (a weighting-class adjustment sketch follows this list).
  • Assess noncoverage and nonresponse bias in selected counties (drawn from SMART-BRFSS counties) by comparing an RDD sample with an area probability sample. Start with an address-based sample; conduct interviews initially by telephone, then follow up with in-person interviews. The goal may be to justify the adequacy of low response rates, not to adjust the estimates per se.
  • Continue and expand research on external validation for quality.
  • Ensure cross-state standardization for CATI programming.
  • Evaluate tailored and modified introductions highlighting local area relevance.
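
As a concrete illustration of the nonresponse-adjustment recommendation above, here is a minimal sketch of a weighting-class adjustment in Python. It assumes proxy-reported sex and age group are available for both respondents and nonrespondents; the cell definitions, response rate, and weights are illustrative assumptions.

    import numpy as np

    def nonresponse_adjust(weights, cells, responded):
        """Inflate respondent weights by the inverse weighted response rate
        within each adjustment cell; nonrespondents get weight zero."""
        w = weights.astype(float).copy()
        out = np.zeros_like(w)
        for cell in np.unique(cells):
            in_cell = cells == cell
            total = w[in_cell].sum()
            resp = w[in_cell & responded].sum()
            if resp > 0:
                out[in_cell & responded] = w[in_cell & responded] * (total / resp)
        return out

    # Illustrative use: cells built from proxy-reported sex x age group.
    rng = np.random.default_rng(1)
    n = 500
    sex = rng.integers(0, 2, n)
    age = rng.integers(0, 3, n)
    responded = rng.random(n) < 0.45        # roughly a 45% response rate
    cells = sex * 3 + age                   # six adjustment cells
    w0 = np.full(n, 2.0)                    # design weights
    w1 = nonresponse_adjust(w0, cells, responded)
    assert abs(w1.sum() - w0.sum()) < 1e-6  # total weight is preserved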

Intermediate Priority:

  • Sample release: conduct analysis to evaluate the statistical, operational, and analytic (including weighting) pros and cons of releasing sample weekly or biweekly.
  • Field period: consider extending the current one-month field period to six or eight weeks to improve response rates.
  • Evaluate partial interviews and terminations to determine their feasibility for use in nonresponse adjustment and imputation of missing data.
  • Evaluate current within-household randomization procedure to determine if it is optimal, given the changes in household composition over time and underrepresentation of particular groups.
  • Evaluate cell-phone-only coverage bias and nonresponse bias with multiframe, multimode experiments. The study design would draw samples from a landline frame and a cellular frame, conduct the survey initially by telephone, and apply follow-up techniques to complete nonrespondent interviews (consider use of an abbreviated mail questionnaire). The analysis would compare the characteristics of cellular respondents with those of landline respondents, compare telephone respondents with responses obtained in the follow-up, and compare results with external sources (e.g., the Current Population Survey). A dual-frame composite-estimator sketch follows this list.
  • Evaluate nonresponse bias and response propensity by seeding the sample with persons of known characteristics and by applying frames of other surveys.
  • As part of the reporting process, identify methods of presenting quality declarations (i.e., information on the quality of the data) to explain how to interpret the quality measures associated with the study.
  • Consider the use of an abbreviated questionnaire for nonresponse follow-up and bias assessment.
  • Conduct a cognitive and language-level assessment of the English version of the questionnaire to ensure that it is correctly comprehended by all respondents.
  • Consider the use of specialized, stand-alone surveys to reach important hard-to-reach or hard-to-interview populations (such as institutionalized adults, mobile-only households, and Native Americans).
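
To illustrate the dual-frame analysis mentioned above, the following is a minimal sketch of a Hartley-style composite estimator combining landline and cellular samples. The domain totals and the mixing parameter lam are illustrative assumptions; in practice the mixing parameter would be chosen to minimize variance.

    def dual_frame_total(y_a, y_ab_landline, y_ab_cell, y_b, lam=0.5):
        """Combine weighted domain totals from two overlapping frames.

        y_a           : weighted total for the landline-only domain
        y_ab_landline : weighted total for the overlap domain, landline sample
        y_ab_cell     : weighted total for the overlap domain, cellular sample
        y_b           : weighted total for the cell-only domain
        lam           : share of the overlap estimate taken from the landline frame
        """
        return y_a + lam * y_ab_landline + (1 - lam) * y_ab_cell + y_b

    # Illustrative numbers: the two frames give different overlap estimates,
    # and the composite splits the difference according to lam.
    print(dual_frame_total(120_000.0, 300_000.0, 280_000.0, 90_000.0, lam=0.6))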

Low Priority:

  • Incentives: consider experiments with incentives, including up-front vs. promised incentives and incentives for use with nonrespondents.
  • Examine the impact of imputation on sampling error.
  • Use NHIS as a standard for evaluating the validity of telephone, Web, and mail surveys.
  • Use NHIS to evaluate within-household correlation in health estimates (if correlations are low, then surveying multiple household members by mail may be more cost-effective than interviewing only one selected household member; a design-effect sketch follows this list).
  • Consider the effectiveness of quota sampling with backend weighting adjustments to determine if better estimates are produced at a lower cost.
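
To make the within-household correlation recommendation above concrete, here is a minimal sketch of the design-effect arithmetic (deff = 1 + (m - 1) * rho) behind the cost comparison. All cost figures and correlation values are illustrative assumptions, not NHIS or BRFSS results.

    def effective_n(households, members_per_hh, rho):
        """Effective sample size when taking several members per household,
        using the design effect deff = 1 + (m - 1) * rho."""
        n = households * members_per_hh
        deff = 1 + (members_per_hh - 1) * rho
        return n / deff

    def cost_per_effective_interview(households, members_per_hh, rho,
                                     cost_per_hh, cost_per_extra_member):
        total_cost = households * (cost_per_hh +
                                   (members_per_hh - 1) * cost_per_extra_member)
        return total_cost / effective_n(households, members_per_hh, rho)

    # With low within-household correlation (rho = 0.1), adding a second
    # member by mail costs less per effective interview than a one-member
    # design; with high correlation (rho = 0.9) it costs more.
    for rho in (0.1, 0.9):
        one = cost_per_effective_interview(1000, 1, rho, 20.0, 5.0)
        two = cost_per_effective_interview(1000, 2, rho, 20.0, 5.0)
        print(f"rho={rho}: one member ${one:.2f}, two members ${two:.2f}")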

This page last reviewed June 28, 2006

United States Department of Health and Human Services
Centers for Disease Control and Prevention
National Center for Chronic Disease Prevention and Health Promotion
Division of Adult and Community Health