MEPS Annual Methodology Report

Deliverable Number: 142
Contract Number: 290-02-0005
March 12, 2009

Submitted to:
Agency for Healthcare Research and Quality
540 Gaither Road
Rockville, Maryland 20850

Submitted by:
WESTAT
1650 Research Boulevard
Rockville, Maryland 20850-3195
301-251-1500


Table of Contents

Introduction
1. Sample
1.1 Sample Design and Size
1.2 Sample Delivery and Processing
2. Instrument and Materials Design
2.1 Questionnaire Changes for Spring and Fall 2008
2.2 Changes to Materials and Procedures for Spring and Fall 2008
3. Recruiting and Training
3.1 Recruiting for 2008
3.2 2008 Trainings
4. Data Collection
4.1 Schedule
4.2 Operations
4.3 Data Collection Results
5. Home Office Processing and Support
6. Interview Timing and Utilization Measures
Appendix A. Comprehensive Tables – Household Survey
Table A-1 Data collection periods and starting RU-level sample sizes, all panels
Table A-2 MEPS household survey data collection results, all panels
Table A-3 Signing rates for medical provider authorization forms
Table A-4 Signing rates for pharmacy authorization forms
Table A-5 Results of self-administered questionnaire (SAQ) collection
Table A-6 Results of Diabetes Care Supplement (DCS) collection
Table A-7 Calls to respondent information line
Table 1-1 Initial MEPS sample size and number of NHIS PSUs, all panels
Table 1-2 Data collection periods and starting RU-level sample sizes, Spring 2001 through Fall 2008
Table 1-3 Percentage of NHIS households with partially completed interviews in Panels 3 to 13
Table 2-1 Supplements to the CAPI core questionnaire (including hard-copy materials) for 2008
Table 4-1 Data collection schedule and number of weeks per round of data collection
Table 4-2 Percent of total interviews conducted on travel
Table 4-3 Results of patient profile collection for medications prescribed in 2008
Table 4-4 MEPS HC data collection results, Panels 9 through 13
Table 4-5 Summary of nonresponse for Round 1, 2004-2008
Table 4-6 Summary of nonresponse for Rounds 2 and 4, 2005-2008
Table 4-7 Signing rates for medical provider authorization forms for Panels 9 through 13
Table 4-8 Signing rates for pharmacy authorization forms for Panels 9 through 12
Table 4-9 Results of self-administered questionnaire (SAQ) collection for Panels 9 through 12
Table 4-10 Results of diabetes care supplement (DCS) collection for Panels 9 through 12
Table 4-11 Summary of MEPS Round 1 response, 2004-2008 panels
Table 4-12 Summary of MEPS Round 2 response, 2004-2008 panels, by NHIS completion status
Table 4-13 Summary of MEPS Round 1 response rates, 2005-2008 panels, by race/ethnicity and NHIS completion status
Table 4-14 Summary of MEPS refusal rates, 2003-2008 panels, by race/ethnicity and NHIS completion status
Table 4-15 Summary of MEPS Panel 13 Round 1 response rates, by sample domain and NHIS completion status
Table 4-16 Summary of MEPS Round 1 results: ever refused, final refusals, and refusal conversion rate, by panel
Table 4-17 Summary of MEPS Round 1 results: ever traced and final not located, by panel
Table 5-1 Calls to the respondent information line, 2007 and 2008
Table 6-1 Timing comparison, Panels 12 and 13 vs. prior panels (mean minutes per interview, single-session interviews)
Table 6-2 Mean round 1 interview time, in minutes, for single-session interviews, Panel 12 and Panel 13, by interview training and production groups
Table 6-3 Round 1 mean interview time, by NHIS completion status, Panel 12 and 13
Table 6-4 Round 2 outcome, by Round 1 interview time (Round 1 interviews with no breaks), Panel 12 and Panel 13
Table 6-5 Round 2 outcome by interview break status in Round 1, Panel 12 and Panel 13
Table 6-6 Later round outcomes by 'ever refused' status in Round 1, Panel 12 and Panel 13
Table 6-7 Round 2 outcome by month of Round 1 complete, Panel 12 and Panel 13
Table 6-8a Utilization comparison: mean total events per person (excluding prescribed medicines) by panel and round (unweighted)
Table 6-8b Utilization comparison: office-based physician events per person by panel and round (unweighted)
Table 6-9a Utilization comparison, Round 1 mean total events per person for all events (excluding prescribed medicines) by sample domain
Table 6-9b Utilization comparison, Round 1 mean office-based events per person, by sample domain

Introduction

This report documents the principal design, training and data collection activities of the Household Component of the Medical Expenditure Panel Survey for survey year 2008. These activities were conducted under Contract 290-02-0005, awarded in July 2002. As modified, the contract covers MEPS Panels 8-13.

This report covers all work associated with Panel 11 Round 5, Panel 12 Rounds 3 and 4, and Panel 13 Rounds 1 and 2, which were in the field during the survey year. It includes a description of preparations for fielding a new panel that are performed in the latter half of the year preceding the fielding.

The report touches only briefly on procedures and operations that remained unchanged from prior years. It focuses primarily on features of the project that were new, changed, or enhanced during 2008, and presents the results of the data collection activities conducted during the year. The tables within the report document 2008 data collection results. A comprehensive set of tables showing data collection results from prior years is included in Appendix A.

The most notable change to the project in survey year 2008 was the implementation of an experiment testing varying respondent incentive payments on the new panel, Panel 13. The experiment was designed in 2007 in response to an OMB request made in the course of approving a higher incentive payment, and is being carried out in all five rounds of Panel 13 data collection. This report contains an overview of the experimental design and implementation. Results for the first survey year will be provided in a separate report when Panel 13 Round 3 data collection ends in the summer of 2009.

Survey year 2008 began the transition to a steadier state of operations after the significant challenges faced in 2007 with Panel 12, which included the transition from the DOS-based instrument to the Windows-based instrument and the use of the new sample design in the 2006 NHIS. The sample design presented its own challenges: more PSUs were added, and many had small workloads that were difficult to assign efficiently. In 2008, with the addition of the Panel 13 sample, the workload in the new PSUs increased and efficiencies were gained. Panel 11, the last panel to use the DOS-based instrument and the last drawn from the old NHIS sample design, was retired after the spring rounds of data collection.

Chapter 1 of the report describes the sample preparation activities. Chapters 2 through 5 discuss activities associated with the 2008 data collection, including field staff recruiting, training, materials development, questionnaire updates made in the Fall of 2007, data collection procedures and results, and home office processing support. Chapter 6 provides an analysis of utilization and timing measures begun in 2007.

Return To Table Of Contents

1. Sample

This chapter documents the sample preparation activities associated with the fielding of the 2008 sample, which included households selected for Panel 11 Round 5, Panel 12 Round 3, and Panel 13 Round 1.

Return To Table Of Contents

1.1 Sample Design and Size

Each year MEPS draws its household sample from among responding households in the previous year’s National Health Interview Survey (NHIS). The MEPS sample for 2008, Panel 13, was selected from households that participated in the first three quarters of the 2007 NHIS (NHIS Panels 1 and 4). Panel 13 is the second panel drawn under the new sample design introduced by the NHIS in 2006 and, at 9,939 reporting units, is the largest panel since Panel 6. Panel 11, from the earlier NHIS sample design, was also fielded in Spring 2008.

As with the Panel 11 and 12 samples, Panel 13 contained an oversample of Asian, low income, and Black households. Panels 12 and 13 also contained an oversample of Hispanic households.

Table 1-1 shows the starting sample sizes for Panels 1 to 13 and the number of NHIS PSUs from which each panel was drawn.

Table 1-1. Initial MEPS sample size and number of NHIS PSUs, all panels

Panel Initial sample size (RUs)* NHIS PSUs
1 10,799 195
2 6,461 195
3 5,410 195
4 7,103 100
5 5,533 100
6 11,026 195
7 8,339 195
8 8,706 195
9 8,939 195
10 8,748 195
11 9,654 195
12 7,467 183
13 9,939 183

*RU: Reporting Unit

Return To Table Of Contents

Table 1-2 on the following page summarizes the combined workload for the January-June and July-December periods from spring 2001 through fall 2008. (Table A-1 in Appendix A shows the data collection periods and sample sizes for all panels and rounds.)

Across the three panels that were active during the first half of 2008, the combined workload was 22,414 RUs. For the two panels that were active during the second half of the year, the total initial workload was 13,384 RUs.

Return To Table Of Contents

1.2 Sample Delivery and Processing

The 2008 MEPS sample was received in two deliveries. The first delivery, received September 4, 2007, contained households sampled from the first two quarters (Panels 1 and 4) of the 2007 NHIS. Households selected from the third quarter (Panels 1 and 4) of the 2007 NHIS were delivered on November 21, 2007.

Table 1-2. Data collection periods and starting RU-level sample sizes, Spring 2001 through Fall 2008

January-June 2001 21,069
Panel 4 Round 5 5,547
Panel 5 Round 3 4,496
Panel 6 Round 1 11,026
 
July-December 2001 13,777
Panel 5 Round 4 4,426
Panel 6 Round 2 9,351
 
January-June 2002 21,915
Panel 5 Round 5 4,393
Panel 6 Round 3 9,183
Panel 7 Round 1 8,339
 
July-December 2002 15,968
Panel 6 Round 4 8,977
Panel 7 Round 2 6,991
 
January-June 2003 24,315
Panel 6 Round 5 8,830
Panel 7 Round 3 6,779
Panel 8 Round 1 8,706
 
July-December 2003 13,814
Panel 7, Round 4 6,655
Panel 8, Round 2 7,159
 
January-June 2004 22,552
Panel 7 Round 5 6,578
Panel 8 Round 3 7,035
Panel 9 Round 1 8,939
 
July-December 2004 14,068
Panel 8, Round 4 6,878
Panel 9, Round 2 7,190
 
January-June 2005 22,548
Panel 8 Round 5 6,795
Panel 9 Round 3 7,005
Panel 10 Round 1 8,748
 
July-December 2005 13,991
Panel 9, Round 4 6,843
Panel 10, Round 2 7,148
 
January-June 2006 23,278
Panel 9 Round 5 6,703
Panel 10 Round 3 6,921
Panel 11 Round 1 9,654
 
July-December 2006 14,280
Panel 10 Round 4 6,708
Panel 11 Round 2 7,572
 
January-June 2007 21,326
Panel 10 Round 5 6,596
Panel 11 Round 3 7,263
Panel 12 Round 1 7,467
 
July-December 2007 12,906
Panel 11 Round 4 7,005
Panel 12 Round 2 5,901
 
January-June 2008 22,414
Panel 11 Round 5 6,895
Panel 12 Round 3 5,580
Panel 13 Round 1 9,939
 
July-December 2008 13,384
Panel 12 Round 4 5,376
Panel 13 Round 2 8,008

Return To Table Of Contents

As in recent years, the September sample delivery was instrumental to the project’s plan to launch interviewing for the new panel at the beginning of February. The partial file gave insight into the demographic and geographic distribution of the households in the new panel and guidance on the need for recruiting new interviewers. With two MEPS panels in the new sample design, the increase in the number of households in the new PSUs made for larger workloads and more efficient staffing of interviewers.

As soon as the first sample delivery was received, the NHIS sample file formats were reviewed to identify any new variables or values and to make any necessary changes to the project programs that use the sample file information. With the early delivery, Westat began the standard processing through which the NHIS households are reconfigured to conform to MEPS reporting unit definitions and prepared the files needed for advance mailouts and interviewer assignments. The delivery also allowed time for checking and updating NHIS addresses to improve the quality of the initial mailouts and to identify households that had moved since the NHIS interview.

To understand the extent to which different levels of respondent payment might reduce nonresponse in MEPS at Round 1 and in subsequent rounds, an experiment testing three levels of respondent payments was designed for implementation in Panel 13 Round 1. As part of the processing of the Panel 13 sample, households were assigned to one of three incentive groups: $30, $50, and $70. All households in an NHIS segment were assigned to the same incentive group to eliminate the risk that neighboring households in the MEPS sample would receive different incentive amounts.

The segments were assigned to one of two strata based on expected response propensity. Since MEPS response rates are higher among black and low income households, segments with a majority of black or low income households were assigned to the high response stratum; segments with a majority of Asian and white, non-poor households, where response rates have been the lowest, were assigned to the low response stratum. Each incentive group was assigned segments from the two strata in the same proportions found in the total MEPS sample: since 35 percent of the households in the MEPS sample are black or low income, 35 percent of the segments in each incentive group came from the high response stratum; similarly, since 65 percent of the MEPS sample consists of Asian and white, non-poor households, 65 percent of the segments in each incentive group came from the low response stratum.

An unequal allocation of segments across the three incentive groups was used to improve the statistical power for testing the different levels, with the $30 incentive group (where the lowest response rates were expected) receiving the largest share of the households.
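
The segment-level assignment described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the allocation fractions are hypothetical (the report states only that the $30 group received the largest share), and the segment records are invented for the example.

```python
import random

# Hypothetical allocation fractions; the report does not publish the
# exact shares, only that the $30 group received the largest one.
ALLOCATION = [(30, 0.40), (50, 0.30), (70, 0.30)]

def assign_incentives(segments, seed=0):
    """Assign whole NHIS segments to incentive groups, separately within
    each response-propensity stratum, so that every household in a
    segment gets the same amount and each incentive group contains the
    same stratum mix as the full sample."""
    rng = random.Random(seed)
    assignment = {}
    for stratum in {s["stratum"] for s in segments}:
        ids = [s["id"] for s in segments if s["stratum"] == stratum]
        rng.shuffle(ids)  # random order within the stratum
        start = 0
        for i, (amount, frac) in enumerate(ALLOCATION):
            # Last group absorbs any rounding remainder.
            end = len(ids) if i == len(ALLOCATION) - 1 else start + round(frac * len(ids))
            for seg_id in ids[start:end]:
                assignment[seg_id] = amount
            start = end
    return assignment
```

Assigning by segment rather than by household is what prevents neighboring sampled households from receiving different amounts.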

Each year, the NHIS sample includes a percentage of households classified as "partial completes." Table 1-3 shows the percentage of NHIS interviews classified as partially complete in Panels 3 through 13. The NHIS partial completes are, as a group, more difficult to complete in MEPS than the full NHIS completes and therefore receive special monitoring. For Panel 13, partial completes made up 25 percent of the MEPS sample, the highest percentage so far in MEPS.

Table 1-3. Percentage of NHIS households with partially completed interviews in Panels 3 to 13

Panel Percentage with partially completed interviews
3 10
4 21
5 24
6 22
7 17
8 20
9 19
10 16
11 23
12 19
13 25

Return To Table Of Contents

2. Instrument and Materials Design

This chapter describes changes to the computer assisted personal interviewing (CAPI) instrument and supporting field materials made in support of the data collection activities for Spring and Fall 2008 (Panel 11 Round 5, Panel 12 Rounds 3 and 4, and Panel 13 Rounds 1 and 2).

Return To Table Of Contents

2.1 Questionnaire Changes for Spring and Fall 2008

During 2008, the following revisions were made to the MEPS CAPI instrument:

  • Reenumeration. In Panel 12 Round 4 and Panel 13 Round 2, question wording was revised to probe for relationships more clearly when someone new joins the household.
     
  • Priority Conditions. The supplemental section asked in Panel 12 Round 3 and Panel 11 Round 5 was revised to collect additional information about two conditions (diabetes and asthma). In Panel 12 Round 4 and Panel 13 Round 2, on-screen instructions were added on coding "don't know" or "refused" for type of cancer.
     
  • Child Preventive Health. The wording of questions in the supplemental section asked in Panel 13 Round 2 and Panel 12 Round 4 was revised to correspond with changes made to the 2008 SAQ.
     
  • Charge Payment. In Panel 11 Round 5, Panel 12 Rounds 3 and 4, and Panel 13 Rounds 1 and 2, the wording of the question text and interviewer instructions was revised to clarify intent and improve respondent comprehension of questions about sources of payment and out-of-pocket payments.
     
  • Access to Care. In the supplemental section asked in Panel 13 Round 2 and Panel 12 Round 4, question wording was revised to better identify individual medical providers seen at facilities.
     
  • Employment. In Panel 11 Round 5, Panel 12 Rounds 3 and 4, and Panel 13 Rounds 1 and 2, employer addresses were no longer collected.
     
  • Closing. In Panel 13 Round 2 and Panel 12 Round 4, "cell phone" was added as a response category when a second contact phone number is collected.

Table 2-1 shows the supplements in the CAPI instrument for the rounds administered in calendar year 2008.

Table 2-1. Supplements to the CAPI core questionnaire (including hard-copy materials) for 2008

Supplement Round 1 Round 2 Round 3 Round 4 Round 5
Child Health – X – X –
Priority Conditions – – X – X
Preventive Care – – X – X
Access to Care – X – X –
Satisfaction with Health Care – X – X –
Income – – X – X
Assets – – – – X
Medical Provider Authorization Forms X X X X X
Pharmacy Authorization Forms – – X – X
Self-Administered Questionnaire – X Round 2 follow-up only X Round 4 follow-up only
Diabetes Care Supplement – – X – X
Institutional History Form – X X X X
Priority Condition Enumeration X New RU members only X New RU members only X

Note: – indicates the supplement was not administered in that round.

Return To Table Of Contents

2.2 Changes to Materials and Procedures for Spring and Fall 2008

Increased awareness of the importance of protecting respondent data led to some procedural and material changes in 2008 to assure the security of the data collected. In addition, MEPS is working toward a long-term goal of eliminating all but essential hard copy, which increases the risk of exposing personally identifiable information (PII).

Because of the respondent incentive experiment introduced in Panel 13, changes to materials and procedures were kept to a minimum to reduce the risk that these changes could influence the outcome of the experiment. Respondent contact materials (brochure, advance letters, etc.) were not changed materially; nor were the administrative forms used for record keeping revised in any significant way, except to support the documentation of the incentive experiment implementation.

Changes made to MEPS materials and manuals are described below.

Instructional Manuals

  • Field Interviewer Manual. The field interviewer manual was updated to cover changes made to the Interviewer Management System (IMS), part of the Basic Field Operating System (BFOS) in the Windows-based system. For reference purposes, an appendix was added with generic copies of the refusal letters mailed to respondents. Another appendix was added with specific instructions for the Panel 13 incentive experiment.
     
  • Question by Question Specifications. Question by Question specifications were updated to cover revisions to the instrument.

Case Materials

  • RU Folder. The RU folder was revised so that one version could be used for all rounds, with different rounds indicated by the folder color.
     
  • Record of Calls. The hard copy Record of Calls printed on the RU Folder was changed from a format for recording each contact attempt to a "Notes" page. Interviewers use this page to record notes that can be referred to when entering contacts in the Electronic Record of Calls in BFOS. This change was made as part of the goal of reducing paperwork and increasing security.
     
  • Advance Contact Record (ACR). Most revisions to the ACR were made to collect information for use in evaluating the incentive experiment. Two questions were added to capture whether respondents received and reviewed the respondent mailings, and one question was added to determine if the RU has moved. A new final disposition code was added: "Unable to contact." The number of contacts and the name of the ACR respondent were no longer recorded, and a question asking if the respondent would prefer a VHS tape instead of a DVD was dropped.
     
  • Self-Administered Questionnaire (SAQ). Some inconsistencies in the underlining and bolding of certain words compared to SAQs in previous years were corrected in Spring 2008. In Fall 2008 the SAQ was updated for use in Panel 12 Round 4 and Panel 13 Round 2.
     
  • Diabetes Care Supplement (DCS). A question asking about the A1-C blood test was revised to be more descriptive. A question relating to flu vaccination was revised to include "nasal spray" so that it corresponds to CAPI.
     
  • Health Care Information Record Keeper. This newly designed form distributed to respondents at the end of the interview replaced the Record Keeper Tri-Fold used in past years. The Record Keeper includes space to record events as well as health care providers’ contact information.
     
  • Interview Quick Reference Guide. The Job Aid booklet used in previous years was replaced with a condensed version designed for use during the interview.

Security-Related Revisions

  • Laptop Passwords. At the start of each cycle of data collection (Spring and Fall), passwords were changed on all interviewer and supervisor laptops as a safeguard against access to the laptop by an unauthorized user.
     
  • Encryption. Beginning in Fall 2008, PGP full disk encryption was implemented on all laptops to protect data. With this enhancement, Westat was also able to provide field supervisors and field managers with high-speed internet access to BFOS.
     
  • Instructions for reporting lost case materials and stolen laptops. As part of compliance with the security certification and accreditation (C&A) requirements, interviewers are required each year to read procedures for reporting lost or stolen materials and laptops and to sign a receipt indicating they read the material. This procedure takes place at training for new interviewers; the material is mailed to existing field staff each year, with new confidentiality pledges to sign and return.
     
  • Incident Reporting Plan. Westat developed a plan for reporting the loss of laptops or hard copy materials with personal identifying information in accordance with IRB and government requirements. This included a report log used to track the resolution of all security issues, and a hotline number which was staffed 24/7 to ensure that any incidents were reported promptly. During 2008 an automated notification system was developed and tested, to be implemented in 2009.

Return To Table Of Contents

3. Recruiting and Training

3.1 Recruiting for 2008

A new sample design with both new and overlap PSUs was implemented beginning in 2007 with Panel 12. Some of the new PSUs with light workloads were not staffed for Panel 12. Selected travelers worked cases in these locations. In 2008, with the introduction of Panel 13, the sample size in the new PSUs increased sufficiently to hire local interviewers.

Recruiting for 2008 began in Fall 2007 following delivery of the Panel 13 sample. Recruiting needs were established by estimating the full workload for the new panel and adding it to the existing workload in Panels 11 and 12. The projected total caseload in each PSU was used to calculate the number of interviewers needed. This number was compared to the number of active interviewers on staff in each PSU, to determine PSU-level staffing requirements.
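
The staffing calculation described here amounts to simple per-PSU arithmetic. A sketch follows; the cases-per-interviewer capacity is purely illustrative (the report does not give this figure), and the function name is our own.

```python
import math

# Illustrative capacity: cases one interviewer can carry in a round.
# The report does not specify this number; 60 is a placeholder.
CASES_PER_INTERVIEWER = 60

def interviewers_to_recruit(projected_caseload, active_interviewers):
    """New interviewers needed in a PSU: the projected caseload divided
    by per-interviewer capacity, less active interviewers on staff
    (never negative)."""
    needed = math.ceil(projected_caseload / CASES_PER_INTERVIEWER)
    return max(0, needed - active_interviewers)
```

Summing this quantity across PSUs yields the recruiting target for the year.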

A total of 145 interviewers were recruited and 135 completed the training programs. With the addition of these new trainees, the project began 2008 data collection with a total of 484 interviewers. Of these, 35 were experienced interviewers working in PSUs with only Panel 11 Round 5 cases, whose work ended after the Spring data collection. Ninety-seven interviewers (20 percent) were lost to attrition during the spring interviewing rounds, and an additional 11 (2 percent) of those remaining were lost during the fall round. Total attrition for the year was 22 percent, excluding the interviewers whose work ended with Panel 11. This rate is comparable to the prior six years, when attrition rates ranged from 21 to 24 percent.

Return To Table Of Contents

3.2 2008 Trainings

The interviewer training program for 2008 included in-person training for new interviewers in Anaheim, California, February 1-14; a home study for experienced interviewers prior to the start of Rounds 1/3/5; and a home study for all interviewers prior to the start of Rounds 2/4. Both the in-person training and the home study trainings were modeled on the 2005 materials, with updates to correspond with the new Windows-based instrument. The 11-day in-person session included instruction on the administration of the Round 1 core interview, followed by several days of training on dependent interviewing and the supplemental sections in the Rounds 3 and 5 interviews. For one day of the training, 24 bilingual interviewers were brought together from their separate training rooms to practice administering the instrument in Spanish during role plays. After the general training was completed, they were given an additional day to practice introducing the survey and answering respondent questions in Spanish.

In Fall 2008, all interviewers completed a Round 2/4 home study, and interviewers who attended the February 2008 in-person training were required to participate in a mock interview. The home study featured a review of the supplemental sections, information about new procedures and updates to the instrument, and an exercise to be completed and returned to supervisors. Each interviewer completing the home study was instructed to store the supplemental reading in his/her Interviewer’s Procedures Manual for future reference.

To hone interviewers’ skills and maintain data quality, the project used several methods of continuing education during 2008. Emails were sent to all field staff on a daily basis to keep them informed of the progress of data collection; these often contained instructions, reminders, and clarifications of procedures and questionnaire items. During 2008, Wednesday production emails to the field sometimes included a "refusal conversion exercise" scenario. Scenarios reflected common respondent cooperation issues as reported by field staff. Interviewers were instructed to reflect on the scenario, and email their supervisor with their ideas on how best to approach the situation presented in the scenario. The best ideas were shared with all interviewers.

A quarterly newsletter provided updates about project news and a more in-depth look at selected procedures. In addition, interviewers could send questions to be answered by home office staff in an "Ask Dr. MEPS" column included in the newsletter.

Return To Table Of Contents

4. Data Collection

4.1 Schedule

Table 4-1 shows the calendar dates and number of weeks per round in the standardized, "steady state" data collection schedule for the five rounds of MEPS household data collection. The data collection schedule has remained essentially unchanged since 2002. There is a two-week interval between the end of Rounds 1 and 3 and the start of Rounds 2 and 4. Rounds 3 and 5 begin in mid-January of each year, followed by a February 1 start-up for Round 1. The later start of Round 1 allows for a minimum four-week reference period for the first round of MEPS interviews. The fixed schedule provides a secure anchor for scheduling the related activities that precede or immediately follow data collection, such as the preparation of field materials for subsequent rounds and identification of the sample for the Medical Provider Component.

Table 4-1. Data collection schedule and number of weeks per round of data collection

Round Dates No. of weeks in round
1 February 1 – July 15 23
2 August 1 – December 15 20
3 January 10 – June 15 22
4 July 1 – December 1 21
5 January 15 – May 31 19
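
As a rough arithmetic check, the week counts in Table 4-1 can largely be reproduced by dividing the elapsed days in each round by seven (here using the 2008 calendar). The counts agree to within a week; small differences, such as in Round 2, likely reflect how partial weeks were counted in the table.

```python
from datetime import date

# Nominal 2008 start and end dates for each round, from Table 4-1.
ROUNDS = {
    1: (date(2008, 2, 1), date(2008, 7, 15)),
    2: (date(2008, 8, 1), date(2008, 12, 15)),
    3: (date(2008, 1, 10), date(2008, 6, 15)),
    4: (date(2008, 7, 1), date(2008, 12, 1)),
    5: (date(2008, 1, 15), date(2008, 5, 31)),
}

def weeks_in_round(start, end):
    """Whole weeks elapsed between the start and end of a round."""
    return (end - start).days // 7

for rnd, (start, end) in sorted(ROUNDS.items()):
    print(f"Round {rnd}: {weeks_in_round(start, end)} weeks")
```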

Return To Table Of Contents

4.2 Operations

Incentive Experiment

New for Panel 13 was the implementation of a respondent incentive experiment to test the effect of different levels of payment on response rates, nonresponse bias, data quality, and costs. The experiment will be in place for all five rounds of data collection. In 2008, the experiment was carried out on Rounds 1 and 2 of Panel 13. As mentioned earlier, a full description of the experimental design and results from the first two rounds of data collection will be provided in a separate report.

To enhance comparison of the results from the experiment with prior MEPS panels, procedural changes to operational activities were held to a minimum, except for changes required to implement the experiment. This was done so that differences detected in the research objectives could be attributed to the different incentive amounts. Pre-field activities, including advance letter mailouts, advance contact calls, and assignment material preparation, remained unchanged from prior years. Home office tracking, dissemination of information from calls to the respondent information line, mailing of refusal letters, and other data collection support activities were also relatively unchanged from prior years.

Implementation of the incentive experiment involved several minor changes to the case materials and reporting forms. So that interviewers knew the incentive amount assigned to a case, all labels on case folders and RU folders contained a code indicating the amount. In addition, the interviewer’s weekly status report, the Interviewer Assignment Sheet, indicated the incentive amount. To reduce the risk of paying the respondent the wrong amount, a check for the appropriate amount was included in each case folder. Interviewers were trained to exercise caution when handing an advance letter to a Panel 13 respondent, since the letter indicated the payment amount. Checks of respondent payment receipts during home office receipt processing verified that interviewers followed these procedures: fewer than a dozen households were paid the wrong amount across Rounds 1 and 2 of Panel 13.

To avoid any possibility of influencing the outcome of the experiment, home office and field supervisors and managers were blinded to the production and response rate status by incentive group throughout the field period. Although the incentive amount for each case was clearly visible on the materials, combining the outcomes by incentive group for reporting purposes was not done until the end of the data collection round.

Transition to the New NHIS PSUs

The challenges and complications experienced in 2007, when the Blaise/WVS instrument was first deployed and the new panel of households (Panel 12) was selected from the new NHIS sample design, had less of an impact on 2008 data collection. In the Spring rounds, only one panel, Panel 11, was still in the old sample design and using the DOS-based instrument. Although interviewers working in all three panels still had to carry two laptops during the Spring data collection, the remaining challenges of working in a new instrument and locating households in new geographic areas were minimized by the experience gained during the prior year. By Fall, the Panel 11 sample had been retired and both rounds of fall data collection were in the new sample of PSUs using the Blaise/WVS system, which had a positive impact on response rates in the Fall data collection. (Response rates are provided in Section 4.3, Data Collection Results.)

One challenge to the 2008 data collection effort was covering the work in the 102 MEPS PSUs in the old sample design (Panel 11 Round 5 work). Interviewers in these PSUs saw their caseloads diminish considerably from the levels of earlier years.

Security Incidents

In 2008 the method for reporting incidents of lost/stolen hard copy and laptops was formalized and documented in the plan "Procedures For Reporting Incidents of Loss/Theft of Laptop Computers and Hard Copy with Personal Identifying Information (PII)". A documentation log describing each incident was maintained and provided to AHRQ whenever an incident occurred or an update was made to the documentation. AHRQ was notified within one hour of the discovery of each loss or suspected loss.

There were 13 separate incidents of lost or stolen hard copy and laptops reported in 2008. In six of the incidents, the lost items were recovered. In the remaining incidents, 13 case folders, one authorization form, and two laptops were lost and not recovered; police reports were filed for each laptop. All respondents at risk of PII exposure were notified of the loss by certified mail. To minimize the risk of exposure, all MEPS laptops were encrypted in August 2008 using a system of file-based and full-disk encryption software (PGP) that is FIPS 140-2 compliant. Both unrecovered laptops were full-disk encrypted. However, one still posed a security risk: it had not been returned by an interviewer who was released from the study, and that interviewer could access the information on the laptop using her assigned password.

Travel to Complete Work

Table 4-2 shows the percent of cases completed on travel status during the Spring data collection rounds in 2006 through 2008. Nearly 18 percent of completes obtained in the Spring 2007 data collection were obtained on travel status; in 2008 the figure rose to nearly 20 percent. The percent of Panel 13 Round 1 completes obtained on travel status (23.7 percent) decreased from Panel 12 Round 1 (26.3 percent). One contributing factor could be the distribution of the workload among PSUs in the old and new sample designs. With the addition of a second panel in the new design, the work in the new PSUs increased enough to support local interviewers. As can be seen from the table, the percent of Round 1 cases completed on travel in Panel 11 Round 1, when the old sample design was in place, was only 20.2 percent. The spike in Panel 12 to 26.3 percent was followed the next year by a partial return toward that earlier level (23.7 percent).

The overall increase in completes done on travel in Spring 2008 could be due to interviewer attrition driven by the small caseloads in the Panel 11 Round 5 old-design PSUs. With more PSUs lacking local staff, the need for travel shifted to Panel 11 (from Panel 12 in 2007).

Table 4-2. Percent of total interviews conducted on travel

Data collection period  All completes  Completed on travel (N)  Completed on travel (%)
Spring 2006 (P11R1, P10R3, P9R5) 20,939 3,498 16.7
Spring 2006, P11R1 only 7,585 1,528 20.2
Spring 2006, P11R1 as percent of Spring total 36.2 43.7
Spring 2007 (P12R1, P11R3, P10R5) 19,369 3,439 17.8
Spring 2007, P12R1 only 5,901 1,552 26.3
Spring 2007, P12R1 as percent of Spring total 30.5 45.1
Spring 2008 (P13R1, P12R3, P11R5) 20,181 3,951 19.6
Spring 2008, P13R1 only 8,017 1,903 23.7
Spring 2008, P13R1 as percent of Spring total 39.7 48.2
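The "as percent of total" rows in Table 4-2 can be reproduced from the counts; a minimal sketch using the Spring 2008 figures (the interpretation of those rows as Round 1 shares of the Spring totals is inferred from the numbers, not stated in the source):

```python
# Reproduce the Spring 2008 Round 1 shares in Table 4-2 from the counts.
# Interpretation (R1-only counts divided by the Spring totals) is
# inferred from the figures, not an official definition.
all_completes, all_travel = 20_181, 3_951   # Spring 2008, all three panels
r1_completes, r1_travel = 8_017, 1_903      # P13R1 only

share_of_completes = round(100 * r1_completes / all_completes, 1)  # 39.7
share_of_travel = round(100 * r1_travel / all_travel, 1)           # 48.2
travel_rate_r1 = round(100 * r1_travel / r1_completes, 1)          # 23.7
print(share_of_completes, share_of_travel, travel_rate_r1)
```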


Patient Profile Collection

The Medical Provider Component continued to have difficulty securing cooperation from several large pharmacy chains, and the procedure for collecting patient profiles from these pharmacies was folded into the Household Component data collection. As in 2007, the decision to collect the profiles was made before the field period for the Panel 12 Round 4 data collection effort started, so the request for profiles could be made at the end of the Round 4 interview. Five pharmacies were included in the patient profile collection in 2008.

The same procedures used for the 2007 patient profile collection were used in 2008. For Panel 12 Round 4 households, letters with instructions and lists of RU members who used the pharmacies were assembled and included in the case folder for each household with signed authorization forms. Respondents were told that upon receipt of the patient profile(s), they would be paid $30 for the time and effort spent collecting the profile(s).

Panel 11 Round 5 households were mailed a request to collect patient profiles after they had completed their last interview round. These households were also told that they would be sent a check for $30 for returning patient profiles.

Results of the effort are shown in Table 4-3. The percentage of profiles collected from household respondents in 2008 was comparable to the 2007 patient profile collection: although more profiles were requested in 2008, the percentage of completed profiles received stayed the same. The effort provided patient profiles that could not have been collected in the MPC through corporate contacts.

Table 4-3. Results of patient profile collection for medications prescribed in 2008

P12R3 and P11R5, in-person and mail collection
Pharmacy  Total number  Total received  Percent received  Total complete  Completes as a percent of total
Total RUs 2,764 1,116 40.4% 775 28.0%
Total Pairs 4,331 1,643 37.9% 1,118 27.4%

P12R3, in-person collection
Pharmacy  Total number  Total received  Percent received  Total complete  Completes as a percent of total
Total RUs 1,173 717 61.1% 488 41.6%
Total Pairs 1,791 1,091 60.9% 740 41.3%

P11R5, all mail collection
Pharmacy  Total number  Total received  Percent received  Total complete  Completes as a percent of total
Total RUs 1,591 399 25.1% 287 18.0%
Total Pairs 2,540 552 21.7% 448 17.6%


Quality Control

Quality control measures followed in previous panels continued to receive attention during the 2008 data collection effort. Five full-time experienced MEPS field interviewers made validation calls by phone; field supervisors also validated some of the work in their regions, especially the work of new interviewers. Cases without phone numbers, or cases that were difficult to reach by phone, were validated in person or by mail. About 20 percent of the sample was pre-selected for validation, and at least 15 percent of each interviewer's case assignment was validated to ensure that the interview took place and that appropriate procedures were followed. In addition, supervisors selected at least one interviewer from the region in each data collection cycle (Spring and Fall) for 100 percent validation.

As in prior years, all interviews completed in less than 30 minutes were validated. The problems found in these interviews were comparable in frequency and type to those found in the validation of interviews longer than 30 minutes. For interviews of less than 30 minutes, some respondents told the validator that the interview took 45 minutes to an hour, but many respondents were not certain about the interview duration. To date, no falsifications have been found in the interviews of less than 30 minutes. All new interviewers were observed in person at least once during their first year of interviewing. No interviewers were released as a result of an observation, although most received feedback on ways to improve specific interviewing skills.
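The selection rules described above can be sketched roughly as follows. This is a hypothetical illustration only: the actual MEPS validation systems are not described in detail in this report, and the function and field names here are invented.

```python
import math
import random

def select_for_validation(cases, presel_rate=0.20, per_int_min=0.15):
    """Pick cases to validate. `cases` is a list of dicts with keys
    'id', 'interviewer', and 'minutes' (interview length)."""
    # Pre-select roughly 20 percent of the sample at random.
    chosen = {c["id"] for c in cases if random.random() < presel_rate}
    # Every interview completed in under 30 minutes is always validated.
    chosen |= {c["id"] for c in cases if c["minutes"] < 30}
    # Top up so at least 15 percent of each interviewer's assignment
    # is validated.
    by_interviewer = {}
    for c in cases:
        by_interviewer.setdefault(c["interviewer"], []).append(c)
    for assignment in by_interviewer.values():
        needed = math.ceil(per_int_min * len(assignment))
        already = [c for c in assignment if c["id"] in chosen]
        pool = [c for c in assignment if c["id"] not in chosen]
        for c in random.sample(pool, max(0, needed - len(already))):
            chosen.add(c["id"])
    return chosen
```

Supervisor-driven 100 percent validation of a selected interviewer would simply add every case in that interviewer's assignment to the selected set.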


4.3 Data Collection Results

Table 4-4 provides an overview of the data collection results, showing sample sizes, average interviewer hours per completed interview, and response rates for Panels 9 through 13. (Table A-2 in Appendix A shows the data collection results for all panels.) Response rates achieved in all five rounds of interviewing in 2008 exceeded those achieved in 2007, with the exception of Panel 12 Round 4, which remained within half a percentage point of the Round 4 response rates in the three prior panels.

The response rates for Panel 13 were noticeably higher than in recent panels. The Round 1 response rate was the highest since Panel 10, exceeding the Round 1 response rates in Panels 11 and 12 by as much as 1.5 percentage points. The Panel 13 Round 2 response rate was 2.4 percentage points higher than in Panel 12 and 1.7 percentage points higher than in Panel 11; it was the highest Round 2 response rate since Panel 9 in 2004.

With two panels in the new sample design, total caseloads in the new PSUs increased. The increase in work allowed for a more efficient assignment of cases in the PSUs, as reflected in a decrease in hours per complete. Panel 13 Round 1 hours per complete decreased by two hours from Panel 12 Round 1 (12.2 vs. 14.2), and Panel 12 Round 3 hours per complete decreased by one hour from Panel 12 Round 2. Panel 12 Round 3 had the benefit of a second panel (Panel 13) in the new design to increase workload; during its first year in the field (Rounds 1 and 2), Panel 12 was the only panel in the new PSUs and experienced some inefficiencies because of the small workload.
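The Table 4-4 columns are related by simple arithmetic. As a minimal sketch (the relationship is inferred from the column values, not an official MEPS formula), the net sample and response rate for Panel 13 Round 1 can be reproduced as:

```python
# Reproduce the Panel 13 Round 1 row of Table 4-4. The relationship
# (net sample = original + splits + students - out of scope;
# response rate = completes / net sample) is inferred from the
# column values, not taken from an official formula.
original, splits, students, out_of_scope = 9_939, 502, 97, 213
completes = 8_017

net_sample = original + splits + students - out_of_scope   # 10,325
response_rate = round(100 * completes / net_sample, 1)     # 77.6
print(net_sample, response_rate)
```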

Table 4-5 shows response rates and the components of nonresponse for Round 1 of the five most recent MEPS panels. The refusal rate for Panel 13 was the lowest since before Panel 9 in 2004, dropping by 2 percentage points from the refusal rates in Panels 11 and 12. This may be a result of the increased incentive amounts offered to nearly two thirds of the Panel 13 households.

Table 4-6 shows the components of nonresponse for Rounds 2 and 4. Panel 13 again showed the most marked improvement in response rate and the lowest rate of refusals: the refusal rate for Panel 13 Round 2 was nearly 2.5 percentage points lower than the refusal rate in Panel 12 Round 2. Again, the increased incentive amount may have had the largest impact on the reduction in refusals.

Medical provider authorization form signing rates are shown in Table 4-7 for Panels 9 through 13. (Table A-3 in Appendix A shows the signing rates for all panels and rounds to date.) After declining in Panel 12, signing rates increased in Panel 13; the largest gain was 9.4 percentage points, from 62.9 percent in Panel 12 Round 1 to 72.3 percent in Panel 13 Round 1.

Table 4-8 shows signing rates for pharmacy authorization forms for Panels 9 through 12. (Table A-4 in Appendix A shows the signing rates for all panels and rounds to date.) In 2008, the signing rates for these forms for both Panel 11 Round 5 and Panel 12 Round 3 were higher than the previous year's rates.

Table 4-4. MEPS HC data collection results, Panels 9 through 13

Panel/round Original sample Split cases (movers) Student cases Out-of-scope cases Net sample Completes Average interviewer hours/complete Response rate (%) Response rate goal
Panel 9 Round 1  8,939 417 73 179 9,250 7,205 10.5 77.9 84.0
Panel 9 Round 2  7,190 237 40 40 7,427 7,027 7.7 94.6 95.0
Panel 9 Round 3  7,005 189 24 31 7,187 6,861 7.1 95.5 97.5
Panel 9 Round 4  6,843 142 23 44 6,964 6,716 7.4 96.5 97.0
Panel 9 Round 5  6,703 60 8 43 6,728 6,627 6.1 98.5 97.0
Panel 10 Round 1  8,748 430 77 169 9,086 7,175 11.0 79.0 84.0
Panel 10 Round 2  7,148 219 36 22 7,381 6,940 7.8 94.0 95.0
Panel 10 Round 3  6,921 156 10 31 7,056 6,727 6.8 95.3 98.0
Panel 10 Round 4  6,708 155 13 34 6,842 6,590 7.3 96.3 97.0
Panel 10 Round 5  6,596 55 9 38 6,622 6,461 6.2 97.6 97.0
Panel 11 Round 1  9,654 399 81 162 9,972 7,585 11.5 76.1 84.0
Panel 11 Round 2  7,572 244 42 24 7,834 7,276 7.8 92.9 95.0
Panel 11 Round 3  7,263 170 15 25 7,423 7,007 6.9 94.4 98.0
Panel 11 Round 4  7,005 139 14 36 7,122 6,898 7.2 96.9 97.0
Panel 11 Round 5  6,895 51 7 44 6,905 6,781 5.5 98.2 97.0
Panel 12 Round 1  7,467 331 86 172 7,712 5,901 14.2 76.5 84.0
Panel 12 Round 2  5,901 157 27 27 6,058 5,584 9.1 92.2 95.0
Panel 12 Round 3  5,580 105 13 12 5,686 5,383 8.1 94.7 98.0
Panel 12 Round 4  5,376 102 12 16 5,474 5,267 8.8 96.2 97.0
Panel 13 Round 1  9,939 502 97 213 10,325 8,017 12.2 77.6 84.0
Panel 13 Round 2  8,008 220 47 23 8,252 7,809 9.0 94.6 95.0


Table 4-5. Summary of nonresponse for Round 1, 2004-2008

  2004 (P9R1)  2005 (P10R1)  2006 (P11R1)  2007 (P12R1)  2008 (P13R1)
Net sample of RUs (N) 9,250 9,086 9,972 7,712 10,325
Response rate (%) 77.9 79.0 76.1 76.5 77.6
Refusal rate (%) 17.5 16.6 18.4 18.4 16.4
Unlocated rate (%) 3.0 3.3 3.8 3.9 4.3
All remaining nonresponse (%) 1.6 1.1 1.7 1.2 1.7

NOTE: Figures in tables showing results of field work are drawn from the database used to monitor ongoing production and from the ‘delivery’ database, which reflects minor adjustments made in post-data collection processing. This is the source of several discrepancies in totals shown in the tables.


Table 4-6. Summary of nonresponse for Rounds 2 and 4, 2005-2008

  2005 (P9R4)  2006 (P10R4)  2007 (P11R4)  2008 (P12R4)  2005 (P10R2)  2006 (P11R2)  2007 (P12R2)  2008 (P13R2)
Net sample of RUs (N) 6,964 6,842 7,122 5,472 7,381 7,834 6,058 8,253
Response rate (%) 96.5 96.3 96.8 96.2 94.0 92.9 92.2 94.6
Refusal rate (%) 2.2 2.5 2.0 2.7 4.5 5.3 6.2 3.8
Unlocated rate (%) 0.8 0.7 0.7 0.7 0.9 1.1 1.0 1.0
All remaining nonresponse (%) 0.5 0.5 0.5 0.4 0.6 0.6 0.6 0.6


Table 4-7. Signing rates for medical provider authorization forms for Panels 9 through 13

Panel/round  Authorization forms requested  Authorization forms signed  Signing rate (%)
Panel 9 Round 1 2,253 1,681 74.6
Panel 9 Round 2 22,668 17,522 77.3
Panel 9 Round 3 19,601 13,672 69.8
Panel 9 Round 4 20,147 14,527 72.1
Panel 9 Round 5 15,963 10,720 67.2
Panel 10 Round 1 2,068 1,443 69.8
Panel 10 Round 2 22,582 17,090 75.7
Panel 10 Round 3 18,967 13,396 70.6
Panel 10 Round 4 19,087 13,296 69.7
Panel 10 Round 5 15,787 10,476 66.4
Panel 11 Round 1 2,154 1,498 69.5
Panel 11 Round 2 23,957 17,742 74.1
Panel 11 Round 3 20,756 13,400 64.6
Panel 11 Round 4 21,260 14,808 69.7
Panel 11 Round 5 16,793 11,482 68.4
Panel 12 Round 1 1,695 1,066 62.9
Panel 12 Round 2 17,787 12,524 70.4
Panel 12 Round 3 15,291 10,006 65.4
Panel 12 Round 4 15,692 10,717 68.3
Panel 13 Round 1 2,217 1,603 72.3
Panel 13 Round 2 24,357 18,566 76.2


Table 4-8. Signing rates for pharmacy authorization forms for Panels 9 through 12

Panel/round  Authorization forms requested  Authorization forms signed  Signing rate (%)
Panel 9 Round 3 14,334 11,189 78.1
Panel 9 Round 5 13,416 10,893 81.2
Panel 10 Round 3 13,928 10,706 76.9
Panel 10 Round 5 12,869 10,260 79.7
Panel 11 Round 3 14,937 11,328 75.8
Panel 11 Round 5 13,778 11,332 82.3
Panel 12 Round 3 10,840 8,242 76.0


Table 4-9 shows the results of the Self-Administered Questionnaire (SAQ) data collection. SAQ collection begins in Rounds 2 and 4 of a panel, with followup for nonresponse in Rounds 3 and 5. Table 4-9 shows both the round-specific response rates and the combined rate after the followup round was completed. (Table A-5 in Appendix A shows the results of the SAQ collection for all applicable panels and rounds to date.) The combined rates for the first year of Panel 13 and the second year of Panel 12 showed slight increases over their counterparts in the prior year. The SAQ response rate for Panel 13 Round 2 was the highest it has been since before Panel 9.
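The combined rates in Table 4-9 appear to pool the initial-round and followup completes over the initial-round requests. A minimal sketch using the Panel 12 figures (this derivation is inferred from the table values, not stated in the source):

```python
# Combined 2007 SAQ response rate for Panel 12, reproduced from the
# Table 4-9 rows: Round 2 completes plus Round 3 followup completes,
# over SAQs requested in Round 2. Relationship inferred from the table.
requested = 10_061              # Panel 12 Round 2 SAQs requested
completed_r2 = 8_419
completed_r3_followup = 711

combined_completed = completed_r2 + completed_r3_followup      # 9,130
combined_rate = round(100 * combined_completed / requested, 1) # 90.7
print(combined_completed, combined_rate)
```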

Table 4-9. Results of self-administered questionnaire (SAQ) collection for Panels 9 through 13

  SAQs requested  SAQs completed  SAQs refused  Other nonresponse  Response rate (%)
Panel 9 Round 2 12,541 10,631 381 1,529 84.8
Panel 9 Round 3 1,670 886 287 496 53.1
Panel 9 Combined, 2004 12,541 11,517 668 2,025 91.9
Panel 9 Round 4 11,913 10,357 379 1,177 86.9
Panel 9 Round 5 1,478 751 324 403 50.8
Panel 9 Combined, 2005 11,913 11,108 703 1,580 93.2
Panel 10 Round 2 12,360 10,503 391 1,466 85.0
Panel 10 Round 3 1,626 787 280 559 48.4
Panel 10 Combined, 2005 12,360 11,290 671 2,025 91.3
Panel 10 Round 4 11,726 10,081 415 1,230 86.0
Panel 10 Round 5 1,516 696 417 403 45.9
Panel 10 Combined, 2006 11,726 10,777 832 1,633 91.9
Panel 11 Round 2 13,146 10,924 452 1,770 83.1
Panel 11 Round 3 1,908 948 349 611 49.7
Panel 11 Combined, 2006 13,146 11,872 801 2,381 90.3
Panel 11 Round 4 12,479 10,771 622 1,086 86.3
Panel 11 Round 5 1,621 790 539 292 48.7
Panel 11 Combined, 2007 12,479 11,561     92.6
Panel 12 Round 2 10,061 8,419 502 1,140 83.7
Panel 12 Round 3 1,460 711 402 347 48.7
Panel 12 Combined, 2007 10,061 9,130     90.7
Panel 12 Round 4 9,550 8,303 577 670 86.9
Panel 13 Round 2 14,410 12,541 707 1,162 87.0


The response rates for the Diabetes Care Supplement (DCS) are shown in Table 4-10. (Table A-6 in Appendix A shows the results of the DCS collection for all applicable panels and rounds to date.) Since the DCS is collected only during Rounds 3 and 5, with no followup in the subsequent round, efforts to gain a high response rate are limited to the one round in which the DCS is requested. The DCS rates in the table include the results of an additional followup effort conducted by telephone toward the end of the field period. The response rate for the DCS in Panel 12 Round 3 reached 90 percent for the first time since Panel 9 Round 3.

Table 4-10. Results of diabetes care supplement (DCS) collection for Panels 9 through 12

Panel/round DCSs requested DCSs completed Response rate (%)
Panel 9 Round 3 1,003 909 90.6
Panel 9 Round 5 904 806 89.2
Panel 10 Round 3 1,060 939 88.6
Panel 10 Round 5 1,078 965 89.5
Panel 11 Round 3 1,188 1,030 86.7
Panel 11 Round 5 1,182 1,053 89.1
Panel 12 Round 3 917 825 90.0


Table 4-11 summarizes the Round 1 data collection results for the panels begun in calendar years 2004 through 2008. While the Round 1 response rate in 2008 was the highest since 2005, the most notable result is the 2 percentage point decrease in the refusal rate over the same period. The increased incentive amounts tested in the Panel 13 (2008) data collection likely contributed to this difference. On the other hand, the not-located rate for this panel was higher than in past years, in keeping with a steady increase in this rate over time.

Table 4-11. Summary of MEPS Round 1 response, 2004-2008 panels

  2004 2005 2006 2007 2008
Total sample (N) 9,429 9,240 10,139 7,883 10,538
Out of scope (%) 1.9 1.8 1.5 2.1 2.0
Complete (%) 77.9 79.0 76.1 76.6 77.6
Nonresponse (%) 22.1 21.0 23.9 23.4 22.4
   Refusal (%) 17.5 16.6 18.4 18.4 16.4
   Not located (%) 3.0 3.3 3.8 3.9 4.3
   Other nonresponse (%) 1.6 1.1 1.7 1.2 1.7
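As a consistency check on Table 4-11 (a sketch, using the percentages from the 2008 column), the complete and nonresponse shares sum to 100 percent of the in-scope sample, and the three nonresponse components sum to the total nonresponse rate:

```python
# Check that the 2008 column of Table 4-11 is internally consistent:
# complete + nonresponse covers the in-scope sample, and the three
# nonresponse components account for all nonresponse.
complete, nonresponse = 77.6, 22.4
refusal, not_located, other = 16.4, 4.3, 1.7

assert abs((complete + nonresponse) - 100.0) < 0.1
assert abs((refusal + not_located + other) - nonresponse) < 0.1
print("Table 4-11 2008 column is internally consistent")
```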


Table 4-12 shows the Round 1 results by NHIS completion status. (The table includes only the originally sampled NHIS households and excludes sample units added during data collection as a result of 'split' households or the identification of student reporting units.) The proportion of partial completes in the Panel 13 sample, at nearly 25 percent, was the highest it has ever been. Despite the increase in these more difficult cases, the response rate improved for both NHIS completes and partial completes. This appears to be due to the incentive experiment: response rates achieved in the $50 and $70 incentive groups were higher than in the $30 group, and high enough to raise the overall Panel 13 Round 1 response rate.

Table 4-12. Summary of MEPS Round 1 response, 2004-2008 panels, by NHIS completion status

  2004 2005 2006 2007 2008
Original NHIS sample (N) 8,939 8,748 9,654 7,467 9,939
   Percent complete in NHIS 81.4 84.0 77.0 80.6 75.2
   Percent partial complete in NHIS 18.6 16.0 23.0 19.4 24.8
MEPS Round 1 response rate          
   Percent complete for NHIS completes 81.0 81.2 80.1 79.8 81.2
   Percent complete for NHIS partial completes 64.4 69.6 64.4 63.3 67.0

NOTE: Includes only households in sample originally provided from NHIS.


Table 4-13 presents the completion percentages for the NHIS completes and partial completes by race/ethnicity for the 2005-2008 panels. The table shows substantial changes over time in the proportion of households in each race/ethnicity group. For 2008, the largest change was in the White/other group, which decreased as a proportion of the sample by about 12 percentage points from the prior two years. This group has historically had the lowest response rates, and its smaller share of the sample may also have contributed to the increase in the Round 1 response rate in Panel 13. The Black and Hispanic groups also changed in proportion, each increasing its representation by roughly 5 percentage points. As in prior years, the response rates for the Asian and White/other groups were lower than for the Black and Hispanic groups.

Table 4-13. Summary of MEPS Round 1 response rates, 2005-2008 panels, by race/ethnicity and NHIS completion status

  2005: % of net sample, % complete  2006: % of net sample, % complete  2007: % of net sample, % complete  2008: % of net sample, % complete
Asian total 4.6 71.1 4.6 71.1 6.2 7.7 72.6  
   NHIS complete 3.8 75.3 3.1 75.7 4.8 74.3 5.0 75.9
   NHIS partial 0.8 50.7 1.6 62.3 1.4 61.5 2.6 66.3
Black total 17.8 82.5 15.9 80.8 16.4 81.5 21.1 82.7
   NHIS complete 14.7 83.8 12.3 83.9 13.2 83.7 15.6 86.4
   NHIS partial 3.0 76.1 3.6 70.2 3.1 72.0 5.5 72.0
Hispanic total 19.2 82.5 19.4 80.4 17.4 78.7 23.5 78.7
   NHIS complete 17.3 82.9 13.9 83.0 13.1 81.7 16.6 81.6
   NHIS partial 4.0 81.1 5.5 74.1 4.3 69.7 6.9 71.7
White/other total 58.4 77.4 60.0 73.6 60.0 75.1 47.7 75.7
   NHIS complete 49.8 79.6 47.5 77.8 49.3 78.6 37.5 79.3
   NHIS partial 8.6 64.5 12.6 57.8 10.7 59.2 10.2 62.5
All groups   79.0   76.1   76.6   77.6
   NHIS complete 83.6 80.8 76.7 79.6 80.4 79.7 74.8 81.1
   NHIS partial 16.4 70.0 23.3 63.9 19.6 63.7 25.2 67.5

NOTE: Includes reporting units added to sample as "splits" and "students" from original NHIS households, which were given the same ‘complete’ or ‘partial complete’ designation as the original NHIS household.


Table 4-14 presents the same breakouts as Table 4-13, but highlights refusals, which comprise most of the nonresponse. Nearly a third of the partial completes in the White/other group (30.4 percent) refused to complete the MEPS interview, contributing to the lower response rate from this group. However, this rate has declined and is the lowest refusal rate for this group since 2003. As with the overall Panel 13 Round 1 response rate, the incentive experiment appears to have had an impact on the refusal rates, most notably in the White/other category, as seen in early, unweighted response rates by incentive group.

Table 4-14. Summary of MEPS refusal rates, 2003-2008 panels, by race/ethnicity and NHIS completion status

  2003 (%)  2004 (%)  2005 (%)  2006 (%)  2007 (%)  2008 (%)
Asian - NHIS complete 18.6 22.1 20.1 19.3 18.1 18.7
Asian - NHIS partial 28.5 30.4 42.3 31.4 24.8 24.9
Black - NHIS complete 9.4 11.2 9.9 10.9 10.8 8.2
Black - NHIS partial 14.1 19.3 17.0 22.9 20.2 18.4
Hispanic - NHIS complete 8.5 8.8 9.3 8.4 10.2 10.6
Hispanic - NHIS partial 12.1 14.9 12.3 15.6 17.4 14.6
White, not Hispanic - NHIS complete 16.0 18.3 17.9 18.2 18.6 17.4
White, not Hispanic - NHIS partial 28.0 32.4 31.3 35.9 36.0 30.4
All groups 15.4 17.5 16.6 18.4 18.4 16.3
   NHIS complete 13.8 15.5 15.0 15.3 15.9 14.1
   NHIS partial 22.4 26.4 24.5 28.7 28.5 22.9


Table 4-15 presents response information for a combination of race/ethnicity and sample domain categories. In general, the response patterns for 2008 are similar to those of prior years. Each of the low-income groups had a higher response rate than the associated non-low-income group. The Asian and the White/other, non-low-income groups had the lowest response rates and highest refusal rates. As in past years, the highest rate for not-located households was among the Hispanic, low-income group.

Table 4-15. Summary of MEPS Panel 13 Round 1 response rates, by sample domain by NHIS completion status

By race/ethnicity and domain  Net sample (N)  Complete (%)  Refusal (%)  Not located (%)  Other nonresponse (%)
Asian 913 72.6 20.6 4.1 2.7
   NHIS complete 519 75.9 18.7 3.1 2.3
   NHIS partial complete 273 66.3 24.9 5.1 3.7
Black, low income 580 87.6 7.2 4.1 1.0
   NHIS complete 473 89.2 5.5 4.2 1.1
   NHIS partial complete 107 80.4 15.0 3.7 0.9
Black, not low income 1,592 81.0 12.2 5.0 1.9
   NHIS complete 1,136 85.3 9.3 3.7 1.7
   NHIS partial complete 456 70.2 19.3 8.1 2.4
Hispanic, low income 693 81.1 7.1 9.8 2.0
   NHIS complete 514 84.2 5.8 8.4 1.6
   NHIS partial complete 179 72.1 10.6 14.0 3.4
Hispanic, not low income 1,727 77.8 13.6 6.6 2.0
   NHIS complete 1,198 80.6 12.5 5.2 1.7
   NHIS partial complete 529 71.5 16.1 9.8 2.6
White/other, low income 620 83.4 11.3 3.9 1.5
   NHIS complete 520 87.1 8.5 3.3 1.2
   NHIS partial complete 100 64.0 26.0 7.0 3.0
White/other, not low income 4,200 74.6 21.6 2.4 1.4
   NHIS complete 3,268 78.1 18.9 1.9 1.1
   NHIS partial complete 932 62.4 30.8 4.3 2.5
All groups 10,325 77.6 16.3 4.3 1.7
   NHIS complete 7,719 81.1 14.1 3.5 1.4
   NHIS partial complete 2,606 67.5 22.9 6.9 2.7

NOTE: Includes reporting units added to sample as "splits" and "students" from original NHIS households, which were given the same ‘complete’ or ‘partial complete’ designation as the original household.


Table 4-16 summarizes the results of refusal conversion efforts by panel. Conversion rates have varied from a low of 23 percent to a high of 28 percent, though the final refusal rates have stayed within 2 percentage points across panels.

Table 4-16. Summary of MEPS round 1 results: ever refused, final refusals, and refusal conversion rate, by panel

Panel  Net sample (N)  Ever refused (%)  Converted (%)  Final refusal rate (%)  Final response rate (%)
Panel 9 9,429 21.9 23.0 17.5 77.9
Panel 10 9,240 21.6 26.8 16.6 79.0
Panel 11 10,139 23.8 24.2 18.4 76.0
Panel 12 7,721 25.4 28.2 18.4 76.6
Panel 13 10,325 22.3 23.7 16.3 77.6


Table 4-17 shows the results of locating efforts for households that required tracking during the Round 1 field period, by panel. The 15.6 percent of the sample that required tracking in Panel 13 was about 1 percentage point lower than in Panel 12, though higher than in earlier panels. The percent not located, 4.2 percent, was the highest it has been, with no obvious reason for the increase.

Table 4-17. Summary of MEPS round 1 results: ever traced and final not located, by panel

Panel Total sample (N) Ever traced (%) Not located (%)
Panel 9 9,429 14.0 3.0
Panel 10 9,240 14.4 3.3
Panel 11 10,139 15.0 3.8
Panel 12 7,883 16.5 3.8
Panel 13 10,538 15.6 4.2


5. Home Office Processing and Support

The variety of home office support activities carried out in prior years continued through 2008. The home office responds to the toll-free respondent information line and relays information from respondent calls to the field. Table 5-1 shows the number and types of calls received during 2007 and 2008. (Table A-7 in Appendix A shows the number and types of calls from 2000 through 2008.)

Table 5-1. Calls to the respondent information line, 2007 and 2008

Reason for call  Spring 2007, Round 1 (P12R1): N, %  Spring 2007, Rounds 3 and 5 (P11R3, P10R5): N, %  Fall 2007, Rounds 2 and 4 (P12R2, P11R4): N, %
Address/telephone change 8 2.1 21 7.3 23 7.6
Appointment 56 14.6 129 44.8 129 42.6
Request callback 72 18.8 75 26.0 88 29.0
No message 56 14.6 37 12.8 33 10.9
Other 20 5.2 15 5.2 6 2.0
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 5 1.3 0 0.0 1 0.3
Refusal 160 41.8 10 3.5 21 6.9
Willing to participate 6 1.6 1 0.3 2 0.7
Total 383   288   303  

Reason for call  Spring 2008, Round 1 (P13R1): N, %  Spring 2008, Rounds 3 and 5 (P12R3, P11R5): N, %  Fall 2008, Rounds 2 and 4 (P13R2, P12R4): N, %
Address/telephone change 20 3.4 12 4.7 21 5.7
Appointment 92 15.5 117 45.9 148 39.9
Request callback 164 27.6 81 31.8 154 41.5
No message 82 13.8 20 7.8 22 5.9
Other 13 2.2 12 4.7 8 2.2
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 4 0.7 0 0.0 0 0.0
Refusal 196 32.9 13 5.1 18 4.9
Willing to participate 24 4.0 0 0.0 0 0.0
Total 595   255   371  


The most significant difference in the calls between 2007 and 2008 is the percentage calling to refuse. Panel 13 Round 1 saw a 9 percentage point decrease in the share of calls made to refuse. Even in the Fall of 2008, with Panel 12 Round 4 and Panel 13 Round 2 active, just 4.9 percent of calls were refusals, compared to 6.9 percent in the Fall 2007 data collection round. Panel 13 has been a more cooperative sample overall, with a higher response rate and a lower refusal rate, and this is reflected in the kinds of calls received at the respondent hotline.

Home office staff monitor production and provide reports and feedback (such as CAPI interviews conducted in less than 30 minutes) to field managers and supervisors for review and followup. The home office prints validation abstracts, which contain information from the interview, and sends them to the quality control assistants for validation calls. Home office staff also print and distribute split processing reports that provide information for conducting interviews with a split RU. Refusal letter requests and requests for locating information from an outside tracking service also are managed at the home office.

For security reasons, all packages sent to and from the field containing personally identifying information (PII) must be shipped via Federal Express. Federal Express has an online tracking system that can be used to trace a package that is not delivered. Any time a package containing PII is shipped, the sender must notify the intended recipient and provide the tracking number of the package and the date and time of expected delivery. The recipient, in turn, notifies the sender when the package has arrived. This procedure allows staff to quickly identify and promptly report lost case materials.

Contents of completed case folders sent to the home office from the field are reviewed and recorded in the receipt system. Panel 13 cases are carefully reviewed for notes from the interviewer that may indicate that the wrong incentive amount was paid. Such cases are flagged in the receipt system so they can be excluded from the incentive experiment analysis.

Authorization forms are edited for completeness and scanned into an image database. Problems with authorization forms are documented and feedback is sent to the field supervisor to review with the interviewer. The receipt department also tracks interview dates and notifies the field if the case materials for a completed interview have not arrived within 2 weeks of the interview date. SAQs and DCS questionnaires also are receipted and prepared for coding. Supply requests from the field are emailed to the MEPS supply center at the home office and requests are filled promptly. An inventory of supplies is maintained in a database so that shortages are identified early for additional printing.

The MEPS CAPI Hotline continued to provide technical support for field interviewing activities during 2008. Hotline staff are available 7 days a week to help field staff resolve CAPI, Field Management System, transmission, laptop, and modem problems. The CAPI Hotline serves as a focal point for tracking and shipping all field laptops, maintaining systems for monitoring field laptop assignment, and coordinating laptop repair.

Return To Table Of Contents

6. Interview Timing and Utilization Measures

With the introduction of the new CAPI system in 2007, substantial attention was focused on identifying potential differences in the data that might be attributable to the new application. Attention focused particularly on the length of the Panel 12 interviews, which in the early weeks of interviewing were taking longer to administer than in prior panels, and on the utilization data, which, in the unweighted measures available during the data collection period, were consistently lower than those observed in the first rounds of prior panels. Special reports developed to monitor progress during the first rounds of Panel 12 were continued through 2008 and extended to the new 2008 panel, Panel 13. A major effort was made to accelerate the development of weights that could be applied to the first full year of data for the new application. These weights and a series of analysis files with data from Panel 12 and, for comparison, Panel 11, were delivered to AHRQ in the months following the closeout of the first full year of data collection for Panel 12. As this report was prepared, an AHRQ review of the full-year data for Panel 12 was in progress. This section of the methodology report presents selected findings from the ongoing analyses of interview length and utilization for Panel 12 and Panel 13. These findings are based on operational reports and unweighted data.

Interview Timing

Interviews conducted in Round 1 of Panel 11 had an average administration time of 73 minutes. In the early weeks of Panel 12, the average Round 1 administration time was almost 100 minutes, raising concern about the possible effects of the increased interview length on participation and data quality. New reports, tracking administration time for entire interviews and for each section of the interview, were developed to monitor this aspect of the operation. Table 6-1 shows the mean interview times for the rounds of Panels 12 and 13 completed through December 2008 and, for comparison, mean times for interviews completed in Panels 1, 10, and 11 with the previous application.

Table 6-1. Timing comparison, Panels 12 and 13 vs. prior panels (mean minutes per interview, single-session interviews)

  Panel 1 Panel 10 Panel 11 Panel 12 Panel 13
Round 1 101.0 73.1 73.1 89.5 84.0
Round 2 95.0 81.5 81.7 91.4 87.8
Round 3 84.3 84.4 85.4 92.4  
Round 4 70.3 76.6 78.0 84.3  

Return To Table Of Contents

As shown in the table, each round of the new Panel 12 application has taken longer to administer than the comparable rounds of the recent prior panels. The difference was 16.4 minutes in Round 1, 9.7 minutes in Round 2, 7.0 minutes in Round 3, and 6.3 minutes in Round 4. This pattern of declining differences continued with Panel 13, with the two completed rounds of Panel 13 requiring less administration time than the comparable rounds in Panel 12. The residual difference from the earlier application suggests that, beyond the relatively minor content differences in the applications, some aspects of the newer application do add to administration time. The decline over time, however, suggests that learning is a significant component of the difference, and that as interviewers become increasingly experienced with the new application, their administration times decline.
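The round-by-round differences cited in the preceding paragraph can be reproduced directly from the Table 6-1 figures; a minimal check:

```python
# Round-by-round increase in mean administration time, Panel 12 vs.
# Panel 11, using the single-session interview means from Table 6-1.
panel_11 = {1: 73.1, 2: 81.7, 3: 85.4, 4: 78.0}  # minutes per interview
panel_12 = {1: 89.5, 2: 91.4, 3: 92.4, 4: 84.3}

for rnd in sorted(panel_11):
    diff = panel_12[rnd] - panel_11[rnd]
    print(f"Round {rnd}: +{diff:.1f} minutes")
```

The printed differences (16.4, 9.7, 7.0, and 6.3 minutes) match the figures quoted above.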

Tracking of the increased interview times in the early rounds of Panel 12 spurred investigation of several factors that might account for the increase. Because the content of the instrument had not changed substantially (an exception being the redesign of the priority conditions section), the search focused on factors such as the performance of the interviewers and the new application itself. Results of that investigation were reported separately (Report on Panel 12 Blaise/WVS Interview Administration Time, Dec. 21, 2007). The current report extends one thread of the earlier analysis: the effect of interviewer experience. Table 6-2 shows the mean interview time for Panel 12 and Panel 13 Round 1 interviews, within two experience-related classifications of the Round 1 interviewers. The first classification identifies interviewers on the basis of their prior MEPS interviewing experience – those who were newly trained and working on MEPS for the first time, and those who had worked on prior panels. Special circumstances in Panel 12 required two different protocols for the new interviewer training (the majority of the new interviewers had to learn to interview in both the old and the new applications); these circumstances did not apply in Panel 13, and for comparison purposes the two groups of new interviewers in Panel 12 have been collapsed into a single group. For Panel 13, the experienced interviewers included those who had been trained for the first time for Panel 12 and were continuing with the study, and those who had been MEPS veterans at the start of Panel 12 and now had a full year of experience with the new application.

Table 6-2. Mean Round 1 interview time, in minutes, for single-session interviews, Panel 12 and Panel 13, by interviewer training and production groups

Interviewer Group  Group by Number of Completes  Panel 12 N  Panel 12 Mean Time (min)  Panel 13 N  Panel 13 Mean Time (min)
New 1-9 277 109.5 219 106.8
  10 or more 949 96.4 1,452 100.3
  Subtotal 1,226 99.4 1,671 101.2
Experienced 1-9 621 87.1 398 87.8
  10 or more 3,170 86.8 5,095 78.2
  Subtotal 3,791 86.8 5,493 78.9

Return To Table Of Contents

The second level of breakout in the table divides the training groups by the number of interviews completed: interviewers who completed relatively few (1-9) interviews and those who completed 10 or more. Note that the table includes only interviews completed in a single session. In both panels, a substantial share of interviews (10-13 percent) required more than one session to complete. Multiple-session interviews occurred for a variety of reasons – respondent-initiated interruptions, interviewer errors with the new application, and interruptions resulting from features of the application itself. The table is limited to single-session interviews because accurate timings were not obtained for many of the multi-session interviews.

For both panels, the table shows noticeable differences between the new and experienced interviewers and, within the experience groups, between the larger and smaller production categories. Somewhat surprisingly, mean interview times were lower for the new interviewers in Panel 12 than in Panel 13; for example, the time for the higher-production group of new interviewers was about 4 minutes less in Panel 12 (96.4 vs. 100.3 minutes). Among the experienced interviewers, the mean time for the lower-producing group was practically identical in the two panels: 87.1 minutes in Panel 12 and 87.8 minutes in Panel 13. For the higher-producing group, however, the Panel 13 mean was more than 8 minutes less than in Panel 12 (86.8 vs. 78.2 minutes). Whereas in Panel 12 the new application was new to all interviewers, including those with prior MEPS experience, in Panel 13 the experienced interviewers had worked with the new application for a full year. That additional experience with the Windows application may have been the major factor in the decreased interview times from Panel 12 to Panel 13.

Table 6-3 shows mean times for the Round 1 single-session cases in Panel 12 and Panel 13, broken out by NHIS completion outcome. As noted earlier, approximately 19 percent of the Panel 12 sample was classified as "partial complete" in the NHIS interview, and the response rate for these households was 16.5 percent lower than that for the NHIS interviews classified as "complete." For Panel 13, NHIS partial completes made up 25 percent of the sample and ended with a response rate 14 percent below that of the NHIS completes. The minimal differences between the two groups in the table suggest that, despite the difference in response rate, the interviews successfully conducted with the partial complete households were similar to those conducted with the 'full' completes.

Table 6-3. Round 1 mean interview time, by NHIS completion status, Panel 12 and 13

NHIS Status  Panel 12 N  Panel 12 Min per Interview  Panel 13 N  Panel 13 Min per Interview
Partial Complete 795 89.6 1,530 83.6
Complete 4,222 89.9 5,634 84.2

Return To Table Of Contents

The longer interview times for the Panel 12, Round 1 interviews, coupled with difficulties experienced in achieving the desired response rate, raised concern for response rates in Round 2 and subsequent rounds. This concern increased as the response rate in the early weeks of Round 2 data collection remained consistently lower than in prior panels. Tables 6-4 through 6-6 were generated to examine the possible impact of several factors on the Round 2 response rate: the length of the Round 1 interview, whether any interruptions or breaks had occurred during the Round 1 interview, and whether any refusal had occurred during Round 1. The tables show figures both for Panel 12 and Panel 13.

Table 6-4 shows, for the major outcome categories of Round 2, the mean interview time for the Round 1 interviews completed in a single session. In Panel 12, the mean Round 1 interview time for the cases that did not respond in Round 2 was about 4 minutes longer (94.1 vs. 90.5 minutes) than for those that did respond. In Panel 13, the difference was less, at 2 minutes (84.8 minutes Round 1 administration time for the Round 2 completes, and 86.9 minutes for those that were nonresponse in Round 2).

Table 6-4. Round 2 outcome, by Round 1 interview time (Round 1 interviews with no breaks), Panel 12 and Panel 13

Round 2 Outcome  Panel 12 Number  Panel 12 Minutes per RU  Panel 13 Number  Panel 13 Minutes per RU
Total 5,165 90.6 7,390 84.5
Complete 4,771 90.5 6,966 84.5
Out of Scope 14 58.4 18 59.8
Nonresponse 380 94.1 376 86.9

Return To Table Of Contents

Table 6-5 shows the Round 2 outcome categories by the break status of the Round 1 interview, that is, whether the Round 1 interview was completed in a single session or in multiple sessions. The table shows minimal differences in Round 2 response rate relative to the break status in Round 1: the response rate for the Panel 12 group with legal breaks was 2.6 percent less than that for the group with no breaks, and the same 2.6 percent difference occurred with Panel 13.

Table 6-5. Round 2 outcome by interview break status in Round 1, Panel 12 and Panel 13

Break Status in Round 1  Panel 12 N  Panel 12 Round 2 Response Rate (%)  Panel 13 N  Panel 13 Round 2 Response Rate (%)
Full Sample 5,951 92.4 8,274 94.6
No Break 5,165 92.6 7,390 94.9
Legal Break 454 90.0 534 92.3
Illegal Break 332 91.8 350 92.8

Return To Table Of Contents

Table 6-6 shows the Round 2 response rates for households that cooperated in Round 1 with no reported refusal and those that cooperated only after having refused at least once. In both panels, the difference between the response rates for the two groups is greater than the differences in Tables 6-4 and 6-5: 11 percent in Panel 12 and 6.6 percent in Panel 13. This suggests that an initial refusal in Round 1 – which typically occurs before the interview begins – was more likely to affect the Round 2 outcome than administration time or the occurrence of interruptions in the Round 1 interview, factors that come into play only after the interview has begun. Table 6-6 also shows the Round 3 response rate for the Panel 12 Round 1 interim refusals. In Round 3 the difference in cooperation rate decreased to 2.7 percent.

Table 6-6. Later round outcomes by 'ever refused' status in Round 1, Panel 12 and Panel 13

Ever Refused in Round 1  Panel 12 Round 2 N  Panel 12 Round 2 Response Rate (%)  Panel 12 Round 3 N  Panel 12 Round 3 Response Rate (%)  Panel 13 Round 2 N  Panel 13 Round 2 Response Rate (%)
Full Sample 6,085   5,703   8,274  
No 5,517 93.2 5,227 94.7 7,722 95.1
Yes 568 82.2 476 92.0 552 88.5

Return To Table Of Contents

Table 6-7 shows the Round 2 response rates by the month in which the Round 1 interviews were completed. Both panels show the same pattern of gradual decline in response rate as the field period continues, with the lowest response rate among those households completed during the last month of Round 1. It seems likely that many of these late cooperators were completed late in the field period because they were 'difficult' in some respect – hard to locate, hard to find at home, or reluctant to participate. These types of difficulty – like the interim refusal in Round 1 – likely persisted to some extent in Round 2.

Table 6-7. Round 2 outcome by month of Round 1 complete, Panel 12 and Panel 13

Round 1 Interview Month  Panel 12 Round 1 Completes  Panel 12 Round 2 Response Rate (%)  Panel 13 Round 1 Completes  Panel 13 Round 2 Response Rate (%)
Full Sample 6,085 92.2 8,274 94.6
Jan 2 100.0 5 100.0
Feb 1,626 95.6 2,184 96.3
Mar 1,769 94.0 2,763 95.5
Apr 940 91.7 1,535 94.2
May 638 90.2 841 93.5
Jun 631 87.1 583 91.0
Jul 479 83.8 363 88.0

Return To Table Of Contents

Utilization

Several new reports were implemented at the start of Panel 12 to monitor the health care event utilization levels captured with the new instrument. These reports, with unweighted comparisons to prior panels, showed Panel 12 utilization levels consistently lower than those of the earlier panels and prompted an ongoing investigation of the differences. That investigation has had to address factors such as varying reference periods within a data collection round, the fact that the Panel 12 sample was drawn from a different set of NHIS PSUs than the prior panels, and the fact that the composition of the demographic domains within the Panel 12 sample differed from that of prior panels. To support the investigation, Westat accelerated development of full-year data files and a full-year weight for the first year of Panel 12, along with parallel data for Panel 11. The investigation is still in progress; for this methodology report, we provide a limited summary of the unweighted utilization data for the rounds completed through the end of 2008.

Table 6-8 summarizes two unweighted measures of utilization for Panels 9-13: average total events per person and average office-based events per person. The figures in the table are taken from end-of-round operational reports, with numerators representing all events (or all office-based events) reported during the round and denominators representing all persons in participating households, regardless of whether they reported any events. The measures have not been standardized to adjust for differences in the number of days in a given round or in a given person's reference period. The table shows some degree of variation from panel to panel in the years before the new application was introduced, but also shows means for Panels 12 and 13 that are consistently lower than those of the earlier panels.
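The unweighted measure described above is a simple ratio of reported events to persons in participating households. A minimal sketch, using hypothetical event and person records (the identifiers and event types below are illustrative only, not the MEPS data layout):

```python
# Sketch of the unweighted utilization measure: events reported during a
# round divided by all persons in participating households, including
# persons who reported no events. Hypothetical records for illustration.
events = [  # (person_id, event_type)
    (1, "office-based"),
    (1, "hospital"),
    (2, "office-based"),
]
persons = [1, 2, 3, 4]  # persons 3 and 4 reported no events

mean_total_events = len(events) / len(persons)
office_events = [e for e in events if e[1] == "office-based"]
mean_office_events = len(office_events) / len(persons)

print(mean_total_events, mean_office_events)  # 0.75 0.5
```

Note that, as in the table, persons with zero events stay in the denominator, which is why the per-person means fall well below the per-user event counts.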

Table 6-8a. Utilization comparison: mean total events per person (excluding prescribed medicines) by panel and round (unweighted)

  Panel 9 Panel 10 Panel 11 Panel 12 Panel 13
Round 1 1.901 1.752 1.892 1.719 1.571
Round 2 3.037 3.131 3.122 2.860 2.719
Round 3 3.117 2.910 3.197 2.682  
Round 4 3.137 2.972 2.951 2.806  

Table 6-8b. Utilization comparison: office-based physician events per person by panel and round (unweighted)

  Panel 9 Panel 10 Panel 11 Panel 12 Panel 13
Round 1 1.286 1.206 1.307 1.158 1.069
Round 2 2.146 2.206 2.194 1.988 1.917
Round 3 2.123 2.009 2.220 1.809  
Round 4 2.181 2.092 2.077 1.977  

Return To Table Of Contents

Tables 6-9a and 6-9b provide a breakout of person-level utilization means, for all events and for office-based events, by sample domain.

Table 6-9a. Utilization comparison, Round 1 mean total events per person for all events (excluding prescribed medicines) by sample domain

  Panel 10 Panel 11 Panel 12 Panel 13
Asian 1.473 1.399 1.328 1.328
Low income 1.441 1.589 1.393 1.386
Hispanic 1.349 1.314 1.262 1.184
Black 1.564 1.593 1.501 1.397
Other 2.218 2.456 2.149 2.123
Total 1.752 1.892 1.719 1.571

Table 6-9b. Utilization comparison, Round 1 mean office-based events per person, by sample domain

  Panel 10 Panel 11 Panel 12 Panel 13
Asian 1.033 0.914 0.892 0.858
Low income 0.975 1.068 0.918 0.936
Hispanic 0.959 0.941 0.873 0.814
Black 1.024 1.067 0.970 0.908
Other 1.540 1.720 1.460 1.476
Total 1.206 1.307 1.158 1.069

Return To Table Of Contents

Appendix A. Comprehensive Tables – Household Survey

Table A-1. Data collection periods and starting RU-level sample sizes, all panels

January-June 1996 10,799
Panel 1 Round 1 10,799
 
July-December 1996 9,485
Panel 1 Round 2 9,485
 
January-June 1997 15,689
Panel 1 Round 3 9,228
Panel 2 Round 1 6,461
 
July-December 1997 14,657
Panel 1 Round 4 9,019
Panel 2 Round 2 5,638
 
January-June 1998 19,269
Panel 1 Round 5 8,477
Panel 2 Round 3 5,382
Panel 3 Round 1 5,410
 
July-December 1998 9,871
Panel 2 Round 4 5,290
Panel 3 Round 2 4,581
 
January-June 1999 17,612
Panel 2 Round 5 5,127
Panel 3 Round 3 5,382
Panel 4 Round 1 7,103
 
July-December 1999 10,161
Panel 3 Round 4 4,243
Panel 4 Round 2 5,918
 
January-June 2000 15,447
Panel 3 Round 5 4,183
Panel 4 Round 3 5,731
Panel 5 Round 1 5,533
 
July-December 2000 10,222
Panel 4 Round 4 5,567
Panel 5 Round 2 4,655
 
January-June 2001 21,069
Panel 4 Round 5 5,547
Panel 5 Round 3 4,496
Panel 6 Round 1 11,026
 
July-December 2001 13,777
Panel 5 Round 4 4,426
Panel 6 Round 2 9,351
 
January-June 2002 21,915
Panel 5 Round 5 4,393
Panel 6 Round 3 9,183
Panel 7 Round 1 8,339
 
July-December 2002 15,968
Panel 6 Round 4 8,977
Panel 7 Round 2 6,991
 
January-June 2003 24,315
Panel 6 Round 5 8,830
Panel 7 Round 3 6,779
Panel 8 Round 1 8,706
 
July-December 2003 13,814
Panel 7 Round 4 6,655
Panel 8 Round 2 7,159
 
January-June 2004 22,552
Panel 7 Round 5 6,578
Panel 8 Round 3 7,035
Panel 9 Round 1 8,939
 
July-December 2004 14,068
Panel 8 Round 4 6,878
Panel 9 Round 2 7,190
 
January-June 2005 22,548
Panel 8 Round 5 6,795
Panel 9 Round 3 7,005
Panel 10 Round 1 8,748
 
July-December 2005 13,991
Panel 9 Round 4 6,843
Panel 10 Round 2 7,148
 
January-June 2006 23,278
Panel 9 Round 5 6,703
Panel 10 Round 3 6,921
Panel 11 Round 1 9,654
 
July-December 2006 14,280
Panel 10 Round 4 6,708
Panel 11 Round 2 7,572
 
January-June 2007 21,326
Panel 10 Round 5 6,596
Panel 11 Round 3 7,263
Panel 12 Round 1 7,467
 
July-December 2007 12,906
Panel 11 Round 4 7,005
Panel 12 Round 2 5,901
 
January-June 2008 22,414
Panel 11 Round 5 6,895
Panel 12 Round 3 5,580
Panel 13 Round 1 9,939
 
July-December 2008 13,384
Panel 12 Round 4 5,376
Panel 13 Round 2 8,008

Return To Table Of Contents

Table A-2. MEPS household survey data collection results, all panels

Panel/round Original sample Split cases (movers) Student cases Out-of-scope cases Net sample Completes Average interviewer hours/complete Response rate (%)
Panel 1 Round 1 10,799 675 125 165 11,434 9,496 10.4 83.1
Panel 1 Round 2 9,485 310 74 101 9,768 9,239 8.7 94.6
Panel 1 Round 3 9,228 250 28 78 9,428 9,031 8.6 95.8
Panel 1 Round 4 9,019 261 33 89 9,224 8,487 8.5 92.0
Panel 1 Round 5 8,477 80 5 66 8,496 8,369 6.5 98.5
Panel 2 Round 1 6,461 431 71 151 6,812 5,660 12.9 83.1
Panel 2 Round 2 5,638 204 27 54 5,815 5,395 9.1 92.8
Panel 2 Round 3 5,382 166 15 52 5,511 5,296 8.5 96.1
Panel 2 Round 4 5,290 105 27 65 5,357 5,129 8.3 95.7
Panel 2 Round 5 5,127 38 2 56 5,111 5,049 6.7 98.8
Panel 3 Round 1 5,410 349 44 200 5,603 4,599 12.7 82.1
Panel 3 Round 2 4,581 106 25 39 4,673 4,388 8.3 93.9
Panel 3 Round 3 4,382 102 4 42 4,446 4,249 7.3 95.5
Panel 3 Round 4 4,243 86 17 33 4,313 4,184 6.7 97.0
Panel 3 Round 5 4,183 23 1 26 4,181 4,114 5.6 98.4
Panel 4 Round 1 7,103 371 64 134 7,404 5,948 10.9 80.3
Panel 4 Round 2 5,918 197 47 40 6,122 5,737 7.2 93.7
Panel 4 Round 3 5,731 145 10 39 5,847 5,574 6.9 95.3
Panel 4 Round 4 5,567 133 35 39 5,696 5,540 6.8 97.3
Panel 4 Round 5 5,547 52 4 47 5,556 5,500 6.0 99.0
Panel 5 Round 1 5,533 258 62 103 5,750 4,670 11.1 81.2
Panel 5 Round 2 4,655 119 27 27 4,774 4,510 7.7 94.5
Panel 5 Round 3 4,496 108 17 24 4,597 4,437 7.2 96.5
Panel 5 Round 4 4,426 117 20 41 4,522 4,396 7.0 97.2
Panel 5 Round 5 4,393 47 12 32 4,420 4,357 5.5 98.6
Panel 6 Round 1 11,026 595 135 200 11,556 9,382 10.8 81.2
Panel 6 Round 2 9,351 316 49 50 9,666 9,222 7.2 95.4
Panel 6 Round 3 9,183 215 23 41 9,380 9,001 6.5 96.0
Panel 6 Round 4 8,977 174 32 66 9,117 8,843 6.6 97.0
Panel 6 Round 5 8,830 94 14 46 8,892 8,781 5.6 98.8
Panel 7 Round 1 8,339 417 76 122 8,710 7,008 10.0 80.5
Panel 7 Round 2 6,991 190 40 24 7,197 6,802 7.2 94.5
Panel 7 Round 3 6,779 169 21 32 6,937 6,673 6.5 96.2
Panel 7 Round 4 6,655 133 17 34 6,771 6,593 7.0 97.4
Panel 7 Round 5 6,578 79 11 39 6,629 6,529 5.7 98.5
Panel 8 Round 1 8,706 441 73 175 9,045 7,177 10.0 79.3
Panel 8 Round 2 7,159 218 52 36 7,393 7,049 7.2 95.4
Panel 8 Round 3 7,035 150 13 33 7,165 6,892 6.5 96.2
Panel 8 Round 4 6,878 149 27 53 7,001 6,799 7.3 97.1
Panel 8 Round 5 6,795 71 8 41 6,833 6,726 6.0 98.4
Panel 9 Round 1 8,939 417 73 179 9,250 7,205 10.5 77.9
Panel 9 Round 2 7,190 237 40 40 7,427 7,027 7.7 94.6
Panel 9 Round 3 7,005 189 24 31 7,187 6,861 7.1 95.5
Panel 9 Round 4 6,843 142 23 44 6,964 6,716 7.4 96.5
Panel 9 Round 5 6,703 60 8 43 6,728 6,627 6.1 98.5
Panel 10 Round 1 8,748 430 77 169 9,086 7,175 11.0 79.0
Panel 10 Round 2 7,148 219 36 22 7,381 6,940 7.8 94.0
Panel 10 Round 3 6,921 156 10 31 7,056 6,727 6.8 95.3
Panel 10 Round 4 6,708 155 13 34 6,842 6,590 7.3 96.3
Panel 10 Round 5 6,596 55 9 38 6,622 6,461 6.2 97.6
Panel 11 Round 1 9,654 399 81 162 9,972 7,585 11.5 76.1
Panel 11 Round 2 7,572 244 42 24 7,834 7,276 7.8 92.9
Panel 11 Round 3 7,263 170 15 25 7,423 7,007 6.9 94.4
Panel 11 Round 4 7,005 139 14 36 7,122 6,898 7.2 96.9
Panel 11 Round 5 6,895 51 7 44 6,905 6,781 5.5 98.2
Panel 12 Round 1 7,467 331 86 172 7,712 5,901 14.2 76.5
Panel 12 Round 2 5,901 157 27 27 6,058 5,584 9.1 92.2
Panel 12 Round 3 5,580 105 13 12 5,686 5,383 8.1 94.7
Panel 12 Round 4 5,376 102 12 16 5,474 5,267 8.8 96.2
Panel 13 Round 1 9,939 502 97 213 10,325 8,017 12.2 77.6
Panel 13 Round 2 8,008 220 47 23 8,252 7,809 9.0 94.6
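
The net sample and response rate columns in Table A-2 appear to follow from the other columns: net sample = original sample + split cases + student cases − out-of-scope cases, and response rate = completes ÷ net sample. A minimal Python check against two rows transcribed from the table (this derivation is inferred from the figures, not stated in the report):

```python
# Consistency check for Table A-2: net sample equals original sample plus
# split and student cases minus out-of-scope cases, and the response rate
# is completes divided by net sample. Rows transcribed from the table:
# Panel 1 Round 1 and Panel 13 Round 1.
rows = [
    # (original, splits, students, out_of_scope, net, completes, rate_pct)
    (10_799, 675, 125, 165, 11_434, 9_496, 83.1),
    (9_939, 502, 97, 213, 10_325, 8_017, 77.6),
]
for original, splits, students, oos, net, completes, rate in rows:
    assert original + splits + students - oos == net
    assert round(100 * completes / net, 1) == rate
print("Table A-2 identities hold for both rows")
```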

Return To Table Of Contents

Table A-3. Signing rates for medical provider authorization forms

Panel/round  Authorization forms requested  Authorization forms signed  Signing rate (%)
Panel 1 Round 1 3,562 2,624 73.7
Panel 1 Round 2 19,874 14,145 71.2
Panel 1 Round 3 17,722 12,062 68.1
Panel 1 Round 4 17,133 10,542 61.5
Panel 1 Round 5 12,544 6,763 53.9
Panel 2 Round 1 2,735 1,788 65.4
Panel 2 Round 2 13,461 9,433 70.1
Panel 2 Round 3 11,901 7,537 63.3
Panel 2 Round 4 11,164 6,485 58.1
Panel 2 Round 5 8,104 4,244 52.4
Panel 3 Round 1 2,078 1,349 64.9
Panel 3 Round 2 10,335 6,463 62.5
Panel 3 Round 3 8,716 4,797 55.0
Panel 3 Round 4 8,761 4,246 48.5
Panel 3 Round 5 6,913 2,911 42.1
Panel 4 Round 1 2,400 1,607 67.0
Panel 4 Round 2 12,711 8,434 66.4
Panel 4 Round 3 11,078 6,642 60.0
Panel 4 Round 4 11,047 6,888 62.4
Panel 4 Round 5 8,684 5,096 58.7
Panel 5 Round 1 1,243 834 67.1
Panel 5 Round 2 14,008 9,618 68.7
Panel 5 Round 3 12,869 8,301 64.5
Panel 5 Round 4 13,464 9,170 68.1
Panel 5 Round 5 10,888 7,025 64.5
Panel 6 Round 1 2,783 2,012 72.3
Panel 6 Round 2 29,861 22,872 76.6
Panel 6 Round 3 26,068 18,219 69.9
Panel 6 Round 4 27,146 20,082 74.0
Panel 6 Round 5 21,022 14,581 69.4
Panel 7 Round 1 2,298 1,723 75.0
Panel 7 Round 2 22,302 17,557 78.7
Panel 7 Round 3 19,312 13,896 72.0
Panel 7 Round 4 16,934 13,725 81.1
Panel 7 Round 5 14,577 11,099 76.1
Panel 8 Round 1 2,287 1,773 77.5
Panel 8 Round 2 22,533 17,802 79.0
Panel 8 Round 3 19,530 14,064 72.0
Panel 8 Round 4 19,718 14,599 74.0
Panel 8 Round 5 15,856 11,106 70.0
Panel 9 Round 1 2,253 1,681 74.6
Panel 9 Round 2 22,668 17,522 77.3
Panel 9 Round 3 19,601 13,672 69.8
Panel 9 Round 4 20,147 14,527 72.1
Panel 9 Round 5 15,963 10,720 67.2
Panel 10 Round 1 2,068 1,443 69.8
Panel 10 Round 2 22,582 17,090 75.7
Panel 10 Round 3 18,967 13,396 70.6
Panel 10 Round 4 19,087 13,296 69.7
Panel 10 Round 5 15,787 10,476 66.4
Panel 11 Round 1 2,154 1,498 69.5
Panel 11 Round 2 23,957 17,742 74.1
Panel 11 Round 3 20,756 13,400 64.6
Panel 11 Round 4 21,260 14,808 69.7
Panel 11 Round 5 16,793 11,482 68.4
Panel 12 Round 1 1,695 1,066 62.9
Panel 12 Round 2 17,787 12,524 70.4
Panel 12 Round 3 15,291 10,006 65.4
Panel 12 Round 4 15,692 10,717 68.3
Panel 13 Round 1 2,217 1,603 72.3
Panel 13 Round 2 24,357 18,566 76.2

Return To Table Of Contents

Table A-4. Signing rates for pharmacy authorization forms

Panel/round  Permission forms requested  Permission forms signed  Signing rate (%)
Panel 1 Round 3 19,913 14,468 72.7
Panel 1 Round 5 8,685 6,002 69.1
Panel 2 Round 3 12,241 8,694 71.0
Panel 2 Round 5 8,640 6,297 72.9
Panel 3 Round 3 9,016 5,929 65.8
Panel 3 Round 5 7,569 5,200 68.7
Panel 4 Round 3 11,856 8,280 69.8
Panel 4 Round 5 10,688 8,318 77.8
Panel 5 Round 3 9,248 6,852 74.1
Panel 5 Round 5 8,955 7,174 80.1
Panel 6 Round 3 19,305 15,313 79.3
Panel 6 Round 5 17,981 14,864 82.7
Panel 7 Round 3 14,456 11,611 80.3
Panel 7 Round 5 13,428 11,210 83.5
Panel 8 Round 3 14,391 11,533 80.1
Panel 8 Round 5 13,422 11,049 82.3
Panel 9 Round 3 14,334 11,189 78.1
Panel 9 Round 5 13,416 10,893 81.2
Panel 10 Round 3 13,928 10,706 76.9
Panel 10 Round 5 12,869 10,260 79.7
Panel 11 Round 3 14,937 11,328 75.8
Panel 11 Round 5 13,778 11,332 82.3
Panel 12 Round 3 10,840 8,242 76.0

Return To Table Of Contents

Table A-5. Results of self-administered questionnaire (SAQ) collection

Panel/round  SAQs requested  SAQs completed  SAQs refused  Other nonresponse  Response rate (%)
Panel 1 Round 2 16,577 9,910 - - 59.8
Panel 1 Round 3 6,032 1,469 840 3,723 24.3
Panel 1 Combined, 1996 16,577 11,379 - - 68.6
Panel 4* Round 4 13,936 12,265 288 1,367 87.9
Panel 4* Round 5 1,683 947 314 422 56.3
Panel 4* Combined, 2000 13,936 13,212 - - 94.8
Panel 5* Round 2 11,239 9,833 191 1,213 86.9
Panel 5* Round 3 1,314 717 180 417 54.6
Panel 5* Combined, 2000 11,239 10,550 - - 93.9
Panel 5* Round 4 7,812 6,790 198 824 86.9
Panel 5* Round 5 1,022 483 182 357 47.3
Panel 5* Combined, 2001 7,812 7,273 380 1,181 93.1
Panel 6 Round 2 16,577 14,233 412 1,932 85.9
Panel 6 Round 3 2,143 1,213 230 700 56.6
Panel 6 Combined, 2001 16,577 15,446 642 2,632 93.2
Panel 6 Round 4 15,687 13,898 362 1,427 88.6
Panel 6 Round 5 1,852 967 377 508 52.2
Panel 6 Combined, 2002 15,687 14,865 739 1,935 94.8
Panel 7 Round 2 12,093 10,478 196 1,419 86.6
Panel 7 Round 3 1,559 894 206 459 57.3
Panel 7 Combined, 2002 12,093 11,372 402 1,878 94.0
Panel 7 Round 4 11,703 10,125 285 1,292 86.5
Panel 7 Round 5 1,493 786 273 434 52.7
Panel 7 Combined, 2003 11,703 10,911 558 1,726 93.2
Panel 8 Round 2 12,533 10,765 203 1,565 85.9
Panel 8 Round 3 1,568 846 234 488 54.0
Panel 8 Combined, 2003 12,533 11,611 437 2,053 92.6
Panel 8 Round 4 11,996 10,534 357 1,105 87.8
Panel 8 Round 5 1,400 675 344 381 48.2
Panel 8 Combined, 2004 11,996 11,209 701 1,486 93.4
Panel 9 Round 2 12,541 10,631 381 1,529 84.8
Panel 9 Round 3 1,670 886 287 496 53.1
Panel 9 Combined, 2004 12,541 11,517 668 2,025 91.9
Panel 9 Round 4 11,913 10,357 379 1,177 86.9
Panel 9 Round 5 1,478 751 324 403 50.8
Panel 9 Combined, 2005 11,913 11,108 703 1,580 93.2
Panel 10 Round 2 12,360 10,503 391 1,466 85.0
Panel 10 Round 3 1,626 787 280 559 48.4
Panel 10 Combined, 2005 12,360 11,290 671 2,025 91.3
Panel 10 Round 4 11,726 10,081 415 1,230 86.0
Panel 10 Round 5 1,516 696 417 403 45.9
Panel 10 Combined, 2006 11,726 10,777 832 1,633 91.9
Panel 11 Round 2 13,146 10,924 452 1,770 83.1
Panel 11 Round 3 1,908 948 349 611 49.7
Panel 11 Combined, 2006 13,146 11,872 801 2,381 90.3
Panel 11 Round 4 12,479 10,771 622 1,086 86.3
Panel 11 Round 5 1,621 790 539 292 48.7
Panel 11 Combined, 2007 12,479 11,561 - - 92.6
Panel 12 Round 2 10,061 8,419 502 1,140 83.7
Panel 12 Round 3 1,460 711 402 347 48.7
Panel 12 Combined, 2007 10,061 9,130 - - 90.7
Panel 12 Round 4 9,550 8,303 577 670 86.9
Panel 13 Round 2 14,410 12,541 707 1,162 87.0

*Totals represent combined collection of the SAQ and the parent-administered questionnaire (PAQ).

Return To Table Of Contents

Table A-6. Results of Diabetes Care Supplement (DCS) collection*

Panel/round DCSs requested DCSs completed Response rate (%)
Panel 4 Round 5 696 631 90.7
Panel 5 Round 3 550 508 92.4
Panel 5 Round 5 570 500 87.7
Panel 6 Round 3 1,166 1,000 85.8
Panel 6 Round 5 1,202 1,166 97.0
Panel 7 Round 3 870 848 97.5
Panel 7 Round 5 869 820 94.4
Panel 8 Round 3 971 885 91.1
Panel 8 Round 5 977 894 91.5
Panel 9 Round 3 1,003 909 90.6
Panel 9 Round 5 904 806 89.2
Panel 10 Round 3 1,060 939 88.6
Panel 10 Round 5 1,078 965 89.5
Panel 11 Round 3 1,188 1,030 86.7
Panel 11 Round 5 1,182 1,053 89.1
Panel 12 Round 3 917 825 90.0

*Tables represent combined DCS/proxy DCS collection.

Return To Table Of Contents

Table A-7. Calls to respondent information line

Spring 2000 (Panel 5 Round 1, Panel 4 Round 3, Panel 3 Round 5); Fall 2000 (Panel 5 Round 2, Panel 4 Round 4)
Reason for Call  Round 1 N  Round 1 %  Rounds 3 and 5 N  Rounds 3 and 5 %  Rounds 2 and 4 N  Rounds 2 and 4 %
Address change 23 4.0 13 8.3 8 5.7
Appointment 37 6.5 26 16.7 28 19.9
Request callback 146 25.7 58 37.2 69 48.9
Refusal 183 32.2 20 12.8 12 8.5
Willing to participate 10 1.8 2 1.3 0 0.0
Other 157 27.6 35 22.4 8 5.7
Report a respondent deceased 5 0.9 1 0.6 0 0.0
Request a Spanish-speaking interview 8 1.4 1 0.6 0 0.0
Request SAQ help 0 0.0 0 0.0 16 11.3
Total 569   156   141  

Spring 2001 (Panel 6 Round 1, Panel 5 Round 3, Panel 4 Round 5); Fall 2001 (Panel 6 Round 2, Panel 5 Round 4)
Reason for Call  Round 1 N  Round 1 %  Rounds 3 and 5 N  Rounds 3 and 5 %  Rounds 2 and 4 N  Rounds 2 and 4 %
Address/telephone change 27 3.7 17 12.7 56 15.7
Appointment 119 16.2 56 41.8 134 37.5
Request callback 259 35.3 36 26.9 92 25.8
No message 8 1.1 3 2.2 0 0.0
Other 29 4.0 7 5.2 31 8.7
Request SAQ help 0 0.0 2 1.5 10 2.8
Special needs 5 0.7 3 2.2 0 0.0
Refusal 278 37.9 10 7.5 25 7.0
Willing to participate 8 1.1 0 0.0 9 2.5
Total 733   134   357  

Spring 2002 (Panel 7 Round 1, Panel 6 Round 3, Panel 5 Round 5); Fall 2002 (Panel 7 Round 2, Panel 6 Round 4)
Reason for Call  Round 1 N  Round 1 %  Rounds 3 and 5 N  Rounds 3 and 5 %  Rounds 2 and 4 N  Rounds 2 and 4 %
Address/telephone change 28 4.5 29 13.9 66 16.7
Appointment 77 12.5 71 34.1 147 37.1
Request callback 210 34.0 69 33.2 99 25.0
No message 6 1.0 3 1.4 5 1.3
Other 41 6.6 17 8.2 10 2.5
Request SAQ help 0 0.0 0 0.0 30 7.6
Special needs 1 0.2 0 0.0 3 0.8
Refusal 232 37.6 14 6.7 29 7.3
Willing to participate 22 3.6 5 2.4 7 1.8
Total 617   208   396  
 
Spring 2003 (Panel 8 Round 1, Panel 7 Round 3, Panel 6 Round 5); Fall 2003 (Panel 8 Round 2, Panel 7 Round 4)
Reason for Call  Round 1 N  Round 1 %  Rounds 3 and 5 N  Rounds 3 and 5 %  Rounds 2 and 4 N  Rounds 2 and 4 %
Address/Telephone change 20 4.2 33 13.7 42 17.9
Appointment 83 17.5 87 36.1 79 33.8
Request callback 165 34.9 100 41.5 97 41.5
No message 16 3.4 7 2.9 6 2.6
Other 9 1.9 8 3.3 3 1.3
Request SAQ help 0 0.0 0 0.0 1 0.4
Special needs 5 1.1 0 0.0 0 0.0
Refusal 158 33.4 6 2.5 6 2.6
Willing to participate 17 3.6 0 0.0 0 0.0
Total 473   241   234  

Spring 2004 (Panel 9 Round 1, Panel 8 Round 3, Panel 7 Round 5); Fall 2004 (Panel 9 Round 2, Panel 8 Round 4)

                          Spring          Spring          Fall
Reason for Call           Round 1         Rounds 3 and 5  Rounds 2 and 4
                          N       %       N       %       N       %
Address/telephone change  8       1.6     26      13.2    42      10.9
Appointment               67      13.3    76      38.6    153     39.7
Request callback          158     31.5    77      39.1    139     36.1
No message                9       1.8     5       2.5     16      4.2
Other                     8       1.6     5       2.5     5       1.3
Proxy needed              5       1.0     2       1.0     0       0.0
Request SAQ help          0       0.0     0       0.0     2       0.5
Special needs             0       0.0     0       0.0     0       0.0
Refusal                   228     45.4    6       3.0     27      7.0
Willing to participate    19      3.8     0       0.0     1       0.3
Total                     502             197             385

Spring 2005 (Panel 10 Round 1, Panel 9 Round 3, Panel 8 Round 5); Fall 2005 (Panel 10 Round 2, Panel 9 Round 4)

                          Spring          Spring          Fall
Reason for Call           Round 1         Rounds 3 and 5  Rounds 2 and 4
                          N       %       N       %       N       %
Address/telephone change  16      3.3     23      8.7     27      6.8
Appointment               77      15.7    117     44.3    177     44.4
Request callback          154     31.4    88      33.3    126     31.6
No message                14      2.9     11      4.2     28      7.0
Other                     13      2.7     1       0.4     8       2.0
Proxy needed              0       0.0     0       0.0     0       0.0
Request SAQ help          0       0.0     0       0.0     1       0.3
Special needs             1       0.2     1       0.4     0       0.0
Refusal                   195     39.8    20      7.6     30      7.5
Willing to participate    20      4.1     3       1.1     2       0.5
Total                     490             264             399

Spring 2006 (Panel 11 Round 1, Panel 10 Round 3, Panel 9 Round 5); Fall 2006 (Panel 11 Round 2, Panel 10 Round 4)

                          Spring          Spring          Fall
Reason for Call           Round 1         Rounds 3 and 5  Rounds 2 and 4
                          N       %       N       %       N       %
Address/telephone change  7       1.3     24      7.5     11      4.1
Appointment               61      11.3    124     39.0    103     38.1
Request callback          146     27.1    96      30.2    101     37.4
No message                72      13.4    46      14.5    21      7.8
Other                     16      3.0     12      3.8     8       3.0
Proxy needed              0       0.0     0       0.0     0       0.0
Request SAQ help          0       0.0     0       0.0     0       0.0
Special needs             4       0.7     0       0.0     0       0.0
Refusal                   216     40.1    15      4.7     26      9.6
Willing to participate    17      3.2     1       0.3     0       0.0
Total                     539             318             270

Spring 2007 (Panel 12 Round 1, Panel 11 Round 3, Panel 10 Round 5); Fall 2007 (Panel 12 Round 2, Panel 11 Round 4)

                          Spring          Spring          Fall
Reason for Call           Round 1         Rounds 3 and 5  Rounds 2 and 4
                          N       %       N       %       N       %
Address/telephone change  8       2.1     21      7.3     23      7.6
Appointment               56      14.6    129     44.8    129     42.6
Request callback          72      18.8    75      26.0    88      29.0
No message                56      14.6    37      12.8    33      10.9
Other                     20      5.2     15      5.2     6       2.0
Proxy needed              0       0.0     0       0.0     0       0.0
Request SAQ help          0       0.0     0       0.0     0       0.0
Special needs             5       1.3     0       0.0     1       0.3
Refusal                   160     41.8    10      3.5     21      6.9
Willing to participate    6       1.6     1       0.3     2       0.7
Total                     383             288             303

Spring 2008 (Panel 13 Round 1, Panel 12 Round 3, Panel 11 Round 5); Fall 2008 (Panel 13 Round 2, Panel 12 Round 4)

                          Spring          Spring          Fall
Reason for Call           Round 1         Rounds 3 and 5  Rounds 2 and 4
                          N       %       N       %       N       %
Address/telephone change  20      3.4     12      4.7     21      5.7
Appointment               92      15.5    117     45.9    148     39.9
Request callback          164     27.6    81      31.8    154     41.5
No message                82      13.8    20      7.8     22      5.9
Other                     13      2.2     12      4.7     8       2.2
Proxy needed              0       0.0     0       0.0     0       0.0
Request SAQ help          0       0.0     0       0.0     0       0.0
Special needs             4       0.7     0       0.0     0       0.0
Refusal                   196     32.9    13      5.1     18      4.9
Willing to participate    24      4.0     0       0.0     0       0.0
Total                     595             255             371


""