Bureau of Transportation Statistics (BTS)

Omnibus Survey
Household Survey Results
General Methodology
July 2001 to Present

Introduction and Background

The Bureau of Transportation Statistics (BTS) is conducting a series of monthly surveys to monitor expectations of and satisfaction with the transportation system and to gather event, issue, and mode-specific information. The surveys will serve as an information source for the U.S. Department of Transportation (DOT) modal administrators, who can use them to support congressional requests and for internal DOT performance indicators. Overall, the surveys will support the collection of information on a wide range of transportation-related topics.

This document contains the following information:

  • Background of the survey initiative;
  • A detailed description of how sample respondents were selected for the survey;
  • Information on interviewer training, pre-testing, interviewing methods, household screening methods and methods for call attempts and callbacks;
  • Guidance on the use of weights for analyses;
  • Instructions for calculating standard error estimates;
  • Data collection methods.

1. Sample Design

Target Population

The target population is the United States non-institutionalized adult population (18 years of age or older).

Sampling Frame and Selection

To ensure that the monthly Omnibus Surveys conducted after March 2001 are comparable to past Omnibus Surveys (March 2001 and earlier), the previous sample methodology was replicated. This methodology was designed to achieve a random sample of non-institutionalized adults 18 years and older in the fifty states of the United States and the District of Columbia. A national probability sample of households using list-assisted random digit dialing (RDD) methodology was employed for the survey. The sample was purchased from GENESYS, a firm that provides samples for numerous government agencies and the private sector. In summary, GENESYS initiated the sample development process by first imposing an implicit stratification on the telephone prefixes using the Census Bureau divisions and metropolitan status (see the Census Bureau regions and divisions below).

Table 1: Census Bureau Regions and Divisions

Region      Division              States
Northeast   New England           CT, ME, MA, NH, RI, VT
            Middle Atlantic       NJ, NY, PA
Midwest     East North Central    IN, IL, MI, OH, WI
            West North Central    IA, KS, MN, MO, NE, ND, SD
South       South Atlantic        DE, DC, FL, GA, MD, NC, SC, VA, WV
            East South Central    AL, KY, MS, TN
            West South Central    AR, LA, OK, TX
West        Mountain              AZ, CO, ID, NM, MT, UT, NV, WY
            Pacific               AK, CA, HI, OR, WA

Within each Census Bureau division, counties and their associated prefix areas located in Metropolitan Statistical Areas (MSA) were sorted by the size of the MSA. Counties and their associated prefix areas within a Census Bureau division that are located outside of MSAs were first sorted by state. Within each state, the counties and their associated prefix areas were sorted by geographic location. This implicit stratification ensures that the sample of telephone numbers is geographically representative.

The resulting sample of telephone numbers was address-matched for subsequent mailing of a pre-contact letter to each address.

RDD Sample

To generate the sample, the GENESYS System employs list-assisted random digit dialing methodology. "List-assisted" refers to the use of commercial lists of directory-listed telephone numbers to increase the likelihood of dialing household residences. This method still gives unlisted telephone numbers the same chance of selection as directory-listed numbers.

The system utilizes a database consisting of all residential telephone exchanges, working bank information, and various geographic service parameters such as state, county, primary ZIP Code, etc. In addition, the database provides working bank information at the two-digit level – each of the 100 banks (i.e., first two digits of the four-digit suffix) in each exchange is defined as "working" if it contains one or more listed telephone households. On a national basis, this definition covers an estimated 96.4% of all residential telephone numbers and 99.96% of listed residential numbers. This database is updated on a quarterly basis.

The sample frame consists of the set of all telephone exchanges that meet the geographic criteria. This geographic definition is made using one or more of the geographic codes included in the database. Following specification of the geographic area, the system selects all exchanges and associated working banks that meet those criteria.

Based on the sample frame defined above, the system computes an interval such that the number of intervals is equivalent to the desired number of sample pieces. The interval is computed by dividing the total possible telephone numbers in the sample frame (i.e., # of working banks X 100) by the number of RDD sample pieces required. Within each interval a single random number is generated between 1 and the interval size; the corresponding phone number within the interval is identified and written to an output file.

The result is that every potential telephone number within the defined sample frame has a known and equal probability of selection.
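The interval-based selection described above amounts to systematic random sampling over the frame of working-bank numbers. The sketch below illustrates the idea; the function and the example exchange prefixes are hypothetical, not GENESYS's actual implementation.

```python
import random

def systematic_rdd_sample(working_banks, n_sample, seed=0):
    """Systematic selection: divide the frame of potential numbers into
    equal intervals (one per desired sample piece) and draw one random
    position within each interval, so every potential number in the
    frame has a known, equal probability of selection."""
    rng = random.Random(seed)
    total_numbers = len(working_banks) * 100   # each working bank holds 100 numbers
    interval = total_numbers // n_sample       # interval size (assumes even division here)
    sample = []
    for i in range(n_sample):
        # one random draw between 0 and interval-1 within the i-th interval
        pos = i * interval + rng.randrange(interval)
        bank = working_banks[pos // 100]       # exchange plus first two suffix digits
        last_two = pos % 100                   # last two digits of the suffix
        sample.append(f"{bank}{last_two:02d}")
    return sample

# e.g., 4 working banks (hypothetical prefixes) define 400 potential numbers
banks = ["202-555-01", "202-555-02", "301-555-00", "301-555-07"]
numbers = systematic_rdd_sample(banks, n_sample=8)
```

Because exactly one number is drawn per interval, the draws are spread evenly across the (implicitly stratified) frame rather than clustering by chance.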

ID-PLUS

This process is designed to purge about 75% of the non-productive numbers (non-working, businesses and fax/modems). Since this process is completed after the sample is generated, the statistical integrity of the sample is maintained.

The Pre-Dialer Phase – The file of generated numbers is passed against the ID database, comprised of the GENESYS-Plus business database and the listed household database. Business numbers are eliminated while listed household numbers are set aside, to be recombined after the active Dialer Phase.

The Dialer Phase – The remaining numbers are then processed using automated dialing equipment – actually a specially configured PROYTYS Telephony system. In this phase, the dialing is 100% attended and the phone is allowed to ring up to two times. Specially trained agents are available to speak to anyone who might answer the phone, and the number is dispositioned accordingly. Given this human intervention in evaluating all call results, virtually all remaining business numbers, non-working numbers, and non-tritone intercepts are identified, compensating for differences in non-working intercept behavior. The testing takes place during the restricted hours of 9 a.m. – 5 p.m. local time, to further minimize intrusion since fewer people are home during these hours.

The Post-Dialer Phase – The sample is then reconstructed, excluding the non-productive numbers identified in the previous two phases.

Address Matching

The Donnelley (InfoUSA) listed residential database was used for residential reverse matches (name and address). This file contains approximately 174 million names and addresses, of which 90 million have a phone number. This file is white-page based and has NCOA updates applied to it monthly. Full updates to the file are received three times a year, as well as monthly ZIP Code replacements. Name and address, or address only (including ZIP+4), is appended where available.

Precision of Estimates

The precision of estimated frequencies can be assessed by evaluating the width of the 95 percent confidence interval around the estimates. For this application, the confidence interval can be approximated for design purposes as:

ps ± Z × sqrt(Var(ps))

Where

ps is the estimated (sample) proportion;

Z is the 5 percent critical value of the normal distribution; and

Var(ps) is the variance of ps.

The calculation of the end points of the confidence interval can be re-written as:

ps ± Z × sqrt(ps(1 − ps) / n)

Or

ps − Z × sqrt(ps(1 − ps) / n) ≤ P ≤ ps + Z × sqrt(ps(1 − ps) / n)

Where

P is the true population value of the proportion; and

n is the sample size.

Therefore, with a sample size of 1,023 and ps = 50 percent, the confidence interval range would be 47 ≤ P ≤ 53, approximately.
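The worked example above can be reproduced directly from the confidence interval formula; the helper function below is a minimal sketch (its name is not from the source).

```python
import math

def proportion_ci(p_s, n, z=1.96):
    """Approximate 95% confidence interval for an estimated proportion,
    using the simple-random-sampling variance p(1 - p)/n."""
    half_width = z * math.sqrt(p_s * (1 - p_s) / n)
    return p_s - half_width, p_s + half_width

# n = 1,023 and ps = 50 percent, as in the example above
low, high = proportion_ci(0.50, 1023)
# roughly (0.469, 0.531), i.e., 47 <= P <= 53 in percentage terms
```

Note that a design effect from the complex sample would widen this interval somewhat; the formula here is the simple approximation the text uses for design purposes.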

2. Sampling Weights and Adjustments

This section discusses the development of survey weights. Two types of weighting adjustments were used in the present survey: inverse-probability weights (to correct for unequal selection probabilities) and post-stratification (to correct for known discrepancies between the sample and the population). The final analysis weight reflects both types, i.e., adjustments for non-response, multiple telephone lines, and persons per household, together with post-stratification adjustments. The final analysis weight is the weight that should be used for analyzing the survey data.

The final analysis weight was developed using the following steps:

  • Calculation of the base sampling weights;
  • Adjustment for unit non-response;
  • Adjustment for households with multiple voice telephone numbers;
  • Adjustment for selecting an adult within a sampled household; and
  • Post-stratification adjustments to the target population.

The product of all the above factors represents the final analysis weight. If needed, extreme values of the final analysis weight can be reduced (or trimmed) using standard weight trimming procedures.

Base Sampling Weights

The first step in weighting the sample is to calculate the sampling weight for each telephone number in the sample. The sampling rate is the inverse of the telephone number’s probability of selection, or:

WS = N / n

Where N is the total number of telephone numbers in the population and n is the total number of telephone numbers in the sample.

Adjustment for Unit Non-Response

Sampled telephone numbers are classified as responding or non-responding households according to Census division and metropolitan status (inside or outside a Metropolitan Statistical Area). The non-response adjustment factor for all telephone numbers in each Census division (c) by metropolitan status (s), is calculated as follows:

ADJNR = 1 / CASRO response rate(c, s)

Where the denominator is the CASRO response rate for Census division c and metropolitan status s. The non-response adjustment factor for a specific cell (defined by metropolitan status and Census division) is the inverse of the response rate, i.e., the ratio of the estimated number of eligible telephone households to the number of completed surveys.

The non-response adjusted weight (WNR) is the product of the sampling weight (WS) and the non-response adjustment factor (ADJNR) within each Census division / metropolitan status combination.

Adjustment for Households with Multiple Telephone Numbers

Some households have multiple telephone lines for voice communication. Thus, these households have multiple chances of being selected into the sample and adjustments must be made to their survey weights. The adjustment for multiple telephone lines is:

ADJMT = 1 / min(number of telephone lines in the household, 3)

As shown in the formula, the adjustment is limited to a maximum factor of three. In other words, the adjustment factor ADJMT will be one over two (0.50) if the household has two telephone lines, and one over three (0.33) if it has three or more.

For respondents that did not provide this information, it is assumed that the household contained only one telephone line. The non-response adjusted weight (WNR) is multiplied by the adjustment factor for multiple telephone lines (multiple probabilities of selection) (ADJMT) to create a weight that is adjusted for non-response and for multiple probabilities of selection (WNRMT).

Adjustment for Number of Eligible Household Members

The probability of selecting an individual respondent depends upon the number of eligible respondents in the household. Therefore, it is important to account for the total number of eligible household members when constructing the sampling weights. The adjustment for selecting a random adult household member is:

ADJRA = Number of Eligible Household Members

For respondents that did not provide this information, a value for ADJRA is imputed according to the distribution of the number of eligible persons in a household (from responding households) within the age, gender, and race/ethnicity cross-classification cell matching that of the respondent for which the value is being imputed.

The weight adjusted for non-response and for multiple probabilities of selection (WNRMT) is then multiplied by ADJRA, resulting in WNRMTRA, a weight adjusted for non-response, multiple probabilities of selection, and for selecting a random household member.
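The chain of adjustments up to this point (base weight, non-response, multiple lines, random adult) can be sketched as follows. The function name and the example figures are hypothetical illustrations, not values from the survey.

```python
def weight_before_poststrat(N, n, response_rate, phone_lines, eligible_adults):
    """Chain the weighting steps described above:
    WS -> WNR -> WNRMT -> WNRMTRA."""
    w_s = N / n                             # base sampling weight: WS = N/n
    w_nr = w_s * (1.0 / response_rate)      # non-response adjustment: ADJNR = 1/response rate
    adj_mt = 1.0 / min(phone_lines, 3)      # multiple-line adjustment, capped at 3 lines
    w_nrmt = w_nr * adj_mt
    w_nrmtra = w_nrmt * eligible_adults     # random-adult adjustment: ADJRA = eligible adults
    return w_nrmtra

# a household with 2 voice lines and 3 eligible adults, in a cell
# with a 30% CASRO response rate (all numbers hypothetical)
w = weight_before_poststrat(N=1_000_000, n=10_000, response_rate=0.30,
                            phone_lines=2, eligible_adults=3)
# base weight 100 -> 333.3 after non-response -> 166.7 after the
# two-line adjustment -> 500 after the three-adult adjustment
```

Each factor multiplies into the previous weight, so the order of the adjustments does not affect the final product.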

Post-Stratification Adjustments

Adjusting weighted survey counts so that they agree with population counts provided by the Census Bureau can compensate for different response rates by demographic subgroups, increase the precision of survey estimates, and reduce the bias present in the estimates resulting from the inclusion of only telephone households. The final adjustment to the survey weight is a post-stratification adjustment that allows the weights to sum to the target population (i.e. U.S. non-institutionalized persons 18 years of age or older) by age, gender and race/ethnicity.

The outcome of post-stratification is a factor or multiplier (M) that scales WNRMTRA within each age/gender/race cell, so that the weighted marginal sums for age, gender and race/ethnicity agree with the corresponding Census Bureau distribution for these characteristics. The method used in the post-stratification adjustment is a simple ratio adjustment applied to the sampling weight using the appropriate national population total for a given cell defined by the intersection of age, gender, and race/ethnicity. The general method for ratio adjusting is:

  • A table of the sum of the weights for each cell denoted by each age, gender, and race/ethnicity combination is created. Each cell is denoted by S(i,j,k), where i is the indicator for age, j is the indicator for gender, and k is the indicator for race/ethnicity;
  • A similar table of national population controls is created, where each cell is denoted by P(i,j,k);
  • The ratio R(i,j,k) = P(i,j,k) / S(i,j,k) is calculated; the cell ratio R(i,j,k) is denoted as the multiplier M;
  • Each weight, at the record level, is multiplied by the appropriate cell ratio of R(i,j,k) to form the post-stratification adjustment.
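The ratio-adjustment steps above reduce to computing one multiplier per cell. A minimal sketch, with hypothetical cell keys and totals:

```python
def poststrat_multipliers(weight_sums, pop_controls):
    """Ratio adjustment: for each age/gender/race cell (i, j, k), the
    multiplier is M = R(i,j,k) = P(i,j,k) / S(i,j,k), where S is the sum
    of the weights in the cell and P is the Census population control."""
    return {cell: pop_controls[cell] / s for cell, s in weight_sums.items()}

# two hypothetical cells, keyed (age group, gender, race/ethnicity)
S = {(1, 1, 1): 90_000.0, (1, 2, 1): 120_000.0}   # weighted sample sums
P = {(1, 1, 1): 99_000.0, (1, 2, 1): 108_000.0}   # population controls
M = poststrat_multipliers(S, P)
# record-level weights in cell (1,1,1) are scaled up by 1.1,
# and those in cell (1,2,1) are scaled down by 0.9
```

After every record's weight is multiplied by its cell's M, the weighted sum in each cell equals the corresponding population control by construction.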

Again, cells used in the post-stratification are defined by the combination of age, gender, and race/ethnicity. With two categories for gender, six for age and four for race/ethnicity, a total of 48 (2x6x4) cells can be used. In any month, some race/ethnicity or, preferably, age categories may be merged if the number of completed interviews within the corresponding cells falls below thirty.

Those respondents who did not supply the demographic information necessary to categorize their age, gender and/or race/ethnicity are excluded from the post-stratification process and assigned a value of 1 for M.

The multiplier M is then applied to WNRMTRA to create WNRMTRAPS. However, WNRMTRAPS is overstated because a portion of the sample is not included in the calculation of the post-stratification adjustment. Therefore, a deflation factor is applied to the value of WNRMTRAPS. The deflation factor DEF is calculated as follows:

DEF = [Σi Σj Σk P(i, j, k)] / [TWNRMTRA_NA + Σi Σj Σk P(i, j, k)]

where the sums run over i = 1 to 6, j = 1 to 2, and k = 1 to 4.

Where:

P(i, j, k) is the national population count for cell (i, j, k); and

TWNRMTRA_NA is the sum of the WNRMTRA weights for respondents with missing demographic information.

This deflation factor denotes the proportion of the target population represented by respondents with non-missing demographic information. The final analysis weight, WFINAL, is the scaled value of WNRMTRAPS, calculated as:

WFINAL = DEF x WNRMTRAPS

WFINAL can be viewed as the number of population members that each respondent represents.
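The deflation step can be illustrated with a small numeric sketch; the function name and the totals are hypothetical, not survey values.

```python
def final_weight(w_nrmtraps, pop_controls_total, tw_missing):
    """Apply the deflation factor DEF = sum(P) / (TW_NA + sum(P)):
    the share of the target population represented by respondents
    with complete demographic information."""
    deflation = pop_controls_total / (tw_missing + pop_controls_total)
    return deflation * w_nrmtraps

# e.g., population controls summing to 200 million adults, and 10 million
# of WNRMTRA weight held by respondents missing demographic information
w_final = final_weight(w_nrmtraps=600.0, pop_controls_total=200e6,
                       tw_missing=10e6)
# DEF = 200/210 ≈ 0.952, so a post-stratified weight of 600 deflates to ≈ 571.4
```

Scaling every weight by the same DEF keeps the relative weights unchanged while bringing the overall weighted total back down to the target population size.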

Trimming of Final Analysis Weights

Extreme values of WFINAL are trimmed to avoid over-inflation of the sampling variance. In short, the trimming process limits the relative contribution of the variance associated with the kth unit to the overall variance of the weighted estimate by comparing the square of each weight to a threshold value determined as a multiple of the sum of the squared weights. Letting w1, w2, …, wn denote the final analysis weights for the n completed interviews, the threshold value is calculated using the following formula:

Threshold = sqrt(10 × Σj wj² / n), where the sum runs over j = 1 to n.

Each household having a final analysis weight that exceeds the determined threshold value is assigned a trimmed weight equal to the threshold. Next, the age/gender/race cell used in the post-stratification is identified for each household with a trimmed weight. To maintain the overall weighted sum within the cell, the trimmed portions of the original weights are reassigned to the cases whose weights are unchanged in the trimming process.

For cases having trimmed weights but missing age, gender, and/or race/ethnicity information, the trimmed portions of the original weights are assigned to all remaining cases whose weights are unchanged in the trimming process.

The entire trimming procedure is repeated on the new set of weights: a new threshold value is recalculated and the new extreme values are re-adjusted. The process is repeated until no new extreme values are found.
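The iterative procedure above can be sketched as follows. This is a simplified illustration: the survey reassigns trimmed weight within post-stratification cells, whereas for brevity this sketch redistributes over all untrimmed cases.

```python
import math

def trim_weights(weights, factor=10.0, max_iter=50):
    """Iteratively trim extreme weights: cap each weight at
    threshold = sqrt(factor * sum(w^2) / n), redistribute the trimmed
    excess proportionally over the untrimmed cases (preserving the
    weighted total), and repeat with a recomputed threshold until no
    weight exceeds it (or max_iter is reached)."""
    w = list(weights)
    n = len(w)
    for _ in range(max_iter):
        threshold = math.sqrt(factor * sum(x * x for x in w) / n)
        over = [i for i, x in enumerate(w) if x > threshold]
        if not over:
            break                              # no new extreme values found
        excess = sum(w[i] - threshold for i in over)
        for i in over:
            w[i] = threshold                   # cap the extreme weights
        keep = [i for i in range(n) if i not in over]
        total_keep = sum(w[i] for i in keep)
        for i in keep:
            w[i] += excess * w[i] / total_keep  # reassign the trimmed portion
    return w

# 99 ordinary weights plus one extreme weight (hypothetical values)
trimmed = trim_weights([1.0] * 99 + [500.0])
# the extreme weight is pulled down toward the threshold over several
# iterations, while the overall weighted sum (599) is preserved
```

Because each pass preserves the weighted total, trimming reduces variance at the cost of a small bias rather than changing population estimates of totals.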

3. Variance Estimation

The data collected in the Omnibus Household Survey was obtained through a complex sample design involving stratification, and the final weights were subject to several adjustments. Any variance estimation methodology must involve some simplifying assumptions about the design and weighting. Some simplified conceptual design structures are provided in this section.

Variance Estimation Methodology

The software package SUDAAN® (Software for the Statistical Analysis of Correlated Data) Version 7.5.6 was used for computing standard errors.

Software

SUDAAN® is a statistical software package developed by Research Triangle Institute to analyze data from complex sample surveys. SUDAAN® uses advanced statistical techniques to produce robust variance estimates under various survey design options. The software, in particular, can handle stratification and the numerous adjustments associated with weights subject to multiple adjustments.

Methods

Overall, three variables, CENDIV (Census Division), METRO (metropolitan status), and FNLWGT (final analysis weights), are needed for variance estimation in SUDAAN®. The method used in the present survey utilizes the variables CENDIV and METRO to create 18 (9x2) strata, a single stage selection with replacement procedure, and the final analysis weights. This method provides somewhat conservative standard error estimates.

Assuming a simplified sample design structure, the following SUDAAN® statements can be used (note that the data file first must be sorted by the variables CENDIV and METRO before using it in SUDAAN®):

PROC ... DESIGN = STRWR;
NEST CENDIV METRO;
WEIGHT FNLWGT;

More precisely, the following code is used to produce un-weighted and weighted frequency counts, percentages and standard errors (the variable of interest here is "var1", a categorical variable with seven levels):

PROC CROSSTAB DATA = datafile DESIGN = STRWR;
WEIGHT FNLWGT;
NEST CENDIV METRO;
SUBGROUP var1;
LEVELS 7;
TABLE var1;
PRINT nsum wsum totper setot / STYLE=nchs;

When sampling weights are post-stratified, the variance of an estimate is reduced since the totals are known without sampling variation. Using SUDAAN® without any modifications produces standard errors that do not reflect this reduction in variance. The estimates of the standard errors can be improved by using SUDAAN®'s post-stratification options (POSTVAR and POSTWGT). These options reflect the reduction in variance due to adjustment to control totals in one dimension. However, this approach still does not reflect the full effect of post-stratification, as the other post-stratification dimensions are ignored.

Degrees of Freedom and Precision

A typically used rule-of-thumb for the degrees of freedom associated with a standard error is the number of un-weighted records in the dataset minus the number of strata. The rule-of-thumb degrees of freedom for the method above will fluctuate from month to month depending upon the number of records in each monthly dataset. Most monthly datasets will yield degrees of freedom of around 1,000.

For practical purposes, any degrees of freedom exceeding 120 are treated as infinite, i.e., a normal Z-statistic is used instead of a t-statistic for testing. Note that the two-tailed 5 percent critical t at 120 degrees of freedom is 1.98, while the corresponding value at infinite degrees of freedom (the 0.025 z-value) is 1.96. If a variable of interest covers most of the sample strata, this limiting value probably will be adequate for analysis.

4. Data Collection Methodology

Expert Panel Review

An Expert Panel is sent copies of the Omnibus Survey each month for review and comment. A link to the BTS website is sent to panelists to provide information about the purpose and history of the Omnibus Household Survey. Panelists are instructed to prioritize their comments about the draft survey. A conference call is conducted among the panelists to identify problems and issues and reach consensus (where possible) on the most significant problems and associated recommendations. The discussion and associated recommendations are summarized and distributed to the panelists to review for accuracy. Edits and modifications are then incorporated into the document and distributed to BTS.

Cognitive Interviews

A total of twenty (20) cognitive interviews are conducted each month. Interviewing is conducted between 10 a.m. and 6 p.m. to broaden the distribution of individuals able to participate. Recruiters intercept individuals in the mall and screen based on race, gender, age, and income to ensure that the resulting sample of respondents reflects the United States population on those characteristics. They also screen out individuals with personal experience in, or a close relationship with someone working in, any of the sensitive occupations (transit agency, market research, advertising, or public relations), as well as anyone who has participated in a survey initiative in the past six (6) months. Respondents are paid $10 for their participation in the cognitive interview.

Respondents who agree to participate are escorted to an interviewing facility in the mall and are administered the cognitive interview by MDAC personnel. Interviewers are required to compile results from their interviews and develop a summary of noteworthy issues and any suggested solutions by the end of the next day.

Interview Procedures

The following outlines the key phases of the interviewing procedures utilized in the survey.

Pre-Testing

A Pre-Test is conducted prior to the initiation of actual calling. The Pre-Test is used to replicate the data collection process and identify any problem areas related to the process, the survey instrument in total, specific questions, answer choices, questionnaire instructions or question format. It is also used to test the interview length.

Telephone supervisors conduct these pre-test interviews of the draft survey instrument. All problematic questions, issues and recommendations resulting from the pre-test are included in the list of problematic issues report which is forwarded to BTS.

Interviewer Training

All new interviewers initially complete a generic two-day (approximately 12 hours) classroom training on general interviewing skills. Additionally, each month all interviewers complete approximately four to six hours of classroom training on specific aspects of the Omnibus Household Survey. In response to normal interviewer turnover and/or increased staffing needs, all interviewers new to the project receive the full complement of training prior to beginning their interviewing for this study. An outline of the generic two-day training is below. This generic training also covers asking questions as worded (verbatim reading and recording) and the meaning and significance of on-screen font styles and text effects (bold type, light type, ALL CAPS), which are introduced in the "Maneuvering through CfMC" module. Interviewers are also provided with a list of Frequently Asked Questions so they are prepared to respond to a respondent's potential refusal to participate in the study.

I. ORIENTATION
A. Welcome
B. Organizational Chart
C. Your Job Description/Responsibilities
D. Policies and Procedures

II. TRAINING
***Includes Excerpts from the Market Research Association (MRA) Training Manual

A. Introduction to the Marketing and Opinion Research Industry

     What is marketing and opinion research?
     Types of interviews
     Techniques used in data collection
     Survey settings
     Overview of the marketing and opinion research process
     Key Terms

B. The Interviewer’s Role

     Appropriate Attitude
     Characteristics of a successful interviewer
     Recruiting Respondents
     The "Art" of Interviewing
     Key Terms

C. Respondents

     Relating to Respondents
     "Training" Respondents
     Building and Maintaining Rapport
     "Active Listening"
     Callback Scenarios and Procedures
     Terminations

D. Questions and Answers Plus Other Topics

     The One Unbreakable Rule
     Types of Questions
     The Interviewing Process
     Paperwork
     Quality Assurance
     Dos and Don’ts
     Conducting the Interview
     Editing the Interview
     Monitoring (includes Quotas)
     Validation

E. Bias, Probing and Clarifying

     Introduction
     Good Feedback
     Bad Feedback
     Avoid Bias
     Verbatim Reading and Recording
     Open-end Questions and Probing
     Additional Section, "Bias, Probing and Clarifying"

F. Objections and Refusal Conversion

     Nine Most Common Objections and Reasons for Refusal
     Acknowledgement of the Objection
     Soft Refusal Conversion

G. Getting Familiar With The Computer

     Mouse
     Keyboard
     Logging On

H. Maneuvering through CfMC
     Keyboard Commands
     Introduction to CfMC Phone System
     Starting the Interview
     Interviewing with SURVENT
     Responding to Different Question Types
     SURVENT Commands
     More About CfMC
     Role Playing

I. Open Discussion / Additional Questions

Each survey month, a questionnaire update training is conducted to discuss the questionnaire changes. An updated interviewer training manual specific to the new month is developed and distributed to the interviewers. An outline of the approximately four-to-six hour training includes:

  • A review of last month’s results;
  • Feedback from interviewers, supervisors;
  • Problems and issues emerging from last month’s data collection;
  • An Overview of changed sections from last month (Sections B, S and M);
  • Question-by-Question Training for New Sections.

In addition to the initial (generic) training and monthly refresher (survey-specific) training, interviewer re-training is conducted on an "as-needed" basis – that is, as interviewers are replaced or the survey instrument changes. Also, interviewers are evaluated and retrained as needed for improvement or changes in work habits as identified by our monitoring and editing control procedures.

On a monthly basis, MDAC reviews the new questionnaire for changes and incorporates any changes approved by BTS emanating from the Expert Panel Review, the Cognitive Interviews, and the Pre-Test. MDAC then re-issues a new manual to each interviewer with the changes.

Pre-Contact Letter

Eight (8) calendar days prior to the start of data collection, a BTS-approved pre-contact letter is sent to sampled numbers with an address. The intent is for each household with an address to receive the pre-contact letter several days before they receive a call to conduct the interview.

An "800" number is listed in each letter with the specific times to call (M-F, 9:00 am – 11:00 pm EST; Sat and Sun, 1:00 pm – 9:00 pm EST). The letters are categorized by call center and each call center's "800" number. Should the respondent call outside the times listed above, they will receive a recorded message asking them to leave their name and number; someone will then contact them as soon as possible to conduct the interview.

The toll-free number is also included, beginning with the seventh attempt, in messages left for potential respondents with answering machines when we are unable to make contact with a member of the household. Additionally, after the seventh callback we leave our 1-800 number to arrange for interviewing appointments.

The toll-free number is not left in messages before the seventh attempt out of concern that people might avoid the call or feel "harassed" if, after being away for a few days, they return home to find four to six messages on their answering machine. Because a household with an answering machine is called two to three times per day during the Omnibus Household Survey, a balance must be struck between perceived harassment and encouraging participation, particularly given the limited duration of fielding.

Given the short time frame for data collection, the potential perception of harassment, and prior research results, the toll-free 800 number is left for the first time at the seventh call.

Call Attempts and Callbacks

The interviews are conducted using CfMC computer assisted telephone interviewing software. At a minimum, one thousand (1,000) interviews are completed each month. The interviewing is distributed between two call facilities, the Wats Room and MDAC.

The Wats Room has two shifts Monday through Friday (9:00 am – 4:30 pm and 5:00 pm – midnight), one shift on Saturday (9:00 am – midnight), and one shift on Sunday (10:00 am – midnight). MDAC has three shifts Monday through Friday (9:00 am – 2:00 pm, 2:00 pm – 6:00 pm, and 6:00 pm – midnight) and two shifts on Saturdays (11:00 am – 4:00 pm and 4:00 pm – 9:00 pm) and Sundays (1:00 pm – 5:00 pm and 5:00 pm – 9:00 pm). Monday through Friday from 9:00 am to 2:00 pm, only callbacks (scheduled and non-scheduled) are initiated at both the Wats Room and MDAC, because completion rates during this period are historically and significantly lower. In addition, calls after 9:00 pm local time are for scheduled callbacks only; no non-scheduled callbacks are conducted after 9:00 pm local time.

A sufficient number of telephone numbers is released to each call center to ensure that a minimum 30% response rate is achieved if all released numbers are in scope. "In scope" means numbers for which contact has been achieved and eligibility determined. Sample is added based upon past calling history, the quantity of numbers determined to be ineligible, projections of completes based on past and current experience, the number of callbacks achieved, and refusal conversion rates.

When a phone number is first called, the interviewer determines whether it is a household. If so, the interviewer asks to speak with an adult 18 years of age or older (if the person answering is not an adult). Once an adult is on the line, the interviewer randomly selects the actual survey respondent by asking for the adult in the household who most recently had a birthday. When that adult comes to the phone, the interviewer conducts the survey. Should the interviewer not be able to complete the survey, the following dispositions are recorded:

Do-Not-Call dispositions are for households that request their number not be called in the future. This disposition ensures compliance with the respondent’s request.

Refusals are defined as cases in which a person refuses outright to participate in the survey. Someone who breaks off the interview, or who refuses because s/he does not have time or is busy, is treated as a callback. Refusals are routed to supervisors and to selected interviewers capable of converting refusals into completions or another disposition. Interviewers experiencing a refusal enter the appropriate refusal code. Supervisors review refusals the next day and assign the refusal numbers to the appropriate personnel, who initiate callbacks using a refusal script. Refusal households are called twice a day: once during the time period in which contact was initially made, and once during one other time period. The refusal callback is rotated between the morning and late-afternoon time periods Monday through Friday.

Callbacks are scheduled and prioritized by the CfMC software according to the following criteria:

  • First priority – scheduled callback to a qualified household member;
  • Second priority – scheduled callback to "qualify" a household (includes contact with Spanish-language-barrier households);
  • Third priority – callback to make initial contact with a household (includes answering machine, busy, and ring-no-answer);
  • Fourth priority – callbacks that are the seventh or higher attempt to schedule an interview.
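The four-tier prioritization above can be sketched as a sort key. This is an illustrative reconstruction, not the CfMC scheduler itself; the record fields and function name are hypothetical.

```python
# Hypothetical sketch of the four-tier callback prioritization described
# above; the actual CfMC scheduling logic is proprietary.

def callback_priority(record):
    """Return a sort key for a sample record: lower values are dialed first."""
    if record.get("scheduled") and record.get("qualified_member"):
        return 1  # scheduled callback to a qualified household member
    if record.get("scheduled"):
        return 2  # scheduled callback to "qualify" the household
    if record.get("attempts", 0) < 7:
        return 3  # initial-contact callback (answering machine, busy, no answer)
    return 4      # seventh or higher attempt to schedule the interview

queue = [
    {"id": "A", "scheduled": True, "qualified_member": True, "attempts": 2},
    {"id": "B", "scheduled": False, "attempts": 8},
    {"id": "C", "scheduled": False, "attempts": 3},
]
queue.sort(key=callback_priority)
print([r["id"] for r in queue])  # → ['A', 'C', 'B']
```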

An interview is considered "complete" only if all questions are answered. For this purpose, a refusal to answer an individual question counts as an "answered" question.

Should the interviewer not be able to complete the interview the following procedures will be followed:

Scheduled callbacks can be dialed at any time during calling hours and as frequently as requested by the callback household, up to seven times. Callback attempts beyond seven are at the discretion of the interviewer, based on his/her perception of the likelihood of completing the interview; that perception is shaped, in part, by how strongly the potential respondent or another household member encourages the interviewer to call back. The interviewer then confers with a supervisor, and a final determination is made as to whether calling continues.

Callbacks to Spanish-language households are conducted by Spanish-speaking interviewers. An interviewer who identifies a household as Spanish-speaking alerts a supervisor that a Spanish-speaking interviewer is needed to handle the call. If no Spanish-speaking interviewer is available, the interviewer informs the respondent that someone will call back and records the number as CBS (Callback Spanish). If a Spanish-speaking interviewer does not become available within the next hour, a callback is scheduled, if possible.

Records identified as Spanish are routed to a Spanish-speaking interviewer, who makes the call and follows the standard protocol used for English calls.

Callbacks for initial contact with potential respondents are distributed across the various calling time periods and across weekdays and weekends to ensure that a callback is initiated during each time period each day. Two (Saturday and Sunday) to three (Monday through Friday) callbacks per number are initiated per day, as long as the number retains a callback status. There are up to twenty (20) callback attempts. This protocol applies to ring-no-answer and answering-machine numbers. When an interviewer reaches a household's answering machine on the seventh, fourteenth, or twentieth call, the interviewer leaves a message with the appropriate 800 number.

Callbacks to numbers with a busy signal are scheduled every 30 minutes until the household is reached, the disposition changes, the maximum number of callbacks is reached, or the study is completed.
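The callback limits and message rules described above can be summarized in a small decision function. This is a sketch under the stated rules (up to 20 attempts, 30-minute redials for busy signals, 800-number messages at the seventh, fourteenth, and twentieth answering-machine contacts); the function and outcome labels are illustrative.

```python
# Illustrative sketch of the callback scheduling rules described above.
# Limits come from the text; the function name and labels are hypothetical.

MAX_CALLBACKS = 20
MESSAGE_ATTEMPTS = {7, 14, 20}  # attempts at which an 800-number message is left

def next_action(attempt, outcome):
    """Decide the follow-up for a call, given the attempt number and outcome."""
    if outcome == "answering_machine" and attempt in MESSAGE_ATTEMPTS:
        suffix = "; stop" if attempt >= MAX_CALLBACKS else "; schedule next callback"
        return "leave 800-number message" + suffix
    if attempt >= MAX_CALLBACKS:
        return "stop"
    if outcome == "busy":
        return "redial in 30 minutes"
    return "schedule next callback"

print(next_action(3, "busy"))               # → redial in 30 minutes
print(next_action(7, "answering_machine"))  # → leave 800-number message; schedule next callback
print(next_action(20, "no_answer"))         # → stop
```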

Disposition Codes

The following are the disposition codes used for each call outcome:

Out-of-Scope Numbers:

  • BG – Business (The number dialed is a non-residential phone number. The call is terminated and the number resolved.)
  • CF – Computer/Fax (The number dialed has led to a modem, fax, pager, or cell phone.)
  • DS – Disconnected number (The number dialed is disconnected. The call is terminated and the number resolved.)
  • NC – Number change (The call yielded a recording that the number was changed, with or without a change in the area code.)
  • NQ – No one 18 years old or older in household
  • UNB – Unavailable before and during study period

Scope Undetermined:

  • NA – No answer (The phone is not answered within 5 rings.)
  • BZ – Busy (busy signal)
  • AM – Answering machine (The call has led to an answering machine or voicemail.)
  • CCC – Cannot complete call (The message "Your call cannot be completed at this time" is received. This is a message provided by the local telephone company when there is a line problem in the local area. These calls are dialed on another day.)
  • PM – Privacy manager (Privacy manager is a feature provided by local telephone companies that requires incoming callers to identify themselves, before the household will accept the call.)
  • NQL – Eligibility undetermined because of language problems or deafness
  • RFI – Refused to speak with interviewer (screening incomplete). The respondent refuses to speak with the interviewer prior to answering F0250 and, if asked F0200, responded "no".
  • HRI – The respondent requests that their number be removed from the calling list, or refuses to speak with the interviewer for a second time prior to answering F0250 (screening incomplete) and, if asked F0200, responded "no".
  • OD – The maximum number of call attempts is reached before being able to determine eligibility

In-Scope Numbers:

  • YES – Yes (Respondent has agreed to be screened and is eligible, 18 years old or older.)
  • CB – Callback (The respondent has asked that we call them back at another time.)
  • CBS – Callback Spanish
  • DL – Deaf/Language (The respondent is eligible but is hard of hearing, or cannot speak English fluently to complete the interview.)
  • RFQ – Respondent refusal (Respondent refuses after establishing there is a qualified household member by answering F0350 or a later appearing question, or after answering F0200 "yes".)
  • UN – Unavailable (Was available when study began or unable to determine.)
  • DR – Respondent deceased prior to completion of interview
  • AC – The area code is changed but not the number
  • HRQ – The respondent requests that their name be removed from the calling list, or refuses for a second time after it has been established that there is a qualified household member (by answering F0350 or a later-appearing question, or by answering F0200 "yes")
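The disposition codes above map onto the three scope categories used in the response-rate calculation later in this document. As a sketch, the grouping can be expressed as a lookup; the function name is illustrative, but the code groupings come directly from the lists above.

```python
# Grouping of the final disposition codes listed above into the scope
# categories used in the CASRO response-rate calculation. The groupings
# follow the document's lists; the function name is hypothetical.

OUT_OF_SCOPE = {"BG", "CF", "DS", "NC", "NQ", "UNB"}
SCOPE_UNDETERMINED = {"NA", "BZ", "AM", "CCC", "PM", "NQL", "RFI", "HRI", "OD"}
IN_SCOPE = {"YES", "CB", "CBS", "DL", "RFQ", "UN", "DR", "AC", "HRQ"}

def scope_of(code):
    """Classify a final disposition code by scope category."""
    if code in OUT_OF_SCOPE:
        return "out of scope"
    if code in SCOPE_UNDETERMINED:
        return "scope undetermined"
    if code in IN_SCOPE:
        return "in scope"
    raise ValueError(f"unknown disposition code: {code}")

print(scope_of("AM"))   # → scope undetermined
print(scope_of("YES"))  # → in scope
```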

Household Screening

Qualified respondents are at least 18 years of age and must be the household member with the most recent birthday. If that household member is not available at the time of the call, a callback is scheduled to screen and/or interview the respondent.

Interviewing Methods

Incentives are not offered to potential respondents in exchange for their participation in the survey. Surveys are conducted in both English and Spanish. If a potential respondent refuses to be interviewed, the reason for refusal is recorded. The interview averages 10 to 12 minutes, with an additional 3 to 5 minutes to screen and recruit potential respondents.

Generally, interviewers introduce themselves, state whom they work for and the purpose of the survey, and assure the potential respondent that this is not a sales call. The interviewer then determines whether there is an eligible person in the household. Once contact is made with the eligible household member, the interviewer reintroduces him- or herself when necessary, explains the purpose of the survey, notes that the study is voluntary and takes only about 15 minutes, and indicates that all information will remain confidential and that the respondent can refuse to answer any question.

If the potential respondent agrees to participate, the interviewer provides an opportunity to ask questions, addresses them, and conducts the interview. If it is not a convenient time, a callback is scheduled.

Data Quality Control Procedures

A key component of successful data quality control procedures is a well-trained and experienced interview staff. All potential interviewers, regardless of their level of experience, underwent intensive training and orientation prior to being hired for this project. New hires were first screened on their voice quality, diction, and ability to administer a simple test questionnaire.

Our interviewer training for administering telephone surveys included:

  • Orientation on the purpose and importance of marketing research, company policies, and quality standards including viewing Market Research Association (MRA) training videotapes;
  • Testing on material developed by the Market Research Association;
  • Background and purposes of the survey;
  • Procedure for selection of correct respondent for the interview;
  • Intensive hands-on training on the "basics" of interviewing itself – the handling of skip patterns, probing and clarifying techniques, sample administration, Computer Assisted Telephone Interviewing (CATI), overcoming refusals, etc.;
  • Observing and listening to experienced interviewers conducting actual interviews during which each trainee's performance is closely monitored and evaluated under actual interviewing conditions;
  • Constant reinforcement of the importance of accuracy, quality, and courtesy; and
  • Successful completion of a total of approximately eight hours of training during the different sessions.

Interviewer Performance

Ongoing monitoring of every interviewer is undertaken throughout the BTS Omnibus Survey. Fifteen percent (15%) to twenty percent (20%) of all calls are monitored. An interviewer evaluation form is completed for each monitored contact with a household; the forms also include two to three evaluations of completed interviews per hour. The evaluation forms are paper hard copies and are available for review by BTS at the offices of M. Davis and Company, Inc. in Philadelphia.

Other Procedures

The interviews completed during each interviewer's first two days are checked to identify any problems in administering the survey. The objective is to identify problems, correct the errors, and take action so that they do not reappear. Before beginning the second day of work, all interviewers are alerted to any problems and review how to ensure they do not recur. Interviews completed during the second day are checked to confirm that the first day's errors were not repeated. If errors are repeated, then depending on their significance, the interviewer is retrained and/or removed from the project for that month of calling.

Newer interviewers are monitored at a higher rate, regardless of their level of prior experience, until their first performance evaluation. Additionally, 10% – 20% of each interviewer's work is validated through callbacks to respondents to verify responses to key questions. Validation begins on the first day of interviewing to ensure early detection of problems and to avoid a backlog of validation calls. Validations are performed for both new and experienced interviewers.

Summary of Data Cleaning

On a daily basis, the data file is checked as a standard quality-control measure. The CfMC utility SCAN checks the data to ensure that all questions are asked in accordance with the skip patterns on the final questionnaire. The file is also checked for missing codes.

This survey contains "other specify" questions, which allow the interviewer to record text responses that do not appear in the pre-listed set of responses. "Other specify" responses are edited to determine whether they match a pre-listed response. Upon review, it may be necessary to "code back" a response to the pre-list; this occurs when an interviewer recorded a response as "other" although one of the pre-listed responses matched it.
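The code-back edit described above can be sketched as a simple match against the pre-list. This is an illustrative reconstruction: the category labels, the "other" code value, and the exact-match rule are hypothetical (real editing involves human review of verbatims).

```python
# Hypothetical sketch of the "other specify" code-back edit: a verbatim
# "other" response is compared against the pre-listed categories and
# recoded when it duplicates one. Labels and code values are illustrative.

PRE_LISTED = {"car": 1, "bus": 2, "train": 3}
OTHER_CODE = 99

def code_back(code, verbatim=""):
    """Recode an 'other specify' response onto the pre-list when it matches."""
    if code != OTHER_CODE:
        return code  # already a pre-listed response; leave unchanged
    normalized = verbatim.strip().lower()
    return PRE_LISTED.get(normalized, OTHER_CODE)

print(code_back(99, "Bus"))      # → 2  (coded back to the pre-listed response)
print(code_back(99, "carpool"))  # → 99 (genuinely "other"; stays as recorded)
```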

Treatment of Missing Values

As with any survey, the BTS Omnibus Survey, by design, contains questions that are not asked of certain respondents, based on their responses to other questions. In addition, there will always be some respondents who do not know the answer to, or choose not to answer, some items in the survey. Each of these responses can have a different meaning to the data user. While each of these response categories is important in characterizing the results of the survey, they are often removed from certain analyses, particularly those involving percentages. Therefore, the categories were given standard codes for easy identification. The table below presents the response categories and how they are represented in each data file.

Table 2: Summary of Codes for Missing Values by Data File Format

  Response Category    SAS Transport    Microsoft Excel    ASCII
  Appropriate Skip     .S               -7                 -7
  Refused              .R               -8                 -8
  Don’t Know           .D               -9                 -9
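As a sketch of how an analyst might exclude these codes before computing percentages (using the numeric Excel/ASCII codes; the variable names and data are illustrative):

```python
# Excluding the standard missing-value codes (-7 appropriate skip,
# -8 refused, -9 don't know) before computing a percentage.
# The response data here are made up for illustration.

MISSING_CODES = {-7, -8, -9}

responses = [1, 2, -8, 1, -7, 2, 2, -9, 1]

valid = [r for r in responses if r not in MISSING_CODES]
pct_of_1 = 100 * valid.count(1) / len(valid)
print(f"{pct_of_1:.1f}% answered 1 among valid responses")  # → 50.0%
```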

Response Rates

The procedures for response rate calculation are based on the guidelines established by the Council of American Survey Research Organizations (CASRO). The final response rate for the survey is obtained using the following formula:

Response rate = C / (I + U × (I / (I + O)))

where C is the number of completed household interviews, I is the number of households in scope, U is the number of telephone numbers whose scope is undetermined, and O is the number of households out of scope (so I + O is the total number of numbers resolved as in or out of scope).
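The formula can be computed directly. The function below is a transcription of the calculation; the variable names and the example figures are mine, not survey results.

```python
# Direct transcription of the CASRO response-rate formula described above.
# Variable names and the example numbers are illustrative, not actual data.

def casro_response_rate(completes, in_scope, undetermined, out_of_scope):
    """Completed interviews over estimated eligible numbers, where numbers of
    undetermined scope are prorated by the observed in-scope fraction."""
    eligibility_rate = in_scope / (in_scope + out_of_scope)
    return completes / (in_scope + undetermined * eligibility_rate)

# Example: 1,000 completes; 2,500 in scope; 800 undetermined; 1,500 out of scope
rate = casro_response_rate(1000, 2500, 800, 1500)
print(f"{rate:.3f}")  # → 0.333
```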

Non-Response Methods

For the Omnibus Survey the following is undertaken to maximize the response rate:

  1. Matching sample telephone numbers against a commercial file of residential directory-listed numbers.
  2. Sending an advance letter that clearly states the aims, objectives, and importance of the survey, with a toll-free callback number. MDAC collaborates with BTS to create a BTS-approved advance letter.
  3. Coordinating the mailing of advance letters with the interview calling.
  4. Developing answers for the questions and objections that may arise during the interview.
  5. Leaving a message with a toll-free number on answering machines.
  6. Using multilingual interviewers to reduce language barriers.
  7. Eliminating non-residential numbers from the sample.
  8. Calling back respondents who initially refused or broke off the interview.
  9. Minimizing turnover of key and non-key personnel.

Reasons for Non-Response

As with any survey, the BTS Omnibus Survey, by design, contains questions that ask respondents to supply the demographic information necessary to categorize their age, gender, and/or education. There will always be some respondents who choose not to answer some items in the survey. For respondents who did not want to provide this information, the most common reasons given for non-response were: "I don't like giving my age," "I would rather not say," "I don't like to be labeled," and "That is personal information."

Common reasons given for non-response to questions about contacts respondents may have had with government agencies, and why they contacted those agencies, were: "I don't want to say because I don't trust the government," "I don't want to answer because I have an issue pending," and "I would rather not say."

References

Books:

"Sampling of Populations: Methods and Applications," 3rd Ed., 1999, Paul S. Levy (School of Public Health, University of Illinois at Chicago) and Stanley Lemeshow (School of Public Health, University of Massachusetts)

"Practical Methods for Design and Analysis of Complex Surveys," 1995, Risto Lehtonen (The Social Insurance Institution, Finland) and Erkki J. Pahkinen (University of Jyvaskyla)

"Sampling Techniques," 2nd Ed, 1967, William G. Cochran (Harvard University), Wiley

"SUDAAN Release 7.5, User's Manual Volume I and II," 1997, Babubhai V. Shah, Beth G. Barnwell and Gayle S. Bieler, Research Triangle Institute

Articles:

"1999 Variance Estimation," National Survey of America's Families Methodology Report, 1999 Methodology Series, Report No. 4, prepared by J.M. Brick, P. Broene, D. Ferraro, T. Hankins, C. Rauch and T. Strickler, November 2000

"Pitfalls of Using Standard Statistical Software Packages for Sample Survey Data," Donna J. Brogan, Encyclopedia of Biostatistics, edited by P. Armitage and T. Colton, John Wiley, 1998

"Sampling and Weighting in the National Assessment", K. Rust and E. Johnson, Journal of Educational Statistics, 17(2): 111-129, 1992

"Poststratification and weighting adjustments," Andrew Gelman and John B. Carlin, Department of Statistics, Columbia University Working Paper, February 2000

"Sampling Variances for Surveys With Weighting, Poststratification, and Raking," Hao Lu and Andrew Gelman, Department of Statistics, Columbia University Working Paper, April 2000


