Abstract
J.L. Esposito, J.M. Rothgeb, A.E. Polivka, J. Hess,
and P. Campanelli (1992) "Methodologies for
Evaluating Survey Questions: Some Lessons from the Redesign of
the Current Population Survey," Paper presented at the International
Conference on Social Science Methodology, Trento, Italy.
Various techniques have been developed over the years to
pretest new survey questions or to evaluate the effectiveness
of preexisting questions. As part of the current effort to
redesign the Current Population Survey (CPS), researchers
from the Census Bureau and the Bureau of Labor Statistics
used a variety of methods to evaluate questions designed to
elicit accurate labor force data and to assess the quality of
interviewer-respondent interactions. The initial two phases
of the redesign utilized Computer-Assisted Telephone
Interviewing (CATI) and a Random Digit Dialing (RDD) sampling
plan, and hence are referred to collectively as the CATI/RDD Test. During
the first phase of the CATI/RDD Test, the current version of
the CPS questionnaire ("A") was compared with two
alternative versions ("B" and "C"), which
were developed on the basis of earlier laboratory and field
research. The principal product of this first phase was a
single alternative questionnaire ("D"), which
comprised the best questions from versions B and C, and a
number of other questions deemed necessary given the results
of phase one analyses. In the second phase of the field test,
versions A and D were compared. The main purpose of phase two
was to fine-tune version D, which, with minor revisions, will
become the revised CPS questionnaire for the 1990s.
Four general pretesting methods were used in the evaluation
of alternative CPS questionnaires during phases one and two.
These methods are listed and briefly described below:
- Systematically Coded Interviewer-Respondent Interactions.
Six monitors coded interactions between interviewers and
respondents while interviews were in progress. Both
interviewer behaviors (e.g., exact reading of question,
minor/major change in wording, probes) and respondent
behaviors (e.g., adequate answer, inadequate answer, request
for clarification) were coded.
- Interviewer Debriefing. During phase one, interviewers
were asked to report in a self-administered questionnaire,
and later to discuss in a focus group format, their
impressions of various aspects of the alternative
questionnaires (e.g., version preferences, problematic
questions/series, difficulties with asking certain
questions). During phase two, only focus groups were
conducted.
- Field-based Respondent Debriefing. After completing their
monthly CPS CATI/RDD interview, most respondents were asked
two or three sets of debriefing questions that were keyed to
responses given during the main labor force interview. Among
other purposes, these debriefing questions were designed to
determine whether key labor force concepts were being
misunderstood and to evaluate whether questions in the main
survey were superfluous. In addition, ten percent of the
respondents were read short vignettes and asked how they would classify
the main character in each scenario (i.e., as working or
looking for work).
- Item-based Response Analysis. These analyses involved
statistical comparisons of response distributions and
nonresponse rates among comparable question sets for
alternative versions of the CPS questionnaire, as sketched below.
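As a rough illustration of the kind of comparison involved, the sketch
below (Python, using scipy; the counts, response categories, and version
labels are entirely hypothetical and do not come from the CATI/RDD Test)
contrasts the response distribution and item nonresponse rate for a single
question set under two questionnaire versions with a chi-square test of
homogeneity:

    from scipy.stats import chi2_contingency

    # Hypothetical counts for one question set, by questionnaire version.
    # Columns: employed, unemployed, not in labor force, no response.
    counts = [
        [512, 43, 401, 19],   # version A
        [530, 39, 418, 12],   # version D
    ]

    # Chi-square test of homogeneity: does the response distribution
    # differ across the two versions?
    chi2, p_value, dof, _ = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")

    # Item nonresponse rate under each version.
    for version, row in zip(("A", "D"), counts):
        print(f"version {version}: nonresponse rate = {row[-1] / sum(row):.1%}")

A large chi-square statistic (small p-value) would flag a question set whose
response distribution shifts between versions; the nonresponse rates provide
a complementary indicator of item quality.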
The focus of the paper is on what we have learned about the
strengths and weaknesses of each of these methods in terms of
implementation, interpretation, and effectiveness at
identifying problematic questions. The discussion is
illustrated with selected results from the CATI/RDD field
test.
To receive a copy of this paper (usually within 3-5 days), please contact Jim Esposito by phone or voice mail (202-691-6368), by e-mail (Esposito.Jim@bls.gov), or by mailing your request to:
James L. Esposito
Bureau of Labor Statistics
Postal Square Building, Room 4985
2 Massachusetts Avenue, N.E.
Washington, DC, 20212