National Science Foundation - Directorate for Social, Behavioral & Economic Sciences
 
Research on Survey Methodology: FY 1999 Awards List

Awards from this competition were jointly reviewed and supported by NSF’s Methodology, Measurement, and Statistics Program and a consortium of federal statistical agencies represented by the Federal Committee on Statistical Methodology (FCSM). The following agencies provided direct financial support for these awards:

Bureau of Economic Analysis, DoC
National Agricultural Statistics Service, USDA
Bureau of Justice Statistics, DoJ
National Center for Education Statistics, ED
Bureau of Labor Statistics, DoL
National Center for Health Statistics, DHHS
Department of Transportation
Science Resources Studies, NSF
Economic Research Service, USDA
Social Security Administration
Energy Information Administration, DoE
U.S. Census Bureau, DoC


Developing and Testing a Computer Tool That Critiques Survey Questions

9977969

Arthur C. Graesser
University of Memphis
Total Award Duration: 36 months
Amount: $205,990

The validity and reliability of answers to questions on a survey critically depend on whether the respondents understand the meaning of the questions. This project develops and tests a computer tool that assists survey designers in improving the comprehensibility of questions. The computer tool will have particular modules that diagnose each question in a survey on various levels of language, discourse, and world knowledge. For example, the critique identifies questions with low-frequency words, vague or ambiguous terms, unclear relative terms, complex syntax, high working memory load, misleading presuppositions, and content that appears to be unrelated to the survey context. The computer tool will incorporate empirical findings and computational architectures from the fields of cognitive science, artificial intelligence, computational linguistics, discourse processing, and psychology. Some of the problems these modules detect are so complex, technical, or subtle that they are invisible to the unassisted eye, even to experts in survey methodology, questionnaire design, and computational linguistics. This motivates the need for a computer tool to assist the research methodologist in revising questions and in learning about the complex mechanisms that underlie each component.

The computer tool will be useful to the extent that it provides an accurate and reliable diagnosis of problematic questions. The project will therefore evaluate the performance of the computer tool on several measures. Each module determines whether or not a particular question has a problem (e.g., an unfamiliar technical term, working memory overload); these decisions will be compared with the decisions of experts. Other performance measures are needed because trained expert judges may miss subtle computational mechanisms. These other measures will assess whether the computer output can predict the behavior of respondents when they answer the questions: (a) behaviors of respondents in a conversational interview that indicate they are having difficulty comprehending the question (such as respondents' requests for clarification), and (b) test-retest reliability of answers when respondents answer a question on multiple occasions. Performance measures also will be compared for original questions, questions revised by survey methodologists who do not use the computer tool, and questions revised by survey methodologists who have had the benefit of using the tool.
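To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what one diagnostic module of this kind might look like: it flags low-frequency words, vague quantifiers, and overly long questions. The word lists, frequency counts, and thresholds below are invented for illustration and are not drawn from the project itself, which uses far richer linguistic and discourse-level analyses.

# Hypothetical sketch of a single question-critique module. The word lists,
# counts, and thresholds are illustrative only.
WORD_FREQUENCIES = {"buy": 9500, "household": 4200, "expenditure": 310,
                    "amortization": 12, "often": 8800, "seldom": 150}
VAGUE_TERMS = {"often", "seldom", "regularly", "recently"}
LOW_FREQUENCY_CUTOFF = 500   # words rarer than this are flagged as unfamiliar
MAX_WORDS = 25               # crude proxy for working-memory load

def critique_question(question):
    """Return a list of human-readable problems detected in a survey question."""
    problems = []
    words = [w.strip(".,?;:").lower() for w in question.split()]

    rare = [w for w in words
            if w in WORD_FREQUENCIES and WORD_FREQUENCIES[w] < LOW_FREQUENCY_CUTOFF]
    if rare:
        problems.append("low-frequency words: " + ", ".join(rare))

    vague = [w for w in words if w in VAGUE_TERMS]
    if vague:
        problems.append("vague or ambiguous terms: " + ", ".join(vague))

    if len(words) > MAX_WORDS:
        problems.append("long question; possible working-memory overload")

    return problems

print(critique_question("How often did your household report an amortization expenditure?"))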

 

Collaborative Research: Small-Area Estimation - A Growing Problem for the Next Millennium

9978101

Jiming Jiang
Case Western Reserve University
Total Award Duration: 24 months
Amount: $54,134

9978145

P. Lahiri
University of Nebraska, Lincoln
Total Award Duration: 24 months
Amount: $73,310

Large-scale sample surveys are usually designed to produce reliable estimates of various characteristics of interest for large geographic areas. However, for effective planning of health, social, and other services, there is a growing demand to produce similar estimates for smaller geographic areas and subpopulations, commonly referred to as small areas (or small domains). The accuracy of small-area statistics is especially crucial when the data are used to apportion government funds among various groups.

This project focuses on the development of new robust small-area estimation methods and the associated model diagnostics. The estimation methods will be developed under general multi-level models that will be useful in solving a variety of small-area estimation problems. To address an important and yet largely neglected aspect of model validation and model selection associated with multi-level models, a test using a sample-splitting technique is proposed. Splitting the sample into an estimation set and a validation set can also be used for assessing the actual power of the model. This area of research will continue to grow as social scientists find the need to use complex multi-level models to solve their problems.

The research is an outgrowth of the investigators' experiences with small-area estimation problems encountered by various federal, state, and private agencies. Importantly, this project will address a crucial practical problem underlying the work of many governmental and private institutions throughout the world. Further, this research on small-area estimation also will contribute significantly to the literature on survey sampling, generalized linear mixed models, empirical best prediction theory, linear empirical Bayes, variance component estimation, resampling methods, model diagnostics, higher-order asymptotics, and statistical computing. Because of the interests of different types of researchers (e.g., survey samplers, mainstream statisticians, social scientists), small-area estimation will remain one of the most intriguing problems in survey sampling as we advance into the next millennium.
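For readers new to the area, the sketch below shows the classical Fay-Herriot area-level model and its shrinkage predictor, a standard reference point in small-area estimation. It is offered only as background: the project's robust multi-level methods go well beyond this, and the numbers in the example are purely illustrative.

# Illustrative sketch of the classical Fay-Herriot area-level model (not the
# investigators' proposed multi-level models). For each small area i:
#     y_i = theta_i + e_i,        e_i ~ N(0, D_i)   (sampling error, D_i known)
#     theta_i = x_i * beta + v_i, v_i ~ N(0, A)     (area-level model error)
# The predictor shrinks the direct survey estimate toward the regression
# prediction:
#     theta_hat_i = gamma_i * y_i + (1 - gamma_i) * x_i * beta,
#     gamma_i = A / (A + D_i).

def eblup(y_i, x_beta_i, D_i, A):
    """Shrinkage predictor for one small area, variance components assumed known."""
    gamma = A / (A + D_i)
    return gamma * y_i + (1.0 - gamma) * x_beta_i

# Hypothetical example: a direct estimate of 12.0 with large sampling variance
# is pulled strongly toward the regression (synthetic) estimate 9.0.
print(eblup(y_i=12.0, x_beta_i=9.0, D_i=4.0, A=1.0))   # -> 9.6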

 

The Cognitive Basis of Seam Effects in Panel Surveys

9907414

Lance J. Rips
Northwestern University
Total Award Duration: 36 months
Amount: $209,654

This project investigates a type of error called the "seam effect" that occurs in national surveys and that affects the quality of their data. Some panel surveys, such as the Survey of Income and Program Participation and the Consumer Expenditure Survey, interview respondents three or four times a year; however, questions on these surveys ask for information about each of the preceding months. For example, a respondent might be interviewed in April and asked during that interview to provide information about his or her expenditures for each of the months of January, February, and March. The same respondent might be interviewed in July for expenditures during April, May, and June. Previous analyses of the data from these surveys show that month-to-month changes in respondents' answers are much greater when the data come from successive interviews than when they come from the same interview. In the example just mentioned, changes in the level of expenditures would be greater between March and April (data gathered from separate interviews) than between other adjacent months (data gathered from the same interview). Prior studies strongly suggest that these differences are not due to true changes between March and April, but are due to response error. The purpose of this project is to develop a model of this effect that can help predict its severity and that will aid in eliminating it or adjusting for it statistically.

The studies in the project investigate the seam effect using a procedure in which respondents answer questions about information supplied in the experiments themselves. In this way, the experiments control variables that might alter the size of the effect, and they monitor the respondents' accuracy. The strategy in these experiments is to vary separately factors that might affect respondents' memory for earlier information (e.g., the importance or salience of that information) and factors that might affect respondents' willingness to estimate or to guess at an answer. Because the first set of factors may have more impact on later parts of the response period and the second set of factors more impact on earlier parts, their combined influence can increase or decrease the size of the seam effect. These experiments test this hypothesis.
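The simulation sketch below, with entirely artificial data, illustrates how a seam effect is usually quantified: month-to-month change rates are compared for adjacent months reported in the same interview versus the pair of months that straddles two interviews. The constant-within-interview reporting assumed here is an exaggerated stand-in for the response error described above, not a claim about the surveys themselves.

import random

# Artificial illustration of how a seam effect is measured in panel data.
# Each simulated respondent reports a yes/no status (e.g., whether an expense
# occurred) for six months collected in two interviews: months 1-3 in one
# interview and months 4-6 in the next. The "seam" is the month 3-to-4
# transition, whose two months come from different interviews. As an
# exaggerated stand-in for response error, respondents here report one
# constant status per interview.

random.seed(1)

def simulate_respondent():
    first_wave = random.random() < 0.3     # status reported for months 1-3
    second_wave = random.random() < 0.3    # status reported for months 4-6
    return [first_wave] * 3 + [second_wave] * 3

reports = [simulate_respondent() for _ in range(10000)]

def change_rate(month_a, month_b):
    """Share of respondents whose reported status differs between two months."""
    changed = sum(r[month_a] != r[month_b] for r in reports)
    return changed / len(reports)

within = [change_rate(m, m + 1) for m in (0, 1, 3, 4)]   # pairs within one interview
seam = change_rate(2, 3)                                  # across the interview seam

print("mean within-interview change rate:", sum(within) / len(within))   # 0.0 here
print("seam (across-interview) change rate:", seam)                      # roughly 0.42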

 

Collaborative Research: Cognitive Issues in the Design of Web Surveys

9907395

Roger Tourangeau
Robert Tortora
Gallup Organization
Total Award Duration: 12 months
Amount: $114,932

9910882

Mick P. Couper
University of Michigan
Total Award Duration: 12 months
Amount: $16,016

The development of new methods for collecting survey data, including Web surveys, may be ushering in a golden age for self-administered surveys. The new methods of data collection appear to offer the power and complexity of computerization combined with the privacy of self-administration. At the same time, because they do not require an interviewer, they may reduce other types of survey error and could dramatically lower the costs of conducting surveys. Still, there is mounting evidence that different methods of self-administration can produce different results; these differences seem to reflect apparently incidental features of the interface between the respondent and the electronic questionnaire. This collaborative research tests a theory to explain these effects of the interface. The key concept in the theory is that of social presence. To the extent that the method of data collection, its setting, or the interface gives the respondent a sense of interacting with another person, it will trigger motivations similar to those triggered by an interviewer. These motivations include the desire to avoid embarrassing oneself or giving offense to someone else, as well as enhanced motivation to complete the interview. The Web offers ample resources for attracting the interest of the respondent (color, animated images), but even apparently innocuous characteristics of the interface can create a sense of social presence, producing social desirability and related response effects.

This experiment will attempt to identify the features of the interface with a Web survey that create a virtual social presence. A sample of respondents will be recruited by telephone to complete a Web survey. The experiment will vary whether or not the electronic questioner is identified by name ("Hi! I'm John") and whether or not it offers explicit reminders of prior answers. The results of this first experiment may suggest follow-up experiments to be carried out if the budget permits. The main hypothesis to be tested in any follow-up studies is that the more the interface creates a sense of social presence, the more respondents will act as if they are interacting with another human being.
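As a hypothetical sketch of the 2 x 2 design described above, the snippet below randomizes recruited respondents across the four interface conditions (questioner named or not, prior-answer reminders offered or not). The condition labels and respondent identifiers are invented for illustration and are not taken from the study protocol.

import itertools
import random

# Hypothetical sketch of balanced random assignment to the 2 x 2 interface
# design described above. Condition labels are illustrative only.
CONDITIONS = list(itertools.product(["named_questioner", "anonymous_questioner"],
                                    ["prior_answer_reminders", "no_reminders"]))

def assign(respondent_ids, seed=42):
    """Shuffle respondents, then assign them to the four cells in rotation."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)
    return {rid: CONDITIONS[i % len(CONDITIONS)] for i, rid in enumerate(ids)}

print(assign(["R001", "R002", "R003", "R004", "R005", "R006", "R007", "R008"]))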


 

 

 
