United States Department of Agriculture Food Safety and Inspection Service
Office of Program Evaluation, Enforcement and Review
Data Collection For Evaluation
In the data collection phase of an evaluation, evaluators compile the information needed to answer the evaluation study questions identified in the earlier design phase. During design, evaluators reviewed the evaluation questions, noting what data would need to be collected, and from whom, in order to answer each question. They then decided which approaches or techniques would best provide those data.

Many correctly think of evaluation as an investigative process and data collection as "gathering credible evidence" to indicate how the program is performing or has performed. The evaluator, with involvement from the client and stakeholders,¹ selects the approaches and techniques that are feasible within the time and budget constraints of the project and that answer the evaluation study questions. The intent is to collect information that stakeholders perceive as trustworthy and relevant. Evaluators consider respondent burden, collecting only the type and quantity of data needed to answer the study questions. They strive to collect data in a systematic, comparable, uniform, precise, and clear way, so that the resulting data are correct, complete, valid, and unbiased.

There are many ways to collect data, including surveys, document analysis, observation, interviews, and focus groups. Evaluators must choose the approach, or combination of approaches, that best answers the evaluation questions. Quantitative data (data in the form of numbers) and qualitative data (data in the form of words) each have advantages and disadvantages, and a good option is to use both, as they can complement each other. Typically, evaluators collect qualitative data to add depth and a fuller understanding of a program's complexities to the quantitative information that defines the program in more straightforward terms. For these reasons, data collection deserves careful consideration during the evaluation design phase.

Approaches
A brief description is provided below for the main Program Evaluation and Improvement Staff (PEIS) data collection approaches:

Documents are very useful in program evaluation. Existing (archival) records, when accessible and accurate, often provide insights that cannot be observed or obtained in any other way. Examining records requires that the data collector have a very clear idea of what information is needed, because there will likely be plenty of other interesting information to distract an unorganized reviewer.

Surveys use data collection instruments, such as questionnaires, to collect data from a sample of the relevant population or from the entire population (a census). Surveys are used extensively in evaluation - perhaps overused - because of their flexibility to gather data on almost any issue. When done correctly, surveys are an efficient and accurate means of collecting data, but they can be difficult to construct and may yield low participation (a low response rate). A low response rate undermines the reliability and validity of the information, because the evaluator does not know whether the non-respondents would have answered differently; a non-respondent analysis is therefore often important to determine who did and did not respond and how the two groups differ.
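To make the response-rate concern concrete, the short sketch below (in Python, purely illustrative) computes a response rate from hypothetical survey records and compares a basic characteristic of respondents and non-respondents, a rudimentary non-respondent analysis. The sample data, field names, and 60 percent threshold are assumptions for the example only and do not reflect any FSIS or PEIS standard.

    # Illustrative sketch only: hypothetical data, field names, and threshold.
    from collections import Counter

    # Hypothetical sample frame: each record notes a district and whether a
    # completed questionnaire was returned.
    sample = [
        {"district": "Northeast", "responded": True},
        {"district": "Northeast", "responded": False},
        {"district": "Southeast", "responded": True},
        {"district": "Southeast", "responded": True},
        {"district": "Western",   "responded": False},
        {"district": "Western",   "responded": False},
    ]

    respondents = [r for r in sample if r["responded"]]
    response_rate = len(respondents) / len(sample)
    print(f"Response rate: {response_rate:.0%}")

    # Flag a low response rate (threshold chosen arbitrarily for illustration).
    if response_rate < 0.60:
        print("Low response rate: findings may not generalize; "
              "examine how non-respondents differ from respondents.")

    # Compare the distribution of a known characteristic (district) between
    # respondents and non-respondents to see whether any group is under-represented.
    by_group = {
        "respondents": Counter(r["district"] for r in sample if r["responded"]),
        "non-respondents": Counter(r["district"] for r in sample if not r["responded"]),
    }
    for group, counts in by_group.items():
        print(group, dict(counts))

In practice, evaluators would compare respondents and non-respondents on whatever characteristics of the sample frame are available (such as region, establishment size, or program type) to judge how representative the respondents are.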

Observations can be useful in determining how the program is implemented and provide opportunities for identifying unanticipated outcomes. Observations can answer questions about whether the program is being delivered and operated as planned. By directly observing operations and activities, the evaluator can enter into and understand the situation and context. However, observation (obtrusive or unobtrusive) can be expensive and time consuming, and depending on the situation, the observer may need to be a content expert to accurately interpret what is observed.

Interviews are essentially conversations between the evaluators and their respondents. An interview is selected when interpersonal contact is important, when opportunities for follow-up of interesting comments are desired, when the topic is complex and requires explanation and interaction, or when cultural, educational, or language barriers are present. The use of interviews as a data collection method assumes that the participants' perspectives are meaningful and knowable. The quality of information obtained is largely dependent on the interviewer's skills and personality.²

Groups (such as focus groups) combine elements of both observation and interviewing. A focus group is an interview with a gathering of 8 to 12 people that uses group interaction to generate data and insights unlikely to emerge in individual interviews. The technique includes observation of group dynamics and offers insight into the respondents' behaviors and attitudes. Originally used as a market research tool to learn the appeal of various products, the focus group method has been adopted by other fields as a way to gather data on a given topic.

Resource Management
"Collect only the information you are going to use, and use all the information you collect"³

The possibilities for gathering evidence for an evaluation are endless, but unfortunately, resources are not. Without a coherent plan to answer study questions, people often tend to collect too much data. By collecting everything from everybody, they hope they will find something they can use. This wastes resources and is cumbersome to manage. By focusing the data collection, the evaluator can balance the breadth and depth of the information obtained and achieve results that are practical and within the budget constraints of the project. Since all types of data have limitations, evaluators will often select multiple methods to obtain information that conveys a well-rounded picture of the program. Multiple approaches and techniques that "triangulate" data from several sources in several ways can improve overall accuracy and are often seen by the evaluation's clients as more credible than data from one source.

PEIS staff are experienced in multiple approaches to data collection and analysis and can help ensure the most efficient and effective data collection for your evaluation.


1 Stakeholders are persons who have an interest in the project being evaluated or in the results of the evaluation. (W. K. Kellogg Foundation. Evaluation Handbook, 1998, page 48.)
2 Patton, M.Q. Qualitative Evaluation and Research Methods, Second Edition, Newbury Park, CA: Sage Publications, 1990.
3 W. K. Kellogg Foundation. Evaluation Handbook, 1998, page 69.
