Analyzing the Data


What Are Unweighted Data?
What Are Weighted Data?
Why Are Weighted Data Important?
How Were the 2012 FedView Data Weighted?
Are Data Comparisons Significant?
What Is the Margin of Error?


What Are Unweighted Data?

The data collected from survey respondents are called raw, or unweighted, data. FedView unweighted results represent all Federal employees who completed surveys.

Data users should be aware that population estimates derived from unweighted data, for all agencies and other subgroups represented by the survey, will be biased because some subgroups of the survey population are under- or over-represented in the respondent group. Statisticians use available information about the entire survey population to develop weights for respondents. When the weights are applied correctly in data analyses, survey findings can be generalized to the entire survey population.

What Are Weighted Data?

When the data collected from survey respondents are adjusted to represent the population from which the sample was drawn, the resulting data are called weighted data. FedView weighted results represent all Federal employees covered by the survey.

The weighting process involves computing and assigning a weight to each FedView survey respondent. The weight indicates the number of employees in the survey population the respondent represents. Information about demographic characteristics, such as gender, race, supervisory status, age, and agency size, is used to develop the weights.

The weight does not change a FedView survey respondent's answer; rather, it gives appropriate relative importance to the answer.
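
As a rough illustration of how a weight works in practice, the Python sketch below compares an unweighted and a weighted percent-positive estimate for three hypothetical respondents. The answers and weights are made up for illustration and are not actual FedView data.

  # Illustrative sketch only; the answers and weights are made up.
  # Each respondent has an answer (1 = positive, 0 = not positive) and a weight
  # equal to the number of employees in the survey population he or she represents.
  respondents = [
      {"positive": 1, "weight": 40.0},   # represents 40 employees
      {"positive": 0, "weight": 120.0},  # represents 120 employees
      {"positive": 1, "weight": 90.0},   # represents 90 employees
  ]

  # Unweighted percent positive: every respondent counts equally.
  unweighted = 100 * sum(r["positive"] for r in respondents) / len(respondents)

  # Weighted percent positive: each answer is scaled by the respondent's weight.
  weighted = 100 * (
      sum(r["positive"] * r["weight"] for r in respondents)
      / sum(r["weight"] for r in respondents)
  )

  print(f"Unweighted: {unweighted:.1f}%")  # 66.7%
  print(f"Weighted:   {weighted:.1f}%")    # 52.0%

The answers themselves are unchanged; only their relative contribution to the estimate differs.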

Why Are Weighted Data Important?

Weighted data are essential in generalizing findings from survey respondents to the population covered by the survey. If weights are not used in data analyses, estimates for the agencies and subgroups covered by the survey will be biased because some population subgroups are under- or over-represented in the respondent group. The FedView survey weights adjust for the differences between the survey population and respondent group.

How Were the 2012 FedView Data Weighted?

The 2012 FedView data were weighted in three steps:

  1. A base weight was computed for each employee in the sample. The base weight is equal to the reciprocal of the employee's probability of selection.
  2. The base weights of respondents with usable surveys were increased to compensate for sample employees who did not complete and return their surveys. Demographic variables and special software for detecting relationships among variables were used during the nonresponse adjustment process.
  3. The nonresponse-adjusted weights were then modified through a process called raking. The purpose of raking is to use known information about the survey population (such as demographic characteristics) to increase the precision of population estimates. For the 2012 FedView survey, statisticians used demographic information about Federal employees to form dimension variables. Then they "raked" the data until sample distributions for the dimension variables equaled population distributions within a specified degree of precision.

Respondents' final adjusted weights indicate the number of employees in the survey population they represent.
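
As a rough illustration of step 3, the Python sketch below rakes a small set of nonresponse-adjusted weights to two hypothetical dimension variables. The respondents, weights, and population totals are assumptions made for illustration, not the actual 2012 FedView inputs, and a real application would use the survey's own dimension variables and specified degree of precision.

  # Raking (iterative proportional fitting) sketch with made-up inputs.
  # Steps 1 and 2 are assumed complete: each respondent already carries a
  # nonresponse-adjusted weight (the step 1 base weight would be
  # 1 / probability of selection).
  respondents = [
      {"gender": "M", "supervisor": "Yes", "weight": 50.0},
      {"gender": "M", "supervisor": "No",  "weight": 80.0},
      {"gender": "F", "supervisor": "Yes", "weight": 40.0},
      {"gender": "F", "supervisor": "No",  "weight": 60.0},
  ]

  # Known population totals for each dimension variable (hypothetical numbers).
  population_totals = {
      "gender":     {"M": 120.0, "F": 110.0},
      "supervisor": {"Yes": 70.0, "No": 160.0},
  }

  def rake(respondents, population_totals, tol=0.01, max_iter=100):
      """Adjust weights until weighted totals for every dimension category
      match the population totals within tol."""
      for _ in range(max_iter):
          max_gap = 0.0
          for dim, targets in population_totals.items():
              for category, target in targets.items():
                  current = sum(r["weight"] for r in respondents if r[dim] == category)
                  max_gap = max(max_gap, abs(target - current))
                  for r in respondents:
                      if r[dim] == category:
                          r["weight"] *= target / current
          if max_gap < tol:
              break
      return respondents

  for r in rake(respondents, population_totals):
      print(r)  # each final weight is the number of employees the respondent represents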

Are Data Comparisons Significant?

In general, the Governmentwide FedView survey reports contain two major types of data comparisons:

  1. Comparisons between subgroups, for example, males versus females or headquarters employees versus field employees.
  2. Comparisons of 2012 results with 2011 and 2010 results.

OPM ran a standard statistical test to determine whether the difference between positive percentages for each comparison (e.g., males versus females or 2012 versus 2011 and 2010) is statistically significant. When such a test indicates less than a 5 percent probability that a difference occurred by chance, that difference is considered to be statistically significant, i.e., it is a reliable "significant difference." In these reports, those test results are shown in the report column labeled "Significant Difference" as one or more of the following:

  • "Yes" means the difference between positive response percentages is statistically significant
  • "No" means the difference between positive response percentages is not statistically significant
  • "NA" means there were not a sufficient number of responses to perform the analysis
  • an up arrow means the increase in positive response percentages over time is statistically significant
  • a down arrow means the decrease in positive response percentages over time is statistically significant

Finding that a difference between two percentages is statistically significant does not imply the difference is meaningful. Government managers must rely on their substantive understanding of the survey topic to decide whether a statistically significant difference is important.
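
For readers who want a concrete sense of the kind of test involved, the Python sketch below applies an ordinary two-proportion z-test to made-up counts and the 5 percent rule described above. This is offered only as a simplified assumption: the reports say a standard statistical test was used, and the actual analysis would also need to account for the survey weights and design.

  # Simplified two-proportion z-test on hypothetical counts (not actual FedView data).
  import math

  def two_proportion_z_test(pos1, n1, pos2, n2):
      """Two-sided p-value for the difference between two positive-response rates."""
      p1, p2 = pos1 / n1, pos2 / n2
      p_pool = (pos1 + pos2) / (n1 + n2)                         # pooled proportion
      se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error
      z = (p1 - p2) / se
      p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
      return z, p_value

  # Hypothetical comparison: 620 of 1,000 positive versus 560 of 1,000 positive.
  z, p = two_proportion_z_test(620, 1000, 560, 1000)
  print(f"z = {z:.2f}, p = {p:.4f}")
  print("Significant Difference:", "Yes" if p < 0.05 else "No")  # 5 percent rule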

What Is the Margin of Error?

Whenever a sample rather than a census of a population is taken, sampling error occurs. Sampling error is the difference between the true population value and the population value estimated from the sample. When we interpret sample data, there is a chance that we will draw the wrong conclusion because of sampling error. However, the extent of sampling error can be estimated. That estimate is often referred to as the margin of error, a statistical measure (confidence interval) that indicates the precision of a sample estimate.

For example, assume that the margin of error is plus or minus 3 percent for a 95 percent level of confidence (those criteria are established when the sample is designed). If the percent favorable is 92 percent, a statement such as the following would be reported: "There is a 95 percent chance that the population percent favorable is between 89 percent and 95 percent." Of course, there is still a 5 percent chance that the true population value will lie outside that range.
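
The arithmetic behind that statement is shown in the Python sketch below. The second calculation uses the simple random sampling formula as an assumption for illustration; a complex design such as the FedView survey's would rely on design-based variance estimates.

  # Reproducing the interval in the example above: 92 percent favorable, +/- 3 points.
  import math

  percent_favorable = 92.0
  margin_of_error = 3.0
  low, high = percent_favorable - margin_of_error, percent_favorable + margin_of_error
  print(f"95 percent confidence interval: {low:.0f}% to {high:.0f}%")  # 89% to 95%

  # Hypothetical margin of error under simple random sampling with n = 1,000
  # respondents at a 95 percent confidence level (z = 1.96).
  def srs_margin_of_error(p_hat, n, z=1.96):
      return z * math.sqrt(p_hat * (1 - p_hat) / n)

  print(f"+/- {100 * srs_margin_of_error(0.92, 1000):.1f} percentage points")  # about 1.7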
