Introduction to Survey Statistics

Through sample surveys, we can use what we learn from samples of a population – in this case, fishermen and fisheries – to understand the characteristics of the whole population. Sampling and estimation can be extremely complex, but we encounter results from sampling in our everyday lives through things such as political polling, health statistics, and television ratings.

Part of the goal of MRIP is to make science as clear and transparent as possible. Below, we outline the fundamental mathematical concepts behind survey statistics, including sample sizes, weighting, percent standard error (PSE), and the two main sources of error that can occur during a sample survey: sampling error and non-sampling error. More information is available on our materials and resources page. We also answer questions from our constituents through our e-newsletter. If you would like to be added to the distribution list, please contact us at NMFS.MRIP@noaa.gov.

 

Survey Statistics Overview

Survey Design

There are many factors that must be considered when designing a complex survey. Elements of the design can impact the efficacy, budget, and precision of the statistical output. To keep these factors in balance, several adjustments can be made to the survey design, including stratification, clustering, and sample size.

Sample Selection

Once the survey design is complete, a sample must be selected that adheres to your design. Ultimately, the goal of sample selection is to obtain a sample representative of the entire population of interest. Having a representative sample will reduce the error, specifically the sampling error, inherent in all estimates derived from sample data.

Data Collection

After selecting your sample, it's time to field your survey and begin collecting data. 

Estimation

Once a survey has been fielded and the data have been cleaned and analyzed, the next step is to create statistically valid estimates. The estimation process must take the survey design into account to ensure that all units in the sample are properly represented in the final point estimate. To do this, weights are applied, and each point estimate is paired with an associated measure of precision, the percent standard error (PSE), which helps convey what we don't know from the sample.

For more information about the process used in developing statistics from surveys, please continue exploring this page.

 

Sampling

A sample survey uses data from a subset of a population – the sample – to estimate characteristics of the whole population.

There are two broad categories of sampling: probability sampling and non-probability sampling. MRIP surveys utilize probability, or random, samples to estimate population values. In probability sampling, each member of the target population has a known, non-zero probability of being included in the sample. Generally, samples are randomly selected from a comprehensive list of population members, commonly referred to as the sample frame. Different probability sampling techniques, such as simple random sampling, stratification, and cluster sampling, may be used to improve the efficiency and precision of a sampling design. Each of these sampling techniques, if implemented properly, will result in unbiased samples that are representative of the target population.
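To make the difference between these techniques concrete, here is a minimal sketch in Python. The frame of 1,000 anglers, the two strata, and the sample sizes are invented for illustration; this is an assumption-laden example, not MRIP's sampling code.

```python
# A minimal sketch of simple random and stratified sampling.
# The frame, strata, and sample sizes below are hypothetical.
import random

random.seed(42)

# Hypothetical sample frame: 1,000 anglers split into two strata.
frame = [{"angler_id": i, "stratum": "shore" if i < 700 else "boat"}
         for i in range(1000)]

# Simple random sampling: every angler has the same known inclusion
# probability (50 out of 1,000).
srs_sample = random.sample(frame, k=50)

# Stratified sampling: draw independently within each stratum, so each
# angler's inclusion probability is still known (35/700 shore, 15/300 boat).
shore = [a for a in frame if a["stratum"] == "shore"]
boat = [a for a in frame if a["stratum"] == "boat"]
stratified_sample = random.sample(shore, k=35) + random.sample(boat, k=15)

print(len(srs_sample), len(stratified_sample))  # 50 50
```

Because the inclusion probabilities are known in both cases, either sample can be expanded back to the full frame without bias; the stratified draw simply guarantees that both groups appear in the sample in fixed numbers.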

In non-probability sampling, the relationship between the sample and the target population is unknown. Consequently, it is not possible to know if a sample is unbiased. Examples of non-probability samples include convenience samples, quota samples and volunteer or opt-in samples in which the sample members self-select into the survey. Generally, non-probability samples are not used to estimate population values. 

 

Error

All surveys include some amount of error. Survey errors are classified into one of two types: sampling error and non-sampling error. Collectively, sampling and non-sampling errors determine the accuracy of a survey estimate. Properly designed surveys attempt to minimize both types of error through careful planning, testing, and analysis. The evaluation of survey errors should be an ongoing process throughout the life cycle of any survey.

Sampling Error

A sample does not include all members of a population. Consequently, an estimate based on a sample is likely to differ from the actual population value that would result from a complete census of the population. Sampling error is inherent in all sample statistics and is a result of random variation among samples. The size of sampling error depends upon the sample size, the sample design and the natural variability within the population. As a general rule, increasing the sample size reduces the sampling error. 

The most commonly reported measure of sampling error is the “standard error,” which is a measure of the spread of independent sample estimates around a true population value. In MRIP, sampling error is reported as the percent standard error, or PSE, which expresses the standard error as a percentage of an estimate. The lower the PSE, the greater the confidence that the estimate is close to the true population value.
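As a rough illustration of how these two quantities relate, the sketch below applies the textbook formulas for a simple random sample to a small set of invented catch values. It is a simplified example under those assumptions, not MRIP's estimation procedure.

```python
# A minimal sketch of a standard error and PSE for a simple random sample.
# The catch values are invented for illustration.
import statistics

catches = [0, 2, 1, 4, 0, 3, 2, 5, 1, 2]   # fish per sampled angler-trip
n = len(catches)

mean_catch = statistics.mean(catches)              # point estimate: mean catch
std_error = statistics.stdev(catches) / n ** 0.5   # standard error of the mean
pse = 100 * std_error / mean_catch                 # SE as a percent of the estimate

print(f"estimate = {mean_catch:.2f}, SE = {std_error:.2f}, PSE = {pse:.1f}%")
# estimate = 2.00, SE = 0.52, PSE = 25.8%
```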

Non-sampling Error

Non-sampling error includes any type of error that can impact an estimate other than sampling error. Non-sampling error that results in a systematic difference between a survey estimate and the “true” population value is commonly referred to as bias. Non-sampling error can arise from insufficient coverage of the target population, inaccurate response or measurement, nonresponse or data processing errors. 

  • Coverage error: Coverage error occurs when members of the target population are omitted, duplicated or wrongly included on the sample frame.  Omissions from the sample frame, or undercoverage, will result in biased estimates if those who are excluded have different characteristics from those who are included. Overcoverage resulting from duplication or the inclusion of out-of-scope units can result in bias and sampling inefficiencies.

  • Measurement or response error: Measurement error occurs when respondents provide incorrect responses to survey questions. Measurement error can result from poorly worded or ambiguous survey questions, faulty recollection of activities or events (recall error), inconsistent delivery of survey questions by interviewers (interviewer error), or intentional misreporting. 

  • Nonresponse error: Nonresponse error occurs when individual sample members are unwilling or unable to participate in the survey. This will result in bias if nonrespondents have different characteristics than respondents. 

  • Data processing error: Data processing errors can occur during preparation of the survey data. Examples include data entry errors, coding errors and data editing errors. 
 

Sample Sizes

The “sample size” is the number of units you measure in a sample survey. For example, if you have a bag of 100 black and white marbles, and you pull out 10 at random to estimate the number of each color in the bag, your sample size is 10. With MRIP, we sample angler-trips from the entire population of saltwater recreational anglers. 

In survey statistics, there are two very important things to understand about sample sizes. The first is that the larger the sample you draw, the more precise your estimate will be. The second is that, when it comes to determining precision, it does not matter how large the population you’re sampling from is. Although this often strikes people as counterintuitive, your sample of 10 marbles will give you essentially the same level of precision whether the bag contains 100,000 marbles, 1 million, or 100 million. That’s because, as long as the population is much larger than the sample (i.e., you’re using a survey rather than a census), precision is calculated from the differences between the measured value of each sampled unit and the point estimate calculated from the sample. The actual formula for calculating precision is more involved than that (see the PSE tab), but the major takeaway is that this property is what enables a public opinion pollster to predict the votes of millions of people from a sample of just hundreds of voters.
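One way to see this is with a small simulation. The sketch below is illustrative only and uses invented populations, not MRIP data: it repeatedly draws 100 marbles from bags of very different sizes and shows that the spread of the resulting estimates barely changes.

```python
# A minimal simulation: with a fixed sample size, the spread of sample
# estimates is nearly the same regardless of population size.
# The populations below are invented (30% black marbles in each bag).
import random
import statistics

random.seed(1)

def spread_of_estimates(population_size, sample_size=100, draws=2000):
    n_black = population_size * 3 // 10
    population = [1] * n_black + [0] * (population_size - n_black)
    estimates = []
    for _ in range(draws):
        sample = random.sample(population, sample_size)
        estimates.append(sum(sample) / sample_size)  # estimated share of black marbles
    return statistics.stdev(estimates)               # empirical standard error

for pop_size in (1_000, 100_000, 1_000_000):
    print(pop_size, round(spread_of_estimates(pop_size), 3))
# Each population size prints roughly the same spread (about 0.04 to 0.05).
```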

Obviously, increasing sample sizes comes with tradeoffs: the more you invest in sampling, the less you have for other science and management activities. In MRIP, as we develop, test, and certify improvements to our surveys to make sure they are free of potential bias, we are working with our partners and stakeholders to determine the level of sampling needed to provide the precision their science and management needs require, which varies with location, species, time of year, amount of fishing activity, and other factors.

 

Weighting

“Weighting” is the statistical method used to make sure each sample unit (fishing trip, measured fish, etc.) is properly represented when calculating a final estimate.

For instance, picking up on the example above, if we had a bag of 100 assorted black and white marbles and drew a random sample of 10, we could say that each sampled marble represents 10 of the 100 marbles in the bag. In statistical terms, each sampled marble has an equal “weight” of 10: it represents itself plus 9 others not sampled from the bag.

However, let’s say we had two bags of 100 marbles. If we drew 10 from Bag 1 and 20 from Bag 2, we could not simply add up the results for all 30 marbles to make an estimate. That’s because the marbles from Bag 1 carry a weight of 10, but each marble from Bag 2 represents only 5 of the 100 marbles in its bag, for a weight of 5. A sampled marble is twice as likely to have come from Bag 2 as from Bag 1, so if we treat them all equally, we’re assuming that the contents of the two bags are the same. And as discussed above, any time we make untested assumptions, we’re likely to overlook bias.

In sampling, each one of these bags is called a “stratum” (i.e., a subgroup). To get an accurate estimate, you must apply the weight of each stratum to account for potential differences among groups.
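Returning to the two-bag example, the sketch below shows the arithmetic of a weighted estimate and how it differs from simply pooling all 30 marbles. The sampled colors are invented, and this is only an illustration of the idea, not MRIP's estimation code.

```python
# A minimal sketch of the two-bag (two-stratum) weighting example.
# Bag 1: 10 marbles sampled out of 100, so each carries a weight of 100/10 = 10.
# Bag 2: 20 marbles sampled out of 100, so each carries a weight of 100/20 = 5.
# The sampled colors (1 = black, 0 = white) are invented.
bag1_sample = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
bag2_sample = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0]

weighted = 10 * sum(bag1_sample) + 5 * sum(bag2_sample)        # weighted estimate of black marbles
unweighted = (sum(bag1_sample) + sum(bag2_sample)) / 30 * 200  # treats all 30 marbles equally

print(weighted)    # 90 black marbles estimated across the two bags
print(unweighted)  # 80.0 -- ignoring the weights gives a different, biased answer
```

The unweighted figure is pulled toward Bag 2 simply because Bag 2 was sampled more heavily, which is exactly the distortion the weights correct for.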

Along with making sure our estimates are accurate, weighting has another purpose in MRIP. As long as our design is free of bias, and we know what weight to apply to each sample unit, we can choose to spend more time sampling specific places, times of day, or species that might be important to scientists or managers without skewing our results. 

MRIP Guide to Weighting 

One of the goals of MRIP is to be completely transparent about the methods we use to estimate recreational catch, why we use them, and how they work. In this presentation, we look at the process of weighting data to produce accurate estimates — and to help make the most of our limited sampling resources. 


PSE

All survey estimates include some amount of statistical error and uncertainty. Being able to decipher this error is critical to understanding a catch estimate. 

Every MRIP estimate is made up of two parts: the point estimate and the percent standard error (PSE). The point estimate is the estimated fishing effort, or the number of fish caught, at a given place over a specified period of time. When using MRIP queries to examine the data, you will see a number on a table or a point on a graph that indicates the point estimate. Even though it is a specific number, it’s important to remember that this number is an estimate; it is impossible to have 100% certainty with any type of sample survey. To indicate how confident we are about a point estimate, we use the PSE.

The PSE is similar to the “margin of error” that is frequently reported with public opinion surveys. It is a measure of how precise an estimate is: the lower the PSE, the greater the precision. Accurately calculating PSEs is important because a full understanding of what we don’t know – and how we can better fill gaps in our knowledge – is an essential component of making prudent, sustainable fisheries management decisions.
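As a rough illustration of how a PSE can be read, the sketch below converts a hypothetical point estimate and PSE into an approximate 95% confidence interval using a standard normal approximation. The numbers and the approximation are assumptions for illustration, not published MRIP figures or methods.

```python
# A minimal sketch: reading a PSE as an approximate margin of error.
# The point estimate and PSE below are hypothetical.
point_estimate = 120_000   # estimated number of fish caught
pse = 25                   # percent standard error

standard_error = point_estimate * pse / 100   # PSE expresses SE as a percent of the estimate
margin_of_error = 1.96 * standard_error       # ~95% confidence under a normal approximation

low = point_estimate - margin_of_error
high = point_estimate + margin_of_error
print(f"Estimate: {point_estimate:,} (95% CI roughly {low:,.0f} to {high:,.0f})")
# Estimate: 120,000 (95% CI roughly 61,200 to 178,800)
```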

We know that the more data we collect, the higher our precision will be. However, there are trade-offs associated with increasing the number of anglers we sample. MRIP has funded several projects that study ways to increase precision while balancing it against other considerations, such as data timeliness and accuracy. To learn more about these efforts, visit our Projects page.