National Science Foundation Division of Science Resources Statistics

Raymond M. Wolfe,
Project Officer
(703) 292-7789
Research and Development Statistics Program

Appendix A. Technical Notes and Technical Tables

Survey Methodology

Much of the information for this appendix was provided by the Manufacturing and Construction Division of the U.S. Bureau of the Census, which collected and compiled the survey data. Copies of the technical papers cited can be obtained from NSF's Research and Development Statistics Program in the Division of Science Resources Statistics. The first part of this appendix focuses on recent changes to the survey methodology; major historical changes are discussed later in Comparability of Statistics. More detailed historical information is available from individual annual reports (http://www.nsf.gov/statistics/industry/).

Reporting Unit

The reporting unit for the Survey of Industrial Research and Development is initially the company,[3] defined as a business organization of one or more establishments under common ownership or control. Some companies, at their own request, comprise multiple reporting units. These reporting units are combined into a single company record at the time of tabulation.

Frame Creation

The Business Register (BR), a Bureau of the Census database containing industry, geographic (state), employment, and payroll information, was the foundation from which the frame used to select the 2004 survey sample was created (see table A-1 for population and sample sizes). For companies with more than one establishment, data were summed to the company level, and the resulting company record was used to select the sample and to process and tabulate the survey data.

After data were summed to the company level, each company then was assigned a single North American Industry Classification System (NAICS)[4] code based on payroll. The method used followed the hierarchical structure of the NAICS. The company was first assigned to the economic sector, defined by a 2-digit NAICS code, or combination thereof, representing manufacturing, mining, trade, etc., that accounted for the highest percentage of its aggregated payroll. Then the company was assigned to a subsector, defined by a 3-digit NAICS code, that accounted for the highest percentage of its payroll within the economic sector. Finally, the company was assigned a 4-digit NAICS code within the subsector, again based on the highest percentage of its aggregated payroll within the subsector. Assignment below the 4-digit level was not done because the 4-digit level was the lowest level needed to guarantee publication-level industry classification.
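
The hierarchical assignment described above can be sketched in code. The payroll figures and the `assign_naics` helper below are hypothetical illustrations, not part of the survey's actual processing system:

```python
from collections import defaultdict

def assign_naics(payroll_by_code):
    """Walk the NAICS hierarchy, keeping at each level (2-, 3-, then
    4-digit) the code prefix with the largest share of aggregated
    payroll. Illustrative sketch of the assignment rule only."""
    chosen = ""
    for width in (2, 3, 4):  # sector -> subsector -> 4-digit industry
        totals = defaultdict(float)
        for code, payroll in payroll_by_code.items():
            if code.startswith(chosen):
                totals[code[:width]] += payroll
        chosen = max(totals, key=totals.get)
    return chosen

# Hypothetical company: $75M of payroll under manufacturing codes
# (33xx) versus $30M under information (51xx), so sector 33 wins
# first, then subsector 334, then 4-digit industry 3341.
print(assign_naics({"3341": 40.0, "3345": 35.0, "5112": 30.0}))  # 3341
```

Note how an establishment-level plurality (no single 4-digit code holds a majority) still yields a stable company code, because the choice is narrowed level by level rather than made in one step.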

Frame Partitioning

For the 2004 survey, the frame was partitioned into four groups: (1) the top 300 R&D-performing companies still in the frame from the 2003 survey year, (2) other companies known to have conducted R&D in any of the previous five survey years, (3) companies that reported zero R&D in all of the previous five survey years, and (4) companies for which information about the extent of R&D activity was uncertain. There were 288 companies in the first group, 11,444 in the second, 81,228 in the third, and 2,008,489 in the fourth, for a total of 2,101,449 companies.
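
The four-way split amounts to a simple classification rule. The record fields below (`top300_2003`, `rd_last5`) are invented names standing in for the frame's actual variables:

```python
def partition(record):
    """Assign a frame record to one of the four 2004 partitions.
    Field names are hypothetical; the rules follow the text above."""
    if record.get("top300_2003"):
        return 1  # top 300 R&D performers still in the frame from 2003
    history = record.get("rd_last5")  # R&D in the previous 5 survey years
    if history is None:
        return 4  # extent of R&D activity uncertain
    if any(amount > 0 for amount in history):
        return 2  # other known R&D performer
    return 3  # reported zero R&D in all five years

print(partition({"rd_last5": [0, 0, 0, 0, 0]}))  # 3
```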

Defining Sampling Strata

For the first and third partitioned groups the sampling strata were defined corresponding to the 4-digit industries and groups of industries for which statistics were developed and published. There were 27 manufacturing and 22 nonmanufacturing strata in each of these partitioned groups. The second partitioned group was divided into two strata, one manufacturing and the other nonmanufacturing.

Identifying Arbitrary Certainty Companies

Arbitrary certainty companies were companies arbitrarily selected with certainty, independent of relative standard error (RSE) constraints. The criteria defining an arbitrary certainty company depended on the partitioned group the company was in. Companies in the first partitioned group that also had prior R&D of $3 million or more were arbitrary certainties. Companies in the third partitioned group that were also in the top 50 of their strata by payroll or in the top 50 of their state by payroll were arbitrary certainties.

Probability Proportionate to Size

The distribution of companies by R&D in the first partitioned group or by payroll in the third partitioned group was skewed as in earlier frames. Because of this skewness, a fixed sample probability proportionate to size (pps) method remained the appropriate selection technique for these partitioned groups. That is, with the pps method large companies had higher probabilities of selection than did small companies. The fixed sample size methodology has been replicated for every survey year since the 1998 survey.

Companies in the first partitioned group received a measure of size equal to the most recent reported positive R&D expenditure. Companies in the third partitioned group received a measure of size equal to their company payroll. RSE constraints by industry and by state were imposed separately in the first and third partitioned groups and the company received a probability of selection for each industry in which it had activity, as well as each state. The company's final probability was the maximum of these industry and state probabilities.
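The probability assignment described above can be sketched as follows. The stratum totals and target sample sizes are hypothetical; the actual survey imposed its RSE constraints through a more elaborate allocation:

```python
def pps_probability(size, stratum_total, sample_size):
    """Probability proportionate to size within one stratum, capped
    at 1 (certainty selection)."""
    return min(1.0, sample_size * size / stratum_total)

def final_probability(measure, industry_strata, state_strata):
    """A company's final probability is the maximum of its pps
    probabilities over every industry and state stratum in which it
    has activity. Each stratum is a (total size, sample size) pair."""
    probs = [pps_probability(measure, total, n)
             for total, n in industry_strata + state_strata]
    return max(probs)

# Hypothetical company with prior R&D of $5 million, one industry
# stratum and one state stratum.
p = final_probability(5.0, [(500.0, 40)], [(800.0, 50)])
print(p)  # max(40*5/500, 50*5/800) = max(0.4, 0.3125) = 0.4
```

Taking the maximum across the industry and state constraints guarantees the company is sampled often enough to satisfy whichever domain demands the most precision.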

Simple Random Sampling

The second partitioned group was split into two strata, manufacturing and nonmanufacturing. Each stratum was sampled using simple random sampling (srs). The use of srs implied that each company within a stratum had an equal probability of selection. Companies in the manufacturing stratum received a probability of selection of roughly 0.01. Companies in the nonmanufacturing stratum received a probability of selection of roughly 0.004.
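
A minimal srs sketch, with a hypothetical company list and seed (the rates of roughly 0.01 and 0.004 come from the text):

```python
import random

def srs_sample(companies, p, seed=2004):
    """Select each company independently with the same probability p,
    so every company in the stratum has an equal chance of selection.
    The seed is for reproducibility of the illustration only."""
    rng = random.Random(seed)
    return [c for c in companies if rng.random() < p]

manufacturing = [f"company_{i}" for i in range(10_000)]
sample = srs_sample(manufacturing, 0.01)
print(len(sample))  # close to 100 in expectation (0.01 * 10,000)
```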

Sample Stratification and Relative Standard Error Constraints

The particular sample selected was one of a large number of samples of the same type and size that by chance might have been selected. Statistics resulting from the different samples would differ somewhat from each other. These differences are represented by estimates of sampling error or variance. The smaller the sampling error, the less variable the statistic. The accuracy of the estimate, that is, how close it is to the true value, is also a function of nonsampling error.

Controlling Sampling Error. Historically, it has been difficult to achieve control over the sampling error of survey estimates. Efforts were confined to controlling the amount of error due to sample size variation, but this was only one component of the overall sampling error. The other component depended on the correlation between the data from the sampling frame used to assign probabilities (namely R&D values either imputed or reported in the previous survey) and the actual current year reported data. The nature of R&D is such that these correlations could not be predicted with any reliability. Consequently, precise controls on overall sampling error were difficult to achieve.

Sampling Strata and Standard Error Estimates. The constraints used to control the sample size in each stratum were based on a universe total that, in large part, was improvised. That is, as previously noted, a prior R&D value for the first partitioned group and payroll for the third partitioned group were assigned to companies in their respective groups. Assignment of sampling probability was nevertheless based on this distribution. The presumption was that actual variation in the sample design would be less than that estimated, because many of the sampled companies in the third partitioned group have true R&D values of zero, not the widely varying values that were imputed using total payroll as a predictor of R&D. Previous sample selections indicate that in general this presumption held, but exceptions have occurred when companies with large sampling weights have reported large amounts of R&D spending. See table A-2 for a list by industry of the standard error estimates for selected items and table A-3 for a list of the standard error estimates of total R&D by state.

Nonsampling Error. In addition to sampling error, estimates are subject to nonsampling error. Errors are grouped in five categories: specification, coverage, response, nonresponse, and processing. For detailed discussions on the sources, control, and measurement of each of these types of error, see U.S. Bureau of the Census (1994b and 1994f).

Sample Size

The parameters set to control sampling error discussed above resulted in sample sizes of 288 companies from the first frame partition, 8,673 companies from the second frame partition, 666 companies from the third frame partition, and 22,373 companies from the fourth frame partition. The overall final sample consisted of 32,000 companies. This total included an adjustment to the sample size based on a minimum probability rule and changes in the operational status of some companies.

Minimum Probability Rule. A minimum probability rule was imposed for both the first and third partitions. As noted earlier, probabilities of selection proportionate to size were assigned to each company, where size was the prior reported R&D or payroll value assigned to each company. Selected companies received a sample weight that was the inverse of their probability. Selected companies that ultimately report R&D expenditures vastly larger than their assigned values can have adverse effects on the statistics, which were based on the weighted value of survey responses. In order to minimize these effects on the final statistics, a minimum probability rule was imposed to control the maximum weight of a company. If the probability based on company size was less than the minimum probability, then it was reset to this minimum value. The consequence of raising these original probabilities to the specific minimum probability was to raise the final sample size.
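
The rule itself is a one-line floor on the probability; the probability values below are hypothetical:

```python
def apply_minimum_probability(p, p_min):
    """Raise a size-based selection probability to the floor p_min,
    which caps the sampling weight (the inverse probability) at
    1/p_min. The floor value is hypothetical."""
    return max(p, p_min)

# A small company whose size-based probability would imply a weight
# of 2,000 is instead capped at weight 100.
p = apply_minimum_probability(0.0005, 0.01)
print(p, 1 / p)  # 0.01 100.0
```

Because more companies now carry probabilities at or above the floor, the expected number of selections, and hence the final sample size, rises.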

Changes in Operational Status. Between the time that the frame was created and the survey was prepared for mailing, the operational status of some companies changed. That is, they were merged with or acquired by another company, or they were no longer in business. Before preparing the survey for mailing, the operational status was updated to identify these changes. As a result, the number of companies mailed a survey questionnaire was somewhat smaller than the number of companies initially selected for the survey.

Weighting, Maximum Weights, and Probabilities of Selection

Sample weights were applied to each company record to produce national estimates. Within the first partition of the sample, consisting of known R&D performers (positive R&D expenditures), the maximum sample weight was roughly 20. For the second partition, consisting of companies reporting zero R&D expenditures, the maximum sample weight was roughly 100 for companies classified in manufacturing and 250 for those classified in nonmanufacturing. For the third partition, consisting of companies with uncertain R&D activity, the maximum sample weight was roughly 100 for companies classified in manufacturing and 250 for those classified in nonmanufacturing.
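
The weighting step reduces to a weighted sum of company responses. The weight and R&D figures below are hypothetical:

```python
def national_estimate(responses):
    """National total as the weighted sum of company responses, each
    weight being the inverse of the company's selection probability."""
    return sum(weight * value for weight, value in responses)

# (sample weight, reported R&D in $millions): a certainty company,
# a pps-selected performer near the maximum weight of 20, and a
# high-weight selection from an uncertain-activity stratum.
responses = [(1.0, 500.0), (20.0, 3.0), (100.0, 0.5)]
print(national_estimate(responses))  # 500 + 60 + 50 = 610.0
```

The example also shows why the maximum weights matter: a single high-weight respondent reporting even modest R&D contributes a large weighted amount to the national total.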

Survey Questionnaires

Two questionnaires are used each year to collect data for the survey. Known large R&D performers are sent a detailed survey form, Form RD-1.[5] The Form RD-1 requests data on sales or receipts; total employment; employment of scientists and engineers; expenditures for R&D performed within the company with federal funds and with company and other funds; character of work (basic research, applied research, and development); company-sponsored R&D expenditures in foreign countries; R&D performed by others; R&D performed in collaboration with others; federally funded R&D by contracting agency; R&D costs by type of expense; R&D costs by technology area; domestic R&D expenditures by state; energy-related R&D; and foreign R&D by country. Because companies receiving the Form RD-1 have participated in previous surveys, computer-imprinted data reported by the company for the previous year are supplied for reference. Companies are encouraged to revise or update the data for a prior year if they have more current information; however, previously published prior-year statistics were revised only if large disparities were reported.

Small R&D performers and firms included in the sample for the first time are sent Form RD-1A. This questionnaire collects the same information as Form RD-1 except for five items: federal R&D support to the firm by contracting agency, R&D costs by type of expense, domestic R&D expenditures by state, energy-related R&D, and foreign R&D by country. It also includes a screening item that allows respondents to indicate that they do not perform R&D. No prior-year information is supplied because the majority of companies that receive the Form RD-1A were not surveyed in the previous year.

Recent Survey Form Content Changes

For 2004, some item headings and numbers changed compared with the 2003 survey questionnaires. The five mandatory items, total R&D expenditures, federally funded R&D, net sales, total employment (which are included in the Census Bureau's annual mandatory statistical program), and the distribution of R&D by state, are now question 5d (columns 3 and 1), question 2, question 3, and question 15, respectively. Some item response categories have been added, and the wording of some has been changed for clarification. Question 6, which asks for projected R&D costs for 2005, has been expanded to include columns for reporting the projected cost of federally funded R&D. Question 8, which asks for the type of outside organization that performed R&D for the company, has been expanded to include response categories for federal agencies or laboratories and state government agencies or laboratories. Question 9, which asks for the cost of R&D performed outside the United States by percentage of ownership of the organization performing the R&D, was expanded to include clarifying instructions. Question 10, which asks for the country location of R&D performed outside the United States, was expanded to include response categories for China, India, Ireland, Israel, Italy, Singapore, and Sweden. Question 17, which asks for the type of outside organization with which the company collaborated in the performance of R&D, has been expanded to include a response category for state government agencies.

Number of Survey Questionnaires Sent

For the 2004 survey, Form RD-1 was mailed to companies that reported R&D expenditures of $3 million or more in the 2003 survey. In total, 3,393 companies were mailed Form RD-1 and 28,491 were mailed Form RD-1A. Both survey questionnaires and the instructions provided to respondents are reproduced in appendix B, Survey Documents.

Followup for Survey Nonresponse

The 2004 survey questionnaires were mailed in February 2005. Recipients of Form RD-1A were asked to respond within 30 days, while Form RD-1 recipients were given 60 days. A follow-up questionnaire and letter were mailed to RD-1A recipients every 30 days (up to a total of five times), if their completed survey form had not been received. After questionnaire and letter followups, three additional automated telephone followups were conducted for the remaining delinquent RD-1A recipients.

A letter was mailed to Form RD-1 recipients 30 days after the initial mailing, reminding them that their completed survey questionnaires were due within the next 30 days. A second questionnaire and reminder letter were mailed to Form RD-1 recipients after 60 days. Two additional followups (one mail, one telephone) were conducted for delinquent Form RD-1 recipients not ranked among the 300 largest R&D performers based on total R&D expenditures reported in the previous survey. For the 300 largest performers, a special telephone followup was used to encourage response. Table A-4 shows the number of companies in each industry or industry group that received a survey questionnaire, the type of form received, and the percentage that responded to the survey.

Imputation for Item Nonresponse

For various reasons, many firms chose to return the survey questionnaire with one or more blank items.[6]  For some firms, internal accounting systems and procedures may not have allowed quantification of specific expenditures. Others may have refused to answer any voluntary questions as a matter of company policy.

When respondents did not provide the requested information, estimates for the missing data were made using imputation algorithms. In general, the imputation algorithms computed values for missing items by applying the average percentage change for the target item in the nonresponding firm's industry to the item's prior-year value for that firm, reported or imputed. This approach, with minor variation, was used for most items.[7]  Table A-5 contains imputation rates for the principal survey items.
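
The general algorithm can be sketched as follows; the figures and the `impute_item` helper are hypothetical:

```python
def impute_item(prior_value, industry_reports):
    """Impute a missing current-year value by applying the average
    current-to-prior ratio among firms in the same industry that
    reported both years to the nonrespondent's prior-year value
    (reported or previously imputed). A sketch of the general
    approach only."""
    ratios = [current / prior for prior, current in industry_reports
              if prior]
    average_change = sum(ratios) / len(ratios)
    return prior_value * average_change

# Industry reporters grew 10% and 30%, an average change of 20%,
# so a nonrespondent with a prior value of 50 is imputed at 60.
pairs = [(100.0, 110.0), (200.0, 260.0)]
print(impute_item(50.0, pairs))  # roughly 60.0
```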

Response Rates and Mandatory/Voluntary Reporting

Survey reporting requirements divided survey items into two groups: mandatory and voluntary. Responses to five data items were mandatory; responses to the remaining items were voluntary. The mandatory items were total R&D expenditures, federal R&D funds, net sales, total employment (which are included in the Census Bureau's annual mandatory statistical program), and the distribution of R&D by state. During the 1990 survey cycle, NSF conducted a test of the effect of reporting on a completely voluntary basis to determine whether combining both mandatory and voluntary items on one survey questionnaire influences response rates. For this test, the 1990 sample was divided into two panels of approximately equal size. One panel, the mandatory panel, was asked to report as usual on four mandatory items with the remainder voluntary, and the other panel was asked to report all items on a completely voluntary basis. The result of the test was a decrease in the overall survey response rate to 80% from levels of 88% in 1989 and 89% in 1988. The response rates for the mandatory and voluntary panels were 89% and 69%, respectively. Detailed results of the test were published in Research and Development in Industry: 1990. For firms that reported R&D expenditures in 2002, table A-6 shows the percentage that also reported data for other selected items.

Character of Work Estimates

Response to questions about character of work (basic research, applied research, and development) declined in the mid-1980s, and as a result, imputation rates increased. The general imputation procedure described above became increasingly dependent upon information imputed in prior years, thereby distancing current-year estimates from any reported information. Because of the increasing dependence on imputed data, NSF chose not to publish character of work estimates in 1986. The imputation procedure used to develop these estimates was revised in 1987 for use with later data and differs from the general imputation approach. The new method calculated the character of work distribution for a nonresponding firm only if that firm reported a distribution within a five-year period, extending from two years before to two years after the year requiring imputation. Imputation for a given year was initially performed in the year the data were collected and was based on a character of work distribution reported in either of the two previous years, if any. It was again performed using new data collected in the next two years. If reported data followed no previously imputed or reported data, previous period estimates were inserted based on the currently reported information. Similarly, if reported data did not follow two years of imputed data, the two years of previously imputed data were removed. Thus, character of work estimates were revised as newly reported information became available and were not final for two years following their initial publication.

Beginning with 1995, previously estimated values were not removed for firms that did not report in the third year, nor were estimates made for the two previous years for firms reporting after two years of nonresponse. This process was changed because in the prior period revisions were minimal. Estimates continued to be made for two consecutive years of nonresponse and discontinued if the firm did not report character of work in the third year. If no reported data were available for a firm, character of work estimates were not imputed. As a consequence, only a portion of the total estimated R&D expenditures were distributed by character of work at the firm level. Those expenditures not meeting the requirements of the new imputation methodology were placed in a "not distributed" category.

NSF's objective in conducting the survey has always been to provide estimates for the entire population of firms performing R&D in the United States. However, the revised imputation procedure no longer produced such estimates because of the not-distributed component. A baseline estimation method thus was developed to allocate the not-distributed amounts among the character of work components. In the baseline estimation method, the not-distributed expenditures were allocated by industry group to the basic research, applied research, and development categories using the percentage splits in the distributed category for that industry. The allocation was done at the lowest level of published industry detail only; higher levels were derived by aggregation, just as national totals were derived by aggregation of individual industry estimates. This method results in higher performance shares for basic and applied research and a lower estimate for development's share than would have been calculated using the previous method.
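
The baseline allocation is a proportional split. The dollar figures below are hypothetical:

```python
def allocate_not_distributed(distributed, not_distributed):
    """Allocate a 'not distributed' amount across basic research,
    applied research, and development in proportion to the industry's
    distributed amounts. Figures are hypothetical, in $millions."""
    total = sum(distributed.values())
    return {category: amount + not_distributed * amount / total
            for category, amount in distributed.items()}

splits = {"basic": 10.0, "applied": 30.0, "development": 60.0}
print(allocate_not_distributed(splits, 50.0))
# 50 is split 5/15/30 -> {'basic': 15.0, 'applied': 45.0, 'development': 90.0}
```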

Using data collected during the 1999 and 2000 cycles of the survey, reporting anomalies for the character of work survey items, especially for basic research, were investigated. A number of large companies known to develop and manufacture products had reported all of their R&D as basic research. Because this pattern is implausible, it prompted a renewed effort to strengthen character of work estimates produced from the survey. Identification of the anomalous reporting patterns was completed, and edit checks were improved for processing of the 2001 and 2002 data. Consequently, publication of character of work distributions of R&D has resumed, and the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly.

State Estimates

Form RD-1 requests a distribution of the total cost of R&D among the states where R&D was performed. Prior to the 1999 survey, an independent source, the Directory of American Research and Technology (published by the Data Base Publishing Group of the R. R. Bowker Company), was used in conjunction with previous survey results to estimate R&D expenditures by state for companies that did not provide this information. The information on scientists and engineers published in the directory was used as a proxy indicator of the proportion of R&D expenditures within each state. R&D expenditures by state were estimated by applying the directory's distribution of scientists and engineers by state to total R&D expenditures for these companies. These estimates were included with reported survey data to arrive at published estimates of R&D expenditures for each state. However, the practice of using outside information to formulate or adjust estimates of R&D expenditures for each state has been discontinued because a suitable source for supporting information is no longer available. State estimates resulting from the 1999 and 2000 surveys were based solely on respondent reports and information internal to the survey.

Beginning with the 2001 survey, because a reliable, comprehensive outside source of information was no longer available, NSF sought and was granted authorization from the Office of Management and Budget (OMB), the federal agency that oversees and controls respondent burden, to require reporting of the distribution of R&D by state in an effort to improve the quality of the reported data.

Also beginning in 2001, the sampling and estimation methodologies used to produce state estimates were modified from previous years to yield better accuracy and precision and to reduce erroneous fluctuations in year-to-year estimates due to small sample sizes of R&D performers by state. The new sampling methodology selects known R&D performers with a higher probability than nonperformers and selects with certainty the largest 50 companies in each state based on payroll, thus providing more coverage of R&D performers. The new estimation methodology for state estimates takes the form of a hybrid estimator combining the unweighted reported amount by state with a weighted amount apportioned (or raked) across states with industrial activity. The hybrid estimator smooths the estimate over states with R&D activity by industry and accounts for real change within a state. The Horvitz-Thompson estimator continues to be used to estimate the number of R&D performers by state.
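
One plausible form of the hybrid estimator is sketched below. The exact raking procedure is not specified in the text, so this is a deliberate simplification with hypothetical data: the unweighted reported amounts per state plus the remaining weighted amount apportioned across states in proportion to industrial activity:

```python
def hybrid_state_estimate(reported_by_state, weighted_total, activity_share):
    """Hybrid state estimate: keep each state's unweighted reported
    amount and apportion the remainder of the weighted national total
    across states by their (hypothetical) industrial-activity shares.
    An illustrative simplification of the estimator described above."""
    remainder = weighted_total - sum(reported_by_state.values())
    return {state: reported_by_state[state] + remainder * activity_share[state]
            for state in reported_by_state}

reported = {"CA": 40.0, "TX": 20.0}   # unweighted reported R&D, $millions
shares = {"CA": 0.75, "TX": 0.25}     # activity shares (must sum to 1)
print(hybrid_state_estimate(reported, 100.0, shares))
# remainder of 40 is split 30/10 -> {'CA': 70.0, 'TX': 30.0}
```

Anchoring each state to its reported amount while spreading only the remainder is what damps the year-to-year swings that small per-state samples would otherwise produce.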

Comparability of Statistics

This section summarizes major survey improvements, enhancements, and changes in procedures and practices that may have affected the comparability of statistics produced from the Survey of Industrial Research and Development over time and with other statistical series (see also NSF 2002a and U.S. Bureau of the Census 1995). This section focuses on major historical changes. More detailed historical information is available from individual annual reports (http://www.nsf.gov/statistics/industry/).

Industry Classification System

Beginning with the 1999 cycle of the survey, industry statistics are published using the North American Industry Classification System (NAICS). The ongoing development of NAICS has been a joint effort of statistical agencies in Canada, Mexico, and the United States. The system replaced the Standard Industrial Classification (1980) of Canada, the Mexican Classification of Activities and Products (1994), and the Standard Industrial Classification (SIC 1987) of the United States. (For a detailed comparison of NAICS to the Standard Industrial Classification (1987) of the United States, visit http://www.census.gov/epcd/www/naics.html.) NAICS was designed to provide a production-oriented system under which economic units with similar production processes are classified in the same industry. NAICS was developed with special attention to classifications for new and emerging industries, service industries, and industries that produce advanced technologies. NAICS not only eases comparability of information about the economies of the three North American countries, but it also increases comparability with the two-digit level of the United Nations' International Standard Industrial Classification (ISIC) system. Important for the Survey of Industrial Research and Development is the creation of several new classifications that cover major performers of R&D in the U.S. Among manufacturers, the computer and electronic products classification (NAICS 334) includes makers of computers and peripherals, semiconductors, and navigational and electromedical instruments. Among nonmanufacturing industries are information (NAICS 51) and professional, scientific, and technical services (NAICS 54). Information includes publishing, both paper and electronic; broadcasting; and telecommunications. Professional, scientific, and technical services include a variety of industries. Of specific importance for the survey are engineering and scientific R&D service industries.

The change of industry classification system affected most of the detailed statistical tables produced from the survey. Prior to the 1999 report, tables classified by industry contained the current survey's statistics plus statistics for 10 previous years. Because of the new classification system, the tables classified in the 1999–2003 reports contain only statistics for the study year and previous years back to 1999. However, to provide a bridge for users who wanted to make year-to-year comparisons below the aggregate level, in several tables in Research and Development in Industry: 1999 and Research and Development in Industry: 2000 statistics from the 1997 and 1998 cycles of the survey, which were previously classified and published using the SIC system, were reclassified using the new NAICS codes. These reclassified statistics were slotted using their new NAICS classifications alongside the 1999 and 2000 statistics, which were estimated using NAICS from the outset.

Industry Classification Methodology

Since 1999, the frame from which the statistical samples were selected was divided into two partitions based on total company employment. In the manufacturing sector, companies with employment of 50 or more were included in the large-company partition. In the nonmanufacturing sector, companies with employment of 15 or more were included in the large-company partition. Companies in the respective sectors with employment below these values but with at least 5 employees were included in the small-company partition. The purpose of partitioning the sample this way was to reduce the variability in industry estimates largely attributed to the random year-to-year selection of small companies by industry and the high sampling weights that sometimes were assigned to them. Therefore, in the 1999 and 2000 reports, detailed industry statistics were published only for the large-company partition. Statistics from the small-company partition were included in the manufacturing, nonmanufacturing, and all-industries totals but were aggregated into "small manufacturing" and "small nonmanufacturing" classifications instead of being included in their respective industry classifications. Beginning with the 2001 survey, this practice was evaluated and discontinued because it was determined that data for small companies are more useful when included in their respective industries, even given the sampling concerns described above.

For the 2004 survey, some companies' electronically assigned industry codes were manually examined and changed. Beginning in the late 1990s, increasingly large amounts of R&D were attributed to the wholesale trade industries, resulting from the payroll-based methodology used to assign industry classifications and the change from the SIC system to the NAICS in 1999. Such classification artifacts were of particular concern for companies traditionally thought of as pharmaceutical or computer-manufacturing firms. As these firms increasingly marketed their own products and more of their payroll involved employees in selling and distribution activities, the potential for the companies to be classified among the wholesale trade industries increased. To increase the relevance and usefulness of the industrial R&D statistics, NSF evaluated ways to ameliorate the negative effects of the industry classification methodology and the change in classification systems. Beginning in 2004, in addition to firms originally assigned NAICS codes among the wholesale trade (NAICS 42) industries, firms that the payroll-based methodology assigned to the information (NAICS 51); professional, scientific, and technical services (NAICS 54); and management of companies and enterprises (NAICS 55) industries were manually reviewed by NSF and the Census Bureau. These firms were reclassified based on primary R&D activity, which in most cases corresponded to their primary product or service activities. As a result, most of the R&D previously attributed to the NAICS 42 and 55 industries was redistributed. Statistics resulting from the old and new industry classification methods are in tables A-9 and A-10. For detailed information, see NSF 2007.

Company Size Classifications

Beginning with the 1999 cycle of the survey, the number of company size categories used to classify survey statistics was increased from 6 to 10 to emphasize the role of small companies in R&D performance. The more detailed business size information also facilitates international comparisons: statistics produced by other countries on their industrial R&D enterprises generally use more detailed company size classifications at the lower end of the scale than U.S. industrial R&D statistics traditionally have. (For more information, visit the Organisation for Economic Co-operation and Development (OECD) website at http://www.oecd.org.) The new classifications of the U.S. statistics enable more direct comparisons with other countries' statistics.

Revisions to Historical and Immediate Prior-Year Statistics

Revisions to historical statistics usually have been made because of changes in the industry classification of companies caused by changes in payroll composition detected when a new sample was drawn. Various methodologies have been adopted over the years to revise, or backcast, the data when revisions to historical statistics have become necessary. Documented revisions to the historical statistics from post-1967 surveys through 1992 are summarized by NSF (1994) and in annual reports for subsequent surveys. Detailed descriptions of the specific revisions made to the statistics from pre-1967 surveys are scarce, but the U.S. Bureau of the Census (1995) summarizes some of the major revisions.

Changes to reported data can come from three sources: respondents, analysts involved in survey and statistical processing, and the industry reclassification process. Prior to 1995, routine revisions were made to prior-year statistics based on information from all three sources. Consequently, results from the current-year survey were used not only to develop current-year statistics but also to revise immediate prior-year statistics. Beginning with the 1995 survey, this practice was discontinued. The reasons for discontinuation of this practice were annual sampling; continual strengthening of sampling methodology; and improvements in data verification, processing, and nonresponse followup. Moreover, it was not clear that respondents or those who processed the survey results had any better information a year after the data were first reported. Thus, it was determined that routinely revising published survey statistics increased the potential for error and often confused users of the statistics. Revisions are now made to historical and immediate prior-year statistics only if substantive errors are discovered.

For 1999, an error in the sample frame caused one very large company (based on payroll) to be selected for the sample and its statistical record to be assigned a large weight (see Frame Creation and Weighting and Maximum Weights above). Because the company's record had received a large weight during 1999 sampling, the company was selected with certainty for the 2000 sample and assigned a weight of one (see Identifying Certainty Companies above). This sampling artifact caused an abnormally large decrease in the industry data, especially for sales and employment, when comparing the 2000 statistics with the statistics originally published for 1999. The weight in the company's record in the 1999 statistical file was corrected, and the 1999 statistics were revised and included in subsequent reports. R&D estimates for the company also were affected; however, the amount of R&D was relatively small, even after weighting.

As summarized above under Character of Work Estimates, reporting anomalies for the character-of-work survey items, especially basic research, were discovered and investigated using data collected during the 1999 and 2000 cycles of the survey. Companies known to develop and manufacture products but that reported all of their R&D as basic research were contacted and queried about their R&D activities. After reviewing the definitions of basic research, applied research, and development, all but a few revised their distribution of R&D. Census, the collection and tabulation agent for the survey, was able to correct the statistical files as far back as 1998. Consequently, the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly.

During statistical processing for the 2003 survey, two problems were discovered. The first involved a very large company classified among the manufacturing industries. The company was properly sampled for the survey and sent a questionnaire but did not respond; it had responded to the survey in the late 1990s but not since. In such cases, estimates for the missing data are made using imputation algorithms (see Imputation for Item Nonresponse above). Using publicly available information, it was discovered that the amount of R&D imputed for the company for 2003 was much lower than the amount indicated by the public sources, and that the amounts imputed since the company's last report were similarly too low. The company was contacted, and it provided a corrected amount for 2003 and updated R&D amounts for past years. Consequently, the historical statistics for 1999–2002 in this report have been revised and the affected tables footnoted accordingly. The second problem involved another very large company that significantly revised the 2003 data preprinted (see Survey Questionnaires above) on its 2004 questionnaire. During 2003, the company had acquired a portion of another company that also had been in the survey in previous years. Through correspondence with the acquiring company, it was discovered that a significant amount of R&D had been reported twice, once to the 2003 survey and once to the 2004 survey. The double-counted portion of the data was corrected in both survey files, and the tables in this report reflect the corrections.

Year-to-Year Changes

Comparability from year to year may be affected by new sample design, annual sample selection, and industry shifts.

Sample Design

By far the most profound influence on statistics from recent surveys occurred when the new sample design for the 1992 survey was introduced. Revisions to the 1991 statistics were dramatic (see Research and Development in Industry: 1992 (NSF 1995b) for a detailed discussion). While the allocation of the sample was changed somewhat, the sample designs used for subsequent surveys were comparable to the 1992 sample design in terms of size and coverage.

Annual Sample Selection

With annual sampling (introduced in 1992), more year-to-year change is evident than when survey panels were used, for two reasons. First, prior to annual sampling, a wedging operation, which was performed when a new sample was selected, adjusted the data series gradually to account for the changes in classification (see the discussion on wedging later under Time-Series Analyses). Second, yearly correlation of R&D data is weakened when independent samples are drawn each year.

Industry Shifts

The industry classification of companies is redefined each year with the creation of the sampling frame. By redefining the frame, the sample reflects current distributions of companies by size and industry. A company may move from one industry to another because of changes in its payroll composition, which is used to determine the industry classification code (see previous discussion under Frame Creation); changes in the industry classification system itself; or changes in the way the industry classification code was assigned or revised during survey processing.

A company's payroll composition can change because of the growth or decline of product or service lines, the merger of two or more companies, the acquisition of one company by another, divestitures, or the formation of conglomerates. With the introduction of annual sampling, a company's industry designation could, although this is unlikely, be reclassified every year. When companies shift industry classifications, the result is a downward movement in R&D expenditures in one industry balanced by an upward movement in another industry from one year to the next.

From time to time, the industry coding system used by federal agencies that publish industry statistics is changed or revised to reflect the changing composition of U.S. and North American industry. The Standard Industrial Classification (SIC) system, as revised in 1987, was used for statistics developed from the 1988–91 panel surveys and the 1992–98 annual surveys. As discussed above, the industrial classification system has been completely changed, and beginning with the 1999 cycle of the survey, the North American Industry Classification System (NAICS) is now used.

The method used to classify firms during survey processing was revised slightly in 1992. Research has shown that the impact on individual industry estimates was minor. (The effects of changes in the way companies were classified during survey processing are discussed in detail in U.S. Bureau of the Census 1994a and 1994e.) The current method used to classify firms was discussed previously under Frame Creation; methods used for past surveys are discussed in U.S. Bureau of the Census (1995). Large year-to-year changes may occur because of the way industry classifications are assigned during statistical processing. As discussed above, a company's industry classification is a function of its primary activity based on payroll, which is not necessarily the primary source of its R&D activity. If the largest portion of a company's payroll shifts to an activity other than an R&D-related activity, for example trade, all of its R&D similarly shifts to the new activity. Further, the design of the statistical sample sometimes contributes to large year-to-year changes in industry estimates. Because relatively few companies perform R&D and there is no national register of industrial R&D performers, a large statistical "net" must be cast to capture new R&D performers. When these companies are sampled for the first time, they are often given weights much higher than they would be given if their size and the amount of R&D they perform were known at the time of sampling. After the size of the company and the amount of R&D performed are discovered via the first survey, the weight assigned for subsequent surveys is adjusted.

Capturing Small and Nonmanufacturing R&D Performers

Before the 1992 survey, the sample of firms surveyed was selected at irregular intervals; until 1967, samples were selected every 5 years. Subsequent samples were selected for 1971, 1976, 1981, and 1987. In intervening years, a panel of the largest firms known to perform R&D was surveyed. For example, a sample of about 14,000 firms was selected for the 1987 survey. For the 1988–91 studies, about 1,700 of these firms were resurveyed annually; the other firms did not receive survey questionnaires, and their R&D data were estimated. This sample design was adequate during the survey's early years because R&D performance was concentrated in relatively few manufacturing industries. However, as more and more firms began entering the R&D arena, the old sample design proved increasingly deficient because it did not capture births of new R&D-performing firms. The entry of fledgling R&D performers into the marketplace was completely missed during panel years. Additionally, beginning in the early 1970s, the need for more detailed R&D information for nonmanufacturing industries was recognized. At that time, the broad industry classifications "miscellaneous business services" and "miscellaneous services" were added to the list of industry groups for which statistics were published. By 1975, about 3% of total R&D was performed by firms in nonmanufacturing industries. (See also NSF 1994, 1995a, and 1996a.)

During the mid-1980s, there was evidence that a significant amount of R&D was being conducted by an increasing number of companies classified among the nonmanufacturing industries. Again the number of industries used to develop the statistics for nonmanufacturers was increased. Consequently, the annual reports in this series for 1987–91 included separate R&D estimates for firms in the communication, utility, engineering, architectural, research, development, testing, computer programming, and data processing service industries; hospitals; and medical labs. Approximately 9% of the estimated industrial R&D performance during 1987 was undertaken by nonmanufacturing firms.

After the list of industries for which statistics were published was expanded, it became clear that the sample design itself should be changed to reflect the widening population of R&D performers among firms in the nonmanufacturing industries (NSF 1995a) and small firms in all industries so as to account better for births of R&D-performing firms and to produce more reliable statistics. Beginning with the 1992 survey, NSF decided (1) to draw new samples with broader coverage annually and (2) to increase the sample size to approximately 25,000 firms.[8] As a result of the sample redesign, for 1992 the reported nonmanufacturing share was (and has continued to be) 25%–30% of total R&D. (See also NSF 1997a, 1998a, 1999a, 2000a, 2001a, and 2002a.)

Time-Series Analyses

The statistics resulting from this survey on R&D spending and personnel are often used as if they were prepared using the same collection, processing, and tabulation methods over time. Such uniformity has not been the case. Since the survey was first fielded, improvements have been made to increase the reliability of the statistics and to make the survey results more useful. To that end, past practices have been changed and new procedures instituted. Preservation of the comparability of the statistics has, however, been an important consideration in making these improvements. Nonetheless, changes to survey definitions, the industry classification system, and the procedure used to assign industry codes to multiestablishment companies have had some, though not substantial, effects on the comparability of statistics. (For discussions of each of these changes, see U.S. Bureau of the Census 1994g; for considerations of comparability, see U.S. Bureau of the Census 1993 and 1994e.)

The aspect of the survey that had the greatest effect on comparability was the selection of samples at irregular intervals and the use of a subset or panel of the last sample drawn to develop statistics for intervening years. As discussed earlier, this practice introduced cyclical deterioration of the statistics. As compensation for this deterioration, periodic revisions were made to the statistics produced from the panels surveyed between sample years. Early in the survey's history, various methods were used to make these revisions (U.S. Bureau of the Census 1995). After 1976 and until the 1992 advent of annual sampling, a linking procedure called wedging was used. In wedging, the two sample years on each end of a series of estimates served as benchmarks in the algorithms used to adjust the estimates for the intervening years. (The process was dubbed wedging because of the wedgelike area produced on a graph that compares originally reported statistics with the revised statistics that resulted after linking. For a full discussion of the mathematical algorithm used for the wedging process that linked statistics from the 1992 survey with those from the 1987 survey, see U.S. Bureau of the Census 1994g and NSF 1995b.)
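The benchmark-linking idea behind wedging can be illustrated with a simplified sketch in which an adjustment ratio moves linearly from one benchmark year to the next. This is not the documented Census algorithm (see U.S. Bureau of the Census 1994g for that); the function and variable names are assumptions for illustration:

```python
def wedge(panel_estimates, bench_start, bench_end):
    """Linearly blend adjustment ratios between two benchmark years.

    panel_estimates: originally published values for consecutive years,
    where the first and last positions are the benchmark (full-sample)
    years. bench_start and bench_end are the benchmark values for those
    two years. Returns revised estimates for the whole span; a
    simplified linear sketch of the linking idea only.
    """
    n = len(panel_estimates) - 1
    r0 = bench_start / panel_estimates[0]   # ratio at first benchmark
    rn = bench_end / panel_estimates[n]     # ratio at last benchmark
    revised = []
    for i, est in enumerate(panel_estimates):
        ratio = r0 + (rn - r0) * i / n      # ratio moves linearly across the span
        revised.append(est * ratio)
    return revised
```

Plotting the original series against the revised one produces the wedge-shaped gap that gave the procedure its name: the two series coincide at the benchmark endpoints and diverge most in the middle years.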

Comparisons to Other Statistical Series

NSF collects data on federally financed R&D from both federal funding agencies, using the Survey of Federal Funds for Research and Development, and from performers of the R&D—industry, federally funded research and development centers, universities, and other nonprofit organizations—using the Survey of Industrial Research and Development and other surveys (http://www.nsf.gov/statistics/publication.cfm). As reported by federal agencies, NSF publishes data on federal R&D budget authority and outlays, in addition to federal obligations. These terms are defined below (NSF 2002b):

  • Budget authority is the primary source of legal authorization to enter into obligations that will result in outlays. Budget authority is most commonly granted in the form of appropriations by the congressional committees assigned to determine the budget for each function.

  • Obligations represent the amounts for orders placed, contracts awarded, services received, and similar transactions during a given period, regardless of when the funds were appropriated or when future payment of money is required.

  • Outlays represent the amounts for checks issued and cash payments made during a given period, regardless of when the funds were appropriated or obligated.

National R&D expenditure totals in NSF's National Patterns of R&D Resources report series are primarily constructed with data reported by performers and include estimates of federal R&D funding to these sectors. Until performer-reported survey data on federal R&D expenditures are available from industry and academia, however, data collected from the federal agency funders of R&D are used to project R&D performance. When survey data from the performers subsequently are tabulated, as they were for this report, those statistics replace the projections based on funder expectations. Historically, the two survey systems have tracked fairly closely. For example, in 1980, performers reported using $29.5 billion in federal R&D funding, and federal agencies reported total R&D funding between $29.2 billion in outlays and $29.8 billion in obligations (NSF 1996b). In recent years, however, the two series have diverged considerably. The difference in the federal R&D totals appears to be concentrated in funding of industry, primarily aircraft and missile firms, by the Department of Defense. Overall, industrial firms have reported significant declines in federal R&D support since 1990 (table A-1), while federal agencies have reported level or slightly increased funding of industrial R&D (NSF 2006b). NSF continues to identify and examine the factors behind these divergent trends.


Survey Definitions

Employment, FTE R&D scientists and engineers. Number of people employed in the 50 U.S. states and DC by R&D-performing companies who were engaged in scientific or engineering work at a level that required knowledge, gained either formally or by experience, of engineering or of the physical, biological, mathematical, statistical, or computer sciences equivalent to at least that acquired through completion of a 4-year college program with a major in one of those fields. The statistics show full-time-equivalent (FTE) employment of persons employed by the company during the January following the survey year who were assigned full time to R&D, plus a prorated number of employees who worked on R&D only part of the time.

Employment, total. Number of people employed in the 50 U.S. states and DC by R&D-performing companies in all activities during the pay period that included the 12th of March of the study year (March 12 is the date most employers use when paying first quarter employment taxes to the Internal Revenue Service).

Federally funded R&D centers (FFRDCs). R&D-performing organizations administered by industrial, academic, or other institutions on a nonprofit basis and exclusively or substantially financed by the federal government. To avoid the possibility of disclosing company-specific information and therefore violating the confidentiality provisions of Title 13 of the United States Code, beginning in 2001 data for industry-administered FFRDCs are now collected through NSF's annual academic R&D expenditure survey, the Survey of Research and Development Expenditures at Universities and Colleges, as are data from FFRDCs administered by academic institutions and nonprofit organizations. More information about this survey is available from NSF's Division of Science Resources Statistics website at http://www.nsf.gov/statistics/rdexpenditures/. For current lists of FFRDCs, visit http://www.nsf.gov/statistics/ffrdc/.

Funds for R&D, company and other nonfederal. The cost of R&D performed within the company and funded by the company itself or by other nonfederal sources in the 50 U.S. states and DC; does not include the cost of R&D funded by the company but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double counting—other companies.

Funds for R&D, federal. The cost of R&D performed within the company in the 50 U.S. states and DC funded by federal R&D contracts, subcontracts, R&D portions of federal procurement contracts and subcontracts, grants, or other arrangements; does not include the cost of R&D supported by the federal government but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or other companies.

Funds for R&D, total. The cost of R&D performed within the company in its own laboratories or in other company-owned or company-operated facilities in the 50 U.S. states and DC, including expenses for wages and salaries, fringe benefits for R&D personnel, materials and supplies, property and other taxes, maintenance and repairs, depreciation, and an appropriate share of overhead; does not include capital expenditures or the cost of R&D contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double-counting—other companies.

Funds per R&D scientist or engineer. All costs associated with the performance of industrial R&D (salaries, wages, and fringe benefits paid to R&D personnel; materials and supplies used for R&D; depreciation on capital equipment and facilities used for R&D; and any other R&D costs) divided by the number of R&D scientists and engineers employed in the 50 U.S. states and DC. To obtain a per person cost of R&D for a given year, the total R&D expenditures for that year were divided by an approximation of the number of full-time-equivalent (FTE) scientists and engineers engaged in the performance of R&D for that year. For accuracy, this approximation was the mean of the numbers of FTE R&D-performing scientists and engineers reported for January of the year in question and January of the subsequent year. For example, the mean of the numbers of FTE R&D scientists and engineers in January 2003 and January 2004 was divided into total 2003 R&D expenditures to obtain the total cost per R&D scientist or engineer for 2003.
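The arithmetic in this definition is straightforward and can be written out as a one-line helper. The function name and inputs are illustrative assumptions, not part of the survey's published methodology:

```python
def cost_per_rd_scientist(total_rd, fte_jan_year, fte_jan_next):
    """Total R&D expenditures for a year divided by the mean of the
    FTE R&D scientist/engineer counts reported for January of that
    year and January of the following year."""
    return total_rd / ((fte_jan_year + fte_jan_next) / 2)
```

For a hypothetical year with $200 million in total R&D, 1,000 FTE R&D scientists and engineers in January of that year, and 1,200 the following January, the denominator is 1,100 and the cost per FTE is about $181,818.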

Net sales and receipts. Dollar values for goods sold or services rendered by R&D-performing companies to customers outside the company, including the federal government, less such items as returns, allowances, freight, charges, and excise taxes. Domestic intracompany transfers and sales by foreign subsidiaries were excluded, but transfers to foreign subsidiaries and export sales to foreign companies were included.

R&D and industrial R&D. R&D is the planned, systematic pursuit of new knowledge or understanding toward general application (basic research); the acquisition of knowledge or understanding to meet a specific, recognized need (applied research); or the application of knowledge or understanding toward the production or improvement of a product, service, process, or method (development). Basic research analyzes properties, structures, and relationships toward formulating and testing hypotheses, theories, or laws; applied research is undertaken either to determine possible uses for the findings of basic research or to determine new ways of achieving specific, predetermined objectives; and development draws on research findings or other scientific knowledge for the purpose of producing new or significantly improving products, services, processes, or methods. As used in this survey, industrial basic research is the pursuit of new scientific knowledge or understanding that does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest; industrial applied research is investigation that may use findings of basic research toward discovering new scientific knowledge that has specific commercial objectives with respect to new products, services, processes, or methods; and industrial development is the systematic use of the knowledge or understanding gained from research or practical experience directed toward the production or significant improvement of useful products, services, processes, or methods, including the design and development of prototypes, materials, devices, and systems. The survey covers industrial R&D performed by people trained, either formally or by experience, in engineering or in the physical, biological, mathematical, statistical, or computer sciences and employed by a publicly or privately owned firm engaged in for-profit activity in the United States. 
Specifically excluded from the survey are quality control, routine product testing, market research, sales promotion, sales service, and other nontechnological activities; routine technical services; and research in the social sciences or psychology.



Footnotes

[3] In the Survey of Industrial Research and Development and in the publications presenting statistics resulting from the survey, the terms firm, company, and enterprise are used interchangeably. Industry refers to the 2-, 3-, or 4-digit North American Industry Classification System (NAICS) codes or group of NAICS codes used to publish statistics resulting from the survey.

[4] The 1999 survey was the first year that companies were classified using NAICS. Prior to 1999, the Standard Industrial Classification (SIC) system was used. The two systems are discussed later under Comparability of Statistics.

[5] Form RD-1 is a revised version of Form RD-1L, formerly used to collect data from large R&D performers in odd-numbered years. In even-numbered years, an abbreviated questionnaire, Form RD-1S, was used. Beginning in 1998, Form RD-1L was streamlined, renamed Form RD-1, and the odd/even-year cycle abandoned.

[6] For detailed discussions on the sources, control, and measurement of error resulting from item nonresponse, see U.S. Bureau of the Census (1994b).

[7] For detailed descriptions and analyses of the imputation methods and algorithms used, see U.S. Bureau of the Census (1994c).

[8] Annual sampling also remedies the cyclical deterioration of the statistics that results from changes in a company's payroll composition because of product line and corporate structural changes.


Technical Tables


Table  Title
A-1.  Companies in the target population and selected for the sample, by industry and company size: 2004
A-2.  Relative standard error for survey estimates, by industry and company size: 2004
A-3.  Relative standard error for estimates of all R&D and percentage of estimates attributed to certainty companies, by state: 2004
A-4.  Unit response rates and percentage of companies performing R&D, by industry and type of survey form: 2004
A-5.  Imputation rates for survey items, by industry and company size: 2004
A-6.  R&D-performing companies that reported nonzero data for major survey items: 2004
A-7.  Funds for and number of companies performing industrial basic research, applied research, and development in the United States, by industry and company size, by source of funds: 2004
A-8.  Funds for industrial R&D, sales, and employment for companies performing industrial R&D in the United States, by industry and company size: 2003–2004
A-9.  Funds for industrial R&D, sales, and employment for companies performing industrial R&D in the United States, by original (2003) industry and company size, by original and revised industry classification methodologies: 2004
A-10. Funds for industrial R&D, sales, and employment for companies performing industrial R&D in the United States, by revised industry and company size, by original and revised industry classification methodologies: 2004

 
Research and Development in Industry: 2004
Detailed Statistical Tables | NSF 09-301 | January 2009
National Science Foundation Division of Science Resources Statistics (SRS)
The National Science Foundation, 4201 Wilson Boulevard, Arlington, Virginia 22230, USA
Tel: (703) 292-8780, FIRS: (800) 877-8339 | TDD: (800) 281-8749