Federal Committee on Statistical Methodology
Office of Management and Budget

 

  Statistical Policy Working Paper 7 - An Interagency Review of Time-Series Revision Policies



 

 

 

 

 

                MEMBERS OF THE FEDERAL COMMITTEE ON

                      STATISTICAL METHODOLOGY

 

                          (October 1982)

 



                   Maria Elena Gonzalez (Chair)

        Office of Information and Regulatory Affairs (OMB)

 

                         Barbara A. Bailar



                  Bureau of the Census (Commerce)

 

                         Norman D. Beller

       National Center for Education Statistics (Education)

 

                         Yvonne M. Bishop

            Energy Information Administration (Energy)

 

                         Edwin J. Coleman

              Bureau of Economic Analysis (Commerce)

 

                          John E. Cremeans

             Bureau of Industrial Economics (Commerce)

 

                         Zahava D. Doering

              Defense Manpower Data Center (Defense)

 

                         Marie D. Eldridge

       National Center for Education Statistics (Education)

 

                         Daniel E. Garnick

              Bureau of Economic Analysis (Commerce)

 

                         Charles D. Jones

                  Bureau of the Census (Commerce)

 

                          Daniel Kasprzyk

                   Social Security Administration (HHS)

 

 

                         William E. Kibler

            Statistical Reporting Service (Agriculture)

 

 

                           Thomas Plewes

                Bureau of Labor Statistics (Labor)

 

                        Raimond C. Sansing

                Internal Revenue Service (Treasury)

 

                         Fritz J. Scheuren

                Internal Revenue Service (Treasury)

 

                         Monroe G. Sirken

            National Center for Health Statistics (HHS)

 

                            Wray Smith

            Energy Information Administration (Energy)

 

                         Thomas G. Staples

               Social Security Administration (HHS)

 

 

 

 



 

           OFFICE OF INFORMATION AND REGULATORY AFFAIRS

 

                 Christopher DeMuth, Administrator

 

            Thomas D. Hopkins, Deputy Administrator for

                Regulatory and Statistical Analysis

 

                  Maria E. Gonzalez, Chairperson

           Federal Committee on Statistical Methodology

 

                              PREFACE

 

 

This working paper was prepared by the members of the Subcommittee

on Guidelines for Making and Publishing Revisions and Corrections

to Time Series, Federal Committee on Statistical Methodology.  The

Subcommittee was chaired by Yvonne M. Bishop, Energy Information

Administration, Department of Energy.



 This report includes recommendations formulated by the interagency

subcommittee concerning policies and practices for estimating and

publishing revisions to time series.  These subcommittee

recommendations were not formally endorsed by the Federal Committee

on Statistical Methodology nor by the Office of Management and

Budget.  The findings presented here provide guidance for improving

agency practices with respect to time series.  Seminars will be

organized to discuss the findings of this subcommittee with Federal

agency personnel involved in estimating or publishing time series.

 

 

 

 

 

                          MEMBERS OF THE

             SUBCOMMITTEE ON GUIDELINES FOR MAKING AND

        PUBLISHING REVISIONS AND CORRECTIONS TO TIME SERIES

 

 

Yvonne M. Bishop, Chairperson

Energy Information Administration

Department of Energy

 

Cynthia Clark

Regulatory and Statistical Analysis Division

Office of Management and Budget

 

David Findley

Bureau of the Census

Department of Commerce

 

John N. Gorman

Bureau of Economic Analysis

Department of Commerce

 



Robert Freie

Economic Research Service

Department of Agriculture



 



Maria E. Gonzalez



Regulatory and Statistical Analysis Division



Office of Management and Budget



 



Michael Griffey



Energy Information Administration



Department of Energy



 



Marie Hertzberg



Bureau of Economic Analysis



Department of Commerce



 



Hajo Lamprecht



Securities and Exchange Commission



 



James Lowerre



Federal Trade Commission



 



Richard McDonald



Bureau of Labor Statistics



Department of Labor



 



J. Courtland Peret



Federal Reserve Board



 



David Pierce



Federal Reserve Board



 



Fred Riley



Internal Revenue Service



Department of the Treasury



 



Joseph Stith



Bureau of the Census



Department of Commerce



 



 



Mitchell Trager



Bureau of the Census



Department of Commerce



 



Kenneth Utter



Internal Revenue Service



Department of the Treasury



 



Lynn Weidman



Bureau of the Census



Department of Commerce



 



Paul J. Werbos



Energy Information Administration



Department of Energy



 






 



 



 



 



 



                         ACKNOWLEDGEMENTS



 



 



This report represents the collective efforts of the Subcommittee
on Guidelines for Making and Publishing Revisions and Corrections
to Time Series.  Members of the subcommittee discussed the state of



affairs and decided to obtain more specific information from the



agencies represented.  They prepared a questionnaire and located



persons in their agencies who would complete the questionnaire. 



Responses were received as follows:



 



                    BEA       5



                    BLS       2



                    Census    9



                    EIA       2



                    FRB       5



                    FTC       1



                    IRS       2



                    SEC       2



                    USDA      3



 



                    Total     31



 



 



The subcommittee is grateful to all the respondents.



 



Various members of the subcommittee worked on tabulating the



responses to the questionnaire and drafting portions of the text. 



Paul Werbos undertook the task of consolidating the drafts into a



final report.



 






 



 



 



 



 



      AN INTERAGENCY REVIEW OF TIME-SERIES REVISION POLICIES



 



 



I.   INTRODUCTION.



 



The Federal Committee on Statistical Methodology established the



Subcommittee on Guidelines for Making and Publishing Revisions and



Corrections to Time-Series.  The purpose of the subcommittee was to



review current agency policies and to determine if user needs are



met by the current procedures and guidelines.  Revision policy



guidelines were formulated in Statistical Policy Directive no. 3 of
the Office of Federal Statistical Policy and Standards (OFSPS).
This directive is currently an Office of Management and Budget
(OMB) standard.  These guidelines include:



     a.  Preliminary and revised figures should be clearly



identified as such.  For principal aggregate figures, revisions



should be accompanied by the previous figures to facilitate



comparison.



     b. Revisions occurring for various reasons such as benchmark



revisions, updating of seasonal factors, and replacement of



preliminary by revised figures, should be consolidated and released



simultaneously.



     c.  Revisions occurring for reasons other than routine and



regular replacement of preliminary by revised figures because of new



data should be accompanied by a brief explanation at the time of



release.



     The subcommittee conducted a series of meetings to discuss



agency policies, and the impact of alternative policies on users.



Because of the wide diversity of policies and users, a



questionnaire was developed to give a clearer picture of the extent



of this diversity.  Agencies were asked to select time-series of



Interest, and fill out the questionnaire for each series.  Nine



agencies submitted 31 questionnaires.  Participating agencies were



BEA, BLS, Census, EIA, FRB, FTC, IRS, SEC and USDA.  The series



chosen were not selected at random, and are too few to permit



statistical inference; however, the responses were discussed by



subcommittee members who represented the agencies, and it was
believed that the series selected could be regarded as illustrative
of the practices in those agencies.



     This report represents the work of the subcommittee and



summarizes what was found in the questionnaires, and what further
issues emerged in subcommittee discussions and in the analysis of
the questionnaires.  Possible changes in policy are also discussed.



 



 



II. SUMMARY OF THE QUESTIONNAIRE RESULTS.



 



     The initial version of the questionnaire contained several



questions on the data users and their needs.  This was considered



important to cost-benefit analysis of policy because the costs of
any data collection activity are concentrated in the agencies while
the benefits accrue to the users.  However, these questions had to be



deleted because of an almost universal lack of information about



users.  A handful of agencies do send out questionnaires on user



satisfaction with their data packages, but these do not permit in-



depth analysis of revisions policy without further information.



     The final version of the questionnaire contained eight



substantive questions, and this discussion is based on series-by-



series tabulation of the responses.  From the responses it was



found that most agencies selected monthly time-series to review,



presumably because revision is considered a bigger problem in these



series.  Thus, annual time-series as a group are not adequately



covered by this analysis.  A few quarterly and weekly series are



covered.



     Questions on collection procedures and the timing of release
highlighted a major reason for revision: tight deadlines and
reliance on replies by mail.  It was also found that the deadlines



are set by the agencies themselves in all but one case; however,



most subcommittee members had a sense that the tight deadlines



resulted from agency response to pressure from users.  Users such



as the Office of Management and Budget, the Council of Economic



Advisors, the Treasury and the Federal Reserve Board were cited.



     Tabulations of the reasons for revision showed that two-thirds
are revised because of late responses or corrections from



respondents; all but a few are also revised because of errors



detected by the agencies.  Half are revised to update the seasonal



adjustment.  Seasonal adjustment in almost always done by use of



the Census Bureau X-11 program whose final results d"nd an future



observations.  Thus, of the 19 series reported which are seasonally



adjusted, 17 have the seasonal adjustment factors revised %&an; the



actual data become available.  In addition to these ongoing reasons



for revisions, there are occasional needs for adjusting more than



one past time period.  One example is a change in the definition of the



variable being measured.  Another example is "benchmarking"; in



other words, data from sources such as an annual survey are



adjusted retroactively to make the series fit smoothly to another



data source, such as a five-year census.  Additionally, revisions



can occur because of a major change in the "frame," the list from



which respondents are sampled.
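     To illustrate why seasonal factors of this kind must be
revised, note that symmetric moving averages need observations on
both sides of the period being adjusted.  The following Python
sketch (a toy stand-in, not the X-11 algorithm itself; all numbers
are invented) shows a first estimate computed from projected values
being replaced once actual observations arrive:

     # Why estimates near the end of a series get revised: a centered
     # average for the latest month depends on observations that do not
     # yet exist, so projected values are used and later replaced.

     def centered_average(series, t, half=6):
         """13-term average centered on period t (a crude stand-in
         for the symmetric filters used in seasonal adjustment)."""
         segment = series[t - half : t + half + 1]
         return sum(segment) / len(segment)

     # Toy monthly series with a seasonal pattern and a trend.
     data = [100 + (t % 12) + 0.5 * t for t in range(36)]

     # First estimate for the latest month: pad the series with a naive
     # projection (here, simply repeating the last observed value).
     projected = data + [data[-1]] * 6
     first = centered_average(projected, len(data) - 1)

     # Six months later the actual observations exist; recomputing the
     # average yields a different, "revised" value.
     actual = [100 + (t % 12) + 0.5 * t for t in range(42)]
     revised = centered_average(actual, len(data) - 1)
     print(first, revised)   # the difference is the revision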



     Respondents were asked whether information was available on



the magnitude and direction of revisions.  The committee found that



there has been relatively little formal analysis of the magnitude



and  direction of revisions.  However, those performing the



revisions do typically notice the changes they are making.  Thus,



in almost half the cases the direction of revision was known.  In
three cases, it was stated that the revisions are much smaller than
the actual change from period to period; in other cases, it was



indicated (on the survey or in discussion) that revisions are



"small" (on the order of 1 to 3 percent apparently).  In three



cases, the magnitude was quantified at between 0.25 and 1.4 percent.



     In order to determine cost-effectiveness of any revision



policy, it is necessary to consider



 






 



 



 



 



 



 



the method of disseminating data and revisions.  It was found that



preliminary data and revisions are both still disseminated
primarily by traditional means.  All data series discussed are
published and, in all but two cases, the revisions are published.



In the bulk of cases, the original is also sent out in a press



release; in most cases the revision is also released.  In less than



half the cases, the original and the revision are available in tape



form; furthermore, it is relatively rare for current, revised figures
to be available in a user-accessible databank or in microfiche



form.



     The OFSPS Directive no. 3 states that revisions should be



accompanied by previous figures, so as to provide an indication of
the magnitude of the change.  After initial checks, it was found



that only 8 out of the 31 data-series follow this guideline.  The



directive also indicates that preliminary figures should be



adjusted when the direction of revisions is predictable; from the
answers to the questionnaire, it seems doubtful that this is ever
done.  On the other hand, with almost half the series, some



information is published at times about the past history of



revisions or the like.  In most cases, the actual methodologies



used in revision are published.



     The committee was concerned as to how the user is notified
that a large revision has been made.  Of those series reported as
being available in tape form, for about half the existence of a
revised tape is announced in periodicals such as the "Survey of
Current Business" or the "SEC Monthly Statistical Review"; for other



series a press release is used, or the revisions are indicated in



regularly scheduled updates.  For all but three of the data series



discussed, users are notified of gross errors by such means as a



note in published periodicals, errata sheets, or letters to



sponsors.



     Almost half of the series discussed are both benchmarked and



seasonally adjusted.  Therefore, the interactions of these two



forms of revision need more serious study; it is possible that



benchmarking, if timed incorrectly, may introduce mathematical



artifacts into the seasonal adjustment process.  Tight deadlines



for data release require not only revision, but also a heavy



reliance on imputation methods to estimate responses which are not



available in time for the initial deadline.  In all but a few



of the 31 data series examined, imputation is used.  These methods



are very diverse, and do not seem to result from a statistical



evaluation of the various alternatives.  Sometimes the trend of the



overall series is used to impute individual responses; in other
cases, individual responses are imputed in other ways (by judgement,
or by "hot deck," or by estimation using prior data, or by an
unspecified estimation method, or by



matching to other data).  Often non-respondents as a group are



imputed (by assuming non-respondents are the same as respondents,



or that they change at the same rate, or by the use of trends,



adjusted weights or some ratio technique).
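     As a concrete illustration of one of the approaches listed
above (assuming that non-respondents change at the same rate as
respondents), consider this minimal Python sketch; the figures are
invented:

     # Ratio-style imputation for non-respondents: assume the units
     # that have not yet reported changed at the same rate, since the
     # prior period, as the units that did report.

     prior_total_all         = 1000.0  # prior period, every unit
     prior_total_reporters   = 800.0   # prior period, units reporting now
     current_total_reporters = 840.0   # reports received by the deadline

     growth_of_reporters = current_total_reporters / prior_total_reporters
     estimated_current_total = prior_total_all * growth_of_reporters
     print(estimated_current_total)    # 1050.0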



     The committee observed that for some series the size of the



change associated with each revision decreases over time. 



Nevertheless, it is rare for the decision as to whether to publish



a revised number to be dependent on the size of the revision.  For



one series discussed - the Consumer Price Index - revisions are not



made public except when a very large error occurs, because such



revisions might confuse the contracts and laws which refer to the



value of that index; the potential revisions have been studied and



are probably comparable to those of the other series described



here.



 



 



III. ISSUES RAISED BY QUESTIONNAIRE ANALYSIS AND DISCUSSION.



 



     The results of the questionnaires, by themselves, raise as



many questions as they answer.  Resolving some of these questions
will require statistical research.  In some cases, however, the



discussions of the subcommittee can provide a more complete (if



speculative) picture.  The discussion below is partly based on the
subcommittee work and partly based on last-minute efforts by a few
subcommittee members to understand these results.  Concern focused
on the impact on users, the effects of benchmarking, and how bias



and the number of revisions might be minimized.



 



 



Impact of Revisions on Users.



The lack of information about users does not mean that agencies are
unconcerned about users.  Borrowing terms from private industry,
one might say that most statistical agencies have large sales
departments, often with a major customer relations function;
however, market analysis is not possible within their budgets.  In



some agencies, requirements reviews are beginning to fill the gap;



but even when these are available, they tell us little about the



interaction between data collection options and the methods used to



apply the data to analysis or elsewhere.



     In its discussions, the Subcommittee emphasized two categories



of usage: (1) Monitoring current developments to detect any



indication of improvement or worsening in some situation, or more



generally, to obtain an accurate relative indication of what is



going on today; and (2) using an accurate historical record to



develop a statistical model of a system, so that reasonable
inferences about cause and effect might be made.  The educated
analyst of current problems would actually combine both, because a
proper interpretation of the present requires an understanding of
the past.



     Most agencies are primarily concerned about keeping the



monitor happy.  The reasons for this are straightforward.  The



monitors include Congressional Committees that ask for briefings on
the current situation, and sometimes press hard for explanations



for delays, revisions or discrepancies between one source of data



and another.  Likewise, the monitors include those who brief the



President on the current situation; they also include TV stations



and newspapers who give broad publicity to the latest statistics.



      Some monitors are highly conscious of revisions and will



complain strongly to an agency if there are too many versions of



the same number; other monitors may be less conscious of the



accuracy factor, and simply assume that a preliminary estimate



reflects very recent reality.



     In almost all cases, it is important for monitors that the



agency define a data variable in a way which corresponds to the



concepts



 






 



 



 



 



 



they use it as an indicator for; as a practical matter, they have



to assume such a correspondence in any event.



     The subcommittee located three reports and conducted one



interview to gauge the effect of revisions on causal analysts. 



They typically need accurate time-series data.  In most cases, they



cannot afford to study the discrepancy between preliminary and



revised figures; therefore, it is important for them to have access



to the best possible prediction of what the final, revised figure



will be if they use anything but final data at all.  Indications of



the likely error can help them decide whether to include recent



data at all in their analysis.  Causal analysts are less likely to



be policymakers than are monitors, but the products of their work



can be important to the policymaker; therefore, more consideration



of their needs may be warranted.  Fortunately, most analysts have



access to computers; thus more frequent revisions may be made



available to them, either in tape or databank form without



necessitating multiple publications or press releases.  Private



databank services have recently begun to offer on-line interactive



retrieval to the mass market; well planned cooperation with such



services could relieve the government of much of the labor involved



in disseminating revisions, and speed up the distribution process.



     Experiments with electronic dissemination by the government



have sometimes produced poor results in the past.  User costs of



obtaining data have sometimes increased, especially when analysts



need access to only a small set of variables (e.g., U.S. Gross



National Product by Year).  However, technology has changed rapidly



in this area, and, if barriers to interagency cooperation and
government/industry cooperation can be overcome, it may be possible
to reduce the costs to users.  (Dollar cost and the cost in terms of



user effort both need to be considered.)  Where large databases are



being revised, or where many users need simultaneous access to data



from different agencies, electronic dissemination may become



cheaper to the user and is preferable to not publishing the latest



estimates.



     It is important, however, that changes in dissemination policy



be analyzed together with agencies' policies on computer use on a
creative and government-wide basis, so as to ensure that future
user costs are reduced as much as possible.



     Analysts typically use statistical or "econometric" methods



which assume that the data are "clean"; some degree of inaccuracy



is acceptable, but it is important that the inaccuracy be random. 



Unlike monitors, analysts are often able to analyze seasonal



factors themselves if given accurate unadjusted data.



 



 



Benchmarking



 



     The subcommittee spent considerable time discussing the



reasons for benchmarking, and the problems it presents.  For users



such as monitors, the goal is simply to minimize error; to achieve



this goal, one's estimate should account for all relevant



information, including both the original unadjusted data and other



sources (benchmarks).  However, it is not obvious how best to do



this, and current methods are diverse and variable in the degree of



theoretical sophistication.  For analysts, it may be more important



to preserve the randomness of the error rather than to reduce its



size, so as to ensure the validity of normal analytic procedures



and avoid systematic biases. To achieve this goal one would want to



publish "clean" data series, with a minimum of benchmarking or of



other revisions which introduce systematic alterations of the



original data.  To compromise between these two types of use, one



might make the "clean" series available on tape or in databank form



in cases where one cannot afford to publish both. For some users,



benchmarking may create a misleading impression of consistency if



the user is not aware that the original unadjusted measurements



from different sources were actually in disagreement with each



other.  Related to this is the problem of whether, or how, to



"smooth" data when major changes in definition have changed the



numbers drastically.



     Benchmarking is used to remove bias that has accumulated over



time.  For example, if an annual survey has drawn from a frame



which is updated only at ten-year intervals, then deterioration of



the frame may lead to a growing systematic bias.  While the optimal



way to correct for this bias is unknown (despite some exploratory



research), the usual straightline adjustment used in "benchmarking"



may be better than nothing.  Thus, benchmarking may lead to less



systematic bias, and "cleaner" data at times.
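     The usual straightline adjustment can be made concrete with a
short Python sketch (the numbers are invented, and actual agency
procedures vary): the discrepancy revealed at the benchmark is
assumed to have accumulated linearly, so each intervening period
receives a proportionate share of the correction.

     # Straight-line benchmark adjustment: phase the benchmark
     # discrepancy into the series linearly, so the adjusted series
     # hits the benchmark value in the final period.

     def straightline_benchmark(series, benchmark):
         n = len(series)
         gap = benchmark - series[-1]
         return [x + gap * (i + 1) / n for i, x in enumerate(series)]

     monthly = [100.0, 102.0, 104.0, 106.0]   # survey-based estimates
     print(straightline_benchmark(monthly, 110.0))
     # -> [101.0, 104.0, 107.0, 110.0]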



     Unfortunately, the subcommittee did not have a chance to study



the problem of updating frames.  Many sample surveys, based on



response by mail, telephone or interview, come from frames based on



administrative records.  Thus, it may be possible to minimize the



degree of benchmarking by updating frames more often.  A more



expensive possibility might be to take larger surveys.  In some



cases (especially with monthly series and annual frames) sample



deterioration rather than frame deterioration may be the problem;



in such cases, sample renewal and related procedures may minimize



the systematic bias, and minimize the degree of benchmarking needed



for a "clean" database.  It seems likely that sample deterioration,



like missing value imputation, is commonly handled via a diversity



of informal procedures, despite the possibility of more rigorous



statistical tools.



 



 



Indication of Bias.



     The former OFSPS Directive no. 3 states that adjustments for



bias in the preliminary figures should be made, and that
preliminary figures should be published alongside their revisions.



While a few of the agencies do publish both figures, the latter



guideline was opposed vigorously.  Given that several revisions of



a series are often necessary, publications might become far more



complex, confusing and also more expensive if the guidelines were



followed literally.  In press releases, however, it may be
reasonable to ask that the initial preliminary figure be mentioned



whenever a revision is announced.  It is important that the



preliminary figure cited correspond exactly to the revision (e.g.,



they refer to the same month), because citation of other



preliminary figures may confuse the reader; for example, if
variable X grows by one percent per month, and its revisions add
one percent to the preliminary figure, this month's preliminary
figure may equal the revision of last month's data exactly even



though there is significant revision error.
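     A short numeric sketch of this pitfall, in Python, with
invented figures:

     # True values grow one percent per month; each preliminary figure
     # runs one percent low and is later revised up by one percent.
     true_jan, true_feb = 100.0, 101.0
     revised_jan = true_jan              # January after its revision
     prelim_feb  = true_feb / 1.01       # February's preliminary: 100.0

     # Comparing February's preliminary with January's revision suggests
     # zero growth, although the series actually grew one percent.
     print(prelim_feb, revised_jan)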



     In principle, it was agreed that users actually



 






 



 



 



 



 



need an indication of expected bias and of random revision error. 



To do this professionally would require an effort to develop time-



series models to predict revised values as a function of



preliminary figures and previous data.  This would cost more
resources; however, by reducing the size of subsequent revisions,
it might allow a reduction in subsequent expenses in publishing and



announcing multiple revisions.  Also, it is unclear what fraction



of users would still want access to adjusted preliminary figures. 



In theory, agencies could be given the freedom to pick a very



simple model (e.g., normally distributed revisions), if they were



willing to accept the need to then publish a larger standard error.
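     A minimal sketch of such a model, in Python: an ordinary
least-squares fit of final figures on preliminary figures (all
numbers invented), yielding both an adjusted preliminary value and
a standard error for the expected revision.

     # Regress past final figures on their preliminary versions, then
     # use the fit to adjust a new preliminary figure and to attach a
     # standard error to the expected revision.

     preliminary = [100.0, 103.0, 98.0, 105.0, 101.0]
     final       = [101.2, 104.1, 99.3, 106.0, 102.4]

     n  = len(preliminary)
     mx = sum(preliminary) / n
     my = sum(final) / n
     sxy = sum((x - mx) * (y - my) for x, y in zip(preliminary, final))
     sxx = sum((x - mx) ** 2 for x in preliminary)
     slope     = sxy / sxx
     intercept = my - slope * mx

     residuals = [y - (intercept + slope * x)
                  for x, y in zip(preliminary, final)]
     std_error = (sum(r * r for r in residuals) / (n - 2)) ** 0.5

     new_preliminary = 102.0
     print(intercept + slope * new_preliminary, std_error)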



 



 



Reduction of Revision.



     The subcommittee discussed at length the possibility of



reducing costs and user confusion by reducing the number of



revisions.  The most promising approach seems to be a reduction of



the number of scheduled revisions.  Also, benchmarking, seasonal



readjustment and historical publication of late revisions can be



scheduled simultaneously.



     One initial suggestion was to establish a cutoff on the size of



changes: in other words, a revised number would be published only



if it differed from the previous version by more than the cutoff. 



This suggestion was not popular.  Agencies typically schedule a



complete calculation of revisions, and publication costs depend on
tables rather than individual numbers; deleting half of the numbers
from a table at random would not reduce publication expense.  In any event,



adhering to a fixed schedule makes it possible for users to know



they have the latest revision without extensive checking.



Furthermore, agencies in the United States prefer to publish



statistics and revisions on a preannounced, regular schedule,



because this reduces the fear that political factors might bias the



timing decisions.



     Another suggestion was to relax some of the tight deadlines. 



If the expected error in a preliminary figure exceeds the month-to-



month fluctuation, it may be a waste of money to publish it; it may



also mislead the public.  OFSPS Directive no. 3 endorses this view;
however, without a clear indication of how early is too early,
agency policies may not change.  One possibility is simply to



require that agencies estimate the expected revision error



rigorously and that the "preliminary figure" not be published if



the random component of this error exceeds the mean period-to-



period fluctuation. In other words, if this inequality holds over a



significant period of time, the schedule should change so that the



first scheduled revision now becomes the first published number.
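     A minimal sketch of this test, in Python (variable names and
data invented): publish the preliminary figure only if the random
component of its revision error does not exceed the mean
period-to-period fluctuation.

     # revision_errors: past (final - preliminary) differences, with any
     # systematic bias already removed; series: recent final values.
     def should_publish_preliminary(revision_errors, series):
         n = len(revision_errors)
         random_error = (sum(e * e for e in revision_errors) / n) ** 0.5
         changes = [abs(b - a) for a, b in zip(series, series[1:])]
         return random_error <= sum(changes) / len(changes)

     print(should_publish_preliminary([0.4, -0.6, 0.5],
                                      [100.0, 102.0, 101.0, 103.0]))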



     Likewise, after the first or second monthly (periodic)



revision of a number, no more revisions need be published on paper,



or released to the press,  until the usual consolidated time-series



publications (e.g., annual review) are printed.  Such a policy



would not preclude exceptions for unusual circumstances. The



rationale for the policy is that monitors are likely to lose



interest after three months, while analysts can get the revisions



from databanks.  Updated tapes or databanks should still be



provided; if data are well managed within an agency, this should



not be expensive.  With some data series, however, analysts make



direct use of the printed data (perhaps because electronic



distribution is not fully available yet), and the cost of



publishing an updated time-series is relatively small; such series



should be treated as an exception.



 



 



Benchmarking and Seasonal Analysis.



     The subcommittee strongly agrees with the OFSPS Directive that



benchmarking and seasonal analysis should be consolidated, for



reasons of accuracy as well as expense.  However, we have not



examined present practices or their implications, as they relate to



this guideline.  In some cases, seasonal readjustments can be



performed sooner than benchmarking, as actual data become available



to replace the X-11 projections of the seasonal factors.  The



development of better time-series models to make these projections
could reduce the size of the correction, however, so that a delay
in the revision would be more acceptable.  The subcommittee notes
that there is important research well underway to try to improve



upon X-11 seasonal adjustment.  This too might reduce the need for



revision, but it is too early to be sure.  Preliminary studies



suggest strongly that concurrent seasonal adjustment, which



requires less revision, is a viable alternative to present



procedures.



 



 



IV.  RECOMMENDATIONS.



 



     The subcommittee found that it was in general agreement
with Directive no. 3, but that it would strengthen some of the
guidelines.  It formulated eight recommendations as follows:



 



     1.  Agencies should be required to maintain statistical models



(however simple) to determine whether bias has been removed and to



compute the standard error of revisions for all published (printed)



series.  The standard errors should be published along with all



preliminary figures.  This should override any need to publish



revised and preliminary figures together, except possibly in press



releases.



     2.  Schedules for data release and revision should continue to



be regular and fixed in advance.  Schedules should be adjusted and



consideration given to deleting versions published so early that the
standard error of revision (as in recommendation 1) exceeds the period-to-



period fluctuations.  Any such changes of schedule should be



subject to the joint agreement of producing and using agencies.



This recommendation should not be construed to mean that an



aggregate figure should be delayed when its components are not



ready for publication.



     3. No more than three consecutive monthly versions of the same



statistic should be scheduled for publication within a year (not



counting revisions for annual or less frequent publications).  This



does not obviate the need to disseminate tapes and databanks



containing the latest version, or to publish the revised time-



series when historic publications are printed.



     4.  As in the OFSPS Directive, benchmarking and seasonal



readjustment should be made simultaneously.



     5.  Resources should be made available for research into the



impact of benchmarking and ways of minimizing it.  Going beyond the



benchmark itself, the interpolation and extrapolation procedures



also need serious study.  This should include formal study of



alternative sample designs, frame updating procedures, and data



estimation.



 






 



 



 



 



 



     6.  Resources should be made available for more research into



the process of imputation, throughout government agencies.



     7.  Mechanisms are needed to help agencies better understand



and respond to the needs of users of various types.  Some users



want the most recent value in as short a time-frame as possible;



others require extended time-series.  The cost-effective approach
to meeting these needs, from the point of view of both producers
and users, may require dissemination not only through printed



publications, but also through mechanisms such as computer data



networks.



 



     8.  Where possible, better seasonal adjustment models should



be developed so as to minimize the revision of seasonal factors,



and make a less frequent revision schedule more acceptable.



 



 



Availability of Further Detail.



     Copies of the questionnaire and tabulations of responses are



available on request from OMB, Regulatory and Statistical Analysis



Division, Washington, D.C. 20503 or from EIA, Office of Statistical



Standards,  Washington, D.C. 20585.



 






 



 



 



 



 



 Reports Available in the Statistical Policy Working Paper Series



 



1.   Report on Statistics for Allocation of Funds



     GPO Stock Number 003-005-00178-6, price $2.40



 



2.   Report on Statistical Disclosure and Disclosure-Avoidance



     Techniques



     GPO Stock-Number 003-005-00177-8, price $2.50



 



3.   An Error Profile: Employment as Measured by the Current



     Population Survey



     GPO Stock Number 003-005-00182-4, price $2.75



 



4.   Glossary of Nonsampling Error Terms: An Illustration of a



     Semantic Problem in Statistics (A limited number of copies are



     available from OMB.)



 



5.   Report on Exact and Statistical Matching Techniques



     GPO Stock Number 003-005-00186-7, price $3.50



 



6.   Report on Statistical Uses of Administrative Records



     GPO Stock Number 003-005-00185-9, price $5.00



 



7.   An Interagency Review of Time-Series Revision Policies (A



     limited number of copies are available from OMB.)



 



 



Copies of these working papers, as indicated, may be ordered from



the Superintendent of Documents, U.S. Government Printing Office,



Washington, D.C. 20402.



 



 



 
