WRITTEN TESTIMONY OF


BRUCE B. HICKS

DIRECTOR, AIR RESOURCES LABORATORY

NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION

DEPARTMENT OF COMMERCE


BEFORE THE

SUBCOMMITTEE ON NATIONAL SECURITY, EMERGING THREATS, AND INTERNATIONAL RELATIONS

COMMITTEE ON GOVERNMENT REFORM

U.S. HOUSE OF REPRESENTATIVES


June 2, 2003


Good afternoon, Mr. Chairman and Members of the Subcommittee. My name is Bruce Hicks, Director of the Air Resources Laboratory of the National Oceanic and Atmospheric Administration (NOAA). I have been actively involved in studies of the transport and diffusion of pollutants in the atmosphere for more than 40 years, with research experience in Australia and at several US laboratories. I have been with NOAA since 1980. I recently served as the co-Chairman of the Joint Action Group for the Selection and Evaluation of Atmospheric Transport and Diffusion (ATD) Models, convened by the Office of the Federal Coordinator for Meteorological Services and Supporting Research, most commonly known as the Office of the Federal Coordinator for Meteorology (OFCM). Following the events of September 11, 2001, the Federal Coordinator for Meteorology formed the Joint Action Group to include researchers, modelers, and user representatives from all Federal agencies actively employing atmospheric dispersion models for emergency response applications. The Federal Coordinator charged the Joint Action Group with the responsibility to (1) review the ATD modeling systems currently used by the Federal agencies, (2) conduct a preliminary analysis of gaps in understanding of the processes on which the modeling systems are constructed, (3) determine which operational ATD models are appropriate for use in addressing selected scenarios, (4) recommend research and development needs, and (5) review model evaluation procedures. I have been asked to present some views regarding the current state of the science in the modeling of atmospheric dispersion. It is my pleasure to do so. Three diagrams are appended to illustrate some of the important points that I would like to make.


Concerns about atmospheric dispersion and weapons of mass destruction date back to the First World War and trench gas warfare. In the 1950s, the emphasis shifted to radioactive fallout. In the 1960s and 70s, the focus became hazardous chemicals. The accidents at Three Mile Island (1979) and Chernobyl (1986) then renewed attention to accidental radioactive releases, and in the 1980s and 90s smoke and other air pollutants grew to be priority issues. Today, the main interest is in emergency response and planning. It is a major goal of NOAA to provide forecasts to protect the public; forecasts of atmospheric dispersion are among the capabilities we provide.


The modeling methods now commonly in use were developed by a small cadre of scientists, many of whom worked for my laboratory. As time progressed, the dispersion forecasting methodologies improved, partially due to the growth in computer power but also partially because of a slowly improving understanding of the atmospheric processes that cause dispersion to occur. The major processes are transport and diffusion.


The performance of dispersion models is assessed using tracers: either a trace gas that is intentionally released, or a tracer “of opportunity,” a substance released into the air by an accident or some other event that can be measured downwind. However, the numerical comparison of model predictions with observed field data provides only a partial means for assessing model performance.
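
To make the nature of such comparisons concrete, the sketch below (my illustration, not an operational evaluation code) computes two statistics widely used in the dispersion community, the fractional bias and the fraction of predictions within a factor of two of the observations, for a handful of hypothetical paired values.

```python
# A minimal sketch of common model-evaluation statistics: fractional bias
# (FB) and the fraction of predictions within a factor of two (FAC2).
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2*(mean_obs - mean_pred) / (mean_obs + mean_pred); 0 is unbiased."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def fac2(obs, pred):
    """Fraction of paired values where 0.5 <= pred/obs <= 2.0."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

# Hypothetical paired tracer samples (arbitrary units), for illustration only.
observed  = [1.2, 0.8, 3.5, 0.4, 2.1]
predicted = [1.0, 1.1, 2.9, 0.9, 1.8]
print(f"FB   = {fractional_bias(observed, predicted):+.2f}")
print(f"FAC2 = {fac2(observed, predicted):.2f}")
```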


The Chernobyl nuclear accident is an example where dispersion models were used in real time for an unfolding emergency, and were tested against the data sets that were collected. The results showed that many dispersion forecasts were quite deficient. The World Meteorological Organization concluded that there was a need for more organized provision of dispersion forecasts in the future, and hence set up a small network of internationally recognized dispersion forecast providers. There are now seven of these Regional Specialized Meteorological Centers, distributed globally: Toulouse, France; Bracknell, England; Montreal, Canada; Obninsk, Russia; Beijing, China; Melbourne, Australia; and Camp Springs, MD, USA. These are the internationally recognized centers of excellence. In practice, Montreal serves as a backup for the US capability, and vice versa.


The same dispersion model underpins the US, Australian, and Chinese national systems: the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. Over the years, HYSPLIT has been extensively tested, first in a series of long-range tracer studies using perfluorocarbon and Krypton-85 gases, and second in a variety of opportunistic studies, for example of smoke from fires. Perfluorocarbons are variants on the gases used in refrigeration; they can be measured in exceedingly small concentrations. The field tests in which these tracers were used covered much of the eastern USA. The Cross-Appalachian Tracer Experiment (CAPTEX) was conducted in 1983, following some initial tests in 1980. The larger-scale Across North America Tracer Experiment (ANATEX) was conducted in 1987. The first diagram attached shows how a puff of dispersing material spreads as it is transported across the eastern USA over a four-day period. These field studies were conducted by my laboratory, under sponsorship of the Department of Defense and the Department of Energy. Following the success of these field evaluations, HYSPLIT was adopted as the standard dispersion forecasting tool used by the National Weather Service.
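
The Lagrangian approach that HYSPLIT's name refers to can be illustrated with a deliberately minimal sketch: many notional particles are carried along by a mean wind while taking random steps that represent turbulent diffusion. This is only the broad idea, not HYSPLIT's actual algorithm, and the wind and diffusivity values below are invented for the demonstration.

```python
# A minimal Lagrangian particle random-walk sketch: advection by a mean wind
# plus random displacements consistent with an assumed eddy diffusivity.
import numpy as np

rng = np.random.default_rng(0)

n_particles = 5000
dt    = 600.0                # time step, s
hours = 96                   # a four-day transport period
u, v  = 8.0, 2.0             # assumed mean wind components, m/s
K     = 5000.0               # assumed horizontal eddy diffusivity, m^2/s

x = np.zeros(n_particles)    # east-west position, m
y = np.zeros(n_particles)    # north-south position, m

step_sd = np.sqrt(2.0 * K * dt)   # random-walk step size implied by K

for _ in range(int(hours * 3600 / dt)):
    # Transport by the mean wind plus a random turbulent displacement.
    x += u * dt + rng.normal(0.0, step_sd, n_particles)
    y += v * dt + rng.normal(0.0, step_sd, n_particles)

# The growing spread of the particle cloud is the modeled diffusion.
print(f"centroid after {hours} h: ({x.mean()/1e3:.0f} km, {y.mean()/1e3:.0f} km)")
print(f"cloud spread (1 sigma)  : {x.std()/1e3:.0f} km by {y.std()/1e3:.0f} km")
```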


There are 122 Weather Forecast Offices (WFOs) of the National Weather Service, nationwide. In the event of an incident requiring a forecast of dispersion, each of these offices is prepared to provide dispersion predictions, out to at least two days. The forecasts are generated at the high-performance computing facility at the National Centers for Environmental Prediction (NCEP), which operates on a 24 x 7 basis, as do the WFOs. WFO forecasters are responsible for dissemination of dispersion output to state and local emergency managers, thus taking advantage of field forecaster experience and understanding of local weather issues. Hence, it is quite intentional that the dispersion forecasts are vectored through local regional forecast offices. However, the field forecasters are not dispersion experts; they rely on support from NOAA scientists to answer any detailed questions that might arise.


The accuracy of dispersion forecasts depends on the accuracy with which the meteorological wind fields are known. Operational weather forecast guidance is available at 12 km resolution, and NWS forecasters are now beginning to provide gridded forecast wind fields at higher resolution by means of the National Digital Forecast Database servers. The NWS maintains a large network of observing stations, at locations selected to provide nationwide coverage, including surface stations at airports (ASOS sites) and throughout communities (the modernized cooperative observer network) and many others; it also ingests a wealth of data from other federal and private sector partners. The NWS routinely runs advanced analysis systems that interpret these observations and drive the Nation’s numerical weather prediction models. The HYSPLIT model is designed specifically to be driven by a comprehensive set of numerical prediction model data. HYSPLIT is operationally integrated with the NWS’s highest-resolution weather prediction models, and takes advantage of greater spatial and temporal resolution within the model stream, at a data density higher than is generally practical for rapid external distribution. Dispersion predictions for selected locations across the nation are made with updated weather forecast data four times daily. For emergency events or preparations, dispersion predictions are run on request at 12 km resolution, and results are generally available within 15 minutes. Predictions at 4 km resolution can also be run on demand; this increase in resolution requires much greater processing time, and results are available within two hours. HYSPLIT is also run on remote computer systems; however, these remotely run applications rely on reduced-resolution weather data to drive the dispersion calculation, because of the huge amount of numerical information involved. Since pushing this volume of data through the Internet or via some dedicated communications system takes time, today’s dispersion forecasts are generated either with HYSPLIT integrated with the high-resolution weather models on the NWS mainframe computers, or by lower-resolution (40 km) data driving HYSPLIT on satellite computer systems. All of the Weather Forecast Offices have access to both kinds of product.
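
The difference between the 15-minute and two-hour turnaround follows from simple scaling arithmetic. The back-of-envelope sketch below is my illustration, not an NWS figure: refining a grid from 12 km to 4 km spacing multiplies the number of horizontal grid points by nine, and numerical stability constraints typically force the time step down by the same factor of three, so the computation grows roughly 27-fold. The exact ratio depends on the domain, the number of vertical levels, and the model itself.

```python
# Rough scaling of computational cost with horizontal grid resolution.
def relative_cost(coarse_km: float, fine_km: float) -> float:
    """Approximate 2-D cost ratio: (coarse/fine)^2 grid points
    times (coarse/fine) more time steps (a CFL-type constraint)."""
    r = coarse_km / fine_km
    return r ** 3

print(f"12 km -> 4 km: ~{relative_cost(12.0, 4.0):.0f}x the computation")
# Output data volume scales with grid points alone: (12/4)^2 = 9x per level.
```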


The coarser product is routinely made available via the Internet to users who are registered by scientists of my laboratory. This is the Real-time Environmental Applications and Display sYstem (READY), used routinely by over 1500 registered users for accessing and displaying meteorological data and running trajectory and dispersion models on the web server of my laboratory. The READY system brings together dispersion models, graphical display programs, and textual forecast programs generated over many years into a form that anyone can use, although its primary audience is atmospheric scientists. The products are used, for example, to guide response activities following industrial accidents and forest fires. The data have been used by every long-distance manned balloon venture so far. The READY system is widely known and routinely employed; the evidence available to us indicates that it is the major outlet for dispersion products provided by the federal government. The models that now make up READY were central in the activities addressing the Kuwait oil fires in 1990/1991.


The Khamisiyah experience was quite revealing, and is worthy of some direct attention. In its scrutiny of the subject, the Office of the Special Assistant for Gulf War Illnesses (OSAGWI) elected to use a small number of dispersion models, mainly from within the DoD system. There were few meteorological observations available, and hence the dispersion codes were driven by exceedingly sparse and sometimes questionable information. It is not surprising that the dispersion systems yielded different answers. Each of these answers represented a good approach to the problem, but there was no way to weight or rank these alternative depictions of the plume from Khamisiyah. Consequently, it was decided to err on the side of caution and to assume that every prediction was equally likely. This is far from an optimal way to proceed, but in the absence of meteorological observations a better approach would be hard to conceive. There were few data, and there was no basis to select one model conclusion over another. For remote locations like the Khamisiyah case, the situation today has not improved greatly. We are still at the mercy of the meteorological forecasts, and if there are no observations to drive the forecasts, then these forecasts are highly vulnerable. The community has since adopted the concept of ensemble modeling, in which many models are used to address the situation and the answers are derived from analysis of all of their products. This is much like what was done for Khamisiyah, but on a larger scale. The DoD has a good suite of models, tailored for combat applications, and they have been upgraded since the Persian Gulf War. However, they will continue to suffer from the lack of meteorological observation data, which can limit their effectiveness.
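
The ensemble idea can be made concrete with a short sketch. Assuming several models have produced plume predictions on a common grid, and with no basis to prefer one over another, each member is treated as equally likely; the analysis then reports both the ensemble average and the degree of agreement among the members. The arrays below are random stand-ins for real model output.

```python
# A minimal equal-weight ensemble analysis over a common concentration grid.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in output: 4 models x 50 x 50 grid of surface concentrations.
members = rng.lognormal(mean=0.0, sigma=1.0, size=(4, 50, 50))

threshold = 2.0                                 # hypothetical concern level
ensemble_mean = members.mean(axis=0)            # equally weighted average
agreement = (members > threshold).mean(axis=0)  # fraction of members flagging

# Cells where every member agrees the threshold is exceeded are the most
# defensible basis for protective action; cells where members disagree
# flag the forecast uncertainty itself.
print("cells all members flag:", int((agreement == 1.0).sum()))
print("cells any member flags:", int((agreement > 0.0).sum()))
```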


In North America, we are not short of data, although we still need to learn how to use the available information optimally. The shortcomings that caused concern in the Khamisiyah case should not be seen as a basis for concern about North American situations.


Among other products, READY maintains a continuously updated plume forecast for every nuclear power installation in North America. In the event of a release of radioactivity from any nuclear power plant, there is no need to start a dispersion forecast. It is always immediately available. All that is needed is password access to the relevant READY product.

 

The discussion so far relates to long-range transport and diffusion of pollutants. However, the focus of current concern is on places where people live or congregate, or where there are buildings or other structures or institutions of national importance. Attention is mainly on urban areas and cities. NOAA, in partnership with EPA, also provides a local dispersion capability with the CAMEO/ALOHA system. CAMEO is the Computer-Aided Management of Emergency Operations system. The near-field atmospheric dispersion model provided in conjunction with CAMEO is ALOHA, the Areal Locations of Hazardous Atmospheres model. First responders and emergency planners use CAMEO to plan for and respond to chemical emergencies. The system integrates a chemical database and a method to manage the data, an air dispersion model, and a mapping capability. Responders can use CAMEO to access, store, and evaluate information critical for developing emergency plans. ALOHA allows the user to estimate the downwind dispersion of a chemical cloud based on the toxicological and physical characteristics of the released chemical, atmospheric conditions, and the specific circumstances of the release. ALOHA also assists the user in estimating the amount of toxic chemical entering the atmosphere by modeling a variety of release scenarios, including discharges from tanks or pipelines as well as evaporating puddles. ALOHA makes use of local wind speed and direction data, from either manual observations or automated measurements.
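
The kind of near-field estimate such a tool produces can be illustrated with the textbook Gaussian-plume formula for a continuous ground-level release. The sketch below is only an illustration of the general approach, not ALOHA's actual code, and the spread coefficients are rough assumed values for a neutral atmosphere.

```python
# A self-contained Gaussian-plume sketch for a continuous ground-level
# release, evaluated along the plume centerline at ground level.
import numpy as np

def centerline_concentration(Q, U, x, a=0.08, b=0.06):
    """
    Ground-level centerline concentration (g/m^3) at downwind distance x (m).
    Q: emission rate (g/s); U: wind speed (m/s).
    sigma_y = a*x and sigma_z = b*x are crude linear spread rates under an
    assumed neutral stability class.
    """
    sigma_y, sigma_z = a * x, b * x
    return Q / (np.pi * U * sigma_y * sigma_z)

# Hypothetical release: 100 g/s into a 4 m/s wind.
for x in (100.0, 500.0, 1000.0):
    c = centerline_concentration(Q=100.0, U=4.0, x=x)
    print(f"x = {x:6.0f} m  ->  C = {c:8.5f} g/m^3")
```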


NOAA, in collaboration with EPA, has delivered more than 30,000 copies of CAMEO and ALOHA to users across the country in the last year, providing communities and first responders with a tool that helps them prevent, prepare for, and respond to local emergencies. Over 1000 local responders will receive CAMEO/ALOHA training during the next year as part of joint NOAA, EPA, and Office of Domestic Preparedness efforts.


It is for cities and urban areas that the greatest challenge exists. Cities and urban areas influence wind fields considerably, in ways that the standard monitoring stations of the NWS do not yet detect well. These monitoring stations are typically located at airports, but the areas of main concern are usually quite distant from the airports. We need to consider many possible emergency scenarios, and we must prepare for them with the fervent hope that our preparations will never be tested. To these ends, we need modeling systems that can describe the dispersion of trace gases, biological agents, and radioactivity through the air over distances intermediate between the regional scales of the HYSPLIT variety of dispersion models and the near-field scales of the ALOHA variety of capabilities. In reality, wind fields are affected by the presence of buildings in ways that are not yet fully understood. Consequently, we tend to rely on the acquisition of actual data rather than on predictions of wind fields from some preferred numerical model. It is the wind fields that determine where released materials will drift, and it is atmospheric turbulence that controls the rate at which dilution occurs. Both are strongly affected by the presence of buildings or other structures, in ways that often appear quite random.


The nation has many atmospheric dispersion models that purport to predict the dispersion of hazardous materials released into the urban atmosphere. The capabilities are widespread across the federal agencies, state and local authorities, academia, and the private sector. Every one of these systems has some special quality that makes it unique. The challenge now facing the atmospheric dispersion community is to determine which subset of the many dispersion systems is best suited to the latest challenges. In the OFCM report, “Atmospheric Modeling of Releases from Weapons of Mass Destruction: Response by Federal Agencies in Support of Homeland Security,” the Joint Action Group identifies 29 modeling systems running 24 x 7 within the federal system. Of these, seven systems are used nationwide, including HYSPLIT and ALOHA. These are roughly equally split between the military and civilian agencies.


Sorting out which might be the best proved impossible, because each has special strengths to address the particular issues for which it was initially intended, and each suffers from specific weaknesses, the most important of which were documented as research and development needs in the report. The OFCM report stated, therefore, that there is no existing “best capability” suitable for widespread application. Nor is it likely that any such generalized model will be developed in the near future. Instead, we need to learn how to access the suite of capabilities now in use, and to select from it the capabilities best suited to situations that may arise. The margin of error in the models can be significant and is dependent on the scenario and the availability of reliable meteorological input data. The OFCM continues to work on these issues through its Federal coordinating infrastructure and in collaboration with the academic community and private sector through workshops and forums.


There is a practical reality that complicates the situation substantially. Most available modeling systems have been developed on the basis of understanding generated in field studies over grass or desert, completely in the absence of buildings or other large surface structures. The application of current concern centers on cities and urban areas, where buildings cause changes in wind fields that are not yet well understood. There are research programs presently underway to investigate the dispersion characteristics of urban areas. Recent field studies in Salt Lake City, for example, have yielded a wealth of new information. However, we do not yet know how to apply, with confidence, results obtained for one specific urban area to another. Consequently, there is a strong need to obtain relevant data, based on measurements in the situations of actual importance.


This is the rationale for the design of DCNet, a program to provide Washington with the best possible basis for dispersion computation, as is needed both for planning and for possible response.


The problem we face is complex. The winds within a city sometimes bear little resemblance to those of the surrounding countryside. The emphasis in weather forecasting is on larger-scale patterns, and therefore observations of wind and temperature in cities below the level of tall buildings have not been weighted heavily in weather forecast models. For small, street-level releases of a contaminant, these local-scale conditions are dominant, especially within the first minutes to hours, until entrainment above the buildings becomes significant. The presence of buildings and the “street canyons” separating them often causes winds that are almost random and exceedingly difficult to predict or even describe. The flow above the “urban canopy” is far more amenable to description in terms of larger-scale meteorology. It is convenient to think in terms of two regimes: the street-canyon flows beneath the urban canopy and the “skimming flow” above it. Washington presents an excellent testbed for studies, because the urban canopy is well defined by the height constraint on its buildings. New York, by contrast, presents an opposite extreme: many of its buildings are not only very high, but their heights are quite variable.
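
The urban-meteorology literature offers a rough rule of thumb for when canyon-trapped flow takes over, based on the ratio of building height to street width. The sketch below encodes that classical classification; the threshold values are approximate figures from the literature, not part of this testimony.

```python
# Classical height-to-width (H/W) classification of street-canyon flow.
# Thresholds are approximate rules of thumb from the urban-flow literature.
def canyon_flow_regime(building_height_m: float, street_width_m: float) -> str:
    """Classify canyon flow by the height-to-width ratio H/W."""
    hw = building_height_m / street_width_m
    if hw < 0.3:
        return "isolated roughness flow (buildings act independently)"
    if hw < 0.7:
        return "wake interference flow (building wakes interact)"
    return "skimming flow (a trapped canyon vortex; flow above skims the canopy)"

# Washington-like geometry: modest, height-limited buildings.
print(canyon_flow_regime(35.0, 30.0))   # H/W ~ 1.2 -> skimming flow
```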


There are thus two major reasons to focus attention on the Washington metropolitan area. First, attention is needed because the attacks of September 11, 2001, demonstrated it to be a target. Second, the urban landscape lends itself to the application of new science, so that greatly improved capabilities are feasible. But there is a third reason that makes Washington so attractive. In 1983, a year-long study was conducted here, largely replicating the sort of situation that some people fear we might be confronting. In the 1983 study, minute amounts of harmless but very easily detected trace gases were released from a number of locations around the beltway. Several trace gases were used, all variants on the fluorocarbons used in refrigeration. This METRopolitan EXperiment (“METREX”) has provided a baseline of understanding not present anywhere else. METREX was a NOAA program, specifically designed to test how well dispersion models perform in an urban area like the District of Columbia. The news was not good: the predictions were very poor. But there was some good news as well: the models appear to describe the statistics of the behavior quite well. That is, they fail to reproduce the fine details of what is going on, but they succeed in describing the probability that some particular range of exposures will be encountered. Based on this experience, the current program addresses the statistical description of urban dispersion directly. The statistical approach, rather than focusing on predicting the most accurate snapshot of contaminant concentration at a specific time and place, better supports the goal of dispersion predictions: to help decision makers assess the likelihood that people will be harmed.
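
A short sketch shows what the statistical framing means in practice: given a sample of modeled exposures at a receptor (here random stand-ins, generated under a lognormal assumption that urban tracer data often roughly follow), the product is not a single concentration but the probability of exceeding levels of concern.

```python
# Exceedance probabilities from a sample of modeled exposures at one receptor.
import numpy as np

rng = np.random.default_rng(2)

# Stand-in sample of modeled exposures (arbitrary units), assumed lognormal.
exposures = rng.lognormal(mean=0.5, sigma=1.2, size=10_000)

for threshold in (1.0, 5.0, 20.0):
    p = (exposures > threshold).mean()
    print(f"P(exposure > {threshold:4.1f}) ~ {p:.3f}")
```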

    

With the experience of METREX behind them and the recognition that Washington is now an attractive target, NOAA scientists have deployed an array of meteorological stations in the downtown area. These stations report not only the wind speed and direction, but also the intensity of the turbulence. Sonic anemometry is used: sonic anemometers measure the speed of sound along three axes, and derive from these data the wind components along those axes with great accuracy and at high frequency; a measurement rate of ten times per second is typical. The instruments are mounted on 10 m towers, mostly on the tops of buildings where data on the skimming flow can be obtained. The second figure attached gives some details of the current deployment. One of the most visible installations can be seen on the roof of the National Academy of Sciences. Data are analyzed by computers on each tower and are transmitted to a central analysis location every fifteen minutes. The data already show the dangers inherent in assuming the relevance of nearby airport data. The third figure attached shows the differences in the distribution of wind speeds and directions across the downtown area. If airport data were used to address a dispersion situation on the Mall, the answers would be wrong.
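
As an illustration of what such 10 Hz records yield, the sketch below computes, from a synthetic 15-minute record, the quantities a station of this kind typically reports: the mean wind, the turbulence intensity, and the friction velocity derived from the vertical momentum flux. Real processing adds steps, such as coordinate rotation and quality control, that are omitted here.

```python
# Turbulence statistics from a (synthetic) 10 Hz sonic-anemometer record.
import numpy as np

rng = np.random.default_rng(3)

hz, minutes = 10, 15
n = hz * 60 * minutes                            # samples per averaging block

# Synthetic stand-ins for measured wind components (m/s).
u = 5.0 + rng.normal(0.0, 0.8, n)                # along-wind component
w = rng.normal(0.0, 0.4, n) - 0.05 * (u - 5.0)   # vertical, flux-correlated

up = u - u.mean()                                # turbulent fluctuations
wp = w - w.mean()

turbulence_intensity = up.std() / u.mean()       # sigma_u / U
u_star = np.sqrt(abs(np.mean(up * wp)))          # friction velocity from <u'w'>

print(f"mean wind            : {u.mean():.2f} m/s")
print(f"turbulence intensity : {turbulence_intensity:.2f}")
print(f"friction velocity u* : {u_star:.2f} m/s")
```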


For obvious reasons, the Washington downtown system is referred to as “DCNet.” It is proposed that the operation should be extended to cover the greater National Capital Region. The system is a demonstration of capabilities that now exist and are ready for deployment. The trial system enables a user to identify a source location with the click of a mouse and define the downwind area of potential high risk using observations from the DCNet system. There is no long wait involved; results can be generated almost instantaneously. In practice, this new generation of dispersion system relies on access to the best available weather forecast data as well as the information from dedicated arrays of sensors like DCNet. There are, of course, many other sources of meteorological information that could be accessed (highway sensors operated by Departments of Transportation, for example), and additional data are available from radars and other remote sensing sources. A challenge to the research community is to sort out how best to make use of data from all available sources.


It has already been emphasized that the main goal of DCNet is to refine our understanding of how hazardous trace gases and particles are dispersed across the kind of area where people work and live. To this end, the operational systems now being improved are viewed not as final developments, but as continuously evolving capabilities, to be upgraded as improved understanding warrants. A major concern is that an incorrect forecast could lead to decisions that do more harm than good. To demonstrate the accuracy of the forecasts, a new round of tracer studies will be required.


The Washington exercise is seen as a prototype of what could eventually be a nationwide program. Testing and development lie ahead, well before any decisions about wider deployment are made. In the meantime, the system now in place offers this area an unparalleled capability to plan for possible attacks, and to respond if one were to occur.


On the regional scale, weather forecasting and dispersion forecasting systems are becoming increasingly integrated. There are already model systems that combine the two, and it is the present intention to install one of these modeling frameworks as the nation’s premier forecasting tool in the near to intermediate future. This new model framework is known as the Weather Research and Forecasting (WRF) model. Once WRF is up and running, there will be no need for stand-alone dispersion codes that access meteorological data from elsewhere and compute dispersion from those data. It can be said that we are presently operating in a stop-gap mode while the National Weather Service, working with the Department of Defense and a variety of other agencies, refines its next generation of forecasting model. In the future, dispersion forecasts will be a far more routine product than they are at the moment. Moreover, these dispersion products will make full use of the remote probing database that is now becoming a mainstream part of the national meteorological observing system (such as advanced radar and sonic anemometry). Other agencies will be able to access the products using the real-time and streamlined communications systems on which the NWS relies. In the meantime, the NOAA operational systems already provide state-of-the-art forecasts of dispersion and are ready for refinement to address specific situations of special concern.


Mr. Chairman and Members of the Subcommittee, this concludes my testimony for today. Thank you for the opportunity to testify, and I would be happy to respond to any questions that the Subcommittee may have.