TESTIMONY OF

THOMAS R. KARL

DIRECTOR
NATIONAL CLIMATIC DATA CENTER
NATIONAL ENVIRONMENTAL SATELLITE, DATA, AND INFORMATION SERVICES
NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION

BEFORE THE
SUBCOMMITTEE ON OVERSIGHT AND INVESTIGATIONS
ENERGY AND COMMERCE COMMITTEE
UNITED STATES HOUSE OF REPRESENTATIVES

Subcommittee Topic:
Evaluating the US National Climate Change Assessment:
Do climate models project a useful picture of future climate?

July 25, 2002

Introduction
Good morning, Chairman Greenwood and members of the Subcommittee. I am Thomas R. Karl, Director of NOAA's National Climatic Data Center. I was invited to appear today because I was one of the three Co-Chairs of the Report of the National Assessment Synthesis Team (NAST).

I would like to begin by emphasizing that the reports of the National Assessment Synthesis Team are not a product of the U.S. Government, and they do not represent government policy. In fact, they have sometimes been quite controversial. The National Assessment Synthesis Team is an advisory committee chartered under the Federal Advisory Committee Act, and its reports are not policy positions or official statements of the U.S. Government. Rather, they were produced by selected members of the scientific community and offered to the government for its consideration.

The Synthesis Team was composed of individuals drawn from governments, universities, industry, and non-governmental organizations, and had responsibility for broad oversight of the National Assessment entitled "Climate Change Impacts on the United States — The Potential Consequences of Climate Variability and Change." The purpose of the Assessment was to synthesize, evaluate, and report on what we presently know – and don't know – about the potential consequences of climate variability and change for the United States in the 21st century. It reviewed the climate vulnerabilities of particular regions and sectors of the nation, and sought to provide a number of adaptation measures to reduce the risks, and maximize the potential benefits and opportunities, of climate change, whatever its cause. The National Assessment was conducted from 1997 to 2000 and was our first attempt to generate climate scenarios for various regions and sectors across the United States, which turned out to be a very challenging task. I am very pleased to have this opportunity to present testimony regarding the basis for the scenarios of 21st century climate used in the National Assessment.

As a basis for the National Assessment, and in the context of the uncertainties inherent in looking forward 100 years, the NAST pursued a three-pronged approach to considering how much the climate may change. The three approaches involved the use of: (1) historical data, to examine the continuation of trends or the recurrence of past climatic extremes; (2) comprehensive, state-of-the-science model simulations (though still with significant limitations), to provide plausible scenarios for how the future climate may change; and (3) sensitivity analyses, which can be used to explore the resilience of societal and ecological systems to climatic fluctuations and change. Of particular interest for this hearing is the second of these approaches, and that is where I will focus my remarks. As a preface, however, I note that the National Assessment rests on a combination of all three approaches.

Developing Model-based Scenarios for the 21st Century

Projecting changes in factors that influence climate
Because future trends in fossil fuel use and other human activities are uncertain, the Intergovernmental Panel on Climate Change (IPCC) has developed a set of scenarios for how the 21st century may evolve. These scenarios consider a wide range of possibilities for changes in population, economic growth, technological development, improvements in energy efficiency and the like. The two primary climate scenarios used in the National Assessment were based on a mid-range emission scenario used in the second IPCC report. This scenario assumes no major changes in policies to limit greenhouse gas emissions. Other important assumptions in the scenario are that by the year 2100:

• world population is projected to nearly double to about 11 billion people;
• the global economy is projected to continue to grow at about the average rate it has been growing, reaching more than ten times its present size;
• increased use of fossil fuels is projected to triple CO2 emissions and raise sulfur dioxide emissions, resulting in atmospheric CO2 concentrations of just over 700 parts per million; and
• total energy produced each year from non-fossil sources such as wind, solar, biomass, hydroelectric, and nuclear is projected to increase to more than ten times its current amount, providing more than 40% of the world's energy, rather than the current 10%.

There are a number of other important factors besides fossil fuel emissions that cause climate to change and vary. These were not part of the scenario used to drive climate change in the two primary models used in the National Assessment, because at the time of the National Assessment these simulations were not available. Figure 1 depicts the magnitude of these other climate forcings that were omitted from the emission scenario. Clearly, the two largest forcings are those related to increases in greenhouse gases and aerosols, both included in the two primary models used in the National Assessment. The addition of other forcings is an important consideration for the improvement of future assessments, for example the role of black carbon aerosols, and a more thorough treatment of land vegetative feedback effects, which become quite important on local and regional space scales compared to global scales, e.g., the urban heat island.

Which models to use?
The NAST developed a set of guidelines to aid in narrowing the set of primary model simulations to be considered for use by the Assessment teams. This helped ensure a degree of consistency across the broad number of research teams participating in the Assessment. These guidelines included various aspects related to the structure of the model itself, the character of the simulations, and the availability of the needed results. Specifically this meant that the models must, to the greatest extent possible:

• be coupled atmosphere-ocean general circulation models that include comprehensive representations of the atmosphere, oceans, and land surface, and the key feedbacks affecting the simulation of climate and climate change;

• simulate the evolution of the climate through time, from at least as early as the start of the detailed historical record in 1900 to at least as far into the future as the year 2100, based on a well-understood scenario for changes in atmospheric composition that takes into account time-dependent changes in greenhouse gas and aerosol concentrations;

• provide the highest practicable spatial and temporal resolution (roughly 200 miles [about 320 km] in longitude and 175 to 265 miles [about 275 to 425 km] in latitude over the central US);

• include the diurnal cycle of solar radiation in order to provide estimates of changes in minimum and maximum temperature and to be able to represent the development of summertime convective rainfall;

• be capable, to the extent possible, of representing significant aspects of climate variations such as the El Niño-Southern Oscillation cycle;

• have completed their simulations in time to be processed for use in impact models and to be used in analyses by groups participating in the National Assessment;

• be models that are well-understood by the modeling groups who participated in the development of the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) in order to ensure comparability between the US efforts and those of the international community;

• provide a capability for interfacing their results with higher-resolution regional modeling studies (e.g., mesoscale modeling studies using resolutions finer by a factor of 5 to 10); and

• allow for a comprehensive array of their results to be provided openly over the World Wide Web.

Including at least the 20th century in the simulation adds the value of comparisons between the model results and the historical record, and can be used to help initialize the deep ocean to the correct values for the present-day period. Having results from models with specific features, such as simulation of the daily cycle of temperature (which is essential for use in cutting-edge ecosystem models), was important for a number of applications that the various Assessment teams were planning.

At the time of the National Assessment only two models, the Canadian Climate Centre Model and the United Kingdom's Hadley Centre model, were able to satisfactorily meet these criteria. Today, however, if the Assessment were repeated with the same criteria, several more models would meet them, including models from efforts in the USA. Let me emphasize the importance of this, which represents another limitation of the National Assessment. In 1998 the Climate Research Committee (CRC) of the National Research Council, which I chaired, issued a report, Capacity of U.S. Climate Modeling to Support Climate Change Assessment Activities. While improvements in model capability have occurred during the past four years, key findings from the CRC report are worthy of note:

The CRC finds that the United States lags behind other countries in its ability to model long-term climate change. Those deficiencies limit the ability of the United States to predict future climate states … Although collaboration and free and open information and data exchange with foreign modeling centers are critical, it is inappropriate for the United States to rely heavily upon foreign centers to provide high-end capabilities. There are a number of reasons for this, including the following: (1) U.S. scientists do not necessarily have full, open and timely access to output from European models…. (2) Decisions that might substantially affect the U.S. economy might be made based upon considerations of simulations (e.g. nested-grid runs) produced by countries with different priorities than those of the United States.

Furthermore, the report noted, "While leading climate models are global in scale, their ability to represent small-scale, regionally dependent processes … can currently only be depicted in them using high-resolution, nested grids. It is reasonable to assume that foreign modeling centers will implement such nested grids to most realistically simulate processes on domains over their respective countries which may not focus on or even include the United States."

The use of observations
Observations were an essential part of developing climate scenarios for the 21st century in the National Assessment. Reliance on model simulations alone provides only a limited opportunity to investigate the consequences of climate variability and change. To minimize this limitation, in the National Assessment the historical record was used to help determine regional and sector-specific sensitivities to climate changes and variations of differing, but contextually realistic, magnitude.

The observations were also used to understand how the models simulated present and past climate (see Figure 2), and to correct a number of model biases. While climate models have shown significant improvement over recent decades, and the models used in the National Assessment were among the world's best, there were a number of shortcomings in applying the models to study potential regional-scale consequences of climate change. This is a fundamental limitation to the results of the National Assessment, and should be kept in mind. In the National Assessment, several methods were used in an attempt to address these problems. Most importantly, the output from the primary models (the Hadley and Canadian) for temperature and precipitation was passed through a set of standardization processing algorithms to re-calibrate the model simulations with the observations. This is especially important in areas of complex terrain, such as the mountainous regions of the West, where model resolution was insufficient to adequately resolve detailed small-scale climate characteristics. The processing procedure accounted for at least some of the shortcomings and biases in the models. So, the model scenario results used in the impact assessments were often adjusted to remove the systematic differences with observations that were present in the model simulations. Such a procedure is similar to what is now being implemented in daily weather forecasting, where raw model projections are not used directly; rather, the historical statistical and dynamical relationships between the weather model forecasts and actual observations are used to generate local weather forecasts. This adjustment process is fully described in the foundation report of the National Assessment.
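To make the nature of this re-calibration concrete, the following is a minimal sketch of one simple form of bias adjustment: removing the mean difference between a model's simulation of the observed period and the observations, then applying the same offset to the future run. The array names, numbers, and the simple "delta" method are illustrative assumptions for this testimony, not the Assessment's exact algorithms.

```python
# Minimal sketch of mean-bias removal ("delta" adjustment) against observations.
import numpy as np

def bias_adjust(model_hist, model_future, obs_hist):
    """Shift a future simulation by the model's mean bias over the
    historical period (axis 0 = time; remaining axes = grid points)."""
    bias = model_hist.mean(axis=0) - obs_hist.mean(axis=0)
    return model_future - bias

# Toy example: 30 years of annual-mean temperature at 4 grid points.
rng = np.random.default_rng(0)
obs_hist = 15.0 + rng.normal(0.0, 0.5, size=(30, 4))
model_hist = obs_hist + 2.0 + rng.normal(0.0, 0.5, size=(30, 4))  # warm bias
model_future = model_hist + 1.5                                   # projected warming

adjusted = bias_adjust(model_hist, model_future, obs_hist)
# After adjustment, the future run sits ~1.5 C above the observed mean,
# with the model's systematic 2 C warm bias removed.
print(adjusted.mean(axis=0) - obs_hist.mean(axis=0))
```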

In addition, some of the regional teams applied other types of "down-scaling" techniques to the climate model results in order to derive estimates of changes occurring at a finer spatial resolution. One such technique has been to use the global climate model results as boundary conditions for mesoscale models that cover some particular region (e.g., the West Coast with its Sierra Nevada and Cascade Mountains). These models are able to represent important processes and mountain ranges on finer scales than global climate models can. These small-scale simulations, however, have not been as well tested as global models and are very computer-intensive. It has not yet been possible to apply these techniques nationally or for the entire 20th or 21st centuries. With the rapid advances in computing power expected in the future, this approach should become more feasible for future assessments. To overcome the computational limitations of mesoscale models, some of the Assessment teams developed and tested empirically based statistical techniques to estimate changes at finer scales than the global climate models, and these efforts are discussed in the various regional assessment reports. These techniques have the important advantage of being based on observed weather and climate relationships, but have the shortcoming of assuming that the relationships prevailing today will not change in the future.
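As an illustration of the empirically based statistical approach, the sketch below fits a simple linear relationship between a coarse grid-cell value and a local station over the observed record, then applies it to model output. A single linear regression is an assumed stand-in; the techniques actually used by the Assessment teams were more elaborate, and they share the caveat just noted, that the fitted relationship is assumed to hold in the future.

```python
# Minimal sketch of empirical statistical downscaling via linear regression.
import numpy as np

rng = np.random.default_rng(1)
coarse = rng.normal(10.0, 2.0, size=200)                  # grid-cell temperature
station = 0.8 * coarse - 3.0 + rng.normal(0, 0.4, 200)    # colder mountain site

# Train the transfer function on the overlapping observed period.
slope, intercept = np.polyfit(coarse, station, 1)

# Apply it to (hypothetical) model grid-cell output for a future period.
model_coarse_future = np.array([11.5, 12.0, 13.2])
station_scale = slope * model_coarse_future + intercept
print(station_scale)  # downscaled station-level estimates
```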

Another type of tool developed for use in the sensitivity analyses was the statistical model or weather generator, used to calculate probabilities of unusual weather and climate events. These models enabled impact analysts to pose "what if" questions about strings of weather and climate events that could be important to their specific sector or region. Other approaches focused on using a variety of other types of observational data.
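For illustration, a minimal sketch of such a weather generator follows: a two-state (wet/dry) Markov chain for daily precipitation occurrence with gamma-distributed amounts on wet days, used here to pose one "what if" question about long dry spells. All parameter values are invented for the example; in practice they are fit to the local observed record.

```python
# Minimal sketch of a stochastic weather generator for daily precipitation.
import numpy as np

rng = np.random.default_rng(2)
p_wet_after_dry, p_wet_after_wet = 0.25, 0.55  # assumed transition probabilities

def simulate_year(n_days=365):
    precip, wet = np.zeros(n_days), False
    for day in range(n_days):
        wet = rng.random() < (p_wet_after_wet if wet else p_wet_after_dry)
        if wet:
            precip[day] = rng.gamma(shape=0.8, scale=8.0)  # mm/day, assumed fit
    return precip

def longest_dry_spell(precip):
    longest = run = 0
    for p in precip:
        run = run + 1 if p == 0 else 0
        longest = max(longest, run)
    return longest

# "What if" question: how often does a 20+ day dry spell occur in a year?
years = [longest_dry_spell(simulate_year()) for _ in range(1000)]
print(np.mean([y >= 20 for y in years]))  # estimated probability
```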

Evaluation of the Models

The tests that have been used to evaluate the skill of climate models include evaluations of their ability to simulate present weather and climate, the cycle of the seasons, climatic variations over the past 20 years (the time period for which the most complete data sets are available), climatic changes over the past 100 to 150 years during which the world has warmed, and climatic conditions for periods in the geological past when the climate was quite different than at present.

There are so many kinds of evaluations that can be made that it is not possible to provide one test to ascertain the appropriateness of any model for climate impact assessments. For example, models may be expected to reproduce the past climate for hemispheric and global averages on century time-scales, because much of the climate noise due to seasonal to inter-annual climate variability tends to be less important at those scales. This noise includes many of the important climate oscillations, such as El Niño, the North Atlantic Oscillation, the Pacific Decadal Oscillation, and others. Because models generally replicate the chaotic behavior of the natural climate, they simulate their own year-by-year climates, and they will not reproduce the precise timing of these events to match the observations. On the other hand, the climate models may be expected to reproduce the statistical distribution of these events. So, to compare models to observations it is important to be able to average out these natural variations, which can have very large impacts in given regions in specific years. For this reason, in the National Assessment, comparisons of the model simulations with observations on regional and subregional levels were made by averaging over multiple decades or longer.
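The following toy calculation illustrates why such averaging matters. The synthetic "model" and "observations" below share the same long-term behavior but each has its own unsynchronized year-to-year variability, standing in for a model that generates its own El Niños; the data and magnitudes are invented for illustration only.

```python
# Toy illustration: single-year comparisons are dominated by unsynchronized
# natural variability, while multi-decadal means largely average it out.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(60)
obs = 14.0 + 0.01 * years + rng.normal(0, 0.3, 60)    # observed annual means
model = 14.0 + 0.01 * years + rng.normal(0, 0.3, 60)  # model's own variability

# Typically large: two independent draws of year-to-year noise.
print("one-year difference:     %.2f C" % abs(obs[25] - model[25]))
# Typically small: the noise averages out over three decades.
print("30-year mean difference: %.2f C" % abs(obs[:30].mean() - model[:30].mean()))
```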

In conducting climate model evaluations it is tempting to prefer those models whose simulations most closely match the observations, but several complications must be accounted for in such intercomparisons. First, there are inherent errors and biases in our observational data. Models, even if they were provided perfect forcing scenarios and had perfect chemistry, physics, and biology, should not be expected to perfectly match imperfect observations. By cross-comparing observations from differing data sets and observing systems we can roughly estimate some of the observational errors and biases. Second, because of the chaotic nature of the climate, we cannot expect models to match the year-by-year or decade-by-decade fluctuations in temperature that have been observed during the 20th century. Third, the particular model simulations used in the National Assessment did not include consideration of all of the effects of human-induced and naturally-induced changes that are likely to have influenced the climate, including changes in stratospheric and tropospheric ozone, volcanic eruptions, solar variability, and changes in land cover (and associated changes relating to biomass burning, dust generation, etc.). Finally, while it is desirable for model simulations not to have significant biases in representing the present climate, a model that more accurately reproduces the present and past climate does not necessarily provide more accurate projections of climate change than models that give less accurate simulations. This can be the case for at least two reasons. First, what matters most for simulation of changes in future climate is proper treatment of the feedbacks that contribute to amplifying or limiting the changes, and accurate representation of the 20th century does not guarantee this. Second, because projected changes are calculated by taking differences between perturbed and unperturbed cases, the effects of at least some of the systematic biases present in a model simulation of the present climate can be eliminated. While potential nonlinearities and thresholds make it unlikely that all biases can be removed in this manner, it is possible that the projected changes calculated by such a model could turn out to be more accurate than those from a model that provided a better match to the 20th century climate.
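The second point can be illustrated with an idealized calculation. If a model's simulation differed from the true climate by a constant additive offset b, that offset would cancel exactly when the projected change is formed as the difference between the perturbed and control cases; as noted above, nonlinearities and thresholds mean real biases cancel only partially.

```latex
% Idealized, additive-bias case: the offset b cancels under differencing.
T^{\mathrm{pert}}_{\mathrm{model}} = T^{\mathrm{pert}}_{\mathrm{true}} + b,
\qquad
T^{\mathrm{ctrl}}_{\mathrm{model}} = T^{\mathrm{ctrl}}_{\mathrm{true}} + b
\;\Longrightarrow\;
\Delta T_{\mathrm{model}}
  = \left(T^{\mathrm{pert}}_{\mathrm{true}} + b\right)
  - \left(T^{\mathrm{ctrl}}_{\mathrm{true}} + b\right)
  = \Delta T_{\mathrm{true}}
```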

Recognizing these many limitations, evaluations of the simulations from the Canadian and Hadley models are briefly summarized here to give an indication of the kinds of tests climate scientists have completed to assess the general adequacy of the models for use in assessing the impacts of climate change and variability. As depicted in Figure 2, both primary models capture the rise in global temperature since the late 1970s, but do not do as well in reproducing decadal variations. The question of how these two models compare to other climate models, several of which were not available at the time of the National Assessment, is addressed in Figure 3. Note that the scaling factor required to match the increase in temperature during the 20th century is close to one for all models, except for the Canadian Climate Model, for which it is somewhat less than one, reflecting the relatively high sensitivity of this model to increases in greenhouse gases; the scaling factor in a later version of the model (CGCM2 in Figure 3) is closer to one. It is also noteworthy that the later version of the Hadley Centre Model very closely reproduces the rate of 20th century warming when a more complete set of forcings, indirect sulfate forcing and tropospheric ozone, is added to the model. Another test of a model's ability to reproduce 20th century global temperatures is to compare the annual temperatures generated by the models with the observations. To assess relative skill, the model errors can be compared to those of projections based on temperature persistence, that is, always predicting the annual mean temperature to be equal to the longer-term mean over an averaging period centered on the prediction year. Figure 4 shows some results of such a test for averaging periods from 10 to 50 years. This is a difficult test in which to show skill, because the persistence forecast actually includes information about the annual mean temperature both before and after the "prediction year." In all cases the model simulations have smaller errors than the persistence-based projection, indicating significant skill.
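To make the mechanics of this persistence test concrete, the following sketch computes the baseline error for centered averaging windows of different lengths and compares it to a model's error against the same series. The data are synthetic and the window handling is simplified; this illustrates the form of the test shown in Figure 4, not its results.

```python
# Sketch of a persistence-based skill baseline: "forecast" each year as the
# mean of the observations over a window centered on that year, then compare
# the baseline's RMSE to a model's RMSE against the same observations.
import numpy as np

def persistence_rmse(obs, window):
    half = window // 2
    errs = []
    for t in range(half, len(obs) - half):
        baseline = obs[t - half : t + half + 1].mean()  # uses both sides of t
        errs.append(obs[t] - baseline)
    return np.sqrt(np.mean(np.square(errs)))

rng = np.random.default_rng(4)
trend = 0.007 * np.arange(100)                       # century-scale warming
obs = 13.9 + trend + rng.normal(0, 0.15, 100)        # observed annual means
model = 13.9 + trend + rng.normal(0, 0.15, 100)      # simulated annual means

model_rmse = np.sqrt(np.mean((model - obs) ** 2))
for window in (11, 31, 51):
    print(f"window {window}: persistence RMSE {persistence_rmse(obs, window):.3f}, "
          f"model RMSE {model_rmse:.3f}")
```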

So, analyses at the global scale for the two primary models used in the National Assessment indicate that there is general agreement with the observed long-term trend in temperature over the 20th century, but the Canadian Climate Model is significantly more sensitive to greenhouse gases than the Hadley Centre Model, and may be thought of as the "hotter" of the two models. This higher climate sensitivity of the Canadian model may be due to its projecting an earlier melting of the Arctic sea ice than the Hadley model. It is not yet clear how rapidly this melting may take place.

The question as to whether the Canadian Climate Model is an outlier can be addressed with Figure 5, where the global warming rate has been plotted for various models with similar forcings of greenhouse gases and sulfate aerosols. The Canadian Climate Model is seen to have a relatively high sensitivity to increases in greenhouse gases compared to other models, but its sensitivity is quite comparable to that of a model not used in the National Assessment, NOAA's Geophysical Fluid Dynamics Laboratory R15 model. So, although the Canadian model does appear to be one of the more sensitive models to increases in greenhouse gases, it is not an outlier. By comparison, the Hadley Centre model appears to have moderate sensitivity to increases in greenhouse gases.

The National Assessment was not performed on global space scales, so it is important to understand the differences between model simulations and observations on regional scales. As part of the long-term Coupled Model Intercomparison Project (CMIP2), Dr. Benjamin Santer of the Lawrence Livermore National Laboratory has recently compared results from a number of climate models with respect to their ability to reproduce the annual mean precipitation and the annual cycle of precipitation across North America. The results of this study, which included the two primary models used in the National Assessment, are depicted in Figures 6 and 7. The figures show the correlation between the patterns of the model output and the observations (the y-axis), along with a measure of the differences in actual precipitation amounts (the x-axis). If there were no errors in our observing capability, a perfect model would reproduce the observations exactly: it would have perfect correlation with the observations, the difference between any model grid point and the corresponding observational grid point would be zero, and it would appear as a point in the far upper left corner of the plot. By comparing two different observational data sets we can get an estimate of the errors in the observations, and this has been done in Figures 6 and 7 by comparing two different 20-year climatologies over North America produced by two different research groups. So, no model should be expected to fall in the quadrant of the diagram to the upper left of the less-than-perfect observational data sets. It is clear in Figures 6 and 7 that the Hadley Centre model used in the National Assessment reproduces the observations better than all the other models, while the Canadian Climate Centre Model does not do as well, but is by no means an outlier.
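For concreteness, the two statistics plotted in such a diagram can be computed as in the sketch below. The precipitation "patterns" here are synthetic stand-ins, and comparing two versions of the observations provides the observational-uncertainty benchmark described above.

```python
# Sketch of the two pattern statistics in Figures 6 and 7: spatial correlation
# between model and observed climatology (y-axis) and the root-mean-square
# difference between them (x-axis). Inputs are flat arrays of grid-point values.
import numpy as np

def pattern_stats(field, obs):
    corr = np.corrcoef(field, obs)[0, 1]
    rmsd = np.sqrt(np.mean((field - obs) ** 2))
    return corr, rmsd

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 1.5, size=500)            # "observed" annual-mean precip
obs2 = obs + rng.normal(0, 0.2, 500)           # a second observational data set
model = 0.9 * obs + rng.normal(0, 0.6, 500)    # one model's simulated pattern

# The obs-vs-obs comparison bounds how well any model can be expected to do.
print("obs vs obs:  ", pattern_stats(obs2, obs))
print("model vs obs:", pattern_stats(model, obs))
```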

Although the changes in global scale features and the regional simulations of precipitation of the two primary models are seen to be rather typical of other models, there are important issues on regional scales that suggest that significant uncertainties remain in our ability to effectively use these models for impact assessments. For example, problems with the way these climate models simulate ENSO variability suggest that the projected pattern of changes may not be definitive. Also, as illustrated by the different projections of changes in summer precipitation in the Southeast used in the National Assessment, there are often several processes that can contribute to the pattern of change, and the same process can lead to different projections of changes when imposed on a slightly different base state of the climate. For example, the proportion of the oceans that is frozen versus liquid, the extent of snow cover, the dryness of the ground surface, the strength of the North Atlantic deep water circulation, etc., can all play important roles. In addition, the different representations of land surface processes, clouds, sea-ice dynamics, and horizontal and vertical resolution, as well as many other factors that differ among climate models, can have an important impact on projections of changes in regional precipitation. This dependence occurs because precipitation, unlike atmospheric dynamics, is a highly regionalized feature of the climate, depending on the interaction of many processes, many of which require a set of model parameterizations. Given these many limitations, in the National Assessment the model simulations were viewed as projections, not as predictions. The significance of this distinction can be seen in the following quote from the recently-released Climate Action Report 2002: "Use of these model results is not meant to imply that they provide accurate predictions of the specific changes in climate that will occur over the next hundred years. Rather, the models are considered to provide plausible projections of potential changes for the 21st century. For some aspects of climate, the model results differ. For example, some models, including the Canadian model [used in this Assessment] project more extensive and frequent drought in the United States, while others, including the Hadley model [the other model used in the Assessment] do not. As a result, the Canadian model suggests a hotter and drier Southeast during the 21st century, while the Hadley model suggests warmer and wetter conditions. Where such differences arise, the primary model scenarios provide two plausible, but different alternatives."

How Were the Model Projections Used?

The model projections were used as indications of the types of consequences that might result. For example, as evident in Figure 2, although the emissions scenarios are the same for the Canadian and Hadley simulations, the Canadian model scenario projects more rapid global warming than does the Hadley model scenario. This greater warming in the Canadian model scenario occurs in part because the Hadley model scenario projects a wetter climate at both the national and global scales, and in part because the Canadian model scenario projects a more rapid melting of Arctic sea ice than the Hadley model scenario.

Recognizing that all model results are plausible projections rather than specific quantitative predictions, the consistency of the temperature projections of the primary models used for the National Assessment was assessed in a broader context. Figure 8 illustrates how this strategy was used. It is apparent that virtually all models consistently show much greater than global-average warming over the US during winter, and greater than average warming during summer, except for Alaska. So, in the National Assessment all the scenarios of temperature change involved increased temperatures, and the increases were often as large as or larger than the global mean temperature increase.

Although there are many similarities in the projected changes of temperature among the many climate models considered by the IPCC (Figure 8), this is not true of precipitation changes. In the National Assessment the Hadley Centre model often projected significantly wetter conditions than the Canadian model, but this variation is typical of our present state of understanding, as depicted in Figure 9. Only during winter is there a consistent pattern of a small increase of precipitation among most of the climate models; by contrast, during summer there is not much agreement about the sign or magnitude of the precipitation change, except for a general tendency for more precipitation in the high latitudes of North America. The inconsistencies among the models with respect to summertime mid-latitude North American precipitation (Figure 9) were reflected in the two scenarios used in the National Assessment, ensuring consideration of a range of possible outcomes. To address this range of possible outcomes, a number of "what if" scenarios were developed and used in the National Assessment. For example, in the West, although both models in the National Assessment projected precipitation increases, a "what-if" scenario of less precipitation was used to broaden the assessment of possible climate impacts, vulnerabilities, and adaptation measures.

Interestingly, despite the fact that the global climate models do not agree well on the sign of summer precipitation changes, virtually all climate models indicate that as greenhouse gases increase, more intense precipitation events will occur over many areas. Indeed, observations reflect this today in many mid- and high-latitude land areas where data are available for such an assessment. For these reasons, and because an increase in precipitation intensity can effectively be argued from simple thermodynamic considerations, this attribute of precipitation change was an important scenario considered by the sectoral and regional impact and adaptation assessments.

It should also be noted that in the National Assessment, due to the nature of the differences among various models, other model simulations were used wherever feasible to assess possible impacts. A particularly noteworthy example comes from the Great Lakes region, where results from ten models were used to simulate changes in Great Lakes water levels during the 21st century. All but one of the models suggested lower lake levels. So, a combination of the primary models, other climate models, and observations was instrumental in identifying key climate impacts and vulnerabilities for the 21st century.

Future Assessments

To build confidence in the projections used for future climate assessments, much remains to be done. Further improvements in climate models are needed, especially in the representations of clouds, aerosols (and their interactions with clouds), sea ice, hydrology, ocean currents, regional orography, and land surface characteristics. Improving projections of the potential changes in atmospheric concentrations of greenhouse gases and aerosols, and in land use, is important. Climate model simulations based on these revised emissions forecasts should provide improved sets of information for assessing climate impacts. In addition to having results from more models available, ensembles of simulations from several model runs are needed so that the statistical significance of the projections can be more fully examined. As part of these efforts, it is important to develop greater understanding of how the climate system works (e.g., of the role of atmosphere-ocean interactions and cloud feedbacks), to refine model resolution, to more completely incorporate existing understanding of particular processes into climate models, to more thoroughly test model improvements, and to augment computational and personnel resources in order to conduct and more fully analyze a wider variety of model simulations, including mesoscale modeling studies.

While much remains to be done that will take time, much can also be done in the next few years to substantially improve the set of products and tools available to assess climate impacts. For example, an intensified analysis program is needed to provide greater understanding of the changes and the reasons why they occur. New efforts to incorporate the interactive effects of changes in land use and vegetation in mesoscale and global models will help in understanding local and regional climate change and variability. A better understanding of the changes in weather patterns and extremes in relation to global changes is important. Improved efforts that combine analysis of the model results with the insights available from analysis of historical climatology and past weather patterns need to be a priority. Regional climate scenarios can also be developed using a combination of climate model output and dynamical reasoning. More use of mesoscale models is important because they can provide higher resolution of spatial conditions.

In the National Assessment, we were able to consider only one emission scenario rather than a range of emission scenarios. In the future, the actual emissions of greenhouse gases and aerosols could be different than the baseline used, and changing the emissions scenario would give increasingly divergent climate scenarios as the time horizon expanded. This divergence would likely become important only beyond the next few decades; over shorter periods, different emission scenarios are not likely to significantly affect climate scenarios, both because of the relatively slow response of the global climate and energy systems and because a large portion of the near-term change will be due to past emissions.

As recently stated by the Assistant Secretary for Oceans and Atmosphere, Dr. Mahoney, the highest and best use of the scientific information developed in the combined United States Global Climate Research Program (USGCRP) and the President's Climate Change Research Initiative (CCRI) could be the development of comparative information that will assist decision makers, stakeholders, and the general public in debating and selecting optimal strategies for mitigating global change, while maintaining sound economic and energy security conditions in the United States and throughout the world. Significant progress in developing and applying science-based decision tools during the next 1 to 3 years must be a key goal of the combined USGCRP and CCRI program. Examples of analyses expected to be completed during this time period that would improve our nation's ability to conduct a subsequent National Assessment include:

• Long-term global climate model projections (e.g., up to the year 2100) for a wide selection of potential mitigation strategies, to evaluate the expected range of outcomes for the different strategies.
• Detailed analyses of variations from defined "base" strategies, to investigate the importance of specific factors, and to search for strategies with optimum effectiveness.
• Linked climate change and ecosystem change analyses for several suggested strategies, to search for optimum benefits.
• Detailed analyses of the outcomes that would be expected from application of the wide selection of energy conservation technologies and carbon sequestration strategies currently being investigated by the National Climate Change Technology Initiative.

Summary

The National Assessment conducted from 1997 to 2000 was a first step. It relied on a number of techniques to develop climate scenarios for the 21st century, including: historical data, to examine the continuation of trends or the recurrence of past climatic extremes; climate model simulations, in an attempt to provide plausible scenarios for how the future climate may change; and sensitivity analyses, to explore the resilience of societal and ecological systems to climatic fluctuations and change. Numerous climate models were used in the National Assessment, but the two primary models were selected on the basis of a set of objective criteria. Today, if the Assessment were repeated with similar criteria, results from several other models would be included.

Intercomparison of the two primary models used in the National Assessment with observations and with other models indicates that they reflect the state of scientific understanding of approximately 2-3 years ago. This had important consequences. For example, the amount of summertime precipitation expected over much of the contiguous USA as the climate warmed was quite uncertain, and required the use of several "what if" analyses to assess potential impacts. Other projected changes were more certain, like increased temperatures everywhere, during all seasons, and impact analyses could focus on the magnitude, as opposed to the sign, of the projected change.

In conclusion, the National Assessment we conducted on the impact of climate variability and change had significant limitations, but was a first step.

It is important to note a major recommendation in the National Research Council's recent analysis (2001) of some key questions related to climate change science. Specifically, that report states that "the details of the regional and local climate change consequent to an overall level of global climate change" require further understanding. The uncertainties that surfaced in generating scenarios for the National Assessment were clearly in our minds when we made this recommendation.

Resolving these uncertainties will be essential to understanding the scope of any climate change impact. Quite clearly, more needs to be done and such efforts will provide more effective decision support tools to help frame adaptation and mitigation measures to avoid the potential risk and harm of climate change and maximize its potential benefits.



Karl, Thomas R.,
Director, National Climatic Data Center
Asheville, North Carolina.

Born 22 November 1951, Evergreen Park, Illinois. B.S., Meteorology, Northern Illinois University, De Kalb, Illinois, 1973; M.S., Meteorology, University of Wisconsin-Madison, 1974, Hon. Doctor of Humane Letters, North Carolina State University, 2002. University of Wisconsin, Weather Forecaster, 1975; Weather Central, Madison, Wisconsin, TV and Radio Weathercasting, 1975; NOAA, Air Resources Laboratory, Research Meteorologist, 1975–79; NOAA, National Weather Service (NWS), Meteorological Intern, Anchorage, Alaska, 1979–80; NOAA, NWS Air Traffic Control Meteorologist, Anchorage, Alaska, 1980; NOAA, NESDIS/National Climatic Data Center (NCDC), Meteorologist, 1980–87; University of North Carolina, Asheville, North Carolina, Adjunct Instructor, Department of Mathematics, 1986–88; NOAA/NESDIS/NCDC: Research Meteorologist, 1987–89; Chief, Climate Perspectives Branch, 1989; Chief, Climate Analysis Division, 1989; Chief, Global Climate Laboratory, 1990–92; Senior Scientist, 1992–98; Director, 1998–. Lead and Convening Lead Author, Intergovernmental Panel on Climate Change (IPCC), 1989, 1992, 1995, 2001; US/USSR Committee on Climatic Change, Joint US/USSR Commission on the Protection of the Environment, 1987–91; Environmental Protection Agency Climate Change Advisory Panel, 1987–91; NOAA Task Force on Data Management for Climate and Global Change, 1989–90; National Academy of Sciences, Effects Sub-Committee on the Policy implications of Global Warming, 1990–91; National Research Council, National Academy of Sciences, Climate Research Committee, 1991–99 and Chair, 1998–99; National Academy of Sciences, EOSDIS Review Panel, 1991–93; NOAA's Office of Chief Scientist Working Group (WG) on Information for Policy-makers, 1992–93; Project Manager, NOAA's Climate Continuity and Quality, 1992–93; Department of Energy's Climate Change Detection Program, 1992–93; Science Advisory Panel for the Climate and Global Change Data and Information Management Main Program Element (MPE), 1992–94; Global Climate Observing System (GCOS) Joint Data and Information Management Panel (JDIMP), 1994–98 and Chair, 1996–98; White House Committee on the Environment and Natural Resources, 1995–97; Environmental Services Data and Information Management (ESDIM) review committee, 1995–97; Science Advisor, NOAA/AES of Canada North American Observing System (NAOS), 1995–97; IEEE Metadata Committee, 1995–96; Co-Chair, NOAA's Decadal-to-Centennial Prediction and Assessment Strategic Planning Team, 1996–; Climate Requirements Working Group Co-Chair, National Polar-Orbiting Operational Satellite (NPOESS), 1996; Chair, NOAA's Council on Long-term Climate Monitoring, 1997–; Co-Chair, US National Climate Assessment, 1998–2000; Program Director for NOAA's Climate Change Data and Detection Program, 1994–. AMS: Member, 1972; Fellow, 1993; Editors Award, Journal of Climate, 1988; Chair, Applied Climatology Committee, 1989–91; Associate Editor, Journal of Climate, 1989–95; Chair, Global Change Symposia, 1995–2000; Editor, Journal of Climate, 1998–2000. Co-Editor, Atmospheric Research, 1995; Guest Editor and Associate Editor, Climatic Change, 1992–. 
AGU: Member, 1983; Fellow, 1998–; AGU Committee on Atmospheric Sciences 1997–; Committee on the Science of Climate Change, National Research Council, 2001; Department of Commerce Bronze Medal, 1988; NOAA Administrator's Award, 1989; Department of Commerce Gold Medal, 1990, 1998; Helmut Landsberg Award, 1993; Climate Institute Outstanding Scientific Achievement Award, 1996; National Associate of the National Academy of Science, 2001. Editor and Co-Author of textbooks addressing various climate issues. Authored and co-authored over 100 articles appearing in AMS journals, AGU journals, Science, Nature, and more popular magazines like Scientific American and National Geographic Research and Exploration as well as numerous atlases, technical reports, conference and workshop proceedings. Numerous news media interviews, testimonies to the U.S. Congress, briefings to cabinet level officials including the President and Vice President of the United States.