[107th Congress House Hearings]
[From the U.S. Government Printing Office via GPO Access]


 
   THE U.S. NATIONAL CLIMATE CHANGE ASSESSMENT: DO THE CLIMATE MODELS 
             PROJECT A USEFUL PICTURE OF REGIONAL CLIMATE?
=======================================================================

                                HEARING

                               before the

                            SUBCOMMITTEE ON
                      OVERSIGHT AND INVESTIGATIONS

                                 of the

                    COMMITTEE ON ENERGY AND COMMERCE
                        HOUSE OF REPRESENTATIVES

                      ONE HUNDRED SEVENTH CONGRESS

                             SECOND SESSION

                               __________

                             JULY 25, 2002

                               __________

                           Serial No. 107-117

                               __________

       Printed for the use of the Committee on Energy and Commerce


 Available via the World Wide Web: http://www.access.gpo.gov/congress/
                                 house







                       U. S. GOVERNMENT PRINTING OFFICE
81-495                          WASHINGTON : 2002
___________________________________________________________________________
For Sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpo.gov  Phone: toll free (866) 512-1800; (202) 512-1800  
Fax: (202) 512-2250 Mail: Stop SSOP, Washington, DC 20402-0001










                   COMMITTEE ON ENERGY AND COMMERCE

               W.J. ``BILLY'' TAUZIN, Louisiana, Chairman

MICHAEL BILIRAKIS, Florida           JOHN D. DINGELL, Michigan
JOE BARTON, Texas                    HENRY A. WAXMAN, California
FRED UPTON, Michigan                 EDWARD J. MARKEY, Massachusetts
CLIFF STEARNS, Florida               RALPH M. HALL, Texas
PAUL E. GILLMOR, Ohio                RICK BOUCHER, Virginia
JAMES C. GREENWOOD, Pennsylvania     EDOLPHUS TOWNS, New York
CHRISTOPHER COX, California          FRANK PALLONE, Jr., New Jersey
NATHAN DEAL, Georgia                 SHERROD BROWN, Ohio
RICHARD BURR, North Carolina         BART GORDON, Tennessee
ED WHITFIELD, Kentucky               PETER DEUTSCH, Florida
GREG GANSKE, Iowa                    BOBBY L. RUSH, Illinois
CHARLIE NORWOOD, Georgia             ANNA G. ESHOO, California
BARBARA CUBIN, Wyoming               BART STUPAK, Michigan
JOHN SHIMKUS, Illinois               ELIOT L. ENGEL, New York
HEATHER WILSON, New Mexico           TOM SAWYER, Ohio
JOHN B. SHADEGG, Arizona             ALBERT R. WYNN, Maryland
CHARLES ``CHIP'' PICKERING,          GENE GREEN, Texas
Mississippi                          KAREN McCARTHY, Missouri
VITO FOSSELLA, New York              TED STRICKLAND, Ohio
ROY BLUNT, Missouri                  DIANA DeGETTE, Colorado
TOM DAVIS, Virginia                  THOMAS M. BARRETT, Wisconsin
ED BRYANT, Tennessee                 BILL LUTHER, Minnesota
ROBERT L. EHRLICH, Jr., Maryland     LOIS CAPPS, California
STEVE BUYER, Indiana                 MICHAEL F. DOYLE, Pennsylvania
GEORGE RADANOVICH, California        CHRISTOPHER JOHN, Louisiana
CHARLES F. BASS, New Hampshire       JANE HARMAN, California
JOSEPH R. PITTS, Pennsylvania
MARY BONO, California
GREG WALDEN, Oregon
LEE TERRY, Nebraska
ERNIE FLETCHER, Kentucky

                  David V. Marventano, Staff Director

                   James D. Barnette, General Counsel

      Reid P.F. Stuntz, Minority Staff Director and Chief Counsel

                                 ______

              Subcommittee on Oversight and Investigations

               JAMES C. GREENWOOD, Pennsylvania, Chairman

MICHAEL BILIRAKIS, Florida           PETER DEUTSCH, Florida
CLIFF STEARNS, Florida               BART STUPAK, Michigan
PAUL E. GILLMOR, Ohio                TED STRICKLAND, Ohio
RICHARD BURR, North Carolina         DIANA DeGETTE, Colorado
ED WHITFIELD, Kentucky               CHRISTOPHER JOHN, Louisiana
  Vice Chairman                      BOBBY L. RUSH, Illinois
CHARLES F. BASS, New Hampshire       JOHN D. DINGELL, Michigan,
ERNIE FLETCHER, Kentucky               (Ex Officio)
W.J. ``BILLY'' TAUZIN, Louisiana
  (Ex Officio)

                                  (ii)


                            C O N T E N T S

                               __________
                                                                   Page

Testimony of:
    Janetos, Anthony C., Senior Fellow, The H. John Heinz III 
      Center for Science, Economics, and the Environment.........     5
    Karl, Thomas R., Director, National Climatic Data Center.....    13
    Lashof, Daniel A., Deputy Director, Climate Center, Natural 
      Resources Defense Council..................................    27
    Michaels, Patrick J., Professor and Virginia State 
      Climatologist, Department of Environmental Sciences, 
      University of Virginia.....................................    50
    O'Brien, James J., Director, Center for Ocean-Atmospheric 
      Prediction Studies, Florida State University...............    34
    Pielke, Roger A., Sr., President-elect, American Association 
      of State Climatologists, Colorado State Climatologist, and 
      Professor, Department of Atmospheric Science, Colorado 
      State University...........................................    42

                                 (iii)

  


  THE U.S. NATIONAL CLIMATE CHANGE ASSESSMENT: DO THE CLIMATE MODELS 
             PROJECT A USEFUL PICTURE OF REGIONAL CLIMATE?

                              ----------                              


                        THURSDAY, JULY 25, 2002

                  House of Representatives,
                  Committee on Energy and Commerce,
              Subcommittee on Oversight and Investigations,
                                                    Washington, DC.
    The subcommittee met, pursuant to notice, at 9:30 a.m., in 
room 2322, Rayburn House Office Building, James C. Greenwood 
(chairman) presiding.
    Members present: Representatives Greenwood, Deutsch, and 
Fletcher.
    Staff present: Peter Spencer, professional staff; Yong 
Choe, legislative clerk; and Michael L. Goo, minority counsel.
    Mr. Greenwood. On the record. Good morning. The meeting 
will come to order, and let me begin by apologizing to our 
witnesses and to our guests for the tardiness. It was actually 
unavoidable.
    The Chair recognizes himself for 5 minutes for an opening 
statement.
    Good morning, and welcome. This morning we will stand at 
the intersection of science, policymaking, and public concern 
about climate change to consider if an influential report 
provides the guidance necessary to navigate this often 
confusing and uncertain territory.
    At issue is the use of climate models to create the 
regional climate change scenarios that frame the discussion in 
what is called in shorthand the U.S. National Assessment on 
Climate Change.
    The national report about this assessment, prepared by 
scientists and researchers under a Federal advisory committee, 
known as the National Assessment Synthesis Team, two co-chairs 
of which are before us today, seeks to provide policymakers and 
the public with plausible pictures of regional climate 50 to 
100 years from now under the impact of global warming.
    Now let me note as we head into this a couple of points 
about my perspective. First, I tend to agree with the view 
expressed in some of the testimony we will hear this morning 
that there are some reasonable mitigation measures and other 
policy strategies we 
can take to address climate change risks, and that these do not 
depend upon the scientific dispute before us. Indeed, this 
dispute should not be used to avoid decisions on such policies.
    Of course, there continues to be much debate about some of 
these policy decisions, how much can or should we do, when 
should we do it, and the debate has engaged many members of the 
Energy and Commerce Committee on both sides of the aisle.
    The hearing today, though it will help inform the debate, 
is not the appropriate forum to conduct that debate, which 
would only distract us from the important questions before us 
this morning.
    Second, this is not to suggest we should glide over 
questions of science and the scientific validity of the tools 
and methods used to drive understanding of inherently science 
based issues. We need sound science to inform our decisions and 
to ensure that our actions in the name of science aren't 
misguided because we 
were more confident than we should have been.
    So we begin today with a straightforward question: Do the 
climate models project a useful picture of regional climate? We 
have asked our panelists today, all scientists and all quite 
familiar with the controversies about climate, climate 
variability and impacts, and the national assessment, to 
comment on this and to speak to the role and suitability of the 
models used in this report.
    In the U.S. Climate Action Report released to the United 
Nations this past May, reference to the National Assessment 
discussed the use of the models this way. ``Use of these models 
is not meant to imply that they provide accurate predictions of 
the specific changes in climate that will occur over the next 
100 years. Rather, the models are considered plausible 
projections of potential changes for the 21st century.''
    Two initial questions come to my mind when I read this, and 
I hope the witnesses can assist us in answering these questions 
this morning. The first has to do with the plausibility of the 
picture painted by the models. This is basically a science 
question, which I am sure the experts here can sort out for 
this layman, and it is this: how reliable are the predictions of 
plausible regional outcomes, given the admitted limitations of 
the modeling, and what would this mean for the usefulness of 
the report? And given the wide variation in the projections' 
results that oppose each other in one area but are similar in 
others, is it reasonable to rely upon them to take specific 
actions or to adopt specific policies?
    This appears to be a thoughtful report, and I believe the 
authors sincerely attempted to work through describing some of 
the uncertainty for policymakers and the public. Was it 
sufficient? How did the models work in the full picture here?
    The second question relates to the problem of communicating 
the uncertainty. The reference above makes a rather nuanced 
description of predictions versus the projections. Yet the New 
York Times which reported the Climate Action Report's reference 
to the Assessment wrote this back in May: ``The report says the 
United States will be substantially changed in the next few 
decades, very likely seeing the disruptions of snow-fed water 
supplies, more stifling heat waves, and the permanent 
disappearance of Rocky Mountain meadows and coastal marshes.''
    Was this the message the authors wanted the public to take 
away? We must come to grips with the fact that a scientific 
assessment such as this is more than an academic exercise read 
by the few who can grasp all the complexities. It is a document 
meant to guide us, policymakers and the public, through 
complicated policy intersections where we really rely on 
science as much as we can.
    The stakes here are as high as any could be: the very 
habitability of our planet. The cost of reducing the stakes 
is also high. For both of these reasons, the reliability of our 
predictive models must be high indeed.
    I thank the witnesses again, especially those who have 
traveled so far to testify this morning. I now recognize the 
ranking member, Mr. Deutsch of Florida, for his opening 
statement.
    Mr. Deutsch. Thank you, Mr. Chairman. I believe this is our 
third hearing regarding climate change. Just a few comments 
representing south Florida. At some level I think we are 
probably more affected by potential climate change than 
anywhere in the country, and there won't be much of south 
Florida left.
    The sea level of Florida's Gulf Coast has risen sharply, up 
as much as 8 inches over the last 100 years, and more sharply 
over the last few decades and, as people are well aware, higher 
sea levels can mean beach erosion, threatening homes and 
communities, coral reef erosion, and more intense and damaging 
storms and hurricanes.
    Florida's average temperature since the Sixties has also 
risen. Higher temperatures mean more heat-related illnesses and 
decreased air quality. Both higher sea levels and higher 
temperatures will obviously have serious effects on even greater 
areas, including our Everglades restoration efforts, and can severely 
affect our tourism industry. Florida is a community where 
environment and the economy effectively are one.
    I look forward to the testimony. I am just somewhat 
disappointed. As you are well aware, this is our last week in 
session, and we were in session until about two o'clock 
yesterday evening, and I don't really expect many members to be 
here this morning, which is unfortunate. But I am sure their 
staffs can review the record. I yield back.
    Mr. Greenwood. Thank you, ranking member.
    [Additional statements submitted for the record follow:]
    Prepared Statement of Hon. Paul E. Gillmor, a Representative in 
                    Congress from the State of Ohio
    Mr. Chairman, I wanted to quickly add my comments regarding climate 
change. In particular, I appreciate the opportunity to learn about the 
role of climate models as well as to discuss whether the U.S. National 
Climate Change Assessment should continue to serve as a benchmark with 
regard to potential impacts of climate change on our environment and 
human health.
    I also look forward to hearing from our panel of witnesses. I 
should also point out that in a time when the U.S. economy is so 
dependent upon energy, and so much of our energy is derived from fossil 
fuels, reducing emissions poses major challenges. Like many Members, I 
feel that rushing to judgment on these matters could be very costly, 
both socially and economically. In an effort to produce sound 
environmental policy while maintaining steady economic growth, I am 
hopeful that we will continue to review scientific information about 
climate change to evaluate potential economic and strategic impacts of 
a warmer, and perhaps more variable, climate.
    Again, I thank the Chairman and yield back my time.
                                 ______
                                 
 Prepared Statement of Hon. W.J. ``Billy'' Tauzin, Chairman, Committee 
                         on Energy and Commerce
    Thank you Chairman Greenwood. And, let me also thank you for 
putting together what promises to be an informative hearing--one that 
gets to the heart of a controversy that has lingered over this national 
assessment for a couple of years now.
    I can tell you I have a pretty good appreciation, as does everybody 
from the Bayou State, for what Mother Nature can do to us. And she sure 
does remind us in a variety of ways.
    The U.S. National Assessment also reminds us about the ways our 
country may someday be affected by climate change--whether that climate 
change is natural or influenced by man. But it also conveys some 
pictures of the future that, as we'll hear this morning, might not be 
quite what they seem.
    I look forward to learning more about the use of climate models in 
the assessment. I'm curious to know whether the inherent uncertainties 
in these models--uncertainties I understand to be widely accepted 
within the science community--were properly accounted for when using 
the models to sketch out the climate change scenarios in this report.
    I'm also curious to learn whether, if they weren't properly 
accounted for, they undercut what was otherwise a well-intentioned 
and potentially useful report. Did the models, in effect, send all 
this good research off focusing on the wrong impacts?
    Nobody has perfect foresight. But we do have scientific assessments 
and other tools to help ensure that our decisions about the future 
are more than wild guesses. What troubles me, and I believe many 
Members who must confront difficult and potentially expensive decisions 
about climate change, is that something that is asserted to be sound 
science is not as sound as it was portrayed to be. This creates false 
assurance where perhaps knowledge of what we don't know would be more 
useful to guard against risks. It also threatens to undercut public 
trust in the science policymakers use to make their decisions.
    We have before us today a distinguished panel of experts who can 
explain the role of climate modeling in this assessment. They can put 
matters in proper perspective for us. We have them all here on one 
panel, too, so that perhaps we can generate some discussion to get 
further to the bottom of this controversy.
    I thank you again, Mr. Chairman and yield back the remainder of my 
time.

    Mr. Greenwood. It is the case that we were in session until 
two o'clock this morning. It is not an excuse, just an 
explanation for why some of the members may come in a little 
later than otherwise.
    To the panelists: you are aware that this 
committee is holding an investigative hearing, and I think you 
have been informed that when we hold investigative hearings, it 
is our custom to take our testimony under oath. Do any of you 
object to giving your testimony under oath? Okay.
    Although it is not normally an issue for this kind of hearing, 
we must also inform you that you are entitled to be represented by 
counsel. Do any of you wish to be represented by counsel? Okay. 
In that case, if you would all stand and raise your right hand, 
I will give you the oath.
    [Witnesses sworn.]
    Mr. Greenwood. Thank you. You are under oath. Let me 
introduce the panel. From my left to right, Dr. Anthony C. 
Janetos, Senior Fellow with the H. John Heinz Center for 
Science, Economics, and the Environment; Dr. Thomas Karl, 
Director of the National Climatic Data Center in North 
Carolina; Dr. Daniel Lashof, Deputy Director of the Climate 
Center, the Natural Resources Defense Council; Dr. James J. 
O'Brien, Director of the Center for Ocean-Atmospheric 
Prediction Studies at Florida State University; Dr. Roger 
Pielke, Sr., President-Elect of the American Association of 
State Climatologists, Colorado State Climatologist, and 
Professor in the Department of Atmospheric Science at Colorado 
State University; and Dr. Patrick J. Michaels, Professor and 
Virginia State Climatologist, Department of Environmental 
Sciences, University of Virginia.
    We welcome you all. Thank you for helping us this morning. 
Dr. Janetos, we will begin with you. You are recognized for 5 
minutes to give your testimony.

  TESTIMONY OF ANTHONY C. JANETOS, SENIOR FELLOW, THE H. JOHN 
 HEINZ III CENTER FOR SCIENCE, ECONOMICS, AND THE ENVIRONMENT; 
THOMAS R. KARL, DIRECTOR, NATIONAL CLIMATIC DATA CENTER; DANIEL 
 A. LASHOF, DEPUTY DIRECTOR, CLIMATE CENTER, NATURAL RESOURCES 
 DEFENSE COUNCIL; JAMES J. O'BRIEN, DIRECTOR, CENTER FOR OCEAN-
ATMOSPHERIC PREDICTION STUDIES, FLORIDA STATE UNIVERSITY; ROGER 
A. PIELKE, SR., PRESIDENT-ELECT, AMERICAN ASSOCIATION OF STATE 
 CLIMATOLOGISTS, COLORADO STATE CLIMATOLOGIST, AND PROFESSOR, 
 DEPARTMENT OF ATMOSPHERIC SCIENCE, COLORADO STATE UNIVERSITY; 
     AND PATRICK J. MICHAELS, PROFESSOR AND VIRGINIA STATE 
CLIMATOLOGIST, DEPARTMENT OF ENVIRONMENTAL SCIENCES, UNIVERSITY 
                          OF VIRGINIA

    Mr. Janetos. Thank you, Mr. Chairman. I am pleased to have 
this opportunity to address this committee on the topic of ``Do 
the Climate Models Project a Useful Picture of Climate 
Change?''
    Mr. Greenwood. You might want to pull the microphone. It is 
fairly directional, if you could--Thank you.
    Mr. Janetos. Thanks. The US National Assessment, ``Climate 
Change Impacts on the United States: The Potential Consequences 
of Climate Variability and Change'' was released in November of 
2000, following an extensive series of peer reviews and public 
comment.
    This first document, the overview, was followed about a 
month later by the release of the foundation, a much more 
extensive, fully documented background document that lays out 
all of the analytical detail and data that were used in the 
National Assessment. We believe that the National Assessment is 
an extensive synthesis of the best available scientific 
information on this important topic.
    There are three questions about climate change that have 
dominated discussions. How much climate change is going to 
occur? What will happen as a result? What can countries do 
about it? There are obviously heated opinions about each of 
these, but the issues are real, and it is critical to 
understand the underlying scientific knowledge about each if 
sound decisions are to be made. The national assessment report 
focuses on the second of these questions: What will happen as a 
result?
    A national assessment of the potential impacts of climate 
change was called for in the 1990 legislation that established 
the U.S. Global Change Research Program. For several years, 
that program focused on developing the basic scientific 
knowledge that the international scientific assessment process, 
overseen by the IPCC, depends on.
    That scientific research provided increasing evidence that 
change in the climate system is, in fact, occurring. It has 
become increasingly clear that there is a need to understand 
what is at stake for natural resources and human well-being in 
the U.S.
    In response to this need, in 1998 Dr. John Gibbons, then 
Science Advisor to the President, requested the USGCRP to 
undertake a national assessment originally called for in the 
legislation. He directed--asked the program to investigate a 
series of important questions:
    What are the current environmental stresses and issues for 
the United States that form a backdrop for additional impacts 
of climate change?
    How might climate change and variability exacerbate or 
ameliorate existing problems?
    What are the priority research and information needs that 
can better prepare policymakers for making wise decisions 
related to climate change and variability? What information and 
answers to what key questions could help decisionmakers make 
better informed decisions about risk, priorities, and 
responses? What are the potential obstacles to information 
transfer?
    What research is most important to complete over the short 
term and over the long term?
    What coping options exist that can build resilience to 
current environmental stresses, and also possibly lessen the 
impacts of climate change? How can we simultaneously build 
resilience and flexibility for the various sectors considering 
both the short and long term implications?
    What natural resource planning and management options make 
most sense in the face of future uncertainty?
    What choices are available for improving our ability to 
adapt to climate change and variability, and what are the 
consequences of those choices?
    A variety of efforts emerged in response to Dr. Gibbons' 
quite daunting charge. Over 20 workshops were held around the 
country, involving academics, business people representing a 
range of industries including manufacturing, power generation 
and tourism, and people who work closely on the land and in the 
water, including resource managers, ranchers, farmers, 
foresters and fishermen.
    Each workshop identified a range of issues of concern to 
stakeholders in those regions, many of them quite unrelated to 
climate change per se. Most were followed by the initiation of 
scientific, university led regional studies.
    In addition to these kinds of bottom-up efforts, it was 
decided that it was also necessary to create a national level 
synthesis of what is known about the potential for climate 
impacts for the U.S. as a whole, addressing the issues 
identified in the regional workshops and national studies.
    This synthesis, obviously, needed to build on the work that 
had begun to emerge from the subsequent regional and national 
studies, but also to draw on the existing scientific literature 
and analyses done with the most up to date ecological and 
hydrological models and data that could be obtained.
    The National Assessment Synthesis Team, the NAST, was 
established by the NSF as an independent committee under the 
Federal Advisory Committee Act specifically in order to carry 
out this second step. It was made up of experts from academia, 
industry, government laboratories, and non-governmental 
organizations, and in order to ensure its openness and 
independence, all meetings of the NAST were open to the public, 
all documents discussed in its meetings are available through 
the NSF, as are all the review comments received and the 
responses to them.
    This is perhaps out of the ordinary for a scientific study, 
but most scientific studies do not focus on issues of such 
broad and deep implications for the country, and about which 
there is such heated debate.
    Our first action was to publish a plan for the conduct of 
the national synthesis. In addition, five issues, agriculture, 
water, forests, human health, and coastal and marine systems, 
were selected to be topics for national studies. Carrying out 
this plan was, obviously, a major undertaking, with the two 
reports that I mentioned earlier as the two primary national 
outputs.
    Both of those national outputs have been through extensive 
review. At the end of 1999 two rounds of technical peer review 
were undertaken, and during the spring of 2000 an additional 
review by about 20 experts who had been outside of the 
assessment process was undertaken. Over 300 sets of comments 
were received from scientists in universities, industry, NGO's, 
and government labs. The responses to external comments have 
been described in comprehensive review memorandums.
    The final stage of that process was a 60-day public comment 
period specifically requested by Congress, after which final 
revisions were completed. The report was then submitted to the 
President so that it could be transmitted to Congress, as 
called for in the original legislation. Hundreds of additional 
comments were received during the public comment period, each 
of which was responded to.
    In order to ensure that we did our job well, an oversight 
panel was also established through the offices of the 
President's Committee of Advisors on Science and Technology. That 
oversight panel was chaired by Dr. Peter Raven, Director of the 
Missouri Botanical Garden and former Home Secretary of the 
National Academy of Sciences, and Dr. Mario Molina, Professor 
of Atmospheric Chemistry at MIT and recent Nobel prize winner 
for his research on stratospheric ozone depletion. Its 
membership, like the NAST's, was also drawn from academia, 
industry, and the NGO's. It reviewed and approved the plans for 
the assessment. It reviewed each draft of the report, and 
reviewed the response of our synthesis team to all comments.
    It is important to realize that the national assessment 
does not attempt to predict exactly what the future will hold 
for the U.S. It examined the potential implications of two 
primary climate scenarios, each based on the same assumptions 
about future global emissions of greenhouse gases, the same 
assumptions that have been used as one of many emission 
scenarios examined by the IPCC.
    The two climate scenarios were based on output from two 
different global climate models used in the IPCC assessments, 
and we believe the results were clearly within the range of the 
global annual average temperature changes shown by many such 
models, one of 
them near the low end of this range and one near the high end. 
Both also exhibit warming trends for the U.S. that are larger 
than the global average, but this is not surprising.
    In addition to the two primary models from the Canadian 
Climate Centre and the Hadley Centre, in different parts of the 
national process results from climate models developed at the 
National Center for Atmospheric Research, NOAA's Geophysical 
Fluid Dynamics Laboratory, NASA's Goddard Institute for Space 
Studies, and the Max Planck Institute were also used in various 
aspects of the assessment.
    The NAST was aware of the scientific issues surrounding the 
use of regional results from any general circulation models. In 
the analyses done with the climate models' regional outputs, 
simulations from the models were used to adjust historically 
observed data using methods that had already been peer reviewed 
in other studies, in order to depict scenarios that had 
sufficient regional richness for analysis. So, in fact, we did 
not use, for the most part, the raw data from the GCMs, but 
used that to adjust historical data.
    In addition to models, the National Assessment used two 
other ways to think about potential future climate. Many groups 
involved in our process used historical climate records to 
evaluate sensitivities of regions, sectors and natural 
resources to the climate variability and extremes that have in 
fact occurred during the 20th Century.
    Looking at real historical climate events, their impacts, 
and how people have adapted, gives valuable insights into 
potential future impacts that complement those provided by 
model projections. In addition, the assessment used sensitivity 
analyses, some of which ask how and by how much the climate 
would have to change to result in impacts on particular regions 
and sectors.
    These climate scenarios describe significantly different 
futures that are scientifically plausible, given our current 
understanding of how the climate system operates. That 
understanding will, no doubt, continue to improve. As 
importantly, they describe separate baselines for analysis of 
how natural ecosystems, agriculture, water supplies, etcetera, 
might change as a result.
    In order to investigate such changes, the potential impacts 
of changes in the physical climate system, the report relies on 
up to date ecological and natural resource models, on empirical 
observations from the literature, on investigations of how 
those systems have responded to climate variability that has 
been observed over the past century, and on the accumulated 
scientific knowledge that is available about the sensitivity of 
natural resources to climate, and about how the regions of the 
U.S. have and potentially could respond.
    The U.S. National Assessment presents the results for each 
scenario clearly, and then takes the important additional step 
of explicitly describing the NAST's scientific judgment about 
the uncertainty inherent in each result. Those results that are 
viewed to be robust are described in more certain terms. Those 
viewed to be the result of poorly understood or unreconciled 
differences between models are described in substantially more 
circumspect language.
    The lexicon of terms used to denote the NAST's greater or 
lesser confidence is explicitly described in the beginning of 
the Overview report. This helps ensure that the report does not 
mask important results by thoughtlessly merging models or 
overstating the scientific capability for assessing potential 
impacts.
    Finally, the report begins to identify possible options for 
adaptation to this changing world. It does not do a complete 
analysis of the costs, benefits or feasibility of these 
options, however, which would be a necessary next step for 
developing policies to address those issues.
    Future assessments will need to consider climate change in 
the context of the suite of environmental stresses that we all 
face. Perhaps most importantly, our report acknowledges very 
clearly that scientific uncertainties remain and that we can 
expect surprises as this uncontrolled experiment with the 
earth's geochemistry plays out over the coming decades.
    Thank you very much.
    [The prepared statement of Anthony C. Janetos follows:]
Prepared Statement of Anthony C. Janetos, Sr. Fellow, H. John Heinz III 
           Center for Science, Economics, and the Environment
    I am pleased to have the opportunity to address the US House of 
Representatives Committee on Energy and Commerce, Subcommittee on 
Oversight and Investigations on the topic of ``The US National Climate 
Change Assessment: Do the Climate Models Project a Useful Picture of 
Climate Change?''
    The US National Assessment, Climate Change Impacts on the United 
States: the Potential Consequences of Climate Variability and Change 
was released in November of 2000, following an extensive series of peer 
reviews and public comment. This first document, the Overview, was 
followed about a month later by the release of the Foundation, a much 
more extensive, fully documented background document that lays out all 
of the analytical detail and data that were used in the National 
Assessment. The National Assessment is an extensive synthesis of the 
best available scientific information on this important topic.
    There are three questions about climate change that dominate 
discussions of this important topic. How much climate change is going 
to occur? What will happen as a result? What can countries do about it? 
There are obviously heated political opinions about each of these, but 
the issues are real, and it is critical to understand the underlying 
scientific knowledge about each if sound decisions are to be made. The 
assessment report focuses on the second of these questions.
    A national assessment of the potential impacts of climate change 
was called for in the 1990 legislation that established the US Global 
Change Research Program (USGCRP). For several years, the research 
program focused on developing the basic scientific knowledge that the 
international scientific assessment process overseen by the 
Intergovernmental Panel on Climate Change (IPCC) depends on. The IPCC 
was jointly established by the World Meteorological Organization and 
the United Nations Environmental Programme in 1988. As scientific 
research has provided compelling evidence that climate change is in 
fact occurring, it has become increasingly clear that there is a need 
to understand what is at stake for natural resources and human well-
being in the US. In response to this need, in 1998, Dr. John H. 
Gibbons, then Science Advisor to the President, requested the USGCRP to 
undertake the national assessment originally called for in the 
legislation. Dr. Gibbons asked the USGCRP to investigate a series of 
important questions:

<bullet> What are the current environmental stresses and issues for the 
        United States that form a backdrop for additional impacts of 
        climate change?
<bullet> How might climate change and variability exacerbate or 
        ameliorate existing problems?
<bullet> What are the priority research and information needs that can 
        better prepare policy makers for making wise decisions related 
        to climate change and variability? What information and answers 
        to what key questions could help decision-makers make better-
        informed decisions about risk, priorities, and responses? What 
        are the potential obstacles to information transfer?
<bullet> What research is most important to complete over the short 
        term? Over the long term?
<bullet> What coping options exist that can build resilience to current 
        environmental stresses, and also possibly lessen the impacts of 
        climate change? How can we simultaneously build resilience and 
        flexibility for the various sectors considering both the short 
        and long-term implications?
<bullet> What natural resource planning and management options make 
        most sense in the face of future uncertainty?
<bullet> What choices are available for improving our ability to adapt 
        to climate change and variability and what are the consequences 
        of those choices? How can we improve contingency planning? How 
        can we improve criteria for land acquisition?
    A variety of efforts emerged in response to Dr. Gibbons' charge.
    Over twenty workshops were held around the country, involving 
academics, business-people representing a range of industries including 
manufacturing, power generation and tourism, and people who work 
closely with land and water ecosystems including resource managers, 
ranchers, farmers, foresters and fishermen. Each workshop identified a 
range of issues of concern to stakeholders in those regions, many of 
them quite unrelated to climate change, per se. Most workshops were 
followed by the initiation of scientific, university-led regional 
studies.
    In addition to these kinds of ``bottom-up'' efforts, it was decided 
that it was also necessary to create a national-level synthesis of what 
is known about the potential for climate impacts for the US as a whole, 
addressing the issues identified in the regional workshops and national 
studies. This synthesis obviously needed to build on the work that had 
begun to emerge from the subsequent regional and national studies, but 
also to draw on the existing scientific literature and analyses done 
with the most up-to-date ecological and hydrological models and data 
that could be obtained. The National Assessment Synthesis Team (NAST) 
was established by the National Science Foundation as an independent 
committee under the Federal Advisory Committee Act (FACA) specifically 
in order to carry out this second step. This committee was made up of 
experts from academia, industry, government laboratories, and non-
governmental organizations (NGO's) (membership list is Attachment 1). 
In order to ensure openness and independence, all meetings of the NAST 
were open to the public, all documents discussed in its meetings are 
available through the National Science Foundation, as are all the 
review comments already received and responses to them. This is perhaps 
out of the ordinary for a scientific study; but most scientific studies 
do not focus on issues of such broad and deep implications for American 
society, and about which there is such heated rhetoric.
    The NAST's first action was to publish a plan for the conduct of 
the national synthesis. In addition, five issues (agriculture, water, 
forests, health, and coastal and marine systems), out of the many 
identified, were selected to be topics for national studies. Carrying 
out this plan was a major undertaking. The end result has been the 
production of a comprehensive two-volume national assessment report. 
The ``Foundation'' volume is more than 600 pages long, with more than 
200 figures and tables, with analyses of the five national sectors, and 
9 regions that together cover the entire US. It is extensively 
referenced, and a commitment was made that all sources used in its 
preparation were to be open and publicly available. The ``Overview'' 
volume is about 150 pages long, written in a style that is more 
accessible to the lay public, and summarizes the Foundation in a way 
that is understandable and informative, and which we are confident is 
scientifically sound. Both documents have already been through 
extensive review. At the end of 1999, two rounds of technical peer 
review were undertaken, and during the spring of 2000, an additional 
review by about 20 experts outside the assessment process was 
undertaken. Over 300 sets of comments have been received from 
scientists in universities, industry, NGO's, and government 
laboratories. The responses to all external comments have been 
described in comprehensive review memorandums. The final stage of the 
process was a 60-day public comment period specifically requested by 
Congress, after which final revisions were completed and the report 
was submitted to the President so that it could be transmitted 
to Congress, as called for in the original legislation. Hundreds of 
additional comments were received during the public comment period, 
each of which was responded to.
    In order to ensure that the NAST carried out its charge well, an 
oversight panel was also established through the offices of the 
President's Committee of Advisors on Science and Technology (membership 
list is Attachment 2). The oversight panel was chaired by Dr. Peter 
Raven, Director of the Missouri Botanical Garden and former Home 
Secretary of the National Academy of Sciences, and Dr. Mario Molina, 
Professor of Atmospheric Chemistry at MIT, and recent Nobel-prize 
winner for his research on stratospheric ozone depletion. Its 
membership, like the NAST's, was also drawn from academia, industry, 
and NGO's. It reviewed and approved the plans for the assessment, 
reviewed each draft of the report, and reviewed the response of the 
NAST to all comments.
    What have been the results of this extraordinarily open process? 
What assumptions drive the analysis? What conclusions have been 
reached?
    It is important to realize that the national assessment does not 
attempt to predict exactly what the future will hold for the US. It 
examined the potential implications of two primary climate scenarios, 
each based on the same assumptions about future ``business as usual'' 
global emissions of greenhouse gases that the IPCC has used for many of 
its analyses. The two climate scenarios were based on output from two 
different global climate models used in the IPCC assessment. They are 
clearly within the range of global annual average temperature changes 
shown by many such models, one near the low and one near the high end 
of the range. Both exhibit warming trends for the US that are larger 
than the global average. This is not surprising. For many years, one of 
the most robust results of global climate models has been that greater 
warming is expected in more northerly latitudes, and that land surfaces 
are expected to warm more than the global average. We have used 
assumptions that are entirely consistent with those used by the IPCC. 
In addition to the two primary models from the Canadian Climate Centre 
and the Hadley Centre, results from climate models developed at the 
National Center for Atmospheric Research, NOAA's Geophysical Fluid 
Dynamics Laboratory, NASA's Goddard Institute for Space Studies, and 
the Max Planck Institute were also used in various aspects of the 
Assessment.
    The NAST was aware of the scientific issues surrounding the use of 
regional results from any general circulation models. In the analyses 
done with the climate models' regional outputs, simulations from the 
models were used to adjust historically observed data, using methods 
that had already been peer-reviewed in other studies, in order to 
depict scenarios that had sufficient regional richness for analysis.
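    The approach just described, applying model-projected changes to an 
observed baseline rather than relying on raw model output, is commonly 
known as a ``delta'' or change-factor method. The short Python sketch 
below illustrates that general technique only; it is not the 
Assessment's actual code, and the station climatology, variable names, 
and simple monthly structure are hypothetical assumptions for 
illustration.

# Illustrative sketch of a simple monthly "delta" (change-factor)
# adjustment; hypothetical inputs, not the National Assessment's code.
import numpy as np

def delta_adjust(observed, model_baseline, model_future):
    """Shift observed monthly temperatures by the model-projected change."""
    delta = np.asarray(model_future, float) - np.asarray(model_baseline, float)
    # The projected change, not the raw model climate, is applied to the
    # observed record, so regional detail comes from the observations.
    return np.asarray(observed, float) + delta

# Hypothetical 12-month mean temperatures (deg C): observed station data,
# the model's historical baseline, and the model's future period.
obs  = [2, 4, 9, 14, 19, 24, 27, 26, 22, 15, 9, 4]
base = [1, 3, 8, 13, 18, 23, 26, 25, 21, 14, 8, 3]
fut  = [4, 6, 11, 16, 21, 26, 29, 28, 24, 17, 11, 6]
print(delta_adjust(obs, base, fut))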
    In addition to models, the Assessment used two other ways to think 
about potential future climate. First, it used historical climate 
records to evaluate sensitivities of regions and sectors to climate 
variability and extremes that have occurred in the 20th century. 
Looking at real historical climate events, their impacts, and how 
people have adapted, gives valuable insights into potential future 
impacts that complement those provided by model projections. In 
addition, the Assessment used sensitivity analyses, which ask how, and 
how much, the climate would have to change to bring major impacts on 
particular regions and sectors.
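    A sensitivity analysis of this kind can be sketched generically as a 
sweep over hypothetical climate perturbations, asking at what amount of 
change a chosen impact measure crosses a threshold of concern. The 
sketch below is purely illustrative: the impact function, threshold, and 
step size are invented placeholders and do not come from the Assessment.

# Generic sensitivity sweep: find the smallest warming whose
# (hypothetical) impact index exceeds a chosen threshold.
def impact_of_warming(delta_t_c):
    """Hypothetical impact index that grows nonlinearly with warming (deg C)."""
    return 0.5 * delta_t_c + 0.2 * delta_t_c ** 2

def smallest_change_exceeding(threshold, step=0.1, max_delta=10.0):
    """Return the smallest warming (deg C) whose impact exceeds threshold."""
    delta = 0.0
    while delta <= max_delta:
        if impact_of_warming(delta) > threshold:
            return delta
        delta += step
    return None  # threshold not reached within the range explored

print(smallest_change_exceeding(threshold=3.0))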
    These climate scenarios describe significantly different futures 
that are scientifically plausible, given our current understanding of 
how the climate system operates. As importantly, they describe separate 
baselines for analysis of how natural ecosystems, agriculture, water 
supplies, etc. might change as a result. In order to investigate such 
changes, i.e. the potential impacts of climate changes, the report 
relies on up-to-date models, on empirical observations from the 
literature, on investigations of how these systems have responded to 
climate variability that has been observed over the past century in the 
US, and on the accumulated scientific knowledge that is available about 
the sensitivities of resources to climate, and about how the regions of 
the US have and potentially could respond.
    One additional important point about the scenarios should be 
mentioned. The report does not average the results of models that 
disagree; it explicitly avoids doing so. The best example of this is in 
the analysis of potential changes in precipitation, where the two 
models used to create the scenarios give quite different results for 
some areas of the US. We have chosen to highlight these differences and 
explain that regional-scale precipitation projections are much more 
uncertain compared with temperature, rather than attempting to merge 
the results or guess which is more likely. The knowledge that the 
direction of precipitation change in some areas is quite uncertain is 
valuable for planning purposes, and clearly represents and important 
research challenge. There is however, consistency among models and 
observations on other aspects of precipitation changes. For example, 
both models and observations show an increase in the proportion of 
precipitation derived from heavy and extreme events as the climate 
warms. So, both types of information are pertinent to help with the 
identification of potential coping actions. In this respect, the report 
follows the procedure that the IPCC itself uses for its global impacts 
reports, each of which examines the potential impacts for entire 
continents.
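    One consistent statistic referred to above, the share of total 
precipitation coming from heavy events, can be illustrated with a short 
calculation. The sketch below uses synthetic random data and a common 
but arbitrary definition of ``heavy'' (daily totals above the 95th 
percentile of wet days); it illustrates the statistic only and is not an 
analysis from the report.

# Illustrative only: fraction of total precipitation falling in "heavy"
# events, defined here as daily amounts above the 95th percentile of
# wet days (>= 1 mm). The daily series is synthetic random data.
import numpy as np

def heavy_precip_fraction(daily_precip_mm, wet_day_mm=1.0, pct=95):
    p = np.asarray(daily_precip_mm, float)
    wet = p[p >= wet_day_mm]
    if wet.size == 0 or p.sum() == 0:
        return 0.0
    threshold = np.percentile(wet, pct)
    return p[p > threshold].sum() / p.sum()

rng = np.random.default_rng(0)
sample = rng.gamma(shape=0.4, scale=8.0, size=365)  # synthetic daily series (mm)
print(round(heavy_precip_fraction(sample), 3))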
    The US national assessment presents the results for each scenario 
clearly, and then takes the important additional step of explicitly 
describing the NAST's scientific judgment about the uncertainty 
inherent in each result. Those results that are viewed to be robust are 
described in more certain terms; those viewed to be the result of 
poorly understood or unreconciled differences between models are 
described in more circumspect language. The lexicon of terms used to 
denote the NAST's greater or lesser confidence is explicitly described 
in the beginning of the Overview report. This helps ensure that the 
report does not mask important results by thoughtlessly merging models, 
or overstating the scientific capability for assessing potential 
impacts. Finally, the report begins to identify possible options for 
adaptation to this changing world. It does not do a complete analysis 
of the costs, benefits, or feasibility of these options however, which 
is a necessary next step for developing policies to address these 
issues.
    The report's key findings present important observations for all 
Americans:
    1. Increased warming. Assuming continued growth in world greenhouse 
gas emissions, the climate models used in this Assessment project that 
temperatures in the US will rise 5-10 deg.F (3-5 deg.C) on average in 
the next 100 years.
    2. Differing regional impacts. Climate change will vary widely 
across the US. Temperature increases will vary somewhat from one region 
to the next. Heavy and extreme precipitation events are likely to 
become more frequent, yet some regions will get drier. The potential 
impacts of climate change will also vary widely across the nation.
    3. Vulnerable ecosystems. Many ecosystems are highly vulnerable to 
the projected rate and magnitude of climate change. A few, such as 
alpine meadows in the Rocky Mountains and some barrier islands, are 
likely to disappear entirely in some areas. Others, such as forests of 
the Southeast, are likely to experience major species shifts or break 
up. The goods and services lost through the disappearance or 
fragmentation of certain ecosystems are likely to be costly or 
impossible to replace.
    4. Widespread water concerns. Water is an issue in every region, 
but the nature of the vulnerabilities varies, with different nuances in 
each. Drought is an important concern in every region. Floods and water 
quality are concerns in many regions. Snow-pack changes are especially 
important in the West, Pacific Northwest, and Alaska.
    5. Secure food supply. At the national level, the agriculture 
sector is likely to be able to adapt to climate change. Overall, US 
crop productivity is very likely to increase over the next few decades, 
but the gains will not be uniform across the nation. Falling prices and 
competitive pressures are very likely to stress some farmers, while 
benefiting consumers.
    6. Near-term increase in forest growth. Forest productivity is 
likely to increase over the next several decades in some areas as trees 
respond to higher carbon dioxide levels. Over the longer term, changes 
in larger-scale processes such as fire, insects, droughts, and disease 
will possibly decrease forest productivity. In addition, climate change 
is likely to cause long-term shifts in forest species, such as sugar 
maples moving north out of the US.
    7. Increased damage in coastal and permafrost areas. Climate change 
and the resulting rise in sea level are likely to exacerbate threats to 
buildings, roads, power lines, and other infrastructure in climatically 
sensitive places. For example, infrastructure damage is related to 
permafrost melting in Alaska, and to sea-level rise and storm surge in 
low-lying coastal areas.
    8. Adaptation determines health outcomes. A range of negative 
health impacts is possible from climate change, but adaptation is 
likely to help protect much of the US population. Maintaining our 
nation's public health and community infrastructure, from water 
treatment systems to emergency shelters, will be important for 
minimizing the impacts of water-borne diseases, heat stress, air 
pollution, extreme weather events, and diseases transmitted by insects, 
ticks, and rodents.
    9. Other stresses magnified by climate change. Climate change will 
very likely magnify the cumulative impacts of other stresses, such as 
air and water pollution and habitat destruction due to human 
development patterns. For some systems, such as coral reefs, the 
combined effects of climate change and other stresses are very likely 
to exceed a critical threshold, bringing large, possibly irreversible 
impacts.
    10. Uncertainties remain and surprises are expected. Significant 
uncertainties remain in the science underlying regional climate changes 
and their impacts. Further research would improve understanding and our 
ability to project societal and ecosystem impacts, and provide the 
public with additional useful information about options for adaptation. 
However, it is likely that some aspects and impacts of climate change 
will be totally unanticipated as complex systems respond to ongoing 
climate change in unforeseeable ways.
    Given these findings it is clear that climate impacts will vary 
widely across the Nation, as one would expect for a country as large 
and ecologically diverse as the US. Natural ecosystems appear to be 
highly vulnerable to climate changes of the magnitude and rate which 
appear to be likely; some ecosystems surprisingly so. The potential 
impacts on water resources are an important issue in every region 
examined, although the nature of the concern is very different for the 
mountainous West than for the East. The potential for drought is a 
concern across the country. The nation's food supply appears secure, 
but there are very likely to be regional gains and losses for farmers, 
leading to a more complex picture on a region-by-region basis. Forests 
are likely to grow more rapidly for a few decades because of increasing 
carbon dioxide concentrations in the atmosphere, but it is unclear 
whether those trends will be maintained as the climate system itself 
changes, leading to other disturbances such as fire and pest outbreaks. 
However, the climate change itself will, over time, lead to shifts in 
the tree species in each region of the country, some of them 
potentially quite profound. Coastal areas in many parts of the US and 
the permafrost regions of Alaska are already experiencing disruptions 
from sea-level rise and recent regional warming; these trends are 
likely to accelerate. Climate change will very likely magnify the 
cumulative impacts of other environmental stresses about which people 
are already concerned, such as air and water pollution, and habitat 
destruction due to development patterns. There are clearly links 
between human health, current climate, and air pollution. The future 
vulnerability of the US population to the health impacts of climate 
change depends on our capacity to adapt to potential adverse changes. 
Many of these adaptive responses are desirable from a public health 
perspective irrespective of climate change. Future assessments need to 
consider climate change in the context of the suite of environmental 
stresses that we all face. Perhaps most importantly, the report 
acknowledges very clearly that scientific uncertainties remain, and 
that we can expect surprises as this uncontrolled experiment with the 
Earth's geochemistry plays out over the coming decades.
                              Attachment 1
               national assessment synthesis team members
Jerry M. Melillo, Co-chair, Ecosystems Center, Marine Biological 
Laboratory; Anthony Janetos, Co-chair, World Resources Institute; 
Thomas R. Karl, Co-chair, NOAA, National Climatic Data Center; Robert 
Corell (from January 2000), American Meteorological Society and Harvard 
University; Eric J. Barron, Pennsylvania State University; Virginia 
Burkett, USGS, National Wetlands Research Center; Thomas F. Cecich, 
Glaxo Wellcome, Inc.; Katharine Jacobs, Arizona Department of Water 
Resources; Linda Joyce, USDA Forest Service; Barbara Miller, World Bank; 
M. Granger Morgan, Carnegie Mellon University; Edward A. Parson (until 
January 2000), Harvard University; Richard G. Richels, EPRI; and David 
S. Schimel, National Center for Atmospheric Research. Additional Lead 
Authors: David Easterling (NOAA National Climatic Data Center); Lynne 
Carter (National Assessment Coordination Office); Benjamin Felzer 
(National Center for Atmospheric Research); John Field (University of 
Washington); Paul Grabhorn (Grabhorn Studio); Susan J. Hassol (Aspen 
Global Change Institute); Michael MacCracken (National Assessment 
Coordination Office); Joel Smith (Stratus Consulting); and Melissa 
Taylor (National Assessment Coordination Office).
                              Attachment 2
 independent review board of the president's committee of advisers on 
                     science and technology (pcast)
Peter Raven, Co-chair, Missouri Botanical Garden and PCAST; Mario 
Molina, Co-chair, MIT and PCAST; Burton Richter, Stanford University; 
Linda Fisher, Monsanto; Kathryn Fuller, World Wildlife Fund; John 
Gibbons, National Academy of Engineering; Marcia McNutt, Monterey Bay 
Aquarium Research Institute; Sally Ride, University of California San 
Diego and PCAST; William Schlesinger, Duke University; James Gustave 
Speth, Yale University; and Robert White, University Corporation for 
Atmospheric Research and Washington Advisory Group.

    Mr. Greenwood. Thank you, Dr. Janetos. Thank you very much 
for your testimony.
    Dr. Karl.

                   TESTIMONY OF THOMAS R. KARL

    Mr. Karl. Good morning, Chairman Greenwood and members of 
the subcommittee. I was one of the three co-chairs of the 
report of the National Assessment Team. As Dr. Janetos has 
indicated, the synthesis team was comprised of scientists and 
other specialists from universities, industries, governments 
and non-governmental organizations.
    The National Assessment reports are not policy positions or 
official statements of the U.S. Government. Rather, they were 
produced by a selected set of members of the scientific 
community and offered to the government for its consideration. 
I am very pleased to have this opportunity to present the 
testimony regarding the basis for the scenarios of the 21st 
century climate used in the National Assessment.
    The purpose of the National Assessment was to synthesize, 
evaluate, and report on what we knew about the consequences of 
climate variability and change for the United States in the 
21st Century.
    The National Assessment was our first attempt to generate 
climate scenarios for various regions and sectors across the 
United States. It relied on a number of techniques to develop 
climate scenarios for the 21st Century, including historical 
data to examine the continuation of trends, the recurrence of 
past climate extremes, climate model simulations in an attempt to 
provide plausible scenarios for how the future climate may 
change, and sensitivity analyses to explore the resilience of 
societal and ecological systems to climate fluctuations and 
change.
    Numerous climate models were used in the National 
Assessment, but the two primary models were selected on the 
basis of a set of objective criteria that I have described in 
some detail in my written testimony. Today, if the assessment 
were repeated with similar criteria, results of several other 
models would be included.
    As I described in some detail in my written testimony, a 
comparison of the models used in the National Assessment with 
observations and other models indicates that the two primary 
models used in the National Assessment reflected the state of 
scientific understanding when the National Assessment was 
conducted between 1997 and 2000.
    This had important consequences. For example, the amount of 
summertime precipitation expected over much of the contiguous 
USA as the climate warmed was quite uncertain and required the 
use of several what-if analyses to assess potential impacts. 
Other projected changes were less uncertain, like increased 
temperatures everywhere during all seasons. So the impact 
analysis could focus on the magnitude of the warming as opposed 
to the sign of the projected changes.
    Interestingly, despite the fact that global models do not 
agree well in the sign of summer precipitation changes, in 
general climate models indicate that as greenhouse gases 
increase, on average more intense precipitation will occur. 
Indeed, observations in the USA and elsewhere reflect this 
today. That is, a greater proportion of the total precipitation 
occurs in heavy and very heavy precipitation events.
    This attribute of precipitation change was another scenario 
considered by the sectoral and regional impact and adaptation 
assessments. Given the many differences among models, wherever 
feasible the National Assessment relied on multiple model 
simulations to assess impacts. A particularly 
noteworthy example comes from the Great Lakes region. Results 
from 10 models were used to assess changes in Great Lake levels 
during the 21st Century.
    In conclusion, the National Assessment we conducted on the 
impact of climate change had significant limitations, but was 
an important first step. Quite clearly, more needs to be done, 
and such efforts can provide more effective decision support 
tools, help frame adaptation and mitigation measures to avoid the 
potential risk and harm of climate change, and maximize the 
potential benefits.
    I want to thank the chairman for allowing me the 
opportunity to describe the rationale used in the National 
Assessment to develop the climate scenarios for the 21st 
Century. I would be happy to answer any questions later. Thank 
you.
    [The prepared statement of Thomas R. Karl follows:]
Prepared Statement of Thomas R. Karl, Director, National Climatic Data 
    Center, National Environmental Satellite, Data, and Information 
       Services, National Oceanic and Atmospheric Administration
                              introduction
    Good morning, Chairman Greenwood and members of the Subcommittee. I 
am Thomas R. Karl, Director of NOAA's National Climatic Data Center. I 
was invited to appear today because I was one of the three Co-Chairs of 
the Report of the National Assessment Synthesis Team (NAST).
    I would like to begin by emphasizing that the reports of the 
National Assessment Synthesis Team are not a product of the U.S. 
Government, and they do not represent government policy. In fact, they 
have sometimes been quite controversial. The National Assessment 
Synthesis Team is an advisory committee chartered under the Federal 
Advisory Committee Act. The NAST reports are not policy positions or 
official statements of the U.S. government. Rather, they were produced 
by selected members of the scientific community and offered to the 
government for its consideration.
    The Synthesis Team was comprised of individuals drawn from 
governments, universities, industry, and non-governmental organizations 
that had responsibility for broad oversight of the National Assessment 
entitled ``Climate Change Impacts on the United States--The Potential 
Consequences of Climate Variability and Change.'' The purpose of the 
Assessment was to synthesize, evaluate, and report on what we presently 
know--and don't know--about the potential consequences of climate 
variability and change for the United States in the 21st century. It 
attempted to review climate vulnerabilities of particular regions of 
the nation and of particular sectors, and sought to provide a number of 
adaptation measures to reduce the risk, and maximize the potential 
benefits and opportunities of climate change, whatever its cause. The 
National Assessment was conducted from 1997 to 2000 and was our first 
attempt to generate climate scenarios for various regions and sectors 
across the United States, which turned out to be a very challenging 
task. I am very pleased to have this opportunity to present testimony 
regarding the basis for the scenarios of 21st century climate used in 
the National Assessment.
    As a basis for the National Assessment, and in the context of the 
uncertainties inherent in looking forward 100 years, the NAST pursued a 
three-pronged approach to considering how much the climate may change. 
The three approaches involved use of: (1) historical data to examine 
the continuation of trends or recurrence of past climatic extremes; (2) 
comprehensive, state-of-the-science (though still with significant 
limitations), model simulations to provide plausible scenarios for how 
the future climate may change; and (3) sensitivity analyses that can be 
used to explore the resilience of societal and ecological systems to 
climatic fluctuations and change. Of particular interest for this 
hearing is the second of these approaches, and that is where I will 
focus my remarks. As a preface, however, I note that the National 
Assessment rests on a combination of these approaches.
         developing model-based scenarios for the 21st century
Projecting changes in factors that influence climate
    Because future trends in fossil fuel use and other human activities 
are uncertain, the Intergovernmental Panel on Climate Change (IPCC) has 
developed a set of scenarios for how the 21st century may evolve. These 
scenarios consider a wide range of possibilities for changes in 
population, economic growth, technological development, improvements in 
energy efficiency and the like. The two primary climate scenarios used 
in the National Assessment were based on a mid-range emission scenario 
used in the second IPCC report. This scenario assumes no major changes 
in policies to limit greenhouse gas emissions. Other important 
assumptions in the scenario are that by the year 2100:

<bullet> world population is projected to nearly double to about 11 
        billion people;
<bullet> the global economy is projected to continue to grow at about 
        the average rate it has been growing, reaching more than ten 
        times its present size;
<bullet> increased use of fossil fuels is projected to triple 
        CO<INF>2</INF> emissions and raise sulfur dioxide emissions, 
        resulting in atmospheric CO<INF>2</INF> concentrations of just 
        over 700 parts per million; and
<bullet> total energy produced each year from non-fossil sources such 
        as wind, solar, biomass, hydroelectric, and nuclear is 
        projected to increase to more than ten times its current 
        amount, providing more than 40% of the world's energy, rather 
        than the current 10%.
    There are a number of other important factors besides fossil fuel 
emissions that cause climate to change and vary. These were not part of 
the scenario used to drive climate change in the two primary models 
used in the National Assessment, because at the time of the National 
Assessment these simulations were not available. Figure 1 depicts the 
magnitude of these other climate forcings that were omitted from the 
emission scenario. Clearly, the two largest forcings are those related 
to increases in greenhouse gases and aerosols, both included in the two 
primary models used in the National Assessment. The addition of other 
forcings is an important consideration for improvement of future 
assessments, for example the role of black carbon aerosols and a more 
thorough treatment of land-vegetation feedback effects, which become 
quite important on local and regional space scales compared to global 
scales (e.g., the urban heat island).
Which models to use?
    The NAST developed a set of guidelines to aid in narrowing the set 
of primary model simulations to be considered for use by the Assessment 
teams. This helped ensure a degree of consistency across the broad 
number of research teams participating in the Assessment. These 
guidelines included various aspects related to the structure of the 
model itself, the character of the simulations, and the availability of 
the needed results. Specifically this meant that the models must, to 
the greatest extent possible:

<bullet> be coupled atmosphere-ocean general circulation models that 
        include comprehensive representations of the atmosphere, 
        oceans, and land surface, and the key feedbacks affecting the 
        simulation of climate and climate change;
<bullet> simulate the evolution of the climate through time from at 
        least as early as the start of the detailed historical record 
        in 1900 to at least as far into the future as the year 2100 
        based on a well-understood scenario for changes in atmospheric 
        composition that takes into account time-dependent changes in 
        greenhouse gas and aerosol concentrations;
<bullet> provide the highest practicable spatial and temporal 
        resolution (roughly 200 miles [about 300 km] in longitude and 
        175 to 300 miles [about 275 to 425 km] in latitude over the 
        central US);
<bullet> include the diurnal cycle of solar radiation in order to 
        provide estimates of changes in minimum and maximum temperature 
        and to be able to represent the development of summertime 
        convective rainfall;
<bullet> be capable, to the extent possible, of representing 
        significant aspects of climate variations such as the El Nino-
        Southern Oscillation cycle;
<bullet> have completed their simulations in time to be processed for 
        use in impact models and to be used in analyses by groups 
        participating in the National Assessment;
<bullet> be models that are well-understood by the modeling groups who 
        participated in the development of the Third Assessment Report 
        of the Intergovernmental Panel on Climate Change (IPCC) in 
        order to ensure comparability between the US efforts and those 
        of the international community;
<bullet> provide a capability for interfacing their results with 
        higher-resolution regional modeling studies (e.g., mesoscale 
        modeling studies using resolutions finer by a factor of 5 to 
        10); and
<bullet> allow for a comprehensive array of their results to be 
        provided openly over the World Wide Web.
    Including at least the 20th century in the simulation adds the 
value of comparisons between the model results and the historical 
record and can be used to help initialize the deep ocean to the correct 
values for the present-day period. Having results from models with 
specific features, such as simulation of the daily cycle of 
temperature, which is essential for use in cutting edge ecosystem 
models, was important for a number of applications that the various 
Assessment teams were planning.
    At the time of the National Assessment only two models, the 
Canadian Climate Centre Model and the United Kingdom's Hadley Centre 
model, were able to satisfactorily meet these criteria. Today however, 
if the Assessment were repeated with the same criteria, several more 
models would meet these criteria, including modeling efforts in the 
USA. Let me emphasize the importance of this, which represents another 
limitation of the National Assessment. In 1998 the Climate Research 
Committee (which I chaired) of the National Research Council issued a 
report, Capacity of U.S. Climate Modeling to Support Climate Change 
Assessment Activities. While improvements in model capability have 
occurred during the past four years, key findings from the CRC report 
are worthy of note:
        The CRC finds that the United States lags behind other 
        countries in its ability to model long-term climate change. 
        Those deficiencies limit the ability of the United States to 
        predict future climate states . . . Although collaboration and 
        free and open information and data exchange with foreign 
        modeling centers are critical, it is inappropriate for the 
        United States to rely heavily upon foreign centers to provide 
        high-end capabilities. There are a number of reasons for this, 
        including the following: (1) U.S. scientists do not necessarily 
        have full, open and timely access to output from European 
        models . . . (2) Decisions that might substantially affect the 
        U.S. economy might be made based upon considerations of 
        simulations (e.g. nested-grid runs) produced by countries with 
        different priorities than those of the United States.
    Furthermore, the report noted, ``While leading climate models are 
global in scale, their ability to represent small-scale, regionally 
dependent processes . . . can currently only be depicted in them using 
high-resolution, nested grids. It is reasonable to assume that foreign 
modeling centers will implement such nested grids to most realistically 
simulate processes on domains over their respective countries which may 
not focus on or even include the United States.''
The use of observations
    Observations were an essential part of developing climate scenarios 
for the 21st century in the National Assessment. Reliance on model 
simulations provides only a limited opportunity to investigate the 
consequences of climate variability and change. To minimize this 
limitation, the National Assessment used the historical record 
to help determine regional and sector-specific sensitivities to climate 
changes and variations of differing, but contextually realistic, magnitudes.
    The observations were also used to understand how the models 
simulated present and past climate (see Figure 2), and to correct a 
number of model biases. While climate models have shown significant 
improvement over recent decades, and the models used in the National 
Assessment were among the world's best, there were a number of 
shortcomings in applying the models to study potential regional-scale 
consequences of climate change. This is a fundamental limitation to the 
results of the National Assessment, and should be kept in mind. In the 
National Assessment, several methods were used in an attempt to address 
these problems. Most importantly, the output from the primary models 
(the Hadley and Canadian) for temperature and precipitation were passed 
through a set of standardization processing algorithms to re-calibrate 
the model simulations with the observations. This is especially 
important in areas of complex terrain such as mountainous regions of 
the West where model resolution was insufficient to adequately resolve 
detailed small-scale climate characteristics. The processing procedure 
accounted for at least some of the shortcomings and biases in the 
models. So, the model scenario results used in the impact assessments 
were often adjusted to remove the systematic differences with 
observations that were present in the model simulations. Such a 
procedure is similar to what is now being implemented in daily weather 
forecasting, where actual model projections are not used, but rather 
the historical statistical and dynamical relationships between the 
weather model forecasts and actual observations are used to generate 
local weather forecasts. This adjustment process is fully described in 
the foundation report of the National Assessment.
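    As a rough illustration of this kind of re-calibration (a minimal 
sketch only, not the Assessment's documented procedure, which is given in 
the foundation report; the data and names below are hypothetical), one 
simple approach rescales the model output so that its mean and variability 
over the historical baseline match the observations:

        import numpy as np

        def bias_correct(model_hist, model_future, obs_hist):
            # Rescale model output so its baseline mean and standard deviation
            # match the observations ("variance scaling"); an illustrative
            # stand-in for the Assessment's standardization algorithms.
            mu_m, sd_m = model_hist.mean(), model_hist.std()
            mu_o, sd_o = obs_hist.mean(), obs_hist.std()
            return (model_future - mu_m) * (sd_o / sd_m) + mu_o

        # Hypothetical example: a model that runs 2 degrees C too cold with
        # muted variability is re-centered on the observed baseline climate.
        rng = np.random.default_rng(0)
        obs_hist = 15.0 + 3.0 * rng.standard_normal(600)
        model_hist = 13.0 + 2.0 * rng.standard_normal(600)
        model_future = model_hist + 1.5   # the model's projected warming
        print(round(bias_correct(model_hist, model_future, obs_hist).mean(), 1))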
    In addition, some of the regional teams applied other types of 
``down-scaling'' techniques to the climate model results in order to 
derive estimates of changes occurring at a finer spatial resolution. 
One such technique has been to use the global climate model results as 
boundary conditions for mesoscale models that cover some particular 
region (e.g., the West Coast with its Sierra Nevada and Cascade 
Mountains). These models are able to represent important processes and 
mountain ranges on finer scales than do global climate models. These 
small-scale simulations, however, have not been as well tested as global 
models and are very computationally intensive. It has not yet been possible to 
apply the techniques nationally or for the entire 20th or 21st 
centuries. With the rapid advances in computing power expected in the 
future, this approach should become more feasible for future 
assessments. To overcome the computational limitations of mesoscale 
models, some of the Assessment Teams developed and tested empirically 
based statistical techniques to estimate changes at finer scales than 
the global climate models, and these efforts are discussed in the 
various regional assessment reports. These techniques have the 
important advantage of being based on observed weather and climate 
relationships, but have the shortcoming of assuming that the 
relationships prevailing today will not change in the future.
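    A minimal sketch of such an empirically based technique (a deliberately 
simple linear transfer function with hypothetical data, not the methods 
actually used by the regional teams) makes that stationarity assumption 
explicit:

        import numpy as np

        def fit_transfer_function(coarse_hist, station_hist):
            # Ordinary least squares fit relating a local station value to the
            # overlying coarse grid-cell value (slope and intercept).
            A = np.column_stack([coarse_hist, np.ones_like(coarse_hist)])
            slope, intercept = np.linalg.lstsq(A, station_hist, rcond=None)[0]
            return slope, intercept

        def downscale(coarse_future, slope, intercept):
            # Applies the historical relationship to future model output; the
            # key caveat is the assumption that this relationship will not
            # change as the climate changes.
            return slope * coarse_future + intercept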
    Another type of tool developed for use in the sensitivity analyses 
was statistical models and weather generators used to calculate 
probabilities of unusual weather and climate events. These models 
enabled impact analysts to compose ``what if'' questions for strings of 
weather and climate events that could be important to their specific 
sector or region. Other approaches focused on using a variety of other 
types of observational data.
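    The following is a minimal sketch of such a statistical weather 
generator (a first-order Markov chain for wet and dry days with 
hypothetical transition probabilities, not one of the Assessment's actual 
tools), used here to pose a simple ``what if'' question about the chance 
of a long dry spell:

        import numpy as np

        def simulate_wet_days(p_wet_after_dry, p_wet_after_wet, n_days, rng):
            # First-order Markov chain: today's wet/dry state depends only on
            # yesterday's state.
            wet = np.zeros(n_days, dtype=bool)
            for t in range(1, n_days):
                p = p_wet_after_wet if wet[t - 1] else p_wet_after_dry
                wet[t] = rng.random() < p
            return wet

        def longest_dry_spell(wet):
            longest = current = 0
            for is_wet in wet:
                current = 0 if is_wet else current + 1
                longest = max(longest, current)
            return longest

        # "What if" question: probability of a 20-day or longer dry spell in
        # a 90-day summer, estimated by Monte Carlo under assumed probabilities.
        rng = np.random.default_rng(1)
        trials = [longest_dry_spell(simulate_wet_days(0.3, 0.5, 90, rng)) >= 20
                  for _ in range(2000)]
        print(sum(trials) / len(trials))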
                        evaluation of the models
    Among the tests that have been used to evaluate the skill of 
climate models are evaluations of their ability to 
simulate present weather and climate, the cycle of the seasons, 
climatic variations over the past 20 years (the time period when the 
most complete data sets are available), climatic changes over the past 
100 to 150 years during which the world has warmed, and climatic 
conditions for periods in the geological past when the climate was 
quite different than at present.
    There are so many kinds of evaluations that can be made that it is not 
possible to provide one test to ascertain the appropriateness of any 
model for climate impact assessments. For example, models may be 
expected to reproduce the past climate for hemispheric and global 
averages on century time-scales because much of the climate noise due 
to seasonal to inter-annual climate variability tends to be less 
important. This includes many of the important climate oscillations 
such as the El Nino, the North Atlantic Oscillation, the Pacific 
Decadal Oscillation, and others. Because models generally replicate the 
chaotic behavior of the natural climate, the climate models simulate 
their own year-by-year climates and they will not produce the precise 
timing of these events to match the observations. On the other hand, 
the climate models may be expected to reproduce the statistical 
distribution of these events. So, to compare models to observations it 
is important to be able to average out these natural variations that 
can have very large impacts for given regions in specific years. For 
this reason in the National Assessment comparisons of the model 
simulations with observations on regional and subregional levels were 
made by averaging over multiple decades or longer.
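    For instance, a minimal sketch of that averaging step (a hypothetical 
helper, not the Assessment's actual processing) would reduce each annual 
regional series to multi-decade means before the model and the 
observations are compared:

        import numpy as np

        def multidecadal_means(annual_values, years_per_block=30):
            # Average an annual regional series into consecutive multi-decade
            # blocks, dropping any incomplete trailing block, so that year-to-
            # year "noise" such as El Nino is largely averaged out before a
            # model is compared with observations.
            annual_values = np.asarray(annual_values, dtype=float)
            n = (len(annual_values) // years_per_block) * years_per_block
            return annual_values[:n].reshape(-1, years_per_block).mean(axis=1)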
    In conducting climate model evaluations it is tempting to prefer 
those models where the simulations most closely match the observations, 
but several complications must be accounted for in such 
intercomparisons. First, there are inherent errors and biases in our 
observational data. Models, even if they are provided perfect forcing 
scenarios and had perfect chemistry, physics and biology, should not be 
expected to perfectly match imperfect observations. By cross comparing 
observations from differing data sets and observing systems we can 
roughly estimate some of the observational errors and biases. Second, 
because of the chaotic nature of the climate, we cannot expect to match 
the year-by-year or decade-by-decade fluctuations in temperature that 
have been observed during the 20th century. Third, the particular model 
simulations used in the National Assessment did not include 
consideration of all of the effects of human-induced and naturally-
induced changes that are likely to have influenced the climate, 
including changes in stratospheric and tropospheric ozone, volcanic 
eruptions, solar variability, and changes in land cover (and associated 
changes relating to biomass burning, dust generation, etc.). Finally, 
while it is desirable for model simulations not to have significant 
biases in representing the present climate, having a model that more 
accurately reproduces the present and past climate does not necessarily 
mean that projections of changes in climate developed using such a 
model would provide more accurate projections of climate change than 
models that do not give as accurate simulations. This can be the case 
for at least two reasons. First, what matters most for simulation of 
changes in future climate is proper treatment of the feedbacks that 
contribute to amplifying or limiting the changes, and accurate 
representation of the 20th century does not guarantee this will be the 
case. Second, because projected changes are calculated by taking 
differences between perturbed and unperturbed cases, the effects of at 
least some of the systematic biases present in a model simulation of 
the present climate can be eliminated. While potential nonlinearities 
and thresholds make it unlikely that all biases can be removed in this 
manner, it is also possible that the projected changes calculated by 
such a model could turn out to be more accurate than simulations with a 
model that provided a better match to the 20th century climate.
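    A toy numerical illustration of this differencing (assuming, for 
simplicity, a purely additive bias that is the same in the control and 
perturbed runs, which the nonlinearities noted above can violate):

        # Hypothetical regional temperatures in degrees C: a model that runs
        # 2 degrees too warm in both the control and the perturbed simulation
        # still yields the correct projected *change* once they are differenced.
        true_control, true_perturbed = 14.0, 17.0
        bias = 2.0
        model_control = true_control + bias       # 16.0
        model_perturbed = true_perturbed + bias   # 19.0
        print(model_perturbed - model_control)    # 3.0; the additive bias cancels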
    Recognizing these many limitations, evaluation of the simulations 
from the Canadian and Hadley models are briefly summarized here to give 
an indication of the kinds of tests climate scientists have completed 
to assess the general adequacy of the models for use in assessing the 
impacts of climate change and variability. As depicted in Figure 2 both 
primary models capture the rise in global temperature since the late 
1970s, but do not do as well in reproducing decadal variations. The 
question of how these two models compare to other climate models, 
several of which were not available at the time of the National 
Assessment, is addressed in Figure 3. Note that the scaling factor 
required to match the increase in temperature during the 20th 
century is close to one for all models, except for the Canadian Climate 
Model, which is somewhat less than one, reflecting the relatively high 
sensitivity of this model to increases in greenhouse gases, although 
the scaling factor in a later version of the model (CGCM2 in Figure 3) 
is closer to one. It is also noteworthy that the later version of the 
Hadley Centre Model very closely reproduces the rate of 20th century 
warming when a more complete set of forcings, indirect sulfate forcing 
and tropospheric ozone, is added to the model. Another test of a 
model's ability to reproduce 20th Century global temperatures is to 
compare the annual temperatures generated by the models with the 
observations. To assess relative skill, errors can be compared to 
projections based on temperature persistence. That is, always 
predicting the annual mean temperature to be equal to the longer-term 
mean over the length of the averaging period centered on either side of 
the prediction year. Figure 4 shows some results of such a test for 
averaging periods from 10 to 50 years. This is a difficult test for a 
model to show skill because the persistence forecast actually includes 
information about the annual mean temperature both before and after the 
``prediction year.'' In all cases the model simulations have smaller 
errors than the persistence based projection, indicating significant 
skill.
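    A minimal sketch of this kind of skill test (synthetic annual series 
and simple root-mean-square errors, not the calculation behind Figure 4) 
compares each model's error against the centered persistence reference 
described above:

        import numpy as np

        def rmse(a, b):
            return float(np.sqrt(np.mean((a - b) ** 2)))

        def persistence_rmse(obs, window):
            # Reference "forecast": each year's value is predicted to equal
            # the mean of the observations in a window centered on that year,
            # so it even uses information from after the prediction year.
            half = window // 2
            ref = np.array([obs[i - half:i + half + 1].mean()
                            for i in range(half, len(obs) - half)])
            return rmse(ref, obs[half:len(obs) - half])

        def model_rmse(obs, model, window):
            half = window // 2
            return rmse(model[half:len(obs) - half], obs[half:len(obs) - half])

        # Hypothetical annual global-mean anomalies sharing a common trend; a
        # model shows skill when its error is below the persistence error.
        rng = np.random.default_rng(2)
        years = np.arange(1900, 2001)
        obs = 0.006 * (years - 1900) + 0.1 * rng.standard_normal(years.size)
        model = 0.006 * (years - 1900) + 0.1 * rng.standard_normal(years.size)
        for window in (11, 31, 51):
            print(window, round(persistence_rmse(obs, window), 3),
                  round(model_rmse(obs, model, window), 3))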
    So, analyses at the global scale for the two primary models used in 
the National Assessment indicate that there is general agreement with 
the observed long-term trend in temperature over the 20th century, but 
the Canadian Climate Model is significantly more sensitive to 
greenhouse gases compared to the Hadley Centre Model, and may be 
thought of as the ``hotter'' of the two models. This higher climate 
sensitivity of the Canadian model may be due to its projection of an earlier 
melting of Arctic sea ice than in the Hadley model. It is not yet 
clear how rapidly this melting may take place.
    The question as to whether the Canadian Climate Model is an outlier 
can be addressed in Figure 5 where the global warming rate has been 
plotted for various models with similar forcings of greenhouse gases 
and sulfate aerosols. The Canadian Climate Model is seen to have a 
relatively high sensitivity to increases in greenhouse gases compared 
to other models, but its sensitivity is quite comparable to a model not 
used in the National Assessment, NOAA's Geophysical Fluid Dynamics 
Laboratory R15 model. So, although the Canadian model does appear to be 
one of the more sensitive models to increases in greenhouse gases, it 
is not an outlier. By comparison the Hadley Centre model appears to 
have moderate sensitivity to increases in greenhouse gases.
    The National Assessment was not performed on global space scales, 
so it is important to understand the differences between model 
simulations and observations on regional scales. As part of a long-term 
Climate Model Intercomparison Project (CMIP2), Dr. Benjamin Santer of 
the Lawrence Livermore National Laboratory has recently compared 
results from a number of climate models related to their ability to 
reproduce the annual mean precipitation and the annual cycle of 
precipitation across North America. The results of this study, which 
included the two primary models used in the National Assessment, are 
depicted in Figures 6 and 7. The figure shows the correlation between 
the patterns of the model output and the observations (the y-axis) 
along with a measure of the differences in actual precipitation (the x-
axis). If there were no errors in our observing capability, a perfect 
model would reproduce the observations exactly and have perfect 
correlation with the observations, the difference between any model 
grid point and the corresponding observational grid point would be zero, and it 
would appear as a point in the far upper left corner of the plot. By 
comparing two different observational data sets we can get an estimate 
of the errors in the observations and this has been done in Figures 6 
and 7 by comparing two different 20-year climatologies over North 
America by two different research groups. So, no model should be 
expected to be in the quadrant of the diagram to the upper left of the 
less than perfect observational data sets. It is clear in Figures 6 and 
7 that the Hadley Centre model used in the National Assessment 
reproduces the observations better than all other models, while the 
Canadian Climate Centre Model does not do as well, but is by no means 
an outlier.
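    The metrics behind such a comparison can be sketched roughly as 
follows (the actual CMIP analysis details differ; the two-dimensional 
precipitation fields here are hypothetical):

        import numpy as np

        def pattern_stats(model_field, obs_field):
            # Pattern correlation (the y-axis analogue) and relative difference
            # in area-mean precipitation (the x-axis analogue) between a model
            # field and an observational analysis on a common grid.
            m, o = model_field.ravel(), obs_field.ravel()
            correlation = float(np.corrcoef(m, o)[0, 1])
            relative_difference = float((m.mean() - o.mean()) / o.mean())
            return correlation, relative_difference

        # Comparing two observational data sets with the same function gives a
        # sense of the observational uncertainty no model is expected to beat.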
    Although the changes in global scale features and the regional 
simulations of precipitation of the two primary models are seen to be 
rather typical of other models, there are important issues on regional 
scales that suggest that significant uncertainties remain in our 
ability to effectively use these models for impact assessments. For 
example, problems with the way these climate models simulate ENSO 
variability suggest that the projected pattern of changes may not be 
definitive. Also, as illustrated by the different projections of 
changes in summer precipitation used in the National Assessment in the 
Southeast, there are often several processes that can contribute to the 
pattern of change. The same process can lead to different projections 
of changes when imposed on a slightly different base state of the 
climate. For example, the proportion of the oceans that are frozen 
versus liquid, the amount of snow cover extent, the dryness of the 
ground surface, the strength of North Atlantic deep water circulation, 
etc., all can play important roles. In addition, the different 
representations of land surface processes, clouds, sea-ice dynamics, 
horizontal and vertical resolution, as well as many other factors 
included in different climate models, can have an important impact on 
projections of changes in regional precipitation. This dependence 
occurs because precipitation, unlike atmospheric dynamics, is a highly 
regionalized feature of the climate, depending on the interaction of 
many processes, many of which require a set of model parameterizations. 
Given these many limitations, in the National Assessment the model 
simulations were viewed as projections not as predictions. The 
significance of this distinction can be seen in the following quote 
from the recently-released Climate Action Report 2002: ``Use of these 
model results is not meant to imply that they provide accurate 
predictions of the specific changes in climate that will occur over the 
next hundred years. Rather, the models are considered to provide 
plausible projections of potential changes for the 21st century. For 
some aspects of climate, the model results differ. For example, some 
models, including the Canadian model [used in this Assessment] project 
more extensive and frequent drought in the United States, while others, 
including the Hadley model [the other model used in the Assessment] do 
not. As a result, the Canadian model suggests a hotter and drier 
Southeast during the 21st century, while the Hadley model suggests 
warmer and wetter conditions. Where such differences arise, the primary 
model scenarios provide two plausible, but different alternatives.''
                  how were the model projections used?
    The model projections were used as indications of the types of 
consequences that might result. For example, as evident in Figure 2, 
although the emissions scenarios are the same for the Canadian and 
Hadley simulations, the Canadian model scenario projects more rapid 
global warming than does the Hadley model scenario. This greater 
warming in the Canadian model scenario occurs in part because the 
Hadley model scenario projects a wetter climate at both the national 
and global scales, and in part because the Canadian model scenario 
projects a more rapid melting of Arctic sea ice than the Hadley model 
scenario.
    Recognizing that all model results are plausible projections rather 
than specific quantitative predictions, the consistency of the 
temperature projections of the primary models used for the National 
Assessment were assessed in a broader context. Figure 8 illustrates how 
this strategy was used. It is apparent that virtually all models 
consistently show a much greater than global average warming over 
the US during winter and a greater than average warming during summer, 
except for Alaska. So, in the National Assessment all the scenarios of 
temperature change involved increased temperatures, and the increases 
were often as large as or larger than the global mean temperature 
increase.
    Although there are many similarities in the projected changes of 
temperature amongst the many climate models considered by the IPCC 
(Figure 8), this is not true of precipitation changes. In the National 
Assessment the Hadley Centre model often projected significantly wetter 
conditions compared to the Canadian model, but this variation is 
typical of our present state of understanding as depicted in Figure 9. 
Only during winter is there a consistent pattern of a small increase of 
precipitation among most of the climate models; by contrast during 
summer there is not much agreement about the sign or magnitude of the 
precipitation change, except for a general tendency for more 
precipitation in the high latitudes of North America. The 
inconsistencies among all the models with respect to summertime mid-
latitude North American precipitation (Figure 9) were reflected in the 
two scenarios used in the National Assessment, ensuring consideration 
of a range of possible outcomes. To address this range of possible 
outcomes a number of ``what if'' scenarios were developed and used in 
the National Assessment. For example, in the West, although both models 
in the National Assessment projected precipitation increases, a ``what-
if'' scenario of less precipitation was used to broaden the assessment 
of possible climate impacts, vulnerabilities, and adaptation measures.
    Interestingly, despite the fact that the global climate models do 
not agree well on the sign of summer precipitation changes, virtually 
all climate models indicate that as greenhouse gases increase more 
intense precipitation events will occur over many areas. Indeed, 
observations reflect this today in many mid and high latitude land 
areas where data are available for such an assessment. For these 
reasons, and because an increase in precipitation intensity can 
effectively be argued from simple thermodynamic considerations, this 
attribute of precipitation change was an important scenario considered 
by the sectoral and regional impact and adaptation assessments.
    It should also be noted that in the National Assessment, because of 
the differences among the various models, other model simulations were 
used wherever feasible to assess possible impacts. A particularly 
noteworthy example comes from the Great Lakes Region. Results from ten 
models were used to simulate changes in Great Lake levels during the 
21st century. All but one of the models suggested lower lake levels. So 
a combination of the primary models, other climate models, and 
observations were instrumental in identifying key climate impacts and 
vulnerabilities for the 21st Century.
                           future assessments
    To build confidence in the projections used for future climate 
assessments, much remains to be done. Further improvements in climate 
models are needed, especially in the representations of clouds, 
aerosols (and their interactions with clouds), sea ice, hydrology, 
ocean currents, regional orography, and land surface characteristics. 
Improving projections of the potential changes in atmospheric 
concentrations of greenhouse gases, aerosols and land use is important. 
Climate model simulations based on these revised emissions forecasts 
should provide improved sets of information for assessing climate 
impacts. In addition to having results from more models available, 
ensembles of simulations from several model runs are needed so that the 
statistical significance of the projections can be more fully examined. 
As part of these efforts, it is important to develop greater 
understanding of how the climate system works (e.g., of the role of 
atmosphere-ocean interactions and cloud feedbacks), to refine model 
resolution, to more completely incorporate existing understanding of 
particular processes into climate models, to more thoroughly test model 
improvements, and to augment computational and personnel resources in 
order to conduct and more fully analyze a wider variety of model 
simulations, including mesoscale modeling studies.
    While much remains to be done that will take time, much can also be 
done in the next few years that can substantially improve the set of 
products and tools available to assess climate impacts. For example, an 
intensified analysis program is needed to provide greater understanding 
of the changes and the reasons why they occur. New efforts to 
incorporate the interactive effects of changes in land use and 
vegetation in meso-scale and global models will help in understanding 
local and regional climate change and variability. A better 
understanding of the changes in weather patterns and extremes in 
relation to global changes is important. Improved efforts that combine 
analysis of the model results with the insights available from analysis 
of historical climatology and past weather patterns needs to be a 
priority. Regional climate scenarios can also be developed using a 
combination of climate model output and dynamical reasoning. More use 
of mesoscale models is important because they can provide higher 
resolution of spatial conditions.
    In the National Assessment, we were able to consider only one set 
of emission scenarios rather than a range of emission scenarios. For 
the future, the actual emissions of greenhouse gases and aerosols could 
be different than the baseline used. Changing the emissions scenario 
would give increasingly divergent climate scenarios as the time horizon 
expanded. This would likely become important only beyond the next few 
decades, since over the nearer term different emission scenarios are not 
likely to significantly affect climate scenarios because of the 
relatively slow response of the global climate and energy systems, and 
because a large portion of the change will be due to past emissions.
    As recently stated by the Assistant Secretary for Oceans and 
Atmosphere, Dr. Mahoney, the highest and best use of the scientific 
information developed in the combined United States Global Climate 
Research Program (USGCRP) and the President's Climate Change Research 
Initiative (CCRI) could be the development of comparative information 
that will assist decision makers, stakeholders and the general public 
in debating and selecting optimal strategies for mitigating global 
change, while maintaining sound economic and energy security conditions 
in the United States and throughout the world. Significant progress in 
developing and applying science-based decision tools during the next 1 
to 3 years must be a key goal of the combined USGCRP and CCRI program. 
Examples of analyses expected to be completed during this time period 
that would improve our nation's ability to conduct a subsequent National 
Assessment include:

<bullet> Long-term global climate model projections (e.g., up to the 
        year 2100) for a wide selection of potential mitigation 
        strategies, to evaluate the expected range of outcomes for the 
        different strategies.
<bullet> Detailed analyses of variations from defined ``base'' 
        strategies, to investigate the importance of specific factors, 
        and to search for strategies with optimum effectiveness.
<bullet> Linked climate change and ecosystem change analyses for 
        several suggested strategies, to search for optimum benefits.
<bullet> Detailed analyses of the outcomes that would be expected from 
        application of the wide selection of energy conservation 
        technologies, and carbon sequestration strategies, currently 
        being investigated by the National Climate Change Technology 
        Initiative.
                                summary
    The National Assessment conducted from 1997-2000 was a first step. 
It relied on a number of techniques to develop climate scenarios for 
the 21st century including: historical data to examine the continuation 
of trends or recurrence of past climatic extremes; climate model 
simulations in an attempt to provide plausible scenarios for how the 
future climate may change; and sensitivity analyses to explore the 
resilience of societal and ecological systems to climatic fluctuations 
and change. Numerous climate models were used in the National 
Assessment, but the two primary models were selected on the basis of a 
set of objective criteria. Today, if the Assessment were repeated with 
similar criteria, results of several other models would be 
included.
    Intercomparison of the models used in the National Assessment with 
observations and other models indicates that the two primary models 
used in the National Assessment reflect the state of scientific 
understanding approximately 2-3 years ago. This had important 
consequences. For example, the amount of summertime precipitation 
expected over much of the contiguous USA as the climate warmed was 
quite uncertain and required use of several ``what if'' analyses to 
assess potential impacts. Other projected changes were more certain, 
like increased temperatures everywhere, during all seasons, and impact 
analyses could focus on the magnitude as opposed to the sign of 
projected change.
    In conclusion, the National Assessment we conducted on the impact 
of climate variability and change had significant limitations, but was 
a first step. Quite clearly, more needs to be done and such efforts 
will provide more effective decision support tools to help frame 
adaptation and mitigation measures to avoid the risk and harm of 
climate change and maximize its potential benefits.
    It is important to note a major recommendation in the National 
Research Council's recent analysis (2001) of some key questions related 
to Climate Change Science. Specifically, that report states that ``the 
details of the regional and local climate change consequent to an 
overall level of global climate change'' requires further 
understanding. The uncertainties that surfaced in generating scenarios 
for the National Assessment were clearly in our minds when we made this 
recommendation.
    Resolving these uncertainties will be essential to understanding 
the scope of any climate change impact. Quite clearly, more needs to be 
done and such efforts will provide more effective decision support 
tools to help frame adaptation and mitigation measures to avoid the 
potential risk and harm of climate change and maximize its potential 
benefits.
[GRAPHIC] [TIFF OMITTED] 81495.001

    Figure 1 Global, annual-mean radiative forcings (Wm<SUP>-2</SUP>) 
due to a number of agents for the period from pre-industrial (1750) to 
present (about 2000). In the National Assessment forcings due to 
greenhouse gases (the first column) and sulfate (the fourth column) 
were the only forcings used in the emission scenario. The height of each 
vertical bar represents the best-estimate value, while its absence 
denotes that no best estimate is possible. The vertical line about the 
rectangular bar with ``x'' provides an estimate of the uncertainty 
range. (From IPCC, 2001) 
[GRAPHIC] [TIFF OMITTED] 81495.002

    Figure 2 Trends of global temperature from observations, the United 
Kingdom's Hadley Center Global Climate Model, and the Canadian Climate 
Center's Global Climate Model. Trends have been smoothed to remove 
year-to-year high frequency variations.
[GRAPHIC] [TIFF OMITTED] 81495.003

    Figure 3 Estimates of the ``scaling factors'' by which the 
amplitude of several model-simulated signals must be multiplied to 
reproduce the corresponding change in the observed record. The vertical 
lines represent the 5-95% confidence interval due to internal natural 
variability. The models used in the National Assessment were the HadCM2 
with greenhouse gases and sulfur (GS) and the CGCM1 with greenhouse 
gases and sulfur (GS). Abbreviations: GS includes greenhouse and 
sulfate forcing; GSIO also includes the indirect effect of 
sulfate aerosol forcing plus tropospheric ozone forcing. See IPCC (2001) 
for details.
[GRAPHIC] [TIFF OMITTED] 81495.004

    Figure 4 A comparison of the ability of the Hadley Center and 
Canadian Climate Center coupled global climate models used in the 
National Assessment to simulate the 20th century global climate 
compared with using the mean temperature over various time segments to 
predict year-to-year variations of global temperatures (persistence). 
Standard errors less than persistence based on observations reflect 
skillful simulations.
[GRAPHIC] [TIFF OMITTED] 81495.005

    Figure 5 The time evolution of the globally averaged temperature 
change (relative to 1961-90 mean temperature) for various climate 
models forced with the emission scenarios used in the National 
Assessment (see IPCC 2001 for details)
[GRAPHIC] [TIFF OMITTED] 81495.006

    Figure 6 Results of a coupled ocean-atmosphere global Climate Model 
Intercomparison Project (CMIP) being conducted by the Lawrence 
Livermore National Laboratory. This comparison relates to the spatial 
distribution of annual precipitation across North America. All models 
are compared to the ``Xie/Arkin'' observational data set. The 
difference between two differing observation-based data sets reflects 
observational uncertainties, so we would not expect any model to 
skillfully exceed these differences. All models are evaluated on the 
basis of pattern correlations with the observations and the relative 
differences of annual precipitation integrated across all model grid 
points in North America. The Hadley Center climate model used in the 
National Assessment is shown with an ``*'' and the Canadian Climate 
Center is shown with a ``#'' symbol.
[GRAPHIC] [TIFF OMITTED] 81495.007

    Figure 7 Similar to Figure 6 except the results relate to the 
ability of the models to reproduce the annual cycle of precipitation.
[GRAPHIC] [TIFF OMITTED] 81495.008

    Figure 8 Analysis of coupled ocean-atmosphere inter-model 
consistency in regional temperature change for the 21st century, 
categorized as much greater (40%) than average global warming, greater 
than average warming, less than average warming, inconsistent rates of 
warming, or cooling, based on five model simulations (the Hadley and Canadian models 
used in the National Assessment and three other models used in the IPCC 
(2001) assessment) with 21st century increases in both greenhouse gases 
and sulfates (see IPCC 2001 for details).
[GRAPHIC] [TIFF OMITTED] 81495.009

    Figure 9 Similar to Figure 8 except for precipitation and a large 
change represents a change in excess of 20% and a small change is 
between 5 and 20% (see IPCC, 2001 for more details).

    Mr. Greenwood. We thank you, Dr. Karl. Thank you so much.
    Dr. Lashof.

                  TESTIMONY OF DANIEL A. LASHOF

    Mr. Lashof. Thank you, Mr. Chairman. In summarizing my 
written statement I want to try to make three points.
    The first is on the general value of climate models in 
looking into the future and trying to understand what is going 
on.
    The second is the fact that the National Assessment and the 
models that underlie it were accepted not just by the Clinton 
administration. They were reviewed more recently by the Bush 
administration, showing very broad bipartisan acceptance of 
those results.
    Third, I want to present an example of the use of climate 
models in one particular study that we conducted on the expected 
effects of global warming on trout and salmon in the United 
States.
    So why climate models? Why do we use climate models to 
examine the effects of global warming? The fact, Mr. 
Chairman, is that we only have one earth, and it is, therefore, 
impossible to conduct a standard controlled experiment where 
you take one plot and apply an experimental drug or chemical to 
it and another plot which is the control, which you leave 
undisturbed.
    We are, in fact, conducting an experiment on the earth by 
adding heat trapping carbon dioxide and other greenhouse gases 
to the atmosphere, but we have no control.
    So the only way we can examine what the effects of the 
experiment that we are already engaged in would be is to have 
climate models which represent the earth based on the best 
available data we have from the atmosphere, the oceans, the 
land surface, and mathematical descriptions of the fundamental 
laws of physics. Those are run in a simulation on a computer, 
and that is what we call climate modeling.
    Of course, this type of simulation model is not unique to 
climate. It is used to simulate everything from--We test-crash 
cars in computers. We test-fly airplanes in computers. We 
test-detonate nuclear weapons in computers. In fact, it is no 
accident that Lawrence Livermore National Laboratory does both 
climate modeling and nuclear weapons simulations using some of 
the most advanced computers in the world. So the basic idea is 
we are running this experiment on the climate, and we want to 
know what is going to happen, because if we wait to see 
everything that happens and we don't like the results, it is too 
late to change it, because these carbon dioxide and other 
greenhouse gases last in the atmosphere for a very long period of time.
    So that basic approach was taken. The models were selected, 
as we have heard. They are representative of what is in the 
international community. I just want to emphasize that, as 
Tony Janetos explained, there was an extensive peer review 
process; comments by Pat Michaels and others were submitted both 
on the National Assessment and the subsequent Climate Action 
Report. They were fully considered. Responses are fully 
documented in the public record.
    You can find those responses on the website of the Global 
Change Research Program. I believe he is going to repeat many 
of those comments today, and it is going to be difficult to 
sort out all of that in this particular forum. I think it is 
important to recognize that those comments were considered, and 
detailed responses to them are available in the public record.
    So just to make the point that the administration--the 
current administration also accepted these conclusions, it is 
worth noting that in 2001 the Intergovernmental Panel on 
Climate Change's Synthesis Report of its Third Assessment 
Report was adopted. The State Department submitted detailed 
comments on the draft of this document under this 
administration, and the administration fully participated in a 
plenary session in September 2001 where the Summary for 
Policymakers was adopted.
    I quote extensively--or I quote not extensively, but I 
quote from that report in my written testimony a few examples 
of the conclusions from that report, which basically show 
that global warming is happening, that we expect to see more 
heat waves, heavy precipitation events, fewer cold days. 
These findings were embraced by the administration.
    Let me focus a little bit more on the U.S. Climate Action 
Report of 2002. This report was based upon conclusions of the 
National Academy of Sciences, the IPCC climate change report 
that I just mentioned, and the National Assessment that we have 
been discussing today. It was thoroughly vetted by this 
administration and approved before its official release and 
transmittal to the United Nations Framework Convention on 
Climate Change.
    Among the key findings of the Climate Action Report is, for 
example, that, ``Rather than simply 
considering the potential influences of arbitrary changes in 
temperature, precipitation, and other variables, the use of 
climate model scenarios ensured that the set of climate 
conditions considered was internally consistent and physically 
plausible.'' That is the basic reason for using the models.
    Natural ecosystems appear to be the most vulnerable to 
climate change, because generally little can be done to help 
them adapt to the projected rate and amount of change. Sea 
level rise at mid-range rates is projected to cause additional 
loss of coastal wetlands, 
particularly in areas where there are obstructions to landward 
migration, and put coastal communities at greater risk of 
storm surges, especially in the southeastern United States.
    Further, it found that reduced snow pack is very likely--
and this term ``very likely,'' as Dr. Janetos explained, is a 
specific term used to represent that this is a robust finding--
to alter the timing and amount of water supplies, potentially 
exacerbating water shortages, particularly throughout the 
western United States, if current water management practices 
cannot be successfully altered or modified.
    So I think the clear conclusion from these findings is that 
global warming does pose a very severe threat to public health 
and welfare in the United States. Let me just finish by 
summarizing the example of a recent study that NRDC and 
Defenders of Wildlife released in May that used some of the 
climate models, updated versions of two of the models used in 
the National Assessment plus a third model, to project the 
likely effects of global warming on particularly valued sport 
fish, trout and salmon, in the United States.
    We found, based on this analysis in this report, which I 
would ask to be included in the record, that at the regional 
level the loss of trout habitat in the northeast and southwest 
could be particularly severe, although losses are also 
expected in the southeast and Rocky Mountain regions.
    For example, in Pennsylvania we found that losses of trout 
habitat are projected to be 6 to 11 percent by 2030, 22 to 28 
percent by 2060, and 33 to 44 percent by 2090, assuming 
continued emission increases of heat trapping gases. At the 
national level the results are a loss of 5 to 17 percent by 
2030, 14 to 34 percent by 2060.
    This range of results is based on using a variety of 
climate models to look at the effects that are possible. 
Providing that range is very helpful, because it gives us a 
sense not of a precise prediction but of the likely outcomes 
and the probability of those outcomes, and gives us a way to 
really anticipate the types of effects we will look at if we 
don't take action.
    So I believe that climate models are very useful to give a 
picture of what will happen. They are not precise predictions, 
but they do very usefully inform us when we make decisions 
about whether to control emissions of carbon dioxide and other 
heat trapping gases. Though I know it is not the subject of 
this hearing, I conclude from that that it is time to take 
action, and it is time for mandatory limits on emissions of 
greenhouse gases.
    Thank you very much, Mr. Chairman.
    [The prepared statement of Daniel A. Lashof follows:]
   Prepared Statement of Daniel A. Lashof, Science Director, Climate 
               Center, Natural Resources Defense Council
                              introduction
    Thank you Mr. Chairman and members of the committee. My name is 
Daniel Lashof, and I am the Science Director of the Natural Resources 
Defense Council's Climate Center. I appreciate the opportunity to 
appear before you today.
    I have been engaged in research and assessment related to global 
climate change for more than 15 years. I was a reviewer of the National 
Assessment Synthesis Report. I have also served as a Lead Author of 
the Intergovernmental Panel on Climate Change Special Report Land 
Use, Land-Use Change, and Forestry and as a 
reviewer of several reports by the panel. I have also served on the 
National Research Council's Committee on Atmospheric Chemistry and on 
the Energy Research and Development Panel of the President's Committee 
of Advisers on Science and Technology. Previously I served on the 
Federal Advisory Committee on Options for Reducing Greenhouse Gas 
Emissions from Personal Motor Vehicles. I hold a bachelor's degree in 
physics and mathematics from Harvard University and a doctorate in 
Energy and Resources from the University of California at Berkeley.
    The Natural Resources Defense Council (NRDC) is a national, non-
profit organization of scientists, lawyers, and environmental 
specialists, dedicated to protecting public health and the environment. 
Founded in 1970, NRDC serves more than 500,000 members from offices in 
New York, Washington, Los Angeles, and San Francisco.
    In my statement today I will address the value of using climate 
models to assess the potential effects of global warming on the United 
States and illustrate this by reviewing the results of a recent study 
published by NRDC and Defenders of Wildlife on the threat posed by 
global warming to trout and salmon.
                  experimenting on the earth's climate
    Mr. Chairman, there is only one earth. It is therefore impossible 
to conduct a controlled physical experiment that compares an 
``experimental'' earth with elevated concentrations of carbon dioxide 
(CO<INF>2</INF>) and other heat-trapping gases to a ``control'' earth 
with an unpolluted atmosphere. Instead we are currently conducting an 
uncontrolled experiment in which emissions from power plants, 
automobiles and other sources are adding to a thickening layer of 
carbon pollution in the only atmosphere we have. The problem is that if 
we don't like the consequences of this experiment it will be too late 
to reverse them.
    Given our one-earth experimental design, which I don't think even 
Congress has the power to change, the best approach available to us is 
to simulate the earth's climate system using all available data on the 
composition of the atmosphere, the properties of the earth's surface, 
and the conditions of the earth's oceans combined with mathematical 
equations that describe the fundamental physical laws of motion and 
conservation of mass and energy. This is called climate modeling. 
Climate models allow us to conduct non-destructive controlled 
experiments: An ``experimental'' simulation with rising concentrations 
of heat-trapping gases can be compared to a ``control'' simulation with 
constant concentrations.
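    To make the control-versus-experiment idea concrete, the following 
is a minimal sketch, in Python, of a zero-dimensional energy-balance 
calculation that compares an ``experimental'' run with rising 
CO<INF>2</INF> against a ``control'' run with constant CO<INF>2</INF>. 
The feedback parameter, heat capacity, and concentration path are 
stylized assumptions chosen only to show the structure of such a paired 
experiment; they are not taken from any model used in the Assessment.

import math

LAMBDA = 1.2          # climate feedback parameter, W/m^2 per K (assumed)
HEAT_CAP = 8.4e8      # effective heat capacity, J/m^2 per K (assumed)
SECONDS_PER_YEAR = 3.15e7
C0 = 280.0            # reference CO2 concentration, ppm

def forcing(co2_ppm):
    # Approximate radiative forcing from CO2 relative to C0, in W/m^2.
    return 5.35 * math.log(co2_ppm / C0)

def run(co2_path):
    # Step a global-mean temperature anomaly forward one year at a time.
    temp, history = 0.0, []
    for co2 in co2_path:
        net_flux = forcing(co2) - LAMBDA * temp          # W/m^2
        temp += net_flux * SECONDS_PER_YEAR / HEAT_CAP   # K
        history.append(temp)
    return history

years = 100
control = run([370.0] * years)                               # constant concentration
experiment = run([370.0 * 1.01 ** t for t in range(years)])  # ~1 percent per year growth

print("control warming after 100 years:      %.2f K" % control[-1])
print("experimental warming after 100 years: %.2f K" % experiment[-1])
print("warming attributable to rising CO2:   %.2f K"
      % (experiment[-1] - control[-1]))

The difference between the two runs is the warming attributable to the 
continued rise in concentrations, which is the same kind of controlled 
comparison a full three-dimensional climate model makes.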
    The idea of using computers to simulate physical systems with 
mathematical models is not unique to climate modeling. Simulation 
models are used to test-crash cars, test-fly airplanes, and test-
detonate nuclear weapons. All without the need to sweep up afterward. 
If computer models were inherently useless, Boeing 777's would be 
falling out of the skies. In fact, it's no accident that the Lawrence 
Livermore National Laboratory does both climate simulations and 
nuclear weapon simulations. And for the same reason. It is safer to run 
these tests on computer models than on the real thing.
    Climate models are in fact a remarkable achievement of modern 
science. Despite the incredible complexity of the earth's climate 
system, these models are able to simulate with high fidelity the major 
processes that determine the variations in the earth's climate over 
space and time: from the polar vortex to tropical monsoons and from the 
depths of winter to the heat of summer and everything in between. Are 
the models perfect? Of course not. Someone looking selectively for 
discrepancies will always be able to find something to point to and 
there will always be room for refinements. Nevertheless, overall the 
models have achieved a level of realism and accuracy that makes them 
very useful tools. Indeed, they are the only tool we have for safely 
performing experiments to investigate the effects of large-scale 
pollution of the atmosphere with heat-trapping gases.
 the bush administration recognizes the threat posed by global warming
    The current Bush Administration has recognized the value of using 
simulation models to test the potential consequences of global warming 
on the United States in two recent reports that underwent extensive 
interagency review. These are the 2001 Intergovernmental Panel on 
Climate Change's (IPCC) Synthesis Report of the Third Assessment Report 
and the U.S. Climate Action Report 2002, formally known as the Third 
National Communication of the United States of America Under the United 
Nations Framework Convention on Climate Change (UNFCCC).
    First, in August 2001, the State Department submitted detailed 
comments on the draft of the IPCC's Synthesis Report of the Third 
Assessment Report. The administration carefully reviewed this report 
and, while suggesting some changes and clarifications, agreed with 
all the key findings. Furthermore, they participated fully in 
the IPCC Plenary meeting in September 2001, where the final IPCC TAR 
Synthesis Report Summary for Policymakers (SPM) was approved in detail. 
Among other things, this report concludes that:

<bullet> ``There is new and stronger evidence that most of the warming 
        observed over the last 50 years is attributable to human 
        activities.'' (Climate Change 2001: Synthesis Report, SPM, p. 
        5)
<bullet> ``Projections using the SRES emissions scenarios in a range of 
        climate models result in an increase in globally averaged 
        surface temperature of 1.4 to 5.8 deg.C over the period 1990 to 
        2100. This is about two to ten times larger than the central 
        value of observed warming over the 20th century and the 
        projected rate of warming is very likely to be without 
        precedent during at least the last 10,000 years, based on 
        paleoclimate data.'' (SPM, p. 8)
<bullet> ``Models project that increasing atmospheric concentrations of 
        greenhouse gases result in changes in frequency, intensity, and 
        duration of extreme events, such as more hot days, heat waves, 
        heavy precipitation events, and fewer cold days. Many of these 
        projected changes would lead to increased risks of floods and 
        droughts in many regions, and predominantly adverse impacts on 
        ecological systems, socio-economic sectors, and human health.'' 
        (SPM, p. 14)
    Then, in May 2002, the administration released the U.S. Climate 
Action Report 2002 and submitted it to the Secretariat of the UNFCCC. 
This report is based upon conclusions by the National Academy of 
Sciences, the IPCC climate change reports, and the U.S. Global Change 
Research Program's U.S. National Assessment of the Potential 
Consequences of Climate Variability and Change. It was thoroughly 
vetted by this administration and approved before its official release. 
Among the key findings of the Climate Action Report are:

<bullet> ``To provide an objective and quantitative basis for an 
        assessment of the potential consequences of climate change, the 
        U.S. National Assessment was organized around the use of 
        climate model scenarios that specified changes in the climate 
        that might be experienced across the United States (NAST 2001). 
        Rather than simply considering the potential influences of 
        arbitrary changes in temperature, precipitation, and other 
        variables, the use of climate model scenarios ensured that the 
        set of climate conditions considered was internally consistent 
        and physically plausible.'' (p.84)
<bullet> ``Use of these model results is not meant to imply that they 
        provide accurate predictions of the specific changes in climate 
        that will occur over the next 100 years. Rather, the models are 
        considered to provide plausible projections of potential 
        changes for the 21st century. For some aspects of climate, all 
        models, as well as other lines of evidence, are in agreement on 
        the types of changes to be expected. For example, compared to 
        changes during the 20th century, all climate model results 
        suggest that warming during the 21st century across the country 
        is very likely to be greater, that sea level and the heat index 
        are going to rise more, and that precipitation is more likely 
        to come in the heavier categories experienced in each region.'' 
        (p.84)
<bullet> ``The model scenarios used in the National Assessment project 
        that the continuing growth in greenhouse gas emissions is 
        likely to lead to annual-average warming over the United States 
        that could be as much as several degrees Celsius (roughly 3-
        9 deg.F) during the 21st century. In addition, both 
        precipitation and evaporation are projected to increase, and 
        occurrences of unusual warmth and extreme wet and dry 
        conditions are expected to become more frequent.'' (p.84)
<bullet> ``Natural ecosystems appear to be the most vulnerable to 
        climate change because generally little can be done to help 
        them adapt to the projected rate and amount of change.
<bullet> ``Sea level rise at mid-range rates is projected to cause 
        additional loss of coastal wetlands, particularly in areas 
        where there are obstructions to landward migration, and put 
        coastal communities at greater risk of storm surges, especially 
        in the southeastern United States.
<bullet> ``Reduced snow-pack is very likely to alter the timing and 
        amount of water supplies, potentially exacerbating water 
        shortages, particularly throughout the western United States, 
        if current water management practices cannot be successfully 
        altered or modified.
<bullet> ``Increases in the heat index (which combines temperature and 
        humidity) and in the frequency of heat waves are very likely.'' 
        (p.82).
    The clear conclusion from these findings is that global warming 
poses a severe threat to public health and the environment in the 
United States.
                     trout and salmon in hot water
    A study published by NRDC and Defenders of Wildlife in May on the 
threat posed by global warming to trout and salmon in the United States 
provides one example of the kind of analysis that can be usefully 
performed using the regional results of global climate models. Because 
trout and salmon are known to be intolerant of warm water, their 
abundance could be threatened if future climate change warms the 
streams they inhabit. I ask that this report be included in the hearing 
record.
    Trout and salmon are highly valued for their contribution to the 
economy and culture of the United States. They thrive in the cold, 
clear streams found in many mountainous and northern regions of the 
country. About 10 million Americans spend an average of ten days per 
year angling in streams or lakes for these fish. Dams, water 
diversions, pollution, and development threaten trout and salmon, which 
have already disappeared from many of the streams where they were 
formerly found. Global warming poses a less visible but no less severe 
threat to their survival.
    To assess the magnitude of this threat we contracted with Abt 
Associates to perform a new simulation study of how climate change 
might affect existing habitat for four species of trout (brook, 
cutthroat, rainbow, and brown) and four species of salmon (chum, pink, 
coho and chinook) in streams throughout the contiguous United States. 
The simulation uses the results of three different climate models, 
including updated versions of the Canadian model (CGCM2) and the Hadley 
Center model (HadCM3) used in the National Assessment, as well as an 
Australian model (CSIRO-Mk2). The changes in air temperatures projected 
by these global climate models are used to project the impact of global 
warming on U.S. stream temperatures, using a new, more accurate method 
to estimate the relationship between air and stream temperatures.
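    The chain from a projected air-temperature change to a count of 
thermally suitable streams can be illustrated with a deliberately 
simplified sketch. The study itself used Abt Associates' estimated 
air-to-stream temperature relationship and species-specific thermal 
tolerances; the slope, thermal limit, and site list below are 
hypothetical placeholders used only to show the form of the 
calculation.

# Simplified illustration of the habitat-screening logic: project stream
# temperature from air temperature, then flag sites that exceed a thermal
# limit for a cold-water species. Coefficients and data are hypothetical.

AIR_TO_STREAM_SLOPE = 0.7      # assumed degrees of stream warming per degree of air warming
BROOK_TROUT_LIMIT_F = 72.0     # assumed maximum stream temperature tolerated, deg F

# (site name, current July stream temperature in deg F) -- made-up examples
sites = [("Spring Run", 62.0), ("Stony Creek", 66.5),
         ("Pine Brook", 69.0), ("Mill Race", 71.0)]

def suitable_sites(air_warming_f):
    # Return the sites still below the thermal limit after the given air warming.
    return [name for name, stream_t in sites
            if stream_t + AIR_TO_STREAM_SLOPE * air_warming_f < BROOK_TROUT_LIMIT_F]

baseline = suitable_sites(0.0)
future = suitable_sites(6.0)   # e.g., a 6 deg F July air-temperature increase

print("suitable today:", baseline)
print("suitable after warming:", future)
loss = 100.0 * (len(baseline) - len(future)) / len(baseline)
print("habitat loss: %.0f percent of modeled sites" % loss)

Applied to thousands of real stream sites and to each climate model's 
projected warming, the same screening yields regional and national 
habitat-loss percentages of the kind reported below.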
    Interestingly, the version of the Hadley Center model used for this 
study projects warming rates for the United States that are quite 
similar to Canadian Model results used in the National Assessment. 
Trout and salmon are particularly sensitive to increases in summer 
temperature and the Hadley Model (HadCM3) projects an increase in 
average July temperatures for the contiguous United States of as much 
as 10 degrees Fahrenheit by 2090, assuming that emissions of heat-
trapping gases are not curtailed.
    The study found that trout and salmon habitat is indeed vulnerable 
to the effects of global warming. At the national level we estimate 
that individual species of trout and salmon could lose 5-17 percent of 
their existing habitat by the year 2030, 14-34 percent by 2060, and 21-
42 percent by 2090, based on emissions scenarios A1 and A2 from the 
Intergovernmental Panel on Climate Change (IPCC), depending on the 
species considered and model used. Projected effects on trout and 
salmon are lower for IPCC scenarios B1 and B2, which assume that global 
CO<INF>2</INF> emissions are reduced for reasons not directly related 
to global warming. For these scenarios, we estimate habitat losses of 
4-20 percent by 2030, 7-31 percent by 2060, and 14-36 percent by 2090, 
depending on fish species and model. Of particular concern is the 
number of stream locations that become unsuitable for all modeled 
species (Exhibit 1).
    At the regional level, loss of trout habitat in the Northeast and 
the Southwest could be particularly severe, although losses are also 
expected in the Southeast and Rocky Mountain regions. For example, in 
Pennsylvania losses of trout habitat are projected to be 6-11 percent 
by 2030, 22-28 percent by 2060, and 33-44 percent by 2090, based on the 
A1 and A2 emission scenarios. Significant losses of salmon habitat are 
projected throughout their current range. The number of locations 
expected to become unsuitable for both trout and salmon expands 
steadily over time, assuming emissions of heat-trapping gases continue 
to increase (Exhibit 2).
    These results are robust with respect to key model specifications 
and assumptions. For a given emissions scenario, the greatest 
uncertainty is due to differences among the global climate models, yet 
the results provide a valuable indicator of the regions most vulnerable 
to loss of cold water fish habitat. Differences among the scenarios for 
future emissions of heat-trapping gases also significantly affect the 
results, even though none of the scenarios examined assumes that 
policies are adopted specifically to address global warming. For all 
emissions scenarios our results are likely to understate expected 
losses of habitat because of the several dimensions of climate change 
and potential effects on habitat that were beyond the scope of the 
study. These include potential effects on stream flows, changes to the 
temperature of groundwater discharge, changes in ocean conditions, and 
other considerations. In addition, these results must be viewed within 
the context of other present and future threats to fish habitat, which 
are likely to add to the temperature-related losses estimated in the 
report.
    This analysis demonstrates that it is possible to draw robust 
conclusions about the vulnerability of key resources to the effects of 
global warming, despite variations in climate model projections. The 
results show that future strategies to protect trout and salmon will 
need to address the potential effects of global warming.
               responding to the threat of global warming
    The administration has recognized the threat posed to the United 
States by global warming and has reaffirmed the United States' 
commitment to the objective of the Framework Convention on Climate 
Change, which is to stabilize greenhouse gas concentrations in the 
atmosphere at safe levels. Nonetheless, the administration has refused 
to consider any mandatory limits on emissions of heat-trapping gases. 
This position is both illogical and irresponsible.
    The administration has argued, in essence, that mandatory limits on 
emissions of CO<INF>2</INF> and other heat-trapping gases would harm 
the economy, and that therefore we should rely on voluntary measures 
and adapt to changes in climate. The administration has not advanced 
any analysis, however, to suggest that voluntary action has any chance 
of stabilizing greenhouse gas concentrations in the atmosphere. Indeed, 
the United States has now relied on voluntary measures for more than a 
decade and emissions have continued to increase.
    The administration's claim that setting mandatory limits on 
emissions now would harm the economy is equally unsupported by 
analysis. While it is possible to construct straw-man proposals that 
would be costly, surely there must be some level and timetable for a 
CO<INF>2</INF> emission limit that would be affordable. Yet the 
administration has rejected any mandatory limit out of hand. In fact, 
failure to set limits now will lead to stranded investments in new 
highly emitting power plants and other equipment that will become 
obsolete when limits are established in the future.
    Further delay in establishing mandatory limits on heat-trapping gas 
emissions is irresponsible because our window for taking action in time 
to stabilize greenhouse gas concentrations at safe levels is rapidly 
closing. The IPCC Synthesis Report cited earlier, which was adopted 
with the full participation of the administration, makes this quite 
clear:

<bullet> ``The severity of the adverse impacts will be larger for 
        greater cumulative emissions of greenhouse gases and associated 
        changes in climate.'' (SPM p.9)
<bullet> ``Inertia is a widespread inherent characteristic of the 
        interacting climate, ecological, and socio-economic systems. Thus 
        some impacts of anthropogenic climate change may be slow to 
        become apparent, and some could be irreversible if climate 
        change is not limited in both rate and magnitude before 
        associated thresholds, whose positions may be poorly known, are 
        crossed.'' (SPM p. 16)
<bullet> ``The pervasiveness of inertia and the possibility of 
        irreversibility in the interacting climate, ecological, and 
        socio-economic systems are major reasons why anticipatory 
        adaptation and mitigation actions are beneficial. A number of 
        opportunities to exercise adaptation and mitigation options may 
        be lost if action is delayed.'' (SPM p. 18)
    Mr. Chairman, global warming poses a clear threat to the United 
States. The good news is that this is a threat that we know how to 
stop. Now is the time to set mandatory limits on emissions of heat-
trapping gases.
    Thank you.
    [GRAPHIC] [TIFF OMITTED] 81495.010
    
    Mr. Greenwood. Thank you, Dr. Lashof. The last two times I 
went trout fishing in Pennsylvania I caught nothing. Now I know 
why.
    Dr. O'Brien.

                 TESTIMONY OF JAMES J. O'BRIEN

    Mr. O'Brien. Mr. Chairman, thank you very much for inviting 
me today. I have been a physical scientist in oceanography/
meteorology for 40 years. I will tell you that in the early 
part of my career primarily I was an ocean modeler, and my 
students and I are recognized for that internationally. Then 
in the late Seventies and Eighties, we contributed 
to the understanding of El Nino and how it can be forecast, and 
then in the Nineties, while most scientists were studying 
what was happening in tropical countries, my students and I 
concentrated on impacts in the United States, and I have listed 
in my paper many of the things we have done.
    In 1990 I accepted the pro bono job as the State 
Climatologist in Florida. Mr. Deutsch, I am your State 
Climatologist. So if you have any constituents who need to know 
about climate variability or climate data, please refer them 
to my office in Tallahassee.
    The reason I took it was very simple. Based on the climate 
variability studies, which is part of my theme, the 
mitigatable impact in the State of Florida is at least $500 
million a year, primarily in forestry and agriculture, tourism 
and fisheries. I want to see that we accelerate this 
information for the people of Florida.
    Recently, we have actually developed--which is now being 
used by the wildfire management people in Florida--a way to 
predict up to 6 months in advance which county is more 
vulnerable for forest fires. You know we have had quite a time 
with 3 years of drought in the State of Florida.
    So we provide climate advice to the citizens of Florida for 
all sectors, but particularly agricultural, forestry, tourism 
and power generation, and I am funded by NOAA in this area 
also to do the research that goes along with providing the 
information. We work 
closely with Tom Karl and he provides us lots of the old data 
which is very useful.
    Now, turning to the National Regional Assessment of Climate 
Change, I was the co-chair for the Southeast Regional 
Assessment. Unfortunately, my co-chair, Dr. Ron Ritschard, died 
at an early age. We were funded by NASA.
    Today I want to focus on the question, and in my scientific 
opinion, the Hadley model is a state-of-the-art model, but it 
has poor horizontal resolution, inadequate physics, 
particularly in the ocean component. And since we deal 
primarily with--when we worry about whether it is going to be a 
cold winter in Chicago, you know, or too much rain in San 
Diego, these are related to what the ocean is doing, the memory 
that the ocean has, and it is very important if you are going 
to do a 100 year run that you have an adequate ocean model.
    A very prominent French physical oceanographer told me 
recently--fortunately, he is quite young--that he hopes that 
before he dies, the ocean models used by these global climate 
models represent something that he knows is in the real 
ocean.
    Anyway, my opinion is that the Canadian model is very flawed 
and should never have been used. Even when it was first 
distributed to the team across the United States, the data was 
represented incorrectly geographically, and the attitude was, 
well, maybe that is not real important. I have no knowledge 
whether they actually ever fixed it up.
    You know, I enjoy learning about climate variability over 
the United States, such as floods, droughts, freezes, and 
hurricanes. I don't have time to go into what our studies have 
shown us, but for the average citizen, you know, they are 
wondering about the variability in climate. Okay? Is it 
different than, you know, my grandfather told me about? Is my 
experience different?
    They are not really interested in whether the average 
temperature is going to rise 3 to 5 degrees Fahrenheit in 100 
years. They are interested in is winter going to be colder than 
normal and many other things.
    For example, one of the things I discovered early on about 
the Canadian model is they didn't have El Ninos in it. Now I was 
told it is in there, but I have looked at the data. I am an 
expert in that, and I couldn't find any sign of it, and I 
cannot imagine any climate model that we are going to run for 
100 years that doesn't have some robust signal of the way that 
our climate variability is changing on year-to-year and 
decadal time scales.
    In the Hadley model, you see that, but you don't see that 
in the Canadian model. We are three State climatologists here 
today, and another one from the State of Alabama, Dr. John 
Christy, says that he believes that the Canadian model was 
modeling another planet than this one.
    Okay. Can we do better? I think we can really do better, 
and I actually have some very good news. Yesterday when I came, 
someone delivered to me a testimony of James Mahoney, who is 
now the Assistant Secretary of Commerce for Oceans and Atmosphere, 
and he on July 11 this year before the Committee on Commerce, 
Science and Transportation of the U.S. Senate had delivered 
this paper which I can add into the record.
    In just part of it he says that uncertainties in climate 
models address exactly what we are talking about. It says the 
poor regional performance of current general circulation models 
severely restricts the examination of potential global climate 
influence on key regional systems. I am so delighted that at a 
very high level in government that is now understood.
    I am going to conclude now and just say that I believe 
global climate change will occur. I am not convinced that we 
are going to see it in terms of surface temperature increase or 
sea level increase. It will change. We need to address what to 
do.
    In my outline I have indicated that we need a new Manhattan 
type project. We need an institute outside the government, labs 
in the government, where we hire the best managers, the best 
scientists, and give them finally decent computers so they can 
do the job correctly and make adequate American models.
    There is a model like this, sir. The model is the European 
Centre for Medium-Range Weather Forecasts, where the European 
European nations got together and formed a center which is 
physically in Britain but has members all over. I think nobody 
will disagree in this room that they give the best week-long 
weather forecasts of anyplace. The reason they do is because of 
a unique way it is managed--good managers that don't stay there 
for lifetimes, good scientists that stay 5 to 10 years and then 
go to their home countries, and the best computers in the world 
for doing the problem right.
    The technical director of the European Centre for 
Medium-Range Weather Forecasts, a good Irishman like me, 
recently said to General Kelly, the head of the Weather 
Service--He says, General Kelly, we are two decades ahead of 
you now; why don't you just buy our results and shut down the 
operation.
    Thank you very much.
    [The prepared statement of James J. O'Brien follows:]
Prepared Statement of James J. O'Brien, Robert O. Lawton Distinguished 
 Professor, Meteorology and Oceanography, The Florida State University
                              introduction
    I have been a physical scientist in oceanography and meteorology 
for 40 years. In my early years, my graduate students and postdocs 
concentrated primarily on modeling time dependent ocean motions. In the 
late 1970's and 1980's, we contributed to the physical understanding of 
El Nino. Namely, how it works and how it can be forecast.
    In the 1990's, while most other scientists were applying ENSO 
forecasts to tropical countries, my students and I have concentrated on 
impacts in the United States. We have written papers on: ENSO and 
Atlantic Hurricanes; ENSO and Tornadoes; ENSO and Precipitation; ENSO 
and Temperature; ENSO and Wild Fires (In Florida); ENSO and Snowfall; 
ENSO and Excessive Wind Events; ENSO and Great Lakes Snow Events; and 
ENSO and Freezes in Central Florida.
    In 1999, I accepted the pro bono job as official State of Florida 
Climatologist. We have been advising the Florida Commissioner of 
Agriculture on wild fire forecasts, droughts, hurricanes, etc. We 
provide climate advice to the citizens of Florida for all sectors, but, 
particularly agriculture, forestry, fisheries, tourism and power 
generation.
    In some local circles, I am labeled ``Dr. El Nino'' for my research.
    Turning now to the National Regional Assessment of Climate Change, 
I was the Co-Chair for the Southeast Regional Assessment. (My Co-Chair, 
Dr. Ron Ritschard, recently died at a young age). Our work was funded 
by NASA, Huntsville, Alabama.
    In the early beginnings of the National Regional Assessment, the 
entire U.S. team met and we agreed there would be ``ONE'' Global 
CO<INF>2</INF> doubling model so everyone referring to future 
projections would be on the same page. There were two choices: (1) The 
Hadley, (British model), or (2) The Max Planck (German model). The 
Hadley model was selected. Subsequently, I attended a meeting of the 
U.S. National Resource Board Committee on Climate. Many senior 
scientists were shocked that there were no American models. In due time, 
a very recent model, the Canadian Model, was added for our use. It 
had only recently been computed, so no documentation or evaluation of this model was 
available to the assessment teams.
    In my opinion, the Hadley model is a state-of-the-art model with 
poor resolution, inadequate physics--particularly, the ocean component. 
The Hadley Model gives reasonable projections, but it is still flawed 
and I am sure that, in due time, it will be improved. As the ocean 
models within climate models are improved, the projected future changes are greatly 
reduced.
    The Canadian model is flawed, and, in my opinion, should never 
have been used. My efforts to capture the attention of the leaders and 
get them to recognize this were rejected outright. My team discovered that, 
initially, the data provided to the team had been incorrectly 
registered with respect to the geography of the U.S. Since the model 
has horizontal grid boxes around 500 km on a side, being set off by one 
grid, really confuses geographic identification. (As an aside, I do not 
know if this was fixed, but I was told it can't make any difference).
    I really enjoy learning about climate variability over the United 
States, such as droughts, floods, freezes, hurricanes, etc. For the 
citizen for whom climate is important, it is the variability which 
matters! It is not whether the average temperature will rise 3-5  deg.F 
in 100 years. The citizen wants to know ``Is this winter going to be 
colder than normal?'' and other simple questions. I discovered that we 
could not find ENSO variability in the ocean model of the Canadian 
model. I was told it was there, but it makes no difference if it is too 
small.
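    One simple way to look for the kind of ENSO signal at issue is to 
compute a Nino-3.4 style index from a model's monthly sea surface 
temperatures and ask whether its interannual variance is realistic 
(observed anomalies have a standard deviation of roughly 0.8 deg.C). 
The sketch below assumes a hypothetical series of monthly SSTs already 
averaged over the Nino-3.4 box; it illustrates the general diagnostic, 
not the specific check performed on the Canadian model output.

# Sketch of an ENSO-variability diagnostic: remove the mean seasonal cycle
# from monthly Nino-3.4 SSTs and report the standard deviation of the
# anomalies. Input data here are synthetic placeholders.
import math
import random

random.seed(0)
# Hypothetical 50 years of monthly SST (deg C) averaged over the Nino-3.4 box:
# a seasonal cycle plus weak noise, i.e. a series with little ENSO variability.
sst = [27.0 + 1.0 * math.cos(2 * math.pi * m / 12) + random.gauss(0, 0.2)
       for m in range(50 * 12)]

# Climatology: mean value for each calendar month.
climatology = [sum(sst[m::12]) / len(sst[m::12]) for m in range(12)]

# Anomalies relative to the seasonal cycle.
anomalies = [value - climatology[m % 12] for m, value in enumerate(sst)]

mean_anom = sum(anomalies) / len(anomalies)
std_anom = math.sqrt(sum((a - mean_anom) ** 2 for a in anomalies) / len(anomalies))

print("Nino-3.4 anomaly standard deviation: %.2f deg C" % std_anom)
print("A value far below roughly 0.8 deg C suggests the model lacks an ENSO signal.")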
    Mr. Chairman, the variability of climate over most of the United 
States is primarily controlled by ENSO and other ocean-related 
phenomena (North Atlantic Oscillation, and the Pacific Decadal 
Oscillation) and land use changes. I cannot accept a 100 year climate 
run as useful if it doesn't also include the observed variability in 
the climate system.
    What is the climate system? It is the entire atmosphere, ocean, 
land, ice systems which are heated by the sun. The chemistry of global 
climate change is completely correct. We have an excellent scientific 
understanding of how radiatively-active gases such as carbon dioxide, 
methane and water vapor can delay heat in the climate system. There is 
an assumption that this extra heat will manifest itself in raising the 
temperature of the biosphere--that portion of the climate system in 
which we humans live. The data measured from the actual climate system 
seems to indicate other processes are dominant, such as stronger mid-
latitude storms which are important for distributing the extra heat. In 
my opinion, even the current models are not capable of calculating the 
climate system well enough for policymakers to believe in any 
projection.
    Can we do it better? I believe we can, but it will take a new 
effort and considerable investment. We know most of the physics of the 
climate system. In order to calculate the variability of the system, we 
need adequate computers. We need the kind of investment in computers 
that the Congress funds to DOD, NSF, NSA, etc.
    I propose a ``Manhattan Type Project'' to estimate future climate 
variability for our National Security. Any future climate change will 
probably require trillions of dollars to adjust our culture or mitigate 
the consequences. My vision is a NEW Institute, outside the government 
with top management, the best scientists and adequate resources. My 
estimate is $50M/year for at least 10 years.
    When I suggest this, OMB folks usually ask me, ``Dr. O'Brien, where 
are we going to find that money?'' My answer is, ``Give us 2 attack 
helicopters' monies, and we will be happy for a few years. Give us a 
fighter jet's monies, and we will be very happy for a few years. Give us an 
aircraft carrier's monies, and we will never ask for any more 
resources.'' The Congress has to decide on the priorities. Do we want 
to understand the future climate or not?
    Returning to my belief, that we can do better in modeling climate, 
I am encouraged that each generation of climate change modeling gets 
better. The original CO<INF>2</INF>-doubling model by NASA GISS under 
the leadership of Dr. Jim Hansen, estimated around 10  deg.F surface 
temperature change by 2050. This was so dramatic because no ocean was 
included. I remember reading in the Tallahassee Democrat, a story that 
said, as a result of the GISS model, that sea-level would rise 3 meters 
or 7-10 feet by 2050. The current IPCC estimates a few degrees 
temperature rise by 2100 and a doubling of the current sea-level rise 
of around 8-10 inches to 20 inches as the worst case by 2100. 
Certainly, policymakers will react differently to plan for a two feet 
rise in 100 years vs. 10 feet rise in a generation.
    Let me provide one more remark on sea-level rise. In order to double 
the rate of sea-level rise, one would expect to observe an increased rate of rise by 
2002. Everyone agrees that the current average rise is about 7-10 
inches a century, averaged over the globe. However, the experts who 
have tried to find any acceleration find none.
    How about global warming in the United States? I will leave this 
subject to my fellow climatologist, Dr. Tom Karl. I am, however, the 
State Climatologist of Florida. In Florida, the cities are warming at 
the rate of about one degree in the entire 20th century. But the rural 
places are cooling at the rate of more than one degree per century. I 
have included some graphics in my presentation documenting this for 
minimum temperature over the entire 20th Century. What is happening? My 
fellow State Climatologist from Colorado, Dr. Roger Pielke, Sr., 
explains this by land use changes. Dr. Karl has published work showing 
the cooling in the Southeast United States, but unfortunately, the 
summary of average temperature in Florida in the last century, in the 
S.E. Assessment summary, shows Florida warmer than the rural data would 
dictate.
    Finally, the ocean components of the global climate system models are 
very inadequate. The research community is aware that warm and cold 
ocean currents are very important in predicting the weather even for 10 
days. It is critical to model the oceans correctly if a global climate 
model is expected to work at all. A young French ocean modeler said to 
me recently, ``I hope that the ocean models used by global climate 
models look like the real ocean before I die!''
    There are hundreds of scientists other than climate modelers that 
have been told the Hadley and Canadian models are good projections of 
the future. This is a shame. When I joined the U.S. National Assessment 
Team as Co-Chair of the Southeast Regional Assessment, a bright young 
EPA ecologist from Louisiana reported to me that the number of 
hurricanes in the Gulf of Mexico were increasing due to global warming. 
I was unaware of this. Consequently, my students and I did a study. We 
found that, in fact, the number of hurricanes has decreased 
significantly in the Gulf of Mexico. This is a published paper: 1998: 
Are Gulf Hurricanes Getting Stronger? Bull. Amer. Meteorol. Soc., 
79(7), pp. 1327-1328 (with Bove, M.C., and D. F. Zierden).
                               conclusion
    Global climate changes will occur. Whether surface temperatures 
will increase due to radiatively-active emissions is not clear. The 
Global Climate System must change. In order to address what the nation 
needs to do, I recommend a large investment in improving the basic 
understanding through very good global climate system 
calculations.
[GRAPHIC] [TIFF OMITTED] 81495.011

[GRAPHIC] [TIFF OMITTED] 81495.012

[GRAPHIC] [TIFF OMITTED] 81495.013

    Mr. Greenwood. Thank you. My staff said it sounds like a 
field hearing. We will have to go over to Europe and take a 
look at that.
    Dr. Pielke.

                TESTIMONY OF ROGER A. PIELKE, SR.

    Mr. Pielke. Yes, Mr. Chairman, members of the committee, 
thank you for the opportunity to present testimony. I received 
my PhD and Master's degrees from Penn State in the Department of 
Meteorology, and since the 1960's my research has focused on 
weather and climate studies using both models and observations.
    In my testimony I would like to convey two main points: 
First, that the perspective I am presenting today does not 
easily fit into the conventional two-sided debate over climate 
change. This third perspective, as I have written elsewhere, 
suggests that humans have an even greater impact on climate 
than is suggested by the international and national 
assessments.
    The human influence on climate is significant and multi-
faceted. However, any attempt to accurately predict future 
climate is fundamentally constrained by the significant and 
multi-faceted characteristics of the human influence on 
climate. By focusing on vulnerabilities rather than prediction 
as a focus of research, I believe that the scientific community 
can provide more comprehensive and likely more useful 
information to decisionmakers.
    These points are consistent with the American Association 
of State Climatologists Policy Statement on Climate Variability 
and Change which was approved on October 25, 2001, and I will 
read part of that statement:
    ``Our statement provides the perspective of our Society on 
issues of climate variability and change. Since the Society 
members work directly with users of climate information at the 
local, State and regional levels, it is uniquely able to put 
global climate issues into the local perspectives which are 
needed by users of climate information. Our main conclusions 
are as follows:
    ``First, past climate is a useful guide to the future. 
Assessing past climate conditions provides a very effective 
analysis tool to assess societal and environmental 
vulnerability to future climate, regardless of the extent the 
future climate is altered by human activity. Our current and 
future vulnerability, however, will be different than in the 
past, even if the climate were not to change, because society 
and the environment change as well. Decision makers need 
assessments of how climate vulnerability has changed.
    ``Two, climate prediction is complex with many 
uncertainties. The AASC recognizes climate prediction is an 
extremely difficult undertaking. For time scales of a decade or 
more, understanding the empirical accuracy of such predictions, 
called verification, is simply impossible, since we have to 
wait a decade or longer to assess the accuracy of the 
forecasts.''
    In the remainder of my 5 minutes I will discuss one example 
of the scientific basis that underlies the statement. Greater 
detail is available in the peer reviewed scientific 
publications that are listed at the end of my written 
testimony.
    A fundamental basis of the U.S. National Assessment is the 
use of the Canadian and Hadley Centre General Circulation 
Models to project the future state of the climate as the basis 
for discussion of climate impacts and ultimately alternative 
courses of action by decisionmakers. The perspective I offer 
here suggests that in relying on GCMs to, in effect, bound the 
future state of the climate, the U.S. National Assessment may 
have had the effect of underestimating the potential for change 
and overestimating our ability to accurately characterize such 
changes with computer models.
    The hypothesis for using these models is that including 
human caused increases of carbon dioxide and other greenhouse 
gases and aerosols in the models is sufficient to predict long 
term effects on the climate of the United States. The position 
presented here is that such forcings are important, but only a 
subset of those needed to develop plausible projections, and 
even if all the forcings were included, accurate long term 
prediction would remain challenging, if not impossible.
    To test the hypothesis that GCMs can accurately project 
climate, it is possible to compare model performances with 
observed data for the period 1979-2000. One test is the ability 
of the model to predict the averaged temperatures of the 
earth's atmosphere over this 20-year period. Such a test is a 
necessary condition for regional projection skill, since if 
globally averaged long term changes cannot be skillfully 
projected, there will necessarily be no regional skill.
    During this period, for example, at around 18,000 feet 
above sea level, the Canadian GCM projects a 0.7 degree C 
warming of the global averaged temperature. The Hadley Centre 
model also has atmosphere warming for this time period. The 
observations, in contrast, have no statistically significant 
change in these averaged atmospheric temperatures.
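    A simple version of that comparison is to fit a least-squares trend 
to an annual series of globally averaged temperatures at a given level 
over 1979-2000 and ask whether the trend differs from zero by more than 
its standard error. The sketch below uses a synthetic anomaly series as 
a stand-in for the satellite, radiosonde, or reanalysis data; it shows 
the form of the significance test, not the published analysis itself.

# Sketch of a trend-significance check for a 1979-2000 global-mean
# temperature anomaly series (synthetic stand-in data).
import math
import random

random.seed(1)
years = list(range(1979, 2001))                     # 22 annual values
# Hypothetical anomalies: no underlying trend, only year-to-year noise.
anoms = [random.gauss(0.0, 0.15) for _ in years]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(anoms) / n
sxx = sum((x - mean_x) ** 2 for x in years)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anoms))
slope = sxy / sxx                                   # deg C per year
residuals = [y - (mean_y + slope * (x - mean_x)) for x, y in zip(years, anoms)]
resid_var = sum(r ** 2 for r in residuals) / (n - 2)
stderr = math.sqrt(resid_var / sxx)
t_stat = slope / stderr

print("trend: %+.2f deg C per decade" % (10 * slope))
print("t statistic: %.2f (|t| > ~2.1 would be significant at the 5%% level)" % t_stat)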
    Thus, either the models or the observations must be 
incorrect. Both cannot be correct. Since, for the 1979-2000 
time period, satellite, radiosonde and National Center for 
Environmental Prediction model reanalyses each agree closely 
with respect to global averages, the observations should be 
interpreted as our best estimate of reality.
    The scientific evidence, therefore, is that the models have 
failed to replicate the actual evolution of atmospheric 
temperatures over the time period 1979-2000. Thus using the 
results of these models as the basis for assessments, much less 
for particular decisions, for the next several decades is not 
justified. Such models clearly have usefulness as scientific 
tools with which to conduct sensitivity experiments, but it is 
important to not overstate their capabilities as predictive 
tools.
    One major reason for this difficulty is the absence and/or 
inadequate representation of significant human caused forcings 
of the climate. These include land use changes over time, the 
effect of aerosols on clouds and precipitation, and the 
biogeochemical effect of carbon dioxide. The Intergovernmental 
Panel on Climate Change itself concludes that there is a very low 
level of scientific understanding of these forcings.
    The importance of one of these effects can be illustrated 
by a just-published paper on the influence of human-caused land 
use change on the global climate. Even with a conservative 
estimate of land use change, the global redistribution of heat 
and the effects on regional climate are at least as large as 
those simulated by the existing GCM simulations. However, even when 
these forcings are included, the complex interactions among the 
components of the climate system will likely limit our ability 
to skillfully predict the future. Indeed, we cannot even 
predict climate with any skill beyond a season in advance, and then 
only under special situations such as an evolving El Nino.
    As a result, we have--There is a new book that is coming 
out by the International Geosphere-Biosphere Programme titled 
``Vegetation, Water, Humans and the Climate,'' and there is a 
chapter in there which talks about how to evaluate the 
vulnerability to changing environmental conditions.
    This chapter basically proposes that we start first from an 
assessment of vulnerability. Only at that point do we bring in 
these other tools, such as GCM models, historical record, and 
so forth.
    Even the IPCC, I am told by some colleagues, is starting to 
embrace a greater focus on vulnerability, and several U.S. 
programs, most notably the Regional Integrated Science and 
Assessments program of NOAA, have also acknowledged the 
importance of vulnerability as a scientific organizing theme.
    Let me conclude by saying I wish to underscore that the 
inability of the U.S. National Assessment models to skillfully 
predict climate change does not mean that the radiative effect 
of anthropogenic greenhouse gases on climate is not important, 
nor does it suggest which policy responses to the issues of 
climate change make the most sense.
    Such matters of policy go well beyond any discussion of the 
issues of science and well beyond the information presented in 
my testimony today.
    Effective mitigation and adaptation policies in the context 
of climate variability and change do not depend on accurate 
prediction of the future and, consequently, a lack of ability 
to generate accurate predictions should not be used as a 
justification to ignore the policy challenges presented by 
climate. Too often, debate over climate substitutes for debate 
over policy.
    Thank you.
    [The prepared statement of Roger A. Pielke, Sr. follows:]
    [GRAPHIC] [TIFF OMITTED] 81495.014
    
    [GRAPHIC] [TIFF OMITTED] 81495.015
    
    [GRAPHIC] [TIFF OMITTED] 81495.016
    
    [GRAPHIC] [TIFF OMITTED] 81495.017
    
    [GRAPHIC] [TIFF OMITTED] 81495.018
    
    [GRAPHIC] [TIFF OMITTED] 81495.019
    
    [GRAPHIC] [TIFF OMITTED] 81495.020
    
    Mr. Greenwood. Thank you, Dr. Pielke.
    Dr. Michaels.

                TESTIMONY OF PATRICK J. MICHAELS

    Mr. Michaels. Mr. Chairman, I am sitting over here. Dr. 
O'Brien is much more handsome.
    I am a Professor of Environmental Sciences at the 
University of Virginia and past President of the American 
Association of State Climatologists, and I should say my 
colleague, Roger, is the future President.
    I offer you a word of caution on the science on which we 
base our Nation's policy on global warming. Mr. Chairman, would 
you tell people what was going to happen to the United States 
temperature based upon a table of random numbers? I don't 
think so. But that is what happened in the assessment on 
climate change.
    Effects have causes. Our society is currently confronting a 
potentially serious effect, the specter of climate change 
caused by human alteration of the atmosphere. We ask scientists 
to quantify these causes and effects. They pursue truth by 
making hypotheses and testing them against reality. In climate 
science, these hypotheses

are computer models. If they are at odds with reality, they 
only inform bad policy.
    It is absolutely logical to want a scientific assessment of 
the effect of human induced climate variability on the U.S. 
Coming from the University of Virginia, I am commanded, as you 
know, to refer to its founder, Thomas Jefferson. Had he been 
alive and seen changes in the greenhouse effect that we have 
observed today, he would ask scientists what will happen to 
America's climate?
    So let's transport Mr. Jefferson's scientists from the 19th 
Century into today's environment, newly minted, naive, not involved 
in the political process. What would they do? Well, they would 
probably learn about computer models such as we have today, and 
then they would use those computer models to drive impact 
models of climate change on other aspects of our society: our 
farms, our forests, our water supply.
    That is, in a sense, the methodology that was used for 
the U.S. National Assessment on Climate Change. Now what models 
would they choose? I argue they would find a climate model that 
predicted large changes, one that predicted medium changes, and 
probably a third that predicted small changes.
    In the very real case of the 20th Century National 
Assessment on Climate Change, two models were chosen. The first 
from the Canadian Climate Center, shown here in this Vu-Graph, 
predicts the largest changes of temperature of any of the 
models considered here in the report.
    It is also different than the dozens of other climate 
models. It is against the consensus of climate models, as 
described by the United Nations, because it has an exponential 
increase in temperature, meaning an increase which gets larger 
and larger in terms of rate, as opposed to the average of 
models. This is from the United Nations' new summary on climate 
change, which you can see clearly is a straight line.
    So not only have we chosen the most extreme temperature 
prediction, we have chosen one whose mathematical and 
functional form is at variance to the consensus of models.
    The second model used in the Assessment, from Britain's 
Hadley Center, predicts the largest changes in rainfall. These 
are the precipitation forecasts from the models considered. You 
can see this is at major variance to any of the other consensus 
models that we have.
    Consequently, the very real 20th Century scientists, as 
opposed to our 19th Century hypothetical scientists, chose the 
most extreme forecasts to guide our national assessment. I 
would bet our 19th Century scientists would ask another 
question: Do these models work? And they would test them, and 
they would discover that both the Hadley and the Canadian 
models chosen by the 20th Century counterparts were worse than 
a table of random numbers when applied to United States 
temperatures.
    At this point, I believe the 19th Century scientists would 
have stopped and said we do not have the tools to forward 
project climate. They might have said, perhaps we should take a 
look at how U.S. climate has changed as the greenhouse effect 
has changed and as global temperatures have changed.
    The very real 20th Century assessment team was informed in 
the review process about this problem with the models. In 
public comments, it was swept aside with a statement that 
United States temperatures are warming, model temperatures are 
warming and, therefore, everything is fine.
    In fact, the Canadian model predicts recent years to be 2.7 
degrees warmer than the years in which the Canadian model 
starts. The observed change in U.S. temperature is .9 degrees, 
a 300 percent error.
    Random numbers are not plausible scenarios. It is no longer 
science when our results are worse than random numbers. It is 
mathematical philosophy. It is scenario building, but it is not 
science. Mr. Chairman, whatever is based upon models that do 
no better than random numbers is science fiction, glossy, 
colorful, meticulous, but fiction.
    Unfortunately, the assessment serves as the basis for 
sweeping legislation on global warming at both the Federal and 
the State levels. Using computer models that demonstrably do 
not work can only inform bad policy.
    The first time I testified on the subject of global warming 
was in February 1989--yes, it seems like 1889--before this 
very Energy and Commerce committee. I stated then that warming 
was likely to be at the lowest end of projected ranges based on 
a comparison of then existing models and observed temperatures. 
I stated that ``our policy should be commensurate with our 
science.''
    Thirteen years later I am compelled to tell you exactly the 
same. Thank you.
    [The prepared statement of Patrick J. Michaels follows:]
Prepared Statement of Patrick J. Michaels, Department of Environmental 
                    Sciences, University of Virginia
    This testimony makes no official representation for the University 
of Virginia or the Commonwealth of Virginia, and is tendered under the 
traditional protections of academic freedom.
    Effects have causes. Confronting our society today is a potentially 
serious effect, climate change, caused by human influence on our global 
atmosphere.
    The quantitative tools of mathematics and science are what we use 
to inform rational analysis of cause and effect. Science, in 
particular, obeys a rigid standard: that the tools we use must be 
realistic and must conform to observed reality. If they do not, we 
modify or abandon them in search of other analytical methods. Whenever 
the federal government releases a comprehensive science report, the 
public naturally assumes that it has passed these tests. The documents 
we will discuss today failed those tests. This failure was ignored in 
the public review process.
    There is no doubt that the issue of climate change rightly provokes 
private citizens and our government to ask what its potential effects 
might be on the United States. That was the purpose of the recent 
report Climate Change Impacts on the United States: The Potential 
Consequences of Climate Variability and Change. This document is often 
called the ``U.S. National Assessment'' (USNA) of climate change. This 
report forms much of the basis for Chapter 6 of the U.S. Climate Action 
Report--2002, a chapter on ``Impacts and Adaptation'' to climate 
change.
    The USNA began with a communication from President Clinton's 
National Science and Technology Council (NSTC), which was established 
in 1993. According to the USNA, ``This cabinet-level council is the 
principal means for the President to coordinate science, space and 
technology policies across the Federal Government.'' ``Membership 
consists of the Vice President [Al Gore], the Assistant to the 
President for Science and Technology, Cabinet Secretaries and Agency 
heads . . .'' The Council is clearly a political body (``coordinating . 
. . policies'') rather than a scientific one.
    This NSTC was, in turn, composed of several committees, including 
the Committee on Environment and Natural Resources, chaired in 1998 by 
two political appointees, D. James Baker and Rosina Bierbaum. Baker 
developed a further subcommittee of his committee, the Subcommittee on 
Global Change Research, to ``provide for the development . . . of a 
comprehensive and integrated . . . program which will assist the Nation 
and the world to understand, assess, predict [emphasis added], and 
respond to human-induced and natural processes of global change.'' 
Ultimately, this resulted in the selection of the National Assessment 
Synthesis Team (NAST).
    NAST was confronted with a daunting task, detailed in the schematic 
below. The chain of cause and effect begins with industrial activity 
and the combustion of compounds that alter the atmosphere's radiative 
balance. These are then distributed through the atmosphere. These 
affect the climate of the United States. Then, those changes in climate 
are input to a subsidiary series of computer models for forest growth, 
agriculture, etc.
[GRAPHIC] [TIFF OMITTED] 81495.021

    An understanding of the effects of climate change on the United 
States requires that there be no substantially weak links in this 
catena. As an example of a relatively strong link, I would estimate 
that we understand about 70 percent of the changes in atmospheric 
carbon dioxide that result from human activity. The reason this number 
is not 100 percent largely stems from the fact that the current 
concentration of carbon dioxide seems low, given the amount emitted and 
assumptions about how it distributes through the atmosphere and the 
biosphere, and how it eventually returns to the soil and the ocean 
bottom.
    There are two main ways to assess the most important of these 
linkages, which is between ``Atmospheric Changes'' and ``Climate 
Changes in the United States.'' One involves the use of computer 
simulations, known as General Circulation Models (GCMs) to estimate how 
climate changes as a result of atmospheric alterations. An alternative 
method for assessment is described on page 10 of this Testimony.
    There are literally dozens of GCMs currently available, and the 
USNA considered a subgroup of these models. Eventually, they selected 
two, the Canadian Climate Centre model, acronymed CGCM1, and another 
from the United Kingdom Meteorological Office, known as HadCM2 
<SUP>1</SUP>. The prime outputs of these models that are important 
for the assessment of climate change are temperature and precipitation.
---------------------------------------------------------------------------
    \1\ In 1998, the National Research Council report Capacity of U.S. 
Climate Modeling to Support Climate Change Assessment Activities 
strongly remonstrated against the use of foreign models to assess U.S. 
climate. According to the NRC, ``. . . it is inappropriate for the 
United States to rely heavily upon foreign centers to provide high-end 
modeling capabilities. There are a number of reasons for this including 
. . . [the fact that] decisions that might substantially affect the 
U.S. economy might be based upon considerations of simulations . . . 
produced by countries with different priorities than those of the 
United States.''
---------------------------------------------------------------------------
    In using GCMs to project future climate at regional scales, the 
USNA clearly placed itself squarely against the consensus of world 
climate science. In 2001, the United Nations' Intergovernmental Panel 
on Climate Change (IPCC) compendium on climate change, the Third 
Assessment Report, states:
        ``Despite recent improvements and developments . . . a coherent 
        picture of regional climate change . . . cannot yet be drawn. 
        More co-ordinated efforts are thus necessary to improve the 
        integrated hierarchy of models . 
        . . and apply these methods to climate change research in a 
        comprehensive strategy.''
    In other words, even three years after the Assessment team began 
its report relying on GCMs, the consensus of world climate science 
was that they were inappropriate for regional estimates, such as 
those required for the United States.
                        choice of extreme models
    As shown in the IPCC's Third Assessment Report of climate change, 
the average behavior of GCMs is to produce a linear (constant) rate of 
warming over the projectable future. In other words, once warming 
begins from human influence, it takes place at a constant, rather than 
an exponentially increasing rate.
    However, the CGCM1 is an outlier among the consensus of models, 
producing a warming that increases at a substantially exponential rate. 
This behavior can be seen in Figure 1a, taken directly from the USNA, 
in which the CGCM1 clearly projects more warming than the others 
illustrated in the USNA.
    The USNA also illustrates a similarly disturbing behavior for 
precipitation. Figure 1b, again taken directly from the USNA, shows 
that the other model employed, HadCM2, predicts larger precipitation 
changes than the others that are illustrated in the USNA.
    A close inspection of Figure 1a reveals that CGCM1 predicts that 
the temperatures in the United States at the end of the 20th century 
should be about 2.7 deg.F warmer than they were at the beginning, but 
the observed warming during this time, according to the most recent 
analysis from the National Climatic Data Center, is 0.9 deg.F. CGCM1 
is making a 300 percent error in its estimation of U.S. temperature 
changes in the last 100 years.
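    The arithmetic behind that comparison can be shown directly; a 
short Python sketch using only the two figures quoted in the paragraph 
above:

        # Compare CGCM1's hindcast U.S. warming with the observed value,
        # using the figures quoted in the testimony (deg F over the 20th century).
        modeled_warming_f = 2.7    # CGCM1 hindcast cited above
        observed_warming_f = 0.9   # National Climatic Data Center analysis cited above

        ratio = modeled_warming_f / observed_warming_f
        # The factor of three below is the discrepancy the testimony
        # characterizes as a 300 percent error.
        print(f"Modeled warming is {ratio:.1f} times the observed warming")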
    My colleague Thomas Karl, Director of the National Climatic Data 
Center and co-chair of the USNA synthesis, explained that the reason 
CGCM1 was chosen was because it was one of only two models (the other 
was HadCM2) that produced daily temperature output, and that this was 
required to drive some of the subsidiary models, such as those for 
forest impacts.
    Michael MacCracken, Executive Director of the National Assessment 
Coordination Office, told me otherwise. He said that the two models 
were selected because they gave extreme results, and that this was a 
useful exercise. How the explanations of the co-chair and the Executive 
Director could be so different is still troubling to me.
                       the failure of the models
    GCMs are nothing more than hypotheses about the behavior of the 
atmosphere. The basic rule of science is that hypotheses do not 
graduate into facts unless they can be tested and validated against 
real data.
    As part of my review of the USNA in August 2000, I performed such a 
test. The results were very disappointing. Both CGCM1 and HadCM2 were 
incapable of simulating the evolution of ten-year averaged temperature 
changes (1991-2000, 1990-1999, 1989-1998, etc. . . . back to 1900-1909) 
over the United States better than a table of random numbers. In fact, 
the spurious 300 percent warming error in CGCM1 actually made it worse 
than random numbers, a dubious scientific achievement, to say the least.
    I wrote in my review:
        ``The essential problem with the USNA is that it is based 
        largely on two climate models, neither one of which, when 
        compared to the 10-year smoothed behavior of the lower 48 
        states reduces the residual variance below the raw variance of 
        the data [this means that they did not perform any better than 
        a model that simply assumed a constant temperature]. The one 
        that generates the most lurid warming scenarios--the . . . 
        CGCM1 Model--also has a clear warm bias . . . All implied 
        effects, including the large temperature rise, are therefore 
        based upon a multiple scientific failure [of both models]. The 
        USNA's continued use of those models and that approach is a 
        willful choice to disregard the most fundamental of scientific 
        rules . . . For that reason alone, the USNA should be withdrawn 
        from the public sphere until it becomes scientifically based.''
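    The criterion in the passage just quoted is whether a model reduces 
the residual variance of the ten-year smoothed U.S. temperature record 
below the raw variance of that record. A minimal Python sketch of that 
comparison, using synthetic placeholder series rather than the actual 
USNA or NCDC data:

        import numpy as np

        def running_means(series, window=10):
            """Ten-year running means, e.g. 1900-1909, 1901-1910, and so on."""
            return np.convolve(series, np.ones(window) / window, mode="valid")

        def reduces_variance(observed, modeled, window=10):
            """True if the model residuals have less variance than the raw
            smoothed observations (the criterion quoted in the review)."""
            obs = running_means(np.asarray(observed, dtype=float), window)
            mod = running_means(np.asarray(modeled, dtype=float), window)
            return np.var(obs - mod) < np.var(obs)

        # Placeholder series standing in for annual U.S. temperature anomalies, 1900-2000.
        rng = np.random.default_rng(0)
        observed = rng.normal(0.0, 0.5, 101)      # synthetic "observations"
        modeled = np.linspace(0.0, 1.5, 101)      # synthetic hindcast with a steady trend

        print(reduces_variance(observed, modeled))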
    The Synthesis Team was required to respond to such criticism. 
Publicly, they deflected this comment by stating that both U.S. 
temperatures and model temperatures rose in the 20th century, so use 
of the models was appropriate!
    This was a wildly unscientific response in the face of a clear, 
quantitative analysis. The real reason for the models' failure can be 
found in the USNA itself (Figure 11 in Chapter 1 of the USNA Foundation 
document). It is reproduced here as our Figure 2. The discrepancies 
occur because:
1. U.S. temperatures rose rapidly, approximately 1.2 deg.F, from about 
        1910 to 1930. The GCMs, which base their predictions largely on 
        changes in atmospheric carbon dioxide, miss this warming, as by 
        far the largest amounts of emissions were after 1930.
2. U.S. temperatures fell, about 1.0 deg.F, from 1930 to 1975. This is 
        the period in which the GCMs begin to ramp up their U.S. 
        warming, and
3. U.S. temperatures rose again about 1.0 deg.F from 1975 to 2000, 
        recovering their decline between 1930 and 1975.
    It is eminently clear that much of the warming in the U.S. record 
took place before most of the greenhouse gas changes, and that nearly 
one-half of the ``greenhouse era,'' the 20th century, was accompanied 
by falling temperatures over the U.S. These models were simply too 
immature to reproduce this behavior because of their crude inputs.
    Despite their remarkably unprofessional public dismissal of a 
rigorous test of the USNA's core models, the Synthesis Team indeed was 
gravely concerned about the criticism. So much so, in fact, that they 
replicated my test, not just at 10-year intervals, but at scales 
ranging from 1 to 25 years.
    At the larger time scales, they found the models applicable to 
global temperatures. But over the U.S., not surprisingly, they found 
exactly what I had. The models were worse than random numbers.
    It is difficult for me to invoke any explanation other than 
political pressure that would be so compelling as to allow the USNA to 
continue largely unaltered in this environment. And so the USNA was 
rushed to publication, ten days before Election Day, 2000.
    Given the failure of the models when directly applied to U.S. 
temperatures, there were other methods available to the USNA team. One 
would involve scaling various global GCMs to observed temperature 
changes, and then scaling the prospective global warming to U.S. 
temperatures. The first part of this exercise has been performed 
independently by many scientists in recent years, and published in many 
books and scientific journals. It yields a global warming in the next 
100 years of around 2.9 deg.F, which is at the lowest limit of the 
range projected by the IPCC in its Third Assessment Report.
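    The scaling idea sketched above is not spelled out in detail, but 
one plausible reading is to rescale a model's projected warming by the 
ratio of observed to modeled warming over the historical period. A 
Python sketch under that assumption, with every number below a 
hypothetical placeholder:

        def scaled_projection(projected_warming, modeled_historical, observed_historical):
            """Rescale a model's projection by the ratio of observed to modeled
            historical warming (one possible reading of the approach above)."""
            return projected_warming * (observed_historical / modeled_historical)

        # Hypothetical placeholder numbers, deg F.
        print(scaled_projection(projected_warming=5.4,     # raw model projection for the next century
                                modeled_historical=2.0,    # model's hindcast 20th-century warming
                                observed_historical=1.1))  # observed 20th-century warming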
    If applied to the United States this would similarly project a much 
more modest warming than appears in the USNA. Perhaps that is the 
reason such an obviously logical methodology was not employed after the 
failure of the models was discovered by a reviewer and then 
independently replicated by the USNA itself.
                           effect of the usna
    This discussion would be largely academic if the USNA were an 
inconsequential document. But, as noted above, it served largely as the 
basis for Chapter 6 of the U.S. Climate Action Report--2002. Further, 
it served as the basis for legislative findings for S. 556, a 
comprehensive proposal with extensive global warming related 
provisions, and it was clearly part of the findings for legislation 
restricting carbon dioxide emissions recently passed by the California 
Legislature. Hardly a week goes by without some press reference to 
regional alterations cited by the USNA. Would the USNA have such 
credibility if it were generally known that the driver models had 
failed?
             solving the structural problems with the usna
    The USNA synthesis team contains only two individuals who can 
logically claim, in my opinion, to be climatologists. Of the entire 14-
member panel, there is not one person who has expressed considerable 
public skepticism about processes that were creating increasingly lurid 
scenarios for climate change with little basis in fact. As noted above, 
the administrative structure that selected the synthesis team was 
clearly directed by political appointees, which no doubt contributed to 
this imbalance.
    In my August 2000 review, I wrote:
        ``Finally, we come to the subject of bias in selection of USNA 
        participants. There are plenty of knowledgeable climatologists, 
        including or excluding this reviewer, who have scientific 
        records that equal or exceed those of many of USNA's 
        participants and managers. They would have picked up the model 
        problem [that extreme versions were selected, and that they 
        could not simulate U.S. temperatures] at an early point and 
        would not have tried to sweep it under the rug. Where is Bob 
        Balling? Where is Dick Lindzen? Where are [Roger] Pielke Sr., 
        [a participant in this hearing], [Gerd] Weber or [Roy] 
        Spencer?''
    My review was tendered shortly after attending the annual meeting 
of the American Association of State Climatologists (AASC) in Logan, 
Utah, in August 2000. The AASC is the only professional organization in 
the U.S. devoted exclusively to climatology. Membership consists 
largely of senior scientists who are tasked by their states, usually 
through the state's major universities, to bring climate information 
and services to the public. Until 1972, the State Climatologists were 
employees of the U.S. Department of Commerce.
    In my review of the USNA I further noted that:
        ``Yesterday . . . I returned from the annual meeting of the 
        American Association of State Climatologists (I am a past 
        president of AASC). There were roughly 100 scientists present. 
        I can honestly state that not one positive comment was tendered 
        to me about the USNA, out of literally dozens made. If the 
        report is published in anything like its current form, I 
        predict it will provoke a public examination of how and why the 
        federal science establishment [could have produced such a 
        document].''
    That prediction has come true. It is why we are here today.
    Besides being research scientists, the State Climatologists are 
interpretive professionals who deal with the climate-related problems 
of their states on a day-to-day basis. It's hard to imagine a better-
suited team of professionals to provide a significant leadership role 
in any new Assessment.
                            recommendations
1. The current USNA should be redacted from the public record.
2. Another Assessment should be undertaken, this time with a much more 
        diverse synthesis team selected by a more diverse political 
        process.
3. Professional interpreters of climate information, who will be called 
        upon to explain or defend any future Assessment, such as the 
        State Climatologists, should provide strong input to any new 
        report.
4. Any new Assessment must be based only upon hypotheses that can be 
        verified by observed data.
                               conclusion
    The 2000 document, Climate Change Impacts on the United States: The 
Potential Consequences of Climate Variability and Change, which served 
as the basis for an important chapter in the new Climate Action 
Report--2002, was based on two computer models which were extreme 
versions of the suite of available models. The two selected models 
themselves performed no better than a table of random numbers when 
applied to U.S. temperatures during the time when humans began to 
subtly change the composition of the earth's atmosphere. As a result, 
both reports are grounded in extremism and scientific failure. They 
must be removed from the public record.
    This scientific debacle resulted largely from a blatant intrusion 
of a multifaceted political process into the selection process for 
those involved in producing the U.S. National Assessment. The clear 
lesson is that increased professional diversity, especially 
intermingling state-based scientists with the federal climatologists, 
would have likely prevented this tragedy from ever occurring.

                               References

    IPCC (2001). Climate Change 2001: The Scientific Basis. 
Contribution of Working Group I to the Third Assessment Report of the 
Intergovernmental Panel on Climate Change. Houghton JT, Ding Y, Griggs 
DJ, Noguer M, van der Linden PJ, Dai X, Maskell K, Johnson CA (eds). 
Cambridge University Press, Cambridge, UK, 881pp.
    National Assessment Synthesis Team (2001). Climate Change Impacts 
on the United States: The Potential Consequences of Climate Variability 
and Change. Report for the U.S. Global Change Research Program, 
Cambridge University Press, Cambridge, UK, 620pp.
    U.S. Department of State. U.S. Climate Action Report--2002. 
Washington, D.C., May 2002.
[GRAPHIC] [TIFF OMITTED] 81495.022

[GRAPHIC] [TIFF OMITTED] 81495.023

[GRAPHIC] [TIFF OMITTED] 81495.024

[GRAPHIC] [TIFF OMITTED] 81495.025

    Mr. Greenwood. Thank you.
    The Chair recognizes himself for 10 minutes for inquiry.
    Dr. Karl, I want to start with you. In your written 
testimony, you note several factors that cause climate change 
and variability that were not included in assessment models. 
This is on page 2 and more extensively on page 10. You also 
note that foreign models were problematic for use in the United 
States, and cite a National Research Council report which you 
chaired--this is from page 4 of your testimony--that 
underscored these important limitations.
    Can you explain why, given your own knowledge about the 
models' limitations, you went ahead with these models? Weren't 
they too limited for the public uses that would result from 
this report?
    Mr. Karl. Sure. I would be happy to, Mr. Chairman.
    The two models that were selected, by no means, were 
perfect models, and no model is perfect. I don't think anyone 
would argue that. I think the question that we all asked 
ourselves is whether they would be useful tools.
    As I tried to indicate in my testimony, we believe they 
were quite effective as useful tools, along with the 
observational data and ``what if'' scenarios. Maybe I can give 
you a little bit of an analogy.
    In daily weather forecasting today, operational weather 
models cannot predict tornados or hailstones, but yet our 
weather forecasts do give an idea of when we would expect 
tornados and hailstorms and are largely based on those 
operational weather models, despite the fact that the 
operational weather models do not have the high resolution 
details to be able to predict those phenomena.
    So in that sense, these models, we felt, were effective 
tools and, as I tried to indicate in my testimony, there was a 
number of issues that were neglected, those being changes in 
land use, as Dr. Pielke had described, changes in black soot 
and other aerosol, changes in stratospheric ozone depletion.
    Those simulations weren't available at that time, but the 
intent of the assessment was to look into the 21st Century, and 
if you look at the IPCC results, those forcings, although are 
important and it is important to try and understand the 
regional details, the two most important factors, that being 
aerosols and increases in greenhouse gases, were included in 
the models.
    So for that reason, we thought that it was a valuable tool 
to go ahead and use and, in fact, if we had not used them, I 
think we would have been negligent.
    Mr. Greenwood. How do you respond to Dr. Michaels' 
suggestion that these models haven't produced more--anything 
different than a random set of numbers?
    Mr. Karl. Random numbers? Yes, and I am glad you asked that 
question, because we have conducted those tests on those models 
similar to what Dr. Michaels has suggested. First, let me 
qualify. There's many tests you can do on models, and no one 
test should be used to say whether or not a model is effective.
    There's been many types of tests applied to these models 
and other models, but I can say the same kind of tests that Dr. 
Michaels suggests was applied to precipitation data over the 
U.S., and the model showed significant skill. If you apply the 
same test to global temperatures, the models show significant 
skill.
    There is a number of reasons why we think it is 
inappropriate, actually, to apply that test on the national 
temperature or precipitation over the course of the 20th 
Century. That is, as I indicated, the models do not have all 
those forcings. They did not have volcanic eruptions. We know 
the U.S. climate record was affected by volcanic eruptions in 
the 20th Century.
    It did not have solar variability. Some of the changes were 
affected by solar variability. The timing of El Ninos and North 
Atlantic oscillations and other important oscillations are not 
in these models when they simulate climate. They are trying to 
produce the stochastic behavior of climate, and they can't 
predict the timing.
    So if you are looking at small regional scales where these 
effects are important in the historical record, it is going to 
be very difficult to evaluate a model. That is why most 
evaluations look at the global scale. They will aggregate 
regions, but they will aggregate it up globally to remove a lot 
of this noise and variability. When you do that, the models 
that we have used and many of the other models do show 
significant skill.
    In fact, a recent test by Lawrence Livermore National 
Laboratory looking at the annual cycle of precipitation and the 
total precipitation for the last 20 years showed that the 
Hadley Center model 2 that was used in the assessment exceeds 
all other models, and they tested 24 models. This was part of a 
model inter-comparison project and has been going on for a 
number of years.
    The Canadian climate model did not do as well. It wasn't an 
outlier. It was in, I would say, the lower third of the 
distribution of the models being used. So I think again it 
depends on what kind of tests you apply, and you have to look 
at the broad breadth of the scientific information that is out 
there, in my opinion.
    Mr. Greenwood. Dr. Michaels, you wanted to respond. Yes.
    Mr. Michaels. Yes. The standard test of whether----
    Mr. Greenwood. What I need you to do is you see how this 
microphone points directly at my mouth. That's what you need to 
do. If it is pointing over my head, you can't hear.
    Mr. Michaels. The standard test of whether a model 
performs, a model being a hypothesis, is a statistical test 
against random numbers. Tom very adequately answered the 
question, and I would like to point out what is kind of missing 
from his response, which was in fact that he did replicate my 
experiment and found, as I found, on 10-year averages that it 
was worse than random numbers.
    He did it on 1 year averages, on 5 year averages, on 10 and 
on 25 and found the same thing. Now we are talking about 
warming of the surface of the planet created by changes in 
greenhouse gases. After that surface and mid-atmosphere warm, 
that creates a change in the temperature distribution. That 
creates changes in precipitation.
    I think it is rather interesting to agree on this panel 
that we couldn't simulate the temperature of the United States 
and somehow be happy about the fact that the precipitation was 
right, because it is the temperature change that drives the 
precipitation change.
    Here is the real problem, if you must know. We are going to 
have great difficulty simulating the temperature history of the 
United States with these models for one main reason. There was 
a large warm-up in the United States' temperatures that 
occurred before the greenhouse effect changed very much, and 
then as the greenhouse gases began to ramp up into the 
atmosphere in the middle part of the century, the temperature 
dropped. In the latter part of the century, the temperature has 
returned to levels that are near the maxima that occurred after 
the large warm-up in the early 20th Century.
    It is going to be very, very difficult to simulate that, 
because no one really understands why the first one occurred 
and why it was of such similar magnitude to the latter one, 
given the large changes in the atmospheric greenhouse effect. I 
am left to conclude that we could not use the models for even 
assessing the annual temperature of the United States, and the 
irony of this report is it then devolves into regional 
assessments after having admittedly failed now with United 
States temperature.
    I think we need to rethink the validity of this entire 
process.
    Mr. Greenwood. Let me ask a question that is a bit off the 
script here, but Dr. O'Brien talked about the need for a 
Manhattan project. I'd like the panelists, if you would, 
starting with Dr. Janetos and going right down the line, to 
offer up your sense as to whether there is a lack of resources 
here. Do we need to, in fact, apply governmental and/or private 
sector dollars in a very significant way to create resources, 
computer and intellectual, in order to get a better grip on 
this?
    Mr. Janetos. Mr. Chairman, I would answer your question 
very briefly. Yes, we do. We have been very clear both in the 
National Assessment and in subsequent publications about the 
need for substantial additional research into the topics of 
vulnerabilities, and then to understand what changes, in fact, 
are the most plausible in the physical climate system itself.
    We have had a significant research program for sometime 
now, but this is one of the most challenging issues in both 
underlying biology, ecology, and the physics of the climate 
system that this country has had to address in environmental 
science, and I believe certainly deserves additional resources 
toward its investigation.
    Mr. Greenwood. Dr. Karl?
    Mr. Karl. Yes. I think one way to take a look at whether or 
not we need additional resources is to look back at some of the 
problems that we faced in the National Assessment, we attempted 
to do this. There were issues related to just understanding 
whether or not we had effective observations, not to simulate 
what the real climate is, but to look at the changes.
    You heard Dr. Pielke talk about changes in the mid-
troposphere. There are some new results coming out that 
suggest, well, the increases of temperature were a little more 
than we perhaps thought. It just reflects this issue of trying 
to understand what is happening in the climate itself is 
complex, requiring significant investment in time and 
resources.
    Then the issue related to the models: We were severely 
constrained by the number of models that were able to simulate, 
for example, just a day/night temperature. Many of the 
ecosystem modelers said this was critical for them to be able 
to look at the impacts further down into the century.
    So it is very clear that we need the details of climate 
from the observations. We need many more improvements in the 
models. My sense is that there is plenty of work out there to 
be done and plenty that could be very effective in helping to 
do another national assessment, if we so attempt it.
    Mr. Greenwood. Dr. Lashof?
    Mr. Lashof. Well, I certainly agree with that. We are 
investing quite a bit in the global change research program 
now. Additional resources certainly would be useful, and 
particularly the kind of detailed modeling center that Dr. 
O'Brien suggests, I think, would be very helpful in the United 
States.
    I would just add to that a caution, that the goal of 
furthering the research to get at many of the details that need 
to be addressed should not be posed as a substitute for the 
need to take action now to reduce emissions.
    I would just like to quote from the Intergovernmental Panel 
on Climate Change report, synthesis report from 2001, that says 
that, ``The pervasiveness of inertia and the possibility of 
irreversibilities in the interacting climate, ecological and 
socioeconomic systems are major reasons why anticipatory 
adaptation and mitigation actions are beneficial. A number of 
opportunities to exercise adaptation and mitigation options may 
be lost if action is delayed.''
    Mr. Greenwood. Dr. O'Brien, we know what your answer is, 
but perhaps you could elaborate on what you have in mind.
    Mr. O'Brien. Well, I am going to brag now. I have used more 
computer time than you probably imagine that a scientist and 
his students can use all my life. I can say for the absolute 
first time in my life I personally have adequate access to 
computer time. It is through two ways.
    One was when the Soviet Union surrendered, they finally 
changed procurement rules in the Department of Defense, and now 
in the Department of Defense they can buy computers. You know, 
in 1975 the poor guys inside Department of Defense had to guess 
what Cray and the other ones are going to have 8 years from 
now, because that is how long it took to buy a computer.
    We, fortunately, have ONR support, and I can get access to 
those for some of our ocean modeling. Florida State has 
invested in a large system which, as I'll brag, has put us third 
or fourth in the world, and first among universities in the United 
States, and I am very happy.
    We know, for example, that why didn't we have any U.S. 
models? Well, there are two institutions that historically we 
would look to. That is the Geophysical Fluid Dynamics Laboratory, NOAA's lab 
in Princeton, and the National Center for Atmospheric Research, 
and neither one had adequate computer things to do these kind 
of models that our international partners are doing.
    You know, sir, in a word I want you to remember, computers 
are cheap. I find hundreds of young scientists, PhDs, working 
in these labs with absolutely inadequate computers, and you 
know, you figure out what the cost per man-year is for a PhD 
with all the support, and the computers these days with Moore's 
law operating are really, really inexpensive.
    The other thing is that at NCAR I have been developing a 
new climate model, and I am putting their best scientists on 
it. But they want to go back to individual papers and things 
like that. I really believe, to advance this, this is a 
national emergency, the kind of cost that you mentioned 
yourself, that climate change is going to cost this country 
trillions of dollars and problems as we go down 10 to 20 years. 
We need to put it in a situation where it is outside the 
politics of whatever the history of that lab is, and something 
where, you know, young--bright, young scientists will take this 
as a challenge, that they can go there and work for a while.
    You know, there is an example in our government. You know, 
they are having their 30th anniversary. It is ICASE. Under 
NASA, you know, they have this little think tank for numerical 
modelers in other areas besides weather and climate, and it is 
at Langley, and it is the NASA Administrator's budget, and it 
is an absolutely beautiful thing.
    They take the brightest young mathematicians that come out 
of our university systems, and put them in an environment in 
which they can really advance the understanding in areas like, 
you know, simulating, as someone said, aircraft and simulating 
other things.
    So I think it is really a priority, and I really think it 
is relatively cheap compared to the kind of money that we are 
spending on satellite systems, observing systems, that we 
really need to do this. I'm sorry to take so long.
    Mr. Greenwood. That's quite all right. Dr. Pielke.
    Mr. Pielke. Yes. I think there is a need to have a 
redirection of this effort, but I would like to focus more on 
the vulnerability perspective. That is, you start from that, 
and we have to assess vulnerabilities to environmental risks, 
societal risks, all kinds of risks that we can think of, and 
where does climate fit within that umbrella, and then develop 
plausible projections from both models, from historical record, 
artificial creation of data.
    This is, I think, a much more vibrant and inclusive 
approach than what the National Assessment did, because if we 
start from vulnerability and we find where our thresholds and 
our concerns are, that is where we can spend our resources.
    I think, in terms of developing better models, I would 
agree with Dr. O'Brien that we actually have a lot of computer 
tools available today, and we can do a lot more with these 
models. I think that needs to be integrated more into the 
process, some of the work with respect to land use change on 
the climate system, the multiple effect that aerosols have on 
cloud and precipitation.
    Climate is a very complicated problem. As I said in my 
testimony, I don't think that predictability may be the 
ultimate goal to understanding of the climate process itself. 
That is why I fall back to the vulnerability paradigm, because 
that permits us to make decisions even if we don't understand 
what exactly will happen in the future.
    Mr. Greenwood. Dr. Michaels?
    Mr. Michaels. Mr. Chairman, perhaps I spend too much time 
in Washington, but it would be hard for me to imagine a panel 
of scientists or agency heads saying that they didn't need the 
money, and you always have to be very careful.
    Mr. Greenwood. We have a predictive model that predicted 
that you all would say this.
    Mr. Michaels. Yes. Now having said that, let me offer 
somewhat of an alternative point of view, first of all, on the 
assessment. I think that it probably would have been 
appropriate had there been more involvement from the State 
climatologist community, because we are the people who have to 
respond to the press more than anybody else, and the public, 
when these reports come out. If we had had more input, I think 
we would have been happier with the report.
    Having said that, I might be able to simplify the problem 
for you a little bit, and I am going to show you a picture. Mr. 
Chairman, in absence of a picture, I'll paint you a picture.
    We have a number of computer models for the behavior of the 
atmosphere, and by and large, although there are a few outliers 
like the Canadian model, they predict straight line increases. 
They say once human warming starts from changes in the 
greenhouse effect, it takes place at a constant rate. I believe 
human warming has started from changes in the greenhouse 
effect, because I believe that human warming has started.
    Mr. Greenwood. So you are defining human warming as----
    Mr. Michaels. Greenhouse warming has started as a result of 
this.
    Mr. Greenwood. All right.
    Mr. Michaels. So perhaps what we ought to do is to 
adjudicate all these straight lines. You see some of them are 
going up like this. Some of them are going up like this, and 
some of them are going up like this, and some of them are going 
up like that. Now all the models say that once the warming 
starts, it takes place at a constant rate. So why don't we just 
plot the observed rate of human warming?
    You know what you get when you do that? You get something 
around 1.6 degrees Celsius over the next 100 years. I would 
think that our research effort should be attempting to answer 
the question why is the warming rate proceeding at the low end 
of the range of expectations, and why has it been so constant?
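    The extrapolation Dr. Michaels describes, fitting a straight line to 
the observed warming and carrying it forward a century, can be sketched 
as an ordinary least-squares fit. The series below is synthetic, with an 
assumed rate chosen only to illustrate the procedure:

        import numpy as np

        # Synthetic global temperature anomalies for an assumed warming period, deg C.
        years = np.arange(1970, 2001)
        rng = np.random.default_rng(1)
        anomalies = 0.016 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

        slope = np.polyfit(years, anomalies, 1)[0]     # straight-line (least-squares) fit
        print(f"Fitted rate: {slope * 10:.2f} deg C per decade")
        print(f"Implied warming over the next 100 years: {slope * 100:.1f} deg C")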
    I wish I could show you a chart right now to show you how 
constant it has been, and I think that that is the research 
question of the future.
    Again, my other answer is the next time around, let's get 
more of the State people in these reports, because they are the 
ones who can take these reports to the public and explain them 
the best.
    Mr. Greenwood. Feel free to find your picture and show it 
to us when you find it.
    The Chair recognizes the gentleman from Florida, Mr. 
Deutsch, for 10 minutes.
    Mr. Deutsch. Thank you. How much confidence do we have in 
the climate projections at this point in time, and then 
specifically related to that, we have--You know, there's 
different models and different scenarios. Can each of you 
comment on what is agreed to in these different models and 
scenarios? Maybe we could just go to Mr. Janetos.
    Mr. Janetos. Yes, sir. In the models that we used in the 
assessment itself, there was a single underlying emission 
scenario. This is, in fact, a limitation of the assessment. The 
scenario--The emissions scenario that we chose was one that had 
been thoroughly examined internationally.
    So in one sense, one of the things that was held quite 
constant throughout the assessment was the underlying forcings 
of greenhouse gas accumulations and changes in sulfates and 
aerosols, for example. The models that were used----
    Mr. Greenwood. Would you define forcings, because you have 
all been using that, and I am not sure that Mr. Deutsch and I 
understand.
    Mr. Janetos. The changes in the impacts on the atmosphere 
that actually cause climate to change and to vary, in an 
abbreviated way. The models--All of the models that were used 
in the assessment shows some warming. There was obviously--over 
the U.S. There is obviously disagreement in the actual 
magnitude of that warming.
    They also show--They do show rather different changes in 
precipitation, which Dr. Karl has referred to in his testimony. 
In each case we actually--The analysis that we actually 
performed was to take the changes in the models and apply those 
to an interpolated dataset of the historical record of the 
United States.
    So the actual variability that was analyzed was drawn from 
the historical record itself. We did this in order to attempt 
to be conservative in our analyses, in particular with respect 
to changes on intra-annual and decadal time scales. Thank you.
    Mr. Karl. If I could address the question simply, there's a 
number of items that I think everyone would agree on. One it is 
going to get warmer, and again, as Dr. Michaels has mentioned, 
the issues are how much warmer. That is why you will see in the 
IPCC reports this uncertainty range, and that is why I think it 
is important in these scenarios to look at that full range.
    So, clearly, being warmer is part of it, and then the 
implications of what happens when it is warmer, reduced snow 
pack, more rain versus snow, and you can imagine what some of 
those hydrological impacts might be.
    The other aspect that I think most of us would agree on is 
that we can only state in very general terms what we would 
expect to see with precipitation: Increase in mid and high 
latitude precipitation, generally globally more precipitation, 
subtropics perhaps less precipitation, and that dividing line 
between the subtropics and mid and high latitudes comes very 
close to the United States. That is why you see that we are 
very uncertain about just the exact sign of precipitation.
    One thing is also, I think, in general agreement. If it is 
not raining, with warmer temperatures, you generally have more 
evaporation, more evapotranspiration, depending--Here is where 
vegetation becomes important. So when you get down to local 
scales, if vegetation begins to change as temperatures 
increase, it could actually affect the amount of water that is 
being evaporated.
    So in the general sense, there is agreement when it's not 
raining, more evaporation; but there is important regional and 
local scale differences that we probably would not all agree 
on.
    Last, one item, that again in general all the models that 
we have looked at, all the models that are available in the 
literature--you can argue from thermodynamic considerations 
from some of the equations that we use in physics that, as the 
globe warms, precipitation tends to fall in heavier events. 
This is what all the models are projecting.
    We are beginning to see this in the observations. It 
doesn't occur everywhere, but more areas we are seeing than 
areas we are not seeing it, and that is also reported in the 
IPCC report.
    So those are a number of the things, I think, that there's 
some consistency that I think we might all be able to agree 
upon.
    Mr. Deutsch. Dr. Lashof.
    Mr. Lashof. Thank you. Let me start by saying what we can't 
do. Dr. Michaels has made this argument that the models are 
like a table of random numbers. But the test that he has 
applied is a very particular test, and it is a test that 
basically says can these models predict the weather in 
Philadelphia or Miami on July 25, 2010 better than a table of 
random numbers.
    The answer to that is no. Why? Because 2010 is only 10 
years from now, and over a 10-year period natural variability, 
El Ninos, volcanic eruptions that can't be predicted, the 
general oscillations in heat between the atmosphere and the 
ocean system are of the same magnitude as the expected overall 
warming trend that's a result of adding heat trapping gases to 
the atmosphere.
    So over a 10-year period you don't expect to be able to do 
better than simply using roughly current conditions as your 
best predictor of the likely conditions then. Over a 30-year 
period or a 50-year period, then the effects of human 
alterations to the atmosphere dominate over these natural 
changes and, when you apply that test, the models do much 
better than a table of random numbers.
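    The point about time horizons can be made with a rough comparison 
of an assumed warming trend against an assumed level of natural 
variability; both numbers below are illustrative assumptions, not 
values taken from the testimony:

        # Signal versus noise over different averaging horizons (illustrative numbers only).
        trend_per_decade = 0.17       # assumed greenhouse warming trend, deg C per decade
        natural_swing = 0.25          # assumed decade-scale natural variability, deg C

        for decades in (1, 3, 5):
            signal = trend_per_decade * decades
            print(f"{decades * 10:3d} years: trend signal {signal:.2f} deg C "
                  f"vs natural swings ~{natural_swing:.2f} deg C "
                  f"-> ratio {signal / natural_swing:.1f}")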
    So that's the fundamental point. As a result of that kind 
of consideration, there is agreement, again accepted by this 
administration as well as the last administration, that warming 
during the 21st Century will be larger than warming during the 
20th Century.
    Again, Dr. Michaels said, well, why don't we just take the 
observed data and draw a straight line through it. That's okay. 
The problem is that, when you have data with some scatter and 
you take a relatively small period and you want to project out 
over a long, you can draw a lot of straight lines with 
different slopes, and it doesn't help you answer the question 
how steep that slope is going to be. Again, that is why the 
models are useful. They give us more insight into that.
    Just a couple of other facts that are very robust to add to 
the ones Dr. Karl just mentioned. We expect sea level rise 
during the 21st Century will be significantly more rapid than 
sea level rise during the 20th Century. That has obvious 
implications for your State, Mr. Deutsch.
    In addition, the effects on coral reefs are expected to be 
very severe. The reason for that is both the increase in 
temperature--where coral reefs are already threatened by high 
sea surface temperature events that cause coral bleaching, that 
becomes much more common--plus the direct effect of increased 
CO<INF>2</INF> which, as the atmosphere accumulates more carbon 
dioxide, carbon dioxide increases the acidity of the ocean and 
literally erodes the corals.
    So if we continue to add carbon dioxide to the atmosphere, 
with high confidence we can say, and the National Academy says, 
that coral reefs are extremely vulnerable to being wiped out in 
many areas. Those are just a couple of examples. Thank you.
    Mr. O'Brien. One of the interesting things about the global 
models is that some of us near my age remember the first one by 
Jim Hansen from NASA GISS in which he told the Senate that we 
would increase 10 degrees Fahrenheit by 2020, but he had a very 
small computer. So he had no ocean in his model, and eventually 
the GCMs, which were putting oceans in--The trend that I 
mentioned in my report is that what I see in the models is 
that, as the models keep increasing, the magnitude of the 
impact out at 100 years is decreasing.
    I think that Dr. Michaels' straight line--and I think there 
is probably only one--is a lower bound unless something else 
happens, because we might be going into an ice age, which has 
nothing to do with what man is doing to the planet. In fact, 
Mr. Deutsch, if you look in the back of my report, you will 
find out that where you live in south Florida is warming up, 
but most of Florida is actually cooling down. It is actually 
cooling down.
    I remind the panel that around 1880 in Savannah, Georgia, 
and Jacksonville, Florida, two wonderful places, they harvested 
tens of thousands of boxes of very good oranges which they 
shipped to Europe and to Washington and those areas, and now if 
you want to grow oranges, you have to be south of Orlando.
    So, clearly, part of the southeast has certainly not 
experienced this warming that some people are finding in the 
data. But I believe that the models will get better, and I 
believe that Dr. Pielke's ideas about vulnerability and other 
effects are extremely important in order to direct the modelers 
that are not in an ivory tower just doing these physical 
models, and we are already working in those areas.
    You know, right now in the State of Florida, actually by 
using climate variability, we are actually now providing forest 
fire predictions, county by county, month by month. So there's 
a lot to do in applied work, and I am very pleased that we have 
the support of that.
    So the models will get better as the resolution gets 
better. This is a known fact with weather prediction. You know, 
we went from, when I was in graduate school, about 250 
kilometer on side grids until now, you know, the weather 
predictions are getting down to 10 kilometers on a side, 
particularly at this European center that I mentioned earlier, 
and their forecasts for weather are getting very, very nice, 
much better than we have had in the past.
    So I really believe that we need these models. You know, 
also the Nation is investing a lot of new resources in the 
ocean. There is a large portion of the scientific community 
that believes that we also need to understand the ocean. The 
ocean is the flywheel in the climate system. It is the thing 
that will change, and I am sorry to tell you, Dr. Lashof, but 
the ocean's pH cannot incorporate--The ocean is a very buffered 
system.
    Also the things about corals is somewhat a red herring. 
There's later research. Remember, about 10 years ago the corals 
south of Florida were dying. They blamed it all on the El Nino. 
That was the era when everything was due to El Nino. In fact, 
actual experiments and in the literature, published not by me, 
of course, shows that, you know, a lot of this is natural, and 
sometimes the bleaching is actually beneficial for the coral 
for when they take their next bloom.
    It's sort of like in northern Florida and Georgia, you 
know, if we don't get any cooling in the winter, you don't get 
any peaches. Thank you.
    Mr. Pielke. Yes, sir. The fundamental hypothesis for these 
models is that we can predict the future change based on 
CO<INF>2</INF> and other greenhouse gases and aerosols, and not 
just CO<INF>2</INF> but the radiative effect of CO<INF>2</INF>, 
how it affects the greenhouse effect. But carbon dioxide has 
other effects such as biogeochemical effects, and there's the 
land use change that we have already mentioned.
    These make it--These haven't been included in the models. 
So we don't know if they have predictive ability, but it is a 
necessary condition to test. As I showed in my testimony, the 
current suite of models that were used in the U.S. National 
Assessment have failed to replicate the atmospheric change over 
the last 20 years.
    The atmosphere has to warm in order to warm the surface, 
and as to why the surface has warmed and the atmosphere hasn't, 
that is the subject of some controversy. But some of our 
initial work suggests maybe some of the surface data is not 
spatially representative. We can talk about that more, if you 
would like.
    For south Florida specifically, we have actually published 
papers on that subject, and we have shown, for example, the 
July, August warming that has occurred in south Florida over 
the past 80 years or so can be explained entirely by land use 
change, the fact of the draining of the marshes, the draining 
of the wetlands. Doesn't mean that is the only reason that it 
has occurred, but we can explain it. That has not been included 
in the National Assessment.
    Finally, I would like to conclude this answer with just 
going back to the statement of my society, the American 
Association of State Climatologists. We specifically concluded 
that climate projections have not demonstrated skill in projecting 
future variability and change in such important climate 
conditions as growing season drought, flood producing rainfall, 
heat waves, tropical cyclones, and winter storms, and these 
types of events have a much more significant effect on society 
than average annual global temperature trends, even if we could 
predict them correctly.
    Mr. Michaels. Thank you. I think, I'm sure inadvertently, 
Dan misrepresented my analysis. We weren't just using 10-year 
decadal averages. We were looking at 10-year running means, 
1991 to 2000, 1990-1999, etcetera, on back through the 
historical record.
    He said that, if we had looked at 30-year averages, that 
would have been important. Well, I didn't, but Tom Karl was so 
interested in our analysis that he did, and he found that the 
models over the U.S. for temperature, in fact, were no better 
than random numbers on 25-year averages.
    I would like to get back to this notion of what we know and 
what we don't know. Both the House and Senate have considered--
thrown considerable resources at us, probably about $10 billion 
over the years, to study this issue of climate change, and much 
of it has gone toward the modeling of climate change.
    Now I am going to believe that for that $10 billion we at 
least got the mathematical form of those models correct. This 
is the grab bag of models. I could get you a whole bunch of 
others. What you see is they are straight lines in general. The 
Canadian model is an outlier.
    Now the reason for this is simple. It is because we are 
adding carbon dioxide in the atmosphere at a slightly 
exponential rate, if I could draw your attention to my hand, 
slightly greater than a straight line, but the response of the 
atmospheric temperature to carbon dioxide is what we call a 
logarithm. It begins to damp off. If you run an exponential through 
a logarithm, you get a straight line. That is what we have 
here.
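    The mechanism described here is that a roughly exponential rise in 
carbon dioxide, pushed through a logarithmic temperature response, 
yields warming that is linear in time, because the logarithm of an 
exponential is a straight line. A numerical Python sketch, with the 
growth rate and sensitivity below assumed only for illustration:

        import numpy as np

        # A logarithmic response applied to exponentially growing CO2 gives a
        # constant warming rate: sensitivity * log(exp(k*t)) is linear in t.
        sensitivity = 3.0     # assumed warming per doubling of CO2, deg C
        growth_rate = 0.005   # assumed fractional CO2 growth per year

        years = np.arange(0, 101)
        co2_ratio = np.exp(growth_rate * years)          # C(t)/C(0), exponential growth
        warming = sensitivity * np.log2(co2_ratio)       # logarithmic temperature response

        rate = np.diff(warming)
        print(f"Year-to-year warming rate stays between {rate.min():.4f} "
              f"and {rate.max():.4f} deg C/yr")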
    As the greenhouse era began, and we can, I think, see that 
when we see the cold air masses in Siberia start to warm up--
that's a real strong signal of a greenhouse.
    Mr. Greenwood. When was that?
    Mr. Michaels. That's about around 1970 or so this begins to 
take place. We could plot the temperatures against this. I want 
to show you something. Now let me finish with an analogy.
    We have different weather forecasting models, and I teach 
weather forecasting at University of Virginia every once in a 
while, and some days the models will differ. We have the ADA 
model. We have the NGM model which stands for ``no good 
model.'' We have the ECMWF. We have all these models. What do 
you think we tell students, for all their tuition money, when 
we have all these different models forecasting slightly 
different weather for the next 3 days?
    We tell them to look out the window. We tell them look at 
what is happening around the country, and see which model 
corresponds best to reality. That is what we do for the weather 
forecasting problem, and that is what this graph does for the 
climate forecasting problem.
    I draw your attention to the blue dots, once they start to 
go up, how remarkably little they depart from the straight 
lines. It's just that the computers predict different straight 
lines. What has happened here--what explains this curve is not 
only the addition of the logarithmic and the exponential 
response, but a remarkable constant has emerged in our study of 
human influence on the atmosphere, which is the amount of 
carbon dioxide emitted per person is constant as population 
increases.
    Now we believe population is not going to increase as much 
as it was. This curve is a true indicator of what is happening, 
and I see absolutely no reason to believe that those constancies 
are going to begin to suddenly depart from reality.
    Mr. Greenwood. I am going to recognize myself for an 
additional 10 minutes.
    Mr. Deutsch. Mr. Chairman, if I might, I have gotten a 
request, a unanimous consent request, that other members be 
allowed to submit statements and questions for the record.
    Mr. Greenwood. Without objection, they certainly will be.
    Dr. Pielke, in your testimony you described a policy 
statement of the American Association of State Climatologists 
which recommends that, ``Policies related to long term climate 
not be based on particular predictions, but instead should 
focus on policy alternatives that make sense for a wide range 
of plausible scenarios.''
    Does this mean State climatologists, by and large, do not 
consider the National Assessment a useful tool for 
policymaking?
    Mr. Pielke. Well, we didn't specifically talk about the 
National Assessment. We talked about the climate change issue 
in general. I would think we would fall back on our comment No. 
2 in our policy statement that recognizes that the models are--
or that climate prediction itself was a very complicated 
problem, and that verification is also difficult, if not 
impossible, because you have to wait a long period of time in 
order to come up with the predictions.
    I think we also recognize as State climatologists that 
climate is much more complex than is implied by the U.S. 
National Assessment, since they didn't, for example, include 
all the human forcings; and because of that, as I said a few 
minutes ago, we have concluded that there is no skill in any of 
these models, the IPCC or the U.S. National Assessment, for 
predicting these regional impacts of growing season, drought, 
flood producing rainfall and so forth.
    So even if the models did show global skill, which I don't 
think they have, they certainly have not shown regional skill 
as voted on by nearly a unanimous vote of our Society.
    Mr. Greenwood. Thank you. Dr. Janetos, let me return to 
you. You note in your testimony that the report explicitly 
describes the synthesis team's scientific judgment about the 
uncertainty inherent in each result. (a) Can you explain why 
this effort was sufficient, given the complexities of the 
undertaking, the public mindset, and the context in which the 
report would be taken?
    Mr. Janetos. It was certainly our hope and our intent to 
signal to our readership, however wide or narrow it might have 
ended up being, our judgment about the robustness and 
confidence that we had in our major findings. To give you a 
particular example, results that were only found in one model 
run from one GCM and one ecological model, as extreme as they 
might have been, were judged to be of substantial--We had 
substantially less confidence in those results than findings 
that were consistent amongst either climate models or ecosystem 
models.
    It was certainly not our intent, nor the design of this 
report, to have it serve as the sole basis for national 
policymaking, and it obviously is not being used as such, as a 
sole basis for policymaking, which I think is wise.
    Many of us have subsequently collaborated on a publication 
in which we lay out our views of the scientific uncertainties 
and recommend programs for addressing those, which is currently 
in press in the peer reviewed literature.
    Mr. Greenwood. Thank you. Mr. Karl, do you believe the 
caveats about uncertainty were sufficient?
    Mr. Karl. Yes, I believe that we went to great pains to 
develop a lexicon, as Dr. Janetos had indicated, to try and 
convey where it was clear in our minds that there was 
considerably higher probability, given all the assumptions of 
the scenarios that were generated, of the outcomes. Then there 
were some where we tried to convey the information in the sense 
that we just didn't know, and there was equal chances.
    So I thought that, in fact, the assessment followed a 
protocol that was begun in the first IPCC report in 1990 that 
tried to give asterisks, asterisks meaning one, two, three or 
four-star asterisks to try and convey some sense of confidence 
that the scientists had in the outcomes that they were 
expecting in the future.
    It was very clear when we were writing this report, words 
can be very deceiving. One individual may say likely, and it 
causes a whole different set of ideas to come to mind that, you 
know, maybe this is 95 percent certain. So we tried, and it is 
shown in the report--tried to use those words and link them 
with probabilities, not fixed probabilities but likely didn't 
mean 95 percent. It was somewhere between 65 and 85, 90 
percent. So we thought that this was a quite important thing to 
do.
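    A toy illustration of the kind of lexicon Dr. Karl describes, 
mapping confidence words to probability ranges; only the range shown 
for ``likely'' comes from his answer, and the other entries are 
hypothetical placeholders:

        # Hypothetical confidence lexicon; only "likely" reflects the range quoted above.
        CONFIDENCE_LEXICON = {
            "very likely": (0.90, 0.99),   # placeholder range
            "likely":      (0.65, 0.90),   # range described in the testimony
            "possible":    (0.33, 0.65),   # placeholder range
            "unlikely":    (0.10, 0.33),   # placeholder range
        }

        def describe(term):
            low, high = CONFIDENCE_LEXICON[term]
            return f"'{term}' means roughly {low:.0%}-{high:.0%} probability"

        print(describe("likely"))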
    Mr. Greenwood. Any of the other panelists want to comment 
on the adequacy of the caveats?
    Mr. Michaels. Yes, I would, if you don't mind. It has 
clearly been established here that both Tom and I agree that 
there was the problem of the two driver models doing no better 
than the table of random numbers, and on temperature, not on 
precipitation, temperature being a very important variable for 
agriculture and many of what we call the subsequent impact 
models.
    To use a colloquialism, that's garbage, garbage in, and 
there is a transitive property of refuse when you apply it to 
subsequent computer models, and that's what comes out. I have 
yet to understand, I have yet to hear a justification for 
proceeding along this road when the leadership knew that there 
was this problem with the models.
    I think they should have stopped and said, wait a minute, 
we need to report back to you that we really can't go down this 
road, even though we were commanded to, because we don't have 
the tools. They could have come to you and said, listen--I 
mean, they could have disagreed with me, that's fine--we need a 
lot more money. We need a lot more support to study this 
problem and to give you what is an assessment that is based 
upon real numbers, not random numbers. That is my problem with 
the competence in this report.
    Mr. Greenwood. Thank you. Mr. Karl, can you elaborate on 
the timing of model improvements in your testimony? Can models 
ever provide a level of certainty needed to convince 
policymakers or even the State climatologists?
    Mr. Karl. First off, I would preface my comment that I 
think I will try and limit my comments to how the improvements 
in the models--how long it will take to narrow the 
uncertainties as opposed to when State climatologists or 
policymakers may choose to use them.
    I think that, if you take a look at history, you can get a 
good sense of how quickly we might be able to converge. If you 
look at this issue that really began to become a focus of the 
scientific community in the 1980's, the first models that were 
generated--in fact, if you even look before that, the first 
National Research Council report talked about the sensitivity 
of models to doubling of carbon dioxide on the global average 
temperature, and they gave an uncertainty range that stands to 
this day today.
    That first report done by the NRC now is over 30 years old, 
and you will see that we still have the same range of 
uncertainty, you know, doubling of CO<INF>2</INF>, 1.5 to 4 
degrees Celsius increase in temperature globally, and then the 
issues come down to, well, what is going to happen in the 
specific regions.
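    The sensitivity range Dr. Karl cites can be turned into an implied 
warming for any concentration ratio through the standard logarithmic 
relation; a Python sketch using the 1.5 to 4 degree Celsius range quoted 
in his answer (the 50 percent increase case is an assumed example):

        import math

        def warming_for_ratio(sensitivity_per_doubling, concentration_ratio):
            """Equilibrium warming implied by a logarithmic CO2 response."""
            return sensitivity_per_doubling * math.log2(concentration_ratio)

        # Range quoted in the testimony for a doubling of CO2, deg C.
        for sensitivity in (1.5, 4.0):
            print(f"Sensitivity {sensitivity} deg C per doubling -> "
                  f"{warming_for_ratio(sensitivity, 2.0):.1f} deg C for a doubling, "
                  f"{warming_for_ratio(sensitivity, 1.5):.1f} deg C for a 50% increase")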
    I do see some significant improvements in the next number 
of years with the use of not only global models but coupling 
with regional climate models, putting in more of the regional 
details, as we have discussed. So there will be some 
improvements, but I would not expect that that range is going 
to change substantially in the next 5 or 10 years.
    If I may make one other comment with respect to some of Dr. 
Michaels' statements regarding whether or not these models are 
better than a table of random numbers--and again, I don't want 
to turn this into a scientific debate, but the way you apply 
tests to models is very important to know the framework. What's 
the level playing ground?
    These models that were run had one simulation. We know that 
you need many simulations to adequately capture important 
climate fluctuations, and we don't have that, and only if you 
have many, many different ensembles, orders of hundreds of 
climate model runs using the same forcings, can you hope to see 
what the scope of variability might be.
    These models did not include volcanic eruptions at the time 
they erupted, like Mount Pinatubo, El Chichon. So again there 
are--As I said in my testimony, my oral statement, there's many 
different tests out there, and it's very tenuous to put too 
much information on any single test.
    One other issue that has come up relates to the mid-
tropospheric temperatures, which Dr. Pielke has argued show 
less warming than models projected. I just wanted to point out 
that we have radiosonde data that go back to the early Sixties, 
and the warming produced by the observations and the models on 
a global basis is quite consistent.
    Mr. Greenwood. Dr. Michaels, you wanted to respond?
    Mr. Michaels. That is true, Tom, except you know and I know 
that the warming that occurred in the radiosondes--these are 
the weather balloons--is a peculiar warming that shows a step 
function somewhere around 1975, 1976.
    In fact, if you take this weather balloon record from its 
beginning--which, depending on the record you are using, is 
1948, 1956, or 1957--and look at 1956 to 1975 or 1976, it is 
constant. There is no warming. Then take 1977 to now, or to the 
late 20th Century, and it is constant again.
    There is this jump that occurred in the mid-1970's. Some 
people call it the great climate shift. We have no idea what it 
was--Dr. O'Brien will explain it all--and I know of no computer 
model driven by a change in greenhouse gases that says all of a 
sudden the tropospheric temperature jumps.
    So it is a little misleading to say, yeah, those records 
match up, because the computer models are predicting a smooth 
change--you saw that--in the free troposphere, and the 
atmosphere isn't obeying the law as specified by the computer.
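    [As an illustration of the kind of comparison being 
described--with synthetic numbers, not the actual radiosonde 
records--a series that is flat before the mid-1970's and flat 
afterward, with a one-time jump in between, can still show an 
overall warming trend when fit end to end:]

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1956, 2001)

# Hypothetical series: 0 C before 1977, a 0.35 C step afterward, plus noise.
series = np.where(years <= 1976, 0.0, 0.35)
series = series + rng.normal(0, 0.1, len(years))

def trend_per_decade(x, y):
    """Least-squares linear trend, reported per decade."""
    slope = np.polyfit(x, y, 1)[0]
    return 10.0 * slope

early = years <= 1976
late = years >= 1977
print(f"1956-1976 trend:   {trend_per_decade(years[early], series[early]):+.2f} C/decade")
print(f"1977-2000 trend:   {trend_per_decade(years[late], series[late]):+.2f} C/decade")
print(f"full-record trend: {trend_per_decade(years, series):+.2f} C/decade")
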
    I think Tom and I are largely in agreement, by the way. If I 
were to deconstruct--and as a college professor I am forced to 
do this--if I could deconstruct your answer that Michaels' test 
really was a little bit harsh because he didn't include 
volcanoes or something like that, isn't Tom saying that the 
models were inadequate for this report?
    Mr. Greenwood. I will let him answer that himself. 
Actually, we have a pending vote. So I am going to ask the last 
question of the hearing, and it is kind of a wide open 
question.
    That is this. The Congress and the Executive commanded that 
this study be done, but I want to ask each of you to respond to 
this question. If you had the power to command the Congress and 
the President with regard to the policies that we should enact 
and employ across this entire range of global warming issues, 
everything from the resources needed for study to the policy 
decisions on emissions, how would you command us? Dr. Janetos?
    Mr. Janetos. Mr. Chairman, a daunting question indeed. I 
think my command would be twofold. One would be to take those 
actions which make sense now, and not to imagine that the 
uncertainty in the science acts as a brake to inhibit 
mitigation activities that make sense----
    Mr. Greenwood. Could you give us an example of those 
actions that you think we should take?
    Mr. Janetos. I believe that some measure of mitigation for 
greenhouse gas emissions is in order, mitigation actions that 
are achievable with current technology and at reasonable cost.
    I also believe quite strongly that resources and a focused 
program on the vulnerabilities and sensitivities of natural 
resources to changes not only in climate but in other 
environmental stresses are in order. We face a changing planet. 
That is very clear. Ultimately, our well-being depends on our 
ability to manage those resources well.
    Mr. Greenwood. Dr. Karl.
    Mr. Karl. Well, one of the things that I think is most 
important to consider to try and move forward on this issue is 
the difficulty that we face when we try to go from discipline 
to discipline to understand the important impacts and the 
adaptations, the mitigation measures we might take.
    There is a tremendous amount of collegial interaction that 
must occur between physicists, climate scientists, ecologists, 
specialists in hydrology. One of the things we found in the 
National Assessment, I think, that was so valuable was, for the 
first time, these communities were actually talking together. 
Outputs from one model were examined to see how they might be 
used to run another model. Observations from one group were 
looked at to see how they might apply to another area.
    That activity is really critical, and it is dependent on 
individuals trying to forge these interactions, these 
discussions. So I think one of the important messages from the 
National Assessment--one thing that really matters if we expect 
further progress in this area--is to continue doing anything we 
can to encourage that dialog across disciplines.
    Most scientists get many more pats on the back by being 
specialists in their own field. So without a push in that 
direction, it is going to be very hard, I think, to expect this 
of individual scientists--although I am not speaking for 
everybody--and simply letting the system go and expecting that 
to happen on its own will be difficult.
    Mr. Greenwood. Dr. Lashof.
    Mr. Lashof. Mr. Chairman, we know that we are adding a 
thickening blanket of heat-trapping pollution to the atmosphere 
in CO<INF>2</INF> emissions from automobiles and power plants. 
We know that that is going to cause the climate to change, and 
indeed the climate has already begun to change, as I think 
everybody on this panel has recognized.
    The National Assessment shows that the United States is 
very vulnerable in many respects. We can't predict what the 
weather will be on July 25, 2030, but we can say that there are 
very severe risks to the United States if we continue to add 
carbon dioxide to the atmosphere at the increasing rates that 
we have been.
    We also know that for the last 10 years we have had a 
voluntary approach to trying to limit the emissions of 
greenhouse gases, and it's failed. Our emissions are going up. 
So I think that the basic conclusion is pretty straightforward. 
It's time for mandatory limits on emissions of carbon dioxide 
and other heat trapping gases.
    The House has before it the Clean Smokestacks Act, sponsored 
by Congressmen Boehlert and Waxman, that would make a big start 
on that by focusing on an integrated approach to cleaning up 
emissions from power plants. I think we need an energy policy 
that is designed to limit carbon dioxide emissions. 
Unfortunately, I believe that the policy passed by the House 
earlier in the year moves us in the wrong direction. Instead 
of, for example, strengthening efficiency standards for 
automobiles, which would reduce emissions of CO<INF>2</INF> and 
make us less vulnerable to dependence on foreign oil, it 
weakens current law.
    So I think there are some very clear steps. You know, the 
good news is that, while this is a very daunting problem, 
unlike some other problems such as terrorism and poverty, I 
think we know how to solve it, and we just really need to get 
to work on it. So that would be my answer. Thank you.
    Mr. Greenwood. Dr. O'Brien.
    Mr. O'Brien. Mr. Chairman, I have two points here. One point 
is that, besides my colleague Dr. Pielke's very good points 
about looking to see where the vulnerabilities are so you know 
where to put the emphasis in your studies, I believe that 
Congress should direct the scientific community to start 
working on understanding climate variability--I mean how 
climate varies on annual, multi-year, and multi-decadal 
scales--because that is how we finally get to this straight 
line. I think that just looking at what is going to happen 100 
years from now is the wrong approach.
    The second point is that I believe this changing climate 
variability, and our understanding of it, should be made a 
national security issue and not just a domestic issue. I am sad 
to hear that we continue to focus too much today on just what 
is happening in the United States, because, given our standing 
in the world, we are taking on responsibility for many parts of 
the world.
    You see lots of efforts on both the military and the 
civilian side, and I do believe that we need to think about 
other places in the world. If climate variability destabilizes 
countries which are on the edge--and I am not going to mention 
any now--that is going to cause a great problem for our economy 
and our citizens. So I really believe that we should return to 
the idea that the changing climate of the future, and its 
variability, is a very important national security issue for 
the United States.
    Mr. Greenwood. Thank you, sir. Dr. Pielke.
    Mr. Pielke. Well, first I would like to mention that I think 
we should move beyond the term global warming to the more 
inclusive term, human-induced climate change, because I think 
it is multi-dimensional and multi-faceted, as our policy 
statement says.
    In the specific policy statement of my association, there 
are two bullets in there that I think address your question 
specifically. The first one is that policy responses to climate 
variability and change should be flexible and sensible. The 
difficulty of prediction and the impossibility of verification 
of predictions decades into the future are important factors 
that allow for competing views of a long term climate future. 
Therefore, the American Association of State Climatologists 
recommends that policies related to long term climate not be 
based on particular predictions but instead should focus on 
policy alternatives that make sense for a wide range of 
plausible climatic conditions, regardless of future climate.
    Climate is always changing on a variety of time scales, and 
being prepared for the consequences of this variability is a 
wise policy.
    Second, in our interactions with users of climate 
information, AASC members recognize that the Nation's climate 
policies must involve much more than discussions of alternative 
energy policies. Climate has a profound effect on sectors such 
as energy supply and demand, agriculture, insurance, water 
supply and quality, ecosystem management, and the impacts of 
natural disasters.
    Whatever policies are promulgated with respect to energy, it 
is imperative that policymakers recognize that climate 
variability and change have a broad impact on society, and the 
policy responses should also be broad. Thank you.
    Mr. Greenwood. Dr. Michaels.
    Mr. Michaels. Mr. Chairman, as a CATO scholar, I guess I am 
going to have to be rational. The fact of the matter is that I 
believe what we should do now is not to mandate programs and 
technologies that will not do very much about warming.
    When Mr. Gore came back from Kyoto in 1997, he asked the 
government scientists to project how much warming the Kyoto 
Protocol would save. The Protocol would require us to reduce 
our emissions 7 percent below 1990 levels, etcetera. Let me 
show you the calculation.
    The solid black line is the average temperature change from 
a suite of models if we continue business as usual. The dashed 
line underneath it is what happens if all the nations of the 
world did Kyoto.
    The difference in global surface temperature effected by 
Kyoto in 50 years would be seven hundredths of a degree 
Celsius, and fourteen hundredths of a degree Celsius in 100 
years.
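    [As simple arithmetic, the warming avoided by a policy at a 
given horizon is the gap between the two projected trajectories 
at that point. The sketch below uses hypothetical straight-line 
paths chosen only so that the gap matches the figures cited 
here (0.07 degrees Celsius at 50 years, 0.14 at 100); it is not 
the model suite shown at the hearing:]

import numpy as np

years_out = np.arange(0, 101)                    # years from present
business_as_usual = 0.017 * years_out            # hypothetical warming path (deg C)
kyoto = business_as_usual - 0.0014 * years_out   # hypothetical path under Kyoto

# Avoided warming is simply the difference between the two trajectories.
avoided = business_as_usual - kyoto
for horizon in (50, 100):
    print(f"warming avoided at {horizon} years: {avoided[horizon]:.2f} C")
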
    If we really are concerned about this problem, I suggest 
that, rather than mandating technologies, we specifically allow 
people to retain their income to invest in the technologies of 
the future--technologies that neither this Congress nor anyone 
on this panel can define. One hundred years ago, the technology 
that ran our society was radically different than it is today. 
One hundred years from now, it will be radically different. It 
will be more efficient. It must be, because that is what a 
market determines.
    I think the best thing to do is to allow people to invest 
in those technologies, their own choice, rather than having 
governments, perhaps mistakenly, invest other people's monies 
in technologies that simply will not accomplish what many 
people on this panel think needs to be accomplished.
    I believe our change to a less carbon based economy is an 
historical inevitability. All we have to do is get out of the 
way.
    Mr. Greenwood. You would command us to command less.
    We thank all of our witnesses for your presence and your 
testimony, and excuse you now. This hearing is adjourned.
    [Whereupon, at 11:45 a.m., the subcommittee was adjourned.]
