Developing a Regional Climate Model
an ORNL LDRD project
CSM staff are participating in both the science and infrastructure
portions of this ORNL-funded LDRD project.
Science goals
As a first demonstration, the project will obtain long-term records of
stream flow from gauging stations in the region and benchmark the regional
climate model's hydrological results against this record. We will then
assess the change in stream flow: the CCM3 general circulation model (GCM)
will provide the projected climate change over the next 50 years, with
atmospheric CO2 increasing at a rate of 1% per year, and the regional
climate model will downscale these projections for the hydrological
calculations.
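As a rough illustration of that forcing scenario (a sketch only, not
project code; the 355 ppm starting concentration is an assumed value for
illustration), compounding 1% growth per year over 50 years raises
atmospheric CO2 by roughly 64%:

    # Illustrative only: compound 1%/year CO2 growth over the 50-year projection.
    # The 355 ppm starting concentration is a hypothetical, assumed value.
    co2_start_ppm = 355.0
    growth_rate = 0.01   # 1% per year, as in the scenario above
    years = 50
    co2_end_ppm = co2_start_ppm * (1.0 + growth_rate) ** years
    print(f"CO2 after {years} years: {co2_end_ppm:.0f} ppm "
          f"({(1.0 + growth_rate) ** years - 1:.0%} increase)")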
Specific tasks of this project include:
- Nesting a state-of-the-art mesoscale regional climate model within
the parallel atmospheric GCM CCM3 over the eastern United States, and
replacing the surface-boundary-condition soil-vegetation-atmosphere
transfer scheme in the mesoscale climate model with an existing ORNL
regional terrestrial ecosystem carbon-water model, extended to
incorporate additional processes (e.g., albedo) required for
integration with the regional climate model. This coupling will provide
a unique capability for an internally consistent, simultaneous
simulation of surface boundary conditions and carbon-cycle dynamics for
the assessment of carbon sequestration and the impacts of climate
change;
- Establishing a benchmark data set based on regional stream-flow
gauge stations;
- Incorporating a surface hydrology routing scheme in the regional
climate and ecosystem carbon-water model; and
- Benchmarking the performance of the integrated regional model by
testing predictions of stream flow against the regional stream-flow
data set (a minimal example of such a comparison is sketched below).
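A minimal sketch of such a benchmark comparison, assuming the simulated
and gauged stream-flow series are already aligned in time (the function
and variable names here are illustrative, not part of the project
software):

    # Illustrative benchmark metrics for simulated vs. gauged stream flow.
    # Assumes two equal-length series in consistent units; names are hypothetical.
    def benchmark_stream_flow(simulated, observed):
        n = len(observed)
        mean_obs = sum(observed) / n
        sq_err = sum((s - o) ** 2 for s, o in zip(simulated, observed))
        rmse = (sq_err / n) ** 0.5
        # Nash-Sutcliffe efficiency: 1.0 is a perfect match; values <= 0 mean
        # the model is no better than predicting the observed mean.
        nse = 1.0 - sq_err / sum((o - mean_obs) ** 2 for o in observed)
        return rmse, nse

    # Example with made-up daily flows (cubic meters per second):
    rmse, nse = benchmark_stream_flow([10.2, 12.5, 9.8], [11.0, 12.0, 10.5])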
Infrastructure goals
The heart of the infrastructure for regional climate collaborators
will be a computational service that shares capabilities across
geographically distributed computers and data archives. It aggregates
the hardware and software resources of any number of sites that are
loosely connected across a network and offers up their combined power
and capabilities through client interfaces that are familiar from the
world of uniprocessor computing.
ORNL has a strong record in the development of distributed computing
infrastructure, with proven tools such as PVM and NetSolve and ongoing
development projects such as CUMULVS, HARNESS, and Problem Solving
Environments. For this LDRD proposal we will integrate the capabilities
of these existing tools to provide a computational grid structure for
existing CSMD and ESD computing equipment, and we will extend CUMULVS
functionality to include the coupling of parallel models. To unify these
projects and to support externally developed software for use within the
test grid, the infrastructure portion of this proposal will define,
assemble, and/or build the appropriate infrastructure that:
- Allows compatible parallel model components to plug together using a
standardized interface, enabling a systematic approach to building
multi-component coupled models in various configurations (see the
sketch following this list);
- Provides a mechanism for scheduling the various grid
components (parallel processors,
storage servers, and visualization engines) to complete an
analysis or model ensemble;
- Catalogues, provides access to, and performs data distillation of the
large model output runs; and
- Unifies various grid components using a graphical user
interface (GUI) that is useful
to both programmers and end users.
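One way to picture the standardized plug-together interface from the
first item above is a small component contract that every parallel model
implements. The sketch below is illustrative only, under assumed class
and method names; it is not the CUMULVS or HARNESS API:

    # Hypothetical component contract illustrating the "plug together" idea;
    # not the actual CUMULVS/HARNESS interface.
    from abc import ABC, abstractmethod

    class ModelComponent(ABC):
        """A coupled-model component that exchanges boundary fields each step."""

        @abstractmethod
        def initialize(self, config: dict) -> None: ...

        @abstractmethod
        def step(self, dt_seconds: float) -> None: ...

        @abstractmethod
        def export_fields(self) -> dict: ...    # e.g., precipitation, temperature

        @abstractmethod
        def import_fields(self, fields: dict) -> None: ...

    def run_coupled(atmosphere: ModelComponent, land: ModelComponent,
                    dt_seconds: float, n_steps: int) -> None:
        # A driver that plugs two compatible components together.
        for _ in range(n_steps):
            atmosphere.step(dt_seconds)
            land.import_fields(atmosphere.export_fields())  # downscaled forcing
            land.step(dt_seconds)
            atmosphere.import_fields(land.export_fields())  # surface fluxes back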
Computational grid
The simulation and assessment of stream flow will be carried out on a
computational grid, demonstrating our ability to integrate novel
computer science infrastructure projects with innovative regional
climate science research.
New and existing computing resources
at ORNL and UTK will be used to
form a grid testbed on which model development, production, and
assessment activities will be undertaken. The grid resources include
two PC-based clusters (48 CPUs total), an SGI Origin 2000 (8 CPUs), an
SGI Onyx SMP (8 CPUs), a DEC Server (2 CPUs), and a SUN Enterprise 450
data server (180 gigabytes of disk storage). A variety of
high-speed and legacy networks will be used to link these
components into a usable computational grid that will be accessible
from researchers' desktop workstations.