To provide a software framework within which specific models can be easily developed to address scientific questions of interest to GFDL scientists.
To ease the transition of GFDL models to new computer architectures and to facilitate both intramural and extramural scientific collaborations.
1.1 LOW-LEVEL SUPPORT FOR MODELING
The Flexible Modeling System (FMS), as well as the other lab models, uses the MPP modules developed at GFDL for memory management, communication, and I/O on scalable systems. These modules provide the message-passing and parallel I/O infrastructure for FMS and other participating lab models, including the gridpoint and spectral atmospheric GCMs, MOM3 and MOM4, and ZETANC, a non-hydrostatic mesoscale atmospheric model.
The MPP modules consist of three separate F90 programming interfaces: 1) mpp_mod, the low-level interface, provides basic routines for message passing; 2) mpp_domains, the higher-level interface, provides routines for defining domain decompositions and performing halo updates and data transposition across processors; 3) mpp_io, the I/O interface, provides routines for writing output in different formats (including netCDF) from distributed arrays.
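The typical usage pattern is sketched below: a two-dimensional domain decomposition with one-point halos is defined, a local array is allocated over the compute domain plus halos, and a single call fills the halo points from neighboring processors. The sketch assumes the mpp_mod/mpp_domains_mod module names and routine signatures as found in the FMS source; the grid size, processor layout, and array name are placeholders, and argument lists may differ between releases.

  program halo_sketch
    ! Sketch only: decompose a hypothetical 128x128 grid across all PEs,
    ! allocate a local array with one-point halos, and update the halos.
    use mpp_mod,         only: mpp_init, mpp_exit, mpp_npes
    use mpp_domains_mod, only: domain2D, mpp_define_layout, &
                               mpp_define_domains, &
                               mpp_get_compute_domain, mpp_update_domains
    implicit none
    type(domain2D)    :: domain
    integer           :: layout(2), is, ie, js, je
    real, allocatable :: field(:,:)

    call mpp_init()
    ! Choose a processor layout and define the decomposition with 1-point halos.
    call mpp_define_layout( (/1,128,1,128/), mpp_npes(), layout )
    call mpp_define_domains( (/1,128,1,128/), layout, domain, xhalo=1, yhalo=1 )

    ! Allocate the local array over the compute domain plus halo points.
    call mpp_get_compute_domain( domain, is, ie, js, je )
    allocate( field(is-1:ie+1, js-1:je+1) )
    field = 0.0

    ! ... compute on field(is:ie, js:je) ...

    ! Fill the halo points from neighboring processors.
    call mpp_update_domains( field, domain )
    call mpp_exit()
  end program halo_sketch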
The MPP modules have been stable and operational since 1999. In the last year, they have been validated and benchmarked on a variety of scalable systems, including: parallel vector (Cray T90); massively parallel distributed memory (Cray T3E); ccNUMA (Origin 2000 and 3000); SMP clusters (IBM-SP, Sun UE-10000); and Beowulf clusters (SGI-1200, DEC Alpha cluster over Myrinet, Intel Pentium cluster on Fast Ethernet).
1.1.2 Abstract Parallel Dynamical Kernels
The coding of model numerics in terms of abstract operators has many advantages. First, it permits many details of how differencing and averaging are performed on different grid types (e.g., the Arakawa B- and C-grids) to be hidden from the user: an operator may be invoked for any grid type, provided that the corresponding method is available, or can be supplied by the user, for the grid in question (see the sketch below).
Second, since the detection and invocation of halo updates is automatic, halos of differing widths may be used as needed in different parts of the code. For instance, the barotropic solver in the ocean component of FMS is latency-bound. The distributed_grid module will permit the use of wide halos in this portion of the code, so that halo updates can be called less frequently.
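To make the first point concrete, the following hypothetical sketch (not the actual distributed_grid interface; all type and routine names are invented for this illustration) shows how a single generic operator name can dispatch to grid-specific differencing methods, so that model code is written once and the B- or C-grid stencil is selected by the type of the grid argument.

  module operator_sketch_mod
    ! Hypothetical illustration of an abstract differencing operator.
    implicit none
    private
    public :: bgrid_type, cgrid_type, ddx

    type :: bgrid_type
       real :: dx
    end type bgrid_type
    type :: cgrid_type
       real :: dx
    end type cgrid_type

    interface ddx                    ! one generic name, grid-specific methods
       module procedure ddx_bgrid, ddx_cgrid
    end interface

  contains

    function ddx_bgrid(grid, f) result(df)
      type(bgrid_type), intent(in) :: grid
      real,             intent(in) :: f(:,:)
      real :: df(size(f,1),size(f,2))
      ! centered difference (illustrative B-grid stencil, periodic in x)
      df = (cshift(f,1,dim=1) - cshift(f,-1,dim=1)) / (2.0*grid%dx)
    end function ddx_bgrid

    function ddx_cgrid(grid, f) result(df)
      type(cgrid_type), intent(in) :: grid
      real,             intent(in) :: f(:,:)
      real :: df(size(f,1),size(f,2))
      ! forward difference to the staggered point (illustrative C-grid stencil)
      df = (cshift(f,1,dim=1) - f) / grid%dx
    end function ddx_cgrid

  end module operator_sketch_mod

Model code then calls ddx(grid, field) regardless of the grid in use; supporting an additional grid type requires only adding a method to the generic interface rather than modifying the model numerics.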
The distributed_grid module has been benchmarked with a shallow water code on the Cray T90 and T3E. While there is some penalty for abstraction (about 20% on one processor), the module scales well (80% scaling on 25 PEs for a 125x125 grid on the T3E).
1.1.3 Interpolation Between Model Grids on Scalable Architectures
The exchange grid software, which is used to perform conservative interpolation between model grids in FMS, was rewritten to improve performance on scalable architectures. Only minor modifications to external interfaces were required, but communication patterns were changed to reduce bottlenecks. Additional interfaces were implemented to support disjoint longitude-latitude grids, in support of spectral models with hemispheric windows. The exchange grid software has been ported to NCEP (National Centers for Environmental Prediction) and IRI (International Research Institute for Climate Prediction), where it is being used as part of coupled model development at those institutions.
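The principle behind the exchange grid can be illustrated as follows: each exchange cell records the overlap area between one source-grid cell and one target-grid cell, and a field is remapped by area-weighted accumulation over the exchange cells, which conserves the global integral. The derived type and routine below are hypothetical and simplified to flattened one-dimensional indexing; they do not reflect the actual FMS interfaces.

  module xgrid_sketch_mod
    ! Hypothetical sketch of conservative remapping via an exchange grid.
    implicit none
    private
    public :: xcell_type, put_to_target

    type :: xcell_type
       integer :: i_src, i_tgt  ! indices of the overlapping source/target cells
       real    :: area          ! overlap area of the exchange cell
    end type xcell_type

  contains

    subroutine put_to_target(xcells, src, tgt_area, tgt)
      type(xcell_type), intent(in)  :: xcells(:)
      real,             intent(in)  :: src(:)      ! source-grid field
      real,             intent(in)  :: tgt_area(:) ! target cell areas
      real,             intent(out) :: tgt(:)      ! target-grid field
      integer :: n
      tgt = 0.0
      ! Accumulate area-weighted contributions from every exchange cell.
      do n = 1, size(xcells)
         tgt(xcells(n)%i_tgt) = tgt(xcells(n)%i_tgt) + &
              xcells(n)%area * src(xcells(n)%i_src)
      end do
      ! Normalize by the target cell areas to recover mean values.
      tgt = tgt / tgt_area
    end subroutine put_to_target

  end module xgrid_sketch_mod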
The distributed_grid module will be implemented for testing in the B-grid atmospheric model and the MOM4 ocean model.
Versions of the exchange grid overlap interfaces will be developed to support additional grid types required by GFDL modelers.
1.2 PHYSICAL PARAMETERIZATIONS, COMPONENT, AND COUPLED MODELS
The programs used for tropical storm analysis, the program that computes the Tibaldi/Molteni blocking index, and a number of other analysis programs have been rewritten as streamlined Fortran 90 modules.
An empirically based relative-humidity-threshold cloud parameterization scheme has been designed to closely resemble the diagnostic cloud parameterization used in the Experimental Prediction spectral GCM v197. This scheme calculates cloud fractions diagnostically using relative humidity, vertical velocity, and stability. Seven cloud types and three cloud vertical layer specifications are possible, each with its associated optical properties. A concise, modular scheme written in Fortran 90 has been completed for both the calculation of fractional cloud amounts and the handling of the cloud optical properties. Column tests give very good agreement with the non-modular v197 version.
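For reference, diagnostic schemes of this kind typically relate cloud fraction to the excess of relative humidity over a threshold. The quadratic form and the threshold argument below are generic illustrations of that idea, not the specific v197 formulation or coefficients.

  function diag_cloud_fraction(rh, rh_crit) result(cf)
    ! Illustrative relative-humidity-threshold cloud fraction: zero below the
    ! threshold, rising quadratically to one at saturation.
    real, intent(in) :: rh        ! relative humidity (0-1)
    real, intent(in) :: rh_crit   ! critical relative humidity for cloud onset
    real             :: cf        ! diagnosed cloud fraction (0-1)
    cf = ((max(rh, rh_crit) - rh_crit) / (1.0 - rh_crit))**2
    cf = min(cf, 1.0)
  end function diag_cloud_fraction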
1.2.3 Global Atmospheric Grid Point Model
Evaluation of the B-grid core is underway in several coupled model configurations. Extensive testing has been done using a 2.5 x 2.0 degree, 18-level version of the core coupled to models providing prescribed SSTs and sea ice. Additional tests have been made with horizontal resolutions up to 1.25 x 1.0 degrees and vertical resolutions of up to 50 levels. Initial testing has begun with a version of the core coupled to MOM3 with prescribed sea ice.
1.2.4 Dynamic/Thermodynamic Sea Ice Model
1.2.5 High Level Language Support for Coupled Models
Further improvements to the SIS sea ice model will continue along with tests of SIS in a variety of coupled model configurations.
The high level coupling language compiler will be completed and implemented to facilitate better coupled model design at GFDL.
A number of enhancements to the FMS diagnostic manager will be made to support more elaborate run-time diagnostics. Among the new features will be diagnostics for limited spatial domains, better spatial averaging facilities, and the ability to output diagnostics for particular times of day.
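The basic usage pattern for run-time diagnostics is sketched below, assuming the register_diag_field/send_data interfaces of the FMS diag_manager_mod; the module name, field name, and axis arguments are placeholders, and the argument lists are abbreviated.

  subroutine diag_sketch(axes, Time, temp)
    ! Minimal sketch of the diagnostic-manager usage pattern.
    use diag_manager_mod, only: register_diag_field, send_data
    use time_manager_mod, only: time_type
    implicit none
    integer,         intent(in) :: axes(3)     ! axis ids set up elsewhere
    type(time_type), intent(in) :: Time        ! current model time
    real,            intent(in) :: temp(:,:,:) ! field to be diagnosed
    integer, save :: id_temp = -1
    logical, save :: first_call = .true.
    logical       :: used

    if (first_call) then
       ! A non-positive id means the field was not requested in the
       ! diagnostics table, in which case send_data calls can be skipped.
       id_temp = register_diag_field('atmos_sketch', 'temp', axes, Time, &
                                     'temperature', 'deg_K')
       first_call = .false.
    end if

    ! Hand the current field to the diagnostic manager, which applies the
    ! time averaging and output frequency specified in the run-time table.
    if (id_temp > 0) used = send_data(id_temp, temp, Time)
  end subroutine diag_sketch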
Evaluation of the current FMS version of the diagnostic cloud parameterization scheme will continue. The standard parameterization and a new cloud anomaly parameterization for marine stratus cloud fraction will both be incorporated in FMS. Comparative coupled model integrations with the Klein prognostic and Gordon/Slingo diagnostic clouds will be performed.
1.3.2 Software Version Control for the Flexible Modeling System
FMS uses GNU's CVS (Concurrent Versions System) for version control. The FMS source tree is stored in a single CVS repository that is accessible in its entirety to all users at GFDL and in part to external users. Considerable effort has been made to create a repository structure and policy that will serve the needs of FMS well into the future. The repository is split functionally into "shared" code, consisting of utilities common to all FMS codes; "component-model" code, consisting of the core code for each of the dynamical cores; and "coupler" code, consisting of the code, such as the surface flux calculations and the main drivers, that couples the component models together. The repository policy covers naming conventions, the requirements for introducing "branch" or "trunk" code into the repository, and the coordination of the quarterly FMS release schedule (section 1.3.3) with the repository.
The releases are named by an alphabetical sequence of city names: Antwerp (5/2000), Bombay (8/2000), Calgary (11/2000), etc. The Antwerp release consisted mainly of parallelized "benchmark" code for the GFDL computer procurement. The Bombay release featured coordinated atmospheric dynamical cores, and the introduction of the MOM3 ocean model, the LaD land surface model, the SIS sea-ice model, and enhanced web documentation.
In conjunction with the Bombay release, an FMS workshop was held at GFDL to familiarize the laboratory with the capabilities of FMS. One and a half days were devoted to a series of presentations ranging from overviews for casual users to detailed studies of low-level support modules. Representatives from nearly a dozen outside modeling institutions were invited to attend the workshop as observers, and participated in an afternoon session discussing possible ways in which GFDL could interact with the outside world using FMS.
1.3.4 Web Page and Documentation Support
1.3.5 Optimization Team, Migration, and Evaluation
Continued coordination of development of different dynamical cores and component models should increase the amount of shared code and reduce maintenance burdens.
Improved coordination of run-time scripts and data archiving facilities will be explored to avoid redundant effort between scientific groups in these areas.
Quarterly releases of FMS will continue. Several major new features are planned for inclusion, including a comprehensive biosphere model, a standardized coupled model, and the new radiation code of the atmospheric processes group.