CHAMMP NEWSLETTER, VOL. 2 NUMBER 2                              Jun. 1992

Dave Bader, (301)903-4328, FTS 233-4328
e-mail: bader@oerv01.er.doe.gov (my preference for all who have INTERNET)
        or D.BADER on OMNET

MASSIVELY-PARALLEL MACHINES FOR CHAMMP - An update

CHAMMP will be allocated 20-25% of the resources available on the massively
parallel supercomputers at the two DOE High Performance Computing Research
Centers.  The CM-5 (minus vector pipes) has arrived at Los Alamos' Advanced
Computing Lab, and the Intel Paragon has been or shortly will be delivered
to Oak Ridge.  CHAMMP usage of these resources will be coordinated through
Bob Malone at Los Alamos and John Drake at Oak Ridge.  Both centers are new;
users should expect both large and small problems and should be willing to
help shake out these prototype systems.

CHAMMP PILOT PROJECT VOLUME

The CHAMMP Pilot Project report has been mailed out to over 700 individuals.
Additional copies will be sent to report authors as soon as the secretary
here can find time to package and ship them.  Individual copies can be
obtained from Dave Bader.

NEW SCIENCE TEAM MEMBERS

Dr. Roni Avissar - Rutgers Univ.
   "An Evaluation of the Appropriate Land-Surface Resolution for Climate
   Models"
Dr. Tommy Jensen - Colorado State Univ.
   "Parameterizations in High Resolution Isopycnal Wind-Driven Ocean Models"
Dr. Donald Johnson - Univ. of Wisconsin
   "Modeling of Hydrologic and Transport Processes in Relation to Climate
   Change"
Dr. James Hack - NCAR
   "Improvement in Moist and Radiative Processes in Highly Parallel
   Atmospheric General Circulation Models"
Dr. John Anderson - Univ. of Wisconsin
   "An Experimental Regional Scale Climate Simulation Laboratory"
Dr. James Kinter - Univ. of Maryland
   "Variability and Predictability of the Coupled Ocean-Atmosphere-Land
   Climate System"
Dr. William Gutkowski - Iowa State Univ.
   "Modeling Land-Surface/Atmospheric Dynamics for CHAMMP"

These Science Team awards were selected from the group of proposals reviewed
in 1991.

FIRST SCIENCE TEAM MEETING

The first CHAMMP Science Team meeting was held in March in Las Vegas, NV.
Mike MacCracken's staff at LLNL did an excellent job in making meeting
arrangements for a successful gathering.  All were exposed to the breadth of
Science Team projects as well as the capabilities of a fine team of
investigators.  We hope to follow up the meeting with a series of small,
informal workshops of Science Team sub-groups over the next year for more
in-depth discussion of the many science issues that were presented.  Mike
MacCracken is preparing a summary notebook for Science Team participants and
others who may be interested.

MODEL DEVELOPMENT EFFORT MEETING

Bob Malone held a very successful team-building meeting for the MDE
participants and collaborators in April at Santa Fe.  Plans for the new
HPCRCs were presented, as well as updates on the components of the MDE.

PARALLEL PROGRAMMING TOOLS -- Contributed by John Drake, ORNL

There have been several questions about parallel programming tools that are
available for distributed memory MIMD computers.  Three freely available
tools will be discussed briefly in this note: PICL, PVM, and PCN.

PICL is a message passing library in use on a variety of MIMD parallel
computers.  PICL's usefulness comes from a uniform message passing syntax,

     call send0( buffer, length, message_type, proc_id )
     call recv0( buffer, length, message_type )

and from its ability to gather execution trace data.
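As a rough illustration (a sketch only, not taken from the PICL manual), the
fragment below shows how these two calls might be used to pass a buffer from
node 0 to node 1.  The open0 and close0 calls, their argument lists, and the
use of a byte count for the length argument are assumptions that should be
checked against the PICL documentation noted at the end of this article.

c     Hedged PICL sketch: only send0 and recv0 appear in the note above;
c     open0/close0 and the byte-count length argument are assumed.
      program picdmo
      integer nprocs, me, host, i
      integer msgtyp
      real buf(100)
      parameter (msgtyp = 10)

c     open0 is assumed to initialize PICL and return the number of
c     nodes, this node's id, and the host id
      call open0( nprocs, me, host )

      if (me .eq. 0) then
c        node 0 fills the buffer and sends it to node 1
         do 10 i = 1, 100
            buf(i) = real(i)
   10    continue
         call send0( buf, 100*4, msgtyp, 1 )
      else if (me .eq. 1) then
c        node 1 receives a message of the matching type
         call recv0( buf, 100*4, msgtyp )
      endif

c     close0 is assumed to release the nodes and end the PICL run
      call close0( 0 )
      end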
Indeed, the acronym PICL stands for Portable Instrumented Communication
Library, and the instrumentation functions provide performance monitoring
and tuning data.  The trace output can be displayed and interactively
perused using an X-windows program called ParaGraph.  PICL is specifically
designed for a tightly coupled network of processors and a message passing
programming paradigm with either Fortran or C.

PVM is another tool that supports a message passing programming paradigm,
but it is designed for heterogeneous, loosely coupled networks of
processors.  PVM provides a workstation interface to a user-defined Parallel
Virtual Machine.  A CRAY, an Intel Paragon, a CM-5, and a set of high
performance workstations can be joined in a single parallel machine, with
network communication and machine-specific formatting considerations handled
by the PVM routines:

     call finitsend()
     call fputndfloat( buffer, length, info )
     call fsnd( component_name, instance, message_type, info )
     call frcv( message_type )
     call fgetndfloat( buffer, length, info )

Some increased complexity of the send/receive is necessary for the increased
generality of a heterogeneous collection of processors.  PVM communication
uses UNIX sockets, and thus each processor of the virtual machine must have
a network IP address.  The individual processors of the Intel iPSC/860 or
the CM-5 do not have IP addresses, and thus PVM is not currently a
replacement message passing system between the nodes of a tightly coupled
machine.
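As a rough illustration only, the sketch below strings the quoted calls
together to send an array to another component of the virtual machine and
receive a reply.  The component name 'worker', the instance number, and the
message types are placeholders; enrolling this process in the virtual
machine, starting the worker, and error checking are omitted and should be
taken from the PVM documentation.

c     Hedged PVM sketch using only the Fortran calls quoted above; the
c     component name, instance, and message types are placeholders.
      subroutine pvmswp( a, b, n )
      integer n, info
      real a(n), b(n)

c     pack the array a into a fresh send buffer and ship it to instance
c     0 of the hypothetical component 'worker' as message type 1
      call finitsend()
      call fputndfloat( a, n, info )
      call fsnd( 'worker', 0, 1, info )

c     block until a message of type 2 arrives, then unpack it into b
      call frcv( 2 )
      call fgetndfloat( b, n, info )

      return
      end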
Though the message passing programming paradigm is supported by both PICL
and PVM, the tools are in use in quite different environments and with
different purposes.  Both tools are evolving and offer many high-level
communication functions.  PVM has been used as a base for the development of
an object-oriented parallel language, PVM++, a graphical performance monitor
compatible with ParaGraph, as well as a graphical programming language,
HeNCE.

PCN is an innovative high-level parallel programming language with an
interface to FORTRAN and C.  Program Composition Notation (PCN) is used to
define the relationship of procedures with three primitive composition
operators: parallel, sequential, and choice composition.  The message
passing required to support the execution of a task is hidden from the user;
the message passing is implicit in the decomposition of the data structures
and the mapping of processes to processors.  For exploring and comparing
parallel algorithms, PCN offers flexibility in the specification of the data
decomposition and the process-to-processor mapping.  To tune a parallel PCN
code, trace data can be examined interactively with the X-windows display
tools GAUGE and UPSHOT.  A network version of the PCN system is also
available which uses UNIX sockets to communicate between nodes.

All three tools are currently being used within the CHAMMP community.  For
example, PICL is the underlying message passing system used in the MIMD
implementation of the Parallel Community Climate Model (PCCM2).  PVM is
being used as the basis of a network archiving system, and PCN has been used
for a parallel shallow water code as well as a parallel implementation of
the mesoscale model MM4.

Further information on PICL and PVM can be obtained by sending a mail
message to netlib@ornl.gov with one or both of the lines

     send index from pvm
     send index from picl

The PCN system is available by anonymous FTP from directory pub/pcn at
info.mcs.anl.gov.

CONTRIBUTIONS STILL ENCOURAGED

Anyone who wants to contribute to the newsletter is encouraged to do so.

CHAMMP Contacts

Dave Bader, CHAMMP Program Director (note corrected e-mail address)
   (301)903-4328, FTS 233-4328 (NOTE NEW COMMERCIAL PREFIX)
   bader@oerv01.er.doe.gov

Mike MacCracken, CHAMMP Chief Scientist
   (510)422-1826, FTS 532-1826 (NOTE NEW AREA CODE)
   mmaccracken@llnl.gov

Bob Malone, CHAMMP Director of Model Development
   (505)667-5925, FTS 843-5925
   rcm@lanl.gov