Argonne National Laboratory Mathematics and Computer Science Division

MCS Software

MCS has long been a leader in the development of robust, reliable software. As early as the 1970s, Argonne spearheaded a series of software engineering projects that culminated in the release of EISPACK, LINPACK, FUNPACK, and MINPACK. Today, MCS researchers are continuing this tradition, with an added emphasis on portability and scalability. Thousands of researchers in academia and industry use our software in applications that include computational chemistry, protein structure, vortex dynamics, astrophysics, climate modeling, mathematics and logic, CFD, and reservoir simulation.

  • ADLB – The Asynchronous Dynamic Load Balancing (ADLB) library is an MPI-based software library designed to help rapidly build scalable parallel programs. It provides a master/worker system with a put/get API for task descriptions, thus allowing workers to add work dynamically to the system. The library has been used as an execution engine for complicated applications such as Green’s function Monte Carlo and for higher-level “many-task” programming models.
  • Access Grid Toolkit – The Access Grid Toolkit enables users to engage in rich collaborations that bring together people, data, and grid computing resources. The 2.0 version of the toolkit includes streamlined user interfaces, robust middleware, and low-level services that enable participants to share experiences through digital media.
  • ADIC – ADIC is a tool for the automatic differentiation of programs written in ANSI C. Given the source code and a user's specification of dependent and independent variables, ADIC generates an augmented C code that computes the partial derivatives of all of the specified dependent variables with respect to all of the specified independent variables in addition to the original result.
  • FOAM – The Fast Ocean-Atmosphere Model (FOAM) is a coupled ocean/atmosphere model that incorporates all the physics needed for multicentury simulations. It combines an improved ocean model formulation with a reduced-resolution atmosphere model to cut computational requirements by a factor of ten relative to similar models, and it uses parallel processing techniques to run on parallel platforms that are more cost-effective than the vector multiprocessors traditionally used for climate models. A 500-year simulation performed with this model has yielded significant scientific results.
  • Globus Toolkit – The Globus Alliance provides software tools that make it easier to build computational Grids and Grid-based applications. These tools are collectively called the Globus Toolkit. The Globus Toolkit is used by many organizations to build Grids that can support their applications. The open source Globus Toolkit includes tools and libraries for solving problems in the following areas: security, communication, information infrastructure, fault detection, resource management, portability, and data management.
  • Jumpshot – Jumpshot is a profiling tool that provides log files, communication statistics, and graphical output of the results. An enhanced version handles larger numbers of processes and provides visualization of parallel I/O activity. Jumpshot is distributed with MPICH.
  • MCT – The Model Coupling Toolkit (MCT) is a software library for constructing parallel coupled models from individual parallel models. MCT is designed for high performance and portability and offers a programming model similar to MPI. Core services include component registration, decomposition description, indexable data storage, parallel data transfer, and interpolation.
  • MINOTAUR – MINOTAUR is an open-source toolkit for solving mixed-integer nonlinear optimization problems. It provides several solvers that implement state-of-the-art algorithms for MINLP. The MINOTAUR library can also be used to customize algorithms to exploit specific problem structures.
  • MPICH – MPICH2 is a high-performance, widely portable implementation of the Message Passing Interface standard (both MPI-1 and MPI-2). It seeks to provide an MPI implementation that efficiently supports different computational and communication platforms, including commodity clusters, high-speed networks, and proprietary high-end computing systems. It also seeks to enable cutting-edge research in MPI through an easy-to-extend modular framework for other derived implementations.
  • NEOS Server – The NEOS Server 3.0 is the first network-enabled problem-solving environment for a wide class of applications in business, science, and engineering. Included are state-of-the-art solvers in integer programming, nonlinearly constrained optimization, bound-constrained optimization, unconstrained optimization, linear programming, stochastic linear programming, complementarity problems, linear network optimization, and semidefinite programming.
  • OpenAnalysis – The goal of the OpenAnalysis toolkit is to separate analysis from the intermediate representation in a way that allows the orthogonal development of compiler infrastructures and program analysis. Separation of analysis from specific intermediate representations will enable faster development of compiler infrastructures, the ability to share and compare analysis implementations, and in general quicker breakthroughs and evolution in the area of program analysis.
  • PETSc – PETSc, the Portable, Extensible Toolkit for Scientific Computation, is a suite of uniprocessor and parallel codes for solving large-scale problems modeled by partial differential equations. PETSc employs the MPI standard for all message-passing communication, and the code is written in a data-structure-neutral manner to enable easy reuse and flexibility. PETSc has been used for a variety of large-scale applications, including transonic flow, modeling vortex dynamics in high-temperature superconductors, parallelization of a 3D magnetostatics code, and the study of compressible flows at low and transonic Mach numbers.
  • PVFS – The Parallel Virtual File System (PVFS) project brings state-of-the-art parallel I/O concepts to production parallel systems. PVFS is designed to scale to petabytes of storage and provide access rates at hundreds of gigabytes per second. It also continues to be used as a platform for active research in the parallel I/O field.
  • ROMIO – ROMIO is a high-performance, portable implementation of MPI-IO. ROMIO includes almost everything defined in the MPI-2 I/O chapter and is optimized for noncontiguous access patterns, which are common in parallel applications. It also has an optimized implementation of collective I/O, an important optimization in parallel I/O.
  • TAO – The Toolkit for Advanced Optimization (TAO), developed under the DOE2000 program, focuses on the design and implementation of component-based optimization software for the solution of large-scale optimization applications. Our approach is to exploit numerical abstractions in large-scale optimization software design, so that we can leverage external parallel computing infrastructure (for example, communication libraries and visualization packages) and linear algebra tools in the development of optimization algorithms.
  • ZeptoOS – ZeptoOS is a research project studying efficient, customized Linux kernels for petascale architectures with 10,000 to 1 million CPUs. Operating system and run-time software are strained by ultra-scale machines, and a variety of research topics open up at that scale. Architectures such as IBM's BlueGene and Cray's XT3 are on the path toward petaflops and beyond, and they make excellent testbeds for computer science explorations. ZeptoOS releases Linux kernel software, performance tools, and benchmarking suites for kernels. The ZeptoOS project is a collaboration between Argonne National Laboratory and the University of Oregon.
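The put/get task model described in the ADLB entry above can be sketched with Python threads and a shared queue. This is an illustration of the master/worker pattern only, not the ADLB API; the sentinel shutdown and the halving "subtask" are invented for the example.

```python
import queue
import threading

# Shared pools standing in for ADLB's distributed task space.
tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        task = tasks.get()        # "get" a task description
        if task is None:          # sentinel: shut down
            tasks.task_done()
            return
        if task > 1:
            tasks.put(task // 2)  # workers may "put" new work dynamically
        results.put(task)
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

for seed in (8, 5):               # the master seeds initial work
    tasks.put(seed)
tasks.join()                      # block until the task pool drains
for _ in workers:
    tasks.put(None)
for t in workers:
    t.join()

total = 0
while not results.empty():
    total += results.get()
print(total)                      # 8+4+2+1 + 5+2+1 = 23
```

Because each worker puts its child task before calling `task_done`, the unfinished-task count never drops to zero while a chain of subtasks is still in flight, so `tasks.join()` is a correct completion test.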
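ADIC works by source transformation, but the underlying idea of the derivative code it generates — propagating a derivative alongside every value — can be mimicked at run time with dual numbers. A minimal forward-mode sketch (the function `f` is a made-up example, not ADIC output):

```python
class Dual:
    """A value together with its derivative (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x, y):
    return 3 * x * x + x * y      # stands in for the original code

# Partial derivative of f with respect to x at (x, y) = (2, 5):
x = Dual(2.0, 1.0)   # independent variable: seed derivative 1
y = Dual(5.0, 0.0)   # held constant for this derivative
out = f(x, y)
print(out.val, out.dot)           # f = 22, df/dx = 6x + y = 17
```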
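The kind of solver at PETSc's core can be illustrated with a matrix-free conjugate gradient method applied to a 1-D Poisson problem. This is a pure-Python sketch of the algorithm class, not PETSc's API:

```python
def cg(matvec, b, tol=1e-16, max_iter=500):
    """Conjugate gradient for SPD systems; only a matvec is needed."""
    x = [0.0] * len(b)
    r = b[:]                              # residual of the zero guess
    p = r[:]
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rr / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

# 1-D Poisson problem -u'' = 1, discretized on n interior points:
n = 50
def laplacian(v):
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append(2.0 * v[i] - left - right)
    return out

b = [1.0 / (n + 1) ** 2] * n              # f = 1, scaled by h^2
u = cg(laplacian, b)
residual = max(abs(bi - yi) for bi, yi in zip(b, laplacian(u)))
print(residual < 1e-6)
```

Note that `cg` never forms a matrix: it sees the operator only through `matvec`, the same data-structure-neutral design principle the PETSc entry describes.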
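The collective-I/O optimization mentioned in the ROMIO entry rests on aggregating many small, noncontiguous requests into a few contiguous file accesses. A toy sketch of that merging step (the offsets are hypothetical, and real two-phase I/O also redistributes data among processes):

```python
def merge_requests(ranges):
    """Merge (offset, length) requests that touch or overlap."""
    out = []
    for off, ln in sorted(ranges):
        if out and off <= out[-1][0] + out[-1][1]:
            # extend the previous contiguous region
            end = max(out[-1][0] + out[-1][1], off + ln)
            out[-1] = (out[-1][0], end - out[-1][0])
        else:
            out.append((off, ln))
    return out

# Interleaved requests from two processes collapse to two accesses:
reqs = [(0, 4), (8, 4), (4, 4), (12, 4), (20, 4)]
merged = merge_requests(reqs)
print(merged)                             # [(0, 16), (20, 4)]
```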

Archived Efforts

  • ADIFOR – ADIFOR is a tool for the automatic differentiation of Fortran 77 programs. Given a Fortran 77 source code and a user's specification of dependent and independent variables, ADIFOR will generate an augmented derivative code that computes the partial derivatives of all of the specified dependent variables with respect to all of the specified independent variables in addition to the original result.
  • dgsol – The dgsol code is designed for the solution of distance geometry problems with lower and upper bounds on distance constraints. The dgsol code uses only a sparse set of distance constraints, while other algorithms tend to work with a dense set of constraints either by imposing additional bounds or by deducing bounds from the given bounds. We have used the code successfully to study protein structures: our approach based on dgsol is significantly more reliable and efficient than multistarts with an optimization code.
  • DSDP – DSDP implements an interior-point method for semidefinite programming. It provides primal and dual solutions, exploits structure in the data, and has relatively low memory requirements for an interior-point method. The dual-scaling algorithm implemented in this package has a convergence proof and worst-case polynomial complexity under mild assumptions on the data. Furthermore, the solver offers scalable parallel performance for large problems. Some of the most popular applications of semidefinite programming and linear matrix inequalities are model control, structural design, and relaxations of combinatorial and global optimization problems.
  • HEIGHTS – The HEIGHTS (High Energy Interaction with General Heterogeneous Target Systems) package is used to simulate the physics of intense energy and power deposition on targets. The package integrates codes for diverse physical processes -- MHD, magnetic diffusion, thermal conduction, radiation transport, and hydrodynamics -- and includes a graphical interface for interactive calculations and presentation of information.
  • ICFS – ICFS is an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. The factorization limits additional fill through a memory parameter p. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. The results also show that, in contrast with drop-tolerance strategies, this approach is more stable in terms of the number of iterations and memory requirements.
  • MM5 -- Distributed Memory Parallel Version – MM5 has been extended for use on distributed memory parallel computers such as the IBM SP, Cray T3E, Fujitsu VPP, clusters of PCs and workstations, and distributed-memory clusters of multiprocessor machines. It also provides an alternative to shared-memory parallel execution on distributed shared-memory machines such as the Silicon Graphics Origin 2000, the Hewlett Packard SPP, and others. This version of MM5 was developed by Argonne in collaboration with NCAR.
  • MPI – MPI (Message-Passing Interface) is a specification for the user interface to message-passing libraries for parallel computers. It was designed by a broadly based group of parallel computer vendors, library writers, and application developers to serve as a standard. MPI can be used to write programs for efficient execution on a wide variety of parallel machines, including massively parallel supercomputers, shared-memory multiprocessors, and networks of workstations.
  • PCx – PCx is an interior-point predictor-corrector linear programming package. The code has been developed at the Optimization Technology Center, a joint venture of Argonne National Laboratory and Northwestern University.
  • RSL – RSL is a runtime system library for implementing regular-grid models with nesting on distributed-memory parallel computers. RSL provides support for automatically decomposing multiple model domains and for redistributing work between processors at run time for dynamic load balancing. The interface to RSL supports Fortran77 and Fortran90. RSL has been used to parallelize the NCAR/Penn State Mesoscale Model.
  • Scalable UNIX Tools – Our Scalable Unix Tools project, which offers parallel, scalable versions of common Unix commands for parallel machines with Unix on each node, became the first official project of the Ptools Consortium. This organization seeks to promote de facto standards for parallel programming tools.
  • TRON – TRON is a trust region Newton method for the solution of bound-constrained optimization problems. TRON uses a gradient projection method to generate a Cauchy step, a preconditioned conjugate gradient method with an incomplete Cholesky factorization to generate a direction, and a projected search to compute the step. The use of projected searches, in particular, allows TRON to examine faces of the feasible set by generating a small number of minor iterates, even for problems with a large number of variables. As a result TRON is remarkably efficient.
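The distance geometry problem dgsol targets can be shown in miniature: recover 2-D coordinates from a sparse set of pairwise distances by minimizing squared violations. This toy example uses plain gradient descent, not dgsol's more robust continuation approach; the distances and starting points are invented:

```python
import math

# Sparse distance constraints for three points (a 3-4-5 triangle):
dists = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]     # initial guess

def error():
    """Sum of squared distance violations."""
    return sum((math.dist(pts[i], pts[j]) - d) ** 2
               for (i, j), d in dists.items())

step = 0.01
for _ in range(5000):
    grad = [[0.0, 0.0] for _ in pts]
    for (i, j), d in dists.items():
        dij = math.dist(pts[i], pts[j])
        if dij == 0.0:
            continue                            # gradient undefined here
        coef = 2.0 * (dij - d) / dij
        for k in range(2):
            g = coef * (pts[i][k] - pts[j][k])
            grad[i][k] += g
            grad[j][k] -= g
    for i in range(len(pts)):
        for k in range(2):
            pts[i][k] -= step * grad[i][k]

print(error() < 1e-3)    # all three constraints are (nearly) satisfied
```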
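The idea behind ICFS can be illustrated with the simplest variant, zero-fill incomplete Cholesky IC(0), which keeps the factor's sparsity identical to the lower triangle of A; ICFS generalizes this by allowing p additional fill entries per column. A small sketch on a dense-array representation:

```python
import math

# Small SPD tridiagonal test matrix:
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 5.0, 2.0, 0.0],
     [0.0, 2.0, 6.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
n = len(A)

L = [[0.0] * n for _ in range(n)]
for j in range(n):
    s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
    L[j][j] = math.sqrt(s)
    for i in range(j + 1, n):
        if A[i][j] == 0.0:
            continue          # IC(0): no fill outside A's pattern
        s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
        L[i][j] = s / L[j][j]

# A tridiagonal matrix incurs no fill, so here L*L^T recovers A exactly;
# on general patterns the product only approximates A, and the factor is
# used as a preconditioner for conjugate gradients.
M = [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
     for i in range(n)]
err = max(abs(M[i][j] - A[i][j]) for i in range(n) for j in range(n))
print(err < 1e-12)
```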
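The regular-grid decomposition that RSL automates starts from block partitioning of index ranges across processors. A minimal one-dimensional sketch (the function name is hypothetical; RSL additionally handles nesting and run-time rebalancing):

```python
def block_bounds(n, p, rank):
    """Half-open index range [lo, hi) owned by `rank` of `p` processes."""
    base, extra = divmod(n, p)
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# 10 grid points over 3 processes: remainders go to the low ranks.
bounds = [block_bounds(10, 3, r) for r in range(3)]
print(bounds)                             # [(0, 4), (4, 7), (7, 10)]
```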
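The gradient projection step in the TRON entry can be illustrated on a toy bound-constrained quadratic: move along the negative gradient, then project back into the box [l, u]. TRON itself combines this Cauchy step with preconditioned conjugate gradients and projected searches; this sketch shows only the projection mechanism, on an invented objective:

```python
def clip(v, lo, hi):
    return max(lo, min(hi, v))

lo, hi = [0.0, 0.0], [2.0, 2.0]           # box constraints
x = [1.0, 1.0]
alpha = 0.1

def grad(x):
    # f(x) = (x0 - 3)^2 + (x1 + 1)^2: the unconstrained minimum (3, -1)
    # lies outside the box, so the solution sits on the boundary.
    return [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]

for _ in range(200):
    g = grad(x)
    x = [clip(x[i] - alpha * g[i], lo[i], hi[i]) for i in range(2)]

print(x)    # converges to the bound-constrained solution [2.0, 0.0]
```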

The Office of Advanced Scientific Computing Research | UChicago Argonne LLC | Privacy & Security Notice | Contact Us