High Performance Computing

Mark Govett, Section Chief

  • Boulder HPCS

    Over the last 25 years, supercomputing has evolved from Cray vector machines to a wide variety of commodity-based and vendor-specific CPU systems. CPUs have grown from a single processor per chip to multi-core chips containing eight or more CPU cores. Modern systems are diverse: they can be shared memory, distributed memory, or a hybrid of both. We are currently exploring accelerators for use in our weather models.
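
    To make these memory models concrete, the sketch below (a hypothetical C fragment, not code from any of our models) combines MPI and OpenMP in the common hybrid style: MPI ranks communicate across distributed-memory nodes, while OpenMP threads share memory within each node.

        /* Hybrid distributed + shared memory sketch.
           Build (typical): mpicc -fopenmp hybrid.c -o hybrid
           Run:             mpirun -n 2 ./hybrid              */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            int provided, rank, nranks;
            /* Request thread support so MPI and OpenMP can coexist. */
            MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            #pragma omp parallel
            {
                /* Distributed memory across ranks; shared memory
                   among the threads inside each rank. */
                printf("rank %d of %d, thread %d of %d\n",
                       rank, nranks, omp_get_thread_num(),
                       omp_get_num_threads());
            }

            MPI_Finalize();
            return 0;
        }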

  • Fine-Grain Computing

    A new generation of High-Performance Computing (HPC) has emerged, referred to as Massively Parallel Fine Grain (MPFG). "Massively parallel" refers to systems containing tens of thousands to millions of processing cores. "Fine grain" refers to loop-level parallelism that must be exposed in the application so that thousands to millions of arithmetic operations can execute every clock cycle. Two general classes of MPFG chips are available: Many Integrated Core (MIC) from Intel and Graphics Processing Units (GPUs) from NVIDIA and AMD. In contrast to the latest-generation Intel Haswell CPUs, which offer up to 36 cores, these MPFG chips contain hundreds to thousands of processing cores. They provide 10-20 times greater peak performance than CPUs, and they appear in systems that increasingly dominate the Top500 list of the world's fastest supercomputers.
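
    To illustrate the loop-level parallelism MPFG chips require, here is a hypothetical C fragment (not model code) using OpenACC directives, one common way to target such hardware: because the loop iterations are independent, an OpenACC-aware compiler can map each iteration to a separate accelerator core, while other compilers simply ignore the pragma.

        #include <stdio.h>

        #define N 1000000

        int main(void) {
            /* static: keeps the large arrays off the stack. */
            static float u[N], f[N];
            for (int i = 0; i < N; i++) { u[i] = 0.0f; f[i] = 1.0f; }

            /* No cross-iteration dependence, so all N iterations
               may execute concurrently on the accelerator. */
            #pragma acc parallel loop copy(u[0:N]) copyin(f[0:N])
            for (int i = 0; i < N; i++)
                u[i] += 0.5f * f[i];

            printf("u[0] = %f\n", u[0]);
            return 0;
        }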

    Learn more about Fine-Grain Computing >>

  • Parallel Programming and SMS

    The development of efficient Message Passing Interface (MPI) libraries, supported by most vendors, has improved the portability of models on distributed-memory computers. However, MPI is sufficiently low level that it can be difficult to use. To speed code parallelization, we developed a high-level tool called the Scalable Modeling System (SMS) that simplifies the work required to port and run NWP models on HPC systems while offering good scalable performance.
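
    To see the kind of low-level bookkeeping SMS is designed to hide, the hypothetical C sketch below decomposes a 1-D domain across ranks and exchanges one-point halos by hand with MPI; SMS produces equivalent communication from a few high-level directives.

        /* Build: mpicc halo.c -o halo
           Run:   mpirun -n 4 ./halo   */
        #include <mpi.h>
        #include <stdio.h>

        #define NLOC 8   /* interior points owned by each rank */

        int main(int argc, char **argv) {
            int rank, nranks;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &nranks);

            /* Local slice with one halo cell on each side. */
            double x[NLOC + 2];
            for (int i = 0; i < NLOC + 2; i++) x[i] = rank;

            /* MPI_PROC_NULL turns boundary exchanges into no-ops. */
            int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
            int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

            /* Send right edge to the right neighbor; fill left halo. */
            MPI_Sendrecv(&x[NLOC], 1, MPI_DOUBLE, right, 0,
                         &x[0],    1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* Send left edge to the left neighbor; fill right halo. */
            MPI_Sendrecv(&x[1],        1, MPI_DOUBLE, left,  1,
                         &x[NLOC + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d halos: left=%g right=%g\n",
                   rank, x[0], x[NLOC + 1]);
            MPI_Finalize();
            return 0;
        }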

    Learn more about Parallel Programming and SMS >>