Centers News

Composite image of the SGI logo with a circuitry background.

The Army Research Laboratory DoD Supercomputing Resource Center (ARL DSRC) has recently partnered with Intel and SGI to deploy a 28-node Intel Knights Landing (KNL) testbed cluster in the DoD High Performance Computing Modernization Program (HPCMP). This testbed gives HPCMP users the opportunity to begin refactoring their applications in preparation for a larger, 540-node KNL partition on a production system to be installed at the Engineer Research and Development Center DoD Supercomputing Resource Center (ERDC DSRC) in 2017. The testbed has an Intel Omni-Path interconnect as well as Mellanox EDR InfiniBand interconnects, and is available for porting and benchmarking applications. Production computations are not currently supported due to frequent configuration changes.

Contact the HPC Helpdesk for more information.

Composite image of Armstrong and Shepard.

The Navy DSRC has established the Flexible Home & Job Management project in order to provide the users of the Cray XC30 systems Armstrong and Shepard with additional capabilities. The Center hopes that this project will lead to a better overall user experience for computing on the separate systems.

The Flexible Home & Job Management project consists of three layers. The first layer is a central login/job submission point. Users will be able to log in to a central set of nodes for managing data and submitting jobs. The second layer leverages Altair's PBSPro capability known as peer scheduling to allow jobs to flow from the central job submission point to Armstrong or Shepard and between Armstrong and Shepard themselves. The third layer is a centralized storage component that will provide users with a common home directory for both systems.

For more information, please see the Flexible Home & Job Management Guide available on the Navy DSRC website.

User Advocacy Group ... pleading your cause.

In the High Performance Computing Modernization Program, our goal is to provide our users with nothing less than a world-class HPC experience. Providing such an experience is a world-class challenge, and we employ talented and dedicated experts who strive for that goal every day. In trying to meet this challenge, however, we've realized one very important thing. Keeping up with the relentless pace of science, technology, and thousands of dedicated researchers is a herculean task, one that can only be met by drawing upon the collective experience of the most intelligent, talented, creative, and driven people in the world... our users!

Because no one understands users' problems better than users themselves, the HPCMP chartered the User Advocacy Group (UAG) in 2002 to provide a way for users to influence policies and practices of the program by helping HPCMP leadership to better understand their changing needs. The UAG serves as an advocacy group for all HPCMP users, advising the Program Office on policy and operational matters from the user perspective. The UAG seeks to encourage efficient and effective use of HPCMP resources by fostering cooperation and teamwork among users and all program participants, and providing a forum for discussion and exchange of ideas.

As a user, your loudest voice within the program is the UAG. UAG representatives from across the DoD, many of whom are experienced users, provide a wealth of experience and deep technical understanding to HPCMP leadership, helping to improve the user experience for everyone.

You are invited to participate in this effort by contacting your UAG representative with your needs, concerns, and ideas. To find your representative, see the UAG roster on the User Dashboard. While you're there, you can also view the UAG Charter and the Minutes of past UAG meetings to gain a better understanding of how the UAG works.

We look forward to hearing from you.

SRD Logo and photo of user using SRD

SRD v2.1.1 has been released. This is an optional upgrade, and existing v2.1 users are unaffected. Visit the What's New page for an overview of what changed.

SRD allows any active HPCMP researcher to launch a GNOME desktop from a DSRC Utility Server, with the remote desktop displayed on the user's client platform (Linux, Mac, or Windows). With this desktop, the user can run any graphical software installed on the Utility Server - MATLAB, Tecplot, EnSight, etc. The SRD utility is a smart combination of hardware and software, highly optimized for graphics processing, remote rendering, and efficiently streaming the results back to your desktop. The net result: rapid 3-D visualization at the price of a 2-D data stream. Visit the SRD home page for full instructions on getting started.

CCAC Name Change Announcement

As of May 18th, the HPC Help Desk has replaced CCAC.

To more accurately describe the services provided, the Consolidated Customer Assistance Center (CCAC) has been renamed the "HPC Help Desk." This change is effective immediately. The HPC Help Desk ticket system has moved from its old location to https://helpdesk.hpc.mil, and the email address for contacting the HPC Help Desk is now help@helpdesk.hpc.mil. The old URL and email address will remain available with auto-forwarding and redirection for a period of time, but you are encouraged to update your bookmarks and contacts now. The phone number, 1-877-222-2039, will remain the same.

Rendering of Excalibur

ARL's newest HPC system, Excalibur, has over 101,000 cores and a theoretical peak speed of 3.7 petaFLOPS.

Recently acquired by the Department of Defense (DOD) High Performance Computing Modernization Program (HPCMP), the Cray XC40 supercomputer located at the Army Research Laboratory DOD Supercomputing Resource Center (ARL DSRC) debuted at number 19 on the November 2014 TOP500 list. The TOP500 list ranks the world's most powerful supercomputers and is published in June and November each year. The ARL DSRC recently took delivery of this new HPC system, named Excalibur.

The system will complement a suite of other HPC resources at the ARL DSRC. Excalibur will be one of the largest supercomputers installed to date in the HPCMP, boasting 101,184 cores augmented with 32 NVIDIA Tesla K40 GPGPUs. The system has a theoretical peak of over 3.7 petaFLOPS, 400 TB of memory, and 122 TB of solid-state disk ("flash") storage. Excalibur will serve as a key HPC resource for the DOD Research, Development and Test and Evaluation communities.

The Department of Defense (DOD) High Performance Computing Modernization Program (HPCMP) just completed its fiscal year 2015 investment in supercomputing capability supporting the DOD science, engineering, test and acquisition communities. The total acquisition is valued at $73.8 million, including acquisition of four supercomputing systems with corresponding hardware and software maintenance services. At 9.92 petaFLOPS, this procurement will increase the HPCMP's aggregate supercomputing capability from 16.5 petaFLOPS to 26.4 petaFLOPS.

"The acquisition of these four systems completes an historic year for the HPCMP," said Christine Cuicchi, HPCMP associate director for High Performance Computing (HPC) centers. "We have now purchased more than $150 million of supercomputers within the 2014 calendar year. This previously unmatched expansion in capability — which nearly quintuples our pre-2014 capacity of 5.38 petaFLOPS to 26.4 petaFLOPS — will give our users another 577,000 compute cores on which to perform groundbreaking science and realize previously impossible discoveries in DOD research."

The four purchased systems will collectively provide 223,024 cores, more than 830 terabytes of memory, and a total disk storage capacity of 17.4 petabytes. This competitive government acquisition was executed through the U.S. Army Engineering and Support Center in Huntsville, Alabama, which selected systems from Silicon Graphics Federal, LLC, and Cray, Inc.

The new supercomputers will be installed at two of the HPCMP's five DOD Supercomputing Resource Centers (DSRCs), and will serve users from all Defense Department services and agencies:

  • The Air Force Research Laboratory DSRC at Wright-Patterson Air Force Base, Ohio, will receive an SGI ICE X system, based on the 2.3-GHz Intel Xeon E5-2699v3 ("Haswell-EP") processors. The system will be named "Thunder" and consist of:

    • 125,888 compute cores
    • 356 Intel Xeon Phi 7120P accelerators
    • 356 NVIDIA Tesla K40 GPGPUs
    • 443 terabytes of memory
    • 12.4 petabytes of storage
    • 5.66 petaFLOPS of peak computing capability

  • The Navy DSRC of the Naval Meteorology and Oceanography Command at Stennis Space Center, Mississippi, will receive three Cray XC40 systems containing 2.3 GHz Intel Xeon E5-2698v3 ("Haswell-EP") processors. The systems will be named "Bean," "Conrad," and "Gordon," in honor of the Apollo 12 astronauts Alan Bean, Pete Conrad, and Richard F. Gordon, Jr., all of whom were also naval aviators. Two larger systems will each contain:

    • 50,208 compute cores
    • 168 Intel Xeon Phi 5120D accelerators
    • 197 terabytes of memory
    • 2.29 petabytes of storage
    • 2.0 petaFLOPS of peak computing capability

  • The third, smaller Navy DSRC system will contain:

    • 6,720 compute cores
    • 24 Phi accelerators
    • 27 terabytes of memory
    • 420 terabytes of storage
    • 260 teraFLOPS peak computing capability

Combined, the three systems will add 107,136 compute cores and 4.26 petaFLOPS of computing capability to the Navy DSRC.

The HPCMP enables advanced computing for the DOD's science and engineering communities, and serves as an innovation enabler. HPC is employed in a broad range of diverse application areas in the DOD including fluid dynamics, structural mechanics, materials design, space situational awareness, climate and ocean modeling, and environmental quality.

Image of an SGI Ice X and a Cray XC30

The Department of Defense (DOD) High Performance Computing Modernization Program (HPCMP) just completed its fiscal year 2014 investment in supercomputing capability supporting the DOD science, engineering, test and acquisition communities. The total acquisition is valued at $65 million, including acquisition of two supercomputing systems with corresponding hardware and software maintenance services. At 8.4 petaFLOPS, this procurement more than doubles the HPCMP's aggregate supercomputing capability, increasing from 8.1 petaFLOPS to 16.5 petaFLOPS.

"Supercomputing is a critical enabling technology for the DOD as it continues vital work to improve both the safety and performance of the U.S. military," said John West, director of the HPCMP. "These newly acquired systems ensure that scientists and engineers in the DOD's research, development, test and evaluation communities will continue to be able to take advantage of a robust computing ecosystem that includes the best computational technologies available today."

The two purchased systems will collectively provide nearly 227,000 cores, more than 850 terabytes of memory, and a total disk storage capacity of 17 petabytes. This competitive government acquisition was executed through the U.S. Army Engineering and Support Center in Huntsville, Alabama, which selected systems from both Silicon Graphics Federal, LLC, and Cray, Inc.

"The increase in computational capability will dramatically improve the speed at which our scientists and engineers are able to complete their work," said Christine Cuicchi, HPCMP associate director for HPC centers. "These systems are also designed to advance DOD's scientific visualization capabilities to manage the vast amounts of data being produced on these systems, providing new opportunities for discovery in numerous areas of research."

The new supercomputers will be installed at two of the HPCMP's five DOD Supercomputing Resource Centers (DSRCs), and will serve users from all of the services and agencies of the Defense Department:

  • The Army Research Laboratory DSRC in Aberdeen, Maryland, will receive a Cray XC40 system containing 2.3-GHz Intel Xeon E5-2698 v3 ("Haswell-EP") processors and NVIDIA Tesla K40 General-Purpose Graphics Processing Units (GPGPUs). This system will consist of 101,312 compute cores, 32 GPGPUs, and 411 terabytes of memory, and will provide 3.77 petaFLOPS of peak computing capability.
  • The U.S. Army Engineer Research and Development Center DSRC in Vicksburg, Mississippi, will receive an SGI ICE X system containing 2.3-GHz Intel Xeon E5-2699 v3 ("Haswell-EP") processors and NVIDIA Tesla K40 GPGPUs. The system will consist of 125,440 compute cores, 32 GPGPUs, and 440 terabytes of memory, and will provide 4.66 petaFLOPS of peak computing capability.

Delivery of the new systems is expected in the spring of 2015, with general availability to users in the summer.

Graphic of floorplan expansion

ARL is making major renovations and upgrades to the ARL Supercomputing Research Center facility to prepare the ARL DSRC for the arrival of Technology Insertion fiscal year 2014 (TI-14) systems, as well as to scale the Center's High Performance Computing (HPC) infrastructure for future systems.

Building on the initial TI-12 requirements, the facility expansion was scoped to take advantage of the available space and infrastructure to build out the facility to meet HPC system needs through 2018. Plans are to increase commercial power capacity to 12 Megawatts, generator power to 12 Megawatts, cooling capacity to 2600 tons, and UPS protection to 8 Megawatts.

Photo of a Cray XC30

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) has just completed its fiscal year 2013 investment in supercomputing capability supporting the DoD science, engineering, test and acquisition communities. The total acquisition is valued at $50 million, including acquisition of multiple supercomputing systems and hardware, as well as software maintenance services. At nearly three petaFLOPS of computing capability, the acquisition constitutes a more than 50 percent increase in the DoD HPCMP's current peak computing capability.

The supercomputers will be installed at two of the HPCMP's five DoD Supercomputing Resource Centers (DSRCs), and will serve users from all of the services and agencies of the Department.

  • The Air Force Research Laboratory DSRC at Wright-Patterson Air Force Base, Ohio, will receive a 1.2-petaFLOPS Cray XC30 system built upon the 2.7-GHz Intel Xeon E5-2697 v2 ("Ivy Bridge EP") processor. This system consists of 56,112 compute cores and 150 terabytes of memory.
  • The Navy DSRC, Naval Meteorology and Oceanography Command, located at Stennis Space Center, Mississippi, will receive two 0.75-petaFLOPS Cray XC30 systems built upon the 2.7-GHz Intel Xeon E5-2697 v2 processor and the 1.05-GHz Intel Xeon Phi 5120D coprocessor. These two systems are identical; each consists of 29,304 compute cores, 7,440 coprocessor cores, and 78 terabytes of memory. The systems are designed as sister systems to provide continuous service during maintenance outages.

Delivery of the new systems is expected in the spring of 2014, with general availability to users in the summer.

Photo collage with Haise, Kilrain, and a Xeon Phi Co-processor.

The HPCMP Centers Team is pleased to announce the availability of Intel Xeon Phi accelerated nodes on the Navy DSRC IBM iDataPlex systems HAISE and KILRAIN. Intel Xeon Phi coprocessors, based on the Many Integrated Core (MIC) architecture, can each support up to 240 threads of concurrent execution. A detailed Intel Xeon Phi Guide is available on the Navy DSRC website at: https://www.navydsrc.hpc.mil/docs/xeonPhiGuide.html

Users will be able to access the nodes via the phi64 queue on each system. PBS jobs submitted to the phi64 queue will continue to be charged based on the usage of a standard compute node (# of nodes * 16 cores * # of hours). If you would like access to the phi64 queue, please send a brief note requesting access to dsrchelp@navydsrc.hpc.mil.
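To give a concrete sense of the charging formula, a job that uses 4 nodes for 2 hours would be charged 4 × 16 × 2 = 128 core-hours against its project allocation.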

Each of these nodes pairs a standard compute node with an accelerator node. Each accelerated compute node contains two 8-core 2.6-GHz Intel Sandy Bridge processors, identical to those in the rest of the system, and 64 GBytes of memory. Each Xeon Phi co-processor node contains two Intel Xeon Phi 5110P co-processors, each composed of 60 cores and 8 GBytes of internal memory. Each Xeon Phi core supports up to 4 execution threads, allowing for potentially extensive parallelism.

In order to properly support compilation of Intel Xeon Phi codes, the Intel Compiler Suite and Intel Math Kernel Library (MKL) modules will both be defaulted to version 13.1 on Haise and Kilrain.

Please note: Intel Xeon Phi nodes on Haise and Kilrain currently only support offload mode and native mode.

In offload mode, code runs on the standard compute node portion of a Phi node and offloads segments of execution to the co-processor portion. For example, an MPI code with offload directives within it could run on multiple accelerated nodes, using the Intel Xeon Phi portion of each node to potentially speed up calculations.
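As a rough illustration (not taken from the Navy DSRC documentation), the sketch below marks a simple loop for execution on the first coprocessor using Intel's offload pragma. It assumes compilation with the Intel compiler suite defaulted on Haise and Kilrain (for example, icc -openmp offload_scale.c); the file and variable names are hypothetical.

    /* offload_scale.c - minimal offload-mode sketch (illustrative only) */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void)
    {
        int i;
        float *a = (float *)malloc(N * sizeof(float));
        float *b = (float *)malloc(N * sizeof(float));
        for (i = 0; i < N; i++)
            a[i] = (float)i;

        /* Run the following loop on the first Xeon Phi device; the in/out
         * clauses tell the offload runtime what to copy to and from the card. */
        #pragma offload target(mic:0) in(a:length(N)) out(b:length(N))
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            b[i] = 2.0f * a[i];

        printf("b[42] = %f\n", b[42]);
        free(a);
        free(b);
        return 0;
    }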

In native mode, code runs directly on the co-processor node. Currently, MPI codes running in native mode are limited to a single Phi node. However, in the next several months Mellanox should release an update to its version of OFED, which will support native execution of MPI codes across multiple Intel Xeon Phi nodes.
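By contrast, a native-mode program needs no offload directives: an ordinary code is cross-compiled for the coprocessor and launched on it directly. The sketch below is a hypothetical example; it assumes the Intel compiler's -mmic flag (for example, icc -mmic -openmp native_hello.c -o native_hello.mic) and that the resulting binary is then run on the card, such as with the micnativeloadex utility.

    /* native_hello.c - minimal native-mode sketch (illustrative only) */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* Each OpenMP thread maps to one of the coprocessor's hardware
         * threads (up to 4 per core, roughly 240 per card). */
        #pragma omp parallel
        {
            #pragma omp single
            printf("Running natively with %d OpenMP threads\n",
                   omp_get_num_threads());
        }
        return 0;
    }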

For more information about developing code for the Intel Xeon Phi, including webinars, please see: http://software.intel.com/mic-developer.

Users are invited to report problems and direct requests for unclassified assistance to the Consolidated Customer Assistance Center (CCAC) at 1-877-222-2039 or by E-mail to help@ccac.hpc.mil.

Photo of a Utility Server with the PGI and OpenACC Logos

The HPCMP recently upgraded the existing PGI compiler licenses on the DSRC utility servers to support OpenACC directives. Programmers can accelerate applications on x64+accelerator platforms by adding OpenACC compiler directives to existing high-level, standard-compliant Fortran, C, and C++ programs and then recompiling with the appropriate compiler options. The only installed PGI version on the utility servers that supports the accelerator/OpenACC features is 13.7; all other installed versions support only the original base compiler license. Information on the PGI Accelerator compilers can be found at the following link: http://www.pgroup.com/resources/accel.htm
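As a simple, hedged illustration of the directive-based approach, the sketch below annotates a SAXPY loop with an OpenACC directive. The file name and build line (for example, pgcc -acc -Minfo=accel saxpy.c with the PGI 13.7 module loaded) are assumptions for illustration rather than site-specific instructions.

    /* saxpy.c - minimal OpenACC sketch (illustrative only) */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 1000000

    int main(void)
    {
        int i;
        float *x = (float *)malloc(N * sizeof(float));
        float *y = (float *)malloc(N * sizeof(float));
        for (i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* Ask the compiler to run the loop on the attached accelerator;
         * the copyin/copy clauses describe the required data movement. */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);   /* expected: 4.000000 */
        free(x);
        free(y);
        return 0;
    }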

The new HPC Centers website, https://centers.hpc.mil, is your resource for all information and documentation related to HPC systems, tools, and services available at the five DSRCs and within the High Performance Computing Modernization Program (HPCMP). This website replaces http://ccac.hpc.mil, which is no longer in service.

The User Dashboard has been developed to provide all authorized users holding a valid HPCMP account a personalized look at up-to-date information about their accounts on all of the HPC systems to which they have access within the Program. The Dashboard also provides documentation that requires authentication to view, including information about our classified systems.

Photo montage of Garnet

The 150,912-core Cray XE6 (Garnet) is open for logins and batch job submission. The consolidation of Raptor, Chugach, and the original Garnet, plus additional disk racks, makes Garnet the largest HPC system in the High Performance Computing Modernization Program.

Garnet is a capability computing system and has been tailored to run large jobs, with a maximum job size of 102,400 cores and reduced wall time limits for large jobs to increase throughput. Running very large jobs has its challenges, with I/O being one of the most difficult to contend with. Guidance and tools to help maximize the I/O performance of large jobs are available on the ERDC DSRC website.

Photo of Spirit

The Air Force Research Laboratory (AFRL) DoD Supercomputing Resource Center (DSRC), one of the five supercomputing centers in the Department of Defense High Performance Computing Modernization Program (HPCMP), is proud to announce the transition to full production of its newest supercomputer for the Department of Defense. The new SGI ICE X supercomputer, named "Spirit" in honor of the B-2 Stealth Bomber, is located at Wright-Patterson Air Force Base in Dayton, Ohio.

Installation of the new system expands the installed supercomputing capability of the AFRL DSRC by 1.415 quadrillion floating point operations per second (petaFLOPS), making it one of the top 20 fastest computers in the world. The new system will support research, development, test and evaluation in a diverse array of disciplines ranging from aircraft and ship design to microelectronics and protective systems.

Photo of an iDataPlex system

The HPCMP recently completed the installation, integration, and testing of six IBM iDataPlex systems, and all are now in full production for DoD scientists and engineers. Three of the systems, named Haise, Kilrain, and Cernan, are located at the Navy DSRC at the Naval Meteorology and Oceanography Command, Stennis Space Center, Mississippi. Two of the systems, named Pershing and Hercules, are located at the Army Research Laboratory (ARL) DSRC, Aberdeen Proving Ground, Maryland. The final system, named Riptide, is located at the Maui High Performance Computing Center (MHPCC) DSRC in Kihei, Hawaii.

Each system is based on the Intel Xeon E5-2670 ("Sandy Bridge") processor, which runs at 2.6 GHz and features Turbo Boost and Hyper-Threading. Each system also runs Red Hat Enterprise Linux and features the IBM General Parallel File System (GPFS) and a Mellanox FDR-10 InfiniBand interconnect. The Navy DSRC systems are additionally fitted with Intel Xeon Phi 5110P coprocessors based on the Intel Many Integrated Core (MIC) architecture, enabling more efficient performance for highly parallel applications.

The table below presents specific system details for the HPCMP IBM iDataPlex systems.

System    Compute Nodes   Compute Cores      Memory per Node            Peak Performance
Haise     1,176           18,816 + 24 Phi    32 GBytes                  435 TFLOPS
Kilrain   1,176           18,816 + 24 Phi    32 GBytes                  435 TFLOPS
Cernan    248 + 4¹        3,968 + 64¹        32 GBytes or 256 GBytes¹   84 TFLOPS
Pershing  1,092 + 168¹    17,472 + 2,688¹    32 GBytes or 64 GBytes¹    420 TFLOPS
Hercules  1,092           17,472             64 GBytes                  360 TFLOPS
Riptide   756             12,096             32 GBytes                  252 TFLOPS

¹ Large memory nodes

The HPCMP is delighted to offer these new systems - providing an aggregate of nearly two petaFLOPS of peak computing capability - to the DoD Science and Technology (S&T) and Test and Evaluation (T&E) communities. To request an account, contact the HPCMP Consolidated Customer Assistance Center at help@ccac.hpc.mil.

Photo montage from the dedication ceremony

Navy DSRC Establishes HPC System Names and Honors NASA Astronaut and Former Naval Aviator Fred Haise at Ceremony

The Navy DoD Supercomputing Resource Center (DSRC), one of the five supercomputing centers in the Department of Defense High Performance Computing Modernization Program (HPCMP), recently added three new supercomputers to its operations. The three IBM iDataPlex systems, installed in the fall of 2012 and operational in January, tripled the installed capacity of the Navy DSRC.

The new supercomputers, located at the John C. Stennis Space Center in Mississippi, are named after NASA astronauts who have served in the Navy. At a dedication ceremony in February, one of those computers was dedicated in honor of naval aviator and Apollo 13 astronaut Fred Haise, who attended the ceremony.

Photo of the award presentation.

The Department of Defense (DoD) High Performance Computing Modernization Program (HPCMP) was recently recognized by the U.S. Army chief of engineers in the Awards of Excellence program for excellence in sustainability, design, and construction. A team composed of representatives from the HPCMP's five DoD Supercomputing Resource Centers (DSRCs) was awarded the U.S. Army Corps of Engineers (USACE) Green Innovation Award, which recognizes an innovation or idea with clear potential to transform the federal community's overall energy and environmental performance. The award is presented to an individual or team for the development and execution of a novel product, project, program, design, or revolutionary idea that promotes sustainability in the federal government in an area relevant to the USACE mission. The DSRCs created an interagency community of practice (COP) team, the Green Team, that meets monthly to share best practices in supercomputing facilities operation and to plan energy awareness initiatives.