Decommissioned Systems


Jaguar – XT4 Partition (2007 – 2011)
The XT4 partition contained 7,832 compute nodes in addition to dedicated login/service nodes. Each compute node contained a quad-core AMD Opteron 1354 (Budapest) processor running at 2.1 GHz, 8 GB of DDR2-800 memory (some nodes used DDR2-667 memory), and a SeaStar2 router. The resulting partition contained 31,328 processing cores, more than 62 TB of memory, over 600 TB of disk space, and reached a peak performance of 263 teraflop/s (263 trillion floating point operations per second).
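Those aggregate figures follow directly from the per-node specifications. A minimal back-of-the-envelope check in Python, assuming 4 double-precision floating-point operations per core per clock for the Budapest Opteron (an assumption not stated above):

    # Rough check of the XT4 partition totals.
    nodes = 7832
    cores_per_node = 4
    mem_per_node_gb = 8
    clock_ghz = 2.1
    flops_per_core_per_cycle = 4  # assumed for the quad-core Budapest Opteron

    cores = nodes * cores_per_node                                  # 31,328 cores
    memory_tb = nodes * mem_per_node_gb / 1000                      # ~62.7 TB
    peak_tf = cores * clock_ghz * flops_per_core_per_cycle / 1000   # ~263 TF

    print(cores, round(memory_tb, 1), round(peak_tf, 1))
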
Ewok (2006 – 2011)
Ewok was an Intel-based InfiniBand cluster running Linux. The system was provided as an end-to-end resource for center users; it was used for workflow automation of jobs running on Jaguar and for advanced data analysis. The system contained 81 nodes. Each node contained two 3.4 GHz Intel Xeon (Pentium 4-based) processors and 6 GB of memory. An additional node contained four dual-core AMD processors and 64 GB of memory. The system was configured with a 13 TB Lustre file system for scratch space.


Eugene (2008 – 2011)
Eugene was a 27 TF IBM Blue Gene/P system operated by the NCCS. It provided approximately 45 million processor hours per year for ORNL staff and for the promotion of research collaborations between ORNL and its core university partners.
The system consisted of 2,048 nodes, each with a quad-core 850 MHz IBM PowerPC 450d processor and 2 GB of memory. Eugene had 64 I/O nodes, one for every 32 compute nodes, and each submitted job was required to use at least one I/O node. This means that each job consumed a minimum of 32 compute nodes per execution.
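A quick consistency check of those numbers in Python, assuming the PowerPC 450d's 4 floating-point operations per core per clock (a value not stated above):

    # Rough check of Eugene's figures.
    compute_nodes = 2048
    io_nodes = 64
    cores_per_node = 4
    clock_ghz = 0.85
    flops_per_core_per_cycle = 4  # assumed for the PowerPC 450d's dual FPU

    min_job_nodes = compute_nodes // io_nodes   # 32 compute nodes per I/O node
    peak_tf = compute_nodes * cores_per_node * clock_ghz * flops_per_core_per_cycle / 1000

    print(min_job_nodes, round(peak_tf, 1))     # 32, ~27.9 TF (quoted above as 27 TF)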



Phoenix (2003 – 2008)
Phoenix was a Cray X1E provided as a primary system in the National Center for Computational Sciences (NCCS).
The original X1 was installed in 2003 and went through several upgrades, arriving at its final configuration in 2005. From October 2005 onward, it provided almost 17 million processor-hours. The system supported over 40 large projects in research areas including Climate, Combustion, High Energy Physics, Fusion, Chemistry, Computer Science, Materials Science, and Astrophysics.
At its final configuration Phoenix had 1,024 multistreaming vector processors (MSPs). Each MSP had 2 MB of cache and a peak computation rate of 18 GF. Four MSPs formed a node with 8 GB of shared memory. Memory bandwidth was very high, roughly half the cache bandwidth. The interconnect functioned as an extension of the memory system, offering each node direct access to memory on other nodes at high bandwidth and low latency.
The Cray X1E used custom-designed vector processors to achieve high performance on scientific codes. The Cray-designed processors were linked by a high-performance shared-memory interconnect. Each of Phoenix’s 1,024 MSPs could carry out as many as 18 billion operations per second, giving the total system a peak of 18.5 trillion operations per second.
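The system peak quoted above is just the per-MSP peak summed over the machine; a minimal check (using the rounded 18 GF per-MSP figure from the text):

    # System peak from the per-MSP peak.
    msps = 1024
    gf_per_msp = 18                       # rounded per-MSP peak quoted above
    peak_tf = msps * gf_per_msp / 1000    # ~18.4 TF, consistent with the quoted 18.5 TF

    print(round(peak_tf, 1))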



Hawk (2006 – 2008)
Hawk was a 64-node Linux cluster dedicated to high-end visualization.
Hawk was installed in 2006 and was used as the Center’s primary visualization cluster until May 2008, when it was replaced by a 512-core system named Lens.
Each node contained two single-core Opteron processors and 2 GB of memory. The cluster was connected by a Quadrics Elan3 network, providing high-bandwidth, low-latency communication. The cluster was populated with two models of NVIDIA graphics cards connected via AGP 8x: the 5900 and the Quadro FX 3000G. Nodes with 3000G cards were directly connected to the EVEREST PowerWall and were reserved for PowerWall use.



Ram (2003 – 2007)
Ram was an SGI Altix provided as a support system for the National Center for Computational Sciences (NCCS).
Ram was installed in 2003 and was used as a pre- and post-processing support system for allocated NCCS projects until 2007.
Ram had 256 Intel Itanium2 processors running at 1.5 GHz, each with 6 MB of L3 cache, 256 KB of L2 cache, and 32 KB of L1 cache. Ram had 8 GB of memory per processor for a total of 2 TB of shared memory.
The most remarkable feature of this computer was its memory, which held 2 trillion bytes (2 terabytes) of data. By contrast, the first supercomputer in Oak Ridge, the Cray X-MP installed in 1985, had one-millionth the memory of the SGI Altix.
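The 2 TB total is simply the per-processor memory summed across the machine; a minimal check:

    # Ram's total shared memory from the per-processor figure.
    processors = 256
    gb_per_processor = 8
    total_tb = processors * gb_per_processor / 1024   # 2.0 TB of shared memory

    print(total_tb)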