TeraGrid is composed of multiple partner institutions, each of which contributes one or more
hardware resources to the grid. Resources include computational and visualization machines,
data storage,
data collections, and instruments. This page summarizes TeraGrid hardware resources at each of the
partner sites.
For information on an individual resource, click the machine name in the table below to go to that provider's local Web site.
Resource Name / Platform | Description | Specifications
Resources are grouped by site.
IU |
IU Big Red IBM e1350
| Big Red has 768 IBM JS21 compute nodes, each with two dual-core 2.5 GHz PowerPC 970MP CPUs, 8 GB of memory, 72 GB of local scratch disk, and a PCI-X Myrinet-2000 interconnect for high-bandwidth, low-latency MPI applications. It has access to 266 TB of local GPFS scratch space, to the TeraGrid-wide GPFS-WAN file system, and to the 535 TB Lustre file system provided by the Data Capacitor. NOTE: 6.5 TFLOPS is available for TeraGrid usage.
Recommended Use Big Red is a distributed shared-memory cluster, intended to run parallel as well as serial applications.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs (see the service-unit note after this entry)
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
MPP
Operating System
SuSE Linux Enterprise Server 9
Teraflops
30.7
Disk Size
266 TB |
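Note on service units: the allocation limits on this page are expressed in SUs. SU charging is normalized per resource, so one SU does not correspond to exactly one core-hour everywhere; as a rough, hypothetical illustration only, the sketch below assumes 1 SU ≈ 1 CPU core-hour to show how a startup allocation translates into node-hours on a 4-core node such as Big Red's JS21s.

    # Hypothetical sketch: converting an SU allocation into node-hours,
    # assuming 1 SU ~ 1 CPU core-hour (the actual charge factor is set per
    # resource; consult the provider's allocation policy).
    startup_allocation_sus = 30000   # Big Red startup limit listed above
    cores_per_node = 4               # two dual-core PowerPC 970MP CPUs per node
    node_hours = startup_allocation_sus / cores_per_node
    print(node_hours)                # 7500.0 node-hours under this assumption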
IU Quarry Dell AMD
| Quarry Gateway Web Services Hosting consists of multiple Dell AMD systems geographically distributed for failover. Each system has at least 8 cores and 32 GB of memory. Persistent storage is available via IU NFS home directories with a 10 GB default quota or the 335 TB Data Capacitor WAN (Lustre) file system. The system utilizes OpenVZ to
provide virtual hosting of RPM-based Linux distributions. The host
operating system is Red Hat Enterprise Linux.
Recommended Use This machine is used for hosting Scientific Gateway and Web Service allocations. The Quarry resource is restricted to members of approved XRAC grants that have a web-service component.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
SMP
Operating System
Red Hat Enterprise Linux Server
Teraflops
0
Disk Size
335 TB |
LONI |
LONI Queen Bee Dell Intel 64 Linux Cluster
| Queen Bee is a 668-node Dell PowerEdge 1950 cluster with a peak performance of 50.7 TFLOPS, running the Red Hat Enterprise Linux 4 operating system. Each node contains two quad-core Intel Xeon 2.33 GHz 64-bit processors and 8 GB of memory. The cluster is interconnected with 10 Gb/sec InfiniBand and has a total of 192 TB (raw) of storage in shared Lustre file systems.
Recommended Use Queen Bee is primarily intended for parallel applications scalable up to 5344 processing cores.
Note: Queen Bee is in production as of February 1, 2008. Starting with April 1, 2008 allocations, the LONI Queen Bee system is co-allocated with the NCSA Abe system.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4 (Linux 2.6.9-55)
Teraflops
50.7
Disk Size
192 TB |
NCAR |
NCAR Frost IBM Blue Gene/L
| Frost is a single-rack Blue Gene/L system with 1024 compute nodes. There is one I/O node for every 32 compute nodes (pset size of 32), for a total of 32 I/O nodes in the rack. Each compute node and I/O node contains a dual-core chip with two 700 MHz PowerPC 440 CPUs, 512 MB of memory, and two floating-point units (FPUs) per core. Thus Frost has a total of 2048 processors and a peak performance of 5.734 trillion floating-point operations per second (TFLOPS). By default, the compute nodes run in coprocessor mode (one processor handles computation and the other handles communication), but virtual node mode is also available, where both processors share the computation and communication load.
Recommended Use The Frost Blue Gene/L system is a highly scalable platform for developing, testing, and running parallel MPI applications on up to 2048 processors, and it provides efficient computing for smaller job sizes.
Frost became available for Startup/Education allocations in July, 2007. Requests for Research allocations were first accepted on Sept. 17, 2007, for projects beginning Jan. 1, 2008.
Status In production and accepting allocation requests
Startup Allocation Limit 50000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
MPP
Operating System
SuSE Linux Enterprise Server 9
Teraflops
5.734
Disk Size
6 TB |
NCSA |
NCSA Abe Dell Intel 64 Linux Cluster
| This Dell blade system has 1,200 PowerEdge 1955 dual-socket, quad-core compute blades, an InfiniBand interconnect, and 100 TB of storage in a Lustre file system.
Recommended Use The NCSA Intel 64 cluster (Abe) is intended for highly scalable parallel applications.
Note: Abe is in production as of July 9, 2007. Starting with April 1, 2008 allocations, Abe is co-allocated with the LONI Queen Bee system.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4 (Linux 2.6.9)
Teraflops
89.47
Disk Size
100 TB |
NCSA Cobalt SGI Altix
| The NCSA SGI Altix consists of several Intel Itanium 2 processor shared-memory systems. The Altix uses the CXFS shared parallel filesystem from SGI.
Recommended Use The NCSA SGI Altix (cobalt) is intended primarily for running large shared-memory applications.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
SMP
Operating System
SGI ProPack 5
Teraflops
8.2
Disk Size
100 TB |
NCSA Lincoln Dell/Intel/NVIDIA
| Lincoln consists of 192 compute nodes (Dell PowerEdge 1950 dual-socket nodes with quad-core Intel Harpertown 2.33 GHz processors and 16 GB of memory) and 96 NVIDIA Tesla S1070 accelerator units. Each Tesla unit provides 345.6 gigaflops of double-precision performance and 16 GB of memory.
Recommended Use Lincoln is intended for applications that can make use of the heterogeneous processors (CPU and GPU) that comprise this system.
Status Pre-production, available for allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4
Teraflops
47.5
Disk Size
200 TB |
NCSA Mercury Intel IA-64 Cluster
| NCSA's IA-64 TeraGrid Linux Cluster consists of 887 IBM nodes: 256 nodes with dual 1.3 GHz Intel Itanium 2 processors (half with 4 GB of memory per node, and the other half with 12 GB of memory per node), and 631 nodes with dual 1.5 GHz Intel Itanium 2 processors (4 GB of memory per node). The cluster runs SuSE Linux and uses Myricom's Myrinet cluster interconnect and the GPFS parallel file system.
Recommended Use The NCSA IA-64 Linux Cluster (mercury) is primarily intended to run applications of moderate to high levels of parallelism, particularly those needing a 64-bit environment and codes that perform well in a distributed cluster environment.
Note: mercury is co-allocated with the SDSC and ANL IA-64 Clusters.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Linux 2.4.21-SMP
Teraflops
10.23
Disk Size
60 TB |
NICS |
NICS Kraken Cray XT4
| The Kraken system is a Cray XT4 with 4,512 nodes interconnected with SeaStar, a 3D torus. Each node has one 4-core processor for a total of 18,048 cores.
Recommended Use Kraken is intended for highly scalable parallel applications.
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
MPP
Operating System
Compute Node Linux (CNL)
Teraflops
166
Disk Size
350 TB |
NICS Kraken Cray XT5
| The upgraded Kraken system is a Cray XT5 with 8,352 nodes interconnected with SeaStar, a 3D torus. Each node has two quad-core AMD Opteron processors, for a total of 66,816 cores. Half of the nodes have 2 GB of memory per core; the remaining nodes have 1 GB of memory per core.
Each core runs at 2.3 GHz and can perform 4 floating-point operations per clock cycle, giving a theoretical peak performance of 615 TFLOPS (a worked version of this arithmetic follows this entry). HPL runs at 438 TFLOPS.
The XT5 system will replace the existing XT4. NICS expects the XT5 to go into production in mid- to late-February.
Recommended Use Kraken is intended for highly scalable parallel applications.
Status Pre-production, available for allocation requests
Startup Allocation Limit 200000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
MPP
Operating System
Compute Node Linux (CNL)
Teraflops
615
Disk Size
2400 TB |
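The peak figures quoted for Kraken (and, with the same formula, for Ranger below) follow directly from core count, clock rate, and floating-point operations per clock cycle as stated in the description above. A minimal sketch of that arithmetic:

    # Theoretical peak = cores x clock (GHz) x flops per cycle, reported in TFLOPS.
    def peak_tflops(cores, clock_ghz, flops_per_cycle=4):
        return cores * clock_ghz * flops_per_cycle / 1000.0

    print(peak_tflops(66816, 2.3))   # ~614.7 TFLOPS, quoted as 615 for the Kraken XT5
    print(peak_tflops(62976, 2.3))   # ~579.4 TFLOPS, consistent with the Ranger entry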
ORNL |
ORNL NSTG Cluster IBM IA-32 Cluster
| The ORNL NSTG cluster has 28 nodes, 16 of which are dedicated to running compute jobs. Each compute node has two 3.06 GHz Intel Pentium 4 Xeon CPUs, 2.5 GB of memory, and 26 GB of local scratch. 800 GB of shared scratch is provided across the private gigabit interconnect. Four additional nodes are dedicated to running GridFTP servers, and each is configured with 4 GB of memory.
Recommended Use Available for general use. Also available, by special request, for long-duration or experimental infrastructure test deployments.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
SuSE Linux 10.2
Teraflops
0.34
Disk Size
2.14 TB |
PSC |
PSC BigBen Cray XT3
| BigBen is a Cray XT3 MPP system with 2068 2.6-GHz dual-core AMD Opteron compute nodes linked by a custom-designed interconnect. Twenty-two dedicated I/O processors are also connected to this network. Each compute node has 2 Gbytes of memory shared by its two cores, and runs the Catamount operating system. The front end processors run SuSE Linux.
Recommended Use BigBen is primarily intended to run applications with very high levels of parallelism or concurrency (512 - 4096 processes).
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
MPP
Operating System
SuSE Linux [Frontend] Catamount [Compute]
Teraflops
21.5
Disk Size
100 TB |
PSC Pople SGI Altix 4700
| Pople is an SGI Altix 4700 comprising 192 blades, each with 8 GB of memory and 2 sockets. Each socket holds a 1.66 GHz dual-core Intel Itanium 2 (Montvale) processor. Pople has a total of 384 sockets, 768 cores, and 1.5 TB (2 GB per core) of RAM.
The blades are linked with a NUMAlink interconnect.
Recommended Use Pople is intended for applications utilizing shared memory and hybrid architectures.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
SMP
Operating System
SuSE Linux
Teraflops
5.0
Disk Size
150 TB |
Purdue |
Purdue Brutus SGI 450
| These resources consist of an SGI 450 (brutus.rcac.purdue.edu) with two RC100 FPGA blades, totaling 4 available FPGAs. Also available is a Sun Fire X2200 M2 (portia.rcac.purdue.edu) which serves both as a place & route node for preparing FPGA code for use on Brutus and as an entry point for GSI-SSH and job submission to Brutus by TeraGrid users.
NOTE: All references to CPUs below should be interpreted as referring to FPGAs.
Recommended Use This resource should only be used for FPGA accelerated applications.
Status In production and accepting allocation requests
Startup Allocation Limit 10000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
Cluster
Operating System
SUSE Linux
Teraflops
0.042
Disk Size
22 TB |
Purdue Condor Pool Condor Pool
| The Purdue Condor pools comprise over 14,000 CPUs: 8000 LINUX/X86_64 CPUs, 400 LINUX/INTEL (ia32) CPUs, and 5000 WINNT51/INTEL CPUs, as well as a small number of Itanium Linux, Solaris, and Mac OS X machines. Memory on compute nodes ranges from 512 MB to 32 GB, and most CPUs run at 3 GHz or better. With a total of over 60 TFLOPS available, the Purdue Condor pools can provide large numbers of cycles in a short amount of time. All shared areas and software packages available on Lear are available on Condor.
Recommended Use Condor is designed for high-throughput computing and is excellent for parameter sweeps, Monte Carlo simulation, or almost any serial application; a minimal worker-script sketch follows this entry. Some classes of parallel jobs (master-worker) may also be run in Condor.
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Linux Debian Etch, RHEL4
Teraflops
60
Disk Size
170 TB |
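To make the high-throughput pattern above concrete, here is a hypothetical worker script (not part of Condor itself) for a parameter sweep: Condor queues many independent copies of a serial job and can pass each copy a distinct process index as a command-line argument, which the script uses to pick its own parameter value.

    # sweep_point.py -- hypothetical worker for a Condor-style parameter sweep.
    # Each queued job receives its index as argv[1] and evaluates one point of
    # the sweep, independently of all other jobs in the pool.
    import sys

    def run_one_point(index, start=0.0, step=0.01):
        x = start + index * step   # the parameter value assigned to this job
        return x * x               # stand-in for the real serial computation

    if __name__ == "__main__":
        idx = int(sys.argv[1])
        print(idx, run_one_point(idx))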
Purdue Steele Dell 1950 Cluster
| The Steele cluster consists of 893 dual quad-core Dell 1950 compute nodes running Red Hat Enterprise Linux 4. Each node thus has eight 64-bit 2.33 GHz Intel E5410 cores and either 16 GB or 32 GB of RAM. The nodes are interconnected with either Gigabit Ethernet or InfiniBand. The machine offers access to the RCAC scratch space. Steele users may also access a 1.3 PB DXUL archive system. Steele's peak performance is rated at 66.59 TFLOPS.
Recommended Use Steele is well suited for a wide range of both serial and parallel jobs.
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4
Teraflops
66.59
Disk Size
130 TB |
Purdue TeraDRE TeraDRE
| The Purdue TeraDRE is a high-throughput visualization resource built on the Purdue Condor pools. A 48-node subcluster featuring NVIDIA GeForce 6600 GT GPUs is available for GPU-accelerated programs and hardware-accelerated rendering using
Gelato.
Recommended Use The TeraDRE allows TeraGrid users to render graphics with a number of rendering packages: Maya, POV-ray, and Blender, among others.
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) TeraDRE is a TeraGrid visualization resource. | Machine Type
Cluster
Operating System
Linux Debian Etch, RHEL4
Teraflops
60
Disk Size
170 TB |
SDSC |
SDSC IBM IA-64 Cluster
| The TeraGrid cluster at SDSC comprises 262 IBM Itanium 2 nodes with two processors per node. Each node runs SuSE Linux, and the nodes are interconnected with Myricom's Myrinet. The system has a peak performance of 3.1 TFLOPS, a total memory of 1 TB, and a total of 50 TB of GPFS disk through the SAN network. Jobs are scheduled and run via the Catalina scheduler and the PBS batch system.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Linux 2.4 (SuSE 8.0)
Teraflops
3.10
Disk Size
34 TB |
TACC |
TACC Lonestar Dell PowerEdge 1955
| The Lonestar Dell PowerEdge Linux Cluster is configured with 5,840 compute-node cores, 11.6 TB of total memory, and 106 TB of local disk space. The rated peak performance is 62 TFLOPS. The system supports a 70 TB globally accessible Lustre parallel file system. Nodes are interconnected with InfiniBand technology in a fat-tree topology with 1 GB/sec point-to-point bandwidth. Also, a 2.8 petabyte archive system and a 5 TB SAN are available through the login/development nodes.
Recommended Use Lonestar is intended primarily for parallel applications scalable up to 4096 processing cores. Normal batch queues enable users to run up to 24-hour simulations that utilize up to 512 cores. Simulations requiring longer run times and/or more cores are accommodated via a special queue after approval from TACC technical staff. Serial and development queues are available to users for code development, conversion of serial applications to parallel, and single-CPU performance analysis.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
MPP
Operating System
Linux (CentOS) 2.6.12
Teraflops
62.16
Disk Size
106.5 TB |
TACC Ranger Sun Constellation System
| The Ranger Sun Constellation Cluster is configured with 3,936 four-socket, quad-core AMD Opteron nodes (62,976 compute cores) and 125 TB of distributed memory. With each core clocked at 2.3 GHz and capable of four flops/clock cycle, Ranger provides the user community access to a resource with a theoretical peak performance of 579.3 TFLOPS. Multiple shared file systems (HOME, WORK, and PROJECTS) are configured from 1.7 PB of raw storage. All file systems are managed via the Lustre Parallel File System. Nodes are interconnected with InfiniBand technology with two non-blocking Sun Magnum switches acting as the core of the fabric.
Recommended Use Ranger is intended for users with codes scalable to thousands of cores (1024 and above). A batch queue is available to help users develop, test, and scale codes up to 1024 compute cores. Four separate login nodes provide interactive access to the system for compiling and for interfacing with the batch queuing system.
Production Date
Mon., Feb. 4, 2008
Status In production and accepting allocation requests
Startup Allocation Limit 200000 SUs
NOT included in TeraGrid-Wide Roaming | Machine Type
MPP
Operating System
Linux (CentOS)
Teraflops
579.3
Disk Size
1730 TB |
TACC Spur Sun Visualization Cluster
| Spur, the TACC Sun Visualization Cluster, consists of 8 nodes, each with significant computing and graphics resources. Total system resources include 128 compute cores, 1 TB distributed memory and 32 NVIDIA FX5600 GPUs.
The login node is a Sun Fire X4600 server with 8 dual-core AMD Opteron processors, 256 GB of memory, and 2 NVIDIA Quadro Plex Model 4 graphics cards. The compute nodes comprise one Sun Fire X4400 server with 4 quad-core AMD Opteron processors, 128 GB of memory, and 2 NVIDIA Quadro Plex Model 4 graphics cards, plus 6 Sun Fire X4400 servers, each with 4 quad-core AMD Opteron processors, 128 GB of memory, and an NVIDIA Quadro Plex S4 graphics card.
Spur shares the InfiniBand interconnect and Lustre parallel file systems of the TACC Sun Constellation Cluster, Ranger. Spur is thus not only a powerful stand-alone visualization system; it also enables researchers to perform visualization tasks on Ranger-generated data without migrating to another file system and to integrate simulation and rendering tasks on a single network fabric.
Recommended Use Spur is intended for serial and parallel visualization applications that take advantage of large per-node memory, multiple computing cores, and multiple graphics processors. Spur is also an ideal visualization resource for researchers that use Ranger since data produced on Ranger can be visualized directly on Spur with no data migration.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) Spur is a TeraGrid visualization resource. | Machine Type
Cluster
Operating System
Linux (RHEL 4)
Teraflops
1.13
Disk Size
1730 TB |
UC/ANL |
UC/ANL IA-32 Visualization Cluster
| The IA-32 TeraGrid Linux Visualization Cluster at UC/ANL consists of 96 nodes with dual Intel Xeon processors, 4 GB of memory, and an NVIDIA GeForce 6600 GT AGP graphics card per node. The cluster runs Red Hat Enterprise Linux and uses Myricom's Myrinet cluster interconnect. There is a 16 TB local high-performance GPFS file system, and access to the TeraGrid-wide GPFS-WAN file system.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) The IA-32 Visualization Cluster is a TeraGrid visualization resource. | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4
Teraflops
0.61
Disk Size
4.00 TB |
UC/ANL Intel IA-64 Cluster
| The IA-64 TeraGrid Linux Cluster at UC/ANL consists of 62 nodes with dual Intel Itanium 2 processors and 4 GB of memory per node. The cluster runs Red Hat Enterprise Linux and uses the Myricom Myrinet cluster interconnect. There is a 16 TB local high-performance GPFS file system, and access to the TeraGrid-wide GPFS-WAN file system.
Status In production and accepting allocation requests
Startup Allocation Limit 30000 SUs
A TeraGrid-Wide ROAMING Resource (more about Roaming) | Machine Type
Cluster
Operating System
Red Hat Enterprise Linux 4
Teraflops
0.61
Disk Size
4.00 TB |