Experimental Computing Lab (ExCL)

The Experimental Computing Lab (ExCL) was established in 2004 to provide application users and computer scientists with access to leading-edge computing systems. ExCL is managed by the Future Technologies Group. ExCL researchers investigate architectures such as multicore processors, Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), Cell Broadband Engines (CBEs), and Multi-Threaded Array Processors (MTAPs). Most hardware is located in a large access-controlled server room at ORNL in the JICS/NICS building, near the Future Technologies Group.

More Information

Systems

Compute Servers

  • A 31-node Linux Networx cluster (Yoda1.ornl.gov) consisting of 32-bit Intel Xeon 2.6 GHz processors, networked with Gigabit Ethernet and 4x SDR InfiniBand, which serves as a testbed for system software research including operating systems and parallel filesystems.
  • Dual socket 2.4 GHz quad-core Intel Clovertown (springfield.ftpn.ornl.gov)
  • Quad socket 2.2 GHz quad-core AMD Opteron Barcelona B3 (malaga.ornl.gov)
  • Dual socket 2.1 GHz quad-core AMD Opteron Barcelona B3 (madrid.ornl.gov)
  • Dual socket 2.6 GHz dual-core Intel Woodcrest (wc0[012].ornl.gov), connected with InfiniBand DDR HCAs and Myrinet 10GigE cards.
  • Dual socket 2.2 GHz dual-core AMD Opterons (dmz0[0123].ornl.gov)
  • A Sun SPARC Enterprise T5120 server (olafsun.ftpn.ornl.gov) with one 1.165 GHz UltraSPARC T2 processor. This processor has eight cores, each with eight thread contexts; to the OS, this system appears to have 64 virtual processors.
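The arithmetic behind that 64-way appearance can be sketched in a few lines (a minimal illustration using the T2 figures quoted above; `os.cpu_count()` is simply the generic, portable way to query the OS-visible count on whatever host runs it, not anything ExCL-specific):

```python
import os

# UltraSPARC T2 figures from the listing above
cores = 8             # physical cores on the chip
threads_per_core = 8  # hardware thread contexts per core

# Each thread context is schedulable, so the OS sees the product
virtual_processors = cores * threads_per_core
print(virtual_processors)  # 64

# Generic query for the logical CPU count the OS actually reports
# on the machine running this script (value varies by host)
print(os.cpu_count())
```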

I/O Servers for Parallel Filesystem Development

  • Dual socket 2.3 GHz quad-core Intel Harpertown (iot0[1-5].ftpn.ornl.gov), with InfiniBand DDR+QDR and Chelsio 10GigE networking

Emerging Architectures

  • Two Cell Broadband Engine (CBE) blade systems with dual 2.4 GHz Cell processors, each with a 64-bit Power Architecture PPE core and eight SPE SIMD cores. (cell0[01].ornl.gov)
  • Several GPU accelerator variants, including double-precision-capable boards:
    • NVIDIA Tesla 10-series
    • AMD FireStream
    • several previous-generation NVIDIA and AMD/ATI boards
    • A CUDA development machine with an NVIDIA 8600GT is available at athens.ornl.gov.
  • An AGEIA PhysX P1 PCI 128MB GDDR3 physics accelerator board.
  • Two ClearSpeed Avalon PCI boards, each capable of 100 GFLOPS.
  • Three Digilent Virtex-II Pro FPGA Development System boards, with a variety of I/O ports, including USB and Ethernet.
  • A Nallatech XtremeDSP Development Kit with the Xilinx Virtex-II Pro FPGA and dual-channel high-performance ADCs and DACs.
  • Various simulators for advanced architectures

Infrastructure

  • A 4.5 TB Panasas ActiveStor storage system (one shelf, with two Director Blades and nine Storage Blades) serving home directories and project areas for ExCL systems
  • A 500 GB RAID server with two dual-core Intel Xeons
  • An InfiniBand network consisting of a 48-port DDR switch, 32 dual-port SDR HCAs, and 8 dual-port DDR HCAs. The 31-node LNXI cluster is currently connected with the InfiniBand SDR HCAs.
  • Two InfiniBand ConnectX QDR (quad data rate) HCAs
  • Two Myrinet 10-Gigabit Ethernet cards
  • Two Chelsio 10-Gigabit Ethernet cards

Retired Architectures

  • An SRC-6C MAPstation Reconfigurable Computing Platform pairing dual 2.8 GHz Xeon processors with a Xilinx Virtex-II FPGA connected via DIMM slots.
  • An ATI FireStream 1GB PCIe GPU-based stream processing card.
  • Two ClearSpeed CS-301 PCI boards, each with two Multi-Threaded Array Processors with 64 parallel execution units.
  • An Iwill H8501 server with eight 1.8 GHz dual-core Opteron processors and 32 GB of memory on a NUMA HyperTransport interconnect, configured as a 16-way SMP.
  • A 144-processor Cray XD1 containing 2.2 GHz Opteron processors; six of its nodes also contain a Virtex-II Pro FPGA connected to a pair of Opterons via HyperTransport.
 
ft/experimental_computing_lab.txt · Last modified: 2009/08/03 09:21 by rothpc