A New Computer Architecture Strategy: The “Blue Planet” Proposal
In recent years scientific computing in America has been
handicapped by its dependence on hardware that is designed
and optimized for commercial applications. The performance
of the recently completed Earth Simulator in Japan, which
is five times faster than the fastest American supercomputer,
dramatically exposed the seriousness of this problem. Typical
scientific applications are now able to extract only 5 to
10 percent of the power of American supercomputers built from
commercial Web and data servers. By contrast, the design of
the Earth Simulator makes 30 to 50 percent of its power accessible
to most types of scientific calculations.
It is becoming increasingly clear that the requirements of
high-performance computing (HPC) for science and engineering
and those of the commercial market are diverging.
This divergence can be seen in some computer vendors’
reduced interest in the HPC market as well as in the performance
limitations of clusters of symmetric multiprocessors (SMPs)
used for scientific applications. Communications and memory
bandwidth in SMPs are not scaling with processor power, which
constrains the performance of scientific codes. The cost of
scientific supercomputing, with computers nearly the size of
a football field consuming megawatts of electricity, is also
an issue of national strategic importance.
Lawrence Berkeley and Argonne national laboratories, in close
collaboration with IBM, have responded to this challenge with
a proposal for a new program to bring into existence a new
class of computational capability in the United States that
is optimal for science. Our strategic white paper, “Creating
Science-Driven Computer Architecture: A New Path to Scientific
Leadership,” envisions a new type of development
partnership with computer vendors that goes beyond the mere
evaluation of the offerings that those vendors are currently
planning for the next decade. This strategy includes development
partnerships with multiple vendors, in which teams of scientific
applications specialists and computer scientists will work
with computer architects from major U.S. vendors to create
hardware and software environments that will allow scientists
to extract the maximum performance and capability from the
hardware.
Figure 2. Blue Planet was born here: at a two-day workshop
in September 2002, a team of Argonne, Berkeley Lab, and IBM
scientists developed the fundamental concepts of Virtual
Vector Architecture (ViVA), potentially redefining
supercomputing in America.
One of the key partnerships, involving IBM, Lawrence Berkeley
National Laboratory, and the NERSC Center, will deploy a new
architecture called ViVA, or Virtual Vector Architecture.
This architecture will use commercial microprocessors but
will run programs optimized for vector processors, providing
both high sustained levels of performance and cost-effectiveness.
Blue Planet, a 160 teraflop/s mature implementation
of ViVA, has been proposed for installation at NERSC in the
second half of 2005. Blue Planet is expected to provide twice
the sustained capability of the Earth Simulator at half the
cost. Computer scientists from Berkeley Lab/NERSC, Argonne,
and IBM held two workshops in September and November 2002
(Figure 2), the first to define the Blue Planet architecture,
and the second for IBM to receive scientists’ suggestions
on the design of the Power 6 processor.