Argonne technology is bringing closer the day when the Internet can let people share computing, storage, data, programs and other resources as easily as the electric power grid allows people and energy companies to share electricity.
The “Grid” will allow researchers at many facilities to integrate instruments, displays and computational and information resources over a variety of computer platforms to attack increasingly complex challenges faced by scientists, educators, industry and even consumers.
The Grid is more than just a collection of resources. It also includes a set of protocols, services and tools for enabling Grid computing: the Globus Toolkit™. The toolkit, which was awarded an R&D 100 Award from R&D magazine in 2002, is one of the most widely used systems for Grid computing today. Its components enable the secure, scalable and coordinated use of resources in dynamic, multi-institutional “virtual organizations.” The toolkit was developed by the Globus Project™, centered at Argonne’s Mathematics and Computer Science Division (MCS), the University of Chicago’s Distributed Systems Laboratory and the University of Southern California’s Information Sciences Institute.
Growing industrial interest
Industry interest in grids and the Globus Toolkit is growing rapidly. Both Microsoft and IBM are now providing funds to support distributed computing based on these technologies. Entropia is integrating its commercial software with the Globus Toolkit, and Platform Computing Inc. is collaborating with the Globus Project to provide a commercially supported version. Nine other companies worldwide have adopted the toolkit as their de facto standard Grid technology platform.
“We certainly welcome this support,” said Ian Foster, associate director of the MCS Division and professor of computer science at the University of Chicago. “The Globus Project is staunchly committed to open-architecture software, and industry backing will contribute to the public knowledge base of Grid computing. The potential benefit to users is enormous.”
Foster and his MCS colleague Steven Tuecke lead Globus Project activities at Argonne and have spearheaded efforts to engage industry in Globus Toolkit development and applications.
Globus technologies are being applied in a wide range of leading-edge activities, including GriPhyN, a physics network to explore applications requiring quadrillions of operations per second; NEESGrid, a national virtual laboratory for earthquake engineering; and the Cactus astrophysics computing portal.
Several national collaboratories supported by the new DOE Scientific Discovery through Advanced Computing program will also use Globus tools.
Creating a virtual supercomputer
In a recent major test of Grid computing, the Globus Toolkit harnessed the power of multiple supercomputers with different operating systems, more than quadrupling the system’s computing efficiency and earning a prestigious Gordon Bell prize for a team of scientists from Argonne, the University of Chicago, Northern Illinois University and the Max Planck Institute for Gravitational Physics in Germany.
The team created a “virtual supercomputer” to simulate the evolution of gravitational waves according to Einstein’s theory of general relativity. The experiments were the largest-ever simulations involving Einstein’s general relativity equations and modeled the gravitational effects of the collision of black holes. The supercomputer comprised 512 processors from three SGI Origin2000 machines at the National Center for Supercomputing Applications in Illinois and a 1,024-processor IBM SP2 at the San Diego Supercomputer Center in California.
The research team used three software systems: Argonne’s Globus Toolkit; MPICH-G2, a Grid-enabled version of the standard Message Passing Interface (MPI) developed at Northern Illinois University and Argonne; and the Cactus computational toolkit for scientists and engineers, developed by the Max Planck Institute for Gravitational Physics.
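To illustrate the idea, MPICH-G2 implements the standard MPI programming interface, so a program written to that interface needs no source changes to run across Grid sites. The short C sketch below is a hypothetical example, not drawn from the Cactus code; it uses only standard MPI calls, which MPICH-G2 supports.

/* Minimal MPI program. Because MPICH-G2 implements the standard MPI
   interface, code like this runs unchanged whether its processes share
   one supercomputer or are spread across several Grid sites. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    MPI_Get_processor_name(host, &len);     /* machine it landed on */

    printf("rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}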
Researchers tested various configurations. After a baseline run, they modified the software and achieved 63 percent efficiency on 1,500 central processing units (CPUs), meaning the virtual supercomputer spent almost two-thirds of its time crunching numbers and only one-third waiting for data from other machines. This was 14 percent more efficient than the baseline. The team also ran the experiments using only 1,140 CPUs and boosted efficiency to 88 percent.
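In this sense, parallel efficiency is simply the fraction of wall-clock time spent computing rather than communicating. The hypothetical C sketch below (not part of the prize-winning code) shows one way such a figure can be measured in an MPI program; the work loop and the Allreduce are placeholders for a real numerical kernel and a real data exchange.

/* Sketch: time the compute and communication phases separately, then
   report efficiency as the compute fraction of total time. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double compute = 0.0, comm = 0.0, local = (double)rank, sum;

    for (int step = 0; step < 100; step++) {
        double t0 = MPI_Wtime();
        for (volatile long i = 0; i < 1000000; i++)
            ;                                   /* placeholder kernel */
        double t1 = MPI_Wtime();
        MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD); /* placeholder exchange */
        double t2 = MPI_Wtime();
        compute += t1 - t0;
        comm    += t2 - t1;
    }

    if (rank == 0)
        printf("efficiency = %.0f%%\n",
               100.0 * compute / (compute + comm));

    MPI_Finalize();
    return 0;
}

By this measure, 63 percent efficiency means compute time is nearly twice the communication wait, matching the two-thirds/one-third split described above.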
The data these experiments generate are of interest to astrophysicists looking for gravitational signals from celestial events such as the collision and merger of black holes. The results are also significant for computer scientists as they try to improve Grid computing efficiency, said Foster.
“The experiments underscore the potential of linking multiple supercomputers with different operating systems for large-scale simulations across a computational grid,” Foster said. “We can merge all their computing power to focus on the most challenging problems of science.”
Funding for this work was provided by DOE’s Office of Science, the National Science Foundation, NASA, the Defense Advanced Research Projects Agency, IBM and Microsoft.
For more information, please contact Dave Jacqué.