Leap to the extreme scale could break science boundaries
Posted February 14, 2011
![](https://webarchive.library.unt.edu/web/20130214195917im_/http://ascr-discovery.science.doe.gov/shared/images/cable_river_360.jpg)
First in a series.
Over the next decade, scientists will command computers operating at exaflops speeds – 1,000 times faster than today’s best machines, the equivalent of more than a billion laptops working together.
With that kind of power, researchers could unravel the secrets of disease-causing proteins, develop efficient ways to produce fuels from chaff, make more precise long-range weather forecasts and improve life in other ways. But first scientists must find ways to make such huge computers and make them run well – a job nearly as difficult as the tasks the machines are designed to complete.
A high-performance computer’s speed is measured in floating-point operations per second, or flops. A floating-point operation is a calculation on numbers with fractional parts, which take more work than whole numbers. For example, it’s easier and faster to solve 2 plus 1 than 2.135 plus 0.865, even though both equal 3.
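The article’s example can be tried directly. A toy sketch in Python – note that because hardware stores floating-point numbers in binary, the fractional sum may land a hair’s width away from exactly 3.0:

```python
# Whole-number arithmetic: exact.
int_sum = 2 + 1

# Floating-point arithmetic: one "flop." The binary representation of
# 2.135 and 0.865 is inexact, so the result may carry a tiny rounding error.
float_sum = 2.135 + 0.865

assert int_sum == 3
assert abs(float_sum - 3.0) < 1e-12  # equal to 3, within rounding
```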
Some of the first supercomputers, built in the 1970s, ran at about 100 megaflops, or 100 million flops. Then speeds climbed through gigaflops and teraflops – billions and trillions of flops, respectively – to today’s top speed of just more than a petaflops, or 1 quadrillion flops. How big is a quadrillion? Well, if a penny were 1.55 millimeters thick, a stack of 1 quadrillion pennies would be 1.55 × 10⁹ kilometers tall – enough to reach from Jupiter to the Sun and back.
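The penny-stack comparison checks out as a quick back-of-the-envelope calculation (all figures from the article; the Jupiter distance is approximate):

```python
pennies = 10**15           # 1 quadrillion
thickness_mm = 1.55        # thickness of one penny, per the article

stack_km = pennies * thickness_mm / 1e6   # millimeters -> kilometers
assert abs(stack_km - 1.55e9) < 1.0       # about 1.55 billion km

# Jupiter orbits roughly 7.8e8 km from the Sun, so the stack covers
# about one round trip between the two.
```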
An exaflops is a quintillion, or 10¹⁸, flops – 1,000 times faster than a petaflops. Given that there are about 1 sextillion (10²¹) known stars, “An exascale computer could count every star in the universe in 20 minutes,” says Buddy Bland, project director at the Oak Ridge National Laboratory (ORNL) Leadership Computing Facility.
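Bland’s figure follows from simple division – a sextillion counts at a quintillion per second works out to about a thousand seconds:

```python
stars = 10**21      # roughly 1 sextillion known stars
exaflops = 10**18   # operations per second at exascale

seconds = stars // exaflops
assert seconds == 1000   # about 17 minutes, matching the rough "20 minutes"
```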
A brief history of speed
Increases in flops can be correlated with specific generations of computer architectures. Cray vector machines, for instance, led the way into gigascale computing, says Mark Seager, assistant department head of advanced technology at Lawrence Livermore National Laboratory (LLNL). In those computers, a single “vector” instruction operated on a whole array – a vector – of data at once. The vector approach dominated supercomputing in the 1980s. To reach terascale computing, designers turned to massively parallel processing (MPP), an approach akin to thousands of computers working together. Parallel processing machines break big problems into smaller pieces, each of which is solved simultaneously (in parallel) by a host of processors.