National Science Foundation

Press Release 96-065
Need for Speed: NSF Pursues Petaflop Computers

October 25, 1996

This material is available primarily for archival purposes. Telephone numbers or other contact information may be out of date; please see current contact information at media contacts.

Kids often race their bicycles, pedaling madly to move ever faster. Then they advance to sedans but covet sports cars, still wanting to push the envelope of speed.

Computer scientists are no different.

The fastest computers built today are capable of speeds of about a teraflop, a trillion floating-point operations per second. Already researchers are looking far ahead, yearning for computers a thousand times faster.
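
For a rough sense of the scale involved, here is a back-of-the-envelope sketch (the workload size is an illustrative assumption, not a figure from this release) of how long a fixed job would take at each speed:

    # Back-of-the-envelope scale comparison; the workload size below is
    # an arbitrary illustrative figure, not something from this release.
    WORKLOAD_OPS = 1e18  # a hypothetical job: a quintillion operations

    for name, ops_per_second in [("teraflop machine", 1e12),
                                 ("petaflop machine", 1e15)]:
        seconds = WORKLOAD_OPS / ops_per_second
        print(f"{name}: {seconds:,.0f} seconds (~{seconds / 86400:.2f} days)")

At these rates, the same job that ties up a teraflop machine for a week and a half finishes on a petaflop machine in under twenty minutes.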

The National Science Foundation, in conjunction with NASA and DARPA, has funded eight research projects that take creative approaches toward a petaflop. These pilot projects will be presented at a workshop this Sunday, Oct. 27, at the Frontiers '96 conference in Annapolis, Maryland.

To put these speeds in perspective: if the world's fastest computers now being built are the sailing ships Christopher Columbus used to cross the Atlantic, the goal of this research is the space shuttle. Right now, computer speeds are limited by memory capacity and by how fast data can be moved from memory to the parts of the computer that do the work. Even with those problems solved, a computer operating at petaflop speeds must be massively parallel: any application must be broken into roughly a million pieces, all calculated at once, because working through a problem sequentially would leave most of the machine idle. A decomposition of that kind is sketched below.
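
As a minimal illustration, assuming nothing about the 1996 designs themselves, the following Python sketch splits one job into independent pieces and computes them concurrently; the work function, pool size, and piece count are hypothetical stand-ins:

    # A minimal sketch of the massively parallel model described above:
    # one job split into many independent pieces computed concurrently.
    # The work function and pool are illustrative assumptions, not part
    # of the 1996 project designs.
    from multiprocessing import Pool

    def simulate_piece(piece_id: int) -> float:
        """Stand-in for the physics computed on one piece of the problem."""
        return float(sum(i * i for i in range(piece_id % 1000)))

    if __name__ == "__main__":
        pieces = range(1_000_000)  # "a million pieces"
        with Pool() as pool:       # worker processes run pieces in parallel
            results = pool.map(simulate_piece, pieces, chunksize=10_000)
        print(f"combined result: {sum(results):.3e}")

On a desktop the pool holds only a handful of workers; a petaflop machine would run on the order of a million such pieces truly simultaneously.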

"The first petaflop computers are going to be difficult to use. One of the goals of this project is to see how friendly can we keep them. You don't want computers only a few experts can use. The architectures must support a reasonable programming model without slowing down," said John Van Rosendale, NSF program manager leading the project.

But why would anyone need a thousand trillion operations per second? Any number of applications are already apparent, from real-time nuclear magnetic resonance imaging during surgery to computer-based drug design, astrophysical simulation, and modeling of environmental pollution and long-term climate change.

"Until the Internet arrived, we had no real appreciation of its impact. Petaflop computers may be like that: we have only a limited sense of the kind of applications this technology will enable," Van Rosendale said.

The eight Pursuing a Petaflop projects are:

  • A Flexible Architecture for Executing Component Software at 100 Teraflops; Andrew A. Chien and Rajesh K. Gupta, University of Illinois at Urbana-Champaign

  • Point Designs for 100 Teraflop Computers Using PIM Technologies; Peter M. Kogge, Steven C. Bass, Jay B. Brockman, Danny Z. Chen and Edwin Hsing-Mean Sha; University of Notre Dame

  • Architecture, Algorithms and Applications for Future Generation Supercomputers; Vipin Kumar and Ahmed Sameh; University of Minnesota

  • Design Studies on Petaflops Special-Purpose Hardware for Astrophysical Particle Simulations; Stephen L. W. McMillan, Drexel University; Piet Hut, Institute for Advanced Study, Princeton; Junichiro Makino, University of Tokyo; Michael L. Norman, University of Illinois at Urbana-Champaign; Frank J. Summers, Princeton University

  • Hybrid Technology Multi-Threaded Architecture; Paul Messina and Thomas Sterling; California Institute of Technology

  • Hierarchical Processors-and-Memory Architecture for High Performance Computing; Jose A.B. Fortes and Rudolph Eigenmann, Purdue University; Valerie Taylor, Northwestern University

  • The Illinois Aggressive Cache-Only Memory Architecture Multiprocessor; Josep Torrellas and David Padua, University of Illinois at Urbana-Champaign

  • A Scalable-Feasible Parallel Computer Implementing Electronic and Optical Interconnections for 156 TeraOPS Minimum Performance; Sotirios G. Ziavras and Haim Grebel, New Jersey Institute of Technology; Anthony T. Chronopoulos, Wayne State University

-NSF-

Media Contacts
Beth Gaston, NSF (703) 306-1070 egaston@nsf.gov

Program Contacts
John Van Rosendale, NSF (703) 306-1581 jvanrose@nsf.gov

