ORNL Search Magazine

A closer view

Jeff Nichols


ORNL's associate laboratory director for Computing and Computational Sciences leads the laboratory's high-performance computing efforts in areas such as climate change, fusion energy, nanotechnology, and biotechnology. This includes managing the Oak Ridge Leadership Computing Facility, which was established at ORNL in 2004 and now hosts several of the world's most powerful computer systems. We asked him to share his thoughts on how supercomputing impacts both R&D and day-to-day living.

How does supercomputing touch our daily lives?


Supercomputers affect almost everything we do, often in ways that we don't realize. Weather forecasters, for example, use supercomputers. Manufacturers depend on supercomputers to fast-track product designs and increase their global competitiveness. Environmental scientists use computer simulations to model the flow of groundwater and protect the drinking water supply. Computing technology touches our lives all the time. We just don't think of it as supercomputing.

After NASA sent astronauts to the moon, we saw a lot of innovations based on discoveries made in the space program. Supercomputing works the same way. Innovations in the field are rapidly spun off into consumer products. Smart phones can operate without burning a hole in your pocket because the computing and cooling technologies used in these devices were first developed for high-performance computers.

Computer simulations are now used in almost every scientific discipline. What's the attraction?

The attraction is that simulation allows scientists to predict outcomes with a high degree of accuracy. One example is drug discovery. For the last several years, the pharmaceutical industry has been using supercomputers to help predict which chemical compounds are most likely to be effective for a particular purpose, based on computer models of their chemical structure. The results of this research have enabled drug companies to reduce the time it takes to develop a new drug from about 10 years to two.

The same techniques could be applied to many other areas, like advanced manufacturing, the development of new materials, nanoscience, and molecular machines. New products in all these areas can be designed and then tested with supercomputers—which is pretty cool.

Jaguar has often been called the most powerful scientific computer in the world. How will Titan expand that legacy?

Titan will take advantage of new, faster processors and will also replace many of its standard processors with GPUs (graphics processing units) that are 10 times more powerful and designed specifically to handle computationally intensive operations. These upgrades will result in a 10X performance boost over Jaguar.

The result will not only be the world's most powerful computer, but like Jaguar, Titan will be a machine that can be used for research across a wide range of scientific disciplines, such as nuclear energy and climate modeling. Making sure that Titan is as accessible to scientific researchers as Jaguar is a tough job, because the upgrades shift the machine's complexity from its hardware to its software to accommodate differences in how GPUs handle information. The 10X improvement in hardware performance will be there, but the software guys will have to write new code that takes full advantage of it by breaking research problems down into tens of millions of parallel elements, and that is a very tough assignment.
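
To make that parallel decomposition concrete, here is a minimal, hypothetical CUDA sketch, not code from Titan or any OLCF application: a single array operation is split so that each GPU thread computes one element, the basic pattern used to expose millions of parallel work items to a GPU. The kernel and array names are illustrative assumptions.

    // Illustrative sketch only (not Titan application code): each GPU thread
    // handles one element of a large array, so a single launch creates
    // millions of parallel work items.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            c[i] = 2.0f * a[i] + b[i];                  // one element per thread
    }

    int main()
    {
        const int n = 1 << 24;                 // ~16 million elements
        size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);          // unified memory for brevity
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        scale_add<<<blocks, threads>>>(a, b, c, n);   // launch millions of threads
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);           // expect 4.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }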

Titan's contribution to Jaguar's legacy will be to generate 10 times the raw performance from the same amount of energy—while delivering a similar increase in speed for its scientific applications.

You've said that having an understanding of the scientific applications that run on supercomputers is just as important as having the fastest supercomputer. Why is that?

Part of our strategy has been to ensure that we have groups at the laboratory that are doing application development, not only in computational materials, but also in computational chemistry, computational biology, computational fusion science, computational nuclear science, and computational astrophysics. We have computational science groups in each of those areas developing applications to run on our systems. The reason is that those groups understand scalability and extensibility and can develop the next-generation applications that will take advantage of Titan's capabilities and do science on day one.

We are unique, in some sense, compared to the other national labs in that we have computational science skills across all of those domains, and we can apply those folks to developing next-generation applications. I think that is what sets us apart from other national labs. They all have good people and good computational scientists, but I think we have a breadth of applications and an ability to solve science problems across the board that some of the other labs can't match.

You came to the laboratory 10 years ago. In the area of computing, what's the biggest change you've seen over that time?

When I came to ORNL 10 years ago, the concept of scalable infrastructure was not as well understood as it is today. It was understood that we had to have a physical presence, and Thomas Zacharia (ORNL's former deputy lab director for Science and Technology) had the vision to build a 40,000-square-foot computer center. At the time, there was no computer to put in it, so Thomas was a visionary when it came to the scalable infrastructure. He also anticipated that we could take advantage of the space by supporting multiple organizations and agencies.

Thomas hired me from Pacific Northwest National Laboratory because I was doing scalable computational chemistry. He wanted the same thing for ORNL—not just in chemistry, but in materials, climate, biology, and all the other research areas.

Today we are extending that vision. In 2000, Thomas delivered the laboratory's first teraflops capability, and eight years later Jaguar delivered the laboratory's first petaflops performance. Now with Titan, we are building on those achievements and working toward the goal of delivering the first exaflops supercomputer. ORNL also hosts supercomputers for the University of Tennessee, the National Science Foundation, and the National Oceanic and Atmospheric Administration.

What makes the laboratory fertile ground for supercomputing?

The main reason is that we have 600-plus people who take the business of high-performance computing very seriously. We also have unique capabilities in terms of scalable infrastructure: not just power, space, and cooling, but all of the resources that surround computing operations. We have the file systems, the archives, the visualization theater, all of the things that enable us to field a well-balanced system and allow us to understand the science that is being produced. That's why these organizations want to locate their supercomputing operations here: because we know how to do it, and we've been doing it very well for the last 10 years. When organizations work with us, they are doing science on a scale that gives them a competitive advantage across the country and around the world.