
Simulating the Next Generation of Energy Technologies

Livermore's high-performance computing capabilities and expertise in simulation promise to rapidly advance the nation's development of clean-energy technologies.

COMPUTER simulation has become an important tool for finding solutions in almost every field, from physics, astrophysics, meteorology, chemistry, and biology to economics, psychology, and social science. Running simulations allows researchers to explore ideas, gain insights into new technology, and estimate the performance of systems too complex for conventional experimental analysis. For example, automotive engineers perform complex calculations to consider design adjustments before they produce the first physical model. Aerospace engineers use simulations to evaluate proposed combinations of aircraft features instead of building and testing prototype models for each possibility.

Research teams at Lawrence Livermore are now applying the power of high-performance computing (HPC) to improve energy systems throughout the country. Laboratory scientist Julio Friedmann, who leads the carbon-management program for the Global Security Principal Directorate, says, “The energy and environmental challenges facing the nation are so immense, urgent, and complex, high-performance computing is one of the most important tools we have to accelerate the development and deployment of solutions. Simulations will give us the confidence to move ahead more rapidly, and we don’t have the luxury of learning the slow way.”

HPC will provide U.S. industry with a competitive advantage in solving environmental challenges, achieving energy independence, and reducing the nation’s reliance on imported fossil fuels. In addition, using these computational tools to explore technology solutions will save time and money by helping utility companies reduce capital expenditures, avoid industrial failures, and prevent damage to power-generation equipment.

A Foundry for Solutions
According to Friedmann, partnerships between national laboratories and private industries are advancing the use of simulation to evaluate new concepts for power generation and efficient energy use. Traditionally, technology development for energy utilities and other industries starts with producing a benchtop model. When a new design proves successful, the benchtop model is scaled up in power and capacity, with each subsequent model taking about two years to produce.

With HPC, design engineers can scale through simulated prototypes much more quickly. “HPC allows us to skip steps in the scaling process,” Friedmann says. “Without these simulations, we’d have to keep building larger prototypes from a benchtop to a 10-kilowatt model on up to 100 kilowatts, 1 megawatt, and so forth.”

In support of stockpile stewardship, the Laboratory has already created many complex simulation tools and developed the expertise to run them effectively on massively parallel computer systems. The successful application of HPC to help maintain a reliable nuclear weapons stockpile has increased confidence in the power and effectiveness of these tools. As a result, large and small firms throughout the energy industry are interested in tapping into the Laboratory’s HPC resources.

“Utilities and those involved with improving energy efficiency work with computational tools every day,” says John Grosh, deputy associate director for Computation’s programs. “They are frequently hampered, though, because they are running applications on desktop computers or small server systems. The computational horsepower offered by our machines is 1,000 to 100,000 times greater than what they have available.” HPC simulations can examine complex scenarios with fine resolution and high fidelity—that is, with the level of detail and accuracy required to ensure that simulated results emulate reality.

In looking for partnership opportunities that are most suitable for addressing national problems, Friedmann has found many energy projects in which HPC simulations could play an important primary or supporting role, improving the quality of solutions and the rate of deployment. He notes, however, that simulation and modeling are not the goal. “They are the medium by which we deliver solutions to problems,” he says. “Like a foundry, we want to forge solutions to address threats to American competitiveness and energy security.”

These challenges are providing a wide range of opportunities where HPC simulations can make a difference. One Laboratory effort is focused on predicting how the intermittent nature of renewable energy sources such as wind and solar power will affect electricity generation. In another project, Livermore researchers are developing HPC simulations to evaluate the environmental implications of new technologies such as those for enhanced energy production and carbon capture and sequestration. Says Friedmann, “Delivering solutions to these problems is our measure of success.”

A New Look at Today’s Technology
Improving the nation’s energy security is not only about developing advanced technologies. It also involves improving how available resources are used today. Livermore geophysicist Rick Ryerson in the Physical and Life Sciences Directorate is leading one such effort: applying HPC calculations to predict how best to create subsurface fracture networks for enhanced recovery of shale gas and geothermal energy.

“Before we induce hydraulic fracturing or stimulate gas flow in an underground network, we need to evaluate the effects of our proposed techniques,” says Ryerson. “Then we can refine the best methods to get improved energy extraction in a safe and environmentally responsible manner.”

Jeff Roberts, who leads Livermore’s Renewable Energy Program, notes that this multidisciplinary effort builds on the Laboratory’s expertise in seismology and rock mechanics as well as HPC. “Our existing codes were not designed to simulate fracture generation in tightly coupled geologic materials—for example, areas where underground water flows through different rock layers,” says Roberts. “A key challenge in resolving this issue has been to develop a simulation framework that allows us to explore the interactions between fluids and solids during the fracturing process.”

Livermore researchers are also part of the Greater Philadelphia Innovation Cluster (GPIC), a collaboration designed to help organizations build, retrofit, and operate facilities for greater energy efficiency. “We need better insight into how buildings consume energy and lose heat,” says Grosh. “Simulation tools can help us gain this understanding at higher fidelity.” With that information, engineers, architects, and operators can modify designs to improve a facility’s energy efficiency.

As part of this project, Laboratory researchers are developing algorithms and other computational tools to quantify the uncertainties in the energy simulations they are running. Uncertainty quantification is a growing field of science that focuses on determining how accurate simulated results are and, in particular, which predicted outcomes are most likely to occur. (See S&TR, July/August 2010, Narrowing Uncertainties.) Assessing model accuracy quantitatively is especially difficult because calculations include approximations for some physical processes and because not all features of a system can be known exactly. By quantifying the uncertainty and numerical errors in simulations of a facility’s energy consumption, Livermore researchers and their GPIC partners can develop more robust and effective building controls.
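
The short sketch below illustrates the basic idea behind one common uncertainty-quantification technique, Monte Carlo sampling: uncertain inputs are drawn from assumed distributions and propagated through a model to produce a spread of predicted outcomes. The toy building-energy model, parameter ranges, and function names are illustrative assumptions, not the GPIC project's actual codes or data.

```python
# Illustrative only: a toy Monte Carlo uncertainty-quantification loop.
# The building-energy surrogate and parameter ranges below are assumptions,
# not the GPIC project's actual models or data.
import numpy as np

rng = np.random.default_rng(seed=1)
n_samples = 10_000

# Uncertain inputs, sampled from assumed distributions.
insulation_r = rng.normal(3.0, 0.3, n_samples)     # wall R-value, m^2*K/W
infiltration = rng.uniform(0.2, 0.6, n_samples)    # air changes per hour
setpoint_delta = rng.normal(20.0, 1.5, n_samples)  # indoor-outdoor temperature difference, K

def annual_heating_energy(r_value, ach, delta_t):
    """Crude surrogate for a facility's annual heating demand (kWh)."""
    envelope_loss = 500.0 / r_value * delta_t        # conduction term, W
    ventilation_loss = 35.0 * ach * delta_t          # infiltration term, W
    return 8.76 * (envelope_loss + ventilation_loss) # average W -> kWh per year

energy = annual_heating_energy(insulation_r, infiltration, setpoint_delta)

# Summarize the spread of predicted outcomes.
print(f"mean annual energy: {energy.mean():,.0f} kWh")
print(f"5th-95th percentile: {np.percentile(energy, [5, 95])}")
```

In practice the costly step is each model evaluation itself, which is why studies of this kind benefit from running thousands of samples in parallel on HPC systems.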

Simulations of increased flow from a stimulated underground reservoir.
High-performance computing (HPC) simulations by Pengcheng Fu, a postdoctoral researcher at Livermore, predict the increased flow of, for example, shale gas or geothermal energy when an underground reservoir is stimulated with fluid overpressure. The results shown here compare the fracture network in a 100- by 100-meter reservoir before (left) and after (right) stimulation. Bar heights indicate flow rate. Color represents fluid pressure, which is highest (red) at the injection well and lower (blue) at the production well. Stimulated flow engages fractures in the lower regions of the network, allowing developers to extract energy from this part of the production field.

Example showing cascade of scales.
Finer mesh grids applied over a zone of interest improve the numerical resolution of HPC simulations. This example shows the cascade of scales provided by nested grid resolution, allowing researchers to examine in detail how global circulation patterns (left), local terrain (middle), and placement of individual wind turbines (right) might affect wind currents.

Forecasts in the Wind
One of the more difficult problems facing utility companies is predicting the availability of intermittent energy sources such as wind and solar power. A Livermore team led by Wayne Miller in the Engineering Directorate is refining HPC simulations of wind energy to improve forecasting accuracy. For this effort, the team has modified the Weather Research and Forecasting (WRF) code, a public-domain code designed to model weather patterns over a segment of the globe, such as the entire state of California. “WRF detects large-scale weather motions and simulates patterns over thousands of square kilometers,” says Miller. “For example, it can pick up the large cyclonic low-pressure systems that come down the California coast. But that representation is too coarse to capture accurate forecasts in a particular spot, such as at a wind farm in the Altamont Hills east of Livermore.”

To improve the numerical resolution of the simulated results, the team applies finer mesh grids over the zone of interest, a process called nesting. By nesting the grid resolution, researchers can see in detail how changes in global circulation patterns and local terrain affect the thermal cycling that drives winds on a daily schedule. Postdoctoral scholar Katie Lundquist is working to incorporate the Immersed Boundary Method into the base WRF code. This method will more precisely represent complex terrain, such as mountains, foothills, and other topographic features that WRF does not resolve, and thus improve the accuracy of the simulated results.
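
As a rough illustration of how nesting cascades across scales, the snippet below steps a horizontal grid spacing down by a fixed refinement ratio; the starting resolution and ratio are assumptions chosen for illustration rather than the team's actual WRF configuration.

```python
# Illustrative sketch of grid nesting: each nested domain refines its parent's
# horizontal spacing by a fixed ratio. The starting resolution and the ratio
# are assumptions for illustration, not the team's actual WRF setup.
parent_spacing_km = 27.0   # outermost (synoptic-scale) grid spacing
refinement_ratio = 3       # each nest is 3x finer than its parent
n_domains = 4

spacing = parent_spacing_km
for level in range(n_domains):
    print(f"domain {level + 1}: {spacing:6.2f} km grid spacing")
    spacing /= refinement_ratio
# Prints 27 km, 9 km, 3 km, 1 km: from continental circulation
# down toward terrain and turbine scales.
```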

Miller’s team is also developing computational tools to model atmospheric turbulence. Gusts are a form of turbulence that can significantly alter the availability of wind energy at a site as well as the stability and uniformity of wind currents—characteristics that can affect a power plant’s production capabilities. In addition, says Miller, “A wind gust strong enough to heel a sailboat over can be trouble for a turbine,” causing component fatigue or even failure before the end of a turbine’s rated lifetime.

In-depth analysis of wind patterns provides valuable information for determining where to locate large wind-turbine farms. Building a wind farm requires considerable capital expenditures, and choosing a site can affect a developer’s return on investment. HPC simulations can incorporate field data as well as historical averages of wind patterns to characterize potential locations and predict the amount of power each one could produce.
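
A minimal sketch of how such a site estimate might be assembled appears below: historical (here, synthetic) hub-height wind speeds are run through the standard wind-power relation, P = ½ρACpv³, and capped at the turbine's rating. The turbine parameters and wind-speed record are assumptions for illustration only.

```python
# Illustrative only: estimating expected turbine output from a wind-speed record.
# Turbine parameters and the synthetic "historical" speeds are assumptions,
# not data from an actual siting study.
import numpy as np

rho = 1.225            # air density, kg/m^3
rotor_diameter = 90.0  # m
area = np.pi * (rotor_diameter / 2) ** 2
cp = 0.40              # assumed power coefficient (well below the Betz limit)
rated_power = 2.0e6    # W; output is capped at the turbine's rating

# Stand-in for a year of hourly hub-height wind speeds (m/s).
speeds = np.random.default_rng(7).weibull(2.0, 8760) * 8.0

power = 0.5 * rho * area * cp * speeds ** 3   # wind-power relation, W
power = np.minimum(power, rated_power)        # respect the turbine rating
power[speeds < 3.0] = 0.0                     # below cut-in speed, no output

capacity_factor = power.mean() / rated_power
print(f"estimated annual energy: {power.mean() * 8760 / 1e6:,.0f} MWh")
print(f"capacity factor: {capacity_factor:.2f}")
```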

Livermore simulations will also evaluate how wind forecasts for an area can predict energy production at a particular wind farm, information utility companies can use to fine-tune the balance between supply and demand. Many utilities supplement peak load requirements with gas turbines to ensure that the amount of power supplied to the grid remains steady even as wind patterns change. When demand for power peaks, as it would on a hot, still day when many people turn on their air conditioners, gas turbines generate the peak energy needed to help meet those demands. At other times, wind alone can generate the power required.

With timely, accurate predictions of these changing conditions, utilities could make adjustments more quickly and better control their operating costs. The simulations developed to date do not run in real time, but researchers at Livermore and elsewhere are refining the models to operate faster.

Before supercomputers, the energy industry relied on experimental data and field observations, both of which are expensive to acquire. Researchers must gather enough samples to ensure that results are statistically valid. As an example, Miller describes an effort to collect data on offshore wind power. The average cost for an offshore meteorological tower is $5 million, and surveying the entire length of the California coast would require 1,000 towers—roughly $5 billion in instrumentation alone. “Computer simulations are wildly cheaper than that project would be,” says Miller. He notes that field samples are still necessary, providing data to validate model accuracy. “If a simulation starts to diverge from reality, we can use field data to tune the model into alignment, even as it’s running.”

Putting Carbon in Its Place
New technologies to capture carbon dioxide before it reaches the atmosphere are also benefiting from HPC simulations. Livermore researchers have evaluated several approaches for sequestering this greenhouse gas as part of the efforts to mitigate climate change. (See S&TR, December 2010, Carbon Dioxide into the Briny Deep; May 2005, Locked in Rock: Sequestering Carbon Dioxide Underground.) An innovative project led by computational biologist Felice Lightstone is using HPC simulations to design a synthetic lung enzyme that can catalyze the capture process before carbon is released to the atmosphere by coal-fired power plants. (See S&TR, March 2011, From Respiration to Carbon Capture.)

To design the catalyst, Lightstone’s team is borrowing methodologies from the pharmaceutical industry. In searching for an effective, broad-spectrum antibiotic, drug developers must identify key interactions between small molecules that bind to specific proteins. Designing the synthetic lung catalyst involves making and breaking chemical bonds as well. HPC tools allow the team to quickly analyze candidate compounds. “Our goal is to give the experimentalists a lot of suggestions for effective molecular combinations,” says Lightstone. “Then we provide a fast, iterative feedback loop to modify the options.”
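
The schematic below sketches such a screen-and-refine loop in simplified form; the scoring function, candidate names, and mutation step are placeholders standing in for the expensive molecular simulations and experimental feedback the team actually relies on.

```python
# Schematic of a screen-and-refine loop for candidate catalyst molecules.
# The scoring function, candidates, and "mutate" step are placeholders; a real
# campaign would call molecular-simulation codes on HPC resources instead.
from typing import Callable

def screen(candidates: list[str], score: Callable[[str], float],
           keep_fraction: float = 0.1) -> list[str]:
    """Rank candidates by predicted score and keep the best fraction."""
    ranked = sorted(candidates, key=score, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:n_keep]

def mutate(candidate: str) -> list[str]:
    """Placeholder for generating chemical variations of a candidate."""
    return [candidate + suffix for suffix in ("-a", "-b", "-c")]

def toy_score(candidate: str) -> float:
    """Placeholder score; stands in for an expensive HPC calculation."""
    return float(hash(candidate) % 1000)

# Iterative feedback loop: screen, pass the leaders to experimentalists,
# fold their feedback back in as new variations, and screen again.
pool = [f"compound-{i}" for i in range(10_000)]
for generation in range(3):
    leaders = screen(pool, toy_score)
    pool = [variant for c in leaders for variant in mutate(c)]
    print(f"generation {generation}: {len(leaders)} leaders retained")
```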

Without HPC, the turnaround time would make this work impractical. “We’d have to do it the old-fashioned way—think of an idea and try it in the lab,” says Lightstone. If researchers relied only on trial and error, they would have to synthesize samples of each candidate molecule to be tested, a difficult and time-consuming process. Instead, using HPC simulations, they can design hundreds of possible combinations and synthesize only the most promising candidates. After creating the catalyst, the researchers will hand it off to Babcock and Wilcox, an international provider of energy products and services, for small-scale systems testing.

Roberts adds that HPC is also important for evaluating the effects of carbon sequestration technologies. “We need to improve our understanding of fluid flow in underground reservoirs,” he says. “For example, where does carbon dioxide go when it’s pumped into the subsurface? And how does that fluid movement affect the surrounding geologic layers?” Evaluating new technologies for carbon capture and sequestration is a long-term, complex process, but HPC simulations speed it up significantly. Says Friedmann, “With simulations, we expect to cut the deployment cycle in half, reducing a 10- or 15-year timeline to only 5.”

Drawing of the electric grid of tomorrow.
This conceptual drawing illustrates the vast, complex resources in the electric grid of tomorrow. Livermore’s expertise in HPC is advancing the development of new technologies to secure the nation’s energy supply for years to come.

A Thousand Scenarios in a Day
Energy networks are vast and complex. A typical distribution system fed by a trunk power line may have 2,000 circuits. California has 20,000 distribution systems, with millions of power lines lacing the state. Millions of variables must be examined to understand the system as a whole and recommend improvements, a job best suited to supercomputers.

To help utility companies determine what resources are needed for the electric grid of tomorrow, Livermore scientists are using HPC simulations to model the impacts when generation capacity is increased by adding a large number of intermittent wind and solar resources to the grid. Instead of building conventional generating capacity to back up these intermittent resources, grid operators could rely on techniques such as distributed energy storage or demand response, in which consumers shut off appliances on request to reduce the system’s load.

“The advent of distributed storage, generation, and demand response has increased the number of grid state and control variables by orders of magnitude,” says Livermore scientist Thomas Edmunds, who works in the Engineering Directorate. “We need larger-scale planning and operations models to optimize the performance of these systems.” A Laboratory Directed Research and Development project led by Edmunds is focused on developing optimization algorithms for this application.
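
The toy example below hints at the optimization layer such planning models rest on: a single-hour economic dispatch posed as a linear program, with demand response treated simply as a high-cost resource of last resort. The resource costs and capacities are invented for illustration, and the Laboratory's actual models are far larger and include integer commitment decisions and storage dynamics.

```python
# A toy single-hour economic dispatch, illustrating the optimization layer that
# grid planning models build on. Costs and capacities are invented, and
# "demand response" is modeled simply as a high-cost resource of last resort.
import numpy as np
from scipy.optimize import linprog

# Resources: gas turbine, wind (forecast-limited), storage discharge, demand response
names = ["gas", "wind", "storage", "demand response"]
cost = np.array([45.0, 0.0, 10.0, 500.0])   # $ per MWh
cap = np.array([300.0, 120.0, 50.0, 80.0])  # available MW this hour
demand = 400.0                               # MW to be served

# minimize cost . g   subject to   sum(g) == demand,   0 <= g <= cap
result = linprog(
    c=cost,
    A_eq=np.ones((1, len(cost))),
    b_eq=[demand],
    bounds=[(0.0, c) for c in cap],
    method="highs",
)

for name, mw in zip(names, result.x):
    print(f"{name:>15}: {mw:6.1f} MW")
print(f"hourly cost: ${result.fun:,.0f}")
```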

He notes that HPC can also contribute to grid reliability. Grid managers must operate the system in a fault-tolerant mode, with generating levels set such that no single failure will cause a widespread blackout. To ensure reliability, researchers must analyze many independent models of the grid with different failure modes. “This problem is ideal for high-performance computing,” says Edmunds.
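
The sketch below shows why such contingency screening maps so naturally onto parallel machines: each single-component outage is an independent case that can be evaluated separately. The "power flow" evaluation here is a placeholder for a real grid-simulation code.

```python
# Schematic of N-1 contingency screening: each single-component outage is an
# independent case, so the full sweep maps cleanly onto parallel resources.
# The contingency evaluation below is a placeholder, not a real power-flow code.
from multiprocessing import Pool
import random

N_LINES = 2000  # hypothetical count of transmission elements to outage

def evaluate_contingency(outaged_line: int) -> tuple[int, bool]:
    """Placeholder: re-solve the grid with one line removed and flag overloads."""
    random.seed(outaged_line)            # deterministic stand-in result
    overloaded = random.random() < 0.01  # pretend ~1% of cases violate limits
    return outaged_line, overloaded

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(evaluate_contingency, range(N_LINES))
    violations = [line for line, bad in results if bad]
    print(f"{len(violations)} of {N_LINES} contingencies cause limit violations")
```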

Legislation enacted in California calls for 33 percent of the state’s energy supply to come from renewable resources by 2020. Mathematician Carol Meyers of the Engineering Directorate is working on a study initiated by the California Public Utilities Commission and managed by the California Independent System Operator (CAISO) to help determine how the 2020 standard will affect operations of the state’s grid. “Potentially billions of dollars are at stake in terms of backup generation and transmission costs to incorporate renewable resources on a large scale,” says Meyers. “The power utilities need to better understand all of the issues involved so they can adapt to distributed generation.”

For the CAISO study, Meyers and software developers at Energy Exemplar adapted the company’s PLEXOS energy simulator to run on the Laboratory’s supercomputers. “PLEXOS is front-end software that generates the mathematical model for our simulations,” says Meyers. In demonstration runs on the Hyperion test bed, PLEXOS looked at the 2,100 generators across the entire western grid, plus a large number of load, storage, transmission, and reserve requirements. The resulting model included more than 225,000 variables and 400,000 constraints and initially took several days to compute a single yearlong scenario.

“We dug into the model to determine what slowed it down,” says Meyers. The bottleneck was in the mixed-integer programming solver, which takes a mathematical description of the variables, constraints, and objective function and solves the model. “IBM provided licenses for CPLEX, their state-of-the-art mixed-integer optimization software,” she says. Adding CPLEX allowed the researchers to run simulations in parallel. When combined with the Laboratory’s HPC processing power, the modified PLEXOS could simulate a thousand scenarios a day. The development team then modified the mathematics routines behind the model to improve variable interactions. The resulting calculations ran four times faster.

“HPC has the potential to be game-changing in the energy industry,” says Meyers. “It not only answers existing questions but also expands the very nature of questions to be asked.” The team’s future work involves streamlining the PLEXOS–HPC user interface, modifying the optimization routines, and collaborating with IBM to extend CPLEX to run on massively parallel systems. The work has already proven valuable, serving as the demonstration test case for a proposal to simulate the possible consequences of end-to-end changes to the energy system.

Map showing electricity transmission lines plus load and generation centers.
This map shows the electricity transmission lines (red lines) and the load or generation centers (green dots) for the western U.S. Simulations of these electric grid resources produce massive amounts of data.

Screenshots from the PLEXOS energy simulator.
Software developers at Livermore and Energy Exemplar adapted the company’s PLEXOS energy simulator to run on the Laboratory’s supercomputers. PLEXOS uses a familiar computer interface so energy analysts can easily understand modeling data even without expertise in simulations. The team hopes this ease of use will increase the software’s adoption in the energy sector.

Reduced Barriers to Partnership
Once progress has been made in these projects, the Laboratory and other entities will reach out to large and small companies that cannot afford to invest in HPC resources themselves. Giving potential partners an opportunity to probe the world of HPC simulations allows them to see which tools might be adapted to meet their needs. According to Friedmann, Livermore’s plan is to develop a Web portal that provides links to available tools and promotes those resources to potential collaborators in universities and industry.

The new High-Performance Computing Innovation Center (HPCIC) is also helping to extend the Laboratory’s HPC capabilities to energy-related work. Part of the Livermore Valley Open Campus adjacent to Lawrence Livermore and Sandia national laboratories, HPCIC is a public–private partnership whose mission is to boost American industrial competitiveness, scientific research, education, and national security by broadening the adoption and application of supercomputing technology. (See S&TR, March 2011, New Campus Set to Transform Two National Laboratories.) The center provides partnering organizations with access to secure supercomputer resources and computational expertise that would otherwise be unavailable.

HPCIC projects will focus on big, complex challenges and opportunities in the energy sector as well as in climate science, health care, manufacturing, and bioscience. The center will allow industrial partners to access the full range of scientific, algorithmic, and application support available at the national laboratories. Grosh notes that although many companies develop codes that run on desktop computers, the ability to write for modest to large computing systems is much rarer outside the national laboratories, the national security community, and a few select industries. HPCIC will expand access to these computational resources so that industrial partners can perform virtual prototyping and testing, conduct multidisciplinary science research, optimize software applications, and develop system architecture for next-generation computers. “With this new capability, we foresee transforming the way U.S. industry uses HPC and providing an innovation advantage to the energy sector,” says Grosh.

Friedmann adds, “Supercomputing centers are popping up around the country, and they’re all looking for applications in manufacturing and energy and for software that is ready to run on their machines. Working with them to apply our expertise in HPC is a natural outgrowth of the Laboratory’s mission. We have an opportunity to merge diverse projects into a coherent effort and create a knowledge pipeline for tackling important national issues. The growth potential for the Laboratory is immense.”

—Kris Fury

Key Words: carbon capture and sequestration, clean energy, energy sector, high-performance computing (HPC) simulation, High-Performance Computing Innovation Center (HPCIC), smart electric grid, wind energy.

For further information contact Julio Friedmann (925) 423-0585 (friedmann2@llnl.gov) or John Grosh (925) 424-6520 (grosh1@llnl.gov).

