

BES Requirements Worksheet

1.1. Project Information - Computational Reacting Flow with Detailed Kinetics

Document Prepared By

Habib Najm

Project Title

Computational Reacting Flow with Detailed Kinetics

Principal Investigator

Habib Najm

Participating Organizations

Sandia National Laboratories

Funding Agencies

 DOE SC  DOE NNSA  NSF  NOAA  NIH  Other:

2. Project Summary & Scientific Objectives for the Next 5 Years

Please give a brief description of your project - highlighting its computational aspect - and outline its scientific objectives for the next 3-5 years. Please list one or two specific goals you hope to reach in 5 years.

This project focuses on computation and analysis of flames with detailed chemical kinetics. We compute reacting flows with hydrocarbon fuels, with the objective of analyzing and understanding reacting flow structure. Our work provides improved understanding of the detailed structure of hydrocarbon flames and their interaction with transport processes in two-dimensional laboratory-scale flows. Such understanding is important for building simplified flow-flame interaction models for more complex combustion systems.
 
The computational challenges are largely driven by the complexity and stiffness of the chemical models and by the large range of length and time scales in these flows. We use highly resolved spatial meshes in 2D and employ operator-split time integration, with implicit stiff time integration of the chemical source terms.
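To make the time-integration strategy concrete, the following is a minimal sketch of one Strang-split step: a half step of explicit transport, a full implicit chemistry step, and a second transport half step. This is an illustration in Python, not the production code; SciPy's BDF integrator stands in for a DVODE-like stiff solver, an RK2 midpoint substep stands in for the RKC scheme, and the toy right-hand sides and tolerances are assumptions.

    # Illustrative sketch: one Strang-split step for d(phi)/dt = T(phi) + C(phi),
    # with non-stiff transport T and stiff chemistry C.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rk2_substep(f, y, dt):
        """Explicit midpoint (RK2) step for the non-stiff transport terms."""
        k1 = f(y)
        return y + dt * f(y + 0.5 * dt * k1)

    def strang_step(transport, chemistry, y, dt):
        """T(dt/2) -> C(dt) -> T(dt/2): second-order operator splitting."""
        y = rk2_substep(transport, y, 0.5 * dt)
        # Stiff chemistry substep: implicit BDF integrator (DVODE-like role).
        sol = solve_ivp(lambda t, u: chemistry(u), (0.0, dt), y,
                        method="BDF", rtol=1e-8, atol=1e-12)
        y = sol.y[:, -1]
        return rk2_substep(transport, y, 0.5 * dt)

    # Toy example: linear "transport" and a stiff two-species "chemistry".
    transport = lambda u: -0.1 * u
    chemistry = lambda u: np.array([-1e4 * u[0] + u[1], 1e4 * u[0] - u[1]])
    y = np.array([1.0, 0.0])
    for _ in range(10):
        y = strang_step(transport, chemistry, y, 1e-3)
    print(y)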
 
In the next 3-5 years we plan to extend our current computations with complex hydrocarbon fuels to physical problems of greater complexity. This will include initially steady-state and then unsteady laminar lifted jet flames. We will target these flows initially with methane-air chemistry, moving on to n-heptane and iso-octane fuels.

3. Current HPC Usage and Methods

3a. Please list your current primary codes and their main mathematical methods and/or algorithms. Include quantities that characterize the size or scale of your simulations or numerical experiments; e.g., size of grid, number of particles, basis sets, etc. Also indicate how parallelism is expressed (e.g., MPI, OpenMP, MPI/OpenMP hybrid)

Our current primary code is "dflame". It is a uniform-mesh 2D finite-difference low-Mach-number reacting flow code that is second-order accurate in space and time. It uses a projection scheme for the momentum equations, employing an FFT-based pressure solver, coupled with operator-split time integration for the species and energy equations. We use an RKC/RK2 time integrator for the transport terms and an implicit stiff integrator for the chemistry terms. We typically use meshes on the order of 1024x2048 and have used up to 560 chemical species in the chemical model thus far. The code uses hybrid MPI/OpenMP parallelism.
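As an illustration of the pressure-solve step in such a projection scheme, the sketch below solves the Poisson equation for pressure with FFTs on a 2D grid. It is a self-contained Python/NumPy example under an assumed periodic-box setting, not code from dflame, whose boundary conditions and discretization details differ.

    # Illustrative sketch: FFT-based solve of lap(p) = f on a periodic 2D box,
    # the kind of solve used in a projection step. Periodicity is assumed here.
    import numpy as np

    def fft_poisson_2d(f, Lx, Ly):
        ny, nx = f.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX**2 + KY**2
        k2[0, 0] = 1.0                  # avoid divide-by-zero at the mean mode
        p_hat = -np.fft.fft2(f) / k2
        p_hat[0, 0] = 0.0               # fix the arbitrary additive constant
        return np.real(np.fft.ifft2(p_hat))

    # Quick check against a manufactured solution p = sin(x)cos(y) on [0, 2*pi)^2.
    n = 128
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    p_exact = np.sin(X) * np.cos(Y)
    f = -2.0 * p_exact                  # lap(p) = -2 sin(x) cos(y)
    print(np.max(np.abs(fft_poisson_2d(f, 2 * np.pi, 2 * np.pi) - p_exact)))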

3b. Please list known limitations, obstacles, and/or bottlenecks that currently limit your ability to perform simulations you would like to run. Is there anything specific to NERSC?

We are currently working on enlarging the mesh, and will need to address scalability issues. 

3c. Please fill out the following table to the best of your ability. This table provides baseline data to help extrapolate to requirements for future years. If you are uncertain about any item, please use your best estimate as a starting point for discussions.

Facilities Used or Using

 NERSC  OLCF  ALCF  NSF Centers  Other:

Architectures Used

 Cray XT  IBM Power  BlueGene  Linux Cluster  Other:  

Total Computational Hours Used per Year

 122,000 Core-Hours

NERSC Hours Used in 2009

 122,000 Core-Hours

Number of Cores Used in Typical Production Run

 1936

Wallclock Hours of Single Typical Production Run

 20-36

Total Memory Used per Run

 24 GB

Minimum Memory Required per Core

 0.24 GB

Total Data Read & Written per Run

 20 GB

Size of Checkpoint File(s)

 6 GB

Amount of Data Moved In/Out of NERSC

 80 GB per year

On-Line File Storage Required (For I/O from a Running Job)

 0.03 GB and 1 file

Off-Line Archival Storage Required

 GB and  Files

Please list any required or important software, services, or infrastructure (beyond supercomputing and standard storage infrastructure) provided by HPC centers or system vendors.

DVODE, LAPACK, NERSC consulting, CrayPat

4. HPC Requirements in 5 Years

4a. We are formulating the requirements for NERSC that will enable you to meet the goals you outlined in Section 2 above. Please fill out the following table to the best of your ability. If you are uncertain about any item, please use your best estimate as a starting point for discussions at the workshop.

Computational Hours Required per Year

 

Anticipated Number of Cores to be Used in a Typical Production Run

 

Anticipated Wallclock to be Used in a Typical Production Run Using the Number of Cores Given Above

 

Anticipated Total Memory Used per Run

 GB

Anticipated Minimum Memory Required per Core

 GB

Anticipated Total Data Read & Written per Run

 GB

Anticipated Size of Checkpoint File(s)

 GB

Anticipated On-Line File Storage Required (For I/O from a Running Job)

 GB and  Files

Anticipated Amount of Data Moved In/Out of NERSC

 GB per  

Anticipated Off-Line Archival Storage Required

 GB and  Files

4b. What changes to codes, mathematical methods and/or algorithms do you anticipate will be needed to achieve this project's scientific objectives over the next 5 years.

We will need to improve parallel scalability.

4c. Please list any known or anticipated architectural requirements (e.g., 2 GB memory/core, interconnect latency < 3 µs).

4d. Please list any new software, services, or infrastructure support you will need over the next 5 years.

 

4e. It is believed that the dominant HPC architecture in the next 3-5 years will incorporate processing elements composed of 10s-1,000s of individual cores, perhaps GPUs or other accelerators. It is unlikely that a programming model based solely on MPI will be effective, or even supported, on these machines. Do you have a strategy for computing in such an environment? If so, please briefly describe it.

Yes, we already have a hybrid MPI/OpenMP code in place.
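As a minimal illustration of the MPI layer of such a hybrid scheme, the sketch below (Python with mpi4py, purely for exposition, not the dflame source) performs a 1D domain decomposition of a 2D field with ghost-row exchange between neighboring ranks; in the production code, OpenMP threads parallelize the stencil loops within each rank's subdomain. All sizes and names here are illustrative assumptions.

    # Illustrative sketch of the MPI layer in a hybrid MPI/OpenMP scheme:
    # 1D domain decomposition of a 2D field with halo (ghost-row) exchange.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    nx, nloc = 64, 16                         # toy sizes: global rows split across ranks
    u = np.full((nloc + 2, nx), float(rank))  # interior rows 1..nloc, ghost rows 0 and -1

    up = rank - 1 if rank > 0 else MPI.PROC_NULL
    dn = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # Exchange ghost rows with neighbors (non-periodic ends use PROC_NULL).
    comm.Sendrecv(sendbuf=u[1, :],  dest=up, recvbuf=u[-1, :], source=dn)
    comm.Sendrecv(sendbuf=u[-2, :], dest=dn, recvbuf=u[0, :],  source=up)

    # Each rank now applies a stencil to its interior rows; within a rank,
    # this is where OpenMP threading would apply in the hybrid code.
    lap_y = u[2:, :] - 2.0 * u[1:-1, :] + u[:-2, :]

Run with, e.g., "mpiexec -n 4 python halo_demo.py" (a hypothetical file name).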

5. New Science With New Resources

To help us get a better understanding of the quantitative requirements we've asked for above, please tell us: What significant scientific progress could you achieve over the next 5 years with access to 50X the HPC resources you currently have access to at NERSC? What would be the benefits to your research field if you were given access to these kinds of resources?

Please explain what aspects of "expanded HPC resources" are important for your project (e.g., more CPU hours, more memory, more storage, more throughput for small jobs, ability to handle very large jobs).