

2005 Newsnotes


New Automatic Differentiation Tools Expedite Code Development and Enable New Design Algorithms

SNLFad and Rad are new automatic differentiation tools for C++ codes being developed by researchers in SNL's CCIM center. They have already been deployed in the Charon code and are partly responsible for the rapid development of that code's capability to model neutron damage effects in semiconductor devices for QASPR. In the future they will enable Charon to provide analytic device sensitivities to large numbers of defect species in a highly efficient manner, and they are "key to our ability to support future qualification efforts through verification and validation," says Charon project lead Rob Hoekstra.

Automatic differentiation (AD) is a software technology that, given computer code that performs some computation, generates computer code that calculates the derivatives with respect to variables flagged as independent. "It just does the chain rule through your code," explains SNLFad lead developer Eric Phipps. While all computational scientists believe their code is very complex, all calculations are built from just 23 elementary operations (+, -, *, /, sin, cos, log, ...), whose differentiation rules are known to any good calculus student and needed to be programmed only once in SNLFad and Rad.
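To make the idea concrete, the sketch below shows forward-mode AD in miniature: a value type that carries a derivative alongside each number and applies the differentiation rule of each elementary operation as it is used. It is written in Python for brevity and illustrates only the principle; it is not the SNLFad interface, which implements the same bookkeeping in C++.

    import math

    class Dual:
        """Minimal forward-mode AD value: carries f(x) and df/dx together."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # product rule, programmed once and reused everywhere
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

        __rmul__ = __mul__

    def sin(x):
        # chain rule for one elementary operation
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # Differentiate f(x) = x*sin(x) + 3x at x = 2 by flagging x as independent (der = 1).
    x = Dual(2.0, 1.0)
    f = x * sin(x) + 3 * x
    print(f.val, f.der)   # value and the analytic derivative sin(2) + 2*cos(2) + 3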

Successful AD technology has previously been developed for Fortran and C, but the approaches used for those languages do not suffice for the more flexible C++. SNLFad exploits this flexibility by using a technique called expression templating (first implemented in the public-domain AD package TFad) to coax the compiler into generating code for the derivatives. Rad, whose lead developer is David Gay, exploits the technique of reverse accumulation to combine these derivatives in a different order and thereby compute adjoint sensitivities. The derivative code generated by the AD tools is exact, highly efficient, and scalable to large codes, large problem sizes, and large numbers of processors. The accompanying plot shows the efficiency gains of AD over standard finite differencing, which also suffers from limited accuracy.
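Reverse accumulation records the elementary operations as they execute and then sweeps back through that record, applying the chain rule in reverse, so the derivatives of one output with respect to all inputs come from a single backward pass. The sketch below is a minimal tape-based illustration of that idea in Python; it is not the Rad implementation.

    import math

    _tape = []   # global tape: nodes in the order they were created

    class Node:
        """One tape entry: a value plus (parent, local partial) pairs."""
        def __init__(self, val, parents=()):
            self.val = val
            self.parents = parents
            self.adj = 0.0           # adjoint, filled in by the reverse sweep
            _tape.append(self)

        def __add__(self, other):
            return Node(self.val + other.val, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            return Node(self.val * other.val, ((self, other.val), (other, self.val)))

    def sin(x):
        return Node(math.sin(x.val), ((x, math.cos(x.val)),))

    def backward(output):
        """Sweep the tape in reverse creation order, applying the chain rule backwards."""
        output.adj = 1.0
        for node in reversed(_tape):
            for parent, local in node.parents:
                parent.adj += node.adj * local

    # f(x1, x2) = x1*x2 + sin(x1): both partials from a single reverse sweep.
    x1, x2 = Node(2.0), Node(5.0)
    f = x1 * x2 + sin(x1)
    backward(f)
    print(x1.adj, x2.adj)    # df/dx1 = x2 + cos(x1), df/dx2 = x1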

While the tools use sophisticated computer science constructs, their use is relatively straightforward. The result is an enormous savings in development time: the need to derive, program, and verify code for computing derivatives disappears. The appeal of programming only the governing equations and letting AD calculate the Jacobian and sensitivities has made the Charon code attractive for new development efforts, including plasma transport/reaction simulations for semiconductor processing, MHD simulations for aspects of Z-pinch modeling, and reacting-flow simulations for a chemical laser application.

Why are derivatives important? Foremost is the calculation of a Jacobian matrix, which is required for the robust solution of nonlinear equations with Newton's method. In addition, numerous design and analysis capabilities beyond simple repeated simulation are enabled by derivatives, including linear stability analysis, sensitivity analysis, continuation methods and bifurcation analysis, error estimation, and optimization. Scalable gradient calculations for many optimization and error estimation applications require the unique adjoint AD capability delivered by the Rad tool. Furthermore, analytic higher derivatives can easily be obtained by applying these AD tools recursively. Ready access to higher derivatives, even for complex mission-critical applications, is opening the door for innovative algorithm development efforts in time integration, optimization, and uncertainty quantification.
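As a minimal illustration of the first point, the sketch below solves a scalar nonlinear equation with Newton's method, with the required derivative supplied automatically by a tiny forward-mode value type instead of being hand-coded; applied componentwise, the same pattern yields the Jacobian of a system. The example is illustrative only and does not use the SNLFad interface.

    import math

    class Dual:
        """Value/derivative pair, just enough for this illustration."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __mul__(self, other):
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)
        def __sub__(self, other):
            return Dual(self.val - other.val, self.der - other.der)

    def residual(x):
        # Nonlinear equation f(x) = x*x - 2 = 0 (root is sqrt(2)).
        return x * x - Dual(2.0)

    x = 1.0
    for _ in range(6):
        f = residual(Dual(x, 1.0))    # seed the unknown's derivative with 1
        x = x - f.val / f.der         # Newton update uses the AD derivative
    print(x, math.sqrt(2.0))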

The ADTools package that includes SNLFad and Rad is scheduled for release in FY06. The project has just finished its first year and is funded by ASC through the CSRF program.
(Contacts: Eric Phipps and David Gay)
December 2005


New Architectures and Algorithms Could Enable Higher Quality Automatic Translation Systems

The Computation, Computers, Information and Mathematics Center (1400) recently completed an evaluation of Netezza's unique, massively parallel database computer. The investigation included three very large database problems, each related to an important national security issue. The team developed and demonstrated: (1) the largest literature graph computed to date; (2) graph search algorithms for aiding reverse engineering and netlist verification of integrated circuits; and (3) a novel approach to better automatic language translation. Most automatic language translation systems use the statistical properties of written text to make a Bayesian estimate of a possible translation; these algorithms never actually understand the textual meaning itself. An alternative approach, pursued by researchers at NMSU, aims for higher quality translations by means of computationally expensive, knowledge-intensive reasoning about sentence meanings. Our collaborations suggest that this approach, when combined with a massively parallel computer, is now computationally feasible. The new algorithms should scale linearly to make use of even the largest Netezza machine (600 processors with 27 terabytes of storage). Linear scaling, together with the massively serial nature of streaming document sources, suggests that tens of thousands of processors could be employed for intelligence applications. Interestingly, better web searches can also be enabled by this approach, which attempts to actually understand the queries and the text being searched.
(Contact: Mark D. Rintoul)
December 2005


Enterprise-Level Modeling and Optimization of DOD Logistics Operations

A growing collaboration between Discrete Algorithms and Math (1415) and Systems Sustainment and Readiness (6642) is developing software technology to solve enterprise-level DOD logistics problems, including spare parts inventory and resource allocation for the Lockheed Martin Joint Strike Fighter (JSF). This strategic partnership leverages expertise in combinatorial optimization (led by Jean-Paul Watson, 1415), discrete-event modeling and simulation (led by Bruce Thompson, 6642), and technology management and systems sustainment (led by Craig Lawton, 6642). The software simulates the multi-year operational lifetime of a weapons platform such as the JSF and can minimize the cost of logistics operations involving as many as 50 million decision variables, with solutions completed in hours on a PC. The potential customer cost savings from this capability are very large. Manager Robert Cranwell (6642) is expanding this successful capability from the initial focus on the JSF to additional DOD systems such as the Army's Future Combat System. The cross-organizational collaboration has strong positive impacts on both 1415 and 6642. For 1415, the effort has exposed a number of novel, open challenges in optimization algorithm technology, allowing its R&D efforts to be more closely aligned with the needs of real-world customers. For 6642, an optimization capability significantly enhances the utility of simulation as a decision-making tool for the deployment and sustainment of key DOD weapons systems.
(Contact: Jean-Paul Watson)
December 2005


Current Events at SC05

On November 16, 2005, the newest Top500 supercomputer list was unveiled in Seattle, WA, at SC05, the annual international supercomputing conference. To quote from the highlights of the Top500 list, "Two systems at DOE's Sandia National Laboratories occupy positions 5 and 6. A new PowerEdge-based Dell system outperformed the enlarged ASC Red Storm system by a narrow margin with 38.27 Tflop/s versus 36.19 Tflop/s." This marks Sandia's return to the top 10 with two very different systems.

Our Dell system, known as Thunderbird, still has room for improvement on the Linpack benchmark: this performance was achieved with only 7,442 of its 8,960 available processors. Thunderbird is the largest cluster on the Top500 list. The system addresses two important objectives: 1) Sandia's institutional demand for capacity computing, and 2) establishing a long-term collaboration among SNL, Dell, Intel, and Cisco to address scalability issues of large clusters.

Of greater significance is the performance achieved by Red Storm: a final Linpack result of 36.19 Tflop/s on 10,848 processors, which is 83.4% of peak. In terms of impact on the high performance computing community, the commercial version of Red Storm is known as the Cray XT3. Six additional XT3 systems also appear on the Top500 list: position 10, Oak Ridge National Laboratory; 14, U.S. Army Engineer Research and Development Center; 43, Pittsburgh Supercomputing Center; 71, Swiss Scientific Computing Center; and 189 and 290, other government sites. (Contact: James Ang)
December 2005


Charon News Note

Large-scale Parallel Device Simulation

In October 2005, the Charon (http://mpcharon.sandia.gov/) team at Sandia National Laboratories demonstrated aggressive progress in the development of their finite-element based semiconductor device simulator code. Initial results for a stockpile bipolar junction transistor (BJT) in support of the QASPR (Qualification Alternatives to the Sandia Pulsed Reactor) project show both the ability to model the complex physics associated with these devices and the code's ability to scale well on large parallel computers.

To help demonstrate and exercise the parallel nature of Charon, a scaling study was undertaken using a model of a 60x15 micron region of the 2N2222 BJT. Specifically, the model used a uniform mesh refinement strategy to generate a series of meshes based upon a 41,000-element model. Using refinement, the resulting meshes contained 161,000, 642,000, and 2.5 million elements. Parallel calculations were performed on the NWCC-spirit computer using up to 64 processors. The scaling study summarized in Table 1 shows that when the problem size increases by a factor of 64, the solution time increases by only a factor of 3.3. Note that these are preliminary results for the code and include the effects of both algorithmic and computational scaling. Multi-level preconditioner enhancements promise to drastically improve the algorithmic portion of Charon's scaling and are expected to be integral to a version of the code to be released later in FY06.

The graph below illustrates mesh convergence properties for an NPN BJT, indicating that under uniform mesh refinement, mesh convergence is reached at 120,000 elements for this specific problem. These results are also compared with results from the commercial code Medici, with very good agreement. These studies of both scalability and convergence provide confidence in Charon's ultimate ability to meet the QASPR project's requirements for high-fidelity modeling of device physics with neutron-generated defects. To support these efforts, the team expects to model full transient radiation effects in stockpile BJTs using thousands of processors on Sandia's new Red Storm platform.

Table 1. Parallel & Algorithmic Scaling for 2N2222 BJT on NWCC.

Processors    Unknowns    Unknowns Ratio    Solver Iterations    Solution Time (sec.)    Time Ratio
         1      41,000                 1                    4                      16             1
         4     161,000                 4                   12                      30           1.9
        16     642,000                16                   19                      34           2.1
        64   2,563,000                64                   39                      53           3.3
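The numbers in Table 1 can be read as a weak-scaling experiment: the problem size and the processor count both grow by a factor of 64, so ideal scaling would keep the solution time constant. The short calculation below (an illustration only, not part of Charon) reproduces the time ratios and the corresponding weak-scaling efficiencies.

    # Weak-scaling view of Table 1: problem size and processor count both grow 64x,
    # so ideal scaling would keep the solution time constant.
    runs = [   # (processors, unknowns, solver iterations, solution time in seconds)
        (1,      41_000,  4, 16),
        (4,     161_000, 12, 30),
        (16,    642_000, 19, 34),
        (64,  2_563_000, 39, 53),
    ]

    base_time = runs[0][3]
    for procs, unknowns, iters, time in runs:
        efficiency = base_time / time      # 1.0 would be perfect weak scaling
        print(f"{procs:3d} procs  {unknowns:>9,} unknowns  "
              f"time ratio {time / base_time:.1f}  efficiency {efficiency:.0%}")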


About Charon
The Charon project seeks to model electrical semiconductor devices such as transistors at high fidelity. By applying finite element and massively parallel solver technology developed at Sandia, the tool is capable of modeling at unprecedented fidelity, including transient gamma and neutron irradiation effects. Relying on the Nevada finite element framework and the Trilinos solvers toolset, rapid development of robust and scalable capability has been achieved. The Charon tool is critical to the Qualification Alternatives to the Sandia Pulsed Reactor (QASPR) effort, allowing computational modeling to assist in the qualification of weapons systems in hostile environments.
(Contacts: Rob Hoekstra and Gary Hennigan)
November 2005


AMPL Utilization at Sandia Grows Through Site License

Sandia now has free access to the popular AMPL mathematical programming language, with unlimited rights to run the software and access to its source code. The license is in the spirit of a CRADA: it was obtained at no monetary cost to Sandia in exchange for sharing improvements back to AMPL's parent company.

Figure 1: Cover of the 2002 edition of the AMPL book by Fourer, Gay, and Kernighan


AMPL is a comprehensive and powerful algebraic modeling language for stating, solving, and analyzing linear and nonlinear optimization problems, in discrete or continuous variables. AMPL lets you use common notation and familiar concepts to formulate optimization models and examine solutions, while the computer manages communication with an appropriate solver. AMPL's flexibility and convenience render it ideal for rapid prototyping and model development, while its speed and control options make it an especially efficient choice for repeated production runs. For more on AMPL, see the AMPL web site, http://www.ampl.com.


AMPL was created at Bell Labs by Bob Fourer, David Gay, and Brian Kernighan. David joined Sandia in 2003 and has been helping other Sandians with AMPL and topics related to mathematical programming.


AMPL's impact is growing at Sandia. The 14 February 2005 news note, titled "Collaboration in DOE Logistics Planning," featured a cross-center team that achieved an order-of-magnitude improvement in speed and memory usage for their Yucca Mountain modeling. The team's optimization leader states, "With the site license and David's help, I was able to advance 6221's OCRWM Investment Planning Model past a point at which it had been stuck for over a year." The team intends to use AMPL for all future modeling. (Contact: David Gay)
November 2005


CUBIT’s Customization Tools Provide Goodyear with Prototype Tread Design Software

The CUBIT development team recently demonstrated prototype design software to Goodyear for modeling tire tread designs. Several members of the team visited Goodyear's world headquarters in Akron, Ohio, where they demonstrated how CUBIT could be rapidly customized to meet the needs of Goodyear's designers and analysts. This interaction was part of a long-standing CRADA agreement between Sandia and Goodyear and has led to expanded investment by Goodyear in Sandia technology.

The CUBIT Geometry and Meshing Toolkit is the most widely used software at Sandia for generating meshes for computational simulation. CUBIT's strengths include its advanced hexahedral meshing algorithms and geometry manipulation capabilities. CUBIT also provides a comprehensive toolset for preparing analysis models for simulation, including an advanced graphical user interface and graphical manipulation.

In addition to the rich feature set provided by CUBIT, a new capability has been added that lets end users customize the software to fit their specific application needs. Using the Qt toolkit, the PyQt interface, and the Python scripting language, a user can design a custom interface that exposes only the capabilities needed for a specific application. This gives access to the needed CUBIT functions from a single source and a way to automate, via a Python script, many repetitive tasks tied to custom GUI panels. A simplified and focused interface can be developed rapidly by an expert user, which keeps CUBIT's complexity hidden from new users and focuses them on tasks relevant to their application. If the user decides that the rest of CUBIT's functionality is needed to complete a task, it can easily be introduced.
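As a minimal sketch of this pattern, the example below builds a one-button Qt panel in Python that drives CUBIT through its Python command interface. It is written against PyQt5 for concreteness; the cubit.cmd calls and the specific meshing commands are illustrative assumptions, not the Goodyear tread-design panel itself.

    # Minimal sketch: a small PyQt panel whose one button drives CUBIT through its
    # Python command interface. The cubit.cmd(...) calls and the specific commands
    # are illustrative assumptions.
    import sys
    from PyQt5.QtWidgets import (QApplication, QWidget, QVBoxLayout,
                                 QLabel, QDoubleSpinBox, QPushButton)
    import cubit   # CUBIT's Python module, available inside a CUBIT installation

    class TreadPanel(QWidget):
        def __init__(self):
            super().__init__()
            layout = QVBoxLayout(self)
            layout.addWidget(QLabel("Approximate element size"))
            self.size = QDoubleSpinBox()
            self.size.setValue(0.5)
            layout.addWidget(self.size)
            run = QPushButton("Mesh tread block")
            run.clicked.connect(self.mesh)
            layout.addWidget(run)

        def mesh(self):
            # The repetitive command sequence a batch script used to run,
            # now parameterized from the panel.
            cubit.cmd(f"volume all size {self.size.value()}")
            cubit.cmd("mesh volume all")

    app = QApplication(sys.argv)
    panel = TreadPanel()
    panel.show()
    app.exec_()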

The Goodyear CRADA has provided a unique opportunity to demonstrate the capabilities of this new customizable system. The panel shown here provides an interface into the meshing system co-developed by Sandia and Goodyear. Where once a complex batch-run script was needed, Goodyear now has the flexibility to change parameters through the custom GUI panels and see the results interactively rather than in a batch process.

Plans to expand this new custom capability are underway. While Goodyear is a tremendous success for the system, new areas of application are being explored. The ability to rapidly develop custom software to suit the unique needs and expectations of a small niche group of designers and computational analysts has great potential for Sandia's engineering community. (Contact: Steven J. Owen)
October 10, 2005



ASC Level II Earth Penetration Milestone Progress

Departments 1527 and 1431 are set to carry out the ASC Level II Milestone for Earth Penetration. This milestone calculation involves simulation of the response and trajectory of a 5000-lb-class penetrator into jointed geologic media at an angle of obliquity up to 20 degrees. The team has been methodically working toward this goal through a series of increasingly complex computations using the SHISM algorithm in the ALEGRA code. Following the FY04 application of SHISM to the Forrestal and Warren experiments of small steel penetrators into aluminum targets, the team has successfully simulated experiments with larger penetrators. These experiments, conducted at WES and managed by D. Frew, included normal and oblique penetration into concrete targets. Depth-of-penetration results, reported by L. Kmetyk (1527), show the ALEGRA calculations to be, on average, within about 7% of the experimental values over a range of impact velocities for both normal impact and a 15-degree angle of obliquity. Calculations of a very large penetrator have been performed by J. Bishop (1527), simulating the EQ test, a 5000-lb penetrator into concrete; the computed depth of penetration was within about 5% of the reported value. Subsequent calculations of this size of problem with 30-degree obliquity into geologic materials (tuff and limestone) have also been performed. Final testing of the material model components that handle jointing and fracturing effects is being conducted. This series of successful calculations puts the team in position, and on track, to perform the final milestone calculation this summer.

Two important components have enabled these calculations. The improvements to the Geomaterial Model developed by A. Fossum, R. Brannon, and E. Strack (6117) and supported in ALEGRA by S. Petney (1431) have produced accurate response of the target materials. Improvements to the ALEGRA code and the SHISM algorithm over the past year have reduced the memory footprint, made communication more efficient, and improved the performance of the remapping algorithm. (Contacts: David Hensinger, Christopher Luchini)
August 8, 2005


Simulations of Electrical Effects of Radiation-induced Semiconductor Defects

High fidelity physics-based modeling of the electrical effects of radiation-induced defects in semiconductor devices is a major component of the QASPR project strategy for developing a robust methodology to qualify weapons systems in hostile radiation environments. Quantum density functional theory (DFT) calculations play a vital role in this strategy because many critical properties of lattice defects generated by radiation damage are not known or accessible from experiment and must be calculated. However, conventional methods for simulating defect properties lack the accuracy needed to satisfy QASPR requirements. Peter Schultz (9235) identified the fundamental issue as the use of incorrect boundary conditions in the computational models commonly used in DFT calculations for defect systems. Over the past six months, he formulated and implemented a new, more rigorous methodology for defect simulations within DFT. This robust physics-based scheme incorporates the correct electrostatic boundary conditions, locates a fixed electronic chemical potential, and includes the bulk dielectric response. After this methodology was implemented in the ASC SeqQuest DFT code, formation energies and electrical defect levels were computed for a wide variety of charged defects in silicon. The results yield remarkably accurate predictions of defect levels (errors of less than 0.2 eV relative to experiment, better accuracy than might have been expected given the DFT approximation). Moreover, the method significantly reduces the computational requirements of the simulations. Use of these theoretical results in kinetic models of device response successfully filled a knowledge gap in the simulation of radiation-induced early-time transient response of electronic devices. This new methodology will be an important capability for physics-based modeling within QASPR. (Contact: Peter A. Schultz)
July 11, 2005


Breakthrough in Visualization Performance Announced

Sandia National Labs, Kitware Inc., and NVIDIA Corporation (Nasdaq: NVDA) recently issued a press release (see http://www.nvidia.com/object/IO_19962.html) announcing a breakthrough in large-data scientific visualization: rendering rates of over 1.5 billion polygons per second.

The breakthrough was achieved with ParaView, an open source visualization application developed by Kitware Inc. that contains high-end parallel visualization algorithms developed by Sandia's Data Analysis and Visualization Department (Org. 09227).

In a recent test with one of the world's largest polygonal datasets (see Figure 1), Sandia ran ParaView on 128 new visualization nodes being deployed for the new Red Storm Environment (RoSE) and performed various parallel operations on the data, including coloring, t-stripping, clipping, and glyphing, at interactive rates. Rendering of the surface was performed at an aggregate rate of over 1.5 billion polygons per second, which equates to three to four frames per second.

Figure 1. One of the world’s largest polygonal datasets is this 473 million triangle isosurface generated from a Richtmyer-Meshkov simulation run at Lawrence Livermore National Laboratories (LLNL: UCRL-MI-151066). The Richtmyer-Meshkov instability is a fundamental fluid instability that occurs when perturbations on an interface separating gases with different properties grow following the passage of a shock. This instability is of great fundamental interest in fluid dynamics, as well as of interest to inertial confinement fusion, and to supernovae dynamics.

ParaView is also being used by the US Army Research Laboratory (ARL) on tiled display systems for the analysis of physics-based simulations in armor/anti-armor applications (see Figure 2). "When calculations require tens of CPU years and produce terabytes of output, parallel visualization is no longer a luxury; it's a necessity," said Jerry Clarke, scientific visualization team leader, US Army Research Laboratory. "ParaView on our visualization clusters is an important part of our physics-based simulation environment and our future."

Figure 2: ParaView used to visualize a ZSU23-4 Russian Anti-Aircraft vehicle being hit by a planar wave. 2.5 billion cell calculation. Courtesy of Jerry Clarke (US Army Research Laboratory)

(Contact: David R. White)
May 25, 2005


Red Storm Risk Mitigation Effort

Sandia and Cray have personnel working literally around the clock to meet the 2QFY05 Level II ASC Milestone #30. The milestone asserts: "Initial operation of Red Storm hardware will be demonstrated at Sandia by providing functionality needed for early testing of applications codes. We will run the 7x acceptance test suite and document the results." All Red Storm hardware is on site and integrated into the system; the system has been powered up, and we can boot all processors on Red Storm, although not yet as a single system. Back in November 2004 we recognized that, left to its own course, Cray would probably fail to deliver on this milestone. In response, we started the Red Storm Risk Mitigation project with efforts in three key areas: Portals enhancements, Parallel Virtual File System (PVFS)-based parallel I/O capability, and Message Passing Interface (MPI) application scaling. These risk mitigation efforts have given Sandia a much better understanding of the remaining issues and have provided the foundation to meet or exceed our projected application scaling by the 3/31/05 due date for this milestone. As of 3/15/05, several applications are running on over 1,900 processors (High Performance Linpack, CTH, Sage, Partisn, and UMT2000). Among the other 7x acceptance test suite applications, ITS and sPPM have run on 1,872 processors, Presto and Calore have run on 1,536 processors, Salinas on 343 processors, and Alegra on 256 processors.
(Contacts: James A. Ang, John P. Noe, and James R. Stewart)
May 25, 2005


Red Storm Progress

All Red Storm hardware is now integrated at Sandia. The entire system is undergoing heavy usage by both Cray and Sandia developers and application testers during the system test and check out (STCO) phase at Sandia. Currently, we boot the entire system in multiple partitions, and application scaling has been carried out for all of the tri-lab ASC benchmark codes. All benchmark codes are running efficiently on at least 1,000 processors, and several on well over 3,000 processors. LANL has carried out and released to us a preliminary assessment of Red Storm with very good results: based on their measurements and analysis, they believe their major applications will run 10 to 30+ times as fast on Red Storm as they do on RED. At this point, Sandia's Red Storm management team believes that we have met the letter of the L-II milestone for Red Storm and are close to meeting all aspects of its spirit.

DOE-DoD Distributed Storage Project
A Wright Patterson Air Force Base proposal for DOE-DoD Distributed Storage has been funded, with support from Congressman Hobson. The Ohio Supercomputer Center (OSC) will play a key role in this effort. Sandia is continuing to foster this development in our ongoing collaboration with the OSC as part of the ASC (Advanced Simulation and Computing) program.

HMC Clinic
Neil Pundit, Ron Brightwell, and Ron Oldfield (all 9223) visited Harvey Mudd College (HMC) in February 2005 with the goal of starting a new clinic in the computer science area. The clinic will be a year-long project in which HMC seniors collaborate on a research topic with a research organization such as Sandia. The HMC/Sandia efforts will be in the area of lightweight file systems and will leverage funding from the Computer Science Research Institute (CSRI).

CUG2005
Sandia will host the Cray User Group (CUG) annual conference in Albuquerque, May 16-19, 2005. Neil Pundit is the Local Chair, and CUG has invited Bill Camp to give the keynote address. A tour of Sandia's Red Storm will be a conference highlight. The theme of the conference is "Petroglyphs to Petaflops." CUG is the original supercomputer users' conference and is a highly attended international event.

ASC PI Meeting
Sandia hosted the ASC PI Meeting in late February 2005 in San Antonio, Texas. John Noe (9300) was the host and the Technical Chair. The meeting is held annually to review key progress in tri-lab ASC community R->D->A activities. The PI meeting was attended by the tri-lab ASC directors as well as leadership from NNSA, including David Crandall, Dimitri Kusnezov, and Bob Meisner. Fred Johnson represented DOE's Office of Science.

(Contact: Neil Pundit)
April 25, 2005


Graduated Embodiment for Sophisticated Agent Evolution and Optimization featured in DOE Annual Report

We combined Sandia's Umbra Modeling and Simulation capabilities with our object-oriented Genetic Programming engine to visually show the results of each optimization stage as the computer evolves a segment of code to control the behavior of an autonomous glider that balances exploration with exploitation of local conditions. The LDRD program office selected the project that funded this work (entitled "Graduated Embodiment for Sophisticated Agent Evolution and Optimization") to be featured on a Divider Page in the annual report to DOE. The Divider Page summarizes the FY04 mission of individual investment areas and then highlights a project that is an exceptional example. (Contact: Mark Boslough)
March 14, 2005


Massively Parallel Magnetic Diffusion Computations: Highly Scalable Z-pinch Simulations

Researchers at Sandia National Laboratories have dramatically improved scalability within a novel algebraic multigrid algorithm for solving eddy current approximations to Maxwell’s equations. This advance has significant impact on magnetohydrodynamic simulations of environments generated by Sandia's Z-machine. The Z-machine uses tremendous amounts of electrical current to convert wire arrays into plasma, which is then collapsed onto a cylindrical axis (z-axis) by magnetic forces.

Figure 1 illustrates a wire array within a Z-pinch machine. Prior to the development of the new solver, large scale simulations were not possible because standard solvers failed to converge.

The recent scalability gains were obtained by carefully analyzing solver characteristics. Load balancing using Sandia's Zoltan package was then introduced within several stages of the multigrid construction. These modifications led to 10x improvements in magnetics solve times over those achieved last April on 3,600 processors. The improvements correspond to approximately 4x gains in the run time of the overall simulation.

Figure 2 illustrates the run time of a single magnetics solve within the simulation. The largest simulation corresponds to a linear system with over 112 million degrees of freedom. This work played a significant role in a recent Level One Milestone to document simulation capabilities that demand a PetaOPs supercomputer.

The scalability enhancements build on a specially-developed edge-element algebraic multigrid solver that dramatically decreases solution time for eddy current simulations. The effectiveness of the new algebraic multigrid solver relies on properties of the discretized differential operator, most notably, on the characterization of its near null space as a subspace of discrete gradients. By properly considering the near null space, this solver avoids difficulties associated with standard iterative methods.

The new method is now available within Sandia's multigrid solver package (ML) and is being used as a major computational kernel by Sandia's multi-physics code, ALEGRA-HEDP, to model high energy density physics environments. Figure 3 illustrates calculated magnetic field lines generated by a Z-pinch simulation. ML is available as part of the Trilinos solver framework.

Figure 3: 3D magnetic field generated by electrical current running through an idealized plasma liner. Prior to the development of the new multigrid solver, this simulation was not feasible.

(Contacts: Jonathan Hu, Ray Tuminaro, Pavel Bochev, Christopher Garasi, and Allen Robinson)
February 28, 2005


Collaboration in DOE Logistics Planning

An ongoing collaboration between Discrete Algorithms and Math (9215) and CI Modeling and Simulation I (6221) continues to apply computer science modeling techniques to strategically important DOE logistics problems. Vitus Leung (9215), in collaboration with Julie Lloyd (6221), recently improved the speed of the OCRWM (Yucca Mountain) Investment Planning Model by a factor of twenty and reduced the memory requirements by a factor of ten to overcome nearly prohibitive memory limitations. With this increase in speed and reduction in memory, project leader Dean Jones (6221) can move to more detailed models with longer planning horizons to better meet DOE's OCRWM investment planning needs. This collaboration continues Vitus' recent success in solving a DOE Complex transportation planning problem that had been unsolvable for over four years. (Contact: Vitus Leung)

February 14, 2005


CUBIT Measures Significant Decrease in Time for Geometric Editing Operations

A recent study designed to measure the effectiveness of CUBIT's new graphical user interface and geometry tools has demonstrated up to a forty percent decrease in the time required for geometric editing operations. The study involved complex CAD models requiring detailed geometric decomposition and editing operations. Geometry preparation has been identified by the Design through Analysis Roadmap Team (DART) as among the most time consuming aspects of the design-through-analysis process. As a result, the CUBIT team has devoted significant resources to improving its usability and its tools for geometry management and cleanup. CUBIT's recent 9.0 and 9.1 releases include a new cross-platform graphical user interface. A significant feature of the new user interface is the Geometry Power Tool, which lets the user analyze a CAD model with a series of diagnostic tools. These diagnostics detect potential problems and areas of concern that a user should examine and/or modify before attempting to mesh. Alongside the list of potential problems, a variety of tools for graphically examining and modifying the problem geometry is made available through a convenient GUI panel.

To measure the impact of the new Geometry Power Tool and CUBIT's new graphical user interface, a series of test models was selected. The same user was tasked with developing an all-hexahedral mesh of the models in CUBIT 8.0, which provided only the old command line interface, and again in CUBIT 9.0, which provided the new GUI tools. To factor out the time needed to learn how to mesh the parts, the user first practiced meshing the parts to gain experience with the tools and the models in both systems. Time to mesh was then measured based only on the speed of using the tools. In all cases, the new tools decreased the time to prepare the geometry for meshing by between 10 and 40 percent. (Contact: Steven J. Owen)
For more information on CUBIT, visit http://cubit.sandia.gov

January 24, 2005


Method of Manufactured Solutions Verifies SNL Analysis Codes

How can one be assured that computer codes designed to solve partial differential equations (PDEs) are actually solving those equations free of bugs and excessive numerical error? Comparing a numerical solution to an analytic solution is one way, but what if the physics or phenomena under study are so complex that no analytic solutions are known? This is increasingly the case at Sandia.

The Method of Manufactured Solutions (MMS) is a mathematical testing technique that extends beyond toy PDEs and simple physics. In MMS, one manufactures analytic solutions without consideration of boundary and initial conditions and adds source terms to balance the PDE. This flexibility allows the manufacture of exact solutions to very general PDEs having coupled physics, nonlinearities, space- and time-varying coefficients, complex boundary conditions, and general domains. By performing code verification via MMS and grid refinement, one can devise a truly comprehensive test suite to identify hidden coding mistakes and provide solid evidence that the code solves its governing equations correctly. SNL has been a recent leader in developing and championing MMS; see the book "Verification of Computer Codes in Computational Science and Engineering," 2002, by Patrick Knupp (9211) and Kambiz Salari (LLNL).
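As a minimal worked instance of the procedure (an illustration only, not one of the Sandia test suites), the script below manufactures a solution for a simple diffusion equation and uses symbolic differentiation to derive the source term that balances the PDE.

    # A minimal instance of the MMS procedure, worked with sympy.
    import sympy as sp

    x, t = sp.symbols("x t")

    # 1. Manufacture a smooth "exact" solution (no boundary conditions considered).
    u = sp.sin(sp.pi * x) * sp.exp(-t)

    # 2. Insert it into the governing PDE, here u_t - u_xx = s, and solve for the
    #    source term s that makes the manufactured u an exact solution.
    s = sp.simplify(sp.diff(u, t) - sp.diff(u, x, 2))
    print("source term:", s)        # (pi**2 - 1)*exp(-t)*sin(pi*x)

    # 3. In the code test, run the solver with source s and boundary/initial data
    #    taken from u, then confirm the discretization error decays at the expected
    #    rate under grid refinement, e.g. observed order = log(e_coarse/e_fine)/log(2)
    #    for a factor-of-two refinement.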

MMS is being widely adopted at SNL for the verification of ASC codes. Code groups at SNL using or considering MMS include CEPTRE (radiation transport), Premo (computational fluid dynamics), Presto (computational mechanics), Alegra (shock physics), and Calore (heat transfer). MMS is also gaining ground within the broader scientific and engineering communities wherever high-confidence simulations of complex physics are required; see the special issue of "Computing in Science & Engineering" devoted to verification and validation, October 2004, edited by Tim Trucano (9211) and Doug Post (LANL). MMS researchers continue to make significant progress in making MMS easier to use and in educating development groups on how to use it. Adoption of code verification methods involves software development and signals a shift in Sandia's software engineering practices. This is taking place through close technical collaboration between staff in Sandia's Validation and Verification program and code development teams. (Contact: Pat Knupp)
January 24, 2005


Leukemia Microarray Study

The treatment of childhood leukemia has greatly improved over the past 50 years. Adult leukemia, however, has remained a therapeutically resistant disease, especially for people over the age of 55. Recently, some progress has been made toward understanding this disease using microarrays, a technology that allows the simultaneous measurement of tens of thousands of genes. George S. Davidson and Shawn Martin, of the Department of Computational Biology (9212), have been involved in a large-scale microarray study (170 patients) funded by the National Cancer Institute through the University of New Mexico. In collaboration with Dr. Cheryl Willman's lab at UNM, especially Dr. Carla Wilson, and using technology originally developed at Sandia to study collections of documents (such as scientific articles or patents), George and Shawn have proposed that the 170 patients be divided into 6 major categories. Surprisingly, these categories were found to correspond to the overall survival of the patients. This work was well received at the annual American Society of Hematology conference in 2004 and has been submitted as a plenary paper to the high-impact journal Blood. (Contacts: Shawn Martin and George Davidson)
January 10, 2005



NOX

The NOX development team is releasing production versions of two unique solver algorithms based on tensor and inexact trust region techniques. NOX is a software library being developed under ASC to provide robust, large-scale algorithms for solving nonlinear equations. NOX is currently used by a variety of Sandia projects, including circuit simulation (Xyce), semiconductor device simulation (Charon), compressible aerodynamics (Premo), and chemically reacting flow (MPSalsa), and is also available in the SIERRA and NEVADA frameworks. NOX is part of the Trilinos solver project, which recently won a 2004 R&D 100 award and the Supercomputing 2004 HPC Software Challenge award. The library played a critical role in meeting a Level 1 milestone in circuit simulation. The NOX team is now developing a multi-physics capability to drive tightly coupled, Newton-based simulations between separate applications. The software is licensed under the GNU LGPL and is freely downloadable from the web. (Contact: Roger Pawlowski)
January 10, 2005


Fluid streamlines in a differentially heated box (MPSalsa).


Potential in a Bipolar Junction Transistor (Charon).


Pressure contours over an airfoil (PREMO).


