Mathematical and Computational Sciences Division
Summary of Activities for Fiscal Year 2001
Information Technology Laboratory
National Institute of Standards and Technology
Technology Administration
U. S. Department of Commerce
January 2002
Abstract
This report summarizes the technical work of the Mathematical and Computational Sciences Division of NIST's Information Technology Laboratory. Included are details of technical projects, as well as information on publications, technical talks, and other professional activities in which the Division's staff has participated.
For further information, contact Ronald F. Boisvert, Mail Stop 8910, NIST, Gaithersburg, MD 20899-8910, phone 301-975-3812, email boisvert@nist.gov, or see the Division's web site at http://math.nist.gov/mcsd/.
Thanks to Robin Bickel for collecting and organizing the information contained in this report.
Table of Contents
1.2. Overview of Technical Areas
High Performance Computing and Visualization
Digital Library of Mathematical Functions
Virtual Cement and Concrete Testing Laboratory
1.5. Administrative Highlights
The APEX Method in Image Sharpening
Blind Deconvolution of Scanning Electron Microscope Imagery
Image Analysis for Combinatorial Experimentation
Mathematical Problems in Construction Metrology
Representation of Terrain and Images by L1 Splines
Computer Graphic Rendering of Material Surfaces
Monte Carlo Methods for Combinatorial Counting Problems
Time-Domain Algorithms for Computational Electromagnetics
OOF: Finite Element Analysis of Material Microstructures
Mathematical Modeling of Solidification
Numerical Simulation of Axisymmetric Dendritic Crystals
Machining Process Metrology, Modeling and Simulation
Modeling and Computational Techniques for Bioinformatics
TNT: Object Oriented Numerical Programming
Parallel Adaptive Refinement and Multigrid Finite Element Methods
Information Services for Computational Science
2.3. High Performance Computing and Visualization
Parallel Computation of Ground State of Neutral Helium
Parallelization of Feff X-ray Absorption Code
Modeling and Visualization of Dendritic Growth in Metallic Alloys
Linewidth Standards for Nanometer-level Semiconductor Metrology
Theory of Nano-structures and Nano-optics
Computational Modeling of the Flow of Cement
Parallelization, Visualization of Fluid Flow in Complex Geometries
Parallelization of a Model of the Elastic Properties of Cement
Digital Library of Mathematical Functions
3.3. Conferences, Minisymposia, Lecture Series, Short-courses
Scientific Object Oriented Programming Users Group (SCOOP)
3.6. Other Professional Activities
The mission of the Mathematical and Computational Sciences Division (MCSD) is stated as follows.
Provide technical leadership within NIST in modern analytical and computational methods for solving scientific problems of interest to U.S. industry. The division focuses on the development and analysis of theoretical descriptions of phenomena (mathematical modeling), the design of requisite computational methods and experiments, the transformation of methods into efficient numerical algorithms for high-performance computers, the implementation of these methods in high-quality mathematical software, and the distribution of software to NIST and industry partners.
Within the scope of our charter, we have set the following general goals.
o Ensure that sound mathematical and computational methods are applied to NIST problems.
o Improve the environment for computational science and engineering in the research community at large.
With these goals in mind, we have developed a technical program in five major areas.
Work in the first and third areas is accomplished primarily via collaborations with other technical units of NIST, supported by mathematical research in key areas. Projects in the second area are typically motivated by internal NIST needs, but have products, such as software, which are widely distributed. This work is also often done in conjunction with external forums whose goals are to promulgate standards and best practices. The fourth and fifth areas represent large special projects. These are being done in collaboration with other ITL Divisions, as well as with the NIST Physics and Electronics and Electrical Engineering Laboratories. Each of these areas is described in further detail below.
Our customers span all of the NIST Laboratories, as well as the computational science community at large. We have developed a variety of strategies to increase our effectiveness in dealing with such a wide customer base. We take advantage of leverage provided via close collaborations with other NIST units, other government agencies, and industrial organizations. We develop tools with the highest potential impact, and make online resources easily available. We provide routine consulting, as well as educational and training opportunities for NIST staff. We maintain a state-of-the-art visualization laboratory. Finally, we select areas for direct external participation that are fundamental and broadly based, especially those where measurement and standards can play an essential role in the development of new products.
Division staff maintain expertise in a wide variety of mathematical domains, including linear algebra, special functions, partial differential equations, computational geometry, Monte Carlo methods, optimization, inverse problems, and nonlinear dynamics. We also provide expertise in parallel computing, visualization, and a variety of software tools for scientific computing. Application areas in which we have been actively involved this year include atomic physics, materials science, fluid mechanics, electromagnetics, manufacturing engineering, construction engineering, wireless communications, bioinformatics, image analysis, and computer graphics.
In addition to our direct collaborations and consulting, the output of Division work includes publications in refereed journals and conference proceedings, technical reports, lectures, short courses, software packages, and Web services. MCSD staff members also participate in a variety of professional activities, such as refereeing manuscripts and proposals, serving on editorial boards and conference committees, and holding offices in professional societies. Staff members are also active in educational and outreach programs for mathematics and computer science students at all levels.
In this section we provide additional background on each of the technical thrust areas, including their impetus, general goals, and expected long-term outcomes. The identification of these areas was part of a NIST-wide effort to identify and document its programs of work. Details on the technical work that has been undertaken in each of these areas can be found in Part II.
Impetus. As computing resources become more plentiful, there is increased emphasis on answering scientific questions by "putting the problem on the computer". Formulating the right questions, translating them into tractable computations, and analyzing the resulting output are all mathematics-intensive operations. It is rare for a bench scientist to be expert both in their primary subject area and in the often deep and subtle mathematical questions that such problems engender. Thus, NIST needs a sustained cadre of professional mathematicians who can bring their expertise to bear on the wide variety of mathematical problems found at NIST. Often, the mathematics resulting from NIST problems is widely applicable outside the Institute, and hence there is added benefit.
Activities. MCSD mathematicians engage in consulting and long-term collaboration with NIST scientists and their external customers. They also work to develop requisite mathematical technologies, including mathematical models, methods and software. The following are examples of such activities.
o Mathematical modeling of solidification processes
o Monte Carlo methods for combinatorial counting problems
o Terrain modeling
o Micromagnetic modeling
o Modeling of complex material microstructures
o Modeling of high-speed machining processes
o Development and analysis of image sharpening methods
o Computer graphic rendering of material surfaces
o Computational techniques in bioinformatics
o Mathematical problems in construction metrology
Expected Outcomes. Improved mathematical techniques and computational procedures will lead to more effective use of mathematical and computational modeling at NIST. Areas such as materials science, high-speed machining, and construction technology will see immediate improvements in methodology. Distribution of related methodology and tools (including computer software) will allow these benefits to accrue to the scientific community at large. Examples of the latter include (1) more widespread study of material science problems and the development of new technologies characterized by complex material microstructure, and (2) improvement in the accuracy and reliability of micromagnetic modeling software.
Impetus. Mathematical modeling in the sciences, engineering, and finance inevitably leads to computation. The core of such computations is typically a series of well-defined, recurring mathematical problems, such as the solution of a differential equation, the solution of a linear system, or the computation of a transform. Much mathematical research has focused on how to solve such problems efficiently. The most effective means of passing on this expertise to potential customers is by encapsulating it in reusable software components. Since much work at NIST relies on such computations, the Institute has a natural interest in seeing that such components are developed, tested, and made available. The computational science community outside of NIST has similar needs. Programming methodologies and tools for developing efficient and reliable mathematical modeling codes in general, and for developing and testing reusable mathematical software components in particular, are also of interest.
Activities. MCSD staff members develop mathematical algorithms and software in response to current and anticipated NIST needs. They are also involved in the development of standards for mathematical software tools, and in the widespread dissemination of research software, tools, testing artifacts, and related information to the computational science community at large. The following are examples of such activities.
o The Sparse BLAS
o Parallel adaptive multigrid methods
o Guide to Available Mathematical Software
Expected Outcomes. Improved access to general-purpose mathematical software will facilitate the rapid development of science and engineering applications. In addition, the availability of community standards and testing tools will lead to improved portability, performance, and reliability of science and engineering applications.
Impetus. The most demanding mathematical modeling and data analysis applications at NIST require resources that far exceed those routinely found on the scientist's desktop. In order to effect such computations in a reasonable amount of time, one must often resort to parallel computers. The effective use of parallel computers requires that computational algorithms be redesigned, often in a very fundamental way. Effecting these changes, and debugging the resulting code, requires expertise and a facility with specialized software tools that most working scientists do not possess. Hence, it is necessary to support the use of such facilities with specialized expertise in these areas. Similarly, the use of sophisticated visualization equipment and techniques is necessary to adequately digest the massive amount of data that these high performance computer simulations can produce. It is not easy to become facile with the use of such tools, and hence specialized expertise in their use must also be provided.
Activities. MCSD staff members collaborate with NIST scientists on the application of parallel computing to mathematical models of physical systems. In addition, they collaborate with NIST scientists on the application of advanced scientific visualization and data mining techniques. They develop and maintain supporting hardware and software tools, including a fully functional visualization laboratory. MCSD staff members also provide consulting in the use of applications software provided by the NIST central computing facility. The following are examples of activities in this area.
o Parallelization of Feff x-ray absorption code
o Parallel computation of the ground state of neutral helium
o Parallel genetic programming
o Parallel computing and visualization of the flow of suspensions
o Modeling and visualization of dendritic growth
o Visible cement database
o Immersive visualization
Expected Outcomes. Working closely with NIST scientists to improve the computational performance of their models will lead to higher fidelity simulations, and more efficient use of NIST central computing resources. New scientific discovery will be enabled through the insight provided by visualization and data mining. Finally, widespread dissemination of supporting techniques and tools will improve the environment for high performance computing and visualization at large.
Impetus. The special functions of applied mathematics are extremely useful tools in mathematical and computational modeling in a very wide variety of fields. The effective use of these tools requires access to a convenient source of information on the mathematical properties of these functions such as series expansions, asymptotics, integral representations, relations to other functions, methods of computation, etc. For more than 35 years the NBS Handbook of Mathematical Functions (AMS 55) has served this purpose. However, this book is now woefully out of date. Many new properties of these functions are known, many new scientific applications of them have come into use, and current computational methods are completely different than those of the 1950s. Finally, today there are new and more effective means of presenting the information: online, Web-based, highly interactive, and visual.
Activities. The purpose of this project is to develop a freely available, online, interactive resource for information on the special functions of applied mathematics. With the help of some 40 outside technical experts, we are surveying the technical literature, extracting the essential properties of interest in applications, and packaging this information in the form of a reference compendium. To support the presentation of such data on the Web, we are developing mathematics-aware search tools, indices, thesauri, and interactive Web-based visualizations.
Expected Outcomes. Widespread access to state-of-the-art data on the special functions will improve mathematical modeling in many areas of science, statistics, engineering, and finance. The DLMF will encourage standardization of notations and normalizations for the special functions. Users of the special functions will have an authoritative reference to cite the functions they are using, providing traceability to NIST for standardized mathematical objects.
Students Brianna Blaser (Carnegie Mellon) and Elaine Kim (Stanford) work with Bonita Saunders on graphics for the Digital Library of Mathematical Functions.
Impetus. Quantum information networks have the potential of providing the only known provably secure physical channel for the transfer of information. The technology has only been demonstrated in laboratory settings, and a solid measurement and standards infrastructure is needed to move this into the technology development arena. Quantum computers have potential for speeding up previously intractable computations. ITL has been asked to support the work in the NIST Physics and Electronics and Electrical Engineering Laboratories to develop quantum processors and memory, concentrating on the critical areas of error correction, secure protocols, algorithm and tool development, programming, and information theory.
Activities. This project is an ITL-wide effort with participants in six Divisions. We are working to develop a quantum communications test bed facility for the DARPA QuIST program as part of a larger effort to develop a measurement and standards infrastructure to support quantum communications. We are further supporting the NIST Quantum Information program through collaborative research with the NIST Physics Laboratory related to quantum information theory. Within MCSD we are working on issues related to the use of quantum entanglement for long-distance communication, the modeling of neutral atom traps as quantum processors, and the development and analysis of quantum algorithms.
Expected Outcomes. We expect that the development of an open, measurement-focused test bed facility will allow a better understanding of the practical commercial potential for secure quantum communication, and serve the development of standardized network protocols for this new communications technology. By working closely with staff members of the NIST Physics Laboratory, who are working to develop quantum processors, we expect that early processor designs will be more capable and useable.
In this section we will highlight some of the technical accomplishments of the Division for FY2001. Further details can be found in Part II.
Scientific and engineering data is increasingly being generated in the form of images. Images produced at NIST are from a wide variety of sources, from scanning electron microscopes to laser radar. Applications range from combinatorial chemistry to building construction. The area of image analysis has blossomed into a significant area of applied mathematics research in recent years, for which new fundamental mathematical technologies are continuing to be developed. MCSD staff members are working on a variety of projects in collaboration with the NIST Laboratories in which image analysis plays a vital role. Examples of these follow.
Blind Direct Deconvolution. Scanning electron microscopes (SEMs) are basic research tools in many of NIST's programs in nanotechnology. A major concern in scanning electron microscopy is the loss of resolution due to image blurring caused by electron beam point spread. The shape of that beam can change over time, and is usually not known to the microscopist. Real-time blind deconvolution of SEM imagery, if achievable, would significantly extend the capability of electron microprobe instrumentation. Blind deconvolution is a very difficult problem in which ill conditioning is compounded with non-uniqueness. Most known approaches to that problem are iterative in nature. Such processes are typically quite slow, can develop stagnation points, or diverge altogether. Alfred Carasso of MCSD has developed reliable direct (non-iterative) methods, in which the fast Fourier transform is used to solve appropriately regularized versions of the underlying ill-posed parabolic differential equation problem associated with the blur. When the point-spread function (psf) is known, Carasso's SECB method can deblur 512x512 images in about 1 second of CPU time on current desktop platforms. Carasso has recently developed two new direct blind deconvolution techniques based upon SECB. These methods detect the signature of the psf from appropriate 1-D Fourier analysis of the blurred image. The detected psf is then input into the SECB method to obtain the deblurred image. When applicable, these blind methods can deblur 512x512 images in less than a minute of CPU time, which makes them highly attractive in real-time applications. Carasso has been applying this method with great success to images obtained from NIST SEMs. The methods are applicable in a wide variety of imaging modalities in addition to SEM imaging.
Alfred Carasso has developed a unique highly efficient method for blind deconvolution of images. This is currently being used in several applications of electron microscopy at NIST. The method is more widely applicable, as indicated by the enhancement of the Whirlpool Galaxy (M51) image, shown in the photo.
Feature Extraction, Classification. In applications like combinatorial chemistry, large sets of such images are generated which must be processed automatically to identify information of interest. Isabel Beichl has been working with the NIST Polymers Division to automatically detect areas of wetness and dryness in images of polymer dewetting processes, and to generate summary statistics related to the geometry of each image. Another need is to automatically classify the state of the dewetting process that each image represents. An algorithm of Naiman and Priebe based upon importance sampling and Bayesian statistics is being adapted for this purpose. In a separate effort, Barbara am Ende is working with the Semiconductor Electronics Division to develop techniques for automatically detecting and counting lattice planes between sidewalls in High Resolution Transmission Electron Microscopy (HRTEM) images. This capability is a key step in the development of precision linewidth standards for nanometer-level semiconductor metrology.
Micrographs to Computational Models. Image analysis is the first step in the processing done by the popular OOF software for analyzing materials with complex microstructure. Developed by MCSD's Stephen Langer in association with staff of the NIST Materials Science and Engineering Laboratory, OOF begins with a micrograph of a real material with multiple phases, grain boundaries, holes, and cracks, identifies all the parts, and then generates a finite element mesh consistent with the complicated geometry. Material scientists can then use the result to perform virtual tests on the material, such as raising its temperature and pulling on it. The resulting stresses and strains can then be displayed. OOF has become a popular tool in the material science community, and has won internal and external awards. This year Langer worked with Robert Jin, a talented intern from Montgomery Blair High School, to develop a technique for automatically detecting grain boundaries in micrographs. The algorithm is based upon a modified Gabor wavelet filter and edge linking. This will be incorporated into OOF2, now under development. OOF2 will include a variety of new capabilities and will be easier to extend.
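The OOF2 grain-boundary algorithm itself is not reproduced in this report. The sketch below is only a minimal illustration of the underlying idea: a small bank of Gabor filters at several orientations and frequencies is applied to a synthetic two-grain "micrograph," and the maximum quadrature response highlights oriented boundary features. All function names, parameter values, and the test image are illustrative and are not taken from OOF.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_energy(image, frequency, theta, sigma=3.0, size=15):
    """Quadrature Gabor response (even + odd pair) at one scale and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the wave
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    even = envelope * np.cos(2.0 * np.pi * frequency * xr)   # ridge-sensitive part
    odd = envelope * np.sin(2.0 * np.pi * frequency * xr)    # edge-sensitive part
    return np.hypot(convolve(image, even), convolve(image, odd))

def boundary_response(image, frequencies=(0.1, 0.2), n_angles=8):
    """Maximum Gabor energy over a small bank of scales and orientations."""
    image = image.astype(float)
    response = np.zeros_like(image)
    for f in frequencies:
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            response = np.maximum(response, gabor_energy(image, f, theta))
    return response

if __name__ == "__main__":
    # Synthetic two-grain "micrograph" with a vertical boundary at column 32.
    rng = np.random.default_rng(0)
    img = np.full((64, 64), 0.2)
    img[:, 32:] = 0.8
    img += 0.05 * rng.standard_normal(img.shape)
    resp = boundary_response(img)
    print("strongest response near column", int(np.argmax(resp.mean(axis=0))))
```

In a full boundary detector the thresholded response map would then be thinned and linked into continuous grain-boundary curves; that edge-linking stage is omitted here.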
LADAR and 3D Imaging. Laser radar (LADAR) systems provide a relatively inexpensive method for terrain mapping. Such systems can optically scan a given scene, providing distance and intensity readings as a function of scanning angle. In principle, such data can be used to construct a geometrical model of the scanned scene. In practice this remains a very difficult process. The data is voluminous, noisy, and full of unnatural artifacts. The data is one-sided, only providing the view as seen from a particular vantage point. Hence, to develop a true three-dimensional model, scans from multiple sources must be registered and the data fused. Christoph Witzgall has been working with staff of the NIST Building and Fire Research Laboratory to develop three-dimensional models of construction sites. With such a model, as-built conditions could be automatically assessed, current construction processes could be viewed, planned sequences of processes could be tested, and object information could be retrieved on demand. Witzgall has developed techniques for cleaning and registering LADAR data, and extracting a triangulated irregular network model from it. These techniques have been tested on applications such as determining volumes of excavated earth. In a related effort, David Gilsinn is studying the use of LADAR to read object-identifying bar codes on remote objects. The reflectance data is noisy and defocused, and Gilsinn is developing deconvolution techniques to reconstruct bar codes from the LADAR data. This is challenging since the LADAR is not a single beam, but rather a collection of multiple sub-beams. Some progress has been made using averaging filters. A more accurate model for the convolution kernel is being developed.
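As a rough illustration of the triangulated irregular network (TIN) step described above, and not Witzgall's actual algorithm, the following sketch builds a TIN from scattered (x, y, z) range samples with a two-dimensional Delaunay triangulation and uses it to estimate a volume above a reference plane, as in the excavated-earth application. The test data and function names are hypothetical.

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_volume_above(points, z_ref=0.0):
    """Volume between a TIN surface and the plane z = z_ref.

    points: (N, 3) array of cleaned, registered (x, y, z) samples.
    The (x, y) projections are triangulated, and prism volumes
    (triangle area times mean height above the plane) are summed.
    """
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points[:, :2])              # TIN over the horizontal plane
    volume = 0.0
    for simplex in tri.simplices:
        p = points[simplex]                    # three surface vertices
        a, b = p[1, :2] - p[0, :2], p[2, :2] - p[0, :2]
        area = 0.5 * abs(a[0] * b[1] - a[1] * b[0])   # projected triangle area
        volume += area * np.mean(p[:, 2] - z_ref)
    return volume

if __name__ == "__main__":
    # Synthetic mound sampled on a jittered grid; exact volume is pi/2.
    rng = np.random.default_rng(1)
    x, y = np.meshgrid(np.linspace(-1, 1, 25), np.linspace(-1, 1, 25))
    x, y = x.ravel(), y.ravel()
    z = np.maximum(0.0, 1.0 - (x**2 + y**2))
    pts = np.column_stack([x + 0.01 * rng.standard_normal(x.size), y, z])
    print("estimated volume:", tin_volume_above(pts), "(exact:", np.pi / 2, ")")
```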
Mathematical problems with discrete components are increasing in frequency at NIST, turning up in applications from nanotechnology to network analysis. MCSD staff members have become involved in a variety of these efforts, and are developing some of the basic technologies to tackle such problems efficiently. Some examples follow.
Combinatorial counting problems. Combinatorial problems arise in a wide variety of applications, from nanotechnology to computer network analysis. Fundamental models in these fields are often based on quantities that are extremely difficult (i.e., exponentially hard) to compute. We have devised methods to compute such quantities approximately (with known error bars) using Monte Carlo methods. Traditional Monte Carlo methods can be slow to converge, but we have made progress in significantly speeding up these computations using importance sampling. In the past few years Isabel Beichl and colleagues have made progress in evaluating the partition function describing the probability distribution of states of a system. In a number of settings, including the Ising model, the q-state Potts model, and the monomer-dimer model, no closed-form expressions are known for the three-dimensional cases, and obtaining exact solutions of these problems is known to be computationally intractable. We have developed a class of probabilistic importance sampling methods for these problems that appears to be much more effective than the standard Markov Chain Monte Carlo technique. We have used these techniques to obtain accurate solutions for both the 3D dimer covering problem and the more general monomer-dimer problem. An importance sampling formulation for the 3D Ising model has also been constructed. This year, new Monte Carlo/importance sampling techniques and software have been developed to estimate the number of independent sets in a graph. A graph is a set of vertices with a set of connections between some of the vertices. An independent set is a subset of the vertices, no two of which are connected. The problem of counting independent sets arises in data communications, in thermodynamics, and in graph theory itself. For example, it is closely related to issues of reliability of computer networks. Physicists have used estimates of the number of independent sets to estimate the hard sphere entropy constant. This constant is known analytically in 2D, but no analytical result is known in 3D. Beichl, along with Dianne O'Leary and Francis Sullivan, has been able to use their approach to estimate the constant for a 3D cubic lattice. They are now working on the case of an FCC lattice.
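The details of the Beichl, O'Leary, and Sullivan estimator are not given here, so the following is only a generic sketch of sequential importance sampling for counting independent sets, in the spirit of Knuth's tree-size estimator: each random walk down the enumeration tree returns a weight whose expected value equals the number of independent sets. The graph encoding and parameter choices are illustrative.

```python
import random
from statistics import mean, stdev

def sample_weight(adj, rng):
    """One importance-sampling walk down the enumeration tree.

    Vertices are visited in a fixed order.  A vertex is forced out if a
    neighbor visited earlier was taken; otherwise a fair coin is flipped and
    the weight doubles (two branches of the tree).  The expected weight
    equals the number of independent sets of the graph.
    """
    taken = set()
    weight = 1
    for v in range(len(adj)):
        if any(u in taken for u in adj[v] if u < v):
            continue                       # forced: v cannot be included
        weight *= 2                        # free choice: include or exclude
        if rng.random() < 0.5:
            taken.add(v)
    return weight

def estimate_independent_sets(adj, trials=20000, seed=0):
    rng = random.Random(seed)
    samples = [sample_weight(adj, rng) for _ in range(trials)]
    return mean(samples), stdev(samples) / trials**0.5

if __name__ == "__main__":
    # 4-cycle 0-1-2-3-0: independent sets are
    # {}, {0}, {1}, {2}, {3}, {0,2}, {1,3}, i.e. 7 in total.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    est, err = estimate_independent_sets(adj)
    print(f"estimate = {est:.3f} +/- {err:.3f}  (exact = 7)")
```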
Bioinformatics. Computational biology is currently experiencing explosive growth in its technology and industrial applications. Mathematical and statistical methods dominated the development of the field, but as the emphasis on high-throughput experiments and analysis of genetic data continues, computational techniques have also become essential. We are working to understand the mathematical issues in dealing with large biological datasets with the aim of developing expertise that can be applied to future NIST problems. In the process, we are developing techniques and tools of widespread interest. One of these is GenPatterns. Fern Hunt, along with former guest researcher Antti Pessonen and student Daniel Cardy, developed this program to compute and graphically display DNA or RNA subsequence frequencies and their recurrence patterns, as well as to create Markov models of the data. GenPatterns is now a part of the NIST Bioinformatics/Computational Biology software website currently being constructed by the NIST Chemical Science and Technology Laboratory. More recently we have turned our attention to the problem of aligning protein sequences with gaps. Database searches of protein sequences are based on algorithms that find the best matches to a query sequence, returning both the matches and the query in a linear arrangement that maximizes the underlying similarity between the constituent amino acid residues. Very fast algorithms based on dynamic programming exist for aligning two or more sequences if the possibility of gaps is ignored. Gaps are hypothesized insertions or deletions of amino acids that express mutations that have occurred over the course of evolution. The alignment of sequences with such gaps remains an enormous computational challenge. Fern Hunt and Anthony Kearsley are currently working with Honghui Wan of NIH to develop an alternative approach based on Markov decision processes. The optimization problem then becomes a linear programming problem, and it is amenable to powerful and efficient techniques for solution. We are creating software for multiple sequence alignment based on these ideas.
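GenPatterns itself is not reproduced here; the sketch below merely illustrates the kind of computation mentioned above, tabulating dinucleotide frequencies of a DNA string and forming the corresponding first-order Markov transition matrix. The sequence and function names are made up for the example.

```python
from collections import Counter
import numpy as np

BASES = "ACGT"

def markov_model(seq):
    """First-order Markov model of a DNA sequence.

    Returns the 4x4 matrix P with P[i, j] = estimated probability that base
    BASES[j] follows base BASES[i], together with raw dinucleotide counts.
    """
    idx = {b: i for i, b in enumerate(BASES)}
    counts = Counter(zip(seq, seq[1:]))            # dinucleotide counts
    matrix = np.zeros((4, 4))
    for (a, b), n in counts.items():
        matrix[idx[a], idx[b]] = n
    row_sums = matrix.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        transition = np.where(row_sums > 0, matrix / row_sums, 0.0)
    return transition, counts

if __name__ == "__main__":
    seq = "ATGCGCGATATATCGCGCGTATATAGC"
    P, counts = markov_model(seq)
    print("most common dinucleotides:", counts.most_common(3))
    print("P(next = T | current = A) =", P[0, 3])
```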
Quantum algorithms. We have recently begun a project in the area of quantum information science. We are collaborating with other ITL Divisions and the NIST Physics Laboratory in the development and analysis of quantum-based systems for communication and computation. One component of this is the study of algorithms for quantum computers. The principal advances in this field thus far have been Shor's algorithm for factoring and Grover's algorithm for searching an unordered set, each of which exhibits significant speedups that are thought not to be possible on classical computers. A new postdoctoral appointee, David Song, is working with Isabel Beichl and Francis Sullivan on quantum algorithms for determining whether a finite function over the integers is one-to-one. They are constructing a quantum algorithm for this problem which they hope to show has a complexity of O(√n) steps; classical algorithms require n steps to do this computation. The proposed algorithm uses phase symmetry, Grover's search algorithm, and results about the pth complex roots of unity for a prime p.
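The one-to-one-testing algorithm under construction is not described in enough detail to reproduce, but since it builds on Grover's search, a minimal classical state-vector simulation of Grover's algorithm may help fix ideas: it shows that roughly (pi/4)*sqrt(N) iterations suffice to find one marked item among N. This is an illustrative simulation only, not the proposed algorithm.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iterations=None):
    """Classical state-vector simulation of Grover's algorithm.

    Starts in the uniform superposition, alternates the oracle (phase flip
    on the marked index) with the diffusion operator (inversion about the
    mean), and returns the final probability of measuring the marked item.
    """
    N = 2 ** n_qubits
    if n_iterations is None:
        n_iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    state = np.full(N, 1.0 / np.sqrt(N))
    for _ in range(n_iterations):
        state[marked] *= -1.0                  # oracle: flip marked amplitude
        state = 2.0 * state.mean() - state     # diffusion: invert about the mean
    return n_iterations, abs(state[marked]) ** 2

if __name__ == "__main__":
    iters, prob = grover_search(n_qubits=10, marked=123)
    # About 25 iterations suffice for N = 1024, versus ~512 classical probes on average.
    print(f"{iters} Grover iterations, success probability {prob:.4f}")
```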
Concrete is an essential ingredient of the national civil engineering infrastructure. Some 6,100 companies support this infrastructure, with a gross annual product of $35 billion when delivered to a work site, and over $100 billion when in place in a building. In recent years there has been a growing recognition of the great potential for improving the performance of cement and concrete products with the development of new understanding of the materials and processes. The NIST Building and Fire Research Laboratory (BFRL) has over two decades' worth of experience in experimental, theoretical, and computational work on cement and concrete and is a world leader in this field. MCSD staff members in the Scientific Applications and Visualization Group have contributed to this effort by working closely with BFRL scientists in developing parallel implementations of their computational models, and in providing effective visualizations of their results. Among these are models of the flow of suspensions, flow in porous media, and the elastic properties of concrete. MCSD contributions have significantly extended the class of problems that can be addressed by BFRL researchers. Striking visualizations of the results of these simulations, including immersive visualizations, have also been developed by MCSD staff. (Examples are included elsewhere in this report.)
In January 2001 the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium was formed under the leadership of BFRL. The overall goals of the consortium are to develop a virtual testing system to reduce the amount of physical testing of concrete, expedite the research and development process, and facilitate innovation. The consortium has seven industrial members. MCSD is a partner in the effort, and is taking the lead in visualization and parallelization efforts.
This image shows a volume rendering of a cement paste sample. The actual sample is less than one millimeter wide.
A popular computer code for X-ray absorption spectroscopy (XAS) now runs 20-30 times faster, thanks to a cooperative effort of MCSD and the NIST Materials Science and Engineering Laboratory (MSEL). XAS is widely used to study the atomic-scale structure of materials, and is currently employed by hundreds of research groups in a variety of fields, including ceramics, superconductors, semiconductors, catalysis, metallurgy, geophysics, and structural biology. Analysis of XAS relies heavily on ab initio computer calculations to model x-ray absorption in new materials. These calculations are computationally intensive, taking days or weeks to complete in many cases. As XAS becomes more widely used in the study of new materials, particularly in combinatorial materials processing, it is crucial to speed up these calculations. One of the most commonly used codes for such analyses is FEFF. Developed at the University of Washington, FEFF is an automated program for ab initio multiple scattering calculations of X-ray Absorption Fine Structure (XAFS) and X-ray Absorption Near-Edge Structure (XANES) spectra for clusters of atoms. The code yields scattering amplitudes and phases used in many modern XAFS analysis codes. FEFF has a user base of over 400 research groups, including a number of industrial users, such as Dow, DuPont, Boeing, Chevron, Kodak, and General Electric.
To achieve faster speeds in FEFF, James Sims of MCSD worked with Charles Bouldin of the MSEL Ceramics Division to develop a parallel version, FeffMPI. In modifying the code to run on the NIST parallel processing clusters using a message-passing approach, they gained a 20-30-fold improvement in speed over the single processor code. Combining parallelization with improved matrix algorithms may allow the software to run 100 times or more faster than current single processor codes; this work is in progress. The parallel version of the XAS code is portable, and is now also operating on parallel processing clusters at the University of Washington and at DoE's National Energy Research Scientific Computing Center (NERSC). One NERSC researcher has reported doing a calculation in 18 minutes using FeffMPI on the NERSC IBM SP2 cluster that would have taken 10 hours before. In 10 hours this researcher can now do a run that would have taken months before, and hence would not even have been attempted.
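FeffMPI's internal structure is not shown in this report. The sketch below, using the mpi4py bindings and a hypothetical stand-in function scattering_calc, only illustrates the general message-passing pattern behind this kind of speedup: independent energy-grid points are divided among processors and the partial results are gathered on one rank.

```python
# Run with, e.g.:  mpiexec -n 4 python xas_parallel.py
import numpy as np
from mpi4py import MPI

def scattering_calc(energy):
    """Stand-in for an expensive per-energy-point scattering calculation."""
    return np.sin(energy) / (1.0 + energy**2)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Full energy grid, defined identically on every rank.
energies = np.linspace(0.0, 50.0, 1000)

# Each rank takes a strided slice of the grid (a simple static decomposition).
my_energies = energies[rank::size]
my_results = [scattering_calc(e) for e in my_energies]

# Gather the partial results on rank 0 and reassemble them in grid order.
gathered = comm.gather((rank, my_results), root=0)
if rank == 0:
    spectrum = np.empty_like(energies)
    for r, chunk in gathered:
        spectrum[r::size] = chunk
    print("assembled", spectrum.size, "energy points on", size, "processes")
```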
A large number of MCSD staff members received significant awards this year. Some of these are highly distinguished awards from external groups, while others are prized internal awards.
External Awards. Anthony Kearsley, an MCSD mathematician, received the Arthur Flemming Award in June 2001. The Flemming Award is given annually to recognize outstanding Federal employees with less than 15 years of service. The Flemming Award Commission selects the honorees, and the award is sponsored by George Washington University and Government Executive magazine. This year 12 winners were selected from throughout the federal government, six in the administrative category and six in the science and engineering category. Kearsley was cited for a sustained record of contributions to the development and use of large-scale optimization techniques for the solution of partial differential equations arising in science and engineering. Noted were his contributions to the solution of problems in such diverse areas as oil recovery, antenna design, wireless communications, climate modeling, optimal shape design, and high-temperature superconductors. His tireless work as a mentor and leading proponent of careers in mathematics for students at the high school, undergraduate, and graduate levels was also cited. This was the second year in a row that an MCSD staff member received the Flemming award. Last year Fern Hunt was among the 12 winners.
Anthony Kearsley, winner of the 2001 Arthur Flemming Award, and Bonita Saunders, 2001 Claytor Lecturer.
Bonita V. Saunders presented the 2001 Claytor Lecture on January 13, 2001. The National Association of Mathematicians (NAM) inaugurated the Claytor Lecture in 1980 in honor of W. W. Schieffelin Claytor, the third African American to earn a Ph.D. in Mathematics, and the first to publish mathematics outside of his thesis. Founded in 1969, NAM is a non-profit professional organization whose mission is "to promote excellence in the mathematical sciences and promote the mathematical development of underrepresented American minorities." Saunders is the twentieth mathematician to be selected as Claytor lecturer. Previous honorees include Fern Hunt, also of ITL, David H. Blackwell, the first African American elected to the National Academy of Sciences, and J. Ernest Wilkins, who at 19 became the youngest African American to receive a doctorate in the mathematical sciences. Saunders' lecture, entitled "Numerical Grid Generation and 3D Visualization of Special Functions," was delivered at a special session of the Joint Mathematics Meetings in New Orleans.
Geoffrey McFadden, Leader of the MCSD Mathematical Modeling Group, was elected a Fellow of the American Physical Society (APS). McFadden was recognized "for fundamental insights into the effect of fluid flow on crystal growth and for an innovative approach to phase field methods in fluid mechanics." McFadden's interest in the study of crystal growth began when he joined NIST in 1981. Since then he has published more than 100 papers with colleagues in MSEL, as well as with researchers at external institutions such as Carnegie Mellon University, Northwestern University, Rensselaer Polytechnic, and the University of Southampton. The APS's Division of Fluid Dynamics recommended the nomination. Fellowship in the APS is limited to no more than one-half of one percent of APS membership. Presentation of the award took place at the Annual Meeting of the Division of Fluid Dynamics held in San Diego, November 18-20, 2001.
Raghu Kacker was elected Fellow of the American Society for Quality and recognized at the 55th Annual Quality Congress held in Charlotte, NC on May 6-9, 2001. He was cited for pioneering work in the advancement of the application of the statistical sciences, especially Taguchi methods, to quality, measurement science, calibration and inter-laboratory comparisons.
Raghu Kacker (left) was elected a Fellow of the American Society for Quality, and Geoffrey McFadden (right) was elected a Fellow of the American Physical Society.
NIST Awards. In December 2000, Stephen Langer of MCSD, along with Ed Fuller and Andy Roosen of MSEL, received the NIST Jacob Rabinow Applied Research Award. The Rabinow Award is presented yearly in recognition of outstanding application of NIST research in industry. Langer, Fuller, and Roosen were honored for the development of OOF, a system for the modeling of materials with complex microstructures. Also in December 2000, a team of MCSD staff from the Scientific Applications and Visualization Group was awarded a NIST Bronze Medal for their work in visualization of Bose-Einstein condensates. The honorees were Judith Devaney, William George, Terence Griffin, Peter Ketcham, and Steve Satterfield. They were cited for their work with colleagues in the NIST Physics Lab to develop unique 3D color representations of the output of computational models of Bose-Einstein condensates. The visualizations illustrated properties of the condensates which were previously unknown, and which have since been experimentally verified. The pictures were selected as cover illustrations by Physics Today (Dec. 1999), Parity magazine (Japanese, Aug. 2000), Optics and Photonics News (Dec. 2000), and were featured in a title spread for an article in Scientific American (Dec. 2000).
Winners of the 2000 NIST Jacob Rabinow Applied Research Award (left to right): Andrew Roosen (MSEL), Stephen Langer, and Edwin Fuller (MSEL).
Winners of the 2000 NIST Bronze Medal: (front, left to right) Steven Satterfield, Peter Ketcham, Terrence Griffin, (back, left to right) William George, Judith Devaney.
Winners of the 2001 Bronze medal: Roldan Pozo (left) and Ronald Boisvert (right)
In December 2001, Ronald Boisvert and Roldan Pozo received a NIST Bronze Medal. They were cited "for leadership in technology transfer introducing significant improvements to the Java programming language and environment for scientific computing applications."
ITL Awards. Isabel Beichl received the first annual ITL Outstanding Publication Award in May 2001 in recognition of a series of 11 tutorial articles on non-numeric techniques for scientific computing published in Computing in Science and Engineering from 1997 to 2000. Beichl is the first recipient of this newly established ITL award.
Five MCSD staff members were among a group of 17 ITL staff named as joint recipients of the Outstanding Contribution to ITL Award in May 2001. The award recognized members of the ITL Diversity Committee. The MCSD awardees were Judith Devaney (Chair), Isabel Beichl, Ronald Boisvert, Raghu Kacker, and Bonita Saunders.
Isabel Beichl won the first ITL Outstanding Publication Award for a series of 11 tutorial articles published in Computing in Science and Engineering.
MCSD staff members continue to be active in publishing the results of their research. This year 49 publications authored by Division staff appeared, 28 of which were published in refereed journals. Twenty-one additional papers have been accepted and are awaiting publication. Another 22 are under review. MCSD staff members were invited to give 40 lectures in a variety of venues and contributed another 30 talks to conferences and workshops.
Four short courses on Java and LabVIEW were provided by MCSD for NIST staff this year. The Division lecture series remained active, with 27 talks presented (five by MCSD staff members); all were open to NIST staff. In addition, a Scientific Object Oriented Programming Users Group, chaired by Stephen Langer, was established. Six meetings of the group have been held.
MCSD staff members also organize workshops, minisymposia, and conferences to provide forums for interaction with external customers. This year, staff members were involved in organizing twelve external events and three internal ones. For example, a very successful workshop was held in late June to discuss the current state of the OOF finite element program and to plan future developments. Approximately 65 OOF users and developers from 5 countries, 9 companies, 18 universities, and 4 national labs attended the two-day workshop. The workshop was co-sponsored by MCSD and the MSEL Center for Theoretical and Computational Materials Science (CTCMS).
Software continues to be a by-product of Division work, and the reuse of such software within NIST and externally provides a means to make staff expertise widely available. Several existing MCSD software packages saw new releases this year, including Zoltan (grid partitioning, joint with Sandia National Laboratories), OOMMF (micromagnetic modeling), OOF (material microstructure modeling), and TNT (Template Numerical Toolkit for numerical linear algebra in C++).
Tools developed by MCSD have led to a number of commercial products. Examples from two past Division projects are f90gl and IMPI. F90gl is a Fortran 90 interface to OpenGL graphics. Originally developed by William Mitchell of MCSD for use in NIST applications, f90gl was subsequently adopted by the industry-based OpenGL Architecture Review Board to define the standard Fortran API for OpenGL. NIST's reference implementation has since been included in commercial products of Lahey Computer Systems, Compaq, NASoftware, and Interactive Software Services. Several others are planned. MCSD staff facilitated the development of the specification for the Interoperable Message Passing Interface (IMPI) several years ago. IMPI extends MPI to permit communication between heterogeneous processors. We developed a Web-based conformance testing facility for implementations. Several commercial implementations are now under development. Several companies, including Hewlett-Packard and MPI Software Technologies demonstrated IMPI on the exhibit floor of the SC'01 conference in Denver in November 2001.
Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has served more than 38 million Web hits since its inception in 1994 (9 million of which have occurred in the past year). The Division server regularly handles more than 11,000 requests for pages each day, serving more than 40,000 distinct hosts on a monthly basis. AltaVista has identified approximately 10,000 external links to the Division server. The seven most accessed ITL Web sites are all services offered by MCSD:
Division staff members continue to make significant contributions to their disciplines through a variety of professional activities. Ronald Boisvert serves as Chair of the International Federation for Information Processing (IFIP) Working Group 2.5 (Numerical Software). He also serves as Vice-Chair of the ACM Publications Board. Donald Porter serves on the Tcl Core Team, which manages the development of the Tcl scripting language. Daniel Lozier serves as chair of the SIAM Special Interest Group on Orthogonal Polynomials and Special Functions.
Division staff members serve on the editorial boards of eleven journals: ACM Transactions on Mathematical Software (R. Boisvert and R. Pozo), Computing in Science & Engineering (I. Beichl), Interfaces and Free Boundaries (G. McFadden), Journal of Computational Methods in Science and Engineering (M. Donahue), Journal of Computational Physics (G. McFadden), Journal of Crystal Growth (G. McFadden), Journal of Numerical Analysis and Computational Mathematics (I. Beichl and W. Mitchell), Journal of Research of NIST (D. Lozier), Mathematics of Computation (D. Lozier), SIAM Journal of Applied Mathematics (G. McFadden), SIAM Journal of Scientific Computing (B. Alpert).
Division staff members also work with a variety of external working groups. Ronald Boisvert and Roldan Pozo chair the Numerics Working Group of the Java Grande Forum. Roldan Pozo chairs the Sparse Subcommittee of the BLAS Technical Forum. Michael Donahue and Donald Porter are members of the Steering Committee of muMag, the Micromagnetic Modeling Activity Group.
In 2001 NIST celebrated its centennial. As part of the celebration, NIST published a centennial volume entitled A Century of Excellence in Measurements, Standards, and Technology: A Chronicle of Selected Publications of NBS/NIST, 1901-2000. The publication highlights approximately 100 highly significant NBS/NIST publications of the last century. CRC Press published this book in the fall of 2001. Four of the highlighted publications are associated with the work of ancestor organizations to MCSD:
R. Boisvert, D. Lozier, D. O'Leary, and C. Witzgall developed vignettes in the published volume describing these publications.
The year 2002 also marks the 50th anniversary of the original Hestenes-Stiefel paper on the conjugate gradient method cited above. This anniversary will be commemorated at a conference on Iterative Methods for Large Linear Systems to be held at the ETH in Zurich in February 2002. MCSD is a joint sponsor of this conference.
MCSD attempts to maximize the impact of its work. In order to do this, it must continually assess the future needs of its customers, as well as the mathematical and computational technologies that can help meet those needs. This is the role of strategic planning. Information gathered in this way is used to set priorities for selecting projects, developing new areas of expertise, and hiring new staff.
MCSD assesses the needs of its customers in a variety of ways.
Advances in mathematical and computational technologies are tracked in the course of a variety of professional activities such as participation in workshops and conferences, monitoring of technical magazines and journals, and consultation with external technical experts.
Many of these planning activities occur on a continuing basis during the year. A formal Division strategic plan was developed in 1999 and will be revisited in 2002. The major themes identified in that plan were the following.
o Measurement and Calibration for the Virtual Sciences
The ordinary industrial user of complex modeling packages has few tools available to assess the robustness, reliability, and accuracy of models and simulations. Without these tools and methods to instill confidence in computer-generated predictions, the use of advanced computing and information technology by industry will lag behind technology development. NIST, as the nation’s metrology lab, is increasingly being asked to focus on this problem.
o Evolving Architecture of Tools, Libraries, and Information Systems for Science and Engineering
Research studies undertaken by laboratories like NIST are often outside the domain of commercial modeling and simulation systems. Consequently, there is a great need for the rapid development of flexible and capable research-grade modeling and simulation systems. Components of such systems include high-level problem specification, graphical user interfaces, real-time monitoring and control of the solution process, visualization, and data management. Such needs are common to many application domains, and re-invention of solutions to these problems is quite wasteful.
The availability of low-cost networked workstations will promote growth in distributed, coarse grain computation. Such an environment is necessarily heterogeneous, exposing the need for virtual machines with portable object codes. Core mathematical software libraries must adapt to this new environment.
All resources in future computing environments will be distributed by nature. Components of applications will be accessed dynamically over the network on demand. There will be increasing need for online access to reference material describing mathematical definitions, properties, approximations, and algorithms. Semantically rich exchange formats for mathematical data must be developed and standardized. Trusted institutions, like NIST, must begin to populate the net with such dynamic resources, both to demonstrate feasibility and to generate demand, which can ultimately be satisfied in the marketplace.
o Emerging Needs for Applied Mathematics
The NIST Laboratories will remain a rich source of challenging mathematical problems. MCSD must continually retool itself to be able to address needs in new application areas and to provide leadership in state-of-the-art analysis and solution techniques in more traditional areas. Many emerging needs are related to applications of information technology. Examples include VLSI design, security modeling, analysis of real-time network protocols, image recognition, object recognition in three dimensions, bioinformatics, and geometric data processing. Applications throughout NIST will require increased expertise in discrete mathematics, combinatorial methods, data mining, large-scale and non-standard optimization, stochastic methods, fast semi-analytical methods, and multiple length-scale analysis.
This year NIST embarked on an Institute-wide strategic planning process called NIST 2010. Four technical areas were identified for emphasis.
In addition, three internal infrastructure areas were identified.
MCSD staff members are currently working with NIST-wide committees to understand current NIST capabilities in these areas and develop specific plans. We will work to align our programs to be able to support these efforts.
In addition to these planning efforts, we have had extensive discussions with management and staff of the NIST Physics Lab related to quantum information, and the NIST Building and Fire Research Lab related to computer-aided construction. Finally, we have exchanged ideas with members of the government-wide Interagency Committee on Extramural Mathematics Programs (ICEMAP), in whose meetings we have participated this past year.
Two new postdoctoral appointments were made during the past year. Katharine Gurski joined MCSD in January 2001 as a National Research Council postdoctoral fellow working with Geoffrey McFadden. She has a Ph.D. in applied mathematics from the University of Maryland, and had a previous postdoctoral appointment at the NASA Goddard Space Flight Center. She has been developing numerical methods for the solution of axisymmetric boundary integral equations for applications in materials science, including dendritic growth. In October 2001 David Daegene Song also began a two-year postdoctoral appointment with MCSD. A recent graduate of Oxford University, where he received a Ph.D. in physics, Song was associated with the Clarendon Laboratory's Center for Quantum Computation. He has been working on issues related to entanglement swapping and the analysis of quantum algorithms.
Raghu Kacker began a one-year detail from the ITL Statistical Engineering Division to MCSD to begin investigation of the mathematical and statistical questions associated with virtual measurement systems. He is also assisting with the DLMF project.
Annette Shives, Secretary for MCSD's Scientific Applications and Visualization Group, retired on September 28, 2001 after 24 years of government service. Yolanda Parker, formerly of the NIST Manufacturing Engineering Laboratory, was hired to take over the administrative operations of the group, as well as to perform new duties related to the operations of the MCSD Visualization Lab.
Three new foreign guest researchers began their terms in MCSD this year: Julien Franiette, Aboubekre Zahid, and F. Pokam. Each is working in the Scientific Applications and Visualization Group. A. Samson and F. Pokam also completed their terms during the year.
MCSD provided support for nine student staff members on summer appointments during FY 2001. Such appointments provide valuable experiences for students interested in careers in mathematics and the sciences. In the process, the students can make very valuable contributions to MCSD programs. This year's students were as follows.
Name, Affiliation | Supervisor | Project
E. Baer, Montgomery Blair High School | A. Kearsley | Numerical and theoretical properties of algorithms for the solution of linear systems were studied. In particular, an application of Morris Newman's work on p-adic arithmetic was implemented.
B. Blaser, Carnegie Mellon Univ. | B. Saunders | Development of graphics for the DLMF project.
D. Cardy, Montgomery Blair High School | F. Hunt | Explored methods for distinguishing coding and non-coding regions of DNA sequences based on the mutual entropy function. His work involved use of GenPatterns, a tool for analyzing statistical patterns in DNA and RNA.
J. Carlson, Dartmouth College | I. Beichl | Developed a probabilistic algorithm to estimate the number of independent sets in a graph. She wrote a Matlab program to do this and applied the results to computing the 2D and 3D hard sphere entropy constants for cubic lattices.
D. Caton, Univ. of Maryland | J. Devaney | Developing an algorithm to recognize images in a large database of images with similar texture characteristics.
S. Copley, Univ. of Colorado | J. Filla | Vislab and scientific visualization support, specializing in nonlinear video editing and 3D stereo data presentation.
R. Jin, Montgomery Blair High School | S. Langer | Applied image analysis techniques to micrographs of materials microstructure, with the goal of developing software for automatic grain boundary detection. The software will be included in the OOF project.
E. Kim, Stanford Univ. | B. Saunders | Development of graphics for the DLMF project.
K. McQuighan, Montgomery Blair High School | T. Kearsley | The theoretical properties of algorithms for quantum computers were studied. In particular, the application of Grover's method to difficult search problems was considered.
Charge density on a computed diffusion-limited cluster aggregate.
In work yet to be published, Alfred Carasso's direct blind deconvolution techniques have been shown capable of producing useful results, in real time, on a wide variety of real blurred images, including astronomical, Landsat and aerial images, MRI and PET brain scans, and electron microscope imagery. A key role is played by a class of functions introduced in the 1930's by Paul Lévy in connection with his fundamental work on the Central Limit Theorem. The potential usefulness in image processing of these so-called Lévy "stable" laws had not previously been suspected.
In the last several years, digital imagery has become pervasive in numerous areas of applied science and technology, and digital image processing has matured into a major discipline within Information Technology. Image processing is now a vast research activity that lies at the intersection of Optics, Electronics, Computer Science and Applied Mathematics. SPIE, IEEE, and SIAM are three major scientific societies that support significant research in this area.
In most cases, image acquisition involves a loss of resolution. This may come about from imperfect optics, from the scattering of photons before they reach their intended target, from turbulent fluctuations in the refractive index while imaging through the atmosphere, from image motion or defocusing, or from a combination of these and a myriad other small aberrations. The resulting acquired image is typically blurred, and this blur, when known, can be described by a point spread function (psf) that mathematically characterizes the cumulative effect of all these distortions. In an idealized imaging system, the psf is the Dirac delta function and has zero spread. In a real system, there is always some point spread, and this delta function typically becomes spread out onto some type of bell-shaped curve. There is considerable interest in improving image resolution by removing some of this blur through computer processing of the given blurred image.
Image deblurring is one of several distinct topics within image processing (image compression is another), and it is one with considerable mathematical content. Deblurring involves deconvolution in an integral equation. This is a notoriously difficult ill-conditioned problem in which data noise can become amplified and overwhelm the desired true solution. Depending on the type of point spread function, this deconvolution problem is mathematically equivalent to an ill-posed initial value problem for a partial differential equation in two space variables. For example, Gaussian psfs, which are ubiquitous in applications, lead to solving the time-reversed heat equation. Other types of parabolic partial differential equations, associated with nonlinear anisotropic diffusion, have recently been advocated as generic image enhancement tools in image processing. That approach, originating in France in the early 1990's, is computationally highly intensive, and has yet to be evaluated. In another direction, probabilistic methods based on Bayesian analysis together with Maximum Likelihood or Maximum Entropy criteria have long been used in Astronomy and Medical Imaging. These are again nonlinear methods that must be implemented iteratively. A characteristic feature of such probabilistic approaches is that large-scale features in the image can typically be reconstructed after one or two dozen iterations, while several thousand further iterations, and several hours of CPU time, are usually necessary to reconstruct fine detail.
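To make the role of regularization in such deconvolution concrete, the following minimal sketch (in Python, using only numpy) applies direct Fourier-domain deconvolution with Tikhonov-style damping to a synthetic Gaussian blur. It illustrates the generic ill-posedness issue only; it is not Carasso's SECB method, and the grid size, psf width, noise level, and damping parameter alpha are invented for the example.

    import numpy as np

    def gaussian_psf(n, sigma):
        # Radially symmetric Gaussian psf on an n x n grid, laid out with
        # wrap-around coordinates so it can be applied by circular (FFT) convolution.
        x = np.fft.fftfreq(n) * n
        X, Y = np.meshgrid(x, x)
        psf = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    def blur(image, psf):
        # Circular convolution of the image with the psf via the FFT.
        return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

    def deblur(blurred, psf, alpha):
        # Direct (non-iterative) deconvolution with Tikhonov-style damping.
        # Without the alpha term, division by tiny values of the optical
        # transfer function H amplifies data noise catastrophically.
        H = np.fft.fft2(psf)
        G = np.fft.fft2(blurred)
        F = np.conj(H) * G / (np.abs(H)**2 + alpha)
        return np.real(np.fft.ifft2(F))

    if __name__ == "__main__":
        n = 64
        true = np.zeros((n, n)); true[24:40, 24:40] = 1.0   # simple test scene
        psf = gaussian_psf(n, sigma=2.0)
        noisy = blur(true, psf) + 1e-3 * np.random.randn(n, n)
        restored = deblur(noisy, psf, alpha=1e-4)
        print("rms error:", np.sqrt(np.mean((restored - true)**2)))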
In many cases, the psf describing the blur is unknown or incompletely known. So-called blind deconvolution seeks to deblur the image without knowing the psf. This is a much more difficult problem in which ill conditioning is compounded with non-uniqueness. Most known approaches to that problem are iterative in nature and seek to simultaneously reconstruct both the psf and the deblurred image. As might be expected, that iterative process can become ill behaved and develop stagnation points or diverge altogether. As a rule, iterative blind deconvolution procedures are not well suited for real-time processing of large size images of complex objects.
Carasso's work in image deblurring has focused on developing reliable direct non-iterative methods, in which Fast Fourier Transform algorithms are used to solve appropriately regularized versions of the underlying ill-posed parabolic equation problem associated with the blur. When the psf is known, Carasso's SECB method can deblur 512 by 512 images in about 1 second of CPU time on current desktop platforms. Moreover, in a recent SIAM Journal on Applied Mathematics paper, Carasso has developed two new direct blind deconvolution techniques, the BEAK method and the APEX method. These methods are based on detecting the signature of the psf from appropriate 1-D Fourier analysis of the blurred image. This detected psf is then input into the SECB method to obtain the deblurred image. When applicable, either of these two distinct blind methods can deblur 512x512 images in less than a minute of CPU time, which makes them highly attractive in real-time applications.
The APEX method is predicated on a class of shift invariant blurs, the class G, which can be expressed as a finite convolution product of radially symmetric two-dimensional Lévy stable density functions. This class includes Gaussians, Lorentzians, and their convolutions, as well as many other kinds of bell-shaped curves with heavy tails. The motivation for using the class G as the framework for the APEX method, lies in previously unrecognized basic work by C. B. Johnson, an electronics engineer who, in the 1970's, discovered non-Gaussian heavy-tailed psfs in a wide variety of electron optical imaging devices. In fact, Carasso has been energetic in making Johnson's work more widely known within the imaging research community, has corresponded with Johnson, and has succeeded in drawing the attention of Mandelbrot, Woyczynski, and Nolan, three eminent specialists on Lévy processes, to Johnson's seminal work. Very recently, Woyczynski has interviewed Johnson in connection with Woyczynski's forthcoming book on Lévy processes in the physical sciences.
APEX method in image sharpening.
(A) Original transverse PET brain image. (B) Enhanced PET image. Bright spots indicating areas of the brain responding to applied external stimuli were barely visible in original image. Here, beta=0.284. (C) Original scanning electron micrograph of mosquito's head showing compound eye. (D) Enhanced image shows increased contrast and brings eye into sharper focus. Here, beta=0.157. (E) Original F-15 plane image. (F) Enhanced image brings out terrain features and condensation trails behind aircraft. Here, beta=0.107.
Lévy densities are characterized by an exponent beta that expresses the degree of departure from the Gaussian density, for which beta=1.0. In physical applications where Lévy densities appear, values of beta less than 0.5 are generally rare. While not all images can be significantly improved with the APEX method, there is a wide class of images for which APEX processing is beneficial. These images have the property that their 1-D Fourier transform traces are globally logarithmically convex. When the APEX method is applied to such an image, a specific value of beta is detected. Typical APEX-detected values of beta are on the order of 0.25. The physical origin of such beta values, if any, is uncertain. However, it is remarkable that useful sharpening of imagery from a wide variety of scientific and technological applications can be accomplished with such heavy-tailed psfs. The appearance of low-exponent stable laws in the present context is of great interest to specialists on Lévy processes. The APEX method is based on ill-posed continuation in diffusion equations involving fractional powers of the Laplacian. Mathematically, such an approach differs fundamentally from currently more popular techniques based on solving well-posed nonlinear anisotropic diffusion equations. Interestingly, the APEX method generally produces sharper imagery, at much lower computing times.
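For reference, a radially symmetric two-dimensional Lévy stable density can be specified through its Fourier transform (optical transfer function), which takes the form exp(-c(xi^2+eta^2)^beta) with 0 < beta <= 1; beta = 1 recovers the Gaussian and beta = 1/2 the Lorentzian (Cauchy) case. The sketch below constructs such psfs numerically; the grid size and the constant c are illustrative assumptions only.

    import numpy as np

    def levy_stable_psf(n, c, beta):
        # Radially symmetric 2-D Levy "stable" point spread function constructed
        # from its Fourier transform exp(-c * (xi^2 + eta^2)**beta), 0 < beta <= 1.
        # beta = 1 reproduces a Gaussian; smaller beta gives heavier tails.
        xi = np.fft.fftfreq(n)                      # frequencies in cycles per pixel
        XI, ETA = np.meshgrid(xi, xi)
        otf = np.exp(-c * (XI**2 + ETA**2)**beta)   # optical transfer function
        psf = np.real(np.fft.ifft2(otf))
        psf = np.fft.fftshift(psf)                  # move the peak to the center
        return psf / psf.sum()

    if __name__ == "__main__":
        n, c = 256, 40.0
        gaussian_like = levy_stable_psf(n, c, beta=1.0)
        heavy_tailed  = levy_stable_psf(n, c, beta=0.25)  # typical APEX-detected exponent
        # Heavy-tailed psfs place much more of their mass far from the peak:
        r = n // 2
        Y, X = np.mgrid[0:n, 0:n]
        far = (X - r)**2 + (Y - r)**2 > 25
        print("fraction of psf mass outside a 5-pixel radius:")
        print("  beta=1.00:", float(gaussian_like[far].sum()))
        print("  beta=0.25:", float(heavy_tailed[far].sum()))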
Future work will explore more fully applications of this technique to NIST imaging problems, as well as to selected problems in other areas.
Alfred S. Carasso
David S. Bright (NIST CSTL)
András E. Vladár (NIST MEL)
Scanning electron microscopes (SEM) are basic research tools in many of NIST's programs in nanotechnology. Moreover, considerable expertise resides at NIST on the theory behind these instruments, as well as on the analysis and interpretation of SEM imagery. David Bright has created the LISPIX image analysis package and has used it to automate electron microscopes. András Vladár is the SEM Project Leader in the Nanoscale Metrology Group, and he has helped define and implement the basic standards for the measurement and monitoring of electron microscope imaging performance. That expertise was vital to the success of this project, which extended over a two-year period and involved well over 1 gigabyte of processed imagery.
A major concern in scanning electron microscopy is the loss of resolution due to image blurring caused by electron beam point spread. The shape of that beam can change over time, and is usually not known to the microscopist. Hence, the point spread function (psf) describing the blur is generally unknown. Nevertheless, there is great interest in improving resolution by reducing this blur. The images we are concerned with come from scanning electron beam instruments such as the field emission gun scanning electron microscope (FEGSEM), a high-resolution instrument, and the environmental scanning electron microscope (ESEM), a lower resolution instrument with more flexible sample handling capability. SEM micrographs are typically large size images of complex objects.
Real-time blind deconvolution of SEM imagery, if achievable, would significantly extend the capability of electron microprobe instrumentation. Previously gained experience with the APEX method on images from very diverse imaging modalities naturally suggests use of this technique. However, SEM imaging differs from other electron-optic imaging, in that the instrument transform I that converts a sample s(x,y) into an image i(x,y) has a nonlinear component, M, which describes the details of the nonlinear interaction between the electrons and the material. M is usually studied by Monte Carlo simulations applied to electron trajectories, but is not readily invertible. The second component of I, call it q, describes blurring due to the electron beam point spread, along with some of the instrument's electronics. That component is often represented as a convolution, so that the SEM micrograph i(x,y) is the convolution of q with M(s(x,y)). The APEX method is a linear deconvolution technique predicated on a restricted class of blurs, the class G, consisting of finite convolution products of radially symmetric Lévy probability density functions. It is by no means obvious that the APEX method is applicable to SEM imagery.
Nevertheless, when the APEX method was applied to a large variety of original SEM micrographs, the method was found to be quite useful in detecting and enhancing fine detail not otherwise discernible. Several examples are shown in the accompanying Figure. In addition, quantitative sharpness analysis of ‘ideal sample’ micrographs, using a methodology originally developed by the NIST Nanoscale Metrology Group to monitor SEM imaging performance, shows that APEX processing can actually produce sharper imagery than is achievable with optimal microscope settings. On such ideal sample micrographs, sharpness increases on the order of 15% were obtained as a result of APEX processing. A crucial element in this work was the marching backwards in time feature of the APEX method, which allows for deconvolution in slow motion. The APEX method sharpens the image, while simultaneously increasing contrast and brightness, by restoring some of the high frequency content that had been attenuated in the course of imaging the sample. Slow motion deconvolution allows the user to terminate the APEX process before brightness, contrast, or noise becomes excessive.
As in all inverse problems, successful use of the APEX method requires a priori knowledge about the solution. Here, such prior knowledge takes the form of training and experience on the part of the microscopist, whose judgment is called upon to distinguish genuine features in the presence of noise and visually select the best reconstruction. Several experienced NIST microscopists were involved in evaluating the merits of APEX processed imagery.
Real-time APEX processing of Scanning Electron Microscope Imagery.
Left column: original SEM micrographs. Right column: after APEX processing. (A) Fly ash particle from Nuclepore filter. (C) Particle from crystalline mercury compound. (E) Dirt particle from air filter. APEX processing increases contrast and brightness as it sharpens the image, and brings out fine scale detail not otherwise discernible.
In the adjoining Figure, the left column contains examples of original SEM micrographs that were input into the APEX method, while the right column contains the corresponding APEX images. All original micrographs were input as 8-bit 512 by 512 images, although smaller subimages are displayed in some cases. These images are part of a wide class of SEM images with globally logarithmically convex 1-D Fourier transform traces. Image (A) is a micrograph of a 2-micron diameter fly ash particle on a Nuclepore filter. That image was scanned from an old Polaroid print taken by John Small (NIST), in the 1970's, on a Cambridge SEM at the University of Maryland. Imperfections on the Polaroid print are detected in the APEX image (B), which also enhances the texture of the sample. Some of that texture may be due to the print rather than to the sample itself. Moreover, the scratch near the upper right corner in image (B) is not discernible in image (A). This example is a useful indicator of the value of APEX processing. Presumably, actual imperfections or small defects in some other sample would have been detected equally well. Also, the APEX image (B) has more depth than the original image, in that the structure in the lower left quadrant now appears closer to the viewer than does the rest of the image.
Image (C) is a 20-micron field of view micrograph of a particle from a complex multi-form crystalline compound of mercury. This particular sample has very complex and varied morphology, in addition to surface dusting or decoration of fine particles almost everywhere. This becomes clearly evident only in the APEX image (D), which contains substantially more information than does image (C). Also, the three-dimensional structure of the particle is particularly well rendered in image (D). Image (E) is a small portion of a 250-micron field of view micrograph of a dust particle from an air vent, consisting of a complex agglomeration of biological and mineral particles. Very striking APEX enhancement is apparent in image (F).
As in the previously mentioned APEX applications, low values of the Lévy exponent beta, typically on the order of 0.25, were detected in these SEM micrographs. Future work will examine possible links between these values of beta and the physics of electron microscopy. Plans are also underway to incorporate APEX processing into the LISPIX package, a NIST-developed image analysis tool that is widely used within the NIST Laboratories. In another direction, the possible use of APEX methodology to produce a new quantitative measure of SEM imaging performance is being explored.
James Lawrence
Computational geometry and image analysis techniques have been applied to photographic images of polymer dewetting under various conditions in order to model the evolution of these materials. This work is in collaboration with MSEL, which has massive amounts of data as a result of combinatorial experimentation and is in great need of automatic techniques for analysis. Methods and software have been devised to evaluate areas of wetness and dryness for their geometric properties, such as deviation of holes from perfect circularity and the distribution of hole centers. We computed Voronoi diagrams of the initial hole centers and are investigating their use as a predictor of later de-wetting behavior.
In dewetting, the samples progress through various states, and we need to determine automatically which state a given image represents. To this end, we have built on statistical techniques developed by D. Naiman and C. Priebe at Johns Hopkins for analyzing medical images. They do this with Monte Carlo methods based on importance sampling for estimating the probabilities of being in various states using many normal images. Their method is a brilliant combination of importance sampling and the Bayesian approach. We have devised methods for determining the probability of states that are a combination of other states, and we have tested our approach on some simple geometric examples. A paper describing our work is being written and will soon be submitted for publication. The true test on real-world data awaits preparation and delivery of data from MSEL, which is in process.
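As a generic illustration of the importance-sampling idea invoked here (not the Naiman-Priebe procedure itself; the densities and the event are invented for the example), a rare-event probability under a nominal density f can be estimated by sampling from a proposal density g concentrated on the event and reweighting each draw by f/g:

    import numpy as np

    rng = np.random.default_rng(1)

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    # Estimate p = P(X > 4) for X ~ N(0,1).  The event is too rare for plain
    # Monte Carlo with modest sample sizes, but importance sampling handles it
    # by drawing from a proposal density g centered on the event and
    # reweighting each draw by the likelihood ratio f/g.
    x = rng.normal(4.0, 1.0, size=100000)                   # draws from g = N(4,1)
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 4.0, 1.0)   # weights f/g
    estimate = np.mean((x > 4.0) * w)

    print("importance sampling estimate of P(X > 4):", estimate)
    # the exact value is about 3.17e-5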
Recently we have begun to extend these techniques to infrared spectral data given to us by PL.
Christoph Witzgall
Javier Bernal
David Gilsinn
During the past decade, laser-scanning technology has developed into a major vehicle for widespread applications such as cartography, bathymetry, urban planning, object detection, and dredge volume determination, to name a few. BFRL is actively investigating the use of that technology for monitoring construction sites. Here laser scans taken from several vantage points are used to construct a surface representing a particular scene. In conjunction with the construction site terrain modeling work currently under way, another aspect of the overall project envisions that CAD-generated geometry sets will be transformed into a library of 3D construction site objects. These objects are then loaded into an augmented simulation system that tracks both equipment and resources based on real-time data from the construction site. With some future enhancements, the end result will be a world model of the site, in which as-built conditions can be assessed, current construction processes can be viewed as they occur, planned sequences of processes can be tested, and object information can be retrieved on demand. A project can be viewed and managed remotely using this tool.
LIDAR technology is currently being tested for locating equipment on construction sites. Three specific areas will be the major concern for this project: a) Literature search for LIDAR based object recognition technology, which has been completed and a report submitted to BFRL. b) Parts tracking support and demonstration project, and c) LIDAR bar code recognition for object identification.
LIDAR-acquired image of a pattern of 25.4 mm (1 in) reflector bar codes. Note the lower three blurred bars.
Blurred LIDAR image of 25.4 mm (1 in) reflector bar codes deconvolved with an averaging filter. Note ringing due to sharp data edges.
David Gilsinn
Christoph Witzgall
John Lavery (Army Research Office)
Methods for gathering terrain data have proliferated during the past decade in both the military and commercial sectors. The rapid development of laser scanning techniques and their application to cartography, bathymetry, urban planning, and construction site monitoring, to name a few, has resulted in a strong push for next generation computational tools for terrain representation. The value of using smooth surfaces for representation of terrain has long been recognized. However, previously available smooth-surface techniques such as polynomial and rational splines, radial basis functions and wavelets require too much data, too much computing time, too much human interaction, and/or do not preserve shape well. Conventional smooth splines have been the main candidate for an alternative to triangular irregular networks (TINs) because of their relative computational simplicity. However, conventional smooth splines are plagued by extraneous, nonphysical oscillation.
Recently (1996-2000), J. Lavery of the Army Research Office (ARO) developed and tested a new class of L1 splines (published in the journal Computer Aided Geometric Design). L1 splines provide smooth, shape-preserving, piecewise polynomial fitting of arbitrary data, including data with abrupt changes in magnitude and spacing, and are calculated by efficient interior-point algorithms (extensions of Karmarkar's algorithm). The L1 spline algorithm used in the terrain approximation code employs a special finite element with a bivariate cubic spline structure function, called a Sibson element, which is not well documented in the literature; a NISTIR documenting the construction of a Sibson element was completed. In collaboration with Lavery, NIST has carried out the first steps in evaluating the accuracy and data-compression capabilities of L1 splines. The goal was to demonstrate that, on simple grids with uniform spacing, L1 splines provide more accurate and compact representation of terrain than do conventional splines and piecewise planar surfaces. The results of this work are to be published in three conference proceedings (J.E. Lavery and D.E. Gilsinn, "Multiresolution Representation of Terrain by Cubic L1 Splines," Trends in Approximation Theory, Vanderbilt University Press; J.E. Lavery and D.E. Gilsinn, "Multiresolution Representation of Urban Terrain by L1 Splines, L2 Splines and Piecewise Planar Surfaces," Proc. 22nd Army Science Conference, 11-13 December 2000, Baltimore, MD; D.E. Gilsinn and J.E. Lavery, "Shape-Preserving, Multiscale Fitting of Bivariate Data by L1 Smoothing Splines," Proc. Conf. Approximation Theory X, St. Louis, MO). These results demonstrated the superiority of the interpolative ability of L1 splines over conventional L2 splines; the superiority of L1 splines over piecewise planar interpolation depended on the measure of closeness.
Comparisons of the performance of L1 splines vs. that of piecewise planar surfaces and of conventional smooth splines have been carried out on sets of open terrain data, such as Ft. Hood DTED data, which include irregularly curved surfaces, steep hillsides and cliffs as well as flat areas (plateaus or bodies of water), and urban terrain data, such as data for downtown Baltimore. The metrics for the comparison are 1) the amount of storage required for meshes and spline parameters, and 2) the accuracy of the representation as measured by RMS error and maximum error. L1 splines will be compared with conventional techniques not only for fitting terrain data that has been "rectified" to regular grids (a standard, but error-rich step in current modeling systems) but also for fitting irregularly spaced "raw" terrain data. Numerical experiments have also been undertaken with the application of smoothing L1 splines to decomposed portions of a larger image, with the intent of stitching the individual splines together in order to recompose the larger image. The resulting spline coefficients at overlapping cells of the subimages were remarkably similar. This initially indicated the potential success of recomposing large images from subimages for which L1 smoothing splines can be computed rapidly through parallel processing. Due to uncertainties about the methods used to prepare certain urban data sets obtained from imaging sources, simulated urban terrain data was created without noise or image uncertainties. L1 smoothing spline approximation then demonstrated the clear difference between conventional spline and L1 approximations, in that the Gibbs phenomenon at sharp discontinuities was clearly visible for conventional splines. The L1 smoothing spline code was also tested on several simulated urban data sets with buildings that included curved sides, quadratic function roofs as well as slanted roofs.
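The contrast between the two formulations can be seen in a one-dimensional discrete analogue, sketched below, in which second differences stand in for the spline roughness term and the L1 problem is posed as a linear program in the spirit of the interior-point approach mentioned above. The data, grid size, and smoothing weight are invented for the illustration, and the actual L1 splines use bivariate cubic Sibson elements rather than this toy model.

    import numpy as np
    from scipy.optimize import linprog

    # Data: a noiseless step, the kind of sharp feature (building edge, cliff)
    # where conventional L2 smoothing exhibits Gibbs-like oscillation.
    n = 40
    y = np.zeros(n); y[n // 2:] = 1.0
    lam = 1.0                                   # smoothing weight (illustrative)

    # Second-difference operator D (discrete stand-in for spline roughness).
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]

    # L2 fit: minimize ||f - y||_2^2 + lam * ||D f||_2^2, a linear system.
    f_l2 = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

    # L1 fit: minimize sum|f - y| + lam * sum|D f|, written as a linear program
    # in variables (f, r, s) with r >= |f - y| and s >= |D f|.
    m = n - 2
    c = np.concatenate([np.zeros(n), np.ones(n), lam * np.ones(m)])
    I = np.eye(n)
    A_ub = np.block([
        [ I, -np.eye(n), np.zeros((n, m))],     #  (f - y) <= r
        [-I, -np.eye(n), np.zeros((n, m))],     # -(f - y) <= r
        [ D, np.zeros((m, n)), -np.eye(m)],     #  D f <= s
        [-D, np.zeros((m, n)), -np.eye(m)],     # -D f <= s
    ])
    b_ub = np.concatenate([y, -y, np.zeros(m), np.zeros(m)])
    bounds = [(None, None)] * n + [(0, None)] * (n + m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    f_l1 = res.x[:n]

    print("max overshoot above the step, L2 fit:", float(f_l2.max() - 1.0))
    print("max overshoot above the step, L1 fit:", float(f_l1.max() - 1.0))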
L1 spline approximation of a simulated urban building complex. Note the sharp edge approximation.
L2 spline approximation of a simulated urban building complex. Note the Gibbs phenomenon at the edges of the buildings.
Fern Hunt
Maria Nadal (NIST PL)
Gary Meyer (University of Oregon)
Harold Westlund (University of Oregon)
Michael Metzler (ISCIENCES Corporation)
http://math.nist.gov/~FHunt/webpar4.html
For some years, computer programs have produced images of scenes based on a simulation of scattering and reflection of light off one or more surfaces in the scene. In response to increasing demand for the use of rendering in design and manufacturing, the models used in these programs have undergone intense development. In particular, more physically realistic models are sought (i.e., models that more accurately depict the physics of light scattering). However there has been a lack of relevant measurements needed to complement the modeling. As part of a NIST competency project entitled "Measurement Science for Optical Reflectance and Scattering", F. Hunt is coordinating the development of a computer rendering system that utilizes high quality optical and surface topographical measurements performed here at NIST. The system will be used to render physically realistic and potentially photorealistic images. Success in this and similar efforts can pave the way to computer based prediction and standards for appearance that can assure the quality and accuracy of products as they are designed, manufactured and displayed for electronic marketing.
The work of the past year has focused on the application of the enhanced rendering program iBRDF that broadens the range of models and optical measurements that can be used to produce computer graphic images of surfaces. This program was developed by Gary Meyer and his student Harold Westlund of the University of Oregon as part of the competency project. F. Hunt worked with Meyer and Westlund on a quantitative evaluation of a selected set of rendered images. These images were compared with the optical measurements performed by Maria Nadal of the Physics Laboratory. Nadal used a measurement protocol worked out with Michael Metzler and Hunt. The protocol is set up so that measurements can be used to parameterize the Beard-Maxwell model for optical scattering and is based on the protocol used in a government database. The objects measured were two metal panels painted with gray metallic paint, one paint consisting of large metallic flakes and the other of small flakes. The goal of this exercise was to establish a metrological basis for a difference in appearance. The figure below shows a digital photograph of the two panels painted with the metallic paints positioned inside a lightbox. The panels are illuminated by lights in the ceiling of the box. The same figure shows a rendering of the panels and the box based on optical measurements of the panel and the walls of the box. The calculations assumed that the lights in the ceiling provided a diffuse and uniform illumination of the samples. Numerical comparison showed good agreement between the model and the measurements that were used to define the parameters of the model and to validate it for out-of-plane measurements. Radiance measurements of the panels were compared with radiance values calculated from the rendering model. Here there was less agreement because the actual light source was in fact quite non-uniform. The simulation did not capture the sudden decrease in sample radiance as the sample is rotated from 45 to 60 degrees with respect to the normal of the floor (i.e., "flop"). When a single light source was assumed in the calculation (reproducing the source used in the laboratory), flop was observed in the calculated radiance values.
The project officially ended in fiscal year 2001. Westlund, Hunt and Meyer are working on a web site that gives an account of the rendering work done during the project. We will also make NIST scattering measurements available to the rendering community.
F. Hunt gave an invited presentation at the ACREO AB Microelectronics and Optics Conference in Kista, Sweden on October 29. It was entitled "Digital Rendering of Surfaces". Harold Westlund and Gary Meyer gave a presentation of their work at SIGGRAPH 2001 in Los Angeles, CA, and at the EuroGraphics Workshop.
Digital photo of a lightbox (left) and a rendered image (right).
Isabel Beichl
Dianne O'Leary
Francis Sullivan (IDA/CCS)
This year, new techniques and software have been developed to estimate the number of independent sets in a graph. A graph is a set of vertices with a set of connections between some of the vertices. An independent set is a subset of the vertices, no two of which are connected.
The problem of counting independent sets arises in data communications, in thermodynamics and in graph theory itself. In data communications it is closely related to issues of reliability of networks. In brief, if failure probabilities are assigned to links, the new methods can be used to estimate the failure probability of the entire network. I. Beichl has consulted with Leonard Miller in the ITL Advanced Networking Technologies Division about applications to network reliability. They believe that the combinatorial counting techniques can be extended to estimate the probability of network failure for very large graphs.
Physicists have used estimates of the number of independent sets to estimate the hard sphere entropy constant, which can be formulated as an independent set problem. This constant is now known analytically in 2D, but no analytical result is known in 3D. Beichl, O'Leary and Sullivan have been able to use their approach to estimate the constant for a 3D cubic lattice. They are now working on the case of an FCC lattice.
I. Beichl, in collaboration with guest researcher F. Sullivan, also discovered that stratified sampling can be used to enhance this program. Stratified sampling is a Monte Carlo technique that divides choices into strata and requires that one sample be chosen from each stratum if possible. They found that with this technique the independent set program could be improved so that many fewer samples are needed. Isabel Beichl, Dianne O'Leary and Francis Sullivan are investigating the connection between this method and standard Markov chain methods for estimating the number of independent sets in a graph.
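The flavor of such sequential Monte Carlo counting can be conveyed by a textbook Knuth-type estimator, sketched below on an invented five-vertex graph; this is not the algorithm developed by Beichl, O'Leary and Sullivan, and it omits the stratified-sampling refinement. Vertices are visited in a fixed order; a vertex blocked by an earlier choice is excluded, any other vertex is included or excluded with probability 1/2, and the product of the number of admissible choices at each vertex is an unbiased estimate of the number of independent sets.

    import random
    from statistics import mean

    # Adjacency list of a small illustrative graph (a 5-cycle).
    graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}

    def one_sample(graph):
        """Grow one independent set vertex by vertex and return an unbiased
        weight: 2 raised to the number of unforced (free) choices made."""
        chosen, weight = set(), 1
        for v in sorted(graph):
            if any(u in chosen for u in graph[v]):
                continue                     # v is blocked: exclusion is forced
            weight *= 2                      # two admissible choices for v
            if random.random() < 0.5:
                chosen.add(v)
        return weight

    random.seed(2)
    estimate = mean(one_sample(graph) for _ in range(100000))
    print("estimated number of independent sets:", estimate)
    print("exact count for the 5-cycle:", 11)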
I. Beichl gave eight invited talks on these Monte Carlo methods in the last year. The team was also invited to make a presentation on this subject at the annual American Mathematical Society meeting.
Leslie Greengard (New York University)
Thomas Hagstrom (University of New Mexico)
Acoustic and electromagnetic waves, including radiation and scattering phenomena, are increasingly modeled using time-domain computational methods, due to their flexibility in handling wide-band signals, material inhomogeneities, and nonlinearities. For many applications, particularly those arising at NIST, the accuracy of the computed models is essential. Existing methods, however, typically permit only limited control over accuracy; high accuracy generally cannot be achieved for reasonable computational cost.
Applications that require modeling of electromagnetic (and acoustic) wave propagation are extremely broad, ranging over device design, for antennas and waveguides, microcircuits and transducers, and low-observable aircraft; nondestructive testing, for turbines, jet engines, and railroad wheels; and imaging, in geophysics, medicine, and target identification. At NIST, applications include the modeling of antennas (including those on integrated circuits), waveguides (microwave and photonic), transducers, and in nondestructive testing.
The objective of this project is to advance the state of the art in electromagnetic computations by eliminating three existing weaknesses with time-domain algorithms for computational electromagnetics to yield: (1) accurate nonreflecting boundary conditions (that reduce an infinite physical domain to a finite computational domain), (2) suitable geometric representation of scattering objects, and (3) high-order convergent, stable spatial and temporal discretizations for realistic scatterer geometries. The project is developing software to verify the accuracy of new algorithms and reporting these developments in publications and at professional conferences.
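A one-dimensional toy computation, sketched below, illustrates item (1): a pulse is propagated to the edge of a truncated domain, where a first-order characteristic (outflow) condition absorbs it instead of reflecting it. This is only the simplest such boundary treatment, not the high-order exact conditions developed in this project, and the grid, time step, and pulse are invented for the example.

    import numpy as np

    # Second-order wave equation u_tt = c^2 u_xx on [0, 1], solved with the
    # standard leapfrog scheme.  The infinite domain is truncated at x = 0 and
    # x = 1, where first-order characteristic (outflow) conditions
    # u_t +/- c u_x = 0 absorb outgoing waves instead of reflecting them.
    c, nx = 1.0, 401
    dx = 1.0 / (nx - 1)
    dt = 0.5 * dx / c                         # CFL number 0.5
    x = np.linspace(0.0, 1.0, nx)

    u_old = np.exp(-300.0 * (x - 0.5) ** 2)   # Gaussian pulse, initially at rest
    u = u_old.copy()

    r2 = (c * dt / dx) ** 2
    for step in range(int(2.0 / (c * dt))):   # long enough for the pulse to exit
        u_new = np.empty_like(u)
        u_new[1:-1] = (2.0 * u[1:-1] - u_old[1:-1]
                       + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        # Absorbing boundaries: one-sided update along the outgoing characteristic.
        u_new[0]  = u[0]  + (c * dt / dx) * (u[1]  - u[0])
        u_new[-1] = u[-1] - (c * dt / dx) * (u[-1] - u[-2])
        u_old, u = u, u_new

    print("max residual field after the pulse leaves:", float(np.abs(u).max()))
    # With reflecting (Dirichlet) boundaries this residual would be O(1).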
This year the paper "Lattice Sums and the Two-Dimensional, Periodic Green's Function for the Helmholtz Equation" by Dienstfrey, Hang, and Huang, which treats the solution of problems in periodic media, appeared in Proc. Roy. Soc. Lond. A 457, 67-85 (2001). The paper "Nonreflecting Boundary Conditions for the Time-Dependent Wave Equation" by Alpert, Greengard, and Hagstrom, submitted for publication, demonstrates the efficacy of the recently developed nonreflecting boundary conditions through their implementation in wave-propagation software, and compares them to the perfectly matched layer (PML) technique due to Berenger. In addition, this year the project continued to investigate discretization issues that arise in complicated geometry, leading to new quadrature and interpolation techniques still under development.
The work of the project is supported in part by the Defense Advanced Research Projects Agency (DARPA). The work has been recognized by researchers developing methods for computational electromagnetics (CEM) and has influenced work on these problems at Boeing and HRL (formerly Hughes Research Laboratories). It has also influenced researchers at Yale University and University of Illinois. In each of these cases, new research in time-domain CEM is exploiting discoveries of the project. In particular, some efforts for the new DARPA program on Virtual Electromagnetic Testrange (VET) are incorporating these developments. We expect that design tools for the microelectronics industry and photonics industry, which increasingly require accurate electromagnetics modeling, will also follow.
Michael Donahue
Donald Porter
Robert McMichael (NIST MSEL)
Jason Eicke (George Washington University)
http://www.ctcms.nist.gov/~rdm/mumag.html
The engineering of such IT storage technology as patterned magnetic recording media, GMR sensors for read heads, and magnetic RAM (MRAM) elements requires an understanding of magnetization patterns in magnetic materials at the nanoscale. Mathematical models are required to interpret measurements at this scale. The Micromagnetic Modeling Activity Group (muMAG) was formed to address fundamental issues in micromagnetic modeling through two activities: the definition and dissemination of standard problems for testing modeling software, and the development of public domain reference software. MCSD staff is engaged in both of these activities. The Object-Oriented MicroMagnetic Framework (OOMMF) software package is a reference implementation of micromagnetic modeling software. Achievements in this area since October 2000 include the following.
Scientific contributions
Stephen Langer
Andrew Reid (MIT/NIST)
Andrew Roosen (NIST MSEL)
Edwin Fuller (NIST MSEL)
Craig Carter (MIT)
Edwin Garcia (MIT)
Robert Kang-Xing Jin (Montgomery Blair High School)
http://www.ctcms.nist.gov/oof/
The OOF project, a collaborative venture of MCSD, MSEL's Ceramics Division, the Center for Theoretical and Computational Materials Science, and MIT, is developing software tools for analyzing real material microstructure. The microstructure of a material is the (usually) complex ensemble of polycrystalline grains, second phases, cracks, pores, and other features occurring on length scales large compared to atomic sizes. The goal of OOF is to use data from a micrograph of a real material to compute the macroscopic behavior of the material via finite element analysis.
OOF is composed of two programs, oof and ppm2oof, which are available as binary files and as source code on the OOF website. From December 2000 through November 2001, ppm2oof was downloaded 1369 times and oof 1182 times. The source code was downloaded 610 times, and a conversion program, oof2abaqus, was downloaded 141 times. The OOF mailing list (as of 12/4/01) has 268 members.
In June 2001, an OOF workshop was held at NIST. Approximately 70 researchers from nine different corporations, four different government laboratories, 18 universities, and five countries attended. The OOF developers presented the current state of the software and plans for the future, while the users spoke about the numerous ways in which the software is being used. Topics ranged from ceramic coatings on turbine blades to marble degradation and paint blistering. A final discussion session provided useful feedback for further code development.
Technical achievements during FY01 included:
Geoffrey B. McFadden
William Boettinger (NIST MSEL)
John Cahn (NIST MSEL)
Sam Coriell (NIST MSEL)
Jonathan Guyer (NIST MSEL)
James Warren (NIST MSEL)
Daniel Anderson (George Mason University)
B. Andrews (University of Alabama)
Richard Braun (University of Delaware)
Bruce Murray (SUNY Binghamton)
Robert Sekerka (Carnegie Mellon University)
G. Tonaglu (Izmir Institute of Technology, Turkey)
Adam Wheeler (University of Southampton, UK)
Mathematical modeling provides a valuable means of understanding and predicting the properties of materials as a function of the processing conditions by which they are formed. During the growth of alloy crystals from the melt, the homogeneity of the solid phase is strongly influenced by conditions near the solid-liquid interface, both in terms of the geometry of the interface and the arrangements of the local temperature and solute fields near the interface. Instabilities that occur during crystal growth can cause spatial inhomogeneities in the sample that can significantly degrade the mechanical and electrical properties of the crystal. Considerable attention has been devoted to understanding and controlling these instabilities, which generally include interfacial and convective modes that are challenging to model by analytical or computational techniques.
A well-established collaborative effort between the Mathematical and Computational Sciences Division and the Metallurgy Division of the Materials Science and Engineering Laboratory has included support from the NASA Microgravity Research Program as well as extensive interaction with university and industrial researchers. In the past year a number of projects have been undertaken that address outstanding issues in materials processing through mathematical modeling.
G. McFadden collaborated with W. Boettinger, J. Warren, and J. Guyer (MSEL) on an extension of recently developed diffuse-interface models of solidification to include electrical effects during deposition processes. The resulting model of electrodeposition is intended to treat the free boundary between the electrolyte and metal electrode, and includes equations for charged species and electrical potential.
G. McFadden collaborated with John Cahn (MSEL), R. Braun (U. Delaware), and G. Tonaglu (Izmir Institute of Technology, Turkey) on a model of order-disorder transitions in a face-centered-cubic binary alloy. This work includes improved models for the free energy of the system, which lead to more accurate representations of the solid-state phase transitions that occur in such materials. The work has been submitted for publication in Acta Materialia.
G. McFadden is also a participant in a new project on the evolution and self-assembly of nanoscale quantum dots, in collaboration with researchers at Northwestern University. Self-assembly of quantum dots, which offer interesting electronic properties through quantum confinement of electrons, can be achieved spontaneously during heteroepitaxy (controlled deposition of one material upon another). In this project, the effects of anisotropic surface energy and substrate elasticity will be studied in order to understand the underlying physics and nonlinear nature of the dynamics of the self-organization process.
Other related work in this period included collaboration with Professor R. Sekerka, Carnegie Mellon University, on a model of dendritic growth for two-component metallic alloys, which has been written up as a short note. A short review on applications of stability theory for a solid-liquid interface, in collaboration with S. Coriell (MSEL), is also in press. McFadden also collaborated with Coriell and B. Murray (SUNY Binghamton) on developing a model for interfacial instabilities during the cooperative growth of monotectic materials. This work is in support of research by B. Andrews, University of Alabama in Birmingham, who is planning an experiment in monotectic growth on board the US Space Station. McFadden also hosted an extended visit by Professor A. Wheeler, University of Southampton, UK. McFadden and Wheeler completed a study of the Gibbs adsorption equation in the context of diffuse interface theory. The Gibbs adsorption equation provides a description of the dependence of the interfacial surface energy on other thermodynamical parameters in the system. A manuscript describing this work has been accepted for publication in the Proceedings of the Royal Society of London.
Katharine Gurski
Geoffrey McFadden
Dendritic growth is commonly observed during many materials processing techniques, including the casting of alloys for industrial applications.
The prediction of the associated length scales and degrees of segregation for dendritic growth is essential to design and control materials processing technology. We are developing numerical methods for the solution of axisymmetric boundary integral equations for applications of potential theory in materials science, including dendritic growth. The goal is to create a stable, computationally feasible numerical simulation of axisymmetric dendrite growth for a pure material. Our efforts are directed toward the removal of computational difficulties that have plagued previous attempts to create a model of more than two dimensions by using a sharp interface model with an axisymmetric boundary integral method that incorporates fast algorithms and iterative solvers.
This project is in the early developmental stage. We are still investigating effects of different polynomial approximations and integration methods on the stability of the numerical method.
Timothy Burns
Debasis Basak (NIST MSEL)
Matthew Davies (NIST MEL)
Brian Dutterer (NIST MEL)
Richard Fields (NIST MSEL)
Michael Kennedy (NIST MEL)
Lyle Levine (NIST MSEL)
Robert Polvani (NIST MEL)
Richard Rhorer (NIST MEL)
Tony Schmitz (NIST MEL)
Howard Yoon (NIST PL)
This is an ongoing collaboration on the modeling and measurement of machining processes with researchers in the Manufacturing Process Metrology Group in the Manufacturing Metrology Division (MMD) in MEL. The mission of MMD is to fulfill the measurements and standards needs of the U.S. discrete-parts manufacturers in mechanical metrology and advanced manufacturing technology.
Most manufacturing operations involve the plastic working of material to produce a finished component. One way to classify these plastic deformation processes is by the order of magnitude of the rate of deformation, or strain rate. Forming, rolling, and drawing involve relatively low strain rates (less than 10^3 s^-1), while high-speed stamping, punching and machining can involve strain rates as high as 10^6 s^-1 or more. Annual U.S. expenditures on machining operations alone total more than $200B, or about 2% of the Gross Domestic Product (GDP). Currently, process parameters are chosen by costly trial-and-error prototyping, and the resulting choices are often sub-optimal. A recent survey by the Kennametal Corporation has found that industry chooses the correct tool less than 50% of the time.
Pressure from international competition is driving industry to seek more sophisticated and cost-effective means of choosing process parameters through modeling and simulation. While there has been significant progress in the predictive simulation of low-strain-rate manufacturing processes, there is presently a need for better predictive capabilities for high-rate processes. The main limitations are current measurement capabilities and lack of good material response data. Thus, while commercial finite-element software provides impressive qualitative results, data to validate these results are nearly nonexistent. Without serious advances in metrology, it is likely that industry will lose faith in this approach to modeling.
The main goal of our current efforts, which are in the second year of a three-year program supported in large part by intramural ATP funding, is to develop the capability to obtain and validate the material response data that are critical for accurate simulation of high-strain-rate manufacturing processes. Although the focus of this project is machining, the material response data will be broadly applicable. Success in this project will advance the state-of-the-art in two areas: (1) fundamental advanced machining metrology and simulation; and (2) measurement of fundamental data on the behavior of materials at high strain rates (material-response-data) needed for input into machining (and more broadly mechanical manufacturing) simulations. A longer-term, higher-risk objective of this effort is the development of new test methods that use idealized machining configurations to measure high-strain-rate material response.
Related work this year has involved research with M.A. Davies and T.L. Schmitz in MEL on the analysis of the stability of high-speed machining operations in which the tool contacts the workpiece only intermittently.
Fern Y. Hunt
Anthony J. Kearsley
Agnes O'Gallagher
Honghui Wan (National Center for Biotechnology Information, NIH)
Antti Pesonen (VTT, Helsinki, Finland)
Daniel J. Cardy (Montgomery Blair High School)
http://math.nist.gov/~FHunt/GenPatterns/
Computational biology is currently experiencing explosive growth in its technology and industrial applications. Mathematical and statistical methods dominated the development of the field, but as the emphasis on high throughput experiments and analysis of genetic data continues, computational techniques have also become essential. We seek to develop generic tools that can be used to analyze and classify protein and base sequence patterns that signal potential biological functionality.
Database searches of protein sequences are based on algorithms that find the best matches to a query sequence, returning both the matches and the query in a linear arrangement that maximizes underlying similarity between the constituent amino acid residues. Dynamic programming is used to create such an arrangement, known as an alignment. Very fast algorithms exist for aligning two or more sequences if the possibility of gaps is ignored. Gaps are hypothesized insertions or deletions of amino acids that express mutations that have occurred over the course of evolution. The alignment of sequences with such gaps remains an enormous computational challenge. We are currently experimenting with an alternative approach based on Markov decision processes. The optimization problem associated with alignment then becomes a linear programming problem, and it is amenable to powerful and efficient techniques for solution. Taking a database of protein sequences (cytochrome p450) as a test case, we have developed a method of using sequence statistics to build a Markov decision model, and currently the model is being used to solve the linear program for a variety of cost functions. We are creating software for multiple sequence alignment based on these ideas.
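For readers unfamiliar with the dynamic-programming step mentioned above, the sketch below performs a textbook Needleman-Wunsch global alignment of two short invented sequences with simple match/mismatch/gap scores chosen for illustration; it is not the Markov-decision-process formulation under development here.

    def align(a, b, match=1, mismatch=-1, gap=-2):
        """Global alignment by dynamic programming (Needleman-Wunsch)."""
        n, m = len(a), len(b)
        # score[i][j] = best score aligning a[:i] with b[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        # Trace back from the corner to recover one optimal alignment.
        out_a, out_b, i, j = [], [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + \
                    (match if a[i - 1] == b[j - 1] else mismatch):
                out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                out_a.append(a[i - 1]); out_b.append("-"); i -= 1
            else:
                out_a.append("-"); out_b.append(b[j - 1]); j -= 1
        return "".join(reversed(out_a)), "".join(reversed(out_b)), score[n][m]

    print(align("GATTACA", "GCATGCA"))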
Work has also continued on another project involving the program GenPatterns. The software computes and visually displays DNA or RNA subsequence frequencies and their recurrence patterns. Bacterial genomes and chromosome data can be downloaded from GENBANK and computations can be performed and displayed using a variety of user options including creating Markov models of the data. A demonstration can be found at the project website.
GenPatterns and the software developed from the alignment project are now part of the NIST Bioinformatics/Computational Biology software website currently being constructed under the direction of T.N. Bhat of the Chemical Science and Technology Laboratory (CSTL).
Roldan Pozo
BLAS Technical Forum
http://www.netlib.org/blas/blast-forum/
NIST is playing a leading role in the new standardization effort for the Basic Linear Algebra Subprograms (BLAS) kernels for computational linear algebra. The BLAS Technical Forum (BLAST) is coordinating this work. BLAST is an international consortium of industry, academia, and government institutions, including Intel, IBM, Sun, HP, Compaq/Digital, SGI/Cray, Lucent, Visual Numerics, and NAG.
One of the most anticipated components of the new BLAS standard is support for sparse matrix computations. R. Pozo chairs the Sparse BLAS subcommittee. NIST was the first to develop and release public-domain reference implementations for early versions of the standard; this work helped shape the final standard, which was released this year.
The new BLAS standard, which includes the Sparse BLAS component, has been finalized and was submitted to the International Journal of High Performance Computing Applications. Several companion papers on implementation and design of the new BLAS were submitted to ACM Transactions on Mathematical Software. Implementations of the Sparse BLAS in Fortran 95 are currently available on the Web, and the C implementation is currently being developed.
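The central computational kernel that the Sparse BLAS standardizes is the sparse matrix-vector product. As a language-neutral illustration of the underlying computation (a plain compressed-sparse-row sketch, not the Sparse BLAS interface itself, with a small invented matrix):

    import numpy as np

    def csr_matvec(values, col_index, row_start, x):
        """y = A x for a matrix stored in compressed sparse row (CSR) form:
        values and col_index hold the nonzeros row by row, and row_start[i]
        marks where row i begins in those arrays."""
        n = len(row_start) - 1
        y = np.zeros(n)
        for i in range(n):
            for k in range(row_start[i], row_start[i + 1]):
                y[i] += values[k] * x[col_index[k]]
        return y

    # A small example:  A = [[4, 0, 1],
    #                        [0, 3, 0],
    #                        [2, 0, 5]]
    values    = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
    col_index = np.array([0, 2, 1, 0, 2])
    row_start = np.array([0, 2, 3, 5])
    x = np.array([1.0, 2.0, 3.0])
    print(csr_matvec(values, col_index, row_start, x))   # expect [ 7.  6. 17.]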
Roldan Pozo
NIST has a history of developing some of the most visible object-oriented linear algebra libraries, including Lapack++, Iterative Methods Library (IML++), Sparse Matrix Library (SparseLib++), Matrix/Vector Library (MV++), and most recently the Template Numerical Toolkit (TNT).
TNT incorporates many of the ideas we have explored with previous designs, and includes new techniques that were difficult to support before the ANSI C++ standardization. The library includes support for both C and Fortran array layouts, array sections, basic linear algebra algorithms (LU, Cholesky, QR, and eigenvalues) as well as primitive support for sparse matrices.
TNT has enjoyed several thousand downloads and is currently in use in several industrial applications. This year there were two software updates to the TNT package, as well as current development work on a new array interface for multidimensional arrays compatible with C and Fortran storage layouts.
William F. Mitchell
Finite element methods using adaptive refinement and multigrid techniques have been shown to be very efficient for solving partial differential equations on sequential computers. Adaptive refinement reduces the number of grid points by concentrating the grid in the areas where the action is, and multigrid methods solve the resulting linear systems in an optimal number of operations. W. Mitchell has been developing a code, PHAML, to apply these methods on parallel computers. The expertise and software developed in this project are useful for many NIST laboratory programs, including material design, semiconductor device simulation, and the quantum physics of matter.
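The adapt-and-solve cycle can be illustrated by a minimal one-dimensional sketch, given below, in which interpolation of a function with a sharp interior layer stands in for the PDE solve; the error indicator, refinement rule, and tolerance are invented for the example and are not those used in PHAML.

    import numpy as np

    def f(x):
        # Model solution with a sharp interior layer near x = 0.3.
        return np.tanh(60.0 * (x - 0.3))

    # Start from a coarse uniform grid and repeatedly bisect the intervals with
    # the largest estimated error: points accumulate where the solution varies
    # rapidly, which is the essence of adaptive refinement.
    grid = list(np.linspace(0.0, 1.0, 6))
    tol = 1e-3
    for sweep in range(40):
        # Error indicator per interval: deviation of f at the midpoint from the
        # linear interpolant of the endpoint values.
        indicators = []
        for a, b in zip(grid[:-1], grid[1:]):
            mid = 0.5 * (a + b)
            indicators.append(abs(f(mid) - 0.5 * (f(a) + f(b))))
        worst = max(indicators)
        if worst < tol:
            break
        # Refine (bisect) every interval whose indicator is within 30% of the worst.
        new_grid = []
        for (a, b), err in zip(zip(grid[:-1], grid[1:]), indicators):
            new_grid.append(a)
            if err > 0.3 * worst:
                new_grid.append(0.5 * (a + b))
        new_grid.append(grid[-1])
        grid = new_grid

    grid = np.array(grid)
    print("number of points:", len(grid))
    print("points inside the layer [0.25, 0.35]:",
          int(np.sum((grid > 0.25) & (grid < 0.35))))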
This year saw three major activities on this project. The first is a collaboration with Sandia National Laboratories to develop Zoltan, a dynamic load balancing library. NIST's contributions to Zoltan are the implementation of a Fortran 90 interface to the library, and the implementation of the K-Way Refinement Tree (RTK) partitioning method, which was developed as part of PHAML. Second is the completion of an initial version of the PHAML software to be released to the public. Third is the application of PHAML to solve Schrödinger's Equation in collaboration with the Quantum Processes group of NIST's Atomic Physics division. Among the accomplishments this year are the following.
Modified the RTK code in Zoltan to agree with design changes from Sandia.
Made several performance, capability and software improvements to the Zoltan RTK code.
The first public release of Zoltan occurred in February 2001.
Performed experiments to compare RTK with other partitioning methods in Zoltan.
Made several modifications to PHAML to bring it closer to being releasable, including improvements to generality and usability of the graphics, generality of the initial grid, robustness of error handling, ease of the
Began writing a user's guide for PHAML.
Parallelized the eigensolver in PHAML.
Developed and implemented a new method for finding interior eigenvalues.
Applied PHAML to the solution of interior eigenvalues of Schrödinger's Equation with realistic parameters.
Ronald Boisvert
Roldan Pozo
Bruce Miller
http://math.nist.gov/javanumerics/
http://math.nist.gov/scimark/
Java, a network-aware programming language and environment developed by Sun Microsystems, has already made a huge impact on the computing industry. Recently there has been increased interest in the application of Java to high performance scientific computing. MCSD is participating in the Java Grande Forum (JGF), a consortium of companies, universities, and government labs who are working to assess the capabilities of Java in this domain, and to provide community feedback to Sun on steps that should be taken to make Java more suitable for large-scale computing. The JGF is made up of two working groups: the Numerics Working Group and the Concurrency and Applications Working Group. The former is co-chaired by R. Boisvert and R. Pozo of MCSD. Among the institutions participating in the Numerics Working Group are: IBM, Intel, Least Squares Software, NAG, Sun, Visual Numerics, Waterloo Maple, Florida State University, the University of Karlsruhe, the University of Tennessee at Knoxville, and the University of Westminster.
Earlier recommendations of the Numerics Working Group were instrumental in the adoption of a fundamental change in the way floating-point numbers are processed in Java. This change will lead to significant speedups to Java code running on Intel microprocessors like the Pentium. The working group also advised Sun on the specification of elementary functions in Java, which led to improvements in Java 1.3. The specification of the elementary functions was relaxed to tolerate errors of up to one unit in the last place, permitting more efficient implementations to be used. A parallel library, java.lang.StrictMath, was introduced to provide strictly reproducible results.
The Numerics Working Group has now begun work on a series of formal Java Specification Requests for language extensions, including a fast floating-point mode and a standardized class and syntax for multidimensional arrays.
This year, MCSD staff presented the findings of the Working Group in a variety of forums, including
Seminar for Java for High End Computing, Edinburgh Parallel Computing Center, Edinburgh, Scotland (November 2000)
Hewlett-Packard High Performance Computer Users Group meeting, San Mateo, CA (March 2001)
SIAM Conference on Parallel Processing for Scientific Computing, Portsmouth, VA (March 2001)
IFIP Working Group 2.5 meeting, Amsterdam (May 2001)
JavaOne Conference, San Francisco, CA (June 2001)
NERSC, Lawrence Berkeley Labs, Berkeley, CA (June 2001)
MCSD staff also worked on the organization of a number of events related to Java.
Roldan Pozo organized an invited full-day shortcourse on Java for High Performance Computing for the SIAM Conference on Parallel Processing for Scientific Computing, which was held in Portsmouth, VA in March 2001. He and Ronald Boisvert, as well as three other speakers, presented.
Ronald Boisvert was a member of the Program Committee of the ACM Java Grande / ISCOPE conference, which was held at Stanford University in June 2001. Roldan Pozo was Publicity Chair for the Conference.
Ronald Boisvert and Roldan Pozo co-chaired a half-day meeting of the Java Numerics Working group, which was held in conjunction with the Java Grande Conference.
Roldan Pozo was on the Program Committee for the Workshop on Java in High Performance Computing held in conjunction with the HPCN 2001 conference, held in Amsterdam in June 2001.
Boisvert and Pozo were co-authors with José Moreira (IBM) and Michael Philippsen (University of Karlsruhe) of an invited survey article on Numerical Computing in Java, which appeared in the March/April 2001 issue of Computing in Science and Engineering.
The NIST SciMark benchmark continues to be widely used. SciMark includes computational kernels for FFTs, SOR, Monte Carlo integration, sparse matrix multiply, and dense LU factorization, comprising a representative set of computational styles commonly found in numeric applications. SciMark can be run interactively from Web browsers, or can be downloaded and compiled for stand-alone Java platforms. Full source code is provided. The SciMark result is recorded as megaflop rates for the numerical kernels, as well as an aggregate score for the complete benchmark. The current database lists results for more than 1300 computational platforms, from laptops to high-end servers. As of December 2001, the record for SciMark is 275 Mflops, a 68% improvement over the best reported one year ago (164 Mflops).
NIST continues to distribute the JAMA linear algebra class for Java that it developed in collaboration with the MathWorks several years ago. More than 8,000 copies of this software have been downloaded from the NIST web site.
Boisvert and Pozo received a Department of Commerce Bronze medal in December 2001 in recognition of their leadership in this area.
Ronald Boisvert
Joyce Conlon
Marjorie McClain
Bruce Miller
Roldan Pozo
MCSD continues to provide Web-based information resources to the computational science research community. The first of these is the Guide to Available Mathematical Software (GAMS). GAMS is a cross-index and virtual repository of some 9,000 mathematical and statistical software components of use in science and engineering research. It catalogs software, both public domain and commercial, that is supported for use on NIST central computers by ITL, as well as software assets distributed by netlib. While the principal purpose of GAMS is to provide NIST scientists with information on software available to them, the information and software it provides are of great interest to the public at large. GAMS users locate software via several search mechanisms. The most popular of these is the use of the GAMS Problem Classification System. This system provides a tree-structured taxonomy of standard mathematical problems that can be solved by extant software. It has also been adopted for use by major math software library vendors.
A second resource provided by MCSD is the Matrix Market, a visual repository of matrix data used in the comparative study of algorithms and software for numerical linear algebra. The Matrix Market database contains more than 400 sparse matrices from a variety of applications, along with software to compute test matrices in various forms. A convenient system for searching for matrices with particular attributes is provided. The web page for each matrix provides background information, visualizations, and statistics on matrix properties.
Web resources developed by MCSD continue to be among the most popular at NIST. The MCSD Web server at math.nist.gov has serviced more than 38 million Web hits since its inception in 1994 (9 million of which have occurred in the past year). The Division server regularly handles more than 11,000 requests for pages each day, serving more than 40,000 distinct hosts on a monthly basis. AltaVista has identified approximately 10,000 external links to the Division server. The top seven ITL Web sites are all services offered by MCSD.
The GAMS home page is downloaded more than 25,000 times per month by some 15,000 distinct hostnames. During a recent 36-month period, 34 prominent research-oriented companies in the .com domain registered more than 100 visits apiece to GAMS. The Matrix Market sees more than 100 users each day. It has distributed more than 35 Gbytes of matrix data, including nearly 100,000 matrices, since its inception. The Matrix Market is mirrored in Japan and Korea. GAMS has a Korean mirror.
William George
John Hagedorn
Judith Devaney
The Message Passing Interface (MPI) is the de facto standard for writing parallel scientific applications in the message-passing programming paradigm. MPI suffers from two limitations: lack of interoperability among vendor MPI implementations and lack of fault tolerance. For long-term viability, MPI needs both. The Interoperable MPI protocol (IMPI) standard addresses the interoperability issue. It extends the power of MPI by allowing applications to run on heterogeneous clusters of machines with various architectures and operating systems, each of which in turn can be a parallel machine, while allowing the program to use a different implementation of MPI on each machine. This is accomplished without requiring any modifications to the existing MPI specification. That is, IMPI does not add, remove, or modify the semantics of any of the existing MPI routines. All current valid MPI programs can be run in this way without any changes to their source code.
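The key point is that IMPI requires no source changes: a standard MPI program, such as the minimal sketch below (an illustrative example, not part of the IMPI specification or conformance suite), runs identically whether its processes are served by a single MPI implementation or by several interoperating ones.

    #include <stdio.h>
    #include <mpi.h>

    /* A minimal MPI program; under IMPI it can span several vendors'
       MPI implementations without any change to this source code. */
    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Process %d of %d is alive\n", rank, size);

        MPI_Finalize();
        return 0;
    }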
NIST, at the request of computer vendors, facilitated the specification of the IMPI standard, and built a conformance tester. The IMPI standard was adopted in March 2000; the conformance tester was completed at the same time. The conformance tester is a web-based system that sets up a parallel virtual machine between NIST and the testers, that is, the vendor implementers of MPI. The conformance test suite contains over a hundred tests and exercises all parts of the IMPI protocol. Results are returned via a web page. The IMPI standard was published in the May-June 2000 issue of the NIST Journal of Research.
In 2001 we have provided assistance to active vendor implementers of IMPI by initiating and coordinating on-line discussions, between MPI vendors, of several aspects of the IMPI protocols. This was needed to clarify the intent of the specification in several areas. A minor error in one of the collective communications algorithms was discovered by one of the vendors. This was fixed and documented with an entry in the IMPI errata as well as by the addition of extra conformance tests to confirm the correct operation of the algorithm.
An IMPI application (computing the Mandelbrot set) demonstrated by vendors at SC2001.
During 2001, the IMPI protocols were fully implemented in the MPI libraries of Hewlett-Packard and Fujitsu. Most of IMPI is supported in the latest library from LAM/MPI (Univ. of Indiana). MPI Software Technology will have full IMPI support in their commercial MPI/Pro library for MS Windows and Linux early in 2002. A Phase II SBIR in the amount of $289,568 was awarded to MPI Software Technology to continue the development of a dynamic communications algorithm tuner specifically for IMPI software. IMPI software was on display on the vendor exhibition floor at the SC2001 conference held in Denver in November 2001 and IMPI was mentioned in several product pamphlets. Several vendors are discussing the possibility of a demonstration of IMPI for the SC2002 conference exhibition (Nov 2002) in Baltimore. This demonstration would include machines from each of the implementers of IMPI and would demonstrate IMPI applied to a production parallel code. Extensions to IMPI to accommodate MPI-2 may be proposed by MPI Software Technology as they gain more experience with IMPI.
We have submitted an article on IMPI to Dr. Dobb's Journal, at their invitation, and it has been accepted for publication.
James Sims
Stanley Hagstrom (Indiana University)
Exact analytical solutions to the Schrödinger equation, which determines quantities such as energies, are known only for atomic hydrogen and other equivalent two-body systems. Thus, for any atomic system other than hydrogen, approximate solutions must be determined numerically. This year James Sims and Stanley Hagstrom computed the nonrelativistic energy for the ground singlet S state of neutral helium (a two-electron system) to higher accuracy than had ever been achieved before, using the Hy-CI method, which they developed.
In a series of papers between 1971 and 1976, Sims and Hagstrom used the method to compute not only energy levels, but also other atomic properties such as ionization potentials, electron affinities, electric polarizabilities, and transition probabilities of two, three, and four electron atoms and other members of their isoelectronic sequences. The technique is still being used today. In 1996, a review article in Computational Chemistry declared this method nearly impossible to use for more than three or four electrons. Sims and Hagstrom believe that, while that may have been true in 1996, it is no longer true today due to the availability of inexpensive CPUs which can be connected in parallel to enhance both the CPU power and the memory that can be brought to bear on the computational task. To demonstrate the capability of the Hy-CI technique in a modern computing environment with parallel processing and multiprecision arithmetic, Sims and Hagstrom undertook to calculate the nonrelativistic energy for the ground singlet S state of neutral helium (a two-electron problem).
They have computed the energy to be -2.9037 2437 7034 1195 9829 99 a.u. This represents the most accurate computation of this quantity to date. Comparisons with other calculations and an energy extrapolation yield an estimated accuracy of 20 decimal digits. To obtain a result of such high precision, a very large basis set had to be used. In this case, variational expansions of the wave function with 4,648 terms were employed, leading to the need for very large computational resources. Such large expansions also lead to problems of linear dependence, which can only be remedied by using higher precision arithmetic than is provided by standard computer hardware. For this computation, 192-bit precision (roughly 48 decimal places) was necessary, and special coding was required to simulate hardware with this precision. Parallel processing was also employed to speed the computation, as well as to provide access to enough memory to accommodate larger expansions. NIST's Scientific Computer Facility cluster of 16 PCs running Windows NT was utilized for the parallel computation. Typical run times for a calculation of this size are about 8 hours on a single CPU, but only 30-40 minutes on the parallel processing cluster.
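The extended precision mentioned above must be synthesized in software. One standard building block for such multi-word arithmetic is the error-free "two-sum" transformation sketched below; this is offered only as an illustration of the general technique, not as the 192-bit code actually used in the helium calculation.

    #include <stdio.h>

    /* Error-free transformation: returns s and err such that
       a + b == s + err exactly, with s the correctly rounded sum.
       Chaining such operations lets software carry more precision
       than the hardware's 53-bit doubles provide. */
    static void two_sum(double a, double b, double *s, double *err)
    {
        double bb;
        *s   = a + b;
        bb   = *s - a;
        *err = (a - (*s - bb)) + (b - bb);
    }

    int main(void)
    {
        double s, err;
        two_sum(1.0, 1e-17, &s, &err);   /* 1e-17 would be lost in s alone */
        printf("s = %.17g, err = %.17g\n", s, err);
        return 0;
    }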
The results of this work have been submitted to the peer-reviewed International Journal of Quantum Chemistry. This work employs a novel wave function, namely, one consisting of at most a single r12 raised to the first power combined with a conventional non-orthogonal configuration interaction (CI) basis. The researchers believe that this technique can be extended to multielectron systems (more than three or four electrons). The combination of the computational simplicity of this form of the wave function, compared to other wave functions of comparable accuracy, with the use of parallel processing and extended precision arithmetic makes it possible, they believe, to achieve levels of accuracy comparable to what has been achieved for He for atoms with more than two electrons. Work is in progress, for example, to see what precision can be obtained for atomic lithium, which is estimated to require a 6,000-fold increase in CPU requirements to reach the same level of precision, making the use of parallel programming techniques even more critical. After lithium comes beryllium, which Sims and Hagstrom hope to compute with higher accuracy than has been achieved to date. Beryllium is the key to multielectron systems (more than four electrons), since the integrals that arise for more than four electrons are of the same type as those that arise in four electron systems.
James Sims
Howard Hung
Charles Bouldin (NIST MSEL)
John Rehr (University of Washington)
X-ray absorption spectroscopy (XAS) is used to study the atomic-scale structure of materials, and is employed by hundreds of research groups in a variety of fields, including ceramics, superconductors, semiconductors, catalysis, metallurgy, and structural biology. Analysis of XAS relies heavily on ab-initio computer calculations to model x-ray absorption. These calculations are computationally intensive, taking days or weeks to complete in many cases. As XAS is more widely used in the design of new materials, particularly in combinatorial materials processing, it is crucial to speed up these calculations.
One of the most commonly used codes for such analyses is FEFF. Developed at the University of Washington, FEFF is an automated program for ab initio multiple scattering calculations of X-ray Absorption Fine Structure (XAFS) and X-ray Absorption Near-Edge Structure (XANES) spectra for clusters of atoms. The code yields scattering amplitudes and phases used in many modern XAFS analysis codes. FEFF has a user base of over 400 research groups, including a number of industrial users, such as Dow, DuPont, Boeing, Chevron, Kodak, and General Electric.
James Sims, Howard Hung, and Charles Bouldin have parallelized the FEFF code using MPI. It now runs 20-30 times faster than its single-processor counterpart. The parallel version of the XAS code is portable, and has been incorporated in the latest release of FEFF (FeffMPI). It is now in operation on the parallel processing clusters at the University of Washington and at DOE's National Energy Research Scientific Computing Center (NERSC). With the speedup of 30 provided by this version, researchers can now do calculations they only dreamed about before. One NERSC researcher has reported doing a calculation in 18 minutes using FeffMPI on the NERSC IBM SP2 cluster that would previously have taken 10 hours. In 10 hours this researcher can (and does) now do runs that would have taken months before, and hence would not even have been attempted.
The peer-reviewed paper "Rapid Calculation of X-ray absorption near edge structure using parallel computing" has been published in X-ray Spectroscopy. The paper "Parallel Calculation of Electron Multiple Scattering using Lanczos Algorithms" has been accepted for publication by Physical Review B. The presentation "Rapid Computation of X-ray Absorption Near Edge Structure Using Parallel Computation" was given at the American Physical Society Meeting, March 12-16, 2001, Seattle, Washington.
The bottleneck in the code is now a memory bottleneck for large systems, brought about by the way the tables are built and stored in the sequential version of the code. The FEFF development team is working on eliminating this bottleneck. Once that is accomplished, the NIST researchers will begin another round of benchmarking and parallelizing which, it is hoped, will allow the software to run 100 times or more faster than current single-processor codes.
William George
Steve Satterfield
James Warren (NIST MSEL)
Snowflake-like structures known as dendrites develop within metal alloys during casting. A better understanding of the process of dendritic growth during solidification will help guide the design of new alloys and the casting process used to produce them. MCSD mathematicians (e.g., G. McFadden, B. Murray, D. Anderson, R. Braun) have worked with MSEL scientists (e.g., W. Boettinger, R. Sekerka) for some time to develop phase field models of dendritic growth. Such diffuse-interface approaches are much more computationally attractive than traditional sharp-interface models. Computations in two dimensions are now routinely accomplished. Extending this to three dimensions presents scaling problems for both the computations and the subsequent rendering of the results for visualization. This is due to the O(n^4) execution time of the algorithm as well as the O(n^3) space requirements for the field parameters. Additionally, rendering the output of the three-dimensional simulation stresses the available software and hardware when the simulations extend over finite-difference grids of size 1000 x 1000 x 1000.
We have developed a parallel 3D dendritic growth simulator that runs efficiently on both distributed-memory and shared-memory machines. This simulator can also run efficiently on heterogeneous clusters of machines due to the dynamic load-balancing support provided by our MPI-based C-DParLib library. This library simplifies the coding of data-parallel style algorithms in C by managing the distribution of arrays and providing many common operations on arrays, such as shifting, elemental operations, reductions, and the exchanging of array slices between neighboring processing nodes as is needed in parallel finite-difference algorithms. With the expansion of Hudson, NIST's central Linux cluster, to 128 CPUs with 1 GB of memory per node, we will now be able to complete simulations on 1000^3 grids, sufficient for direct comparison with earlier two-dimensional simulations.
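C-DParLib hides details such as the exchange of boundary slices between neighboring nodes. The fragment below is a hand-coded sketch of that kind of exchange for a one-dimensional slab decomposition using plain MPI; the function and variable names are illustrative and do not reflect the actual C-DParLib interface.

    #include <mpi.h>

    /* Exchange one-plane-thick ghost slices with neighboring ranks in a
       1-D slab decomposition of a field distributed along x.  The field
       holds local_nx+2 planes of ny*nz doubles; planes 0 and local_nx+1
       are ghost planes.  At the domain ends, left/right are MPI_PROC_NULL. */
    void exchange_ghosts(double *field, int local_nx, int ny, int nz,
                         int left, int right, MPI_Comm comm)
    {
        int plane = ny * nz;

        /* send last interior plane right, receive left ghost plane */
        MPI_Sendrecv(field + local_nx * plane, plane, MPI_DOUBLE, right, 0,
                     field,                    plane, MPI_DOUBLE, left,  0,
                     comm, MPI_STATUS_IGNORE);

        /* send first interior plane left, receive right ghost plane */
        MPI_Sendrecv(field + plane,                  plane, MPI_DOUBLE, left,  1,
                     field + (local_nx + 1) * plane, plane, MPI_DOUBLE, right, 1,
                     comm, MPI_STATUS_IGNORE);
    }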
A two-dimensional slice through a simulated three-dimensional dendrite crystal of a bi-metal alloy. This image, colored to indicate the relative concentration of the two metals within the dendrite, is one of many snapshots taken during the simulation to observe the process of dendritic growth. The bright outline in this image is at the dendrite surface, showing the abrupt change in relative concentration that takes place as the alloy changes phase from liquid to solid.
The output from the simulator consists of 40 snapshots, each a pair of files containing the phase-field and the relative concentration of the solutes at each grid point at a specific time step. At smaller grid sizes, below 300^3, we use commonly available visualization software to process these snapshot files into color images and animations with appropriate lighting and shading added. For larger grid sizes we have developed a visualization procedure that converts the 3D grid data into a polygonal data set that can take advantage of hardware acceleration. Using standard SGI software, OpenGL Performer, this polygonal representation is easily displayed. The semi-transparent colors allow a certain amount of internal structure to be revealed, and the additive effects of the semi-transparent colors produce an isosurface approximation. A series of polygonal representations from the simulator snapshots are cycled, producing a 3D animation of dendrite growth that can be interactively viewed. Most of the currently available immersive virtual reality (IVR) systems are based on OpenGL Performer. Thus, utilizing this format immediately allows the dendrite growth animation to be placed in an IVR environment for enhanced insight.
An article on this implementation of 3-D dendritic growth simulation using the phase-field method, with an emphasis on the parallel implementation, has been submitted to the Journal of Computational Physics.
Improvements to this simulator that we intend to pursue include adding computational steering capabilities, improving the immersive visualization of the results, and decreasing the memory requirements of the simulator.
Judith E. Devaney
John G. Hagedorn
Because the design and implementation of algorithms is highly labor-intensive, the number of such projects that can be undertaken is limited by the availability of people with appropriate expertise. The goal of this project is to create a system that will leverage human expertise and effort through parallel genetic programming. The human specifies the problem to be solved and provides the building blocks and a fitness function that measures success; the system determines an algorithm that fits the building blocks together into a solution to the specified problem. We are implementing a generic Genetic Programming (GP) system with features of existing systems as well as some features unique to our approach. These unique features are intended to improve the operation of the system, particularly for the types of real-world scientific problems to which we are applying the system at NIST. Genetic programming is also a meta-technique; that is, it can be used to solve any problem whose solution can be framed in terms of a set of operators and a fitness function. Thus it has applications in parameter search. NIST scientists have many special purpose codes that can be used as operators in this sense.
We have instrumented our system to collect a variety of information about programs, populations of programs, and runs. We have also implemented a visual representation of populations and individual programs. The accompanying figure shows a visualization of a population of 128 individuals. Each program is represented by one vertical column. As indicated in the figure, three aspects of each program are represented. The upper part is a visual representation of the content of the program; each block of color in this section corresponds to a procedure in the program. In the middle section, the sequence of genetic operations that brought each individual into existence is presented. Finally, the lower portion of the image presents a normalized view of the fitness of each individual. In the figure, the individuals have been sorted by fitness, with the more fit individuals on the left.
Visualization of a population.
The instrumentation described above has provided insight into many aspects of the operation of our GP system. As a result we have created two new operators: repair and prune. They have yielded substantial improvement in the system's ability to find solutions. All operating parameters of the system are controlled by keyword parameter files that are read in during program initialization, and the system is configured to dynamically link to user-supplied code that provides a problem-specific fitness function as well as problem-specific operations encapsulated as C functions. The GP system has been parallelized using the island model. This parallelization was easily accomplished with the use of our MPI AutoMap and AutoLink software libraries, which facilitate the transfer of complex data structures between independent programs.
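As an illustration of the kind of problem-specific code a user supplies, the sketch below shows a fitness function for a symbolic-regression task that scores a candidate program by its squared error over a set of data points. The type and function names here are hypothetical; the actual plug-in interface of the GP system is not reproduced.

    #include <math.h>

    /* Hypothetical handle for a candidate program evolved by the GP system. */
    typedef struct gp_program gp_program;

    /* Hypothetical evaluator: runs the candidate program on one input. */
    extern double gp_evaluate(const gp_program *prog, double x);

    /* User-supplied fitness function: lower is better (sum of squared
       errors of the candidate program against measured data). */
    double fitness(const gp_program *prog,
                   const double *x, const double *y, int n)
    {
        double sse = 0.0;
        for (int i = 0; i < n; i++) {
            double err = gp_evaluate(prog, x[i]) - y[i];
            sse += err * err;
        }
        return sse;
    }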
Papers describing our work appeared in two peer-reviewed conference proceedings this year: "A Genetic Programming System with a Procedural Program Representation," Proceedings of the Genetic and Evolutionary Computation Conference (Late Breaking Papers), July 2001, and "A Genetic Programming Ecosystem," Proceedings of the 15th International Parallel and Distributed Processing Symposium, April 2001. One of us was invited to participate in the panel "Biologically Inspired Computing: Where to in the next 10 years?" at the Workshop on Biologically Inspired Solutions to Parallel Processing Problems, April 23, 2001, San Francisco. One poster, "Genetic Programming and Discovery," was presented at the Advanced Technology Program National Meeting, June 2001. Two invited talks were presented: "Genetic Programming for Data Visualization and Mining," Workshop on Combinatorial Methods for Materials R&D: Systems Integration in High Throughput Experimentation, American Institute of Chemical Engineering National Meeting, November 15, 2000, and "Genetic Programming," Electron and Optical Physics Seminar, January 11, 2001.
Currently, we are using symbolic regression to automate the identification of functional forms of measurement errors; we are studying metrics for monitoring population diversity; and we will use our system to mine the output of combinatorial experiments. We have interest from NIST scientists who would like to collaborate with us when we have completed our system in about a year.
Immersive visualization, also described as Immersive Virtual Reality (IVR), is an emerging technique with the potential to handle the growing amount of data from large parallel computations or advanced data acquisitions. To be fully immersive, a computer graphics system should include one or more large rear projection screens to encompass peripheral vision, stereoscopic display for increased depth perception, and head tracking for realistic perspective based on the direction the user is viewing. Unlike graphics on a computer monitor, immersive visualization allows the scientist to explore inside the data. Visualization of scientific data can provide an intuitive understanding of the phenomenon or data being studied. It can contribute to theory validation through demonstration of qualitative effects seen in experiments. Effective visualization can also uncover structure where no structure was previously known. With parallel computing, the datasets are typically three-dimensional. Immersive visualization sets the viewer in a 3D setting and takes advantage of human skills at pattern recognition by providing a more natural environment where peripheral vision, increased depth perception, and realistic perspective provide more context for human intuition.
A scientist who specializes in a field such as chemistry or physics is often not simultaneously an expert in visualization techniques. MCSD provides a framework of hardware, software, and complementary expertise which NIST application scientists can utilize to facilitate meaningful discoveries. The immersive system in the Immersive Visualization Laboratory (Gaithersburg Building 225/A140) is a RAVE (Reconfigurable Automatic Virtual Environment) from Fakespace Systems. During 2001, this system was upgraded with the addition of a second module. The two-wall RAVE is thus configured as an immersive corner with two 8' x 8' (2.44 m x 2.44 m) screens flush to the floor and oriented 90 degrees to form a corner. As defined above, the RAVE is fully immersive. The large corner configuration provides a very wide field of peripheral vision, with stereoscopic display and head tracking. The host computer system is a high performance graphics system from SGI that was upgraded during 2001 to the current Origin 3000 family, consisting of 12 500-MHz MIPS R14000 CPUs, 12 GB of memory, and 3 InfiniteReality graphics pipes. The additional floor space required by the second module required the expansion of the Immersive Visualization Laboratory into an adjacent room by removing the joining wall unit and repairing the raised floor. Use of immersive visualization to model the expansion prior to implementation allowed the unit to be efficiently placed in the new space.
Collaboration with Virginia Tech's Visualization and Animation Group on the use and implementation of the DIVERSE (Device Independent Virtual Environments-Reconfigurable, Scalable, Extensible) open source software continued. DIVERSE is the primary software environment in use on the RAVE. It handles the details necessary to implement the immersive environment. A flashlight feature was added to the system this year. Like a real flashlight, it allows an object within the immersive environment to be identified by shining the virtual light on it.
Researchers in the Building and Fire Research Laboratory (BFRL) at NIST are studying high performance concrete. BFRL is leading the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium, consisting of the major cement producers. The accompanying image is from a virtual concrete flow visualization. The numerical algorithm simulates the flow of ellipsoidal objects (concrete particles) in suspension. The visualization plays an important role in the validation of the algorithms and the correctness of complex systems like this flow of fluid concrete. A digital movie of this visualization is available for viewing at http://math.nist.gov/mcsd/savg/vis/concrete/.
The virtual reality simulation of concrete flow was implemented with Diversifly, a visualization utility included with DIVERSE, so no application-specific programming was required. Two general purpose and very simple ASCII file formats were defined, and two file loaders were implemented to provide an interface between the numerical simulation and the immersive environment. Using shell scripts and common filters and tools, the simulation data is transformed into the suitable formats to be loaded, viewed, and navigated with Diversifly. The file formats are suitable for a wide range of application areas. This philosophy of converting data to predefined file formats that can be immediately displayed in the immersive environment has created a simple and very usable system. The NIST scientists themselves use the RAVE and demo their own visualizations.
A description of the BFRL collaboration is included in the peer-reviewed paper "DIVERSE: A Framework for Building Extensible and Reconfigurable Device Independent Virtual Environments," to be presented at the IEEE Virtual Reality Conference 2002. A demonstration to a reporter from Government Computer News resulted in an article in the July 27, 2001 issue, which is online at http://www.gcn.com/20_25/news/16941-1.html. Other demonstrations to external organizations include: the Director of the High Performance Computer Center at Texas Tech University (April 2001), the Digital Library of Mathematical Functions Editorial Board (April 2001), a Virtual Cement and Concrete Testing Laboratory (VCCTL) Consortium meeting (April 2001), an Aggregates Foundation for Technology, Research and Education (AFTRE) meeting (May 2001), the Virginia Department of Transportation and the University of Virginia (July 2001), LaFarge (a cement producer in France) (July 2001), Montgomery College students (August 2001), the Fire Testing Laboratory Workshop (June 2001), the Washington Internships for Students of Engineering (July 2001), and the German Cement Association (VDZ, Verein Deutscher Zementwerke e.V.) (November 2001).
Single image from an interactive visualization of flowing concrete. Ellipsoids represent concrete particle motion. Lines represent their full path over the simulation time period.
The most interactive visualizations in an immersive environment are those that can be rendered using polygon-based graphics techniques. A large amount of scientific data, however, is represented as a volume, with data values at each x,y,z point within a defined volume. For example, experimental cement data has been captured with X-ray techniques at 1000 x 1000 x 1000 resolution.
Future work will include continuing collaborations with Virginia Tech on incorporating volume-rendering techniques for this type of data into the immersive environment. The device independence of DIVERSE allows the same applications to be run on a variety of hardware, from non-immersive desktop machines to fully immersive environments. This capability will be exploited to bring a broad base of research activities into the immersive environment by providing an entry point at the scientist's desktop and then drawing them into the Immersive Visualization Lab.
Barbara am Ende
Michael Cresswell (NIST EEEL)
Richard Allan (NIST EEEL)
Loren Linholm (NIST EEEL)
Christine Murabito (NIST EEEL)
Will Guthrie (ITL Statistical Engineering Division)
Hal Bogardus (SEMATECH)
The Semiconductor Industry Association's (SIA) International Technology Roadmap for Semiconductors (ITRS) projects the decrease of gate linewidths used in state-of-the-art IC manufacturing from present levels of up to 250 nm to below 70 nm within several years. Scanning electron microscopes (SEMs) and other systems traditionally used for linewidth metrology exhibit measurement uncertainties exceeding ITRS-specified tolerances for these applications. It is widely believed that these uncertainties can be partly managed through the use of CD (Critical Dimension) reference materials with linewidth values that are traceable with single-nanometer-level uncertainties. Until now, such reference materials have been unavailable because the technology needed for their fabrication, and a means of assuring their traceability, has not been available.
A technical strategy that has been developed at NIST for fabricating CD reference materials with appropriate properties is based on the Single-Crystal CD Reference-Materials (SCCDRM) implementation. Essential elements of the implementation are the starting silicon wafers having a (110) orientation; the reference features being aligned to specific lattice vectors; and their lithographic patterning with lattice-plane selective etches of the kind used in silicon micro-machining. This approach provides straight reference features with vertical, atomically planar, sidewalls. The path for linewidth traceability is provided by High Resolution Transmission Electron Microscopy (HRTEM) imaging. The technique enables counting the lattice planes between the feature's two sidewalls and thus measuring the linewidth with single-nanometer-level accuracy. However, sample preparation is destructive and very costly to implement. The traceability strategy for the SCCDRM implementation therefore utilizes the sub-nanometer repeatability of electrical linewidth metrology as a secondary reference means. Low-cost precise measurements of the electrical linewidths of features on all die sites of each starting wafer are made first. In order to enable electrical linewidth metrology, the reference features are patterned in the device layers of silicon-on-insulator material. Then, the absolute linewidths of a subset of these features are determined from lattice-plane counts extracted from HRTEM images. The absolute linewidths are then reconciled with the features' previously measured electrical linewidths. In this way, the linewidths of all reference features on the wafer that are not used for HRTEM imaging become calibrated with specified uncertainties and with traceability to silicon's (111) lattice-plane spacing.
MCSD is working to automate the detection and counting of lattice planes between a feature's two sidewalls in the HRTEM images. am Ende has developed a series of algorithms to automatically detect and count peaks (which represent lattice planes) in the image. Peaks are calculated for all zone intervals across the entire vertical direction of the image. The best zones are determined based on the lowest standard deviation of the distances between peaks. The algorithm that selects peaks automatically currently needs some human input in areas where the images are not clear, where peaks are poorly developed, and along the margins of the crystalline portion of the wafer. The human input required for judging the quality of the automatically determined peaks is significantly less than that required for manual counting of fringes, and the repeatability is greatly increased.
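The core of the automated counting is one-dimensional peak detection along intensity profiles extracted from the HRTEM image. The sketch below shows one simple way such peaks (local maxima above a threshold) might be detected and counted; it is illustrative only and is not the algorithm am Ende implemented.

    /* Count local maxima above a threshold in a 1-D intensity profile.
       Each retained peak is taken to mark one lattice fringe. */
    int count_peaks(const double *profile, int n, double threshold)
    {
        int count = 0;
        for (int i = 1; i < n - 1; i++) {
            if (profile[i] > threshold &&
                profile[i] > profile[i - 1] &&
                profile[i] >= profile[i + 1]) {
                count++;
            }
        }
        return count;
    }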
Results of the project's work were presented by the Semiconductor Electronics and Statistical Engineering Divisions, National Institute of Standards and Technology, in the talk "Single-Crystal CD Reference Materials" at the AMAG Meeting, International SEMATECH, on October 10, 2001. Two conference abstracts based on this work have also been accepted.
Am Ende will continue to automate the counting algorithms with the goal of removing human input from the loop entirely. Criteria for marking the boundaries of the lattice planes, and for quantifying the "raggedness" of the boundary between the crystalline and amorphous silicon, will be developed.
Julien Franiatte
Judith Devaney
Steve Satterfield
Garnett Bryant (NIST PL)
Accurate atomic-scale quantum theory of nanostructures and nanosystems fabricated from nanostructures enables precision metrology of these nanosystems and provides the predictive precision modeling tools needed for engineering these systems for applications including advanced semiconductor lasers and detectors, single photon sources and detectors, biosensors, and nanoarchitectures for quantum coherent technologies such as quantum computing. Theory and modeling of nanoscale and near-field optics is essential for the realization and exploitation of nanoscale resolution in near-field optical microscopy and for the development of nanotechnologies that utilize optics on the size-scale of the system. Applications include quantum dot arrays and quantum computers. Atomic-scale theory and modeling of quantum nanostructures, including quantum dots, quantum wires, quantum-dot arrays, biomolecules, and molecular electronics, is being used to understand the electronic and optical properties of quantum nanostructures and nanosystems fabricated from component nanostructures. Theory and numerical modeling is being used to understand optics on the nanoscale and in the near field, with applications including near-field microscopy, single-molecule spectroscopy, optics and quantum optics of nanosystems, and atom optics in optical nanostructures.
Laboratory nanostructure (from Phys. Rev. B, 53, R13242, 1996)
A computed nanostructure
MCSD is participating in the parallelization of computational models for studying nanostructures. Parallel processing has enabled near-linear speedup over the sequential code: computations that took nine hours can now be completed in one hour on ten processors. As the computational model is extended to handle more complex and larger systems by including not only the nanocrystals but also the substrate and environment around them, parallel processing becomes a necessity. This year the code will be extended to study self-assembled quantum dots.
The NIST Building and Fire Research Laboratory (BFRL) does experimental and computational research in cement and concrete. Recently MCSD has been working with BFRL parallelizing their codes and creating visualizations of their data. In January 2001 the Virtual Cement and Concrete Testing Laboratory (VCCTL) consortium was formed. MCSD assisted in this effort through presentations of our work with BFRL and demonstrations of visualizations in our immersive environment. The consortium originally consisted of NIST and six industrial members: Cemex, Dyckerhoff Zement GmbH, Holcim Inc., Master Builders Technologies, the Portland Cement Association, and W.R. Grace & Co. A seventh industrial member, the German Cement Association (VDZ), has recently joined. The overall goals of the consortium are to develop a virtual testing system to reduce the amount of physical concrete testing and expedite the research and development process. This will result in substantial time and cost savings to the concrete construction industry as a whole. MCSD continues to contribute to the VCCTL through collaborative projects involving parallelizing and running codes, creating visualizations, as well as presentations to the VCCTL current and prospective members. The following four projects are included in this effort.
James Sims
Terence Griffin
Steve Satterfield
Nicos Martys (NIST BFRL)
http://math.nist.gov/mcsd/savg/parallel/dpd/
http://math.nist.gov/mcsd/savg/vis/concrete/
Understanding the flow properties of complex fluids like suspensions (e.g., colloids, ceramic slurries, and concrete) is of technological importance and presents a significant theoretical challenge. The computational modeling of such systems is also a great challenge because it is difficult to track boundaries between different fluid/fluid and fluid/solid phases. We use a new computational method called dissipative particle dynamics (DPD), which has several advantages over traditional computational dynamics methods while naturally accommodating such boundary conditions. In DPD, the interparticle interactions are chosen to allow for much larger time steps, so that physical behavior on time scales many orders of magnitude greater than that possible with molecular dynamics may be studied.
Our algorithm (QDPD) is a modification of DPD which uses a velocity Verlet algorithm to update the positions of both the free particles and the solid inclusions. In addition, the rigid body motion is determined from the quaternion-based scheme of Omelyan (hence the Q in QDPD). Parallelization of the algorithm is important in order to adequately model size distributions and to have enough resolution to avoid finite size effects.
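For reference, the fragment below sketches a generic velocity Verlet step for the translational degrees of freedom (the quaternion update for rigid-body rotation is omitted); it is a textbook illustration under simplified one-dimensional bookkeeping, not the QDPD source.

    /* One velocity Verlet step for n particles with positions x, velocities v,
       forces f, masses m, and time step dt.  compute_forces() is assumed to
       refill f from the current positions. */
    void compute_forces(int n, const double *x, double *f);

    void verlet_step(int n, double *x, double *v, double *f,
                     const double *m, double dt)
    {
        for (int i = 0; i < n; i++) {
            v[i] += 0.5 * dt * f[i] / m[i];   /* first half-kick */
            x[i] += dt * v[i];                /* drift           */
        }
        compute_forces(n, x, f);              /* forces at new positions */
        for (int i = 0; i < n; i++)
            v[i] += 0.5 * dt * f[i] / m[i];   /* second half-kick */
    }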
Flow around steel reinforcing bars (left) and a model rheometer (right).
This year Jim Sims has completed both shared and distributed memory versions of the algorithm using MPI. The distributed memory version runs so well on the PC cluster (with fast Ethernet) that its limits are not visible on the current cluster. We are currently able to model coarse aggregates. This code has been used to study the flow around steel reinforcing bars and to model a rheometer. Experiments at the Center for Advanced Cement Based Materials, a consortium of universities and industry that includes NIST, will use this code to validate the viscosity measurements of a rheometer with idealized aggregates consisting of marbles. Talks on this work, including results of the computations, have been presented at several venues:
Nicos Martys and James Sims, "Application of Dissipative Particle Dynamics For Modeling Cement Based Materials", 2000 MRS Fall Meeting, Symposium on Materials Science of High Performance Concrete, Boston, Nov. 28-30, 2000.
Nicos Martys and Jim Sims, "Computational study of colloidal suspensions using dissipative particle dynamics", 73rd Annual Meeting of the Society of Rheology, October 21-25, 2001, Bethesda, Maryland.
Nicos Martys and Jim Sims, "Computational study of colloidal suspensions using dissipative particle dynamics", Center for Advanced Cement-based Materials, Northwestern University, Oct, 17, 2001.
Terence Griffin has worked extensively with N. Martys to develop visualizations for this project (see accompanying examples). Martys presented some of these at the American Concrete International Meeting on March 26, 2001 in Philadelphia. Griffin also made videos of simulation results that were shown at the Symposium of Aggregate Research (Austin, Texas, April 23, 2001), the VCCTL Consortium (NIST, April 19, 2001), and the Interfacial Consortium (May 2).
In the coming year, additional computations will be performed in support of the VCCTL, and papers will be submitted to refereed journals. This code is also flexible enough to be used to model other systems, such as multicomponent fluids.
John Hagedorn
Judith Devaney
Nicos Martys (NIST BFRL)
http://math.nist.gov/mcsd/savg/parallel/lb/
http://math.nist.gov/mcsd/savg/vis/fluid/
The flow of fluids in complex geometries plays an important role in many environmental and technological processes. Examples include oil recovery, the spread of hazardous wastes in soils, and the service life of building materials. Further, such processes depend on the degree of saturation of the porous medium. The detailed simulation of such transport phenomena, subject to varying environmental conditions or saturation, is a great challenge because of the difficulty of modeling fluid flow in random pore geometries and of properly accounting for the interfacial boundary conditions.
In order to model realistic systems, we developed a parallel lattice Boltzmann (LB) algorithm and implemented it with MPI to study large systems. We verified the model with several numerical tests and comparisons with experiments. In particular, the modeled permeabilities of X-ray microtomography images of sandstone media agreed with experimental results, verifying the correctness and utility of the parallel implementation of the LB methods.
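In an LB code, each lattice site carries a small set of particle distribution functions that are relaxed toward a local equilibrium (the BGK approximation) and then streamed to neighboring sites. The sketch below shows the relaxation step in its simplest form; it is a generic illustration, assuming the equilibrium distribution feq has already been computed from the local density and velocity, and is not the NIST code.

    /* BGK collision step at one lattice site: relax the q distribution
       functions f[] toward the local equilibrium feq[] with relaxation
       time tau (tau controls the fluid viscosity). */
    void bgk_collide(double *f, const double *feq, int q, double tau)
    {
        for (int i = 0; i < q; i++)
            f[i] -= (f[i] - feq[i]) / tau;
    }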
These simulations would not have been possible without parallelizing the algorithm. The results were published by Martys, Hagedorn, and Devaney as an invited chapter, "Pore Scale Modeling of Fluid Transport using Discrete Boltzmann Methods," in the book Ion and Mass Transport in Cement-Based Materials. The model was run many times to generate data that was used in the paper "The effects of statistical fluctuations, finite size error, and digital resolution on the phase percolation and transport properties of the NIST cement hydration model," by E. J. Garboczi and D. P. Bentz of BFRL, submitted to the peer-reviewed Cement & Concrete Research. Visualizations of fluid properties calculated with this model were published in Physical Review E, Vol. 63, 031205, "Critical Properties and Phase Separation in Lattice Boltzmann Fluid Mixtures". Martys and Hagedorn presented "Modeling Complex Fluids with the Lattice Boltzmann Method" at the Society of Rheology Meeting, October 2001, Bethesda, Maryland.
Laboratory experiment; computational experiment.
J. Hagedorn has performed a series of runs simulating multiple fluids through a tube. Parameters have been varied to investigate the effects of tube radius, tube length, wetting parameters, and other parameters on the stability of the fluid structure. Results are very similar to experimental results generated by Dr. Kalman Migler of the Polymers Division of MSEL, as shown in the accompanying figures. Papers on this work will be submitted in the coming year. Martys and Hagedorn will present "Modeling fluid flow in Cement Based Materials using the Lattice Boltzmann Method" at the Gordon Conference on Cement-Based Materials in Ventura, California (April 2002).
Robert Bohn
Edward Garboczi (NIST BFRL)
Almost all real materials are multi-phase, whether deliberately, when formulating a composite, inadvertently, by introducing impurities into a nominally mono-phase material, or by the very nature of the material components, as in the case of cement-based materials. Predicting the elastic properties of such a material is dependent on two pieces of information for each phase: how each phase is arranged in the microstructure, and the elastic moduli of each phase. Cement paste is extraordinarily complex elastically, with many different chemically and elastically distinct phases (20+) and a complex microstructure. This complexity further increases in concrete, as aggregates are added.
A finite element package for computing the elastic moduli of composite materials has been written by staff of the NIST Building and Fire Research Laboratory and has been available for several years. The program takes a 3-D digital image of a microstructure, assigns an elastic moduli tensor to each pixel according to what material phase is present, and then computes the effective composite linear elastic moduli of the material. This program has worked successfully on many different material microstructures, including ceramics, metal alloys, closed and open cell foams, gypsum plaster, oil-bearing rocks, and 2-D images of damaged concrete. This program is a single-processor code. Reasonable run times mean that we are limited, at present, to systems of about 1-2 million pixels, which require 200-500 Mbytes of memory.
We are updating and parallelizing this code with MPI. In particular, the code will be run on multiprocessor SGI hardware and also on a Linux-based PC cluster. The main beneficiaries of the work will be the cement and concrete industries that are members of the Virtual Cement and Concrete Testing Laboratory consortium. This code, in its scalar form at present, will go into version 2.0 of the software that is distributed to the companies in January 2002. The parallel form will be used for further research in the elastic properties of cement paste. For example, work on the elastic properties of random-shape aggregates in concrete will directly benefit all the aggregate companies involved in the International Center for Aggregate Research (U. Texas-Austin), which will be sponsoring this research in 2002.
This year the code was rewritten to disseminate the data and other necessary information to each of the computing nodes. The code was previously written in a linear vector form in order to run faster on vector-based machines; this bookkeeping system has been eliminated. The input now describes the actual data in a 3-D way, and it is more natural to compute and transmit the data in chunks of the original 3-D data array. Currently we are parallelizing the three main subroutines, FEMAT, DEMBX, and ENERGY. The other subroutines are virtually identical in structure and will follow this parallelization.
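Transmitting the data in chunks of the original 3-D array amounts to a slab decomposition: each MPI process owns a contiguous range of z-planes of the nx x ny x nz image. A minimal sketch of how those ranges might be computed is given below; the variable names are illustrative and are not taken from the actual code.

    /* Divide nz z-planes of an nx*ny*nz digital image among nprocs ranks.
       Each rank gets a contiguous slab; any remainder planes go to the
       first (nz % nprocs) ranks.  On return, *z0 is the first plane owned
       by `rank` and *nlocal the number of planes it owns. */
    void slab_range(int nz, int nprocs, int rank, int *z0, int *nlocal)
    {
        int base = nz / nprocs;
        int rem  = nz % nprocs;

        *nlocal = base + (rank < rem ? 1 : 0);
        *z0     = rank * base + (rank < rem ? rank : rem);
    }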
Making this code parallel will allow much larger systems to be studied, allowing us to probe the crucial parameter of digital resolution by providing faster turnaround times and the possibility to quickly run larger, higher-resolution microstructures. Early age computations of cement paste elastic moduli are more difficult to carry out accurately, because of the low connectivity between phases at early ages. The higher resolution available on the parallel machines should help resolve this problem. Large systems are also required to study the elastic properties of random shapes, like aggregates found in concrete. Finally, direct simulations of AFM probes of composite surfaces are being carried out. These are CPU-time intensive, since every pixel on a surface needs to be displaced, one at a time, and the composite elastic response computed. The improved run times from parallel codes will help this project immensely.
Steve Satterfield
Peter Ketcham
William George
Judith Devaney
James Graham
James Porterfield
Dale P. Bentz (NIST BFRL)
Symoane Mizell (NIST BFRL)
Daniel A. Quenard (Centre Scientifique et Technique du Batiment)
Hebert Sallee (Centre Scientifique et Technique du Batiment)
Franck Vallee (Centre Scientifique et Technique du Batiment)
Jose Baruchel (European Synchrotron Radiation Facility)
Elodie Boller (European Synchrotron Radiation Facility)
Abdelmajid Elmoutaouakkil (European Synchrotron Radiation Facility)
Stefania Nuzzo (European Synchrotron Radiation Facility)
http://visiblecement.nist.gov/
To produce materials with acceptable or improved properties, adequate characterization of their microstructure is critical. While the microstructure can be viewed in two dimensions at a variety of resolutions (e.g., optical microscopy, scanning electron microscopy, and transmission electron microscopy), it is often the three-dimensional aspects of the microstructure that have the largest influence on material performance. Direct viewing of the three-dimensional microstructure is a difficult task for most materials. With advances in X-ray microtomography, it is now possible to obtain three-dimensional representations of a material's microstructure with a spatial resolution of better than one micrometer per voxel.
The Visible Cement Data Set represents a collection of 3-D data sets obtained at the European Synchrotron Radiation Facility in Grenoble, France in September 2000 as part of an international collaboration between NIST, ESRF, and the Centre Scientifique et Technique du Batiment (CSTB, Grenoble, France). Most of the images obtained are for hydrating Portland cement pastes, with a few data sets representing hydrating plaster of Paris and a common building brick. The goal of this project is to create a web site at NIST where all researchers can access these unique data sets. The web site includes a text-based description of each data set and computer programs to assist in processing and analyzing the data sets. In addition to the raw data files, the site contains both 2-D and 3-D images and visualizations of the microstructures.
Several of these data sets have been animated using the MCSD immersive visualization environment. The accompanying figure is an image from one of the plaster of Paris data sets that has been displayed this way. A variety of computer programs for processing the data sets have been developed and made available on the Visible Cement Data Set web site. These include programs for extracting a subvolume from the complete data set, determining the gray level histogram for a subvolume, segmenting a subvolume into individual phases (cement particles, hydration products, and pores, for example), filtering the raw and segmented subvolumes, and assessing the percolation (connectivity) properties of a phase in a segmented subvolume. The segmentation of a data set into individual phases is the critical step in attaching physical significance to the data. Suitable algorithms for converting these segmented subvolumes into a collection of polygons suitable for viewing in the MCSD immersive environment have been explored and demonstrated. The article "The Visible Cement Data Set" by D.P. Bentz, S. Mizell, S. Satterfield, J. Devaney, W. George, P. Ketcham, J. Graham, J. Porterfield, D. Quenard, F. Vallee, H. Sallee, E. Boller, and J. Baruchel describes the details of the dataset. It is in preparation for the NIST Journal of Research.
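The simplest form of the segmentation step described above is a gray-level threshold applied voxel by voxel, as sketched below; the programs on the web site use more elaborate criteria, so this fragment should be read only as an illustration of the idea.

    /* Segment an 8-bit gray-level subvolume into two phases by thresholding:
       voxels at or above `threshold` are labeled 1 (e.g., cement particle),
       the rest 0 (e.g., porosity). */
    void segment_subvolume(const unsigned char *gray, unsigned char *phase,
                           long nvoxels, unsigned char threshold)
    {
        for (long i = 0; i < nvoxels; i++)
            phase[i] = (gray[i] >= threshold) ? 1 : 0;
    }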
The Visible Cement Data Set web site will continue to be used by NIST and will serve as a valuable resource to both the construction materials and visualization research communities.
64 x 64 x 64 subvolume of plaster of Paris hydrated for 4 hours, rendered as an isosurface of the segmented (particle/porosity) data.
Daniel Lozier
Ronald Boisvert
Joyce Conlon
Marjorie McClain
Bruce Fabijonas
Raghu Kacker
Bruce Miller
F. W. J. Olver
Bonita Saunders
Abdou Youssef
Qiming Wang (NIST ITL/IAD)
Charles Clark (NIST PL)
Brianna Blaser
Elaine Kim
NIST is well known for its collection and dissemination of standard reference data in physical sciences and engineering. From the 1930s through the 1960s, NBS also disseminated standard reference mathematics, typically tables of mathematical functions. The prime example is the NBS Handbook of Mathematical Functions, prepared under the editorship of Milton Abramowitz and Irene Stegun and published in 1964 by the U.S. Government Printing Office. The NBS Handbook is a technical best seller, and likely is the most frequently cited of all technical references. Total sales to date of the government edition exceed 150,000; further sales by commercial publishers are several times higher. Its daily sales rank on amazon.com consistently surpasses other well-known reference books in mathematics, such as Gradshteyn and Ryzhik's Table of Integrals. The number of citations reported by Science Citation Index continues to rise each year, not only in absolute terms but also in proportion to the total number of citations. Some of the citations are in pure and applied mathematics but even more are in physics, engineering, chemistry, statistics, and other disciplines. The main users are practitioners, professors, researchers, and graduate students.
Except for correction of typographical and other errors, no changes have ever been made in the Handbook. This leaves much of the content unsuitable for modern usage, particularly the large tables of function values (over 50% of the pages), the low-precision rational approximations, and the numerical examples that were geared for hand computation. Also, numerous advances in the mathematics, computation, and application of special functions have been made or are in progress. We are engaged in a substantial project to transform this old classic radically. The Digital Library of Mathematical Functions is a complete rewriting and substantial update of the Handbook that will be published in a low-cost hardcover edition and on the Internet for free public access. The Web site will include capabilities for searching, downloading, and visualization, as well as pointers to software and related resources. The contents of the Web site will also be made available on CD-ROM, to be included with the hardcover edition. A sample chapter, including examples of dynamic visualizations, may be viewed on the project Web site.
Dynamic Visualization. View of the principal branch of the Hankel function |H5(1)(x+iy)| showing the pole at the origin, the branch cut, the location of zeros near the cut, and exponential growth and decay in different parts of the complex plane. Five zeros around the pole are not fully visible in this view. In the DLMF, this view may be rotated and seen from any angle. The same technology can be used to generate views of additional branches. © National Institute of Standards and Technology.
Funded by the National Science Foundation and NIST, the DLMF Project is contracting with the best available world experts to rewrite all existing chapters, and to provide additional chapters to cover new functions (such as the Painlevé transcendents and q-hypergeometric functions) and new methodologies (such as computer algebra). Four NIST editors (Lozier, Olver, Clark, and Boisvert) and an international board of nine associate editors are directing the project. The associate editors are
Richard Askey (University of Wisconsin),
Michael Berry (University of Bristol),
Walter Gautschi (Purdue University),
Leonard Maximon (George Washington University),
Morris Newman (University of California at Santa Barbara),
Peter Paule (Technical University of Linz),
William Reinhardt (University of Washington),
Ingram Olkin (Stanford), and
Nico Temme (CWI Amsterdam).
Major accomplishments were the result of team efforts. In FY 2001 these include the following.
Chapter Status
Editorial Issues
Production Issues
External Recognition
Ronald Boisvert
Isabel Beichl
Anthony Kearsley
William Mitchell
David Song
Francis Sullivan
Carl Williams (NIST PL)
Eite Tiesinga (NIST PL)
Mike Robinson (IDA Center for Computing Sciences)
This year, ITL began a new program of work in Quantum Information Systems in collaboration with the NIST Physics Laboratory and the NIST Electronics and Electrical Engineering Laboratory. R. Boisvert is coordinating the ITL effort, which involves participants from six ITL divisions. This work is partially supported by a grant from the DARPA Quantum Information Science and Technology (QuIST) program, which began this year. The main thrusts of ITL’s DARPA QuIST effort are as follows.
o Quantum Communications Testbed Facility
We are working with the NIST PL to develop a working testbed to demonstrate concepts and to measure performance of systems, components, and protocols for highly secure communications based on the principles of quantum physics. The initial testbed, now under construction, will feature an open-air optical link between the NIST Administration Building and the NIST North Building, which will be used to demonstrate the BB84 protocol for quantum key exchange. (Such keys could be used as one-time pads for encrypting messages, or could be used for the separate generation of common one-time pads.) The link will include a quantum channel as well as several classical channels. An attenuated laser will generate single polarized photons for transmission over the quantum channel; commercially available avalanche photodiodes will be used to detect the photons. An effective key generation rate of 1 Mbps is the goal. To achieve this it will be necessary for the channels to operate at 1 Gbps. The testbed will be used to study the performance and security of quantum-based network protocols, and to quantify the improvements obtained through the use of alternate physical components. For example, improved single photon sources and detectors are under development in the NIST PL and EEEL, respectively. Participants: ITL Advanced Networking Technologies Division, ITL Convergent Information Systems Division, NIST Physics Lab, NIST Electronics and Electrical Engineering Laboratory.
o Hybrid Quantum Authentication Protocols

Authentication, i.e., verifying the identity of the party with whom one has initiated an electronic communication, is another important aspect of secure communications. Quantum communication networks may provide new means for authentication. ITL staff members have begun research on the development and analysis of hybrid quantum/classical authentication schemes based upon the availability of entangled photons. Participants: ITL Computer Security Division.
o Information Theory

Quantum systems offer enormous potential for new modes of computation in which currently intractable problems could become routine to solve. However, the experimental quantum-computing processors now under development at NIST and elsewhere remain very far from practical use, and many obstacles remain in the areas of computer engineering and computer science. Practical error correction schemes must be devised and implemented, languages for expressing quantum algorithms are needed, and compilers capable of translating high-level descriptions into sequences of gate operations (and, in turn, into sequences of instructions to lasers and other hardware components) must be developed. In addition, we need to understand which problems are amenable to solution by quantum computers, and how to implement them. In this work we are studying error propagation and correction in particular quantum gates being developed in the NIST PL. We are also studying the scalability of computer architectures based upon the neutral-atom or ion arrays being developed in the NIST PL. Finally, we have begun the study of new quantum algorithms that would show significant speedups on quantum computers. (A small sketch illustrating the benefit of redundancy-based error correction appears after this list.) Participants: ITL Mathematical and Computational Sciences Division, ITL Software Diagnostics and Conformance Testing Division.
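
To make the quantum key exchange thrust more concrete, the following is a minimal sketch of the classical sifting step of BB84, written in Python. It is purely illustrative and is not the testbed software: it assumes an ideal, noise-free channel with no eavesdropper, and the function name and parameters are invented for this example.

    import secrets

    def bb84_sift(n_pulses=1000):
        """Toy simulation of BB84 sifting over an ideal, eavesdropper-free channel.

        Alice sends each bit in a randomly chosen basis (0 = rectilinear, 1 = diagonal);
        Bob measures in a random basis.  Bits measured in mismatched bases are discarded,
        so on average half of the transmitted bits survive as shared key material.
        """
        alice_bits  = [secrets.randbelow(2) for _ in range(n_pulses)]
        alice_bases = [secrets.randbelow(2) for _ in range(n_pulses)]
        bob_bases   = [secrets.randbelow(2) for _ in range(n_pulses)]

        # With matching bases Bob recovers Alice's bit; otherwise his result is random.
        bob_bits = [b if ab == bb else secrets.randbelow(2)
                    for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

        # Public discussion: keep only the positions where the bases agree.
        key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
        key_bob   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
        return key_alice, key_bob

    if __name__ == "__main__":
        ka, kb = bb84_sift()
        print(len(ka), "sifted bits; keys agree:", ka == kb)

On average only about half of the transmitted pulses survive sifting, which is one reason the physical channels must run far above the target key generation rate.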
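The value of redundancy-based error correction noted above can likewise be illustrated classically. The sketch below (again Python, with invented names) uses Monte Carlo simulation to estimate the logical error rate of a three-bit repetition code with majority-vote decoding, the classical analogue of the simplest quantum bit-flip code; for a small physical error rate p, the logical error rate falls to roughly 3p^2.

    import random

    def repetition_trial(p, copies=3):
        """One trial of a classical repetition code under independent bit-flip noise."""
        logical = random.randint(0, 1)
        noisy = [logical ^ (random.random() < p) for _ in range(copies)]
        return (sum(noisy) > copies // 2) == logical   # majority-vote decoding

    def logical_error_rate(p, copies=3, trials=100_000):
        failures = sum(not repetition_trial(p, copies) for _ in range(trials))
        return failures / trials

    if __name__ == "__main__":
        for p in (0.01, 0.05, 0.10):
            print(f"physical error {p:.2f} -> logical error ~{logical_error_rate(p):.4f}")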
Within MCSD, we are working at two ends of the spectrum of quantum computation: modeling the physical processes that will be used to implement a quantum gate, and developing and analyzing algorithms for quantum computers.

William Mitchell has been working with Eite Tiesinga of the NIST PL to solve for eigenvalues and eigenstates of the Schrödinger equation in configurations relevant to optical traps for neutral atoms. Arrays of such atoms will correspond to arrays of qubits, and interactions of adjacent atoms will be used to implement elementary quantum gates. The computations are quite challenging: multiple eigenvalues in the middle of the spectrum are required, and the corresponding eigenstates have sharp gradients. Mitchell is adapting his parallel adaptive multigrid solver PHAML for this task, and early results have been very encouraging.
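
To illustrate why eigenvalues in the middle of the spectrum are the hard case, the following rough sketch discretizes a one-dimensional Schrödinger operator with finite differences and targets interior eigenvalues with shift-invert Lanczos via SciPy. It is not PHAML (which uses adaptive finite elements on parallel machines), and the potential and all parameters are invented purely for illustration.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    # Hypothetical 1-D stand-in for a trapping potential: harmonic trap plus an
    # oscillatory "optical lattice" term.  The real computations are multidimensional.
    n = 2000
    L = 40.0
    x = np.linspace(-L / 2, L / 2, n)
    h = x[1] - x[0]
    V = 0.5 * x**2 + 5.0 * np.cos(2.0 * np.pi * x)

    # Second-order finite-difference Hamiltonian  H = -(1/2) d^2/dx^2 + V(x)
    kinetic = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (-0.5 / h**2)
    H = kinetic + diags(V)

    # Shift-invert targets the eigenvalues nearest sigma, i.e. in the interior of the
    # spectrum, which is exactly the regime described in the text.
    sigma = 25.0
    vals, vecs = eigsh(H.tocsc(), k=6, sigma=sigma, which='LM')
    print(np.sort(vals))   # six eigenvalues closest to sigma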
David Daegene Song, a recent Ph.D. from the Quantum Computation program at Oxford University, joined MCSD this fall. Song has done work in approximate quantum cloning, entanglement swapping, and nonlinear qubit transformations. Entanglement swapping provides a means for transporting quantum states over long distances using chains of entangled qubits. He has begun extending his work on this subject at NIST.
Several MCSD staff members have begun investigating the potential speedups offered by quantum algorithms for a variety of applications. David Song, Isabel Beichl, and Francis Sullivan are studying the problem of determining whether a finite function over the integers is one-to-one. In particular, they are developing a quantum algorithm for determining whether a mapping from a finite set to itself is one-to-one. They hope to achieve a complexity of O(√n) steps, whereas classical algorithms require n steps for this computation. The proposed quantum algorithm uses phase symmetry, Grover's search algorithm, and results about the pth complex roots of unity for a prime p. The proof, developed in collaboration with Mike Robinson at the IDA Center for Computing Sciences, relies on results about the density of prime numbers in the integers.
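
For reference, the O(√n) behavior of Grover's search, on which the proposed algorithm builds, can be seen in a small statevector simulation. The sketch below is a generic textbook Grover search for a single marked item, not the group's algorithm; the function name and parameters are invented.

    import numpy as np

    def grover_search(n_qubits, marked_index):
        """Statevector simulation of Grover's search for one marked item among N = 2**n_qubits."""
        N = 2 ** n_qubits
        psi = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition over all indices
        oracle = np.ones(N)
        oracle[marked_index] = -1.0               # oracle flips the phase of the marked item

        iterations = int(np.floor(np.pi / 4.0 * np.sqrt(N)))   # ~O(sqrt(N)) iterations
        for _ in range(iterations):
            psi = oracle * psi                    # apply the oracle
            psi = 2.0 * psi.mean() - psi          # inversion about the mean (diffusion operator)

        probabilities = psi ** 2
        return int(np.argmax(probabilities)), probabilities[marked_index]

    if __name__ == "__main__":
        found, prob = grover_search(10, marked_index=123)   # N = 1024, about 25 iterations
        print(found, round(float(prob), 3))                 # finds item 123 with probability near 1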
Song has also begun work with Anthony Kearsley to study potential speedups when solving integer-valued matrix equations on quantum computers.
Figure: Charge density on a computed diffusion-limited cluster aggregate.
MCSD staff members make contact with a wide variety of organizations in the course of their work. Examples follow.

Industrial Labs
Advanced Biologic Corp.
Advanced Research Systems, Inc.
Alabama Cryogenic Engineering
Altair Engineering, Inc.
American Superconductor
Avaya
Cadence Design Systems
Chesapeake Cryogenics
Compaq Corp.
Dow Chemical
Endocardial Solutions
Frontier-Technologies, Inc.
General Electric
Hewlett-Packard
Hughes Corp.
IBM
Intel
Irvine Sensors
Johnson Scientific Group
Lucent Technologies
Motorola
MPI-Software Technology
Myricom, Inc.
Northrop Grumman
Praxair, Inc.
SAIC
Schema Group
Sierra Lobo, Inc.
Sun Microsystems
Sunrise-Systems Limited
Texas Instruments
The MathWorks

Government/Non-profit Organizations
Air Force Office of Scientific Research (AFOSR)
American Institute of Physics (AIP)
American Mathematical Society (AMS)
American Museum of Natural History
Argonne National Labs
Army Research Office (ARO)
Association for Computing Machinery (ACM)
Centre National de la Recherche Scientifique (France)
Defense Advanced Research Projects Agency (DARPA)
Fermi National Labs
IDA Center for Computing Sciences
Idaho National Engineering and Environmental Laboratory
IEEE Computer Society
Institute for Computer Applications and Engineering
Lawrence Livermore Labs
Mammoth Cave National Park
Mathematical Association of America (MAA)
NASA
National Science Foundation (NSF)
National Institutes of Health (NIH)
Sandia National Laboratory
Society for Industrial and Applied Mathematics (SIAM)
U.S. Department of Energy (DoE)
W.M. Keck Foundation

Universities
Arizona State University
Carnegie-Mellon University
Case Western Reserve University
Clemson University
College of William and Mary
Columbia University
Cornell University
Courant Institute
Dartmouth College
Federal Institute of Technology Zurich (ETH)
Florida State University
George Mason University
George Washington University
Georgia Tech
Harvard University
Indiana University
Israel Institute of Technology
Johns Hopkins University
Louisiana State University School of Medicine
Marymount University
New Jersey Institute of Technology
New York University
Northwestern University
Oxford University (UK)
Purdue University
Rensselaer Polytechnic Institute
Rice University
Santa Monica College
Southern University (Baton Rouge)
Stanford University
SUNY Binghamton
Swarthmore College
Technical University of Denmark
Technical University of Dresden (Germany)
Technical University of Vienna (Austria)
Texas Tech
Towson University
UMIST (UK)
Uniformed Services University of the Health Sciences
Universitaet Wuerzburg (Germany)
Université Louis Pasteur
UCLA
University of California at Irvine
University College (London)
University of Alabama
University of Antwerp (Belgium)
University of Bayreuth (Germany)
University of Chicago
University of Colorado
University of Delaware
University of Houston
University of Iowa
University of Jyvaskyla (Finland)
University of Manchester (UK)
University of Maryland Baltimore County
University of Maryland, College Park
University of Minnesota
University of New Mexico
University of North Carolina
University of Pennsylvania
University of Pittsburgh
University of Southampton (UK)
University of Virginia
University of Washington
University of Wisconsin
Vanderbilt University
Vienna University of Technology
Virginia Tech
Wake Forest University

Legend: F = Faculty Appointee, GR = Guest Researcher, PD = Postdoctoral Appointee, S = Student, PT = Part Time

Ronald Boisvert, Chief
Robin Bickel, Secretary
Peggy Liller, Clerk
Joyce Conlon
Brianna Blaser, S
André Deprit, GR
Jeffrey Fong, GR
Karin Remington, PT

Geoffrey McFadden, Leader
Bradley Alpert (Boulder)
Timothy Burns
Alfred Carasso
Andrew Dienstfrey (Boulder)
Michael Donahue
Fern Hunt
Anthony Kearsley
Stephen Langer
Agnes O'Gallagher (Boulder)
Donald Porter
Daniel Anderson, GR
Eric Baer, S
James Blue, GR
Richard Braun, F
Eleazer Bromberg, GR
Daniel Cardy, S
John Gary, GR
Katharine Gurski, PD
Kelly McQuighan, S
Bruce Murray, GR
Dianne O'Leary, F

Roldan Pozo, Leader
Daniel Lozier
Marjorie McClain
Bruce Miller
William Mitchell
Bert Rust
Bonita Saunders
Bruce Fabijonas, F
Elaine Kim, S
Leonard Maximon, GR
Frank Olver, GR
G.W. Stewart, F
Abdou Youssef, F

Ronald Boisvert, Acting Leader
Isabel Beichl
Javier Bernal
David Gilsinn
Christoph Witzgall
Theodore Einstein, GR
Saul Gass, F
Alan Goldman, GR
James Lawrence, F
Francis Sullivan, GR

Judith Devaney, Leader
Yolanda Parker, Secretary
Barbara am Ende
Robert Bohn
James Filla (Boulder)
William George
Terence Griffin
Howard Hung
Peter Ketcham
John Koontz (Boulder)
Steven Satterfield
James Sims
Deborah Caton, S
Stefanie Copley (Boulder), S
Howland Fowler, GR
Julien Franiette, GR
Olivier Nicolas, GR
John-Lloyd Littlefield, S
Vital Pourprix, GR

Submitted
In Process
Visualizations Published
3.2. Presentations
Invited Talks
Conference Presentations
Visualizations Produced
3.3. Conferences, Minisymposia, Lecture Series, Short-courses
MCSD Seminar Series
DLMF Seminar Series
Scientific Object Oriented Programming Users Group (SCOOP)
Local Events Organized
External Event Organization
Other Participation
3.4. Software Released
3.5. External Contacts
3.6. Other Professional Activities
Internal
External
Outreach
Part IV - Staff
Division Staff
Mathematical Modeling Group
Mathematical Software Group
Optimization and Computational Geometry Group
Scientific Applications and Visualizations Group