Research
Photosynthesis, the process by which green plants use sunlight to convert carbon dioxide and water into oxygen and carbohydrates, is the basis for all life on Earth: The oxygen makes Earth's air breathable, and the carbohydrates feed the entire food web. Many scientists would like to mimic this process to produce inexpensive fuels and raw materials using renewable solar energy. But copying Nature's chemistry is no simple matter. “Nature has found a way to do this over eons,” says chemist Etsuko Fujita of DOE's Brookhaven Lab. “It's very complicated chemistry.”

That hasn't stopped Fujita and her colleagues from trying. In one example, Fujita explains, “We would like to produce hydrogen — for use in fuel cells or other processes — from plain water and sunlight.” Recent experiments with a novel ruthenium-quinone catalyst discovered by Japanese colleagues have met with some success in mimicking what appears to be the rate-limiting reaction in water splitting. The reaction, called water oxidation, is a step in natural photosynthesis that produces oxygen as well as protons and electrons from water. The protons and electrons can then be combined in a second reaction to make molecular hydrogen. “We are combining theoretical and experimental studies to determine how this ruthenium complex with bound quinone molecules efficiently catalyzes water oxidation,” Fujita says.

Fujita has also conducted pioneering work in understanding and advancing the catalysis of carbon dioxide reduction, a crucial step in transforming carbon dioxide into useful organic compounds such as methanol. Her systematic combination of catalyst synthesis with advanced methods for determining key reaction pathways has produced exceptional results on a scientifically difficult problem. Her accomplishments span well over a decade, from early-1990s work that has become a cornerstone of the scientific foundation for solar activation of carbon dioxide to recent innovations in bio-inspired photochemical processes that demonstrate creative new pathways to carbon dioxide reduction.

Submitted by DOE's Brookhaven National Laboratory
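For readers who want the underlying chemistry spelled out, the two steps described in the story above correspond to the textbook water-splitting half-reactions; this shows only the overall stoichiometry, not the specific mechanism of the ruthenium-quinone catalyst:

\[ 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \qquad \text{(water oxidation)} \]
\[ 4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2} \qquad \text{(hydrogen evolution)} \]

The first reaction is the oxygen-producing step the catalyst targets; the second recombines the released protons and electrons into molecular hydrogen.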
‘Exascale’ computing envisioned by Sandia and Oak Ridge researchers

DOE's Sandia and Oak Ridge national laboratories recently launched the Institute for Advanced Architectures to lay the groundwork for a new computer that would perform a million trillion calculations per second, or one exaflop. An exaflop is a thousand times faster than a petaflop, which in turn is a thousand times faster than a teraflop. Teraflop computers — the first was developed 10 years ago at Sandia — currently are the state of the art and perform trillions of calculations a second.

The institute is intended “to close critical gaps between theoretical peak performance and actual performance on current supercomputers,” says Sandia project lead Sudip Dosanjh. “We believe this can be done by developing novel and innovative computer architectures.”

Ultrafast supercomputers improve simulations of real-world conditions by helping researchers examine the interactions of larger numbers of particles over time periods divided into smaller segments. An exascale computer would enable researchers to perform more accurate simulations in support of emerging science and engineering challenges in national defense, energy assurance, advanced materials, climate, and medicine, says James Peery, Sandia director of computation, computers and math. The institute is funded in FY08 by congressional mandate at $7.4 million and is supported by the National Nuclear Security Administration and the Department of Energy's Office of Science.

One aim of the institute is to reduce or eliminate the growing mismatch between data movement and processing speeds. Processing speed refers to how rapidly a processor can manipulate data to solve its part of a larger problem. Data movement refers to getting data from a computer's memory to its processing chip and back again. The larger the machine, the farther from a processor the data may be stored and the slower the movement of data. Compounding the problem is new technology that has enabled designers to split a processor into first two, then four, and now eight cores on a single die; some special-purpose processors have 24 or more cores on a die.

“In order to continue to make progress in running scientific applications at these [very large] scales,” says Jeff Nichols, who heads the Oak Ridge branch of the institute, “we need to address our ability to maintain the balance between the hardware and the software. There are huge software and programming challenges and our goal is to do the critical R&D to close some of the gaps.”

Submitted by DOE's Sandia National Laboratories
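To make the data-movement bottleneck described above concrete, here is a minimal back-of-the-envelope sketch in Python; every figure in it (the operation count, data size, and memory bandwidth) is an illustrative assumption, not a specification of any planned machine:

# Rough model of the compute vs. data-movement balance discussed above.
# All numbers are illustrative assumptions, not measured values.

TERAFLOP = 1e12  # calculations per second
PETAFLOP = 1e15  # a thousand teraflops
EXAFLOP = 1e18   # a thousand petaflops; the institute's target scale

def compute_time(operations, flops):
    """Seconds of pure arithmetic at a given sustained speed."""
    return operations / flops

def data_movement_time(bytes_moved, bytes_per_second):
    """Seconds spent shuttling data between memory and the processor."""
    return bytes_moved / bytes_per_second

# Hypothetical workload: 10^18 operations that touch 10^16 bytes of data.
operations = 1e18
bytes_moved = 1e16

# Hypothetical exascale system: 1 exaflop of peak compute, but only
# 10^15 bytes per second of aggregate memory bandwidth.
arithmetic = compute_time(operations, EXAFLOP)     # 1.0 second
movement = data_movement_time(bytes_moved, 1e15)   # 10.0 seconds

print(f"arithmetic: {arithmetic:.1f} s, data movement: {movement:.1f} s")
# When movement takes ten times longer than arithmetic, the processors sit
# idle waiting on memory; that is the mismatch the institute aims to reduce.

Running the sketch prints "arithmetic: 1.0 s, data movement: 10.0 s," showing how, under these assumed numbers, memory bandwidth rather than raw processing speed sets the pace of the whole calculation.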