ASCR News & Resources

Monthly News Roundup

ASCR Monthly Computing News Report—May 2012




This monthly survey of computing news of interest to ASCR is compiled by Jon Bashor (JBashor@lbl.gov) with news provided by Argonne, Fermi, Lawrence Berkeley, Lawrence Livermore, Los Alamos, Oak Ridge, Pacific Northwest and Sandia national labs.

In this issue:
Research News
Researchers at Argonne and the Computation Institute Win R&D 100 Award
INCITE/OLCF Team’s Research Featured on Science Magazine Cover
Jaguar Used to Simulate Process Improving Catalytic Rate of Enzymes by 3,000 Percent
Learning From Photosynthesis to Create Electricity
Turning Water into Hydrogen Fuel Using a More Reactive Catalytic Surface
ALCF Scientists Probe the Cosmic Structure of the Dark Universe
PNNL Researchers Develop Parallel-in-Time Integration for Molecular Dynamics
Scientists Conduct Multiscale Modeling of Energy Storage Materials at ALCF
Floating Robots Track Water Flow with Smart Phones, Send Data to NERSC

People
Berkeley Lab Mathematician John Bell Elected to National Academy of Sciences
Berkeley Lab’s Hank Childs Wins 2012 DOE Early Career Award
Argonne’s Barry Smith to Receive Distinguished Performance Award
Argonne’s Ian Foster to Receive HPDC Achievement Award

Facilities/Infrastructure
Argonne’s Mira among World’s Fastest Science Supercomputers
Researchers Take "Test Drives" on ESnet’s 100 Gigabit Testbed
NERSC Announces Data Intensive Computing Pilot Program Awards
After Five Years, NERSC’s Franklin Cray XT4 System Retires

Conferences and Meetings
Plotting the Future for Computing in High-Energy and Nuclear Physics
NVIDIA GPU Technology Conference Showcases Newfound Scientific Prowess on Titan
ESnet Staff Share Expertise at Europe’s Premier Networking Conference
OLCF Helps Take the Lead at Annual Cray User Group Meeting
Berkeley Lab Staff Participate in Inria@Silicon Valley Workshop in Paris
Berkeley Lab Staff Contribute to SIAM Conference on Imaging Science

Outreach and Education
Berkeley Lab-Mentored Girls Win National Contest to Develop Science Ed App
Researchers Test Drive Blue Gene/Q at ALCF "Leap to Petascale" Workshop
OLCF Spring Training Workshop Prepares Users New and Old

Researchers at Argonne and the Computation Institute Win R&D 100 Award

Ian Foster, Argonne distinguished fellow and director of the Argonne/University of Chicago Computation Institute, together with his team at the institute, received an R&D 100 award for Globus Online, a service that enables the rapid transfer of large quantities of data between institutions. The R&D 100 awards, organized by R&D Magazine, have been given out annually since 1962 for the top technologies of the year. Globus Online addresses a central problem in the emerging world of big data research: moving large quantities of information among the sites where data are produced, transformed, stored, and consumed. Researchers with minimal IT expertise can use Globus Online to move large scientific data sets reliably and quickly among large scientific facilities, cloud storage providers, campus systems, and personal computers.

INCITE/OLCF Team’s Research Featured on Science Magazine Cover
A recent cover of Science magazine features a visualization from a longstanding INCITE/Oak Ridge Leadership Computing Facility (OLCF) user team’s quest to discover the mechanism behind the explosions of core-collapse supernovas (CCSNs). The June 1, 2012 issue of Science explores eight unsolved problems in astronomy, including CCSNs. The phenomenon has been explored by multiple teams on OLCF systems for years, yielding numerous breakthroughs in the effort to understand the origins of our universe.

The visualization is related to a paper the team, led by Oak Ridge National Laboratory’s (ORNL’s) Tony Mezzacappa, published in Nature in 2008. The Nature paper detailed a possible mechanism for spinning up neutron stars to become pulsars, or rotating neutron stars, which are produced only by CCSNs. Pulsars are handy tools for astronomers and physicists because of the regular intervals at which they emit radiation toward Earth. Previously, researchers had no good explanation for how a slowly rotating star gains speed. The standing accretion shock instability (SASI), a phenomenon that occurs in a stalled supernova shock, explains how fast rotation can arise even without a rapidly spinning star to start with, said team member and OLCF staff member Bronson Messer.

According to Messer, "The SASI and, in particular, its possible role in pulsar birth, was really made clear via visualizations such as the one on the cover of Science."

Jaguar Used to Simulate Process Improving Catalytic Rate of Enzymes by 3,000 Percent
Light of specific wavelengths can be used to boost an enzyme’s function by as much as 30-fold, potentially establishing a path to less expensive biofuels, detergents and a host of other products. In a paper published in the Journal of Physical Chemistry Letters, a team led by Pratul Agarwal of ORNL described a process that aims to improve upon nature—and it happens in the blink of an eye. Agarwal noted that enzymes are present in every organism and are widely used in industry as catalysts in the production of biofuels and countless other everyday products.

While the researchers obtained final laboratory results at industry partner AthenaES, computational modeling allowed Agarwal to test thousands of combinations of enzyme sites, modification chemistry, different wavelengths of light, different temperatures and photo-activated switches. Simulations performed on the OLCF’s Jaguar supercomputer also allowed researchers to better understand how the enzyme’s internal motions control the catalytic activity.

"This modeling was very important as it helped us identify regions of the enzyme that were modified by interactions with chemicals," said Agarwal, a member of ORNL’s Computer Science and Mathematics Division. "Ultimately, the modeling helped us understand how the mechanical energy from the surface can eventually be transferred to the active site where it is used to conduct the chemical reaction." Read more.

Learning From Photosynthesis to Create Electricity
Solar power remains more expensive on average than fossil fuels; one reason is that traditional photovoltaics require expensive rare-earth elements. If we learn from plants, which use only common elements—hydrogen, nitrogen, carbon, oxygen and some others—to convert sunlight into energy, then we’ll be able to bring down the cost of solar power. This is why researchers are looking at bio-inspired materials as possible resources for solar energy.

In the 1990s, an Arizona State University research group made a huge advance in this field by creating the carotenoid-porphyrin-C60 molecular triad, a novel material that converts sunlight into chemical energy by mimicking photosynthesis. However, the material has been difficult to commercialize because it can only be controlled, or confined, in experimental labs.

But now, using 2 million computer hours at NERSC and 2.5 million computer hours at the Texas Advanced Computing Center (TACC), University of Houston physicist Margaret Cheung and her team have explored the role that confinement, temperature, and solvents play in the stability and energy efficiency of the light-harvesting triad. Their results provide a way to test, tailor, and engineer nano-capsules with embedded triads that, when combined in large numbers, could greatly increase the ability to produce clean energy. Read more.

Turning Water into Hydrogen Fuel Using a More Reactive Catalytic Surface
Although fuel cells have been touted as a clean alternative to combustion engines for powering cars, the molecular hydrogen required to fuel this technology is naturally rare on Earth and must be extracted from natural gas or water. To split water molecules, the popular catalyst titanium dioxide (TiO2) needs an even layer of hydroxyl (OH) groups across its surface. Conventional methods for putting hydroxyl groups on TiO2 achieve about 20 percent coverage, but scientists have been trying to beat those results, getting more coverage without resorting to extremes of time, temperatures, or resources.

Now a team of researchers at the Pacific Northwest National Laboratory (PNNL) and the Worcester Polytechnic Institute has figured out how to cover 50 percent of a TiO2 surface with hydroxyl groups. Using resources at NERSC and the Environmental Molecular Science Laboratory (EMSL) at PNNL, the team also characterized the atomic-level structure and reactivity of the hydroxyl-rich TiO2 surface. Read more.

ALCF Scientists Probe the Cosmic Structure of the Dark Universe
The origin of dark energy and dark matter—together accounting for 95 percent of the mass-energy of the Universe—remains mysterious. To learn more about their ultimate nature, a team of researchers led by Argonne National Laboratory’s Salman Habib and co-PI Katrin Heitmann is carrying out some of the largest high-resolution simulations of the distribution of matter in the Universe. The researchers are resolving galaxy-scale mass concentrations over observational volumes representative of state-of-the-art sky surveys by using Mira, a petascale supercomputer at the Argonne Leadership Computing Facility (ALCF). A key aspect of the project involves developing a major simulation suite covering approximately 100 different cosmologies—an essential resource for interpreting next-generation observations. This initiative targets an improvement of roughly two to three orders of magnitude over currently available resources.

The simulation program is based on the new HACC (Hardware/Hybrid Accelerated Cosmology Code) framework, aimed at exploiting emerging supercomputer architectures such as the IBM Blue Gene/Q at the ALCF. HACC is the first (and currently the only) large-scale cosmology code suite worldwide that can run at scale on all available supercomputer architectures. To achieve this versatility, the researchers essentially built the code from scratch. The code has now been ported to the BG/Q and is running extremely well. The set of simulations being produced will be a unique resource for cosmological research, and the resulting database will be an essential component of Dark Universe science for years to come.

ALCF Blue Gene/Q system

For more information, contact Salman Habib (habib@anl.gov).


PNNL Researchers Develop Parallel-in-Time Integration for Molecular Dynamics
Several parallel-in-time algorithms have been developed for molecular dynamics (MD) and ab initio molecular dynamics (AIMD) simulations. These algorithms iteratively solve for the trajectory of the system over a fixed interval of time consisting of many evolutionary time-steps. The computational effort required to compute the force acting on the system is distributed across the time domain—the forces at different time-steps within the solution interval are computed simultaneously rather than sequentially. In a recent publication, PNNL and university researchers viewed the trajectory of the system over the solution interval as the root of a system of nonlinear equations, which they solved by a stabilized, preconditioned fixed-point iteration using both coarse-grained physics models and Broyden-type strategies. The resulting integration method was demonstrated to provide speedups of between 5 and 20, even across very slow TCP/IP networks and even in cases where useful coarse-grained models are not available. The research team includes Jonathan Q. Weare (U. Chicago), Eric J. Bylaska (PNNL), and John H. Weare (UCSD).
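The stabilized, Broyden-accelerated iteration from the paper is not reproduced here; the sketch below only illustrates the general parallel-in-time idea with a classic parareal-style fixed-point iteration on a toy harmonic oscillator. The function names, propagators, and parameters are illustrative, not the authors’ code; the key point is that the expensive "fine" propagations at different time slices are independent and could be computed simultaneously.

```python
# Minimal parareal-style sketch of parallel-in-time integration on a toy
# 1-D harmonic oscillator (force = -x). Illustrative only, not the
# stabilized, preconditioned Broyden scheme described above.
import numpy as np

def fine_step(u, dt, nsub=100):
    """Expensive propagator: many small velocity-Verlet sub-steps."""
    x, v = u
    h = dt / nsub
    for _ in range(nsub):
        v += 0.5 * h * (-x)
        x += h * v
        v += 0.5 * h * (-x)
    return np.array([x, v])

def coarse_step(u, dt):
    """Cheap propagator: a single symplectic Euler step (the 'coarse model')."""
    x, v = u
    v = v - dt * x
    x = x + dt * v
    return np.array([x, v])

def parareal(u0, dt, nslices, niters):
    U = np.zeros((nslices + 1, 2))
    U[0] = u0
    # Serial coarse sweep gives an initial guess for the whole trajectory.
    for n in range(nslices):
        U[n + 1] = coarse_step(U[n], dt)
    for _ in range(niters):
        # These fine solves are independent across time slices and are what
        # would be distributed over processors in a real implementation.
        F = np.array([fine_step(U[n], dt) for n in range(nslices)])
        Unew = np.zeros_like(U)
        Unew[0] = u0
        for n in range(nslices):  # cheap serial correction sweep
            Unew[n + 1] = coarse_step(Unew[n], dt) + F[n] - coarse_step(U[n], dt)
        U = Unew
    return U

traj = parareal(np.array([1.0, 0.0]), dt=0.1, nslices=50, niters=5)
print(traj[-1])  # approaches the serial fine solution as iterations proceed
```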

Scientists Conduct Multiscale Modeling of Energy Storage Materials at ALCF
U.S. reliance on fossil fuels is increasingly recognized as a threat to national energy security and a source of global climate change. The development of batteries and fuel cells could provide viable clean-energy alternatives for replacing internal combustion engines in automobiles and for powering personal electronics. However, electrochemical technologies continue to lag behind fossil fuels in performance and cost. Breakthroughs are hindered both by a lack of understanding of transport and catalytic mechanisms and by the complexity of modeling chemical processes in the individual components of fuel cells and batteries, as well as the dynamics at the interfaces between those components.

Led by Gregory Voth, a team of scientists at The University of Chicago and Argonne National Laboratory is combining a powerful multiscale simulation methodology with the leading-edge resources at the Argonne Leadership Computing Facility (ALCF) to address key questions about the poorly understood ion-conduction mechanisms in fuel cell membranes and at battery interfaces. The methodology will form the first step in a potential feedback loop with experimental efforts.

After the research team has completed foundational atomistic simulations for fuel cell membranes, coarse-grained models will be constructed and used to accelerate simulations of comparably sized systems, as well as to begin initial studies of membrane structure at much larger length scales. The initial atomistic simulations are also being analyzed to examine the degree to which water content alters the solvation structure of charged defects and ionic side chains along the polymer backbone. For the Li-ion battery systems, the team has parameterized Li-ion interactions with electrolytes and salts using electronic structure calculations, and the resulting models are now being validated. Results from these initial studies already highlight the importance of including the correct physics for describing the electric field at the electrolyte/electrode interface, as reflected in the observed changes in Li-ion and electrolyte solvation structures near the interface under different applied voltages. Extending the accessible time and length scales for simulating these systems will enable the exploration of fundamental questions regarding proton, hydroxide, and Li-ion transport.
For more information, contact Gregory Voth (gvoth@uchicago.edu).

Floating Robots Track Water Flow with Smart Phones, Send Data to NERSC
To understand how water flows through the Sacramento-San Joaquin Delta, 100 mobile sensors were placed in the Sacramento River on May 9 to make critical measurements every few seconds. The collected data were then transmitted to NERSC for assimilation and analysis.

Two-thirds of the water in California passes through the Sacramento-San Joaquin Delta, providing drinking water for 22 million Californians and supporting agriculture valued at tens of billions of dollars. Understanding how the water flows through the delta on its way to pumping stations and San Francisco Bay is imperative to balance conflicting demands on this critical resource. Read more.


People

Berkeley Lab Mathematician John Bell Elected to National Academy of Sciences
John Bell, an applied mathematician and computational scientist who leads the Center for Computational Sciences and Engineering and the Mathematics and Computational Science Department at Berkeley Lab, has been elected to the National Academy of Sciences. He is one of only two mathematicians on this year’s list.

Bell is well known for his contributions in the areas of finite difference methods, numerical methods for low Mach number flows, adaptive mesh refinement, interface tracking, and parallel computing, as well as for the application of these numerical methods to problems from a broad range of fields, including combustion, shock physics, seismology, flow in porous media, and astrophysics. He is the co-author of more than 160 research papers. Read more.

John Bell



Berkeley Lab’s Hank Childs Wins 2012 DOE Early Career Award
Hank Childs of the Computational Research Division’s Visualization Group at Berkeley Lab has been honored with a 2012 DOE Early Career Award. Childs was selected by DOE’s Office of Advanced Scientific Computing Research for his contributions to "Data Exploration at the Exascale." This is the third year of the Early Career Research Program managed by the U.S. Department of Energy’s Office of Science, and Childs is one of four researchers from the Lawrence Berkeley National Laboratory (Berkeley Lab) who were honored. In total, there were 68 award recipients from 47 institutions. Read more.

Hank Childs



Argonne’s Barry Smith to Receive Distinguished Performance Award
Barry Smith, a senior computational mathematician in Argonne’s Mathematics and Computer Science Division, has been named recipient of a 2012 Argonne National Laboratory Distinguished Performance Award, given by the UChicago Argonne, L.L.C., Board of Governors of Argonne National Laboratory. Distinguished Performance Awards recognize outstanding scientific or technical achievements, or a distinguished record of achievement, of select Argonne employees.

Smith is widely considered the "father" of PETSc, the Portable, Extensible Toolkit for Scientific computation, which is regarded as the gold standard for parallel partial differential equation simulations. Smith is also an outstanding applied mathematician. His breakthrough research extending domain decomposition led to three international awards. Moreover, his most recent work has focused on designing and implementing efficient solvers for cutting-edge simulations on DOE leadership-class computers and other advanced architectures. Smith will be honored in an awards ceremony and reception later this summer.
Barry Smith

Argonne’s Ian Foster to Receive HPDC Achievement Award
Ian Foster, Argonne distinguished fellow and director of the Argonne/University of Chicago Computation Institute, has been named the first recipient of the High-Performance Parallel and Distributed Computing (HPDC) Achievement Award. The newly established annual award recognizes a person who has made long-lasting, influential contributions to the foundations or practice of high-performance parallel and distributed computing. Foster will be presented with the award on June 22 at the 21st ACM International Symposium on HPDC in Delft, the Netherlands. Preceding the presentation, he will give a talk titled "Reflections on 20 Years of Grid Computing."

Foster, widely considered the "father of the grid," has spearheaded research in advanced computing technologies. He is particularly interested in high-performance networking to incorporate remote computing and information resources into local computational environments.

Ian Foster



Facilities/Infrastructure

Argonne’s Mira among World’s Fastest Science Supercomputers
Mira, Argonne National Laboratory’s new IBM Blue Gene/Q system, is the third fastest supercomputer in the world according to the TOP500 list announced on June 18 at the International Supercomputing Conference (ISC) in Hamburg, Germany. Mira thus takes its place among the U.S. computational giants poised to propel scientific discoveries into the petascale.

Also announced at the ISC on June 19: Mira tied for first place on the Graph 500 list with Sequoia, located at Lawrence Livermore National Laboratory. Each supercomputer achieved a score of more than 3,500 GTEPS (giga-traversed edges per second, or billions of graph edges traversed per second). Vesta, Mira’s testing and development rack, placed sixth on the list.

The TOP500 list, now in its 39th edition, is the semiannual ranking of the world’s most powerful supercomputers. Its rankings are based on performance measured with the Linpack benchmark, a special-purpose code whose score is reported in quadrillions of floating-point operations per second, or petaflops. For the June list, Mira achieved 8.1 petaflops on the Linpack benchmark.

Graph 500 began in 2010 as a complement to the TOP500. The Graph 500 benchmark evaluates machine performance while running data-intensive analytic applications and is a measure of a machine’s communications capabilities and computational power.
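As a back-of-the-envelope illustration, both figures of merit reduce to simple ratios of work done to wall-clock time. The sketch below uses made-up inputs (the actual benchmark run sizes and times are not given here), chosen only so the outputs land near the figures quoted above.

```python
# Illustrative arithmetic for the two benchmark metrics mentioned above.
# The example inputs are hypothetical, not measured benchmark values.

def petaflops(total_flops, seconds):
    """Linpack-style rate: floating-point operations per second, in units of 1e15."""
    return total_flops / seconds / 1e15

def gteps(edges_traversed, seconds):
    """Graph 500-style rate: traversed edges per second, in units of 1e9."""
    return edges_traversed / seconds / 1e9

# e.g. a run performing 2.9e19 floating-point operations in one hour:
print(petaflops(2.9e19, 3600))   # ~8.1 petaflops
# e.g. a breadth-first-search sweep traversing 3.5e13 edges in 10 seconds:
print(gteps(3.5e13, 10))         # ~3,500 GTEPS
```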
For more information, visit the ALCF website.


Researchers Take "Test Drives" on ESnet’s 100 Gigabit Testbed
With $62 million in funding from the American Recovery and Reinvestment Act, ESnet built a 100 Gbps long-haul prototype network and a wide-area testbed. So far more than 25 groups have taken advantage of ESnet’s wide-area testbed, which is open to researchers from government agencies and private industry to test new, potentially disruptive technologies without interfering with production science network traffic. The testbed currently connects three unclassified DOE supercomputing facilities: the National Energy Research Scientific Computing Center (NERSC) in Oakland, Calif., the Argonne Leadership Computing Facility (ALCF) in Argonne, Ill., and the Oak Ridge Leadership Computing Facility (OLCF) in Oak Ridge, Tenn.

"No other networking organization has a 100-gigabit network testbed that is available to researchers in this way," says Brian Tierney, who heads ESnet’s Advanced Networking Technologies Group. "Our 100G testbed has been about 80 percent booked since it became available in January, which just goes to show that there are a lot of researchers hungry for a resource like this."
Read more.

NERSC Announces Data Intensive Computing Pilot Program Awards
NERSC’s new data-intensive science pilot program is aimed at helping scientists capture, analyze, and store the increasing stream of scientific data coming out of experiments, simulations, and instruments. Those selected for the pilot program will get access to large data stores, priority access to a 6-terabyte flash-based file system, and priority access to Hadoop-style computing resources on NERSC’s Carver InfiniBand cluster. They may also use NERSC’s Science Gateways for web access.

The first awards in this pilot program were made to the following eight projects:

  • High Throughput Computational Screening of Energy Materials
  • Analysis and Serving of Data from Large-Scale Cosmological Simulations
  • Interactive Real-Time Analysis of Hybrid Kinetic-MHD Simulations with NIMROD
  • Next-Generation Genome-Scale In Silico Modeling: The Unification of Metabolism, Macromolecular Synthesis, and Gene Expression Regulation
  • Data Processing for the Dayabay Reactor Neutrino Experiment’s Search for Theta13
  • Transforming X-Ray Science toward Data-Centrism
  • Data Globe
  • Integrating Compression with Parallel I/O for Ultra-Large Climate Data Sets
Read more about the awarded projects.

After Five Years, NERSC’s Franklin Cray XT4 System Retires
In late April 2012, DOE’s National Energy Research Scientific Computing Center (NERSC) retired one of its most scientifically prolific supercomputers to date—a Cray XT4 named Franklin in honor of the pioneering American scientist Benjamin Franklin. Over its five-year lifetime, Franklin delivered 1.18 billion processor hours to scientific research in service to NERSC’s more than 4,500 users. Read more.


Conferences and Meetings

Plotting the Future for Computing in High-Energy and Nuclear Physics
More than 500 physicists and computational scientists from around the globe, including many working at the world’s largest and most complex particle accelerators, met in New York City May 21–25 to discuss the development of the computational tools essential to the future of high-energy and nuclear physics. The 19th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was hosted by DOE’s Brookhaven National Laboratory and New York University.

The conference was organized by scientists from Brookhaven Lab’s RHIC and ATLAS Computing Facility (RACF), which provides computing services for Brookhaven’s Relativistic Heavy Ion Collider (RHIC) and the U.S.-based collaborators in the ATLAS experiment at Europe’s Large Hadron Collider (LHC) — particle accelerators that recreate conditions of the early universe in billions of subatomic particle collisions to explore the fundamental forces and properties of matter — as well as the collaborators in the Large Synoptic Survey Telescope (LSST) project. A central theme of the meeting was how to keep up with ever-increasing needs for data processing and analysis from such complex experiments in a cost-effective, efficient manner. Read more.

NVIDIA GPU Technology Conference Showcases Newfound Scientific Prowess on Titan
An international gathering of researchers, computer scientists, and engineers converged on San Jose, California, from May 14–17 to share their experiences using the newest technology in HPC—blisteringly fast graphics processing units (GPUs).

ORNL, home to a GPU-accelerated supercomputer known as Titan, has partnered with technology company NVIDIA, host of the annual GPU Technology Conference (GTC) and inventor of the GPU, to use this new technology for next-generation scientific challenges.

"Problems solved and insights gained from GPU acceleration of scientific codes are surprisingly applicable from one research field to another," said Andy Walsh, director of marketing for GPU computing at NVIDIA. "And, enabling researchers from around the world to learn about early results and potential new breakthroughs on the Titan supercomputer was an incredibly valuable addition to the conference."

Traditionally, HPC has relied on increasing the number of central processing units (CPUs) to increase computation speed. But with the advent of high-performance, energy-efficient GPUs, researchers are now able to process larger numbers of parallel tasks much faster and more efficiently, while allowing the CPUs to focus on more complex calculations.

Several ORNL staff members presented at the conference. Jack Wells, director of science at the Oak Ridge Leadership Computing Facility, which manages Titan, chaired a session about his center’s landmark system, which will employ NVIDIA GPU technology to deliver important scientific insights immediately. When Titan is fully operational in 2013, it is expected to reach a peak of 20 petaflops, or 20 thousand trillion calculations per second.

ESnet Staff Share Expertise at Europe’s Premier Networking Conference
ESnet staff members Eric Pouyoul, Jon Dugan, and Bill Johnston were among the speakers at the 2012 TERENA Networking Conference held May 21–24 in Reykjavík, Iceland. The conference, sponsored by the Trans-European Research and Education Networking Association, is the largest and most prestigious European research networking conference. Read more.

OLCF Helps Take the Lead at Annual Cray User Group Meeting
The annual Cray User Group (CUG) meeting, held Apr. 29–May 3 in Stuttgart, Germany, brought computational researchers together to share their expertise and findings with one another, all in hopes of bringing next-generation supercomputers online. Oak Ridge National Laboratory researchers helped take the lead.

Most contributions from the OLCF dealt with the center’s newest supercomputer—a Cray XK6 dubbed Titan. More than 20 ORNL staff members took part in presentations during the conference, presenting five papers. In addition, the conference’s winning paper, "Software Usage on Cray Systems across Three Centers (NICS, ORNL and CSCS)," was coauthored by ORNL staff members Bilel Hadri, Mark Fahey, and William Renaud. One of the runner-up papers, "Porting the Community Atmosphere Model—Spectral Element Code to Utilize GPU Accelerators," also had ORNL contributions from Matthew Norman, Richard Archibald, Valentine Anantharaj, and Katherine Evans.

OLCF staff organized a special invitation-only "birds of a feather" session where owners and administrators of Cray XK systems came together to discuss how to more effectively measure the performance of hybrid architecture systems. These administrators focused on finding methods to measure accelerator usage and efficiency on supercomputers. Both NVIDIA and Cray have developed products to tackle these issues, and the session focused on how effective these products have been, and what other products may be developed based on individual needs.

Berkeley Lab Staff Participate in Inria@Silicon Valley Workshop in Paris
The second Berkeley-Inria-Stanford workshop (BIS’12) was hosted by Inria in Paris on May 21–22, 2012. The workshop was co-organized by UC Berkeley, Inria, and Stanford University, in partnership with CITRIS and the French Ministry of Foreign Affairs, and is part of the joint research program Inria@SiliconValley. Its objectives were twofold: first, to present the current state of scientific collaborations, and second, to work on proposals for ambitious future joint projects.

From Lawrence Berkeley National Laboratory’s Computational Research Division (CRD), Deb Agarwal gave an invited talk on "Driving Data Management for Science Using the 20 Questions Approach" and participated in a panel discussion on "Big Data: Scientific and Societal Challenges." Other CRD participants included Esmond Ng, who is collaborating on "Fast and Scalable Hierarchical Algorithms for Computational Linear Algebra"; and Jim Demmel, who is collaborating on "Communication Optimal Algorithms for Linear Algebra."

Berkeley Lab Staff Contribute to SIAM Conference on Imaging Science
The 2012 SIAM Conference on Imaging Science, held May 20–22 in Philadelphia, Pa., featured a number of presentations by Berkeley Lab researchers. New devices capable of imaging objects and structures from the nanoscale to the astronomical scale are continuously being developed and improved, and as a result, the reach of science and medicine has been extended in exciting and unexpected ways. This technology has also generated new challenges associated with the formation, acquisition, compression, transmission, and analysis of images. Berkeley Lab researchers giving presentations at the conference were:
  • Filipe Maia, NERSC: "Real-Time Ptychographic X-Ray Image Reconstruction."
  • Stefano Marchesini, Advanced Light Source: co-organized session on Algorithms for Diffractive Imaging; presentation title unavailable.
  • Ralf Grosse-Kunstleve (co-author with Nicholas Sauter), Physical Biosciences Division: "Computational Challenges for Biological Structure Determination Using X-Ray Diffraction."
  • Chao Yang, Computational Research Division: "Algorithms for Single Molecule Diffractive Imaging"; co-organized session on Algorithms for Diffractive Imaging.


Outreach and Education

Berkeley Lab-Mentored Girls Win National Contest to Develop Science Ed App
After taking top honors among their peers from Albany and Berkeley High Schools, a team of five girls from Albany High beat out 10 other teams from high schools around the country to win the 2012 Technovation Challenge. The challenge is a 10-week program in which teams of girls develop science education apps for smartphones. The team was mentored by Sufia Haque of Berkeley Lab’s Engineering Division and Taghrid Samak of the Computational Research Division. In all, 24 women at the Lab served as mentors to girls participating in the program. Read more.

Researchers Test Drive Blue Gene/Q at ALCF "Leap to Petascale" Workshop
Researchers eager to get their code ported to Mira, the powerful IBM Blue Gene/Q supercomputer being installed at Argonne National Laboratory, converged for the Leap to Petascale workshop held May 22–25. At the workshop, they got early access to the BG/Q test and development rack at the Argonne Leadership Computing Facility (ALCF).

Geared especially for projects that are already scaled to multiple racks of the ALCF’s Blue Gene/P system, this year’s Leap to Petascale focused on scaling to Mira, the 10-petaflops Blue Gene/Q—the third fastest supercomputer in the world. A large portion of the workshop was devoted to hands-on tuning of applications with one-on-one assistance from the ALCF’s team of experts. Special system reservations provided attendees with the opportunity to conduct full-scale runs.

For more information, view the presentations from the Leap to Petascale workshop, including talks on hardware and architecture, compilers, communication libraries, debuggers and more.

OLCF Spring Training Workshop Prepares Users New and Old
The Oak Ridge Leadership Computing Facility hosted its annual Spring Training and Users Meeting Apr. 16–18 to help users new and old get warmed up on the Titan supercomputer. OLCF staff divided the meeting into two parallel tracks. One catered to new users who needed more general information about the computing resources and capabilities of the center, while the other was designed for more experienced users looking for guidance on Titan.

New users spent the first two days of the conference learning the basics of parallel computing, including computing basics using UNIX, programming fundamentals, and the Message Passing Interface (MPI). For more advanced users, OLCF staff and vendor representatives discussed directive-based compilers, new performance analysis tools, and scalable debuggers available to help make the transition to Titan easier. These users also worked on TitanDev, the OLCF’s prototype hybrid machine containing a sample of CPU-GPU nodes, to familiarize themselves with the system.
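To give a flavor of the MPI fundamentals covered for new users, a minimal sketch follows using the Python mpi4py bindings. The actual workshop exercises and languages are not specified here; the script name and values are purely illustrative.

```python
# Minimal MPI example: each process reports its rank, then all ranks
# contribute to a sum that is collected on rank 0.
# Run with a launcher, e.g.:  mpirun -n 4 python mpi_hello.py  (hypothetical script name)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of processes

print(f"Hello from rank {rank} of {size}")

# Each rank contributes a partial value; rank 0 collects the sum.
partial = rank + 1
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 1..{size} computed across ranks: {total}")
```

Under the launcher, every process runs the same script, and the reduce call gathers the per-rank contributions on rank 0—the same single-program, multiple-data pattern that underlies larger MPI applications.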

OLCF staff members see Spring Training as not only a way to get users up to speed with the machine, but also a means to hear thoughts on how to improve the center. "Our team looks forward to the annual users meeting, as it gives us the chance to interact with users in person and learn more about their needs and share what is under way to enhance their experience," said user assistance and outreach team leader Ashley Barker.