
NERSC 3 Greenbook


The DOE Grand Challenge in Computational Accelerator Physics

Robert Ryne, Salman Habib, Ji Qiang,
Los Alamos National Laboratory
Los Alamos, New Mexico

and

Kwok Ko, Zenghai Li, Brian McCandless, Wanjun Mi, Cho-Kuen Ng,
Mikhail Saparov, Vinay Srinivas, Yong Sun, Xiaowei Zhan
Stanford Linear Accelerator Center
Stanford University,
Stanford, CA 94309

Introduction

The previous section provided an overview of Accelerator Physics and the crucial role of High Performance Computing in that field. In this section we discuss progress to date in a project now underway that is using high performance computing to advance the state of the art in accelerator modeling.

In 1997 the U.S. Department of Energy initiated a Grand Challenge in Computational Accelerator Physics. The primary goal of this project is to develop a new generation of accelerator modeling tools for HPC platforms, and to apply them to problems of national importance, including those mentioned in the previous section. The new tools will enable the simulation of problems three to four orders of magnitude larger than has previously been possible; in addition, the use of algorithms and software optimized for the HPC environment will make it possible to obtain results quickly and with very high accuracy. Significant progress has already been made on parallel beam dynamics calculations for proton linac design and parallel wake-field calculations for linear collider design.[*] The tools are being developed by a collaboration involving U.S. national laboratories (LANL and SLAC), universities (Stanford and UCLA), and high performance computing and communications research centers (ACL and NERSC).

Computational Accelerator Physics

Computational accelerator physics consists largely of two main areas: beam dynamics and electromagnetics. In the area of beam dynamics, our goal is to develop an HPC capability to model the acceleration and propagation of an intense charged particle beam as it passes through the various structures in an accelerator complex. In the area of electromagnetics, the goal is to develop a capability to calculate the electromagnetic fields in very large, highly complex structures. These two main areas are described in more detail below.

Beam Dynamics

Many systems involving intense charged particle beams can be described by the Vlasov-Poisson equations. There are two main approaches to solving these equations: particle simulation techniques and direct methods.

In the particle simulation approach, the beam distribution function is represented by a collection of macroparticles, typically tens to hundreds of millions in a large-scale simulation. Often the single-particle equations of motion are derived from a Hamiltonian which includes both externally applied fields and a mean field due to the beam's space charge: $H=H_{\rm ext}+H_{\rm sc}$. Such a form is ideally suited for the application of symplectic split-operator methods.[*] Consider a Hamiltonian that can be written as a sum of two terms, $H=H_1+H_2$, that are each exactly solvable, i.e., suppose we can obtain the mapping ${\cal M}_1$ corresponding to $H_1$ and the mapping ${\cal M}_2$ corresponding to $H_2$. Then the following algorithm is accurate through 2nd order:

${\cal M}(\tau) = {\cal M}_1(\tau/2)\,{\cal M}_2(\tau)\,{\cal M}_1(\tau/2) + O(\tau^3),$

where $\tau$ denotes a time step. If $H_1$ depends only on momenta and $H_2$ depends only on positions, then this algorithm is the same as the well-known leap-frog algorithm. However, the split-operator approach provides a powerful framework capable of dealing with far more complicated Hamiltonians often encountered in accelerator physics, where the Hamiltonian is usually approximated by a high-order polynomial in the phase space variables.
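To make the splitting concrete, the short Python sketch below applies this second-order map to a toy Hamiltonian $H = p^2/2 + V(q)$, with ${\cal M}_1$ an exact drift and ${\cal M}_2$ an exact kick; the function names and the harmonic-oscillator test case are purely illustrative and are not taken from the Grand Challenge codes.

import numpy as np

def drift(q, p, tau):
    """Exact map for H1 = p^2/2: positions advance, momenta are unchanged."""
    return q + tau * p, p

def kick(q, p, tau, grad_V):
    """Exact map for H2 = V(q): momenta receive the impulse -tau * grad V(q)."""
    return q, p - tau * grad_V(q)

def step2(q, p, tau, grad_V):
    """Second-order split-operator step M1(tau/2) M2(tau) M1(tau/2)."""
    q, p = drift(q, p, 0.5 * tau)
    q, p = kick(q, p, tau, grad_V)
    q, p = drift(q, p, 0.5 * tau)
    return q, p

# Toy test: harmonic oscillator, V(q) = q^2/2, so grad_V(q) = q.
q, p = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    q, p = step2(q, p, 0.01, lambda q: q)
print(q, p)  # energy error stays bounded because the map is symplectic

Because each sub-map is symplectic, the composed map is symplectic as well, which is what permits the long tracking runs needed in accelerator simulations.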

Besides being able to treat Hamiltonians with more terms, this approach is easily generalized to higher order accuracy in time. A well-known fourth-order algorithm is due to Forest and Ruth,[*] and an arbitrary order scheme was derived by Yoshida.[*] There are also implicit symplectic methods based on this approach that do not require the Hamiltonian to be split into a sum of exactly solvable pieces.6 Finally, time-dependent systems are easily treated by ``extending the phase space''.7
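As an example of the higher-order constructions cited above, the Forest-Ruth/Yoshida fourth-order integrator can be written as a composition of three second-order steps, one of which runs with a negative weight; the sketch below reuses the step2 function from the previous example.

def step4(q, p, tau, grad_V):
    """Fourth-order integrator from Yoshida's triple-jump composition
    of three second-order steps; the middle sub-step has a negative weight."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1
    for w in (w1, w0, w1):
        q, p = step2(q, p, w * tau, grad_V)
    return q, p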

Unlike some split-operator treatments that separate the Hamiltonian into terms involving only positions and only momenta, our particle simulations separate the Hamiltonian into terms involving the external fields and terms involving the self fields. The external fields are treated using well-established techniques from magnetic optics. One advantage of this is that it enables us to take large time steps, since the dynamics due to external fields is usually dominated by a linear map which is easily obtained analytically or numerically. To treat the self fields, we use a 3D Particle-In-Cell (PIC) approach with area weighting. Open boundary conditions are treated using the method of Hockney.[*] Charge deposition and field interpolation are parallelized using the method of Ferrell and Bertschinger.[*]
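The area-weighting (cloud-in-cell) deposit and gather steps can be sketched as follows. This is a serial 1D analogue written for illustration only: the production code uses trilinear (3D) weights, handles open boundaries with Hockney's method rather than the periodic wrap used here, and parallelizes these operations following Ferrell and Bertschinger.

import numpy as np

def deposit_cic(x, q, n_grid, dx):
    """Cloud-in-cell (linear-weighting) charge deposition on a 1D grid.
    Each particle's charge is shared between its two nearest grid points in
    proportion to its distance from them; the 3D area-weighting scheme uses
    the product of such weights in x, y and z."""
    rho = np.zeros(n_grid)
    s = x / dx                      # position in units of the grid spacing
    i = np.floor(s).astype(int)     # index of the grid point to the left
    w = s - i                       # fractional distance past that point
    np.add.at(rho, i % n_grid, q * (1.0 - w))        # periodic wrap, for brevity only
    np.add.at(rho, (i + 1) % n_grid, q * w)
    return rho / dx

def gather_cic(x, field, dx):
    """Interpolate a gridded field back to the particles with the same weights,
    which avoids introducing a spurious self-force."""
    s = x / dx
    i = np.floor(s).astype(int)
    w = s - i
    n = field.size
    return (1.0 - w) * field[i % n] + w * field[(i + 1) % n]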

Unlike particle-based methods, direct solvers represent the distribution function on a phase space grid. A Vlasov-Poisson code solves the equation:

$\frac{\partial f}{\partial t} + {\bf p}\cdot\frac{\partial f}{\partial {\bf q}} - \frac{\partial V}{\partial {\bf q}}\cdot\frac{\partial f}{\partial {\bf p}} = 0, \qquad {\bf z}=({\bf q},{\bf p}),$
where $f({\bf z},t)$ is the phase space distribution function, and where ${\bf z}$ denotes the phase space variables. The potential V is a sum of external and space charge contributions. To solve this equation, the approach we use is a spectral method combined with the above-mentioned split-operator techniques. For example, a second-order algorithm for evolving the distribution function is:

$f({\bf z},\, t+\tau) = f\!\left({\cal M}^{-1}{\bf z},\, t\right), \qquad {\cal M} = {\cal M}_1(\tau/2)\,{\cal M}_2(\tau)\,{\cal M}_1(\tau/2).$
Stochastic corrections to the Vlasov-Poisson evolution arise from particle collisions and from noise in the external fields. Treating these effects requires adding damping and Langevin forces to the PIC simulations, which corresponds to solving the Fokker-Planck equation for the distribution function.
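For illustration, a heavily simplified 1D realization of this spectral split-operator scheme is sketched below. The normalization (unit charge-to-mass ratio, a neutralizing background) and the Fourier treatment of the velocity direction, which assumes $f$ is negligible at the edges of the velocity grid, are our simplifying assumptions rather than features of the production solver; collisional (Fokker-Planck) terms are omitted.

import numpy as np

def vlasov_poisson_step(f, x, v, tau):
    """One second-order split-operator step for the 1D Vlasov-Poisson system
    df/dt + v df/dx + E(x) df/dv = 0,  dE/dx = rho - <rho>,
    with Fourier (spectral) interpolation used for both advections.
    f[i, j] ~ f(x_i, v_j); x is periodic, f is assumed ~0 at the v boundaries."""
    nx, nv = f.shape
    dx, dv = x[1] - x[0], v[1] - v[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    kv = 2.0 * np.pi * np.fft.fftfreq(nv, d=dv)

    def advect_x(f, dt):
        # exact solution of df/dt + v df/dx = 0:  f(x, v) -> f(x - v*dt, v)
        return np.real(np.fft.ifft(np.fft.fft(f, axis=0)
                                   * np.exp(-1j * np.outer(kx, v) * dt), axis=0))

    def advect_v(f, E, dt):
        # exact solution of df/dt + E(x) df/dv = 0:  f(x, v) -> f(x, v - E(x)*dt)
        return np.real(np.fft.ifft(np.fft.fft(f, axis=1)
                                   * np.exp(-1j * np.outer(E, kv) * dt), axis=1))

    f = advect_x(f, 0.5 * tau)              # half step in x
    rho = f.sum(axis=1) * dv                # charge density from f
    rho_hat = np.fft.fft(rho - rho.mean())
    ksafe = np.where(kx == 0.0, 1.0, kx)
    E = np.real(np.fft.ifft(np.where(kx == 0.0, 0.0, rho_hat / (1j * ksafe))))
    f = advect_v(f, E, tau)                 # full step in v
    f = advect_x(f, 0.5 * tau)              # half step in x
    return f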

Electromagnetics

The new set of electromagnetic tools targeted by the Grand Challenge differs from existing codes in several respects. One is the use of irregular grids for a better description of the geometry. Another is the capability for adaptive mesh refinement to improve accuracy. The third is implementation on multi-processor platforms to accommodate very large scale simulations. These combined features will provide the high resolution required to accurately model large, complex structures, such as the damped detuned structure (DDS) which has been specifically designed to suppress wake-fields in the NLC linac.

Included in this tool set are solvers in both the time and frequency domains. The time domain solver is useful for computing wake-fields and other transient effects, such as bunch heating, that are associated with beam passage through an accelerator component. The frequency domain solver calculates normal modes and is needed for designing and analyzing accelerating cavities. Another solver of practical interest deals with the matching of RF components by finding the scattering parameters at the input and output ports. All of these modules are being developed with either a finite element or a finite volume formulation based on an unstructured grid.

It is a challenging task to implement adaptive mesh refinement on parallel computers when unstructured grids are involved. The procedure iterates between a mesh manipulation step and a numerical solution step, the latter providing feedback on where the mesh quality must be improved. On parallel architectures this requires a domain decomposition strategy that preserves load balancing from one refinement to the next.
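A toy sketch of this refine-then-rebalance cycle is given below. It abstracts away the field solve and the unstructured mesh data structures entirely and keeps only the bookkeeping: elements flagged by an error indicator are split, and the element-to-processor map is then recomputed so that the load stays balanced. The representation and function names are ours, not those of the Grand Challenge codes.

import numpy as np

def rebalance(n_elements, n_procs):
    """Assign contiguous blocks of elements to processors as evenly as possible."""
    counts = np.full(n_procs, n_elements // n_procs)
    counts[: n_elements % n_procs] += 1
    return np.repeat(np.arange(n_procs), counts)     # owner of each element

def refine_and_rebalance(error, tol, n_procs):
    """One refinement pass: split every element whose error indicator exceeds
    tol into two children, then recompute the element-to-processor map.
    In the real procedure this alternates with a parallel field solve that
    supplies the error indicator."""
    children = np.where(error > tol, 2, 1)            # 2 children if refined
    owners = rebalance(int(children.sum()), n_procs)
    return children, owners

# Toy usage: 10 elements, 3 flagged for refinement, 4 processors.
error = np.array([0.1, 0.9, 0.2, 0.8, 0.1, 0.1, 0.7, 0.2, 0.1, 0.1])
children, owners = refine_and_rebalance(error, tol=0.5, n_procs=4)
print(children.sum(), "elements after refinement")    # 13
print(np.bincount(owners))                            # elements per processor: [4 3 3 3]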

Accomplishments

In 1997 we developed a new 3D beam dynamics code called IMPACT, which is based on split-operator techniques. IMPACT (Integrated-Map and Particle Accelerator Tracking code) has an accurate and efficient treatment of RF accelerating gaps, obtained by numerical integration of the gap transfer map rather than integration of single particle trajectories. The code is especially useful for modeling superconducting proton linacs, where there are only a few types of accelerating cavities. The code has been parallelized and runs on the T3E at NERSC and the Origin 2000 system at the ACL. A second code, called HALO, has been developed specifically for beam halo studies. This code was developed in collaboration with the University of Maryland, and includes a new 3D beam equilibrium model, based on analytical work of the Maryland group, which helps isolate halo growth mechanisms. Finally, in 1997 we also developed a parallel version of a code called LINAC, which is being used to support the Accelerator Production of Tritium project. This is now the primary code for performing large-scale beam halo simulations for the project.

The huge amount of data produced by a high-resolution beam dynamics simulation, coupled with the fact that we are often interested in only the very small fraction of the particles that populate the halo, makes it necessary to analyze the data inside our codes before storing simulation results. We have developed parallel algorithms that drastically reduce the amount of data needed to visualize the halo. An example is shown in Fig. 5, which is the result of a HALO simulation. The system being modeled is a spheroidal bunch which develops a halo due to improper matching into the beam-line.


  
Figure 5: Density in longitudinal phase space, from a 25 million particle simulation of a mismatched beam. [figure gb_ryne_halo.eps]

The simulation used 25 million particles and a $256\times 256\times 256$ grid for the Poisson solver. The figure shows the longitudinal phase space, $(z, p_z)$, after the halo has formed. It contains approximately 100,000 particles grouped in four regions. The density varies from 1 at the beam center to $\frac{1}{10}$ at the edge of the central region. Similarly, the density equals $\frac{1}{100}$, $\frac{1}{1000}$, and $\frac{1}{10000}$ at the edge of the remaining three regions. From the core to the halo, these regions appear as blue, green, yellow, and red, when shown in color. If we had simply plotted the same number of particles chosen randomly from the 25 million in the simulation, the halo would barely be visible and show almost no structure.
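A serial sketch of this kind of density-based particle selection is given below: each particle is assigned the local density of its phase-space cell, and roughly equal numbers of particles are then kept from each density decade, so that the tenuous halo receives as many plot points as the dense core. The bin count, the per-decade sample size, and the function name are illustrative choices; the parallel algorithms actually used in HALO are not reproduced here.

import numpy as np

def select_by_density_decade(z, pz, bins=256, per_decade=25000, rng=None):
    """Return indices of particles to plot, sampled so that each decade of
    local (z, pz) phase-space density is represented about equally."""
    rng = np.random.default_rng() if rng is None else rng
    hist, z_edges, pz_edges = np.histogram2d(z, pz, bins=bins)
    iz = np.clip(np.digitize(z, z_edges) - 1, 0, bins - 1)
    ip = np.clip(np.digitize(pz, pz_edges) - 1, 0, bins - 1)
    density = hist[iz, ip] / hist.max()                # 1 at the core, small in the halo
    decade = np.clip(np.floor(-np.log10(density)).astype(int), 0, 3)
    keep = []
    for d in range(4):                                 # densities ~1, 1/10, 1/100, <=1/1000
        idx = np.flatnonzero(decade == d)
        if idx.size:
            keep.append(rng.choice(idx, size=min(per_decade, idx.size), replace=False))
    return np.concatenate(keep)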

We have also developed two- and three-dimensional eigenmode solvers for computing normal modes in accelerator cavities. These codes are based on the finite element method using quadratic elements and have mesh refinement capabilities. A parallel version of the 2D module has been used to calculate the dipole wake-fields in a tapered accelerator section (in this case, a detuned structure) consisting of 206 individual cavities. Fig. 6 shows some of the modes in the section. The horizontal axis denotes the axis of the structure, and every fifth mode, from 1 to 206, is shown from the bottom to the top of the figure. These are the detuned modes which lead to reduced transverse wake-fields through decoherence. Work is in progress to model the Damped Detuned Structure (DDS), which incorporates damping by coupling each cell to external waveguides.

  
Figure 6: Modes in a detuned structure, from a simulation of an entire 206-cell section. [figure gb_ryne_waves.eps]

