
Where Do We Stand with High Gain FEL

Simulations?
Gil Travish
Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Milano. Via Cervi, 201. 20090
Segrate (MI), Italy.

Abstract. Computer technology improvements have allowed for more complete and detailed free
electron laser simulations, yet the demands of the large number of new experiments and proposed
projects have outpaced the capability and availability of present codes. This paper, based on a talk
given at the conference of these proceedings, presents a brief assessment of Free Electron Laser
(FEL) codes, their availability and features, as well as some opinions on what direction the FEL
code community should take for the near future. The discussion of FEL codes is restricted here to
ones for high gain amplifiers: no codes for oscillators, waveguides or exotic configurations are
considered.

INTRODUCTION
Free Electron Lasers (FELs), as with many other fields, have benefited from the
availability of computers and computer codes: the ability to study the collective behavior
(many particle effects) in an FEL has allowed theorists to confirm existing analytic
work and discover new effects (1); the ability to predict FEL performance has allowed
experimentalists to design and propose challenging machines. While a number of general
characteristics of FELs (such as gain before saturation, the effect of smooth focusing,
and the effect of energy spread) are well modeled by simple analytic formulas, a number
of issues (such as saturation, undulator errors, and slippage) can only be addressed
carefully by simulations.

There are three areas where high gain experimental FEL research is most active, and
where simulations are, perhaps, most needed: physics test systems, short wavelength
facilities, and industrial applications. Physics test systems are usually small scale efforts
with rapid and numerous reconfigurations. As such, test systems require frequent use of
simulations and comparison with theory. Short wavelength facilities, on the other hand,
are complex, large scale systems which require numerous simulations to verify the
designs and lend credence to the extrapolation to short wavelengths. Industrial applications
generally require integrating detailed engineering considerations such as heat load,
reliability, and cost along with physics modeling.

In the following sections we discuss some basics of an FEL code, present some
opinions on the impact of computer technology on codes, review some past directions
the community has taken, and consider some new directions including the needs of the
above mentioned three areas. Finally, a challenge to the community is issued to develop
and support a new code available to all, and benchmarked against a wide range of cases.

AN FEL AS SEEN BY A CODE


An FEL simulation is one link in a chain of tools available to the FEL researcher: the
chain begins with quantum mechanics, next follows the Maxwell and Lorentz equations,
which lead to the FEL (pendulum) equations, averaging over a wiggle period and
applying eikonal approximations leads to a set of simulation equations, and the code
provides a means of solving this model. Continuing with the analogy, the experiment or
user facility may be viewed as the final link of the chain.

An FEL code can be understood in terms of a simplified box diagram (see Figure 1).
The Beam Input includes user input detailing the beam’s initial six-dimensional phase
space (e.g., energy spread, particle distribution model, prebuncher). The Radiation
Input contains parameters such as the power, wavelength, and beam size of the initial
radiation, and information on how the startup should be modeled (i.e., whether it is
spontaneous or from a coherent source). The Beam Radiation Interaction is actually
the FEL equation solver. Various flavors of the solver exist including 1D, 2D, 3D, and
time dependent models. The Optics section allows for feedback or manipulation of the
output radiation to better model experimental realities (such as a long drift of the radiation
to a far away detector). Finally, the Output section provides numerical versions of
radiation and beam diagnostics as well as graphics support.

[Figure: five boxes — Beam Input and Radiation Input feed the Beam Radiation Interaction, which is followed by Optics and Output.]

FIGURE 1. A box diagram for a generic FEL code.
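As a purely structural illustration of the box diagram, the sketch below wires together hypothetical Beam Input, Radiation Input, Interaction, and Optics modules; every name and number here is invented for the example, and the "solver" is only a toy exponential-gain stand-in, not a real FEL integrator.

```python
import math
from dataclasses import dataclass

@dataclass
class BeamInput:
    """User input describing the beam's initial 6D phase space (toy subset)."""
    energy_mev: float
    rel_energy_spread: float

@dataclass
class RadiationInput:
    """Seed radiation parameters and startup model (toy subset)."""
    power_w: float
    wavelength_m: float
    startup: str = "coherent"   # or "spontaneous"

def interaction(beam: BeamInput, seed: RadiationInput, z_m: float,
                gain_length_m: float, sat_power_w: float) -> float:
    """Toy stand-in for the Beam Radiation Interaction box: exponential
    high-gain growth of the seed power, clipped at a saturation level."""
    return min(seed.power_w * math.exp(z_m / gain_length_m), sat_power_w)

def optics_drift(power_w: float, losses: float = 0.0) -> float:
    """Toy Optics box: apply transport losses on the way to a detector."""
    return power_w * (1.0 - losses)

beam = BeamInput(energy_mev=50.0, rel_energy_spread=1e-3)
seed = RadiationInput(power_w=1.0, wavelength_m=1e-6)
p_out = optics_drift(interaction(beam, seed, z_m=10.0,
                                 gain_length_m=0.5, sat_power_w=1e9))
```

The point of the sketch is the modular separation of the boxes, which is what makes a code easy to extend: each box can be replaced (e.g., a measured phase space in place of `BeamInput`) without touching the others.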

As an illustration to the reader, we review the single frequency amplifier equations, highlighting some of the salient parameters. The energy equation can be expressed as

\[
\frac{d\gamma_n}{dz} = -\frac{\omega_r\, a_r a_w}{2c\,\gamma_n}\,\sin\psi_n + \mathrm{SpaceCharge}\,, \qquad (1)
\]

where ω_r = 2πc/λ_r is the radiation frequency (λ_r being the radiation wavelength), a_r is the normalized radiation field, a_w is the wiggler normalized field, c is the speed of light, z is the distance along the undulator, and γ_n and ψ_n are the particle (n) energy and phase. As indicated, space charge forces can be added to the above equation. The phase equation is given by

\[
\frac{d\psi_n}{dz} = k_w - \frac{\omega_r}{2c}\,\frac{1 + p_\perp^2 + a_w^2 + 2 a_r a_w \cos\psi_n}{\gamma_n^2}\,, \qquad (2)
\]

where k_w is the wiggler wavenumber and p_⊥ is the perpendicular particle momentum given by

\[
\frac{dp_\perp}{dz} = -\frac{1}{\gamma_n}\,\frac{\partial a_w^2}{\partial r_\perp} + \mathrm{Focusing}\,. \qquad (3)
\]

Above, the external focusing terms can be added as needed and r_⊥, the perpendicular particle position, is given by

\[
\frac{dr_\perp}{dz} = -\frac{p_\perp}{\gamma_n}\,. \qquad (4)
\]

Finally, the evolution of the radiation is expressed as a wave equation:

\[
\left( 2 i k_r\,\frac{\partial}{\partial z} + \nabla_\perp^2 \right) a_r \;\propto\; I \left\langle \frac{a_w\, e^{-i\psi_n}}{\gamma_n} \right\rangle\,, \qquad (5)
\]

where k_r is the radiation wavenumber, I is the beam current, and the term in the angled brackets represents the average "bunching" of the particles.

To the above equations, one must add the initial particle phase space, the initial
radiation profile and any relevant boundary conditions (such as those imposed by
waveguides). Additionally, the model may include choices of undulator type (e.g., planar, helical, tapered), spatial modes of the radiation, higher harmonics of the radiation, time dependent effects, etc.
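To illustrate how a code might integrate such a model, the sketch below solves the well known universally scaled 1D form of the high-gain equations (dθ_n/dz̄ = p_n, dp_n/dz̄ = −(A e^{iθ_n} + c.c.), dA/dz̄ = ⟨e^{−iθ}⟩), which drops all the transverse and time dependent physics of Eqs. (1)–(5); the particle number, step size and seed field below are arbitrary choices for the demonstration.

```python
import numpy as np

def derivs(theta, p, A):
    """RHS of the scaled 1D high-gain FEL equations (particles + one field mode)."""
    dtheta = p
    dp = -2.0 * np.real(A * np.exp(1j * theta))   # -(A e^{i theta} + c.c.)
    dA = np.mean(np.exp(-1j * theta))             # bunching drives the field
    return dtheta, dp, dA

def run(n_particles=512, z_max=10.0, dz=0.01, a0=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * rng.random(n_particles)   # unbunched beam (shot noise only)
    p = np.zeros(n_particles)                     # cold beam, on resonance
    A = a0 + 0j                                   # small coherent seed field
    for _ in range(int(z_max / dz)):
        # classical fourth-order Runge-Kutta step in the scaled distance z-bar
        k1 = derivs(theta, p, A)
        k2 = derivs(theta + 0.5*dz*k1[0], p + 0.5*dz*k1[1], A + 0.5*dz*k1[2])
        k3 = derivs(theta + 0.5*dz*k2[0], p + 0.5*dz*k2[1], A + 0.5*dz*k2[2])
        k4 = derivs(theta + dz*k3[0], p + dz*k3[1], A + dz*k3[2])
        theta += dz/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p     += dz/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        A     += dz/6 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return theta, p, A

theta, p, A = run()
print(abs(A))   # field amplitude near saturation, far above the seed value
```

Even this stripped-down model reproduces the characteristic exponential growth and saturation, and conserves the scaled energy ⟨p⟩ + |A|², a useful sanity check on the integrator.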

With a basic model of an FEL code in hand, one can go on to consider the type of
computing facilities one should have to model an FEL. Instead, we turn to the corollary
issue of the impact of computers on FEL simulations.

OPINIONS AND FACTS ON COMPUTERS


It is known that, at least by certain types of benchmarks, computing power is
growing exponentially with time (see Figure 2) (2). Some estimates state that CPU
speed doubles every 18 months. Perhaps what is more significant to the majority of the
scientific community (those not working on Grand Challenge problems) is the common
availability of computing power: “home” computers now exceed the computational
speed of the previous generation of workstations. Indeed, desktop computers are within
an order of magnitude of the fastest (single processor) machines, and researchers are
often conducting work on machines sold to the consumer market. Finally, it is important
to note that unlike the situation in the past, the physics community now has little
influence over the computer market due to the overwhelming size of the business and
personal sectors. Thus, it has become desirable (even necessary) to take advantage of
commercial applications, software tools and standards in performing scientific computing.
The question then arises, what can one do in FEL simulation with all the available
computing power?
[Figure: CPU peak speed (arbitrary units, log scale from 0.1 to 10^5) versus calendar year, 1950–2000.]

FIGURE 2. The peak available CPU speed (in arbitrary units) is plotted on a log scale against the
calendar year. Note the exponential growth.

There are a number of interesting directions, of which we will only list a few here,
including fully three (3D) and four dimensional (4D) codes (in both the beam and
radiation), unaveraged codes (where the wiggle motion is included), startup modeling,
and the inclusion of more details of the beam-radiation interaction including realistic
undulator fields, and particle distributions obtained from multiparticle beam dynamics
codes. Fully 3D and 4D codes can yield more exact models of the FEL interaction but
remain CPU intensive and complex. "Unaveraged" codes are also CPU intensive, and
produce results which are often difficult to interpret and hard to compare to analytical
work. Startup modeling, while urgently needed for experimental comparisons, is
extremely CPU intensive and may require complex 4D models and a large number of
particles.

It is almost an axiom of computational physics that a given problem will grow to


consume all available CPU time. However, while more computing power leads to the
ability to solve new problems, it can do so at the cost of neglecting older but still relevant
issues. A balance needs to be found between solving new problems slowly, and solving
old problems quickly. The use and availability of codes is, obviously, heavily influenced
by the way codes are developed. Thus, we next turn to the past and present trends in
FEL code production.

SUCCESSES AND FAILURES OF THE PAST


The history of FEL codes follows, to a large extent, the history of FELs themselves.
The forces that affected FELs, such as the large funding by the United States (US)
Defense programs of the Eighties and the almost total lack of funding (at least in the
US) following the end of the Strategic Defense Initiative (SDI), also affected the production
and availability of FEL simulations (3).

In the early days of FEL research, many codes were developed by many authors.
Most of these codes were for personal use by one researcher or internal use by a small
group. A few "big" codes were developed primarily by the SDI projects. These well
developed, more capable codes were not generally available to the community, either
because of national security laws or because there was no incentive to distribute them. The
notable exception to this lack of distribution was, and still is, Tran and Wurtele's TDA
code (4). As a consequence of the private nature of the codes, and perhaps for other
reasons, most early codes were not well documented or supported (i.e., there was no
installed user base). After the cancellation of the SDI FEL programs, many researchers
left the field, and the original authors of the codes went on to other research.

SUCCESSES AND FAILURES OF THE PRESENT


Whereas individual and small group efforts marked the early days of FEL research,
and defense related work dominated the field after that, today the field is marked by a
mix of small scale groups doing basic research and large groups considering big user
facilities and light sources. The “big” groups are producing their own codes, which is a
good thing. However, this code development seems to be for internal use, resembling
more the efforts of the SDI groups than those of open scientific efforts. On the other
hand, some of the SDI “big” codes are now available, and the authors are providing
support on a volunteer basis. Through the efforts of such authors and a few dedicated
users, documentation is slowly becoming available. Sadly, there is still no serious
high-gain FEL code available on a personal computer, and users must still contend with
archaic and cumbersome interfaces.

Despite the aforementioned problems, willing users have a set of codes available with
a respectable list of capabilities (5). A summary of the present state of FEL simulations
can be given in the form of a laundry list of well modeled parameters: beam current,
steering errors, (external) focusing, multiple undulators (sections), saturation, fluctuations,
slippage, and harmonics. We must also include a list of not so well modeled parameters:
emittance, energy spread, startup, superradiance, and optical beam profile. As we discuss
next, in addition to improving our modeling of the previously listed parameters, each
area of FEL work has specific demands from future simulations.

NEW APPLICATIONS AND NEW FEATURES


We can first consider some of the modeling characteristics and features needed by the
three areas of FEL research, and then we can sample some specific considerations.
Perhaps the most demanding of the three areas are the short wavelength facilities, which
we examine first. Careful modeling of the beam trajectory, including correctors, beam
position monitors, and a realistic undulator field are central to understanding these long
devices (6). In addition, external focusing and any attendant errors should be included.
A realistic beam phase space, such as might be provided by a multiparticle beam dynamics
code (e.g., Parmela), also raises the confidence level in the numerical models of short
wavelength devices. Startup modeling, if feasible, will also provide a boost to projects
trying to extrapolate from infrared experiments to x-ray proposals. Finally, effects such
as harmonics and coherent undulator radiation may have to be included for a realistic
and accurate simulation.

The physics test systems may (depending on the configuration) share some of the
demands of the short wavelength facilities, but would benefit immediately from simulations
which more closely resemble experimental realities. The ability to interface to the accelerator
control system would improve the ease and speed of testing a larger parameter space.
Related to the control system interface is the need to have the code accept inputs of
measured beam parameters, as opposed to theoretical parameters. Finally, a two-
dimensional transverse profile of the output radiation (including drifts and simple optics)
would greatly aid in comparing experimental results to simulation.
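The drift of the output radiation to a detector, mentioned above, can be modeled with a standard paraxial angular-spectrum (Fresnel) propagator. The following is a generic numerical sketch with arbitrary grid and beam parameters, not code from any existing FEL simulation.

```python
import numpy as np

def fresnel_propagate(field, dx, wavelength, distance):
    """Propagate a 2D complex field a given distance in free space
    using the paraxial angular-spectrum (FFT) method."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)         # transverse wavenumbers
    k2 = kx[:, None]**2 + kx[None, :]**2
    H = np.exp(-1j * k2 * distance / (2 * k))        # paraxial transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a Gaussian beam expands by sqrt(2) after one Rayleigh length.
wavelength, w0, n, span = 1e-6, 1e-3, 256, 8e-3
dx = span / n
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
beam = np.exp(-(X**2 + Y**2) / w0**2)                # amplitude waist w0
z_rayleigh = np.pi * w0**2 / wavelength
out = fresnel_propagate(beam, dx, wavelength, z_rayleigh)
intensity = np.abs(out)**2
w = 2 * np.sqrt(np.sum(intensity * X**2) / np.sum(intensity))  # measured waist
```

Checking the measured width against the analytic Gaussian-beam result, w(z) = w0·sqrt(1 + (z/z_R)²), is a simple way to validate the optics module of a code before trusting it on real output profiles.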

Industrial systems have received the least attention, perhaps because FELs have not
matured sufficiently. Nevertheless, some systems are under consideration and their
computational needs should be addressed. In particular, industrial systems tend to be
high duty factor devices with high average and/or peak powers. Thus, space charge and
beam “halo” effects may need to be included in the model.

Having listed some demands that present and near future FELs have placed on
simulations, we can also list some specific considerations that future code writers ought
to take into account. The language, platform, operating system, structure and user
interface play a significant role in the usability and accessibility of a code, and these
issues have been unappreciated in the past. Along with the hardware improvements,
software and programming techniques have matured since the development of early FEL
codes. High level programming techniques including memory management, data structures
and modular code are now standard practice in industry. A code writer today must
choose between several approaches, including weighing the benefits of staying with the
well known FORTRAN language or changing to the more (industry) standard C series
of languages. Proper choice of the language and good programming techniques can also
assure portability of the code across a number of platforms and operating systems.
Software engineering is no longer merely good programming style.

A new simulation must also offer the FEL community the ability to easily modify the
source code. Again, modularity and documentation provide the means for users to
readily accomplish code alterations. And, while well structured codes and intuitive user
interfaces consume a good deal of time to produce, they provide users with the ability to
rapidly learn the code, make fewer mistakes, and more easily make changes to the
source code. Interfaces, documentation and code comments should not be viewed as
features or “niceties”, but rather as core components of a complete simulation.

Finally, a new author needs to consider details of the numerics such as the type of
grid (mesh), solver and integration scheme. For example, a radial mesh requires less
CPU time and is numerically more stable than a rectangular mesh (which explains why
most codes use the radial mesh). On the other hand, a rectangular mesh does not require
a modal decomposition, and is easier to compare to experiments.
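To make the mesh choice concrete, the fragment below applies the five-point finite-difference form of the transverse Laplacian ∇⊥² (the operator appearing in the wave equation) on a uniform rectangular mesh, and checks it against the analytic Laplacian of a Gaussian; the grid parameters are arbitrary.

```python
import numpy as np

def laplacian_2d(f, dx):
    """Five-point finite-difference transverse Laplacian on a uniform
    rectangular mesh (interior points only; the edges are left at zero)."""
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] +
                       f[1:-1, 2:] + f[1:-1, :-2] -
                       4 * f[1:-1, 1:-1]) / dx**2
    return lap

# Check at the origin: for f = exp(-r^2/w^2) the 2D Laplacian is
# (4 r^2 / w^4 - 4 / w^2) f, i.e. -4/w^2 at r = 0.
w, n = 1.0, 101
x = np.linspace(-4 * w, 4 * w, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2) / w**2)
lap = laplacian_2d(f, dx)
```

No modal decomposition is needed here, which is part of the rectangular mesh's appeal for comparison with measured profiles; the price, as noted above, is more grid points and stiffer stability constraints than an equivalent radial mesh.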

Future FEL codes will surely be more capable than past efforts, and may include
many new physics and engineering models. However, progress in FEL research is
dependent on well tested and widely available simulations in addition to the highly
specialized, state of the art, private codes.
A CHALLENGE TO THE COMMUNITY
We need a new set of codes. The codes should address the shortcomings of previous
efforts. Namely, we need a set of widely distributed, well tested, easy to modify codes
which operate on a number of platforms, and integrate/interface well with existing
codes. Surely groups proposing multi-million dollar projects should verify their proposals
on well tested, and community-wide benchmarked codes.

ACKNOWLEDGMENTS
A survey was sent to a large number of people (as well as posted on the web) prior to
presenting this work. The author thanks the following people for responding and hence
influencing the content of this paper: J. Wurtele, W. Vernon, D. Nguyen, J. Schmerge,
W. Fawley, B. Faatz, S. Reiche, M. Yurkov, and M. Hogan.

REFERENCES
1. Orzechowski, T. J., et al. “Free-electron laser results from the advanced test accelerator,” in the 1988
Linear Accelerator Conference Proceedings (CEBAF-Report-89-001). 1988. Newport News, VA, USA.
2. Special Issue: 50 years of Computers and Physics, Physics Today (Oct. 1996).
3. Warren, R., Star Wars and the FEL. Unpublished: 1995: 92.
4. Tran, T. M. and Wurtele, J. S., Computer Physics Communications, 54(2-3): p. 263-72 (1989).
5. Travish, G., “The How, What, Why, Where and When of Free Electron Laser (FEL) Simulation.”
Invited Talk presented at the 1993 Computers in Accelerator Physics (CAP93) Conference. (Pleasanton,
CA: 1993).
6. Travish, G. “Performance simulation and parameter optimization for high gain short wavelength FEL
amplifiers,” Invited Talk presented at the Sixteenth International Free Electron Laser Conference
(FEL94) (Stanford, CA, USA: 1994). And published in NIM A 358 48-51 (1995).
