Abstract
Four experiments have been accepted for phase 1 of LEP: ALEPH, OPAL, L3,
and DELPHI. These experiments each have of the order of 100,000 or more
electronic channels producing more than 100 Mbytes of raw data per second.
Extensive zero-suppression, data reduction and formatting, as well as
intelligent triggers, are needed to reduce the amount of data to be written to
tape to 0.1-1 Mbyte per second.
We give a brief overview of the LEP accelerator and introduce the four
detectors. We discuss the trigger and the data acquisition for the four
experiments and summarize the different techniques used.
1. HISTORY
During the years 1976 to 1979, preliminary discussions started in the European
High-Energy Physics community about a follow-up programme for the CERN Super Proton
Synchrotron (SPS). Four alternatives were proposed, namely a 10 TeV Proton Synchrotron,
a 400 x 400 GeV pp collider, an ep collider, and a large e+e- ring. The last option
met with the greatest interest, and the project was named LEP, the Large
Electron-Positron Collider.
Four experiments have been accepted for the first phase of LEP:
1. ALEPH
2. OPAL
3. L3
4. DELPHI.
The present talk is based on the Technical Reports [1-4], the reports on data
handling [5-8] and on private communications from the collaborations. The reader should
keep in mind that we present the plans of the experiments and that some of the
information contained in this presentation will be obsolete by the time this report is
published.
As the topic of the present talk is data acquisition, only a very small section can
be devoted to the superb LEP accelerator project. Fig. 1 shows an aerial view of the
Geneva area, with the airport at the bottom of the picture and the Jura mountains in the
background. The present CERN accelerators, the PS, the ISR and the SPS, can be seen, as
well as the planned location of LEP, which has a circumference of nearly 27 km (!).
The locations of the eight access points are marked in fig. 2. Only the four
even-numbered interaction regions will be equipped with experiments during the first
phase of LEP; pit 2 has been assigned to the L3 collaboration, pit 6 to OPAL, whilst the
assignment for ALEPH and DELPHI is still under discussion.
Figure 3 shows a vertical section, with the Jura on the left-hand side and the Lake of
Geneva on the right. Most of the LEP tunnel will be in the "molasse", a convenient
material for tunnelling. Only a small part, under the Jura, consists of limestone and
may therefore present some unexpected problems. The ring is inclined by 1.4%, and the
depth varies between 50 m and 150 m.
The layout of an interaction region is shown in fig. 4. The "straight" tunnel section
will house the accelerator. The tunnel on the left is for machine equipment, mainly the
klystrons for the RF. The experimental hall has a diameter of about 15 m and a length
of 70 m. Three access shafts are provided: the leftmost for machine access, the large
one in the centre to bring the experiment down, and the small one on the right for the
experimentalists. Two 40-ton cranes will be installed in the experimental hall to
assemble the detector. The detectors (except L3) will be movable from the beam
position to a "garage" position for maintenance work. The surface buildings provide
storage room for equipment, cranes to bring it down, and also space for control rooms,
workshops, some offices, etc. Some of the buildings are devoted to machine equipment.
Interaction region number 2, for the L3 collaboration, differs from the others, as it
is oriented parallel to the machine tunnel. Also, the L3 detector is too heavy to be
moved, so it will be assembled directly in its final position.
Table 1 gives a few machine parameters. The particles will make one turn every
88 µs, and as there are 4 bunches, a beam crossing will occur every 22 µs at each
interaction point. According to the present planning, and if everything goes well,
beam for the experiments is expected by the end of 1988.
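As a quick cross-check of these numbers, the revolution period follows directly from the
circumference for ultra-relativistic beams. A minimal sketch of the arithmetic, assuming
the "nearly 27 km" circumference quoted above:

```python
# Cross-check of the quoted timing, assuming a circumference of ~26.7 km
# and beams travelling at essentially the speed of light.
C = 26.7e3                # LEP circumference in metres (approximate)
c = 3.0e8                 # speed of light in m/s
n_bunches = 4             # bunches per beam in phase 1

t_rev = C / c                  # revolution period: ~89 us, matching the quoted ~88 us
t_cross = t_rev / n_bunches    # ~22 us between crossings at each interaction point

print(f"revolution period : {t_rev * 1e6:.1f} us")
print(f"crossing interval : {t_cross * 1e6:.1f} us")
```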
We give a short description of the four detectors to make the understanding of the
trigger and data acquisition easier. For details, the interested reader should refer to
the Technical Reports issued by the collaborations [1-4].
The scenario for the detector construction is very similar for the four
collaborations. The various pieces of the detectors will be constructed in institutes
spread all over Europe, and even in the United States, China and Japan. Therefore, all
experiments are confronted with communications problems, which are particularly
delicate for the on-line system.
Starting from the beam-pipe (fig. 6) we recognize the central detector (vertex
chambers and JET chamber), surrounded by the solenoid, followed by time-of-flight
counters, the electromagnetic and the hadron calorimeters, and the muon chambers.
The simulation of a two-jet event in this detector is shown in figs. 7 and 8 [9].
The DELPHI collaboration is formed by 34 institutes from 17 countries, with 280
physicists and engineers; the spokesman is U. Amaldi. The DELPHI detector [4] has the
largest variety of detector elements using different techniques. Starting from the
beam-pipe (fig. 10), there are a vertex chamber, an inner detector, a time projection
chamber, the barrel ring-imaging Cerenkov counter (RICH), the outer detector, and the
barrel electromagnetic calorimeter. These components are surrounded by a superconducting
coil, after which follow a scintillator hodoscope, the hadron calorimeter, and two layers
of muon chambers. Going from the centre to the end-plates, we find after the TPC a set
of three forward chambers (A, B, C) interspersed with a liquid RICH and a gas RICH, then
the electromagnetic calorimeter and finally the end-cap hadron calorimeter.
Because of the different properties of the four detectors, the trigger designs also
vary in complexity and response time. The information given here is based on the
reports on trigger and data acquisition of January 1984 [5-8]. For more recent
information, the on-line coordinators of the collaborations can be contacted.
In the following, we present the basic principles of the different triggers. For
the timing and rates, Table 2 provides a comparison of the four detectors.
The aim is to trigger on all physics events, whilst rejecting background as much as
possible. Three independent conditions have been defined:
The trigger is performed at three levels: level 1 must respond within 1.5 µs
(including cable delays) of the bunch crossing, to open the gate of the TPC. This time
corresponds to ~ 7.5 cm of drift space being lost at the end of the tracks, for those
passing through the end-plates (see the sketch below).
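The ~7.5 cm follows from the level-1 latency multiplied by the electron drift velocity
in the TPC. A minimal sketch, assuming a typical drift velocity of 5 cm/µs (an assumed
figure, not one quoted in the report):

```python
# Where the "~7.5 cm of drift space lost" comes from: charge that is still
# drifting when the gate opens is lost at the end-plates.
t_level1 = 1.5            # level-1 decision time in us (including cable delays)
v_drift = 5.0             # assumed TPC electron drift velocity in cm/us

lost = t_level1 * v_drift # drift length lost at the end of the tracks
print(f"drift space lost: {lost:.1f} cm")   # -> 7.5 cm
```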
passing through the end-plates. The detector components contributing to this level of
triggering are:
Figure 11 shows a block diagram of the ALEPH level-1 trigger logic. The 72 segments for
the calorimeters are formed by grouping towers of the calorimeter into "super" towers.
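To make the idea concrete, here is a hedged sketch of forming such segments: tower
energies are summed into 72 "super" towers, and each sum is compared with a threshold.
Only the 72-segment granularity comes from the text; the towers-per-segment count and
the threshold are invented for illustration.

```python
# Illustrative only: group calorimeter towers into 72 "super" towers and
# derive one yes/no bit per segment for the level-1 trigger logic.
N_SEGMENTS = 72
TOWERS_PER_SEGMENT = 16   # hypothetical grouping
E_THRESHOLD = 1.0         # hypothetical segment threshold (GeV)

def super_tower_sums(tower_energies):
    """Sum consecutive towers into N_SEGMENTS "super" towers."""
    return [sum(tower_energies[i * TOWERS_PER_SEGMENT:(i + 1) * TOWERS_PER_SEGMENT])
            for i in range(N_SEGMENTS)]

def calo_trigger_bits(tower_energies):
    """One bit per segment, as input to the trigger decision."""
    return [e > E_THRESHOLD for e in super_tower_sums(tower_energies)]
```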
The last stage is the event processor, a computer capable of running the
reconstruction programme, which will be the first stage to see a complete event. Its
task would be to reject further background events and/or to classify events for the
off-line analysis.
A trigger supervisor will ensure the proper gating and distribution of the trigger
signals to the different sub-detectors; it will also be used to perform consistency
checks and to guarantee the proper response of all needed components.
For OPAL, the aim is to trigger on
1. hadronic jets,
2. charged-lepton pairs,
3. radiative production of Z0 → νν̄,
and also on two-photon processes and free quarks. As the decision time can be as long
as 18 µs (4 µs are needed to clear for the next bunch crossing), a very sophisticated
trigger, including all detector components, can be made. Indeed, information comes from
all parts of the detector. This trigger is supposed to bring the 50 kHz bunch crossing
rate down to ≤ 4 Hz!
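Those two rates imply a rejection factor of roughly four orders of magnitude:

```python
# Rejection factor implied by the quoted rates.
f_crossing = 50e3         # bunch crossing rate in Hz
f_accept = 4.0            # target rate after the trigger in Hz
print(f"rejection factor: {f_crossing / f_accept:.0f}")   # -> 12500, i.e. ~10^4
```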
For the L3 detector, six different triggers are planned, with the following
properties:
1. Muon trigger: ≥ 1 track pointing to the vertex, p ≥ 2 GeV/c.
Figure 12 gives details of the level-1 and level-2 triggers. Note that 4,500 input
signals are used to construct the trigger, which gives an idea of the complexity of
such a device. The timing and the rates for the different stages are represented in
fig. 13.
The DELPHI trigger system is based on four levels. For the first level
(~ 2 µs) the requirements are:
If these requirements are too loose (rate above 1 kHz), information from the
electromagnetic calorimeter, the time-of-flight counters, or the towers of the hadron
calorimeter can be used in addition.
Level 3 requires information from the readout system. Each detector will have a
microprocessor (the GPM [10]) to perform a selective readout. The combined information
will be transmitted to the main level-3 processor, likely to be an XOP [11].
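As an illustration of the kind of job such a readout microprocessor performs, the sketch
below zero-suppresses a front-end buffer and forwards only (address, value) pairs. The
pedestal and threshold values are invented, and this is a conceptual sketch rather than
the actual GPM program:

```python
# Conceptual selective readout: drop channels at pedestal, keep only hits.
PEDESTAL = 10             # hypothetical common pedestal (ADC counts)
THRESHOLD = 3             # hypothetical cut above pedestal

def selective_readout(raw_channels):
    """Return a zero-suppressed fragment as (channel address, value) pairs."""
    return [(addr, adc - PEDESTAL)
            for addr, adc in enumerate(raw_channels)
            if adc - PEDESTAL > THRESHOLD]

# A mostly-empty front-end buffer shrinks to a handful of hits:
print(selective_readout([10, 11, 9, 250, 10, 47, 10, 10]))   # -> [(3, 240), (5, 37)]
```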
Finally, level 4 will use emulators for further filtering of events. Figure 14
gives an overview of the DELPHI trigger system.
As each collaboration uses different terms for similar items, and as the reports
on trigger and data acquisition are organized in different ways, a comparison is
difficult. Table 2 is an attempt to bring the information together.
The different "levels" defined there do not necessarily coincide with the naming
found in the reports. Level 1 includes actions taking place between beam crossings
(22 µs); there is a large difference in the expected rates between OPAL (4 Hz) and
DELPHI (1000 Hz). Slower detectors such as a TPC will contribute to decisions in level
2; here the rates of the four detectors are about the same (4-20 Hz). Level 3 (if
used) reduces the rates by another order of magnitude.
Finally, the event processors will really see the "full" event and might take
decisions based on sophisticated algorithms. In general, emulators or banks of
microprocessors are supposed to deliver massive CPU power for this job at a reasonable
cost. The table also shows the expected event size and deduces from it the number of
6250 bpi tapes written per day. Note that, for an expected running time of 100 days,
each collaboration will produce about 10,000 tapes per year!
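That figure can be checked roughly. Assuming ~150 Mbytes per 6250 bpi reel (a typical
capacity for a 2400 ft tape; the exact value is not stated here) and mid-range event
sizes and rates:

```python
# Rough tape arithmetic; event size, rate and tape capacity are
# illustrative assumptions, not numbers quoted in the text.
event_size = 100e3        # bytes per event (~100 kbytes, illustrative)
event_rate = 2.0          # events/s written to tape (illustrative)
tape_capacity = 150e6     # bytes per 6250 bpi reel (assumed)
seconds_per_day = 86400
running_days = 100        # expected running time per year (from the text)

tapes_per_day = event_size * event_rate * seconds_per_day / tape_capacity
print(f"tapes per day : {tapes_per_day:.0f}")                  # ~115
print(f"tapes per year: {tapes_per_day * running_days:.0f}")   # ~11,500, i.e. ~10,000
```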
Despite similar requirements, the readout systems of the four detectors have
adopted quite different solutions. ALEPH, L3, and DELPHI have chosen FASTBUS as their
data-acquisition standard; OPAL will use FASTBUS only at the level of the front-end
electronics and base the readout and the processor modules on the VME/VMX bus standard.
Using FASTBUS does not mean that the other three experiments would look alike. The
flexibility of FASTBUS allows the user to dream up virtually any configuration.
The location and interconnection of the mini-computers is also different for the
four experiments, but at least all have decided to use the VAX family of computers
running VMS. Ethernet seems to emerge as the preferred solution for the local area
network (LAN).
producing ~ 80% of the data per event. Its electronics will fill more than 100
FASTBUS crates. The connection via the LAN to the international network allows close
contact between the sub-detectors. In fact, such a sub-detector set-up corresponds in
its complexity to today's large experiments.
The connection to the central readout can be seen from fig. 16. During normal
data taking the readout is controlled centrally, while the "local" VAX can "spy" on
events from its sub-detector. The function of the event builder (EB) is to collect the
pieces of an event from the different readout controllers, which are the masters in the
front-end crates, and to store the combined information in its memory. Whenever a
sub-detector computer needs data, it might request a copy of the next event from the
EB. The main data stream will not be affected by the speed at which a local computer
can absorb data.
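The following is a conceptual sketch of this event-builder behaviour, not ALEPH's actual
design, and all names are illustrative: fragments from the readout controllers are
assembled per event number, the main stream hands complete events on, and a "spy"
request only ever receives a copy.

```python
# Conceptual event builder: assemble fragments, serve the main stream,
# and give sub-detector computers copies without blocking it.
class EventBuilder:
    def __init__(self, n_rocs):
        self.n_rocs = n_rocs      # number of readout controllers
        self.pending = {}         # event number -> {roc_id: fragment}
        self.complete = []        # fully assembled events, in order

    def add_fragment(self, event_no, roc_id, fragment):
        parts = self.pending.setdefault(event_no, {})
        parts[roc_id] = fragment
        if len(parts) == self.n_rocs:            # all pieces arrived
            self.complete.append((event_no, self.pending.pop(event_no)))

    def next_event(self):
        """Main data stream: hand the oldest complete event on (e.g. to tape)."""
        return self.complete.pop(0) if self.complete else None

    def spy_copy(self):
        """'Spy' request from a local VAX: a copy; the main stream is unaffected."""
        return dict(self.complete[0][1]) if self.complete else None
```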
Figure 18 shows the latest version of the OPAL VME/VMX readout system and the
connection to CAMAC. As this information was received just before the Symposium, the
reader is asked to contact the OPAL collaboration for more information.
The BGO readout deserves special attention, as each of the 12,000 crystals will
have its own microcomputer (MC146805) for readout and control. The crystals are
connected in 150 groups of 80, controlled by 150 micros; these are, in turn, organized
as 8 groups of about 20. The 8 controllers can be either CAMAC or FASTBUS modules,
controlled by a machine with a CPU power equivalent to a small VAX.
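The quoted fan-in is easy to verify:

```python
# Fan-in of the BGO readout tree, using the numbers quoted above.
crystals = 12000
group_size = 80
group_micros = crystals // group_size     # -> 150 group controllers
branches = 8
per_branch = group_micros / branches      # -> ~19, i.e. "about 20" per branch
print(group_micros, per_branch)
```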
Without going into the details of every sub-detector, it can be said that L3 will
have, inside each sub-detector, several parallel data streams merging into a buffer
memory managed by an EB (fig. 19). The assembled sub-events are passed through a
central EB to the emulators' memory; from there they can be transferred (after
treatment) to the host computer (a VAX-11/780) and/or to an extra emulator for full
event reconstruction; they are then further analysed by another VAX-11/780.
Good communications between the various home institutes and CERN are vital for all
LEP collaborations. At the moment, the network is growing all the time. Most of the
machines at CERN are already connected via DECnet, which extends, via the INFN gateway,
to Italy, where DECnet has been in use for several years already. For France,
Switzerland and Germany, public X.25 networks are proposed. Great Britain has its own
scientific network JANET, based on the "coloured book" software and X.25. The network
extends to CERN and it might even be possible to run DECnet over JANET to have a uniform
connection. The GIFT project at CERN is supposed to provide gateways between
incompatible networks.
In the final layout there will be several levels of networking: local connections
at the experiment, regional connections on the CERN site, and European or world-wide
links. The example in fig. 21 shows how DELPHI sees the networking; it is
representative of the other experiments' systems.
6. SLOW CONTROL
For more demanding tasks, VME-based M68000-type systems are likely to be used. The
connection to the main data-acquisition system could go via VME-based Ethernet
controllers or via direct interfaces to the VAX.
7. SUMMARY OF PROCESSORS
For the first- and second-level triggers, special hardware machines are proposed,
based on look-up tables, coincidence matrices, and the like. For the higher-level
triggers, some experiments propose to use fast bit-slice processors such as the XOP
[11] or M68000-type machines in the FASTBUS standard.
For data reduction and readout at the crate level, the M68000 is the favourite,
either in FASTBUS or in VME (OPAL). L3 has a special solution for the BGO readout,
using one CMOS microprocessor per channel (12,000 units).
At the level of the event processor or event filter, three solutions are competing:
emulators such as the 370/E [13] or the 3081/E [14], arrays of microprocessors (the
Fermilab ACP project) [15], or a set of microVAXs. The last one is very attractive
because of software compatibility with the general system. However, the CPU power per
dollar might exclude this solution.
All four LEP experiments have decided to use VAX computers as the minis for the
data acquisition and monitoring.
For slow control applications the plans are to use the M6809 based on G64 or the
M68000 in VME.
Last but not least, work-stations will be needed for on-line and off-line analyses
and high-resolution graphics. Apollo is a candidate; the LISA II is being evaluated by
DELPHI; and there is the hope that DEC might show up with a competitive offer based on
a microVAX running microVMS.
8. CONCLUSION
There is much activity at CERN and in the participating home laboratories. We have
to find solutions for our communications problems, which are caused by the distributed
development. A lot of work still needs to be done, in particular on FASTBUS, VME, and
the emulators. The software problem has not been addressed at all in this
presentation, which does not mean that it is solved! In fact, the opinions there are
as divergent as the hardware solutions.
Fortunately, there are more than four years left, so there should be time to get
the work done.
Acknowledgements
I would like to thank all the members of the four LEP collaborations as well as the
people from the LEP machine, who provided the necessary input for this talk and helped
with drawings and transparencies.
* * *
References
[5] ALEPH Collaboration, Data acquisition and data analysis, CERN/LEPC/84-8 (1984).
[6] DELPHI Collaboration, Data acquisition and analysis in the DELPHI Experiment,
CERN/LEPC/84-3 (1984).
[7] L3 Collaboration, Trigger and data acquisition system of L3, CERN/LEPC/84-5 (1984).
[8] OPAL Collaboration, Report on data acquisition and data analysis, CERN/LEPC/84-4
and 84-2 (1984).
[10] H. Müller, The GPM, a general purpose programmable FASTBUS master (F6808), CERN, EP
Division, 20 May 1984.
[11] T. Lingjaerde, XOP, DELPHI 82/45 ELEC; A second generation fast processor for
on-line use in high energy physics experiments, Proc. Topical Conf. on the
Application of Microprocessors to High-Energy Physics Experiments, Geneva, 1981
(CERN 81-07, Geneva, 1981), p. 454.
[12] A.K. Barlow et al., The UTI-NET book, CERN/SPS/ACC/Tech. Note 84-1 (1984).
[13] H. Brafman et al., A fast general purpose IBM hardware emulator, Proc. Three Days
In-Depth Review on the Impact of Specialized Processors in Elementary Particle
Physics, Padua, 1983 (University of Padua, 1983), p. 71.
[14] P.F. Kunz et al., The 3081/E processor, same Proc. as Ref. [13], p. 84.
Table 1 General LEP parameters

Table 2 Trigger summary and rates (beam crossing every 22 µs)
Fig. 4 Layout of an interaction region
Fig. 6 The OPAL detector
Fig. 11 ALEPH level-1 trigger logic (track counts, energy/position correlation matrices, topology, read-out supervisor)

Fig. 12 L3 level-1 trigger inputs (small-angle calorimeter, vertex chamber, muon chambers, BGO; track, photon and energy-pattern finders)

Fig. 13 L3 trigger and data acquisition system block diagram (level 2A ~100 µs, level 2B ~5 ms)
Fig. 14 Schematic diagram of the architecture of the data acquisition and filter system for DELPHI (multi-event buffers, equipment computers, supervisor, emulators, main-stream and rare-event tapes)
Fig. 16 ALEPH central readout (sub-detector readout controllers (ROC), event builder, equipment computers, LAN with gateway to the international network, large VAX and control console)
"-•»•• L LEVEL 1
DA
DATA 1
irrFPT
DIGITIZER I DIGITIZER A L T T R
N=1 to 16
MASTER
CP
n -o o
cz
MA
DA
cz
ttn tt VME Master crate TRIGGER TRIGGER
DATA DATA
Trigger 1 z
©
DIGITIZER DIGITIZER
f DISPATCHER I DISPATCHER]
n
"D 3
©
m m
rr
<*
*
CO _i XI
XI m tt It >
-jigger 1 LEVEL 2 LEVEL I DATA
PROCESSOR PROCESSOR COLLECTOR
tt n
m
CAMAC Detector INTERFACE
TRIGGER COHTRQL
1 system crate OATA DATA I0x|
n z z—<
VME
CBA
DISPATCHER DISPATCHER LEVEL
GONE
2 ^
T
UNLIKE
DETECTOR ; : COMPUTER
MC u
er
_¿.
m m -S. x> COMPUTER' • '•
68010 CO
TO CT
_i -¿
tt
m
It > CD ^Trigger 1
I DETECTOR I
* z m •n ACCEPT/REJECT LIST
n
VME
.i,
BUS
_£
z z
—1 —
m m1 3£
EniESIlllB
©
u m m XI EMULATOR
c rn -n XI
FACE 1
XI
CT » tt It >
r-i
z m ^Trigger 1 EMULATOR
tt 1 HT E R F A CE EMULATOR
IRTERFACE
CAMAC Detector
cally 2-3Mb /
I m
system crate
VME .
total) CBA I I HT E H F A C E [ INTERFACE |
Fig. 18 OPAL VME interface and second level trigger Fig. 19 L3 data flow
Fig. 21 DELPHI view of the networking (readout electronics and equipment computers on a local area network, one branch per detector; trigger and filter processing; shared disks, tapes and graphics; Cernet gateway to CERN-wide and European-wide communications; home minis)