
IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 53, NO. 3, JUNE 2006

Design and Test Issues of an FPGA Based Data Acquisition System for Medical Imaging Using PEM

Carlos Leong, Pedro Bento, Pedro Lousã, João Nobre, Joel Rego, Pedro Rodrigues, José C. Silva, Isabel C. Teixeira, J. Paulo Teixeira, Andreia Trindade, and João Varela

Abstract—The main aspects of the design and test (D&T) of a reconfigurable architecture for the Data Acquisition Electronics (DAE) system of the Clear-PEM detector are presented in this paper. The application focuses on medical imaging using a compact PEM (Positron Emission Mammography) detector with 12288 channels, targeting high sensitivity and spatial resolution. The DAE system processes data frames that come from the front-end (FE) electronics, identifies the relevant data and transfers them to a PC for image processing. The design is supported by a novel D&T methodology, in which hierarchy, modularity and parallelism are extensively exploited to improve design and testability features. Parameterization has also been used to improve design flexibility. The nominal frequency is 100 MHz. The DAE must respond to a data acquisition rate of 1 million relevant events (coincidences) per second, under a total single photon background rate in the detector of 10 MHz. Trigger and data acquisition logic is implemented in eight 4-million, one 2-million and one 1-million gate FPGAs (Xilinx Virtex II). Functional Built-In Self-Test (BIST) and debug features are incorporated in the design to allow on-board FPGA testing and self-testing during the product lifetime.
Index Terms—Functional built-in self-test, hierarchy, modularity, parallelism, parameterization, pipelining, process diagrams, re-use.

I. INTRODUCTION

BREAST cancer early detection is recognized as a worldwide priority, since it constitutes the most effective way to deal with this illness. Nevertheless, the detection specificity of present diagnosis systems is low [1]. Therefore, research on new diagnosis processes and systems for this type of cancer is actively pursued. Positron Emission Tomography (PET) based technology is one of these promising research lines. PET technology is used in the development of the Clear-PEM scanner, a high-resolution Positron Emission Mammography (PEM) system, capable of detecting tumors with diameters
Manuscript received June 19, 2005; revised March 30, 2006. This work was supported in part by AdI (Innovation Agency) and POSI (Operational Program for Information Society), Portugal. P. Rodrigues and A. Trindade were supported by the FCT under Grant SFRH/BD/10187/2002 and Grant SFRH/BD/10198/2002.
C. Leong and P. Bento are with INESC-ID, Lisboa, Portugal.
P. Lousã, J. Nobre, and J. Rego are with INOV, Lisboa, Portugal.
P. Rodrigues and A. Trindade are with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal.
J. C. Silva is with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal, and also with CERN, Geneva, Switzerland.
I. C. Teixeira and J. P. Teixeira are with INESC-ID, Lisboa, Portugal, and also with the Instituto Superior Técnico, Universidade Técnica de Lisboa, Portugal.
J. Varela is with the Laboratório de Instrumentação e Física de Partículas, Lisboa, Portugal, and also with CERN, Geneva, Switzerland, and the Instituto Superior Técnico, Universidade Técnica de Lisboa, Portugal.
Digital Object Identifier 10.1109/TNS.2006.874841

down to 2 mm [1]–[5]. Based on the detection of radiation emitted by human cells when a radioactive substance is injected into the human blood stream [3], PET identifies, by image reconstruction, the spatial origin of the radiation source (the cancerous cells).
Image reconstruction algorithms demand millions of pixels for providing acceptable accuracy. Hence, for a correct medical diagnosis, a huge amount of data must be generated and processed. The purpose of this paper is to present key aspects of a novel design and test methodology for high data-volume, data stream digital systems and to apply it to the development of the Data Acquisition Electronics (DAE) system responsible for the digital data processing in the Clear-PEM scanner.
Along with the innovative high-resolution PEM technology, new physical data, algorithms and methodologies are under intensive research. Therefore, hardware/software solutions using reconfigurable hardware (i.e., FPGA-based) constitute an adequate choice. Additionally, reconfigurable hardware solutions are also adequate for the volume production of the envisaged product.
The main design challenge in this context is the need to process huge amounts of data [4] and to perform tumor cell identification (if resident in the patient tissues) in the shortest time possible. We refer to this as the medical diagnosis process. These constraints demand an efficient electronic system, which means hardware data processing and extensive use of parallelism and pipelining. In order to meet the functional and performance requirements, moderate-speed, high pin count, complex FPGAs should be used (for the design in which the novel methodology is implemented, Xilinx Virtex II devices have been used).
The paper is organized as follows. In Section II, a brief description of the Clear-PEM detector system architecture is presented. Section III presents the main aspects of the proposed
methodology, including key functional, performance and testability issues. In Section IV, DAE implementation details are provided. Design validation and prototype verification procedures
are presented in Section V. Finally, Section VI summarizes the
main conclusions of this work.
II. CLEAR-PEM DETECTOR SYSTEM
The Clear-PEM detector system is a PET camera for breast
imaging designed to optimize the detection sensitivity and
spatial resolution [1], [2]. It consists of two parallel detector
heads, corresponding to a total of 12288 readout channels.
The system is designed to support a data acquisition rate
of 1 million events per second, under a total single photon
background rate of 10 MHz [2]. An event or hit (photoelectric
event or Compton, according to the associated energy) is


Fig. 1. Sampled energy pulse associated with a hit.

defined as the interaction of a γ ray with a crystal. Data to be analyzed and processed correspond to the energy originated in the different crystals as a consequence of these interactions. Relevant data are associated with relevant events. An event is defined as relevant if it corresponds to a coincidence, that is, the simultaneous occurrence of hits in both crystal planes. In this context, simultaneous means within the same discrete time window, characterized by a pair of parameters, Time Tag/Delta. The Time Tag (TT) is the time instant associated with the data sample that corresponds to the highest value in the energy pulse associated with one hit. Delta is the difference between the time instant associated with the analog energy peak and the Time Tag (sample) (see Fig. 1).
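For illustration, the following sketch, written in C++ in the spirit of the high-level simulation model used later for design validation (Section V), shows how a (Time Tag, Delta) pair could be derived from a sampled energy pulse such as the one in Fig. 1. The parabolic interpolation of the analog peak and the function name are assumptions of this sketch, not the documented front-end algorithm.

#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper: derive (Time Tag, Delta) from one sampled energy pulse.
// Time Tag: index of the sample with the highest value (in clock ticks).
// Delta: offset of the estimated analog peak with respect to that sample,
// approximated here by a parabolic fit through three samples (assumption).
std::pair<int, double> timeTagAndDelta(const std::vector<double>& samples)
{
    const auto maxIt = std::max_element(samples.begin(), samples.end());
    const int timeTag = static_cast<int>(maxIt - samples.begin());

    double delta = 0.0;
    if (timeTag > 0 && timeTag + 1 < static_cast<int>(samples.size())) {
        const double ym = samples[timeTag - 1];
        const double y0 = samples[timeTag];
        const double yp = samples[timeTag + 1];
        const double denom = ym - 2.0 * y0 + yp;
        if (denom != 0.0)
            delta = 0.5 * (ym - yp) / denom;  // peak offset relative to the Time Tag, in ticks
    }
    return {timeTag, delta};
}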
System functionality is partitioned between the Front-End
(FE) electronics (details in [3], [4]) on the detector heads, and
the off-detector DAE. The FE is responsible for the construction
and distribution of the analog data frames that correspond to the
occurrence of hits. This work focuses on the novel design and test
methodology applied to the DAE development. The proposed
DAE architecture is described in the following section.

III. DESIGN AND TEST METHODOLOGY


The proposed Design and Test (D&T) methodology targets high-volume, high-rate data processing systems to which data are concurrently delivered. Such a system must satisfy the following objectives: 1) functional and performance compliance with the functional and timing specifications; 2) testability and debugging capabilities to allow functional self-test and prototype debug; and 3) easily modifiable functionality to allow low-cost system design adaptation to requirement modifications that may occur, e.g., as the result of refinements in algorithms and calibration procedures.
To meet these objectives, the following system attributes are pursued in the hardware D&T methodology: hierarchy, modularity, module re-use, parallelism, pipelining and parameterization.

Cost-effective prototype validation of such a complex system is a mandatory requirement. This excludes the use of external test equipment alone. Therefore, unassigned FPGA resources should be used, as much as possible, to implement built-in test structures to support system diagnosis and debug, and self-test during the product lifetime. The FPGA devices integrated in the DAE system are assumed defect-free (they are marketed after production test). Hence, FPGA test resources are not built in to perform a structural test (as is usual in manufacturing), but rather to carry out a functional test. We refer to this feature of the proposed D&T methodology as functional BIST.
The complexity and the specificity of the problem justified the development of a new D&T methodology. The following aspects have been taken into consideration in its development. Although huge amounts of data arrive at the DAE at a relatively high rate, the information flowing from each channel is identical (since it comes from similar crystals) and should be submitted to identical processing. Therefore, the electronic system architecture should reflect this characteristic by exhibiting high replication, or re-use, of identical processing modules. An important aspect to be addressed is the choice of the granularity of the modules. Should a module correspond to a single crystal, since a crystal is the source of data, or should it correspond to some crystal cluster?
It has been decided that the DAE architecture should map the organization of the crystal arrays. Data are provided by 12288 readout channels (two channels/crystal, one for the top and one for the bottom plane). These channels are organized in 2 × 96 identical detector modules distributed over the two crystal planes. Data arrive at the DAE in parallel. Thus, they should, as much as possible, be processed in parallel.
On the other hand, data are transmitted from the FE to the
DAE by a large number of cables that may introduce diverse
delays. However, to guarantee that a detected coincidence is effectively a coincidence, it is mandatory to guarantee system synchronism.
The random nature of data generation is another aspect that has been considered. In fact, huge amounts of randomly generated data arrive at the DAE and require processing to determine whether they should be considered relevant or not. Irrelevant data must be discarded as quickly as possible.

Fig. 2. Top-level model of the PEM DAE electronic system.

Fig. 3. Process diagram corresponding to the normal/random mode scenario.
Therefore, the D&T methodology should lead to a DAE architecture that reflects the DAE hierarchy and the modular character of the scanner, as well as data flow parallelism using multiple instantiation (re-use) of identical modules.
In order to take into account the random nature of the data, enough memory banks must be allocated in the architecture to temporarily store data until they can be safely discarded. Moreover, the methodology should consider the physical limitations imposed on the timing requirements by the interconnection cables, which demand adequate design techniques to guarantee synchronism.
In the next sections, the rationale behind the methodology will emerge from the explanation of the DAE architecture.
A. DAE Architecture
The main functionality of the DAE is to identify relevant data coming from the FE electronics. Fig. 2 depicts the top-level architecture of the DAE system. This figure highlights the hierarchical nature of the design. In fact, the system is composed of four identical DAQ boards, each one with two identical FPGAs (DAQ FPGA) that implement the Data Acquisition (DAQ) functionality.

TABLE I
DATA AND CONTROL SIGNALS DESCRIPTION

Fig. 4. Mapping processes into modules at the architecture level.

In each DAQ FPGA, the system functionality is partitioned into DAQ (synchronization and processing), Read-Out Controller (ROC) and Filter.
Controller (ROC) and Filter.
Each one of the four DAQ boards maps 48 crystal modules.
Each DAQ FPGA inside each DAQ board processes data corresponding to 24 crystal modules. Modularity and hierarchy are
also present in the design of each module that constitutes the
DAQ FPGA.
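As a quick consistency check of these figures (derived only from the numbers quoted above):

\[
\frac{12288\ \text{channels}}{2 \times 96\ \text{modules}} = 64\ \text{channels/module},\qquad
4\ \text{boards} \times 48 = 192\ \text{modules},\qquad
24 \times 64 = 1536\ \text{channels per DAQ FPGA}.
\]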
The Trigger and Data Concentrator (TGR/DCC) board
houses the TGR/DCC FPGA, which implements the Trigger
and Data Concentration functionality. This FPGA is responsible
for the detection of coincidence occurrences. This functionality
is implemented in module TGR (Fig. 2). Whenever a coincidence is detected, a trigger signal is generated. The presence
of this signal indicates to the DAQ that the corresponding

data must be made available to the DCC module. The DCC-ROC module is responsible for data organization according to the communication protocols. The DBArb module is the arbiter of the Dedicated Bus.
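As a behavioral illustration of the coincidence check performed by the TGR (a C++ sketch; the window test on Time Tags and all names are assumptions made here for clarity, not the actual VHDL implementation):

#include <cstdint>

// Hypothetical summary of a hit, as seen by the trigger logic.
struct HitInfo {
    std::int64_t timeTag;   // discrete Time Tag, in clock ticks
    bool         topPlane;  // true: top crystal plane, false: bottom plane
};

// Two hits form a coincidence when they occur in opposite crystal planes
// within the same discrete time window (window width given in ticks).
bool isCoincidence(const HitInfo& a, const HitInfo& b, std::int64_t windowTicks)
{
    const std::int64_t diff = a.timeTag - b.timeTag;
    return (a.topPlane != b.topPlane) && (diff <= windowTicks) && (-diff <= windowTicks);
}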
The TGR/DCC board also houses the PCI FPGA that implements the controller of the PCI Bus, which is responsible for the
communication between the DAE and the external PC. Within
each FPGA a test module represents all the built-in test structures that are used for functional test and for prototype debug.
The test structure is also modular. In the figure, a single Test
Module represents the test structure. However, test structures
are associated with each one of the different functional modules.
For debug purposes, dedicated connectors (CON) are available
at the different boards, to allow accessibility to and from the test
equipment.


In Fig. 2, LVDS stands for Low Voltage Differential Signaling, which is a low noise, low power, low amplitude method
for high-speed (gigabits per second) data transmission over
copper wire.
Two proprietary buses, the Generic Bus and the Dedicated
Bus, are responsible for the fast information flow within the
DAE system.

B. Process Diagrams
In the proposed D&T methodology, system functionality is
partitioned into sub-functions, in a hierarchical way to satisfy
design, testability and diagnostic requirements.
The partitioning procedure is based on the characterization of
data and data streams, as well as on the processes that transform
data.
Process Diagrams (PDs) (e.g., Fig. 3) are used to describe data storage, processing, and data and control flow. Process Diagrams ease problem characterization and modeling. This procedure has been adapted from the software domain [7], [8], where this kind of modeling is used as a thinking tool to characterize the problem under analysis, as completely as possible, prior to initiating system design.
In Fig. 3, ellipses represent processes, rectangles represent
external objects that dialog with the processes and arrows represent information flow.
A process is defined as a set of functions that carry out a given
functionality. Each ellipse conveys the process name and the
number of instances of that process (e.g., x4 means that there
are 4 instances of this module in the architecture). Each process
can be instantiated more than once. For instance, DAQ Sync is
instantiated 4 times. By doing so, modularity, reuse and parallelism are highlighted.
Each arrow conveys data and control signal information. Different types of arrows represent different types of information. In this particular case, a distinction is made between the functional operation mode (dotted lines) and the test mode (dashed and continuous lines).
In test mode, a distinction is also made between the data originated by the test modules, that is, the test vectors (dashed lines), and the modules' responses to the test vectors, that is, the modules' signatures (continuous lines).
In a good design, Process Diagrams should present low connectivity, that is, processes should be designed so that their associated functionality is executed as independently as possible from the other processes. This eases the implementation of hierarchy and parallelism in the design structures.
Another aspect that is contemplated in the Process Diagrams is the time variable. In fact, although it does not appear explicitly in the diagrams, it is conveyed in the control signals that, together with the data, define the flow of information between processes.
To guarantee, as much as possible, the completeness of the functional description, the concept of operational scenario is introduced. In this context, a scenario is defined as the set of processes and corresponding data and control flow that represents the complete execution of the functionality in a given operation mode.

Fig. 5. Complete FPGA Test Procedure.

Scenario identification is indicated by the index i in fi. As an example, f1.T15 means the test flow T15 in scenario 1. If necessary, additional meaning can be associated with the remaining indexes (indicating, e.g., the source and the target modules).
For the DAE, five operational scenarios have been identified, namely, 1) normal/random mode, 2) single mode (for calibration), 3) constant loading parameters (for calibration), 4) function mode loading and 5) error request. For the sake of completeness, all the different scenarios that correspond to the DAE operation modes must be described in terms of Process Diagrams.
As an example, Fig. 3 depicts the Process Diagram (PD) of the DAE normal/random mode scenario. In this diagram, the processes corresponding to functional BIST structures are already included (test process). As shown, data and control signals are the inputs and outputs of the transforming processes. Each process can be further decomposed into sub-processes, and described in more detailed PDs that correspond to lower hierarchical levels. In this way, hierarchy emerges.
Table I provides some examples of the data and control signals of the PD described in Fig. 3. Although, in the software domain, PDs typically describe the static flow and processing of data, their reuse in the context of our methodology takes dynamic features into consideration.
C. Mapping Processes Into Design Modules
At the design level, ideally, each process should correspond to a hardware module or sub-module. In Fig. 4, the correspondence between processes in the Process Diagram and modules in the FPGA architecture is shown.
Design documentation (and test planning) requires the specification of the data and control signals identified in the Process Diagrams, according to their format and timing requirements.
Lastly, all these modules are designed so they can be configured offline and/or online, a very useful feature for prototype validation.
D. Performance Issues
Taking into account that the main objective of the system is
the identification of coincidences, it is easy to understand that synchronism is a critical issue in this system (de-synchronization may mainly be due to the long and diverse lengths of the interconnection cables). In fact, if synchronism is lost, data become meaningless.

Fig. 6. Test structure for processes 2 and 3.

To guarantee synchronism in key parts of the circuit, where the delays associated with previous processing or data paths can be variable, self-adjusted pipeline structures are used. The latter is because the data come through an asynchronous bus, so the data are scrambled in the time domain and must be de-scrambled before processing. The former is for automatic adaptation to the cable length (cable delay).
Moreover, it is necessary to guarantee the working frequency of 100 MHz. To achieve this purpose, registers are inserted between modules whenever required. The modular character of the design significantly simplifies this procedure. As mentioned, identical modules are used in parallel processing mode. Different modules can work at different frequencies. Synchronous and/or asynchronous FIFOs are used to guarantee correct data transfer between modules. With this generic approach, implementing functional BIST structures is equivalent to implementing any other functionality.
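A minimal software analogue of the de-scrambling step is sketched below, assuming that each frame carries its Time Tag and that a bounded reordering depth is sufficient; the class, the depth parameter and the field names are illustrative only.

#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct Frame {
    std::int64_t timeTag;   // discrete Time Tag carried by the frame
    // samples, channel identifier, etc. omitted in this sketch
};

struct LaterTimeTag {
    bool operator()(const Frame& a, const Frame& b) const {
        return a.timeTag > b.timeTag;   // makes priority_queue a min-heap on Time Tag
    }
};

// Frames arriving out of order (e.g., over asynchronous buses with different
// cable delays) are buffered and released in Time Tag order once more than
// 'depth' frames are pending.
class TimeAligner {
public:
    explicit TimeAligner(std::size_t depth) : depth_(depth) {}

    void push(const Frame& f) { pending_.push(f); }

    bool pop(Frame& out) {                  // true when a frame can be released in order
        if (pending_.size() <= depth_) return false;
        out = pending_.top();
        pending_.pop();
        return true;
    }

private:
    std::size_t depth_;
    std::priority_queue<Frame, std::vector<Frame>, LaterTimeTag> pending_;
};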
E. Testability Issues
DAE testing [8] is carried out in order to ensure: 1) design and prototype validation, diagnosis and debug, and 2) lifetime self-test. This may be carried out at component, board and system level. As mentioned before, the complexity of the system would make its functional test extremely complex if based on the use of external equipment only. Therefore, a test resource partitioning strategy has been adopted. Almost all the DAE test procedures are embedded in the FPGA design with negligible overhead: unused silicon area and limited speed degradation. The implemented functional BIST structures support both abovementioned objectives [9].
The functional built-in test modules in the different FPGAs aim at: 1) the verification of the correctness of the DAE system functionality and performance, and 2) the diagnosis and debug of the DAE system or its subsystems.

Fig. 7. Crate overview.

TABLE II
ALLOCATED RESOURCES
Moreover, not only must the DAE system functionality be correctly implemented, but also the timing requirements must be met. Therefore, Functional Tests and Performance Tests are carried out for the different system operating modes, or scenarios.
In Fig. 5, the FPGA test procedure is depicted. As can be observed, for each scenario, each FPGA is completely tested using two working frequencies. First, the system is tested at half speed. If everything works according to the specifications, then the functionality is correct (Functional Test). Afterwards, the system is tested again at nominal speed (Performance Test). If errors occur, it is possible to conclude that these are timing errors.
At each step, and for all scenarios, testing may be carried out at different hierarchical levels, targeting components, modules, boards or the whole system.
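The decision logic of this two-frequency procedure can be summarized as follows (a C++ sketch with hypothetical helper names; the pass/fail information is assumed to come from the built-in signature comparators described below):

enum class TestVerdict { Pass, FunctionalError, TimingError };

// runScenario is assumed to exercise one operational scenario on one FPGA at
// the given clock frequency and to report whether all signatures matched.
TestVerdict classify(bool (*runScenario)(double clockMHz), double nominalMHz)
{
    if (!runScenario(nominalMHz / 2.0))    // half speed: Functional Test
        return TestVerdict::FunctionalError;
    if (!runScenario(nominalMHz))          // nominal speed: Performance Test
        return TestVerdict::TimingError;   // functionality already verified, so errors are timing related
    return TestVerdict::Pass;
}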

An example of a test structure is presented in Fig. 6, corresponding to processes 2 and 3 in Fig. 3. As shown, a set of test benches, TB1, TB2 and Null TB, is applied to the processes to be tested. Comparators are used to validate the module outputs by comparison with the expected signatures. These test benches and expected outputs are generated by the Geant4 Monte Carlo simulation toolkit and the DIGITSim DAQ Simulator [2], and stored in ROM blocks within the FPGAs.
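A simplified view of such a comparator is given below; the 32-bit rotate-and-XOR signature is only an illustration, the actual expected signatures being pre-computed by the simulation chain and stored in the on-chip ROM.

#include <cstdint>
#include <vector>

// Fold a stream of module output words into a compact signature (illustrative only).
std::uint32_t signatureOf(const std::vector<std::uint32_t>& outputs)
{
    std::uint32_t sig = 0;
    for (std::uint32_t word : outputs) {
        sig = (sig << 1) | (sig >> 31);   // rotate left by one bit
        sig ^= word;
    }
    return sig;
}

// The built-in comparator flags a failure when the computed signature differs
// from the expected one read from ROM.
bool moduleUnderTestPasses(const std::vector<std::uint32_t>& outputs,
                           std::uint32_t expectedFromRom)
{
    return signatureOf(outputs) == expectedFromRom;
}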
Testing is carried out in two steps, one non-deterministic and one deterministic. The non-deterministic test will verify that all duplicated modules and blocks have identical responses for the same input vectors, which include Monte Carlo digitized data frames. The deterministic test will verify that the functionality, namely the evaluation of the two key values (Delta/Time Tag and Energy) and the samples [4], is correct on at least one complete signal path. The deterministic test will also verify that the filter master block functionality (communication with the trigger and the other DAQ FPGA) is correct. Test outputs are the corresponding signatures. Functional BIST structures have been implemented without significant degradation of system performance.

Fig. 8. DAQ board.
IV. IMPLEMENTATION DETAILS
The PEM DAE system is implemented in a set of boards
housed in a 6U Compact PCI crate (see Fig. 7). A generic and a
dedicated bus are used for data exchange among boards.
The data acquisition reconfigurable logic is implemented in large FPGAs with four million gates each (Xilinx Virtex II xc2v4000-4bf957, eight DAQ FPGAs). Another FPGA (Xilinx Virtex II xc2v2000-4bg575), with two million gates, implements the TGR/DCC module. A third FPGA (Xilinx Virtex II xc2v1000-4bg575), with one million gates, implements the PCI Core.
Table II indicates the allocated resources on the DAQ and on the TGR/DCC FPGAs (prior to functional BIST). Using standard routing effort, the design achieved a register-to-register delay of 9.348 ns, which corresponds to a clock frequency of 107 MHz. The speed degradation due to BIST insertion is minimal (107 to 104 MHz, less than 5%).
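As a quick check of these figures:

\[
f_{\max} = \frac{1}{9.348\ \text{ns}} \approx 107\ \text{MHz},\qquad
\frac{107 - 104}{107} \approx 2.8\% < 5\%.
\]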
In Fig. 8, the actual DAQ board, a twelve-layer board, can be
seen. The Bus connectors as well as the main components of the
board (LVDS transceivers and FPGAs) are pointed out.

V. DESIGN VERIFICATION AND PROTOTYPE VALIDATION

Detailed simulations of the Clear-PEM detector and trigger system, using the Geant4 Monte Carlo simulation toolkit [10] and a high-level C++ simulation of the data acquisition system, have been carried out to produce realistic datasets that have been used to study the FPGA design, assess the hardware implementation and evaluate the influence of the data acquisition system on the reconstructed images. For these tasks, a simulation framework has been implemented. Details of this framework are provided in [2], [4].
Test vectors generated by the simulation framework have been used for VHDL design validation. The following strategy has been followed: events produced by the Geant4-based modules (Phantom Factory/PEMSim) [4], [11] were interfaced with DIGITSim and a list of digitized data frames was obtained (two for each hit). Each data frame corresponds to the information sent by the front-end system and contains ten samples plus Monte Carlo truth variables: energy and phase. This same list of samples has been used as stimuli to the VHDL test bench (compiled and synthesized by ISE Project Navigator 6.2.03i and simulated by ModelSim XE II 5.7g) and to the DIGITSim DAQ Simulator. Results obtained with the VHDL and DIGITSim descriptions of the PEM system are coincident.
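For reference, a digitized data frame as used in this validation chain can be modeled as follows (a C++ sketch consistent with the description above; the field types and names are assumptions):

#include <array>
#include <cstdint>

// One digitized data frame, as produced by DIGITSim from Geant4 events and
// replayed as stimuli to the VHDL test bench (two frames per hit).
struct DigitizedFrame {
    std::array<std::uint16_t, 10> samples;  // ten samples of the energy pulse
    // Monte Carlo truth, kept alongside for validation purposes only:
    float energyTruth;                      // true deposited energy
    float phaseTruth;                       // true phase of the pulse w.r.t. the sampling clock
};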
Fig. 9. Design and validation flow.

In Fig. 9, the overall design and validation data flow diagram is represented. Three main steps are highlighted in this figure, namely, system-level simulation carried out by the Geant4 Monte Carlo simulation toolkit using a C++ model, design validation using the VHDL description and the Xilinx ISE/ModelSim tools, and, finally, prototype validation, which is carried out using some test equipment and the functional BIST structures.
The first two steps take place during the design phase, although, as mentioned before, the test benches used for prototype validation and lifetime self-test are generated in phase one. As indicated in the figure, some FPGA reconfiguration may be required during the prototype validation phase. Also indicated in the figure is the re-use of the functional BIST structures for the lifetime test. This test is carried out at power-up and on user request, using a software command.

VI. CONCLUSION

A design and test methodology for the development of the DAE of the Clear-PEM scanner has been presented. The underlying principles of the D&T methodology are the extensive use of hierarchy, modularity, re-use, pipelining, parallelism and parameterization in the hardware implementation. Using these attributes facilitates the design process, as well as design and prototype functionality and performance validation. Parameterization leads to more flexible designs, allowing the introduction of late modifications in the system requirements without significant re-design effort.
Functional BIST structures, embedded in the FPGA components, allow prototype debug (significantly reducing the complexity and costs of test equipment) and lifetime self-test. These test structures have been implemented without significant

degradation of system performance (less than 5% for the DAQ FPGA), although, in the case of the DAQ FPGA, they occupy a non-negligible fraction of the FPGA resources.
In the future, refined algorithms will be implemented for coincidence detection, and the testability features will be revised.
REFERENCES
[1] P. Lecoq and J. Varela, "Clear-PEM, a dedicated PET camera for mammography," Nucl. Instrum. Meth. A, vol. 486, pp. 1–6, 2002.
[2] A. Trindade et al., "Design and evaluation of the Clear-PEM scanner for positron emission mammography," IEEE Trans. Nucl. Sci., to be published.
[3] J. Varela, "Electronics and data acquisition in radiation detectors for medical imaging," Nucl. Instrum. Meth. A, vol. 527, pp. 21–26, 2004.
[4] P. Bento et al., "Architecture and first prototype tests of the Clear-PEM electronics systems," in IEEE MIC, Rome, Italy, 2004.
[5] N. Matela et al., "System matrix for Clear-PEM using ART and linograms," in IEEE MIC, Rome, Italy, 2004.
[6] OMG Unified Modeling Language, v1.5, Rational, 2003.
[7] B. Selic and J. Rumbaugh, "Using UML for modeling complex real-time systems," white paper, 1998.
[8] G. Hetherington, T. Fryars, N. Tamarapalli, M. Kassab, A. Hassan, and J. Rajski, "Logic BIST for large industrial designs: Real issues and case studies," in Proc. IEEE Int. Test Conf., 1999, pp. 358–367.
[9] P. Bento, C. Leong, I. C. Teixeira, J. P. Teixeira, and J. Varela, "Testability and DfT/DfD issues of the DAE system for PEM," Tech. Rep., version 3.1, Jan. 2005.
[10] S. Agostinelli et al., "Geant4 - a simulation toolkit," Nucl. Instrum. Meth. A, vol. 506, pp. 250–303, 2003.
[11] P. Rodrigues et al., "Geant4 applications and developments for medical physics experiments," IEEE Trans. Nucl. Sci., vol. 51, no. 4, pp. 1412–1419, Aug. 2004.
