Phys. Med. Biol. 56 (2011) 979–999 doi:10.1088/0031-9155/56/4/007

Quality control measurements for digital x-ray detectors

N W Marshall1, A Mackenzie2 and I D Honey3


1 Department of Radiology, University Hospitals Leuven, 49 Herenstraat, 3000 Leuven, Belgium
2 The National Co-ordinating Centre for the Physics of Mammography, Medical Physics,
Level B, St Luke’s Wing, The Royal Surrey County Hospital NHS Trust, Egerton Road,
Guildford, GU2 7XX, UK
3 Department of Medical Physics, Floor 3, Henriette Raphael House, Guy’s and St Thomas’

Hospital, London, SE1 9RT, UK

E-mail: nicholas.marshall@uz.kuleuven.ac.be

Received 26 August 2010, in final form 30 November 2010


Published 19 January 2011
Online at stacks.iop.org/PMB/56/979

Abstract
This paper describes a digital radiography (DR) quality control protocol for
DR detectors from the forthcoming report from the Institute of Physics and
Engineering in Medicine (IPEM). The protocol was applied to a group of six
identical caesium iodide (CsI) digital x-ray detectors to assess reproducibility
of methods, while four further detectors were assessed to examine the wider
applicability. Twelve images with minimal spatial frequency processing are
required, from which the detector response, lag, modulation transfer function
(MTF), normalized noise power spectrum (NNPS) and threshold contrast-
detail (c-d) detectability are calculated. The x-ray spectrum used was 70 kV
and 1 mm added copper filtration, with a target detector air kerma of 2.5 μGy
for the NNPS and c-d results. In order to compare detector performance with
previous imaging technology, c-d data from four screen/film systems were also
acquired, at a target optical density of 1.5 and an average detector air kerma
of 2.56 μGy. The DR detector images were typically acquired in 20 min,
with a further 45 min required for image transfer and analysis. The average
spatial frequency for the 50% point of the MTF for six identical detectors was
1.29 mm−1 ± 0.05 (3.9% coefficient of variation (cov)). The air kerma set for
the six systems was 2.57 μGy ± 0.13 (5.0% cov) and the NNPS at this air kerma
was 1.42 × 10−5 mm2 (6.5% cov). The detective quantum efficiency (DQE)
measured for the six identical detectors was 0.60 at 0.5 mm−1, with a maximum
cov of 10% at 2.9 mm−1, while the average DQE was 0.56 at 0.5 mm−1 for three
CsI detectors from three different manufacturers. Comparable c-d performance
was found for these detectors (5.9% cov) with an average threshold contrast of
0.46% for 11 mm circular discs. The average threshold contrast for the S/F
systems was 0.70% at 11 mm, indicating superior imaging performance for the
digital systems. The protocol was found to be quick, reproducible and gave an
in-depth assessment of performance for a range of digital x-ray detectors.

1. Introduction

Digital x-ray detectors have finally entered widespread service across all diagnostic imaging
facilities, from small, specialized clinics to large hospitals that offer a broad spectrum of
imaging services. In the UK, nearly all new diagnostic x-ray imaging installations will employ
a digital x-ray detector of some sort, either a flat panel detector or computed radiography (CR)
cassettes. Current flat panel detectors are mostly based around two technologies. The first,
known as direct conversion, consists of a photoconductor such as amorphous selenium (a-Se)
coupled to an array of pixels formed of amorphous silicon (a-Si) thin film transistors (TFTs).
The second common method is called indirect conversion, and utilizes a scintillation phosphor
in contact with a light-sensitive pixel array, again formed from a-Si TFT pixels. In this study,
the term ‘DR’ refers to flat panel x-ray detectors using either of these conversion methods,
where the x-ray capture and readout are integrated into a single detector unit. Emphasis in this
work is placed on digital radiography (DR) detectors used for general radiography; however
the methods and measurements are equally applicable to CR detectors.
With regard to general diagnostic x-ray detectors, guidance for routine quality control
(QC) in the UK is already available (IPEM 2005) in the form of the Institute of Physics
and Engineering in Medicine (IPEM) Report 91; simple, frequent tests are described for the technician or
radiographer together with more involved, less frequent tests intended for the medical physicist.
Tests listed for DR systems include low contrast sensitivity, limiting spatial resolution, dark
noise and detector uniformity, along with supporting remedial and suspension levels. The
use of low contrast sensitivity, via the threshold contrast-detail (c-d) detectability curve, is a
standard, albeit subjective, means of specifying the performance of an imaging system or an
x-ray detector (Cowen et al 1987, Evans et al 2004).
IPEM Report 91 focuses on the routine tests and test frequencies required for a range
of x-ray systems. Report series 32, issued by IPEM, is intended to complement the routine
performance testing described in Report 91 with a fuller discussion of the tests that can be
performed, including commissioning tests. Report 32 (Part VII) (IPEM 2010) presents QC
and commissioning tests that can be performed on DR and CR cassette-based digital systems
in greater detail. As might be expected, many of the tests presented in Report 32 (Part VII)
are similar to those in Report 91; however additional parameters assessed for DR detectors
include stitching artefacts and image retention (or lag); there is also some discussion of the
pixel correction map. One significant addition to Report 32 (Part VII) is the inclusion of a
chapter detailing the measurement of quantitative (objective) image quality parameters such as
the modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum
efficiency (DQE) (Metz et al 1995). While it is recognized that accurate measurement of these
parameters is difficult and potentially time consuming, work has begun on the assimilation
of such measures into routine QC programmes (Marshall 2007, Cunningham 2008). Some
scientific and technical data are available for DR detectors in the literature—see Samei and
Flynn (2003) or the Centre for Evidence-based Purchasing report (CEP 2005); however these
studies concentrate solely on objective measures and use the International Electrotechnical
Commission (IEC) RQC spectra (IEC 2005). An in-depth characterization of three DR
detectors, including the Hologic DirectRay detector which is no longer in production, is
available in the excellent report by Borasi et al (2003).

The aim of this work was to apply some of the tests and measures of detector performance
discussed in Report 32 (Part VII) to a group of digital detectors in a QC setting, using
the practical, established beam quality also used in IPEM Report 91. Not all of the tests
were examined in detail; this work concentrated on detector uniformity, image retention,
threshold c-d detectability and quantitative data analysis. Although Report 32 (Part VII)
gives a reference c-d curve for CR cassette systems, the report does not present c-d data for
DR detectors. During the initial investigation of flat panel imaging properties, there was
considerable interest in a comparison against c-d performance of screen/film (S/F) detectors
(Aufrichtig 1999, Rong et al 2001); however these early results were presented as threshold
hole depths rather than contrasts. A further aim was therefore to generate c-d results against which
flat panel detectors can be compared.

2. Materials and methods

2.1. Parameters assessed


The conventional semi-quantitative parameters measured were low contrast sensitivity and
limiting spatial resolution. Quantitative assessment of detector performance involved a
measurement of the signal transfer property (STP), image retention, detector uniformity,
dark noise, presampling MTF, NNPS and DQE. A qualitative (visual) assessment of artefacts
is also described.

2.2. X-ray detectors assessed


The detectors assessed in this study fall into two groups. The first group consisted of six
Shimadzu Dart mobile x-ray systems (Shimadzu Europa GmbH, Germany), each incorporating
a caesium iodide (CsI)-based Canon CXDI-50C flat panel detector (Canon Europe Ltd, UK), all
installed at one NHS Trust. This group of detectors were installed within a period of 2 weeks
and underwent physics commissioning tests based on the new IPEM digital radiography
protocol. This allowed the reproducibility of the various QC parameters and the QC geometry
across a number of identical detectors to be examined. The Canon CXDI-50C detectors are
referred to as Canon_CsI in this study.
The second group of detectors were installed at a number of hospital sites across the
country. This group included a Canon CXDI-50G flat panel detector, based on a powdered
gadolinium oxysulfide (Gd2O2S) phosphor (referred to as Canon_GOS), and two CsI phosphor-based
detectors: a Trixell Pixium 4600 (Trixell, France) and a GE Definium (General Electric
Company, USA). Finally, a Fuji Velocity (FujiFilm Corporation, Japan) integrated CR detector
was also assessed. This unit uses a standard BaFBr:Eu powder photostimulable phosphor held
in a binder, built into a bucky holder with line readout of the phosphor. This detector is
therefore operated in a similar manner to flat panel units (no CR cassette handling) but is
based on CR phosphor technology. Table 1 presents technical parameters for these detectors.
With the addition of one of the Canon_CsI detectors, this forms the second group of five
different detectors made up of different detector technologies.

2.3. Image processing presets


For successful application of quantitative data analysis methods, access to linear, shift invariant
images free from image processing is essential (Cunningham 2000). Images in this study
were acquired with the defective pixel, offset and gain corrections applied; all additional
image processing was disabled (edge enhancement or ‘MLT(M) frequency processing’, for
example).

Table 1. Characteristics of the x-ray detectors assessed in this study.

Detector name        X-ray converter  Converter thickness (μm)  Image area (cm)  Pixel pitch (μm)  Pixel matrix  Detector tiles
Canon CXDI-50G       Gd2O2S           ∼200                      35 × 43          160               2208 × 2688   1
Canon CXDI-50C       CsI              Not disclosed             35 × 43          160               2208 × 2688   1
GE Revolution        CsI              ∼500                      41 × 41          200               2022 × 2022   1
Trixell Pixium 4600  CsI              ∼500                      43 × 43          143               3001 × 3001   4
Fuji Velocity        BaFBr:Eu         Not disclosed             42.8 × 42.8      200               2140 × 2140   1

2.4. Application of the QC protocol

The detector measurements were integrated into the standard QC tests as follows. The
first step was to assess x-ray tube and generator performance (including radiation output
reproducibility and tube voltage accuracy) against standard protocols (IPEM 2005). The
detector measurements were then performed: air kerma at the detector at 70 kV and 1 mm Cu
added filtration was measured as a function of the tube current–time product (mA s), and six
flood images (i.e. uniform exposure) were then acquired. This was followed by a c-d image, a
dark noise image, two images for the image retention, one image for the presampling MTF and
finally an image for limiting spatial resolution, giving 12 images in total (10 for the quantitative
measurements). The tube and generator tests were then completed with measurements of half
value layer, alignment of light beam to radiation field, focal spot dimension, dose–area product
meter calibration and leakage.
Some additional points regarding the measurements are now given. On completion of
the tube and generator QC tests, the focus–detector distance (FDD) used for the detector
assessments was set; 120 cm was used for both the Shimadzu Dart and Siemens Polydoros
(Siemens GmbH, Erlangen, Germany) systems. The collimation was then opened so that the
x-ray beam covered the x-ray detector, and the standard IPEM spectrum of 70 kV and 1 mm
added Cu filtration was set. Note that this geometry fully covers the detector compared to
the collimated area of 16 cm × 16 cm specified in the IEC document (IEC 2003) and hence
more scattered radiation is likely to be present in the measurement. Air kerma was then
measured for a typical range of mA s stations (including any ionization chamber or solid-state
detector calibration factor). The mA s required for the target values of air kerma at the detector
(K) was then calculated. A typical range of values was covered (0.5, 1.0, 2.5, 5 and 10 or
20 μGy), with the 2.5 μGy value representing a nominal operating air kerma; this value also
allowed a comparison against a nominal 400 speed film/screen system. A calibrated Unfors
Xi multimeter was used for the air kerma measurements (Unfors, Sweden).
The detector was then centred in the x-ray beam using a consistent orientation and a set of
six flood images (i.e. uniform exposures) was acquired, with one image at each target K value.
Although they are simply uniformly exposed images, flood images are one of the most important means
of assessing detector performance. The STP (also known as the detector response), detector
uniformity, detector artefacts, the variance image (Maidment et al 2003) and NNPS were all
estimated from these images.

Figure 1. Definition of the attenuation regions and ROIs used for the lag measurement.

Low contrast sensitivity was assessed using the Leeds TO20 c-d test object (Leeds Test
Objects, UK). This was placed at the detector input plane and one image was acquired using
the 70 kV, 1 mm Cu beam and 2.5 μGy at the detector.
Dark noise was evaluated from an image acquired with no exposure. For the portable or
removable detector units the x-ray beam was directed away from the detector; for fixed units
the detector was shielded.
Image retention was evaluated using a radio-opaque square (1 mm thick lead) of
dimensions 5 × 5 cm placed at the detector input. An image of this square was acquired
using the 10 μGy beam; after a delay of 60 s, a further dark image was acquired. Image
retention was calculated from mean pixel values (PV) taken from these two images using
a region of interest (ROI) of approximately 100 × 100 pixels (IPEM 2010), positioned as
indicated in figure 1. Lag was calculated using the formula:

$$\text{lag} = \frac{\text{mean(ROI 4)} - \text{mean(ROI 3)}}{\text{mean(ROI 2)} - \text{mean(ROI 1)}} \times 100\%. \qquad (1)$$
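As an illustration only, equation (1) reduces to a few lines of code once the four ROI means are available; this is a minimal sketch rather than the analysis software used in this work, and the mapping of ROIs 1–4 to the exposed and delayed images is an assumption intended to mirror figure 1.

```python
import numpy as np

def roi_mean(image, row, col, size=100):
    """Mean pixel value in a size x size ROI whose top-left corner is (row, col)."""
    return float(np.mean(image[row:row + size, col:col + size]))

def lag_percent(exposed_image, delayed_image, rois):
    """Equation (1). `rois` maps 'ROI1'..'ROI4' to (row, col) corners.

    Assumption for this sketch: ROI1 and ROI2 are taken from the exposed image
    (outside and inside the attenuated region) and ROI3 and ROI4 from the
    delayed dark image, at the positions indicated in figure 1.
    """
    m1 = roi_mean(exposed_image, *rois["ROI1"])
    m2 = roi_mean(exposed_image, *rois["ROI2"])
    m3 = roi_mean(delayed_image, *rois["ROI3"])
    m4 = roi_mean(delayed_image, *rois["ROI4"])
    return (m4 - m3) / (m2 - m1) * 100.0
```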

Two measurements of detector resolution were then made. Detector presampling MTF
was measured using an edge placed at the detector input plane, while a line pair test object,
also placed at the detector input plane and angled at ∼45◦ to the pixel matrix, was used to
measure the limiting spatial resolution.
For the Canon_CsI units, the 12 images required for the detector evaluation were
transferred from the acquisition station using a Universal Serial Bus (USB) flash memory
drive. The c-d and line pair test object images were then written to compact disc (CD)
and imported into the hospital picture archiving and communication system (PACS), a step
which enabled the c-d and limiting spatial resolution to be scored from the hospital diagnostic
displays (described later). The ten images required for quantitative data analysis of detector
performance (flood, lag, dark noise and MTF edge images) were then transferred to a separate
PC for the analysis, using a USB flash memory drive or CD. Finally, all 12 images were
archived locally in the Medical Physics department for future reference. Similar methods
were used when transferring images acquired for the other detector systems.
984 N W Marshall et al

2.5. Quantitative data analysis


Separate software is required for the quantitative calculations; analysis was performed using
the freely available ‘OBJ_IQ’ software package (NHSBSP 2009) developed using the IDL
programming language (ITT, Boulder, CO, USA), although other software could be used
such as the ‘IQworks’ system (http://wiki.iqworks.org/Main/WebHome). The first step in the
quantitative data analysis was to generate the STP by plotting PV, measured using a 1 × 1 cm
ROI placed at the image centre, against air kerma at the detector. The inverse of this function
was used to linearize the images before calculating all objective image quality parameters. The
PV taken from linearized flood images gives the air kerma at which the image was acquired,
offering a valuable check on the STP measurements themselves and on the image type used, i.e.
that a consistent LUT was used.
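As a minimal sketch of this step (not the OBJ_IQ or IQWorks code), the logarithmic STP form PV = B·ln(K) + A reported later for the Canon_CsI detectors can be fitted and inverted as follows; a linear fit would be substituted for detectors with a linear response, and the numerical values are illustrative only.

```python
import numpy as np

def fit_log_stp(air_kerma_uGy, mean_pv):
    """Fit PV = B*ln(K) + A to measured (K, PV) pairs; returns (A, B)."""
    B, A = np.polyfit(np.log(air_kerma_uGy), mean_pv, 1)
    return A, B

def linearize(image, A, B):
    """Invert the STP so that pixel values are expressed as air kerma (uGy)."""
    return np.exp((image.astype(np.float64) - A) / B)

# Example with values of the kind reported for the Canon_CsI detectors
# (A ~ 34 754, B ~ 5927); the array below is illustrative only.
K = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
PV = 5927.0 * np.log(K) + 34754.0        # mean ROI pixel values from flood images
A, B = fit_log_stp(K, PV)
# A linearized flood image should read back its acquisition air kerma,
# which is the consistency check on the STP described in the text.
```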
Uniformity was estimated from the 10 μGy flood image. A simple (manual) uniformity
measure, as described in IPEM Report 32 (Part VII) (IPEM 2010), was calculated from five
ROIs. One ROI with approximate dimension 1 cm2 was placed at the image centre and further
ROIs were placed at the four corners of the detector; uniformity error was then calculated as
maximum deviation from the mean of the five ROIs. A more complete assessment of detector
uniformity was also made using a software routine that trimmed 1 cm from the edge of the
image, moved a 1 cm2 ROI across the trimmed image and calculated the mean PV for
each ROI. This full uniformity method gave approximately 1600 ROIs in the analysis. Three
measures of full uniformity were estimated from these ROIs: coefficient of variation (cov),
maximum deviation from the mean and maximum deviation from the centre ROI. Uniformity
was also assessed as a function of K for one Canon_CsI detector, covering a range from 0.53
to 19.9 μGy using the full uniformity method.
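The full uniformity measure might be implemented along the following lines; the non-overlapping ROI tiling and the square pixel pitch are assumptions for this sketch (the exact ROI stepping of the routine described above is not specified), and the function name is illustrative.

```python
import numpy as np

def full_uniformity(image, pixel_pitch_mm, roi_mm=10.0, trim_mm=10.0):
    """Tile non-overlapping roi_mm x roi_mm ROIs over the trimmed image and
    return the three full-uniformity measures described in the text:
    cov, maximum deviation from the mean and maximum deviation from the
    centre ROI (all in per cent)."""
    roi = int(round(roi_mm / pixel_pitch_mm))
    trim = int(round(trim_mm / pixel_pitch_mm))
    img = image[trim:-trim, trim:-trim].astype(np.float64)

    means = []
    for r in range(0, img.shape[0] - roi + 1, roi):
        for c in range(0, img.shape[1] - roi + 1, roi):
            means.append(img[r:r + roi, c:c + roi].mean())
    means = np.array(means)   # roughly 1600 ROIs for the detector sizes used here

    centre = img[img.shape[0] // 2 - roi // 2: img.shape[0] // 2 + roi // 2,
                 img.shape[1] // 2 - roi // 2: img.shape[1] // 2 + roi // 2].mean()

    cov = 100.0 * means.std(ddof=1) / means.mean()
    max_dev_mean = 100.0 * np.max(np.abs(means - means.mean())) / means.mean()
    max_dev_centre = 100.0 * np.max(np.abs(means - centre)) / centre
    return cov, max_dev_mean, max_dev_centre
```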
Variance was examined both as a function of area via the variance image (Maidment et al
2003) and as a function of K for a region at the detector centre. The variance image was
calculated using a method described elsewhere (Marshall 2006a) for the 10 μGy flood image;
a 2 × 2 mm ROI was used for the data presented here. The variance image was examined for
noise uniformity, as this method can uncover serious detector faults such as detector blurring
or delamination of the x-ray convertor (Marshall 2006a, 2006b, 2007).
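A variance image of the kind described can be sketched as below, assuming non-overlapping 2 × 2 mm blocks of a linearized flood image; the published method (Marshall 2006a) may differ in detail.

```python
import numpy as np

def variance_image(flood, pixel_pitch_mm, block_mm=2.0):
    """Map of local pixel variance computed in non-overlapping blocks of a
    linearized flood image; blurring or converter delamination shows up as
    regions of anomalous local variance."""
    n = int(round(block_mm / pixel_pitch_mm))
    rows = flood.shape[0] // n
    cols = flood.shape[1] // n
    trimmed = flood[:rows * n, :cols * n].astype(np.float64)
    blocks = trimmed.reshape(rows, n, cols, n).swapaxes(1, 2)
    return blocks.var(axis=(2, 3), ddof=1)
```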
During STP measurement, the standard deviation of the PV data within the ROI was also
recorded and plotted against K using log–log axes. If x-ray (Poisson) noise were the only
noise source present, then the power coefficient (b) in equation (2) would be 0.5 (Rimkus and
Bailey 1983):
$$\sigma = a \cdot K^{b}. \qquad (2)$$
Noise sources present in the detector other than quantum noise, such as electronic noise
or structure noise, will change b from 0.5. Electronic noise sources (additive) dominate at
low air kerma values while structure noise (multiplicative) can dominate at high detector air
kerma values (Nishikawa and Yaffe 1990, Evans et al 2002, Mackenzie and Honey 2007); the
extent to which they influence b will also depend on the detector air kerma range studied. The
area of the ROI will also affect the results; a large ROI will include a greater proportion of
large-area non-uniformity.
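The exponent b in equation (2) follows from a straight-line fit in log–log space; a minimal sketch is given below, with the numerical values illustrative only (they resemble the fit shown later in figure 4).

```python
import numpy as np

def fit_noise_power_law(air_kerma_uGy, linearized_sigma):
    """Fit sigma = a * K**b by linear regression of ln(sigma) on ln(K);
    returns (a, b). b close to 0.5 indicates quantum-noise-limited behaviour."""
    b, ln_a = np.polyfit(np.log(air_kerma_uGy), np.log(linearized_sigma), 1)
    return np.exp(ln_a), b

# Illustrative values only (sigma = 0.02 * K**0.46, as in figure 4):
K = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 20.0])
sigma = 0.02 * K ** 0.46
a, b = fit_noise_power_law(K, sigma)   # recovers a ~ 0.02, b ~ 0.46
```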
MTF data for all the Canon detectors were acquired using a 1 mm thick Cu edge; a 1 mm
tungsten edge was used to acquire the MTF data for the remaining detectors. The edge was
placed at the detector input plane and orientated by hand to give an angle of approximately 3◦
between the edge and the pixel matrix (a range of 1.5–5◦ is recommended (IPEM 2010)). The
edge was placed at the detector centre for non-tiled systems and at the centre of a detector tile
for tiled detectors (the Trixell unit). A strict MTF procedure would isolate a section of the edge
using collimation (Carton et al 2005) and the edge would also be shifted between acquisitions
Quality Control measurements for digital x-ray detectors 985

so the extracted sections used in the MTF calculation come from the same detector region for
both the horizontal and vertical MTF estimates. This can be performed if desired; however the
data presented in this work (what amounts to a QC MTF) were taken with just one edge image
with no additional collimation. Acquisitions were made at 70 kV, 1 mm Cu beam and 10 μGy
at the detector input plane with the antiscatter grid removed. A 5 × 5 cm region containing
the edge was re-projected using the method described by Samei et al (1998). The sub-pixel
factor used for rebinning the edge spread function (ESF) was 0.1 (Samei et al 1998) and the
only smoothing was a median filter of length 5 pixels applied to the ESF before differentiation
(Marshall 2006a).
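For illustration, the QC MTF described here can be sketched as follows. This is a simplified reading of the Samei et al (1998) re-projection method with the 0.1 sub-pixel rebinning and length-5 median filter quoted above; it is not the OBJ_IQ implementation, and the edge-finding step in particular is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def edge_mtf(edge_roi, pixel_pitch_mm, subpixel=0.1):
    """Presampling MTF from a slightly angled, near-vertical edge image."""
    img = edge_roi.astype(np.float64)
    rows, cols = img.shape

    # 1. Locate the 50% crossing of the edge in each row and fit a straight line.
    half = 0.5 * (img.min(axis=1) + img.max(axis=1))
    edge_col = np.array([np.argmin(np.abs(img[r] - half[r])) for r in range(rows)])
    slope, intercept = np.polyfit(np.arange(rows), edge_col.astype(float), 1)

    # 2. Perpendicular distance of every pixel from the fitted edge (in pixels).
    rr, cc = np.mgrid[0:rows, 0:cols]
    dist = (cc - (slope * rr + intercept)) / np.sqrt(1.0 + slope ** 2)

    # 3. Re-bin into an oversampled ESF with bin width `subpixel` pixels.
    idx = np.round(dist / subpixel).astype(int)
    idx -= idx.min()
    counts = np.bincount(idx.ravel())
    sums = np.bincount(idx.ravel(), weights=img.ravel())
    filled = counts > 0
    centres = np.arange(counts.size)
    esf = np.interp(centres, centres[filled], sums[filled] / counts[filled])

    # 4. Median smoothing (length 5) before differentiation, giving the LSF.
    lsf = np.diff(median_filter(esf, size=5))
    lsf /= lsf.sum()                       # so that MTF(0) = 1

    # 5. MTF = |FFT(LSF)|, frequency axis in cycles per mm.
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=subpixel * pixel_pitch_mm)
    return freq, mtf
```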
For the NNPS measurements, the following parameters were used. A 1088 × 1088 pixel
region from the centre of the 2.5 μGy flood image was selected and the pixel values in this
region were linearized using the inverse of the STP curve. The ROIs of size 128 × 128 pixels
were then extracted from this image section using a half overlapping pattern (IEC 2003), giving
a total of 128 ROIs from each flood image. A 2D second-order polynomial fit was subtracted from
each ROI in order to reduce the influence of low-frequency image components on the NNPS,
such as the heel effect (Samei and Flynn 2003). The squared modulus of the 2D fast Fourier
transform of each ROI was then added to the power spectrum ensemble. The NNPS was
calculated by dividing the ensemble by the mean value of the linearized 1088 × 1088 pixel
region squared, i.e. by (air kerma)^2.
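A sketch of this NNPS calculation (including the axis sectioning described next) is given below. The normalization by pixel area and ROI size, in addition to the division by (air kerma)², is assumed here in the usual IEC style; the exact OBJ_IQ implementation may differ, and the function name is illustrative.

```python
import numpy as np

def nnps_1d(flood_kerma, pixel_pitch_mm, roi=128):
    """1D NNPS along the horizontal (u) axis from a linearized flood image
    whose pixel values are expressed as air kerma."""
    img = flood_kerma.astype(np.float64)
    n0, n1 = img.shape
    mean_k = img.mean()

    # Design matrix for the 2D second-order polynomial detrend of each ROI.
    y, x = np.mgrid[0:roi, 0:roi]
    A = np.column_stack([np.ones(roi * roi), x.ravel(), y.ravel(),
                         (x * x).ravel(), (x * y).ravel(), (y * y).ravel()])

    spectrum = np.zeros((roi, roi))
    count = 0
    for r in range(0, n0 - roi + 1, roi // 2):        # half-overlapping ROIs
        for c in range(0, n1 - roi + 1, roi // 2):
            patch = img[r:r + roi, c:c + roi]
            coeff, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
            detrended = patch - (A @ coeff).reshape(roi, roi)
            spectrum += np.abs(np.fft.fft2(detrended)) ** 2
            count += 1

    nps = spectrum / count * (pixel_pitch_mm ** 2) / (roi * roi)
    nnps = nps / mean_k ** 2                          # normalize by (air kerma)^2

    # Horizontal section: average the five frequency rows either side of the
    # u axis, excluding the on-axis (v = 0) row itself, as described in the text.
    rows = list(range(1, 6)) + list(range(roi - 5, roi))
    nnps_u = nnps[rows, :].mean(axis=0)[:roi // 2]
    freq = np.fft.fftfreq(roi, d=pixel_pitch_mm)[:roi // 2]
    return freq, nnps_u
```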
The final NNPS spectra were sectioned along the horizontal (u) and vertical (v) spatial
frequency axes from five spatial frequency bins on either side of the respective axis; on-axis
noise power (at u = 0 and v = 0) is excluded from the NNPS estimate in this work. NNPS
spectra were averaged into 0.25 mm−1 spatial frequency bins giving an uncertainty of 1.2%
(Dobbins et al 2006). DQE(u) was calculated using equation (3) (Dainty and Shaw 1974):
$$\mathrm{DQE}(u) = \frac{\mathrm{MTF}^{2}(u)}{q_{0}\,\mathrm{NNPS}(u)} \qquad (3)$$
where q0 is the photon fluence incident on the detector. The data of Cranley et al (1997) give 34 042
photons mm−2 μGy−1 for this beam quality, and hence q0 is equal to this value
multiplied by the air kerma (K) used for the acquisition.
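Equation (3) then reduces to a few lines; interpolating the MTF onto the NNPS frequency bins is an implementation choice made for this sketch rather than part of the protocol.

```python
import numpy as np

# Photon fluence per unit air kerma for the 70 kV + 1 mm Cu beam,
# as quoted in the text from Cranley et al (1997).
PHOTONS_PER_MM2_PER_UGY = 34042.0

def dqe(freq_mtf, mtf, freq_nnps, nnps, air_kerma_uGy):
    """Equation (3): DQE(u) = MTF^2(u) / (q0 * NNPS(u))."""
    q0 = PHOTONS_PER_MM2_PER_UGY * air_kerma_uGy      # photons per mm^2
    mtf_i = np.interp(freq_nnps, freq_mtf, mtf)       # MTF on the NNPS frequency bins
    return mtf_i ** 2 / (q0 * nnps)
```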
A brief note can be made on the geometry used for QC measurements. Not all detectors
can be removed from the bucky and hence some units will have to be assessed with the x-ray
table and possibly the antiscatter grid in the x-ray beam. This makes estimation of K difficult,
while the table may increase the amount of scattered radiation and structured noise in the
measurements. To examine the influence of the x-ray table on the MTF, the edge was placed
on the table at a distance of 8 cm above the Trixell detector cover and compared with the edge
placed directly on the detector cover.

2.6. Limiting spatial resolution, threshold c-d data and minimum performance standard
The TO20 c-d images from the digital detectors were scored from 5 MP Barco LCD monitors;
the luminance response followed the DICOM 3.14 greyscale calibration curve (AAPM 2005).
Before scoring the results, window level and width were adjusted by the observers to maximize
visibility of c-d details and viewing conditions adjusted to give low ambient light. A viewing
distance of approximately 40 cm was used, although observers could vary the viewing distance
and use software magnification. For a given disc diameter in the test object, each observer
was asked to count to the last disc that they considered visible; observers were allowed to
specify a score of a half for discs considered to be present but not fully imaged (a patchy or
broken perimeter, for example). The contrast of the last disc seen for each diameter (threshold
contrast) was found using information supplied with the TO20 test object. One image was
scored by two experienced observers, giving an estimated uncertainty of 19% in threshold
contrast (Cohen et al 1984).
For a comparison against the DR systems, threshold c-d data for four 400 speed S/F
systems were acquired. A spectrum of 75 kV and 1.5 mm Cu was used for a Kodak Lanex
Regular screen/Agfa Curix film S/F combination, while the remaining S/F systems (two
Kodak Lanex Regular S/F systems and one Agfa Curix S/F system) were imaged at 70 kV
and 1 mm Cu. The average air kerma at the cassette was 2.56 μGy and the average optical
density (OD) was 1.52. Two experienced observers evaluated the two films acquired for each
S/F system, giving an estimated uncertainty of 15% on the c-d data (Cohen et al 1984).
One further set of c-d data was obtained for a comparison against the results found for the
DR detectors and the S/F systems. Routinely acquired c-d data were averaged for 13 Philips
Compano CR readers, also in use at the NHS Trust hospital. These data were acquired with
the TO20 c-d test object at 70 kV, 1 mm Cu, 4.07 μGy average detector air kerma, using 35 ×
43 cm CR cassettes. For the purposes of comparison, the contrasts have been scaled to 2.50
μGy (i.e. by a factor of √(4.07/2.50)). This assumes that quantum noise is dominant in the
images at these exposure values, an assumption that was confirmed by measuring the threshold contrast
response at 0.4, 4 and 40 μGy. These data allowed the comparison of c-d results for the DR
detectors with a typical hospital CR system.
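Under the stated quantum-noise assumption, threshold contrast scales as the inverse square root of the detector air kerma, which gives the scaling factor applied here:

$$C_T(K_2) = C_T(K_1)\sqrt{\frac{K_1}{K_2}}, \qquad C_T(2.50\ \mu\text{Gy}) = C_T(4.07\ \mu\text{Gy})\sqrt{\frac{4.07}{2.50}} \approx 1.28\,C_T(4.07\ \mu\text{Gy}).$$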

3. Results and discussion

The time taken to acquire a full QC data set was approximately 70 min, including full tube and
generator functionality, alignment of light beam to radiation field, focal spot size measurement,
dose–area product meter calibration and a leakage assessment. Images specific to the detector
tests were acquired in approximately 20 min. A further 45 min was required to find and
archive the images and perform the image quality scoring and quantitative analysis.

3.1. Signal transfer property

Figure 2 plots STP data for five detectors; the two Canon detectors were tested to a
maximum of 20 μGy, while the three other systems were tested up to 10 μGy.
Four detectors applied a logarithmic response, although there is a significant difference
in the curves and pixel values, highlighting the wide range of STP types and curves that can
be applied by the different manufacturers. PV data were linearized by solving the STP curve
(Mackenzie 2008), a process that removes the offset and gradient (gain) applied by the system
for the chosen LUT and transforms the pixel values to a universal scale. Although a simple
measurement, the STP curve is central to the successful testing of digital detector systems
using quantitative measurements. Good reproducibility was seen for the fitted STP curves of
the six Canon_CsI detectors. The curve for this system was PV = B·ln(K) + A; the mean
value for the A coefficient was 34 754 (min = 34 388; max = 35 155; cov = 0.9%), while the
mean value for the B coefficient was 5927 (min = 5768; max = 6027; cov = 1.9%). Figure 2
shows that most detectors in this study can produce an output at the 0.5 μGy target air kerma,
with the exception of the Canon_GOS unit, where the lowest air kerma used was 2.59 μGy.
This reduced air kerma range for the Canon_GOS curve did not give unusual results when
calculating the ESF (no negative pixel values, for example) or the MTF curve. Given these
results, the recommended K range for assessing the STP is from 0.5 to 20 μGy.

Figure 2. STP curves (pixel value versus air kerma at the detector, μGy) measured for a 70 kV
spectrum with 1 mm Cu filtration. The fitted curves shown in the figure legend are: Canon_GOS,
PV = 1407.1 ln(K) − 1199.3; Trixell Pixium 4600, PV = 398.9 ln(K) + 1122.8; GE Definium 8000,
PV = 200.1 K + 13.658; Fuji Velocity, PV = 142.4 ln(K) + 280.9; Canon_CsI, PV = 6116.9 ln(K) + 34415.

Table 2. Uniformity calculated using three different methods for five detectors (Canon_CsI,
Canon_GOS, Trixell Pixium 4600, GE Revolution and the Fuji Velocity). The simple (manual)
method uses five manually placed ROIs to estimate maximum deviation on the mean; the remaining
three measures are calculated using a 1 cm × 1 cm ROI applied to the entire detector (trimmed by
1 cm around the edge).

Parameter  Simple (manual) uniformity (%)  Coefficient of variation (cov) (%)  Maximum deviation from centre ROI (%)  Maximum deviation from mean (%)
Mean       6.1                             3.2                                 12                                     10
Min        3.5                             1.6                                 4.8                                    4.6
Max        10                              5.5                                 19                                     17

3.2. Detector uniformity


Table 2 compares simple (manual) uniformity against the three full uniformity measures
calculated for an ROI moved across the entire image for the five different detectors in this
study. The cov method gave closest agreement with the simple 5 ROI uniformity method; the
maximum deviation on the mean or image centre methods gave higher measures of detector
uniformity. Future work may try to quantify image artefacts; however currently the simplest
means of identifying artefacts is to view the image at a narrow window width and examine the
image by eye, noting the type and position of artefacts.
An example of this is shown in figure 3(a), which presents a uniformity image of positive
polarity for a Canon_CsI unit. The image shows two groups of artefacts, each artefact having
a similar appearance to a focal spot. One set is negative (dark), while the other is positive
(pale); the artefacts appear to be rotated by 180◦ . This was investigated with the service
engineer and it was found that small Pb shavings from the collimator were present in the x-ray
field at the exit of the x-ray tube; these small Pb particles had been present when the engineer
performed the flat field correction procedure. Consequently they were included in the flat
field correction image as a positive correction and hence applied to all subsequent images.
When the QC uniformity image was acquired, the Pb shavings were still present and produced
a familiar negative (physical) artefact while the positive artefacts were added by the system.
By chance, the detector was being used with a 180◦ rotation compared to the geometry used
by the engineer for the flat field calibration. As a remedial action, the engineer opened and
cleaned the entire light beam diaphragm assembly and then performed the flat field procedure
again. A similar artefact was reported by Honey and Mackenzie (2009).

Figure 3. (a) Uniformity image with artefacts; (b) variance image showing reasonable noise
uniformity; (c) dark noise image showing strong line-correlated noise for a Canon_CsI detector.

Finally, full uniformity was assessed as a function of K for one Canon_CsI detector,
covering a range from 0.53 to 19.9 μGy using the cov method. The cov varied from 2.18% to
2.20% as K was changed, indicating little dependence of uniformity on K for the Canon_CsI
detector and that the choice of air kerma for this measurement is not critical. This
is a test of the detector flat field correction algorithm and its ability to remove large area
non-uniformity (x-ray converter structure noise and x-ray heel effect) from the image; while
not recommended as a routine test, it is useful to examine this behaviour at the commissioning
stage.
In order to briefly examine whether similar behaviour was seen for systems without flat
field corrections, additional uniformity measurements were made for two CR systems as a
function of air kerma at the cassette. The systems studied were the Agfa ADC plus (Agfa-
Gevaert Group, Mortsel, Belgium) and the Kodak CR800 (Carestream Health Inc., Rochester
NY, USA). An air kerma range of 0.83 to 30.6 μGy was covered for the Agfa system, while the
range for the Kodak CR800 was 1.49 to 31.0 μGy. The focus–cassette distance was 100 cm
and 35 × 43 cm cassettes were used. Full image uniformity was assessed, again using the cov
measurement. The average cov for the Agfa was 7.9% (minimum = 7.7%; maximum = 8.0%),
while the average cov for the Kodak CR800 was 11.5% (minimum = 11.2%, maximum =
12.2%). These results show that uniformity error is higher for the CR detectors, as might
be expected for systems where no flat field corrections are applied. However, there is little
change in uniformity error for the exposure range studied. This may be an indication that the
heel effect is the dominant factor with regard to uniformity at these air kerma levels for the
two CR detectors studied.

3.3. Variance
Figure 3(b) shows an example variance image for one of the Canon_CsI detectors; noise
(quantum mottle) is expected to be uniform across the image within limits imposed by the
heel effect. No x-ray converter delamination or uncorrected dead pixel lines were seen for
the detectors in this work. We currently do not characterize the uniformity of the variance
image using cov (for example), although future work may address this. Fitting equation (2)
to the standard deviation measured at the image centre gave an average value of 0.46 for b for
the six Canon_CsI detectors, with a range of 0.44 to 0.49 (4.1% cov), while for the five
different detectors b ranged from 0.44 to 0.61, with an average value of 0.54. Figure 4
plots standard deviation against K for one of the Canon_CsI systems, along with the curve
expected for a system with pure quantum noise. Monitoring b routinely
and comparing against a baseline offers a quick and easy check on the noise performance of
the detector as a function of exposure (Marshall 2006b, IPEM 2010). We have not examined
stability or changes in b over time and hence cannot give figures for long-term reproducibility
or remedial levels for this parameter.

Figure 4. Linearized standard deviation plotted against detector air kerma for a Canon_CsI
detector; the fitted power law is σ = 0.02 K^0.46, shown together with the behaviour expected for
pure quantum noise (b = 0.5).

3.4. Dark noise and image retention


Dark noise, assessed using a simple PV measurement, is a system-specific measure. Earlier
advice (IPEM 2005) suggested that the PV could be linearized via the STP to give a more
universal measure of dark signal. However, the STP for systems with a logarithmic response
often has a large gradient at low signal levels and this can lead to considerable uncertainty
in the linearized PV. It is now recommended that the PV is left as a ‘raw’ PV and used as a
constancy check (IPEM 2010). However, it should be noted that a change in the system STP
(e.g. a change of LUT) could produce changes in the dark noise values that are not consistent
with any significant change in the dark signal itself. The dark signal can vary considerably
between similar detectors; for the six Canon_CsI units, the ‘raw’ (unlinearized) mean PV for
the dark image was 6827 (minimum = 5473; maximum = 10 187; standard deviation = 1758;
cov = 25%). These are unlinearized values and hence not comparable with other detectors due
to differences in the STP, etc. Figure 3(c) presents a dark image for one Canon_CsI detector
with strong line-correlated noise (Rowlands and Yorkston 2000), possibly caused by charge
injection when switching the array for readout. This line noise (potentially disturbing to the
image viewer) was not evident in images acquired at typical K values (∼2.5 μGy). Dark noise
for the remaining Canon_CsI detectors was more uniform (white). Finally, image retention
data were measured for seven Canon detectors in this study; no lag was found for the seven
detectors (average lag = 0.0; minimum = 0.0; maximum = 0.0; cov = 0.0%).

Table 3. Spatial frequency for MTF50%, ratio of the horizontal to the vertical resolution at MTF50%,
limiting spatial resolution (bar pattern) and MTF at the limiting spatial resolution. These data are
for six Canon_CsI detectors.

Parameter           Spatial frequency for MTF50% (mm−1)  Ratio of horizontal to vertical resolution at MTF50%  Limiting spatial resolution (mm−1)  MTF at limiting spatial resolution
Mean                1.29                                 1.00                                                  3.92                                0.09
Standard deviation  0.05                                 0.05                                                  0.18                                0.02
Min                 1.22                                 0.95                                                  3.55                                0.06
Max                 1.35                                 1.05                                                  3.99                                0.11

Table 4. Spatial frequency for MTF50% and ratio of the horizontal to the vertical resolution at
MTF50% for five detectors.

Detector             Spatial frequency for MTF50% (mm−1)  Ratio of horizontal to vertical resolution at MTF50%
Canon CXDI-50C       1.29                                 1.05
Canon CXDI-50G       1.92                                 0.94
GE Revolution        1.35                                 1.00
Trixell Pixium 4600  1.27                                 1.00
Fuji Velocity        1.47                                 1.03

3.5. Modulation transfer function and limiting spatial resolution


Reproducibility of the edge angle was good, given the manual positioning of the edge at the
detector input plane (average edge angle for the six Canon_CsI detectors was 4.1◦ ). Good
isotropy was found between horizontal and vertical direction presampling MTF results for all
six Canon_CsI detectors studied. Isotropy was quantified by taking the ratio of the horizontal
to the vertical spatial frequencies at the point where the MTF had fallen to 0.5 (MTF50%);
this ratio is 1.00 (table 3), averaged for the six Canon_CsI detectors. Directional blurring
can occur in flat panel detectors, resulting in a non-isotropic MTF (Marshall 2006b); it is
reasonable to require MTF isotropy for current flat panel detectors. Figure 5(a) plots average
MTF ± 1 standard deviation, in the horizontal direction for six Canon_CsI detectors. The
average spatial frequency at MTF50% was 1.29 mm−1 ± 0.05 (3.9% cov) indicating good
reproducibility across these six detectors (table 3). Figure 5(b) shows similar MTFs for the
group of five different detectors (three CsI detectors and a BaFBr:Eu CR detector). The
MTF for the Canon_GOS detector is significantly greater than that found for the other four
units, with a spatial frequency of 1.92 mm−1 for the MTF50% point (table 4). A previous
study (Marshall 2007) found good short-term reproducibility for the MTF method used in
the current work (1% cov for five edge images acquired over a period of 5 min, with manual
repositioning of the edge between acquisitions).

Figure 5. (a) Average presampling MTF ± 1 standard deviation measured in the horizontal
direction for six Canon_CsI detectors; (b) presampling MTF for five digital detectors. Acquired
at 70 kV, 1 mm Cu and 10 μGy at the detector with the MTF edge placed on the detector cover.

Figure 6. Presampling MTF measured for the Trixell detector in the horizontal direction with the
edge placed on the detector and with the edge placed on the table, 8 cm above the detector.

With regard to detector geometry, the MTF results for the Trixell detector measured with
the edge placed at the detector and also on the table (8 cm above the detector) are plotted in
figure 6. The MTF measured with the edge on the table was approximately 9% lower, averaged
from 0 to 4 mm−1; possible reasons for this include some influence of the focal
spot dimension (the edge is closer to the focus) and increased scattered radiation from the table. A
consistent geometry must be used for the routine QC results in order to produce reproducible
results between QC visits.
Table 3 also compares MTF against limiting spatial resolution measured using a line pair
test object, placed at the detector centre at an angle of 45◦ to the pixel matrix for six Canon_CsI
detectors. The average limiting spatial resolution for these detectors was 3.92 mm−1 ±
0.18 mm−1 (4.6% cov). The Nyquist frequency for a detector with a 0.16 mm pixel size
is 3.125 mm−1; however, placing the line pair test object at 45◦ to the pixel matrix changes the
effective sampling pitch by a factor of √2, leading to a maximum expected limiting spatial frequency of
4.42 mm−1; the value reported by the observers therefore lay somewhere between these figures.
The limiting spatial resolution for the Canon_GOS detector was 3.55 mm−1, despite the unit
having a superior presampling MTF. Table 3 also examines the point on the MTF where the
spatial frequency equals the limiting spatial resolution for the Canon_CsI units, showing an
average of 0.09 for the MTF at this point (cov was 22%). Not only does the presampling MTF
provide a fuller description of detector resolution, but there can also be difficulty in reading a line pair
test object in a digital image due to the presence of strong aliasing artefacts (Albert et al 2002).
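The two limits quoted in this paragraph follow directly from the 0.16 mm pixel pitch:

$$f_{\mathrm{Nyquist}} = \frac{1}{2 \times 0.16\ \text{mm}} = 3.125\ \text{mm}^{-1}, \qquad f_{\mathrm{Nyquist},\,45^{\circ}} = \frac{\sqrt{2}}{2 \times 0.16\ \text{mm}} \approx 4.42\ \text{mm}^{-1}.$$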
IPEM Report 91 (IPEM 2005) employs remedial and suspension levels to define points
where corrective action needs to be initiated in order to restore performance to some earlier
(normal) level. The remedial level for limiting spatial resolution deviation from baseline is
25%; for a Canon_CsI detector, this amounts to a reduction from 3.99 to 2.99 mm−1. If we
assume that limiting resolution remains approximately at the 10% point of the MTF, then this
would reduce the MTF from 0.19 to 0.10 at 2.99 mm−1, indicating an approximately 50%
reduction in the MTF. Obviously, this figure will change somewhat depending on the initial
limiting spatial resolution, the shape of the MTF and the type of blurring causing the resolution
loss. The remedial level for MTF in Report 32 (Part VII) is a change of ±0.2 mm−1 from the
baseline measurement (established as the spatial frequency at MTF50%). For the CsI detectors
in this study, this value is approximately 1.3 mm−1, and therefore the remedial value represents
a change of roughly 15% from the baseline measurement. As shown previously, reproducibility
of the MTF measurement across six similar detectors (3.9% cov) lies comfortably within this
expected change.

3.6. Normalized noise power spectrum


The average normalized noise power measured at a target air kerma of 2.5 μGy at the detector
(70 kV and 1 mm added Cu) for six Canon_CsI detectors is plotted in figure 7(a). Also shown
is ±1 standard deviation for the six measurements. The NNPS at 0.5 mm−1 is 1.42 × 10−5
mm2, while the cov for NNPS at 0.5 mm−1 is 6.5% and at 2 mm−1 is 6.7%, indicating excellent
reproducibility of the method and the QC setup. While the target K was 2.5 μGy, the average
for these six measurements was 2.57 ± 0.13 μGy (5.0% cov), indicating that the target value
could be set to within approximately 5% using a fixed distance of 120 cm from the source to
the detector. This value was 2.55 ± 0.10 μGy (4.1% cov) when considering all nine detectors
in the study. This is an important point when designing a QC protocol that is quick to perform and
applicable in an x-ray room.

Figure 7. (a) Average NNPS ± 1 standard deviation measured in the horizontal direction for six
Canon_CsI detectors; (b) NNPS for one Canon_CsI detector (2.52 μGy) compared against NNPS
for the Canon_GOS detector (2.50 μGy).

When comparing NNPS for the five detectors, two distinct groups are seen. Figure 7(b)
shows that the NNPS is similar for the CsI-based detectors at 0.5 mm−1, being approximately
1.49 × 10−5 mm2; the NNPS for the Gd2O2S-based Canon detector is 3.25 × 10−5 mm2,
a factor of approximately 2.2 greater than that for the CsI-based detectors. The NNPS for
the Fuji Velocity detector is 3.56 × 10−5 mm2. For a fixed detector air kerma (2.5 μGy),
NNPS will therefore vary considerably between detector technologies, leading to difficulties
in using NNPS to specify a minimum performance standard. However, NNPS is sensitive
to changes in detector performance (Marshall 2006b), and monitoring changes in NNPS at
some spatial frequency against a baseline is potentially useful for QC purposes. This is the
approach adopted in Report 32 (Part VII), where NNPS at 0.5 mm−1 and 2.0 mm−1 is tracked;
the remedial action level is baseline ±15%. Previous work found NNPS reproducibility of
4% for digital mammography detectors (Marshall 2007); this action level appears to be a
reasonable compromise, given the NNPS reproducibility of 6.5% found here.

3.7. Detective quantum efficiency


Figure 8(a) presents the average DQE in the horizontal direction for the six Canon_CsI
detectors, measured at 70 kV, 1 mm Cu added filtration and a target air kerma at the detector of
2.5 μGy. The DQE at 0.5 mm−1 was 0.60, while cov was found to change as a function of spatial
frequency, ranging from 2% at 0.88 mm−1 to 10% at 2.9 mm−1. DQE measurements include
uncertainties from the air kerma and STP measurement, a squared contribution from the MTF
error and the NNPS error. Figure 8(b) plots DQE for the group of five different detectors. The
CsI-based detectors in the figure have a DQE of approximately 0.56 at 0.5 mm−1, compared
to a figure of 0.31 for the Canon_GOS detector and 0.23 for the Fuji Velocity unit. Although
the Gd2O2S detector has a higher MTF than the CsI detectors, the lower NNPS for the CsI
units results in improved DQE. Conversely, while the Canon_GOS and Fuji Velocity have
similar NPS curves at 2.5 μGy, the higher MTF of the Canon_GOS leads to a higher DQE
(0.31 compared to 0.23 at 0.5 mm−1). We would therefore expect improved c-d performance
for the CsI units over the Gd2O2S system, for a fixed detector air kerma value.
Although IPEM Report 32 (Part VII) discusses DQE and the utility of this measure
for comparing the performance between detectors, remedial levels are only given for the
variance image, MTF and NNPS, as this is considered sufficient to track changes in detector
performance. If desired, DQE could be used to track detector function over time; Marshall
(2007) found a cov of 6.9% for DQE over 17 months while Cunningham (2008) reported no
significant change in DQE over a 12 month period. Finally, manufacturers often present DQE
results in the product literature and this may offer physicists a basic check on a detector. For
example, the DQE measured at RQA5 is plotted in figure 8(b) for the Trixell 4600, showing
reasonable agreement (although there is an ∼3 keV difference in mean energy between the spectra
used).

Figure 8. (a) Average DQE ± 1 standard deviation measured in the horizontal direction for six
Canon_CsI detectors; 70 kV and 1 mm Cu added filtration; 2.5 μGy target air kerma at the detector.
(b) DQE for the group of five different detectors; 70 kV and 1 mm Cu added filtration; 2.5 μGy
target air kerma at the detector.

3.8. Measured contrast detail results


Figure 9(a) plots the average c-d performance for the six Canon_CsI detectors (square points)
and a dotted curve showing a second-order polynomial function fitted to the measured c-d
data in order to show the trends more clearly. Two curves marking ±1 standard deviation for
the curve fit results are also shown. The cov for these data is quite high, ranging from 11% to
35% at small detail sizes, with an average of 16%, illustrating one of the problems of using
subjective measures of image quality.
The c-d data for the Canon_GOS system measured at a detector air kerma of 2.50 μGy are
plotted in figure 9(b), along with the observer c-d curves for the Canon_CsI, Trixell Pixium
4600 and GE Revolution detectors at 2.50 μGy target air kerma (note that c-d data were
not available for the Fuji Velocity system). The CsI-based group of detectors have close c-d
performance (cov of 5.9% averaged over all disc diameters for the three c-d curves). Slightly
reduced c-d performance is seen for the Canon_GOS detector compared to the CsI units;
threshold contrast averaged for the CsI group of detectors is a factor of 0.82 lower than the
Canon_GOS contrast, on average. This figure is close to the uncertainty on the measurements;
however some reduction in threshold contrast performance is expected, given the difference
in DQE found. From the analysis of Aufrichtig (1999), threshold contrast for two different
systems should scale approximately as the ratio of the square root of the low-frequency DQE,
although this will depend on the spatial frequencies present in the task used to compare
systems. Using the DQE at 0.5 mm−1 to compare the systems, we expect threshold contrast
for the CsI group of detectors to be a factor of 0.74 (i.e. (0.31/0.56)^{1/2}) lower, indicating
reasonable agreement.

Figure 9. (a) Average c-d curve for six Canon_CsI detectors with uncertainty of ±1 standard
deviation (∼2.57 μGy air kerma); (b) c-d curves for Canon_GOS, Trixell 4600, Canon_CsI and
GE Revolution detectors (∼2.50 μGy air kerma) (error bars show 19% uncertainty).

Table 5. Threshold contrast (C_T) measured using the Leeds TO20 test object. ‘Average C_T for
S/F’ is taken from four S/F systems (∼2.56 μGy and OD = 1.52). ‘90% C_T for S/F’ is the 90th
percentile for the S/F results. ‘Average C_T for three CsI detectors’ is the average for the Canon
CXDI-50C, GE and Trixell CsI detectors (∼2.55 μGy).

Diameter (mm)  Average C_T for S/F (%)  90% C_T for S/F (%)  Average C_T for three CsI detectors (%)
11             0.70                     0.79                 0.46
7.9            0.76                     0.86                 0.53
5.6            0.87                     0.98                 0.63
4.0            1.03                     1.16                 0.78
2.8            1.30                     1.47                 1.02
2.0            1.70                     1.93                 1.36
1.4            2.37                     2.70                 1.93
1.0            3.40                     3.90                 2.77
0.70           5.24                     6.05                 4.24
0.50           8.29                     9.62                 6.59
0.35           14.2                     16.5                 11.00
0.25           24.9                     28.9                 18.61

The c-d results for the four S/F systems are presented in table 5 (average K was 2.56 μGy;
average OD was 1.52). Two sets of threshold contrast data are given: the average value,
indicating typical performance, and a fit to the 90th percentile point of threshold contrast for
the four results, indicating the curve that 90% of the surveyed S/F systems pass.

Figure 10. (a) Average observer c-d curve for the CsI-based detectors (2.54 μGy average), plotted
with the c-d curve for the Canon_GOS detector (2.52 μGy) and the 90% S/F c-d curve. Also
plotted is the average c-d curve for 13 Philips Compano CR systems, scaled to 2.5 μGy at the CR
cassette. Error bar shows 19% uncertainty. (b) C-d curves calculated using the NPWE model for
the Canon_GOS, Canon_CsI, Trixell Pixium 4600 and GE Revolution detectors, together with the
calculated curve for the Kodak Lanex screen with T-MAT G film.

Figure 10(a) plots the 90% S/F c-d curve along with the average curve for the CsI-based
detectors and the Canon_GOS curve; threshold contrast results for both of these detectors lie
below the 90% curve. The c-d curve averaged for 13 Philips Compano CR readers is also
plotted. The data in figure 10(a) indicate that the average c-d curve for the Philips Compano at
2.50 μGy is close to the 90% curve for the four film screen systems studied, while the average
CsI-based flat panel detector curve comfortably surpasses this level of performance. For newly
introduced imaging technologies, we would expect the imaging performance at least to match
the image quality of established or outgoing modalities. The data in figure 10(a) indicate
superior performance for the DR systems and that a typical CR system can just about match
the S/F results when air kerma at the detector is equal. Note that using c-d curves (threshold
contrast) to make these comparisons assumes that the c-d method is a good indicator of clinical
image quality.

3.9. Modelled c-d results


The aim of this section is to briefly examine whether trends seen in the measured c-d results
follow those expected from the measured MTF and NNPS. Threshold contrast was calculated
using a version of the non-prewhitening matched filter model with an eye response (NPWE)
(Aufrichtig 1999), as described previously (Marshall 2006a). A nominal viewing distance
of 35 cm was used for all calculations along with a fixed value of 3.0 for the threshold
SNR. Figure 10(b) plots the calculated c-d curves, indicating that the expected curves follow
the trends found for the observer c-d results. Virtually identical performance is seen for
the CsI group of detectors, while higher threshold contrasts are predicted for the
Canon_GOS detector, although the separation between the CsI and Canon_GOS curves is larger
for the modelled results (a contrast ratio of 0.74) than for the observer curves (0.82).
Finally, we can use data published by Monnin et al (2005) for the Kodak Lanex screen
with T-MAT G film to model the c-d curve for a typical medium-speed S/F detector. Monnin
et al’s (2005) data used here were measured for an RQC5 spectrum, at 1.4 OD and 2.57 μGy
entrance air kerma at the cassette. The calculated c-d curve for these conditions is plotted in
figure 10(b), showing slightly reduced performance compared to the Canon_GOS detector.
Comparing the calculated S/F and CsI detector c-d curves in figure 10(b), the threshold
contrast is 39% lower for the CsI detectors, implying that CsI detectors could be operated at
approximately 63% of the detector air kerma for equivalent threshold contrast performance
(i.e. 1.58 μGy compared to 2.5 μGy) (Aufrichtig 1999). The measured average threshold
contrast for the CsI detectors in table 5 is 41% lower than the average S/F threshold contrast
data, indicating that the CsI detectors could be operated at approximately 65% of the air kerma for
matched threshold contrast performance (i.e. 1.63 μGy compared to 2.5 μGy). Although only
approximate, these figures are close to the dose reduction suggested by Aufrichtig (1999) and
Rong et al (2001) for digital flat panel detectors compared to S/F technology.
With respect to action levels, IPEM Report 91 (IPEM 2005) does not define a remedial
level for threshold contrast detail detectability, but instead suggests that reference is made
to baseline curves; however no value is given for what might be considered a significant
change. The relationship between objective image quality parameters and threshold contrast is
complex, even for the well-defined c-d task (Aufrichtig 1999, Borasi et al 2006). Furthermore,
the objective image quality parameters are interlinked; for example a change in MTF will lead
to a change in one of the components of the total system NNPS (Nishikawa and Yaffe 1990,
Evans et al 2002).

3.10. Application to routine QC


This paper has described tests that are potentially quick and easy to perform and are easily
integrated into current protocols, e.g. the flood images could be acquired for other tests such
as detector dose index measurements. In fact some of these tests may not be additional
measurements but can replace existing tests. We have shown that the c-d and DQE results are
strongly linked; moreover the quantitative measurements are more sensitive and reproducible.
This also applies to the MTF, which is more reproducible and provides more information than
a high contrast spatial resolution bar pattern test. Localized areas of blurring can be identified
using fine meshes, as recommended in IPEM Report 91; however the variance image requires no
additional images and enables areas of blurring to be identified more easily.

4. Conclusions

This paper has described the application of a protocol for the QC assessment of digital flat panel
detectors which emphasizes a quantitative assessment of detector performance. The protocol
can be successfully applied using 12 images, requiring approximately 20 min to acquire the
image data. The protocol uses MTF and NNPS to track changes in detector resolution and
noise performance; reproducible results were found for both the MTF and NNPS for a group
of six identical detectors. This should help physicists to track changes in detector performance
more accurately than is possible with subjective threshold contrast measures. If desired, flat
panel detector performance can be benchmarked against the threshold contrast results derived
for medium-speed S/F systems presented in this work.

Acknowledgments

The authors thank Joris Nens for providing the CR uniformity images. They would also like to
thank Dimitra Myronaki, Taiwo Faleye, Clara Marquez and Dr Ruby Fong for their assistance
in gathering some of the data used in this work. Finally, the help of Dr Julie Horrocks is also
gratefully acknowledged.

References

AAPM (American Association of Physicists in Medicine Task Group 18) 2005 Assessment of display performance
for medical imaging systems AAPM On-line Report 03 (College Park, MD: AAPM)
Albert M, Beideck D J, Bakic P R and Maidment A D A 2002 Aliasing effects in digital images of line-pair phantoms
Med. Phys. 29 1716–8
Aufrichtig R 1999 Comparison of low contrast detectability between a digital amorphous silicon and a screen-film
based imaging system for thoracic radiography Med. Phys. 26 1349–58
Borasi G, Nitrosi A, Ferrari P and Tassoni D 2003 On site evaluation of three flat panel detectors for digital radiography
Med. Phys. 30 1719–31
Borasi G, Samei E, Bertolini M, Nitrosi A and Tassoni D 2006 Contrast-detail analysis of three flat panel detectors
for digital radiography Med. Phys. 33 1707–19
Carton A K, Vandenbroucke D, Struye L, Maidment A D, Kao Y H, Albert M, Bosmans H and Marchal G 2005
Validation of MTF measurement for digital mammography quality control Med. Phys. 32 1684–95
CEP (Centre for Evidence-based Purchasing) Lawinski C P, Mackenzie A, Cole H, Blake P and Honey I D 2005
Digital detectors for general radiography. A comparative technical report Report 05078 (London: CEP)
Cohen G, McDaniel D L and Wagner L K 1984 Analysis of variations in contrast-detail experiments Med.
Phys. 11 469–73
Cowen A R, Haywood J M, Workman A and Clarke O F 1987 A set of x-ray test objects for image quality control in
digital subtraction fluorography: I. Design considerations Br. J. Radiol. 60 1001–9
Cranley K, Gilmore B J, Fogarty G W A and Desponds L 1997 Catalogue of diagnostic x-ray spectra and other data
IPEM Report 78 (York: IPEM)
Cunningham I A 2000 Applied linear system theory Physics and Psychophysics vol 1 ed J Beutel, H L Kundel and
R L Van Metter (Bellingham, WA: SPIE) pp 79–159
Cunningham I A 2008 Use of the detective quantum efficiency in a quality assurance program Proc. SPIE 6913 69133I
Dainty J C and Shaw R 1974 Image Science—Principles, Analysis and Evaluation of Photographic-Type Imaging
Processes (London: Academic Press)
Dobbins J T 3rd, Samei E, Ranger N T and Chen Y 2006 Intercomparison of methods for image quality
characterization: II. Noise power spectrum Med. Phys. 33 1466–75
Evans D S, Mackenzie A, Lawinski C P and Smith D 2004 Threshold contrast detail detectability curves for fluoroscopy
and digital acquisition using modern image intensifier systems Br. J. Radiol. 77 751–8
Evans D S, Workman A and Payne M 2002 A comparison of the imaging properties of CCD-based devices used for
small field digital mammography Phys. Med. Biol. 47 117–35
Honey I D and Mackenzie A 2009 Artifacts found during quality assurance testing of computed radiography and
digital radiography detectors J. Digit. Imaging 22 383–92
Hiles P, Mackenzie A, Scally A, Wall B and IPEM 2005 Recommended standards for the routine performance testing
of diagnostic x-ray imaging systems IPEM Report 91 (York: IPEM)
IEC (International Electrotechnical Commission) 2003 Medical electrical equipment characteristics of digital x-ray
image devices—part 1. Determination of the detective quantum efficiency IEC 62220-1 (Geneva: International
Electrotechnical Commission)
IEC (International Electrotechnical Commission) 2005 Medical diagnostic x-ray equipment—radiation conditions
for use in the determination of characteristics IEC 61267 (Geneva: International Electrotechnical Commission)
Mackenzie A, Doshi S, Doyle P, Hill A, Honey I, Marshall N, O’Neill J, Smail M and IPEM 2010 Measurement
of the performance characteristics of diagnostic x-ray systems: digital imaging systems IPEM Report 32
(Part VII) (York: IPEM)
Mackenzie A 2008 Validation of correction methods for the non-linear response of digital radiography systems Br. J.
Radiol. 81 341–5
Mackenzie A and Honey I D 2007 Characterization of noise sources for two generations of computed radiography
systems using powder and crystalline photostimulable phosphors Med. Phys. 34 3345–57
Maidment A D A, Albert M, Bunch P C, Cunningham I A, Dobbins J T, Gagne R M, Nishikawa R M, Wagner
R F and Van Metter R L 2003 Standardization of NPS measurement: interim report of AAPM TG16 Proc.
SPIE 5030 523–32
Marshall N W 2006a A comparison between objective and subjective image quality measurements for a full field
digital mammography system Phys. Med. Biol. 51 2441–63
Marshall N W 2006b Retrospective analysis of a detector fault for a full field digital mammography system Phys.
Med. Biol. 51 5655–73
Marshall N W 2007 Early experience in the use of quantitative image quality measurements for the quality assurance
of full field digital mammography x-ray systems Phys. Med. Biol. 52 5545–68
Metz C E, Wagner R F, Doi K, Brown D G, Nishikawa R M and Myers K J 1995 Toward consensus on quantitative
assessment of medical imaging systems Med. Phys. 22 1057–61
Monnin P, Gutierrez D, Bulling S, Lepori D and Verdun F R 2005 A comparison of the imaging characteristics of
the new Kodak Hyper Speed G film with the current T-MAT G/RA film and the CR 9000 system Phys. Med.
Biol. 50 4541–52
NHSBSP (National Health Service Breast Screening Programme) 2009 Calculation of quantitative image quality
parameters—notes describing the use of OBJ_IQ_reduced equipment Report 0902 (Sheffield: NHSBSP)
Nishikawa R M and Yaffe M J 1990 Effect of various noise sources on the detective quantum efficiency of phosphor
screens Med. Phys. 17 887–93
Rimkus D A and Bailey N A 1983 Quantum noise in detectors Med. Phys. 10 470–1
Rong X J, Shaw C C, Liu X, Lemacks M R and Thompson S K 2001 Comparison of an amorphous silicon/cesium
iodide flat-panel digital chest radiography system with screen/film and computed radiography systems—a
contrast-detail phantom study Med. Phys. 28 2328–35
Rowlands J A and Yorkston J 2000 Flat panel detectors for digital radiology Physics and Psychophysics vol 1
ed J Beutel, H L Kundel and R L Van Metter (Bellingham, WA: SPIE) pp 223–328
Samei E and Flynn M J 2003 An experimental comparison of detector performance for direct and indirect digital
radiography systems Med. Phys. 30 608–22
Samei E, Flynn M J and Reimann D A 1998 A method for measuring the presampled MTF of digital radiographic
systems using an edge test device Med. Phys. 25 102–13
