Keywords: Prostate; MRI; Prostate cancer; PI-QUAL; Image quality

ABSTRACT

Background: The Prostate Imaging Quality (PI-QUAL) score is the first step toward image quality assessment in multi-parametric prostate MRI (mpMRI). Previous studies have demonstrated moderate to excellent inter-rater agreement among expert readers; however, there is a need for studies to assess the inter-reader agreement of PI-QUAL scoring in basic prostate readers.
Objectives: To assess the inter-reader agreement of the PI-QUAL score amongst basic prostate readers on multi-center prostate mpMRI.
Methods: Five basic prostate readers from different centers assessed the PI-QUAL scores independently using T2-
weighted images, diffusion-weighted imaging (DWI) including apparent diffusion coefficient (ADC) maps, and
dynamic-contrast-enhanced (DCE) images on mpMRI data obtained from five different centers following Prostate
Imaging-Reporting and Data System Version 2.1. The inter-reader agreements amongst radiologists for PI-QUAL
were evaluated using weighted Cohen’s kappa. Further, the absolute agreements in assessing the diagnostic
adequacy of each mpMRI sequence were calculated.
Results: A total of 355 men with a median age of 71 years (IQR, 60–78) were enrolled in the study. The pair-wise
kappa scores ranged from 0.656 to 0.786 for the PI-QUAL scores, indicating good inter-reader agreement between the readers. The pair-wise absolute agreements ranged from 0.75 to 0.88 for T2W imaging, from 0.74 to
0.83 for the ADC maps, and from 0.77 to 0.86 for DCE images.
Conclusions: Basic prostate radiologists from different institutions provided good inter-reader agreements on
multi-center data for the PI-QUAL scores.
1. Introduction

Multiparametric magnetic resonance imaging (mpMRI) of the prostate has emerged as the preferred diagnostic tool for biopsy-naïve patients suspected of having clinically significant prostate cancer (csPCa) due to its superiority over traditional screening methods (e.g., prostate-specific antigen, PSA) that can cause overdiagnosis and overtreatment [1–3]. However, concerns have arisen regarding the low and varying image quality, which is crucial in the detection of csPCa [4,5]. Although the Prostate Imaging-Reporting and Data System (PI-RADS) guidelines set the minimum technical requirements for prostate MRI acquisition, adherence to these guidelines does not always ensure high-quality scans [6–8].

In 2020, Giganti et al. proposed the Prostate Imaging Quality (PI-
Abbreviations: ADC, Apparent diffusion coefficient; csPCa, Clinically significant prostate cancer; mpMRI, Multi-parametric MRI; PI-QUAL, Prostate Imaging
Quality.
* Corresponding author at: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, Turkey.
E-mail addresses: yb772@hotmail.com (Y. Basar), drdenizalis@gmail.com (D. Alis), smustafaege@gmail.com (M.E. Seker), md.mustafasaidkartal@gmail.com
(M.S. Kartal), batuhanguroz@hotmail.com (B. Guroz), arslanaydan@gmail.com (A. Arslan), sabrisirolu@gmail.com (S. Sirolu), serpil.kurtcan@acibadem.com
(S. Kurtcan), nurper.denizoglu@acibadem.com (N. Denizoglu), ercan.karaarslan@acibadem.edu.tr (E. Karaarslan).
https://doi.org/10.1016/j.ejrad.2023.110923
Received 23 April 2023; Received in revised form 18 May 2023; Accepted 5 June 2023
Available online 9 June 2023
0720-048X/© 2023 Elsevier B.V. All rights reserved.
Y. Basar et al. European Journal of Radiology 165 (2023) 110923
QUAL) score from the PRECISION trial to evaluate the image quality of prostate MRI in a standardized manner [9]. This score combines an objective evaluation of factors such as field-of-view and slice thickness (as per PI-RADS technical recommendations) with a more subjective assessment based on human reader perceptions [10]. Previous research by the same group has shown excellent reproducibility between two experienced radiologists, while another study reported a slightly lower inter-reader agreement [11,12]. Hence, there is a need for multi-reader studies, ideally including multi-center mpMRI data from patients from different backgrounds, to further evaluate the reliability of the PI-QUAL score.

The consensus statement from the European Society of Urogenital Radiology (ESUR) and EAU Section of Urologic Imaging (ESUI) defines "expert readers" in prostate imaging as radiologists who have interpreted a minimum of ≥ 1,000 cases and report ≥ 200 cases on an annual basis [4]. It is reasonable to anticipate that basic prostate readers, who are less experienced and are defined as radiologists who have read ≥ 400 cases and routinely interpret ≥ 150 cases annually, will increasingly be involved in prostate mpMRI readings due to the widespread use of prostate mpMRI. Several studies have reported poorer inter-reader agreement in the PI-RADS score by less-experienced readers compared to expert readers [13]. Thus, it is crucial to investigate the inter-reader agreement of PI-QUAL scoring among basic prostate readers.

In this study, we assessed the inter-reader agreement of the PI-QUAL score amongst five basic prostate readers in a multicenter setting.

2. Methods

2.1. Study sample

Each local review board approved this retrospective study and waived the need for informed consent for the retrospective analyses of anonymized medical data. In seven hospitals, we searched for consecutive patients who underwent a prostate MRI scan due to suspicion of clinically significant prostate cancer (csPCa) (i.e., raised PSA or suspicious digital rectal examination) or active surveillance between June and December 2020. Patients with a prior history of treatment for csPCa (n = 23) and who underwent biparametric (i.e., without injection of intravenous contrast) MRI (n = 167) were excluded from the study.

2.2. Multi-parametric MRI protocols

and 4 years in prostate mpMRI; S.S. (Radiologist 3) had 6 years in general radiology and 2 years in prostate mpMRI; N.D. (Radiologist 4) had 12 years in general radiology and 6 years in prostate mpMRI experience; and S.K. (Radiologist 5) had 15 years in general radiology and 5 years in prostate mpMRI experience. Hence, all readers met the definition of a "basic prostate reader" [4].

It is important to note that readers may be expected to feel more confident and accurate when scoring scans from their own centers, potentially leading to fewer instances of overcalling. However, this familiarity can introduce bias, as readers might assign higher scores to scans from their centers due to their familiarity with the examination procedures. To mitigate this bias and ensure a more objective assessment, radiologists were deliberately recruited from different centers than those where the multi-center data were collected.

The radiologists' evaluation followed the PI-QUAL criteria [10], and they were blinded to clinical information. Before the independent image readings, the radiologists gathered in several online meeting sessions in which an expert reader (E.K.) with over 20 years of prostate imaging experience explained the PI-QUAL score using the PI-QUAL papers along with in-house mpMRI scans obtained in the study centers that were not used in the final cohort [14]. The main aim of these sessions was to encourage experience sharing between the expert and basic prostate radiologists and to let the study readers get acquainted with the PI-QUAL score. In the sessions, radiologists freely discussed cases regarding image quality. The PI-QUAL score sheet template used in this study is shown in Fig. 1.

2.4. Statistical analyses

The statistical analyses were performed using the SciPy library of Python Version 3. Continuous variables are presented as medians and interquartile ranges, and categorical and ordinal variables are presented as frequencies and percentages. The PI-QUAL scores of the readers were calculated and compared on a scan level. The inter-reader agreements amongst readers in PI-QUAL scoring were evaluated using weighted Cohen's kappa [15]. The kappa scores were interpreted as follows: < 0.20, poor agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, good agreement; and 0.81–1.00, excellent agreement. Additionally, pair-wise inter-reader absolute agreements for each mpMRI sequence in determining diagnostic adequacy were calculated.
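The weighted-kappa computation described above can be sketched as follows. The paper cites SciPy, which has no built-in weighted Cohen's kappa, so the NumPy implementation below (and the 0-1 interpretation bands) is an illustrative reconstruction rather than the authors' actual code:

```python
import numpy as np

def weighted_kappa(rater1, rater2, categories, weights="linear"):
    """Weighted Cohen's kappa for two raters over ordinal categories
    (e.g., PI-QUAL scores 1-5)."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    # Observed joint distribution of the two raters' scores.
    obs = np.zeros((k, k))
    for a, b in zip(rater1, rater2):
        obs[index[a], index[b]] += 1
    obs /= obs.sum()
    # Chance-expected joint distribution from the raters' marginals.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement penalty grows with ordinal distance between scores.
    dist = np.abs(np.arange(k)[:, None] - np.arange(k)[None, :])
    w = dist / (k - 1) if weights == "linear" else (dist / (k - 1)) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

def interpret(kappa):
    """Agreement bands used in the paper, on a 0-1 kappa scale."""
    for upper, label in [(0.20, "poor"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "good"), (1.00, "excellent")]:
        if kappa <= upper:
            return label
```

For the pair-wise kappas reported in this study (0.656 to 0.786), `interpret` returns "good" throughout.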
Fig. 1. Prostate Imaging Quality (PI-QUAL) scoring sheet. Reprinted with permission from Giganti, F., Allen, C., Emberton, M., Moore, C.M., Kasivisvanathan, V.,
PRECISION study group, Prostate Imaging Quality (PI-QUAL): A New Quality Control Scoring System for Multiparametric Magnetic Resonance Imaging of the
Prostate from the PRECISION trial, Eur Urol Oncol. 3 (2020) 615–619. https://doi.org/10.1016/j.euo.2020.06.007. Copyright 2020 Elsevier.
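Read against the sheet in Fig. 1 and the reader assignments described for Fig. 4, the overall score tracks how many of the three sequences are of diagnostic quality. The mapping below is a deliberately simplified toy model of that logic, not the official rule set, which applies detailed technical and perceptual criteria per sequence:

```python
def pi_qual_simplified(t2w_ok: bool, dwi_ok: bool, dce_ok: bool,
                       all_optimal: bool = False) -> int:
    """Toy PI-QUAL-like score from per-sequence diagnostic adequacy.

    Simplifying assumption (not the official definition): 0 adequate
    sequences -> 1, one -> 2, two -> 3, all three -> 4, and all three
    of optimal quality -> 5.
    """
    n_adequate = sum([t2w_ok, dwi_ok, dce_ok])
    if n_adequate == 3:
        return 5 if all_optimal else 4
    return n_adequate + 1
```

Under this toy rule, the split described in the Fig. 4 caption is reproduced: T2W and ADC diagnostic with DCE inadequate gives a score of 3, while ADC alone gives 2.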
Table 1
Prostate Image Quality (PI-QUAL) scores by each radiologist.
PI-QUAL Score Radiologist 1 Radiologist 2 Radiologist 3 Radiologist 4 Radiologist 5 Overall Scores
4. Discussion
scores for the readers in the present study. Most importantly, all radiologists in this study were basic prostate readers, whereas radiologists highly experienced in prostate MR reporting evaluated the scans in the study by Giganti and colleagues. It is well-known that less-experienced readers tend to produce more inconsistent results when reading prostate mpMRI scans [13].
Further, the PI-QUAL score was developed by the same group of
radiologists reading the scans, potentially leading to a higher level of
agreement, as also stated by the authors [11].
In our study, radiologists from different centers evaluated image quality and demonstrated a higher inter-rater agreement
compared to those of Karanasios et al. [12], where the authors found a
moderate level of agreement between a junior and senior reader with a
kappa of 0.47 and between senior readers with a kappa of 0.52. A recent
paper has documented that a teaching course incorporating a dedicated
lecture and hands-on workshop could significantly improve the inter-
reader agreement for the PI-QUAL score [16]. In our study, we provided concise education sessions online before the readings using the
dedicated PI-QUAL primer [14]. Although our online meetings were
likely less effective than the dedicated workshop provided in the
aforementioned teaching course, they might have contributed to better
inter-reader consistency [16].
Another factor that might have contributed to lower inter-reader
agreements in the study by Karanasios et al. could be the substantially
unbalanced data distribution. In their work, approximately 96% of the
scans were rated as PI-QUAL ≥ 3. Kappa statistics can be affected by
unbalanced data, which may result in potentially misleading estimations
when assessing agreements between readers [17]. In cases where the
distribution of data is highly skewed, kappa scores may be lower than
expected, even when there is a high level of agreement between raters,
because kappa accounts for the agreement expected by chance. In unbalanced datasets, the expected agreement by chance may be higher,
making the kappa value appear lower [17].
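The effect described here can be made concrete with a toy 2×2 example: two hypothetical adequacy confusion matrices with identical 92% raw agreement but different marginal balance yield very different kappas:

```python
import numpy as np

def cohen_kappa(confusion):
    """Unweighted Cohen's kappa from a KxK rater-vs-rater confusion matrix."""
    m = np.asarray(confusion, dtype=float)
    m /= m.sum()
    p_obs = np.trace(m)                            # observed agreement
    p_exp = (m.sum(axis=0) * m.sum(axis=1)).sum()  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical data: rows = reader A, cols = reader B, 100 scans each.
skewed   = [[90, 4], [4, 2]]    # ~94% of calls are "adequate" (unbalanced)
balanced = [[46, 4], [4, 46]]   # calls split evenly between the two labels

# Both matrices have 92/100 identical calls, yet:
# cohen_kappa(skewed)   ~= 0.29  (chance agreement is already ~0.89)
# cohen_kappa(balanced) == 0.84  (chance agreement is only 0.50)
```

The skewed matrix mimics a cohort in which nearly all scans are rated adequate, as in the Karanasios et al. data discussed above; the raw agreement is unchanged, but kappa collapses.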
We suggest that an additional area of inquiry might be using artificial
intelligence (especially deep learning) as an alternative or adjunct to
human readers for assessing image quality. Deep learning has achieved
unprecedented results in recent years in prostate cancer diagnostics [18,19]. Hence, deep learning could readily streamline visual quality
assessment in prostate mpMRI. Currently, we are exploring the utility of
deep learning in assessing image quality on bi-parametric MRI scans of the publicly available Prostate Imaging: Cancer AI (PI-CAI) training data [20].
Overall, the percentage of T2W and DCE images with diagnostic
adequacy was the highest, while DWI had the lowest diagnostic adequacy. Our findings contrast with the study by Giganti et al. [11], where the authors found that DCE had the lowest diagnostic adequacy. However, our findings align with those from the study by Karanasios et al.
[12], in which the authors found that T2W and DCE images had the
highest diagnostic quality, while ADC maps received the lowest scores.
In contrast to Karanasios et al. [12], where PI-QUAL scores were ≥ 3 in 96% of the scans, our readers assigned a PI-QUAL score of ≥ 3 to approximately 78% of the scans, demonstrating a similar score distribution to Giganti's work [11].
Fig. 4. A 60-year-old man who underwent multi-parametric prostate MRI at 3 T. The prostate capsule (arrow) cannot be clearly delineated in the left peripheral zone in axial T2WI (a), and there is mild noise in the image (a). The neurovascular bundle (arrowhead) is also hard to depict. The axial apparent diffusion coefficient (ADC) map is free of artifacts, with adequate image quality (b). The vessels in Alcock's canal (arrow) cannot be clearly delineated, as no fat suppression is used in the dynamic-contrast-enhanced (DCE) image (c). Two readers assigned PI-QUAL score 3 (T2W and ADC are diagnostic; DCE is not diagnostic, cannot rule out significant cancer); three readers assigned PI-QUAL score 2 (T2W and DCE are not diagnostic; ADC is diagnostic).

Our study has some limitations. First, we did not assess the impact of image quality on the diagnostic performance of radiologists in identifying csPCa in the present work. The main reason for not investigating this relationship lies in the challenges associated with objectively evaluating it in a retrospective manner across a diverse patient population. Factors such as variations in patient characteristics, differences in tumor presentation, and potential inconsistencies in image acquisition and interpretation could complicate the analysis, making it difficult to draw clear conclusions. For instance, a large, conspicuous PI-RADS score 5 csPCa can be readily depicted by radiologists even if the PI-QUAL score is low, while a small PI-RADS score 4 csPCa might be missed even if the PI-QUAL score is high. Hence, we suggest that the relationship between the PI-QUAL score and the diagnostic performance of radiologists should
Table 2
Diagnostic quality of individual mpMRI sequences assessed by each radiologist.
Sequences Radiologist 1 Radiologist 2 Radiologist 3 Radiologist 4 Radiologist 5 Overall
T2WI 254/355 (71.55%) 280/355 (78.87%) 288/355 (81.13%) 304/355 (85.63%) 282/355 (79.44%) 79.32%
ADC 241/355 (67.89%) 245/355 (69.01%) 223/355 (62.82%) 217/355 (61.13%) 231/355 (65.07%) 65.18%
DCE 311/355 (87.61%) 266/355 (74.93%) 298/355 (83.94%) 301/355 (84.79%) 267/355 (75.21%) 81.3%
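As a sanity check, the "Overall" column of Table 2 is reproducible by pooling each sequence's per-reader counts over the 5 × 355 reads, and the pair-wise values plotted in Fig. 5 are simple matching proportions; a minimal sketch:

```python
import numpy as np

def absolute_agreement(calls_a, calls_b):
    """Fraction of scans for which two readers made the same
    diagnostic-adequacy call on a given sequence."""
    a, b = np.asarray(calls_a), np.asarray(calls_b)
    return float((a == b).mean())

# Per-reader "sequence is diagnostic" counts out of 355 scans (Table 2).
counts = {
    "T2WI": [254, 280, 288, 304, 282],
    "ADC":  [241, 245, 223, 217, 231],
    "DCE":  [311, 266, 298, 301, 267],
}
overall = {seq: round(100 * sum(c) / (5 * 355), 2) for seq, c in counts.items()}
# overall -> {'T2WI': 79.32, 'ADC': 65.18, 'DCE': 81.3}
```

`absolute_agreement` takes two readers' binary adequacy vectors, scan by scan; applied pair-wise to the readers' actual calls (not shown here), it yields the 0.74 to 0.88 range reported.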
Fig. 5. The pair-wise absolute agreement scores of radiologists in determining the adequacy of each multi-parametric MRI sequence.
be investigated in future studies where poor and good quality scans had similar lesion characteristics (i.e., scans with a comparable number of PI-RADS scores or lesion sizes).

Second, despite using multi-center and multi-scanner data, all prostate mpMRI scans were performed using the same manufacturer's scanners in the present work, necessitating future studies incorporating mpMRI scans obtained with different manufacturers' scanners.

Third, the number of low-quality scans (i.e., a PI-QUAL score ≤ 2) was relatively small compared to that of higher quality (i.e., PI-QUAL ≥ 3), with the majority of the low-quality scans originating from the same centers. Consequently, future research on larger datasets is required to evaluate the performance of the PI-QUAL score on scans with a more balanced quality distribution.

In conclusion, our study found that basic prostate readers from
different institutions provided good inter-reader agreement on multi-center data in terms of the evaluation of prostate image quality by means of the PI-QUAL score. Furthermore, the readers had a high agreement for the sequence-level adequacy assessment using PI-QUAL. We suggest that the reported level of inter-rater agreement for basic prostate readers, without prior knowledge or extensive instruction regarding the scoring system, shows the potential of the PI-QUAL score for prostate cancer diagnostics.

Funding

This paper has been produced benefiting from the 1001 Science and Technology Grant Program National Program of TUBITAK (Project No: 122E022). However, the entire responsibility of the publication/paper belongs to the owner of the paper. The financial support received from TUBITAK does not mean that the content of the publication is approved in a scientific sense by TUBITAK.

Author contributions

Each author has made substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data; or the creation of new software used in the work. Each author has approved the submitted version (and any substantially modified version that involves the author's contribution to the study). Each author has agreed both to be personally accountable for the author's own contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated, resolved, and the resolution documented in the literature. Please find the detailed list of each author's contributions to the present work below.

Ethical statement

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The local ethics committee approved this retrospective study and waived the need for informed consent for the retrospective evaluation of anonymized medical data (Acıbadem University and Acıbadem Healthcare Institutions Medical Research Ethics Committee).

Consent for publication

Written informed consent was not required because the study was related to de-identified imaging only, and patient anonymity was maintained.

CRediT authorship contribution statement

Yeliz Basar: Writing – review & editing, Investigation, Data curation. Deniz Alis: Writing – review & editing, Writing – original draft, Supervision. Mustafa Ege Seker: Formal analysis, Data curation. Mustafa Said Kartal: Formal analysis, Data curation. Batuhan Guroz: Data curation. Aydan Arslan: Data curation. Sabri Sirolu: Data curation. Serpil Kurtcan: Data curation. Nurper Denizoglu: Data curation. Ercan Karaarslan: Writing – review & editing, Conceptualization.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A. Supplementary data

Supplementary data to this article can be found online at https://doi.org/10.1016/j.ejrad.2023.110923.

References

[1] H.U. Ahmed, A.-E.-S. Bosaily, L.C. Brown, R. Gabe, R. Kaplan, M.K. Parmar, Y. Collaco-Moraes, K. Ward, R.G. Hindley, A. Freeman, A.P. Kirkham, R. Oldroyd, C. Parker, M. Emberton, Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): a paired validating confirmatory study, The Lancet 389 (2017) 815–822, https://doi.org/10.1016/S0140-6736(16)32401-1.
[2] M.A. Bjurlin, P.R. Carroll, S. Eggener, P.F. Fulgham, D.J. Margolis, P.A. Pinto, A.B. Rosenkrantz, J.N. Rubenstein, D.B. Rukstalis, S.S. Taneja, B. Turkbey, Update of the standard operating procedure on the use of multiparametric magnetic resonance imaging for the diagnosis, staging and management of prostate cancer, Journal of Urology 203 (2020) 706–712, https://doi.org/10.1097/JU.0000000000000617.
[3] EAU Guidelines on Prostate Cancer - Uroweb, Uroweb - European Association of Urology (n.d.), https://uroweb.org/guidelines/prostate-cancer (accessed March 13, 2023).
[4] M. de Rooij, B. Israël, T. Barrett, F. Giganti, A.R. Padhani, V. Panebianco, J. Richenberg, G. Salomon, I.G. Schoots, G. Villeirs, J. Walz, J.O. Barentsz, Focus on the quality of prostate multiparametric magnetic resonance imaging: synopsis of the ESUR/ESUI recommendations on quality assessment and interpretation of images and radiologists' training, Eur Urol 78 (2020) 483–485, https://doi.org/10.1016/j.eururo.2020.06.023.
[5] M. de Rooij, B. Israël, M. Tummers, H.U. Ahmed, T. Barrett, F. Giganti, B. Hamm, V. Løgager, A. Padhani, V. Panebianco, P. Puech, J. Richenberg, O. Rouvière, G. Salomon, I. Schoots, J. Veltman, G. Villeirs, J. Walz, J.O. Barentsz, ESUR/ESUI consensus statements on multi-parametric MRI for the detection of clinically significant prostate cancer: quality requirements for image acquisition, interpretation and radiologists' training, Eur Radiol 30 (2020) 5404–5416, https://doi.org/10.1007/s00330-020-06929-z.
[6] S.J. Esses, S.S. Taneja, A.B. Rosenkrantz, Imaging facilities' adherence to PI-RADS v2 minimum technical standards for the performance of prostate MRI, Academic Radiology 25 (2018) 188–195, https://doi.org/10.1016/j.acra.2017.08.013.
[7] P.R. Burn, S.J. Freeman, A. Andreou, N. Burns-Cox, R. Persad, T. Barrett, A multicentre assessment of prostate MRI quality and compliance with UK and international standards, Clinical Radiology 74 (2019) 894.e19–894.e25, https://doi.org/10.1016/j.crad.2019.03.026.
[8] J. Sackett, J.H. Shih, S.E. Reese, J.R. Brender, S.A. Harmon, T. Barrett, M. Coskun, M. Madariaga, J. Marko, Y.M. Law, E.B. Turkbey, S. Mehralivand, T. Sanford, N. Lay, P.A. Pinto, B.J. Wood, P.L. Choyke, B. Turkbey, Quality of prostate MRI: Is the PI-RADS standard sufficient? Acad Radiol 28 (2021) 199–207, https://doi.org/10.1016/j.acra.2020.01.031.
[9] A trial looking at using MRI to help diagnose prostate cancer (PRECISION), (2016), https://www.cancerresearchuk.org/about-cancer/find-a-clinical-trial/a-trial-looking-at-using-mri-to-help-diagnose-prostate-cancer-precision (accessed May 11, 2023).
[10] F. Giganti, C. Allen, M. Emberton, C.M. Moore, V. Kasivisvanathan, PRECISION study group, Prostate Imaging Quality (PI-QUAL): a new quality control scoring system for multiparametric magnetic resonance imaging of the prostate from the PRECISION trial, Eur Urol Oncol 3 (2020) 615–619, https://doi.org/10.1016/j.euo.2020.06.007.
[11] F. Giganti, E. Dinneen, V. Kasivisvanathan, A. Haider, A. Freeman, A. Kirkham, S. Punwani, M. Emberton, G. Shaw, C.M. Moore, C. Allen, Inter-reader agreement of the PI-QUAL score for prostate MRI quality in the NeuroSAFE PROOF trial, Eur Radiol 32 (2022) 879–889, https://doi.org/10.1007/s00330-021-08169-1.
[12] E. Karanasios, I. Caglic, J.P. Zawaideh, T. Barrett, Prostate MRI quality: clinical impact of the PI-QUAL score in prostate cancer diagnostic work-up, BJR 95 (2022) 20211372, https://doi.org/10.1259/bjr.20211372.
[13] A. Stabile, F. Giganti, V. Kasivisvanathan, G. Giannarini, C.M. Moore, A.R. Padhani, V. Panebianco, A.B. Rosenkrantz, G. Salomon, B. Turkbey, G. Villeirs, J.O. Barentsz, Factors influencing variability in the performance of multiparametric magnetic resonance imaging in detecting clinically significant prostate cancer: a systematic literature review, Eur Urol Oncol 3 (2020) 145–167, https://doi.org/10.1016/j.euo.2020.02.005.
[14] F. Giganti, A. Kirkham, V. Kasivisvanathan, M.-V. Papoutsaki, S. Punwani, M. Emberton, C.M. Moore, C. Allen, Understanding PI-QUAL for prostate MRI quality: a practical primer for radiologists, Insights into Imaging 12 (2021) 59, https://doi.org/10.1186/s13244-021-00996-6.
[15] J. Cohen, A coefficient of agreement for nominal scales, Educational and Psychological Measurement 20 (1960) 37–46, https://doi.org/10.1177/001316446002000104.
[16] F. Giganti, A.P. Cole, F.M. Fennessy, T. Clinton, P.L.D.F. Moreira, M.C. Bernardes, C.-F. Westin, D. Krishnaswamy, A. Fedorov, D.A. Wollin, B. Langbein, N. Frego, M. Labban, J.S. Badaoui, S.L. Chang, L.G. Briggs, J. Tokuda, A. Ambrosi, A. Kirkham, M. Emberton, V. Kasivisvanathan, C.M. Moore, C. Allen, C.M. Tempany, Promoting the use of the PI-QUAL score for prostate MRI quality: results from the ESOR Nicholas Gourtsoyiannis teaching fellowship, Eur Radiol 33 (2023) 461–471, https://doi.org/10.1007/s00330-022-08947-5.
[17] R. Delgado, X.-A. Tibau, Why Cohen's Kappa should be avoided as performance measure in classification, PLOS ONE 14 (2019) e0222916.
[18] X. Yang, C. Liu, Z. Wang, J. Yang, H.L. Min, L. Wang, K.-T. (Tim) Cheng, Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI, Med. Image Anal. 42 (2017) 212–227, https://doi.org/10.1016/j.media.2017.08.006.
[19] X. Wang, W. Yang, J. Weinreb, J. Han, Q. Li, X. Kong, Y. Yan, Z. Ke, B. Luo, T. Liu, L. Wang, Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning, Sci Rep 7 (2017) 15415, https://doi.org/10.1038/s41598-017-15720-y.
[20] A. Saha, M. Hosseinzadeh, H. Huisman, End-to-end prostate cancer detection in bpMRI via 3D CNNs: effects of attention mechanisms, clinical priori and decoupled false positive reduction, Med. Image Anal. 73 (2021) 102155, https://doi.org/10.1016/j.media.2021.102155.