
AI In Medical Imaging Informatics: Current
Challenges And Future Direction

A Seminar Report
Submitted to the APJ Abdul Kalam Technological University
in partial fulfillment of requirements for the award of degree

Bachelor of Technology
in
Applied Electronics and Instrumentation Engineering
by
Eldho Sojan
TVE19AE022

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING


COLLEGE OF ENGINEERING TRIVANDRUM
KERALA
November 2022
DEPT. OF ELECTRONICS & COMMUNICATION ENGINEERING
COLLEGE OF ENGINEERING TRIVANDRUM
2022 - 23

CERTIFICATE

This is to certify that the report entitled AI In Medical Imaging Informatics:
Current Challenges And Future Direction, submitted by Eldho Sojan
(TVE19AE022) to the APJ Abdul Kalam Technological University in partial fulfillment
of the B.Tech. degree in Applied Electronics and Instrumentation Engineering, is
a bonafide record of the seminar work carried out by the candidate under our guidance and
supervision. This report, in any form, has not been submitted to any other University or
Institute for any purpose.

Prof. Anurenjan P.R (Seminar Guide)
Assistant Professor, Dept. of ECE,
College of Engineering Trivandrum

Prof. Sudhi S. (Seminar Coordinator)
Assistant Professor, Dept. of ECE,
College of Engineering Trivandrum

Prof. Narendramudra N. G. (Seminar Coordinator)
Assistant Professor (ad hoc), Dept. of ECE,
College of Engineering Trivandrum

Dr. Ajayan K. R.
Professor and Head, Dept. of ECE,
College of Engineering Trivandrum
DECLARATION

I, Eldho Sojan, hereby declare that the seminar report AI In Medical Imaging
Informatics: Current Challenges And Future Direction, submitted for partial
fulfillment of the requirements for the award of the degree of Bachelor of Technology
of the APJ Abdul Kalam Technological University, Kerala, is a bonafide work done by
me under the supervision of Prof. Anurenjan P.R.
This submission represents my ideas in my own words, and where ideas or words
of others have been included, I have adequately and accurately cited and referenced
the original sources.
I also declare that I have adhered to the ethics of academic honesty and integrity
and have not misrepresented or fabricated any data, idea, fact or source in my
submission. I understand that any violation of the above will be a cause for disciplinary
action by the institute and/or the University and can also evoke penal action from the
sources which have thus not been properly cited or from which proper permission has
not been obtained. This report has not previously formed the basis for the award
of any degree, diploma or similar title of any other University.

Trivandrum Eldho Sojan

21-11-2022
Abstract

Medical imaging informatics has been driving clinical research, translation, and
practice for over three decades. Advances in medical imaging informatics are projected
to elevate the quality of care levels witnessed today, once innovative solutions along
the lines of the selected research endeavors presented in this study are adopted in clinical
practice, and can thus potentially be transformative for precision medicine. This report
highlights the necessity for efficient medical data management strategies in the context
of AI in big healthcare data analytics. It then provides a synopsis of contemporary and
emerging algorithmic methods for disease classification and organ/tissue segmentation,
focusing on AI and deep learning architectures that have already become the de facto
approach. Advances linked with evolving 3D reconstruction and visualization
applications are further documented. Integrative analytics approaches driven by the
associated research branches highlighted in this study promise to revolutionize imaging
informatics as known today across the healthcare continuum for both radiology and
digital pathology applications, enabling informed, more accurate diagnosis, timely
prognosis, and effective treatment planning, underpinning precision medicine.

Acknowledgement

I take this opportunity to express my deepest sense of gratitude and sincere thanks to
everyone who helped me to complete this work successfully. I express my sincere
thanks to Dr. Ajayan K. R., Head of Department, Electronics and Communication
Engineering, College of Engineering Trivandrum for providing me with all the
necessary facilities and support.
I would like to express my sincere gratitude to Prof. Sudhi S. and Prof. Narendramudra
N. G., Department of Electronics and Communication Engineering, College
of Engineering Trivandrum, for their support and co-operation.
I would like to place on record my sincere gratitude to my seminar guide Prof.
Anurenjan P.R, Assistant Professor, Electronics and Communication Engineering,
College of Engineering Trivandrum, for the guidance and mentorship throughout the course.
Finally, I thank my family and friends who contributed to the successful fulfilment
of this seminar work.

Eldho Sojan

Contents

Abstract i

Acknowledgement ii

List of Figures v

List of Abbreviations vi

1 Introduction 1

2 Literature Review 3

3 Image formation and acquisition 5


3.1 Common Modalities . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2 Other Modalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2.1 Nuclear Medicine . . . . . . . . . . . . . . . . . . . . . . . . 6
3.2.2 Microscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2.3 TMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.3 Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

4 FAIR data repositories for reproducible, extensible and explainable research 9


4.1 K-Anonymity and L-Diversity . . . . . . . . . . . . . . . . . . . . 10
4.2 T-Closeness: A New Privacy Measure . . . . . . . . . . . . . . . . 11

5 Processing, analysis and understanding 12


5.1 Fuzzy logic process . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5.2 Machine learning system . . . . . . . . . . . . . . . . . . . . . . . . 14

5.3 Deep learning for segmentation and classification . . . . . . . . . . . 16
5.3.1 U-Net segmentation . . . . . . . . . . . . . . . . . . . . . . 16

6 Visualization and Navigation 18


6.1 Biomedical 3D Reconstruction and Visualization . . . . . . . . . . . 18
6.2 In Silico Modeling of Malignant Tumors . . . . . . . . . . . . . . . 19
6.3 Digital twin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

7 Analysis of radiogenomics system and 3D-Unet 21


7.1 3D U-Net inferences . . . . . . . . . . . . . . . . . . . . . . . . . . 22

8 Conclusion 24

References 25

List of Figures

3.1 Different modalities [1] . . . . . . . . . . . . . . . . . . . . . . . . . 5

4.1 Efficiency of privacy measures [5] . . . . . . . . . . . . . . . . . . . 11

5.1 Fuzzy logic system architecture [7] . . . . . . . . . . . . . . . . . . . 13


5.2 Fuzzy Learning model [2] . . . . . . . . . . . . . . . . . . . . . . . 14
5.3 Machine learning system [2] . . . . . . . . . . . . . . . . . . . . . . 15
5.4 U-Net Segmentation [4] . . . . . . . . . . . . . . . . . . . . . . . . . 17

6.1 In Silico Modelling Paradigm [1] . . . . . . . . . . . . . . . . . . . 19


6.2 Immune Digital Twin [6] . . . . . . . . . . . . . . . . . . . . . . . . 20

7.1 Radiogenomics system [1] . . . . . . . . . . . . . . . . . . . . . . . 21


7.2 Loss Graph and Box Plot [4] . . . . . . . . . . . . . . . . . . . . . . 23

List of Abbreviations

AI Artificial Intelligence

ANN Artificial Neural Network

EHR Electronic Health Record

CNN Convolutional Neural Network

MRI Magnetic Resonance Imaging

ECG Electrocardiogram

PET Positron Emission Tomography

SPECT Single Photon Emission Computed Tomography

TMA Tissue Microarray

CDR Clinical Data Repository

FAIR Findable, Accessible, Interoperable and Reusable

PPDP Privacy-Preserving Data Publishing

FL Fuzzy Logic

GBM Glioblastoma

LGG Low-Grade Gliomas

HGG High-Grade Gliomas

Chapter 1

Introduction

Medical imaging informatics covers the application of information and communication


technologies to medical imaging for the provision of healthcare services. It is defined
as: “Imaging informatics touches every aspect of the imaging chain from image
creation and acquisition, to image distribution and management, to image storage
and retrieval, to image processing, analysis and understanding, to image visualization
and data navigation; to image interpretation, reporting, and communications. The field
serves as the integrative catalyst for these processes and forms a bridge with imaging
and other medical disciplines.”
The objective of medical imaging informatics is thus to improve the efficiency, accuracy,
and reliability of services within the medical enterprise concerning medical image
usage and exchange throughout complex healthcare systems.
Hence, there is a need for an automatic diagnostic system that benefits from
both human knowledge and the accuracy of the machine. A suitable decision support
system is needed to achieve accurate results from the diagnostic process at reduced
cost. Classification of diseases depending upon various parameters is a complex
task for human experts, but AI can help to detect and handle such cases.
Currently, various AI techniques are being used in the field of medicine to accurately
diagnose illnesses. AI is an integral part of computer science by which computers
become more intelligent. The vital need for any intelligent system is learning, and there
are various AI techniques based on learning, such as deep learning and machine
learning. One specific AI method that is significant in the medical field, the rule-based
intelligent system, provides a set of if-then rules in healthcare that
act as a decision support system. Gradually, such systems are being replaced
in the medical field by AI-based automatic techniques in which human intervention is
minimal. The neural network, or artificial neural network (ANN), is a large collection
of neural units designed based on the biological neurons connected in the brain; it is
loosely inspired by the way the human brain processes information. [2]
Hardware breakthroughs in medical image acquisition facilitate high-throughput and
high-resolution images across imaging modalities at unprecedented performance and
lower induced radiation. Already deep in the big medical data era, imaging data
availability is only expected to grow, complemented by massive amounts of associated
data-rich EHR and physiological data, climbing to orders of magnitude beyond
what is available today. As such, the research community is striving to harness the
full potential of the wealth of data now available at the individual patient level,
underpinning precision medicine. [1]

Chapter 2

Literature Review

A wide spectrum of multi-disciplinary medical imaging services has evolved over
the past 30 years, ranging from routine clinical practice to advanced human physiology
and pathophysiology. Linked with the associated technological advances in big-data
imaging, -omics and electronic health record (EHR) analytics, dynamic workflow
optimization, context-awareness, and visualization, a new era is emerging for medical
imaging informatics, prescribing the way towards precision medicine. This paper
provides an overview of prevailing concepts, highlights challenges and opportunities,
and discusses future trends. [1] [3].
This paper reviews state-of-the-art research solutions across the spectrum of medical
imaging informatics, discusses clinical translation, and provides future directions for
advancing clinical practice. More specifically, it summarizes advances in medical
imaging acquisition technologies for different modalities, highlighting the necessity
for efficient medical data management strategies in the context of AI in big healthcare
data analytics. It then provides a synopsis of contemporary and emerging algorithmic
methods for disease classification and organ/ tissue segmentation, focusing on AI and
deep learning architectures that have already become the de facto approach. The clini-
cal benefits of in-silico modelling advances linked with evolving 3D reconstruction and
visualization applications are further documented. Concluding, integrative analytics
approaches driven by associate research branches highlighted in this study promise
to revolutionize imaging informatics as known today across the healthcare continuum
for both radiology and digital pathology applications. The latter is projected to enable
informed, more accurate diagnosis, timely prognosis, and effective treatment planning,
underpinning precision medicine. [1].
Disease diagnosis is the identification of a health issue, disease, disorder, or other
condition that a person may have. Some disease diagnoses are straightforward tasks,
while others can be considerably more difficult. Large data sets are available; however,
there is a shortage of tools that can accurately determine the patterns and make
predictions. The traditional methods used to diagnose a disease are manual
and error-prone. Using Artificial Intelligence (AI) predictive techniques enables
automatic diagnosis and reduces detection errors compared to exclusive reliance on
human expertise. This paper reviews the literature of the period from January
2009 to December 2019. The study considered the eight most frequently used databases,
in which a total of 105 articles were found. A detailed analysis of those articles
was conducted in order to classify the most used AI techniques for medical diagnostic
systems. It further discusses various diseases along with the corresponding AI techniques,
including fuzzy logic, machine learning, and deep learning. The paper
aims to reveal important insights into current and previous AI
techniques used in today’s medical research, particularly in heart
disease prediction, brain disease, prostate, liver disease, and kidney disease. Finally,
the paper also provides some avenues for future research on AI-based diagnostic
systems based on a set of open problems and challenges. [2].
This paper reviews a network and training strategy that relies on the strong use of data
augmentation to use the available annotated samples more efficiently. The architecture
consists of a contracting path to capture context and a symmetric expanding path
that enables precise localization. 3D U-Net segmentation is an architecture based
on the Convolutional Neural Network (CNN), which is typically used to predict class
labels. In medical imaging, however, the desired output should be more than just a
classification; it should also include localization, where the class label
of each pixel is predicted by providing a local region around that pixel as input. [3].

Chapter 3

Image formation and acquisition

Biomedical imaging has revolutionized the practice of medicine with unprecedented


ability to diagnose disease through imaging the human body and high-resolution view-
ing of cells and pathological specimens. Broadly speaking, images are formed through
interaction of electromagnetic waves at various wavelengths (energies) with biological
tissues for modalities other than Ultrasound, which involves use of mechanical sound
waves. Images formed with high-energy radiation at shorter wavelength such as X-ray
and Gamma-rays at one end of the spectrum are ionizing whereas at longer wavelength
- optical and still longer wavelength - MRI and Ultrasound are nonionizing. [1]

Figure 3.1: Different modalities [1]

3.1 Common Modalities
The imaging modalities covered in this section are X-ray, ultrasound, magnetic
resonance (MR), X-ray computed tomography (CT), nuclear medicine, and high-
resolution microscopy.
X-ray imaging’s low cost and quick acquisition time have made it one of the
most commonly used imaging techniques. The image is produced by passing X-rays
generated by an X-ray source through the body and detecting the attenuated X-rays
on the other side via a detector array; the resulting image is a 2D projection with
resolutions down to 100 microns, where the intensities are indicative of the degree
of X-ray attenuation. To improve visibility, iodinated contrast agents that attenuate X-
rays are often injected into a region of interest (e.g., imaging arterial disease through
fluoroscopy).
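For reference, the intensity recorded at each detector element in such projection imaging follows the standard Beer-Lambert attenuation relation (a textbook formula added here for clarity; it is not stated in the source):

I = I_0 \exp\!\left(-\int_{L} \mu(s)\, ds\right)

where I_0 is the unattenuated source intensity, \mu(s) is the linear attenuation coefficient of the tissue at position s, and the integral is taken along the ray path L from the source to the detector element. Dense tissue such as bone has a larger \mu and therefore appears as a region of stronger attenuation in the projection.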
Ultrasound imaging (US) employs pulses in the range of 1–10 MHz to image tissue in
a noninvasive and relatively inexpensive way. The backscattering effect of the acoustic
pulse interacting with internal structures is used to measure the echo to produce the
image.
MR imaging produces high spatial resolution volumetric images primarily of Hy-
drogen nuclei, using an externally applied magnetic field in conjunction with radio-
frequency (RF) pulses which are non-ionizing.
X-ray CT imaging also offers volumetric scans like MRI. However, CT produces a 3D
image via the construction of a set of 2D axial slices of the body. Similar to MRI, 4D
scans are also possible by gating to the ECG and respiration. [1]

3.2 Other Modalities

3.2.1 Nuclear Medicine

Nuclear medicine is based on imaging gamma rays that are emitted through radioactive
decay of radioisotopes introduced into the body. The radioisotopes emit radiation that
is detected by an external camera before being reconstructed into an image. Single
photon emission computed tomography (SPECT) and positron emission tomography
(PET) are common techniques in nuclear medicine. Both produce 2D image slices
that can be combined into a 3D volume; however, PET imaging uses positron-emitting
radiopharmaceuticals that produce two gamma rays when a released positron meets a
free electron. This allows PET to produce images with higher signal-to-noise ratio and
spatial resolution compared to SPECT. The use of fluorodeoxyglucose in PET has
led to a powerful method for diagnosis and cancer staging.

3.2.2 Microscopy

Conventional microscopy uses the principle of transmission to view objects. The
emission of light at a different wavelength can help increase contrast in objects that
fluoresce, by filtering out the excitatory light and viewing only the emitted light –
called fluorescence microscopy. Two-photon fluorescence imaging uses two photons
of similar frequencies to excite molecules, which allows for deeper penetration of tissue
and lower phototoxicity (damage to living tissue caused by the excitation source).
These technologies have seen use in neuro and cancer imaging, among other areas.

3.2.3 TMA

TMA technology enables investigators to extract small cylinders of tissue from
histological sections and arrange them in a matrix configuration on a recipient
paraffin block such that hundreds can be analyzed simultaneously. TMA is now
recognized as a powerful tool, which can provide insight regarding the underlying
mechanisms of disease progression and patient response to therapy. With recent
advances in immuno-oncology, TMA technology is rapidly becoming indispensable
and is augmenting single-case slide approaches. TMAs can be imaged using the same
whole-slide scanning technologies used to capture images of single-case slides.

3.3 Challenges
The challenges and opportunities in the area of biomedical imaging include continuing
acquisitions at faster speeds and lower radiation dose in the case of anatomical imaging
methods. Variations in imaging parameters (e.g. in-plane resolution, slice thickness,

etc.) – which were not discussed – may have strong impacts on image analysis and
should be considered during algorithm development. Moreover, the prodigious amount
of imaging data generated creates a significant need for informatics in the storage
and transmission as well as in the analysis and automated interpretation of the data,
underpinning the use of big data science for improved utilization and diagnosis. [1]

Chapter 4

FAIR data repositories for reproducible, extensible and explainable research

Medical data are typically scattered across and within institutions in a poorly indexed
fashion, are not openly available to the research community, and are neither well curated
nor semantically annotated. Additionally, these data are typically semi- or un-structured,
adding a significant computational burden for making them data-mining ready. A
cornerstone for overcoming the aforementioned limitations relies on the establishment
of efficient, enterprise-wide clinical data repositories (CDR). CDRs can systematically
aggregate information arising from: (i) Electronic Health and Medical Records;
(ii) Radiology and Pathology archives; (iii) a wide range of genomic sequencing
devices, Tumor Registries, and Biospecimen Repositories; as well as (iv) Clinical Trial
Management Systems.
To effectively incorporate this information into the EHRs and achieve semantic
interoperability, it is necessary to develop and optimize software that endorses and
relies on interoperability profiles and standards. Such standards are defined by
Integrating the Healthcare Enterprise, Healthcare Interoperability Resources, and
Digital Imaging and Communications in Medicine. [1]
The FAIR guiding principles initiative attempts to overcome (meta)data availability issues by
establishing a set of recommendations towards making (meta)data findable,
accessible, interoperable, and reusable (FAIR). At the same time, privacy-preserving
data publishing (PPDP) is an active research area aiming to provide the necessary
means for openly sharing data. The objective of PPDP is to preserve patients’ privacy
while achieving the minimum possible loss of information. Sharing such data can
increase the likelihood of novel findings and replication of existing research results.
To accomplish the anonymization of medical imaging data, approaches such as k-
anonymity, l-diversity and t-closeness are typically used. [5]

4.1 K-Anonymity and L-Diversity


When releasing microdata, it is necessary to prevent the sensitive information of the
individuals from being disclosed. Two types of information disclosure have been
identified: identity disclosure and attribute disclosure. To effectively limit disclosure,
we need to measure the disclosure risk of an anonymized table. k-anonymity is
introduced as the property that each record is indistinguishable from at least k-1 other
records with respect to the quasi-identifier. In other words, k-anonymity requires that
each equivalence class contains at least k records. While k-anonymity protects against
identity disclosure, it is insufficient to prevent attribute disclosure. To address this
limitation of k-anonymity, a new notion of privacy, called l-diversity, was introduced,
which requires that the distribution of a sensitive attribute in each equivalence
class has at least l “well-represented” values.
The protection k-anonymity provides is simple and easy to understand. If a table
satisfies k-anonymity for some value k, then anyone who knows only the quasi-
identifier values of one individual cannot identify the record corresponding to that
individual with confidence greater than 1/k. While k-anonymity protects against identity
disclosure, it does not provide sufficient protection against attribute disclosure.
While the l-diversity principle represents an important step beyond k-anonymity in
protecting against attribute disclosure, it has several shortcomings: l-diversity may be
difficult and unnecessary to achieve, and it is also insufficient to prevent attribute
disclosure.
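As an illustration only (not part of the source text), the following Python sketch checks k-anonymity and distinct l-diversity on a small hypothetical anonymized table; the column names and values are made up for the example.

```python
import pandas as pd

# Hypothetical anonymized microdata: quasi-identifiers plus one sensitive attribute.
table = pd.DataFrame({
    "age_range":  ["20-29", "20-29", "20-29", "30-39", "30-39", "30-39"],
    "zip_prefix": ["695**", "695**", "695**", "682**", "682**", "682**"],
    "diagnosis":  ["flu", "flu", "cancer", "flu", "cancer", "hepatitis"],
})
quasi_identifiers = ["age_range", "zip_prefix"]
sensitive = "diagnosis"

groups = table.groupby(quasi_identifiers)

# k-anonymity: every equivalence class must contain at least k records.
k = int(groups.size().min())

# Distinct l-diversity: every class must contain at least l distinct sensitive
# values (a simple instance of the "well-represented" requirement).
l = int(groups[sensitive].nunique().min())

print(f"The table is {k}-anonymous and satisfies distinct {l}-diversity.")
```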

4.2 T-Closeness: A New Privacy Measure
Intuitively, privacy is measured by the information gain of an observer. Before seeing
the released table, the observer has some prior belief about the sensitive attribute value
of an individual. After seeing the released table, the observer has a posterior belief.
Information gain can be represented as the difference between the posterior belief and
the prior belief. The novelty of the t-closeness approach is that it separates the information
gain into two parts: that about the whole population in the released data and that about
specific individuals.
The t-closeness principle: an equivalence class is said to have t-closeness if the
distance between the distribution of a sensitive attribute in this class and the distribution
of the attribute in the whole table is no more than a threshold t. A table is said to have
t-closeness if all equivalence classes have t-closeness. [5]
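For illustration (again, not from the source), the sketch below computes the worst-case distance between a class distribution and the whole-table distribution of the sensitive attribute; total variation distance is used here only to keep the example short, whereas the original t-closeness proposal uses the Earth Mover's Distance.

```python
import pandas as pd

def worst_class_distance(table, quasi_identifiers, sensitive):
    """Largest distance between an equivalence-class distribution and the
    whole-table distribution of the sensitive attribute (total variation)."""
    overall = table[sensitive].value_counts(normalize=True)
    worst = 0.0
    for _, group in table.groupby(quasi_identifiers):
        in_class = group[sensitive].value_counts(normalize=True)
        distance = overall.subtract(in_class, fill_value=0.0).abs().sum() / 2.0
        worst = max(worst, float(distance))
    return worst

# Hypothetical example: the table satisfies t-closeness for any t >= this value.
demo = pd.DataFrame({
    "age_range": ["20-29"] * 3 + ["30-39"] * 3,
    "diagnosis": ["flu", "flu", "cancer", "flu", "cancer", "hepatitis"],
})
print(worst_class_distance(demo, ["age_range"], "diagnosis"))
```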

Figure 4.1: Efficiency of privacy measures [5]

Chapter 5

Processing, analysis and understanding

This section reviews the general field of image analysis and understanding, taking
radiology as an example.
Medical image analysis typically involves the delineation of the objects of interest
(segmentation) or the assignment of labels (classification). Examples include segmentation
of the heart for cardiology and identification of cancer for pathology. To date,
medical image analysis has been hampered by a lack of theoretical understanding of
how to optimally choose and process visual features. The recent advent of machine
learning approaches has provided good results in a wide range of applications. These
approaches attempt to learn the features of interest and optimize parameters based on
training examples. However, these methods are often difficult to engineer since they
can fail in unpredictable ways and are subject to bias or spurious feature identification
due to limitations in the training dataset. An important mechanism for advancing the
field is open-access challenges in which participants can benchmark methods on
standardized datasets. [2]

5.1 Fuzzy logic process


Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. The
approach of FL imitates the way decisions are made by humans, involving all
intermediate possibilities between the digital values YES and NO. It works on levels
of possibility of the input to achieve a definite output. [7]

Figure 5.1: Fuzzy logic system architecture [7]

It has four main parts as shown in Fig. 5.1:

• Fuzzification Module: It transforms the system inputs, which are crisp numbers, into fuzzy sets.

• Knowledge Base: It stores the IF-THEN rules provided by experts.

• Inference Engine: It simulates the human reasoning process by making fuzzy inference on the inputs and the IF-THEN rules.

• Defuzzification Module: It transforms the fuzzy set obtained by the inference engine into a crisp value.

Fuzzy logic is counted among the techniques of AI, where intelligent
behavior is achieved by creating fuzzy classes of some parameters. The rules and
criteria are understandable by humans. These rules and the fuzzy classes are mostly
defined by a domain expert; therefore, a great deal of human intervention is required
in fuzzy logic. The actual processing of data essentially provides a representation of the
information in fuzzy terms. Such a representation can also be obtained using machine
learning in the medical field, often even better than with hand-crafted fuzzy logic. [7]
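A minimal Python sketch of the fuzzy inference pipeline of Fig. 5.1 is given below; the membership functions, rules and the temperature-to-risk mapping are invented purely to illustrate the four modules and are not taken from the source.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_risk(temperature_c):
    # Fuzzification: map the crisp temperature onto fuzzy sets.
    normal = triangular(temperature_c, 35.0, 36.8, 38.0)
    fever = triangular(temperature_c, 37.5, 39.5, 41.5)

    # Knowledge base + inference engine, two hypothetical IF-THEN rules:
    #   IF temperature is normal THEN risk is low
    #   IF temperature is fever  THEN risk is high
    risk_axis = np.linspace(0.0, 1.0, 101)
    low = np.minimum(triangular(risk_axis, 0.0, 0.25, 0.5), normal)
    high = np.minimum(triangular(risk_axis, 0.5, 0.75, 1.0), fever)
    aggregated = np.maximum(low, high)

    # Defuzzification: centroid of the aggregated output fuzzy set.
    if aggregated.sum() == 0.0:
        return 0.0
    return float((risk_axis * aggregated).sum() / aggregated.sum())

print(fuzzy_risk(38.5))  # crisp risk score between 0 and 1
```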

Figure 5.2: Fuzzy Learning model [2]

5.2 Machine learning system


Machine learning has granted computer systems new abilities that we could never have
thought of. Machine learning is a field of AI that gives machines the power to learn by
themselves from examples, in order to analyze how different models perform without
relying solely on human judgment. The working of ML is explained step by step as follows:
Figure 5.3: Machine learning system [2]

1) Data Collection: The very first step is to collect data. It is a very critical step, as
quality and quantity affect the overall performance of the system. Basically, it is a
process of gathering data on targeted variables.
2) Data Preparation: After the collection of data, the second step is data preprocessing.
It is a process to change raw data into useful data, on which a decision could be made.
This process is also called data cleaning.
3) Choose a Model: To represent the preprocessed data in a model, one chooses an
appropriate algorithm according to the task.
4) Train the Model: ML uses supervised learning to train a model to increase the
accuracy of decision making or prediction.
5) Evaluate the Model: To evaluate the model, a number of parameters are needed. The
parameters are derived from the defined objectives. One also needs to compare the
performance of the model with previous ones.
6) Parameter Tuning: This step may include tuning the number of training steps, perfor-
mance, outcome, learning rate, initialization values and distribution, etc.
7) Make Predictions: To evaluate the developed model against the real world, it is
indispensable to predict outcomes on the test dataset. If the outcome matches the
domain expert's opinion, or is near to it, then the model can be used for further
predictions. [2]
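The following scikit-learn sketch maps these seven steps onto a compact workflow; it is only an illustration using a bundled breast-cancer dataset as a stand-in for clinical data, not a pipeline described in the source.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1) Data collection
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 2) Data preparation and 3) model choice, combined in one pipeline
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# 4) Training and 6) parameter tuning via cross-validated grid search
search = GridSearchCV(model, {"logisticregression__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# 5) Evaluation and 7) prediction on held-out data
predictions = search.predict(X_test)
print("held-out accuracy:", accuracy_score(y_test, predictions))
```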

5.3 Deep learning for segmentation and classification
Deep learning based segmentation of anatomy and pathology has witnessed a revolu-
tion, where for some tasks we now observe human-level performance. The major draw
of deep learning and convolutional architectures is the ability to learn suitable features
and decision functions in tandem.
Like segmentation, classification tasks have also benefited from CNNs. Many of
the network architectures that have been proven on the ImageNet image classification
challenge have seen reuse for medical imaging tasks by fine-tuning previously trained
layers. Ensembles of pre-trained models can also be fine-tuned to achieve strong
performance, as demonstrated in the literature.
Overall, irrespective of the training strategy used, classification tasks in medical
imaging are dominated by some formulation of a CNN – often with fully-connected
layers at the end to perform the final classification. With bountiful training data, CNNs
can often achieve state-of-the-art performance. However, deep learning methods
generally suffer when training data are limited. As discussed, transfer learning has been
beneficial in coping with scant data, but the continued availability of large, open
datasets of medical images will play a big part in strengthening classification tasks
in the medical domain. [1]

5.3.1 U-Net segmentation

3D U-Net segmentation is an architecture based on the Convolutional Neural Network
(CNN), which is typically used to predict class labels. However, in medical imaging, the
desired output should be more than just a classification. It should also include localization,
where the class label of each pixel is predicted by providing a local region around
that pixel as input. [4]

The U-Net is simple in its conception: an encoder-decoder network that goes


through a bottleneck but contains skip connections from encoding to decoding layers.
The skip connections allow the model to be trained even with few input data and
offer highly accurate segmentation boundaries, albeit perhaps at the “loss” of a clearly
determined latent space. While the original U-Net was 2D, the 3D U-Net was proposed
in 2016, allowing full volumetric processing of imaging data while maintaining the
same principles as the original U-Net. [3]

Figure 5.4: U-Net Segmentation [4]
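To make the encoder-decoder idea concrete, here is a deliberately tiny PyTorch sketch with a single down/up level and one skip connection; channel counts, depth and input sizes are illustrative assumptions, and a practical 3D U-Net uses several levels plus normalization layers.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_channels=4, num_classes=3):
        super().__init__()
        self.enc = conv_block(in_channels, 16)                 # contracting path
        self.down = nn.MaxPool3d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)                          # expanding path
        self.head = nn.Conv3d(16, num_classes, kernel_size=1)  # per-voxel scores

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)   # skip connection: concatenate features
        return self.head(self.dec(x))

# One 64^3 patch with four MRI modalities: (batch, channels, depth, height, width).
logits = TinyUNet3D()(torch.randn(1, 4, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 3, 64, 64, 64])
```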

Chapter 6

Visualization and Navigation

6.1 Biomedical 3D Reconstruction and Visualization


Three-dimensional (3D) reconstruction concerns the detailed 3D surface generation
and visualization of specific anatomical structures, such as arteries, vessels, organs,
body parts and abnormal morphologies, e.g. tumors, lesions, injuries, scars and
cysts. It entails meshing and rendering techniques for completing the
seamless boundary surface, generating the volumetric mesh, followed by smoothing
and refinement. By enabling precise positioning and orientation of the patient’s anatomy,
3D visualization can contribute to the design of aggressive surgery and radiotherapy
strategies, with realistic testing and verification, with extensive applications in spinal
surgery, joint replacement, neuro-interventions, as well as coronary and aortic stenting.
Furthermore, 3D reconstruction constitutes the necessary step towards biomedical
modeling of organs, dynamic functionality, diffusion processes, hemodynamic flow
and fluid dynamics in arteries, as well as mechanical loads and properties of body
parts, tumors, lesions and vessels, such as wall/shear stress and strain and tissue
displacement.
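As a small illustration of the surface-generation step (an assumption about tooling, not a pipeline from the source), the sketch below extracts a triangle mesh from a binary segmentation volume with marching cubes; smoothing and refinement would then be applied with a dedicated mesh library.

```python
import numpy as np
from skimage import measure

# Hypothetical binary segmentation: a sphere standing in for a segmented organ.
z, y, x = np.mgrid[:64, :64, :64]
mask = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) < 20 ** 2

# Marching cubes turns the voxel mask into a boundary surface (triangle mesh).
verts, faces, normals, values = measure.marching_cubes(mask.astype(np.float32), level=0.5)
print(f"surface mesh: {len(verts)} vertices, {len(faces)} triangles")
```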
The reconstruction process necessitates expert knowledge and guidance. However,
this is particularly time consuming and hence not applicable in the analysis of larger
numbers of patient-specific cases. For those situations, automatic segmentation and
reconstruction systems are needed. The biggest problem with automatic segmentation
and 3D reconstruction is the inability to fully automate the segmentation process,
because of different imaging modalities, varying vessel geometries, and the quality
of source images. Processing of large numbers of images requires fast algorithms for
segmentation and reconstruction. There are several ways to overcome this challenge,
such as parallel algorithms for segmentation and the application of neural networks,
the use of multiscale processing techniques, as well as the use of multiple computer
systems where each system works on an image in real time. [1]

6.2 In Silico Modeling of Malignant Tumors


Applications of in-silico models are evolving rapidly in early diagnosis and prognosis,
personalized therapy planning, noninvasive and invasive interactive treatment,
as well as planning of pre-operative stages, chemotherapy and radiotherapy. The
potential of inferring reliable predictions of macroscopic tumor growth is of
paramount importance to clinical practice, since the tumor progression dynamics
can be estimated under the effect of several factors and the application of alternative
therapeutic schemes. [1]

Figure 6.1: In Silico Modelling Paradigm [1]

6.3 Digital twin
A digital twin is a virtual replica of a complex system to help manage performance,
production, and costs. It is powered by a computer program that uses real-world data
to model a product and produces digital output that mimics the physical behavior of
that object.
Figure 6.2: Immune Digital Twin [6]

In general, digital twin uses and applications benefit not only from CAD reconstruction
tools but also engage dynamic modelling stemming from either theoretical
developments or real-life measurements, merging the Internet of Things with artificial
intelligence and data analytics. In this form, the digital equivalent of a complex human
functional system enables the consideration of event dynamics, such as tumour growth
or information transfer in an epilepsy network, as well as a systemic response to therapy,
such as response to pharmacogenomics or targeted radiotherapy.

Since the digital twin can incorporate modelling at different resolutions, from organ
structure to cellular and genomic level, it may enable complex simulations with the
use of AI tools to integrate huge amounts of data and knowledge aiming at improved
diagnostics and therapeutic treatments, without harming the patient. Furthermore, such
a twin can also act as a framework to support human-machine collaboration in testing
and simulating complex invasive operations without even engaging the patient. [6]

Chapter 7

Analysis of radiogenomics system and


3D-Unet

Figure 7.1: Radiogenomics system [1]

Radiomics research has emerged as a non-invasive approach of significant prognostic
value. Through the construction of imaging signatures (i.e., fusing shape, texture,
morphology, intensity, etc., features) and their subsequent association with clinical
outcomes, robust predictive models (or quantitative imaging biomarkers) can be
devised. Incorporating longitudinal and multi-modality radiology and pathology
image features further enhances the discriminatory power of these models. A dense
literature demonstrates the potentially transformative impact of radiomics for staging
different diseases such as cancer, neurodegenerative, and cardiovascular diseases. Going
one step further, radiogenomics methods extend radiomics approaches by investigating
the correlation between, for example, a tumor’s characteristics in terms of quantitative
imaging features and its molecular and genetic profiling.
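The notion of an imaging signature can be illustrated with the toy sketch below, which computes a handful of hand-rolled shape, intensity and texture-proxy features for a segmented tumor; real radiomics pipelines compute hundreds of standardized features, and every name here is a placeholder rather than something taken from the source.

```python
import numpy as np

def imaging_signature(volume, mask, voxel_volume_mm3=1.0):
    """Return a small feature vector for the voxels inside the tumor mask."""
    tumor = volume[mask > 0]
    hist, _ = np.histogram(tumor, bins=32)
    p = hist / max(hist.sum(), 1)
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())        # crude texture proxy
    return {
        "volume_mm3": float((mask > 0).sum() * voxel_volume_mm3),  # shape / size
        "mean_intensity": float(tumor.mean()),                     # first-order intensity
        "intensity_std": float(tumor.std()),
        "histogram_entropy": entropy,
    }

# Demo on synthetic data; per-patient signatures like this are then associated
# with outcomes or genomic profiles by a downstream predictive model.
rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(imaging_signature(volume, mask))
```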
The advent of radiogenomics research is closely aligned with associated advances
in inter- and multi-institutional collaboration and the establishment of well-curated,
FAIR-driven repositories that encompass the substantial amounts of semantically
annotated (big) data, underpinning precision medicine. [1]

7.1 3D U-Net inferences


In this experiment, the BraTS 2017 dataset for brain tumors is used. The
differentiation between low-grade gliomas (LGGs; grade II) and high-grade gliomas
(HGGs; grades III, IV) is critical, since the prognosis and thus the therapeutic strategy
can differ substantially depending on the grade. Glioblastoma (GBM) is a highly
aggressive (grade IV) type of brain tumor, with cells that reproduce quickly, nourished
by a large network of blood vessels.
The data are then preprocessed. The preprocessing basically normalizes the image
intensities, corrects the bias field, and writes out a new image.
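A minimal sketch of this kind of per-volume preprocessing is shown below, assuming skull-stripped NIfTI inputs as in BraTS and using simple z-score intensity normalization; bias-field correction (e.g. N4) would be applied separately with a dedicated tool, and the file paths are hypothetical.

```python
import numpy as np
import nibabel as nib

def normalize_volume(in_path, out_path):
    """Z-score normalize the nonzero (brain) voxels of one MRI modality."""
    image = nib.load(in_path)
    data = image.get_fdata().astype(np.float32)
    brain = data > 0                               # BraTS volumes are skull-stripped
    mean, std = data[brain].mean(), data[brain].std()
    data[brain] = (data[brain] - mean) / (std + 1e-8)
    nib.save(nib.Nifti1Image(data, image.affine), out_path)

# normalize_volume("subject_t1.nii.gz", "subject_t1_norm.nii.gz")  # hypothetical paths
```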
Before training, the configuration must be adapted to the purpose of the
experiment. The pooling size must be at least two in each dimension. The
labels represent the class labels of the medical images, such as whole tumor,
enhancing tumor, etc. Four modalities are used by default; some can be removed
for a specific purpose.
The function used fetches all training data files to separate the training modalities in
the subject list.
The predictions are written to the prediction folder along with the input data
and ground-truth labels for comparison. For evaluating the model, running the
script evaluate.py in the brats folder produces both the loss graph and the box
plot. [4]

Figure 7.2: Loss Graph and Box Plot [4]

Chapter 8

Conclusion

Medical imaging informatics has been driving clinical research, translation, and
practice for over three decades. Advances in the associated research branches highlighted
in this study promise to revolutionize imaging informatics as known today across the
healthcare continuum, enabling informed, more accurate diagnosis, timely prognosis,
and effective treatment planning. Imaging researchers are also faced with challenges in
the data management, indexing, querying and analysis of digital pathology data. One of the
main challenges is how to manage relatively large-scale, multi-dimensional data sets
that will continue to expand over time, since it is unreasonable to exhaustively compare
the query data with each sample in a high-dimensional database due to practical storage
and computational bottlenecks. The second challenge is how to reliably interrogate
the characteristics of data originating from multiple modalities. For that purpose,
investigating the association between imaging and -omics features is of paramount
importance towards constructing advanced multicompartment models that will be able
to accurately portray the proliferation and diffusion of various cell-type subpopulations.
In conclusion, medical imaging informatics advances are projected to elevate the
quality of care levels witnessed today, once innovative solutions along the lines of the
selected research endeavors presented in this study are adopted in clinical practice,
thus potentially transforming precision medicine.

References

[1] “AI in Medical Imaging Informatics: Current Challenges and Future Directions,”
in IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 7, July 2020.

[2] “Medical Diagnostic Systems Using Artificial Intelligence (AI) Algorithms:
Principles and Perspectives,” in IEEE Access, 2020, DOI: 10.1109/ACCESS.2020.3042273.

[3] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for
biomedical image segmentation,” in Proc. Medical Image Computing and Computer-Assisted
Intervention (MICCAI), N. Navab, J. Hornegger, W. Wells, and A. Frangi, Eds.,
Lecture Notes in Computer Science, vol. 9351, Cham: Springer, 2015.

[4] 3D U-Net segmentation (online), https://miccai-sb.github.io/materials/Hoang2019b.pdf,
accessed 25 October 2022.

[5] t-closeness (online), https://www.utdallas.edu/~mxk055100/courses/privacy08f_files/tcloseness.pdf,
accessed 25 November 2022.

[6] Building digital twins of the human immune system: toward a roadmap (online),
https://www.nature.com/articles/s41746-022-00610-z, accessed 25 November 2022.

[7] Artificial Intelligence - Fuzzy Logic Systems (online),
https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_fuzzy_logic_systems,
accessed 23 November 2022.
