
Basic definitions of ML:

Machine Learning is a type of Artificial Intelligence that provides computers with the ability to
learn without being explicitly programmed.

- Supervised Learning: Learning with a labelled training set. Example: email spam
detector with training set of already labelled emails.
- Unsupervised Learning: Discovering patterns in unlabelled data. Example: cluster
similar documents based on the text content.
- Reinforcement Learning: Learning based on feedback or reward. Example: learn to play
chess by winning or losing.
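As a minimal illustration of supervised learning with a labelled training set, here is a hypothetical toy spam detector (the word list, emails, and nearest-centroid approach are illustrative assumptions, not from the notes):

```python
# Toy supervised learning: a nearest-centroid spam detector.
# Each email is reduced to one feature (count of "spammy" words);
# training computes one centroid per class, and prediction picks
# the nearest centroid. Purely illustrative, hypothetical data.

SPAMMY = {"free", "winner", "prize", "urgent"}

def featurize(email: str) -> float:
    return float(sum(w in SPAMMY for w in email.lower().split()))

def train(labelled):
    # labelled: list of (email_text, is_spam) pairs.
    feats = {True: [], False: []}
    for email, is_spam in labelled:
        feats[is_spam].append(featurize(email))
    return {cls: sum(vals) / len(vals) for cls, vals in feats.items()}

def predict(centroids, email):
    x = featurize(email)
    return min(centroids, key=lambda cls: abs(x - centroids[cls]))

training_set = [
    ("free prize winner urgent", True),
    ("winner free prize", True),
    ("meeting agenda attached", False),
    ("see you at lunch", False),
]
model = train(training_set)
print(predict(model, "claim your free prize"))  # → True
```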

Deep machine learning (e.g. convolutional neural networks): when we don't know what
features we're looking for, the model learns which features to use and how to use them.

Neural networks consist of layers of connected 'neurons': one input layer, one output layer,
and multiple fully-connected hidden layers in between. Each layer is a series of neurons and
progressively extracts higher- and higher-level features of the input, until the final layer
essentially decides what the input shows. CNNs are a type of neural network optimized for
image pattern recognition: they learn a complex representation of visual data using vast
amounts of data. They are inspired by the human visual system and learn multiple layers of
transformations, applied on top of each other, to extract a progressively more sophisticated
representation of the input.
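The layered structure described above can be sketched as a minimal fully-connected forward pass (the weights and layer sizes are arbitrary illustrative values, not a trained network):

```python
# Minimal fully-connected forward pass: each layer multiplies by a
# weight matrix, adds a bias, and applies a ReLU; the output layer
# makes the final decision. Weights are illustrative assumptions.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(weights, biases, v):
    # One layer: out[j] = sum_i weights[j][i] * v[i] + biases[j]
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

def forward(layers, v):
    *hidden, (w_out, b_out) = layers
    for w, b in hidden:
        v = relu(dense(w, b, v))      # hidden layers extract features
    return dense(w_out, b_out, v)     # output layer decides

layers = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),   # hidden layer (2 -> 2)
    ([[1.0, 1.0]],              [0.1]),        # output layer (2 -> 1)
]
print(forward(layers, [2.0, 1.0]))  # → [2.6]
```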

In this way, using deep learning you avoid feature engineering.

Interpretability/explainability:

Interpretability refers to the ability to understand and make sense of how an AI model arrives
at its predictions or decisions. It focuses on providing insights into the internal workings of the
AI system, such as the factors and features it considers most important when making
predictions. An interpretable AI model allows clinicians and researchers to gain insights into the
model's decision-making process and understand the reasoning behind its outputs  We can
use attention maps.

Explainability, on the other hand, goes beyond interpretability. It not only seeks to understand
how the AI model arrives at its predictions but also aims to provide meaningful explanations
that can be understood by humans. Explainability focuses on producing clear and
comprehensible explanations for the model's predictions, taking into account the context and
requirements of the end-users. These explanations should help users understand the reasons
behind the model's decisions and build trust in its predictions.

Concepts of data augmentation, overfitting, training, etc.:

Data Augmentation: Data augmentation is a technique used to artificially increase the size of a
training dataset by applying various transformations or modifications to the existing data. This
approach helps to diversify the training data and can improve the generalization and
robustness of machine learning models.
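A minimal sketch of the idea: generating transformed copies of an existing sample to enlarge the training set (the tiny 2x2 "image" and the flip-only transformations are illustrative; real pipelines also use rotations, crops, noise, etc.):

```python
# Toy data augmentation: produce flipped variants of a tiny
# "image" (a 2D list of pixel values) to enlarge the dataset.

def hflip(img):
    # Mirror each row left-to-right.
    return [row[::-1] for row in img]

def vflip(img):
    # Reverse the order of the rows (top-to-bottom mirror).
    return img[::-1]

def augment(img):
    # One original plus three flipped variants.
    return [img, hflip(img), vflip(img), hflip(vflip(img))]

image = [[1, 2],
         [3, 4]]
dataset = augment(image)
print(len(dataset))  # → 4
```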

Overfitting occurs when a machine learning model performs well on the training data but fails
to generalize to new, unseen data. It happens when the model becomes too complex and starts
to memorize the noise or random variations in the training set. Signs of overfitting include high
training accuracy but poor performance on the validation or test set. Regularization techniques
such as L1 and L2 regularization can help mitigate overfitting.
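The effect of L2 regularization can be seen in a one-dimensional least-squares fit, where the penalty shrinks the fitted weight toward zero (a sketch with made-up data, not a full regularized training loop):

```python
# 1-D ridge regression: minimise sum((y - w*x)^2) + lam * w^2.
# Closed-form solution: w = sum(x*y) / (sum(x^2) + lam).
# A larger lam shrinks w toward zero, discouraging the model from
# chasing noise in the training data. Data values are illustrative.

def ridge_weight(xs, ys, lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.1, 5.9]                 # roughly y = 2x, with noise

w_free = ridge_weight(xs, ys, lam=0.0)    # ordinary least squares
w_reg = ridge_weight(xs, ys, lam=10.0)    # regularized: smaller |w|
print(w_free, w_reg)
```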

Biases in AI: Ethics, regulation, data sharing, anonymisation:

Ethical Bias in Healthcare AI: Ethical biases in healthcare AI can arise when AI systems
inadvertently amplify existing healthcare disparities or reflect biases present in the data used
for training. For example, if historical healthcare data contains unequal representation of
certain demographic groups, the AI system may produce biased recommendations or
diagnoses that disproportionately affect those groups. Addressing ethical biases in healthcare
AI is crucial to ensure fair and equitable healthcare delivery for all individuals, regardless of
their backgrounds.

Regulatory Bias in Healthcare AI: Regulatory biases can occur in the absence of clear guidelines
and regulations for the development and deployment of AI in healthcare. Inadequate
regulation can result in the misuse or unethical use of AI systems, leading to privacy breaches,
inappropriate data handling, or biased decision-making. Establishing comprehensive
regulations specific to healthcare AI is essential to safeguard patient privacy, prevent bias, and
ensure the responsible and ethical use of AI technology in healthcare settings.

Data Sharing Bias in Healthcare AI: Data sharing biases in healthcare AI arise when there are
limitations in accessing diverse and representative healthcare datasets. Limited data sharing
hampers the development of AI models that can generalize across diverse patient populations
or clinical conditions. It can lead to biased predictions or recommendations that do not account
for the full spectrum of patients. Encouraging data sharing initiatives, while ensuring privacy
and security, can help overcome data sharing biases in healthcare AI and promote more robust
and inclusive models.

Anonymization Bias in Healthcare AI: Anonymization biases in healthcare AI occur when
attempts to de-identify or anonymize patient data are insufficient, posing privacy risks or the
potential for re-identification. Inadequate anonymization may result in biases in AI models,
compromising patient privacy or violating data protection regulations. Applying strong
anonymization techniques, such as differential privacy or secure aggregation, is crucial to
mitigate anonymization biases and maintain patient confidentiality in healthcare AI
applications.
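One of the techniques mentioned above, differential privacy, can be sketched as adding calibrated Laplace noise to an aggregate query before release (a simplified illustration; the patient records, epsilon value, and query are all hypothetical):

```python
import math
import random

# Differential privacy sketch: release a noisy count instead of the
# exact count. For a counting query the sensitivity is 1 (adding or
# removing one patient changes the count by at most 1), so the noise
# scale is sensitivity / epsilon. Smaller epsilon -> more noise ->
# stronger privacy. All data and parameters are illustrative.

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng):
    true_count = sum(predicate(r) for r in records)
    return true_count + laplace_noise(1.0 / epsilon, rng)

patients = [{"age": a} for a in (34, 51, 67, 72, 45)]
rng = random.Random(0)  # fixed seed so the sketch is reproducible
noisy = private_count(patients, lambda p: p["age"] > 50, 1.0, rng)
print(round(noisy, 2))  # close to the true count of 3, but perturbed
```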

Federated learning: Data sources for machine learning are distributed across multiple
locations; models are trained where the data resides, and only model updates (not the raw
data) are shared and aggregated.
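A hypothetical sketch of the core mechanism, federated averaging: each site fits a model on its local data, and a central server averages the weights, weighted by local dataset size; no raw patient data leaves a site. The hospitals, data, and 1-D model are illustrative assumptions:

```python
# Federated averaging sketch: three "hospitals" each fit a local
# 1-D linear model y = w*x on their own data; the server combines
# only the fitted weight and a sample count from each site.

def local_fit(xs, ys):
    # Least-squares slope through the origin.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def federated_average(site_updates):
    # site_updates: list of (weight, n_samples) pairs.
    total = sum(n for _, n in site_updates)
    return sum(w * n for w, n in site_updates) / total

sites = [
    ([1.0, 2.0], [2.0, 4.0]),   # hospital A's local data
    ([1.0, 3.0], [2.2, 5.8]),   # hospital B's local data
    ([2.0],      [4.4]),        # hospital C's local data
]
updates = [(local_fit(xs, ys), len(xs)) for xs, ys in sites]
global_w = federated_average(updates)
print(round(global_w, 3))  # → 2.024
```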

Clinical translation of computational tools:

The clinical translation of computational tools refers to the practical application of
computational tools, such as artificial intelligence (AI) algorithms and computational models, in
the context of clinical practice and decision-making.

Computational tools have the potential to assist healthcare professionals in various aspects of
clinical care, including diagnosis, treatment planning, prognosis prediction, risk assessment,
and patient monitoring. These tools leverage large datasets, advanced algorithms, and machine
learning techniques to analyse complex medical data and provide valuable insights and support
for clinical decision-making.

In the process of clinical translation, computational tools are developed, validated, and
integrated into clinical workflows to ensure their reliability, accuracy, and usability in real-world
healthcare settings. This involves rigorous testing, validation against gold standards or expert
opinions, and assessment of their performance in diverse patient populations. Additionally,
ensuring the safety, privacy, and security of patient data is crucial throughout the translation
process.

Clinical translation of computational tools holds great promise in enhancing clinical practice by
augmenting healthcare professionals' abilities to make accurate diagnoses, personalize
treatment plans, optimize resource allocation, and improve patient outcomes. However, it is
important to recognize that the successful integration of these tools into clinical practice
requires collaboration between computer scientists, clinicians, researchers, and regulatory
bodies to address technical challenges, regulatory requirements, and ethical considerations.
Overall, the clinical translation of computational tools aims to harness the power of technology
and data-driven approaches to improve healthcare delivery, enhance clinical decision-making,
and ultimately provide better patient care.

Basic definitions of models:

A model is a simplification of reality that tries to describe how the objects of interest behave.

The best model is the simplest model that still serves its purpose.

Concepts of Digital Twin in healthcare, in-silico trials, ASME V&V40:

Digital Twin in Healthcare: A digital twin is a virtual representation of a real-world object,
process, or system. In healthcare, a digital twin refers to a digital replica of a patient or a
healthcare system that captures and simulates various aspects of their physiology, anatomy, or
behaviour. It allows for real-time monitoring, analysis, and prediction of health conditions,
aiding in personalized medicine, treatment optimization, and remote patient monitoring.

In-silico Trials: In-silico trials involve the use of computational modelling and simulation
techniques to simulate and predict the outcomes of medical interventions, treatments, or
therapies. By using virtual representations and computational models, researchers can test and
evaluate the effectiveness and safety of new treatments or interventions before conducting
physical trials. In-silico trials can accelerate the development of medical therapies, reduce
costs, and minimize risks associated with traditional clinical trials.

ASME V&V40: ASME V&V 40 is the standard "Assessing Credibility of Computational Modeling
through Verification and Validation: Application to Medical Devices," developed by the
American Society of Mechanical Engineers (ASME). It provides a risk-informed framework and
best practices for the verification and validation of computational models and simulations,
focusing on assessing the accuracy, reliability, and credibility of a model to ensure it is suitable
for its context of use and the decisions it supports.

Model credibility requirements under ASME V&V40 are determined by:

- Question of interest (the task to be performed)
- Context of use (who will use the model, and how)
- Model risk (model influence and decision consequence)

Verify: check that the equations are being solved correctly.

Validate: check that the model represents reality in an accurate way.

Sensitivity analysis: analysis of how a model output changes with respect to small changes in
its parameters.
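The sensitivity analysis idea can be sketched with a central finite-difference estimate of how a model output changes with a small parameter perturbation (the exponential-decay "model" and its parameter values are toy assumptions):

```python
import math

# Finite-difference sensitivity: estimate d(output)/d(parameter)
# by perturbing one parameter slightly and re-running the model.
# Toy model: exponential decay y(t) = y0 * exp(-rate * t) at t = 1.

def model(rate, t=1.0, y0=10.0):
    return y0 * math.exp(-rate * t)

def sensitivity(f, param, eps=1e-6):
    # Central difference: (f(p + eps) - f(p - eps)) / (2 * eps).
    return (f(param + eps) - f(param - eps)) / (2 * eps)

s = sensitivity(model, 0.5)
print(round(s, 4))  # analytic value is -y0 * t * exp(-rate*t) ≈ -6.0653
```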

Multi-scale electrophysiological models:

- Molecular Dynamics
- Channel scale
- Cell Scale
- Tissue propagation
- Organ scale
- Organism scale

ECG main characteristics:

Integration of ML and biophysical modelling:

Machine learning can help constrain the space of potential solutions for a state variable: by
adding a physics-based loss term to the training objective, the model is steered toward
solutions consistent with the underlying biophysics.
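A minimal sketch of this loss-term idea: combine a data-mismatch loss with a penalty on the residual of a governing equation, so physics-inconsistent candidate solutions score worse (the decay equation dy/dt = -k*y, the observations, and all parameter values are assumptions for illustration):

```python
# Physics-constrained loss sketch:
#   total loss = data mismatch + lam * physics residual
# The physics term penalises candidate trajectories that violate
# the assumed governing equation dy/dt = -k * y.

def data_loss(y_pred, y_obs):
    return sum((p - o) ** 2 for p, o in zip(y_pred, y_obs))

def physics_residual(y_pred, dt, k):
    # Finite-difference check of dy/dt + k*y = 0 between time steps.
    res = 0.0
    for i in range(len(y_pred) - 1):
        dydt = (y_pred[i + 1] - y_pred[i]) / dt
        res += (dydt + k * y_pred[i]) ** 2
    return res

def total_loss(y_pred, y_obs, dt, k, lam):
    return data_loss(y_pred, y_obs) + lam * physics_residual(y_pred, dt, k)

y_obs = [1.0, 0.9, 0.82]                  # noisy observations
good = total_loss([1.0, 0.9, 0.81], y_obs, dt=1.0, k=0.1, lam=10.0)
bad = total_loss([1.0, 1.1, 0.5], y_obs, dt=1.0, k=0.1, lam=10.0)
print(good < bad)  # the physics-consistent candidate scores lower
```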

Basic concepts of visual analytics:

Integrates scientific and information visualisation with other disciplines such as data mining,
ML or statistics, in highly interactive environments that support data exploration and analysis.
