AI in Medicine?
By Emily Johnson
September 15, 2020
Currently, the most common roles of AI in the medical field are clinical decision support
and image analysis. Clinical decision support tools assist healthcare providers in making
decisions about treatments, medications, mental health, and other patient needs by
providing them with quick access to relevant information or research specific to their
patient. In the field of medical imaging, AI tools are used to analyze CT scans, X-rays,
MRIs, and other images to detect lesions or other findings that a human radiologist may
not identify.
The challenges that the COVID-19 pandemic has created for many healthcare systems
have also led numerous healthcare organizations worldwide to begin field-testing new
AI-assisted technologies, such as algorithms for patient monitoring and AI-optimized
tools for screening COVID-19 patients.
Research and results from these tests are still being collected, and general standards for
the use of AI in medicine are still being defined. Nevertheless, the possibilities for AI to
benefit clinicians, researchers, and the patients they serve are continuously expanding. At
this stage, there is little doubt that AI will become a central component of digital
healthcare systems that shape and support modern medicine.
Unlike humans, AI never needs to sleep. Machine learning models could be used to
monitor vital signs of patients receiving intensive care and alert clinicians if certain risk
factors increase. While medical devices like heart monitors can track vital signs, AI can
gather data from these devices and look for more complex conditions, such as sepsis. One
IBM client has developed a predictive AI model for premature babies that detects severe
sepsis with 75% accuracy.
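To make the idea concrete, here is a minimal sketch of this kind of alerting loop. The vital-sign features, the model, and the risk cutoff are illustrative assumptions for the example, not the IBM client's actual system.

```python
# Minimal sketch of AI-assisted vital-sign monitoring with an alert threshold.
# Data, features, and cutoff are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [heart_rate, resp_rate, temperature_C, lactate_mmol_per_L]
X = rng.normal([90, 18, 37.0, 1.5], [15, 4, 0.6, 0.8], size=(500, 4))
# Toy label: elevated heart rate and respiration, or high lactate, counts as high risk.
y = ((X[:, 0] > 100) & (X[:, 1] > 22) | (X[:, 3] > 2.5)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

ALERT_THRESHOLD = 0.7  # assumed cutoff; in practice tuned with clinicians per unit

def check_patient(vitals):
    """Score one bedside reading and decide whether to alert a clinician."""
    risk = model.predict_proba(np.array(vitals).reshape(1, -1))[0, 1]
    return risk, risk >= ALERT_THRESHOLD

risk, alert = check_patient([118, 26, 38.4, 3.1])
print(f"predicted risk {risk:.2f}; alert clinician: {alert}")
```

In a real deployment the model would be trained and validated on labeled clinical records, and the threshold chosen to balance missed cases against alarm fatigue.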
Precision medicine could become more manageable with the virtual assistance of AI. As
AI models can learn and retain preferences, AI has the potential to provide real-time
personalized recommendations to patients, 24/7. Instead of requiring patients to repeat
information to a new caregiver each time, a healthcare system could offer them permanent
access to an AI-powered virtual assistant that responds to inquiries based on each
patient's medical history, preferences, and personal needs.
AI already plays a leading role in medical imaging. Research has shown that AI powered
by artificial neural networks can be just as effective as human radiologists in detecting
signs of breast cancer and other diseases. In addition to assisting doctors in identifying
early signs of illness, AI can also help manage the staggering number of medical images
that physicians must store by detecting key elements of a patient's history and
presenting relevant images to them.
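As a rough illustration, the sketch below shows the general shape of such an image classifier. The architecture, input size, and two-class output are assumptions made for the example, not a specific published model.

```python
# Illustrative sketch of a convolutional classifier for medical images.
# Real systems are trained and validated on large, labeled radiology datasets.
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One grayscale 224x224 scan (random pixels stand in for a real image).
scan = torch.randn(1, 1, 224, 224)
logits = LesionClassifier()(scan)
probs = torch.softmax(logits, dim=1)
print(f"probability of a suspicious finding: {probs[0, 1].item():.2f}")
```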
The discovery of drugs is often one of the longest and most expensive parts of drug
development. AI could help reduce the costs of developing new drugs in two main ways:
by creating better drug designs and by finding promising new drug combinations.
AI could also help the life sciences sector address many of the Big Data challenges it
faces.
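The sketch below illustrates the second idea, ranking candidate drug combinations with a learned scoring function. The compound features and scoring weights are placeholders standing in for a real model trained on assay data.

```python
# Toy sketch of ranking candidate drug combinations with a learned scoring model.
# Real pipelines use molecular fingerprints and models fit to experimental results.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compounds, each represented by a small feature vector.
compounds = {name: rng.random(8) for name in ["cmpd_A", "cmpd_B", "cmpd_C", "cmpd_D"]}

weights = rng.random(16)  # stand-in for a trained synergy model's parameters

def synergy_score(feat_a, feat_b):
    """Score a pair by applying the placeholder model to concatenated features."""
    return float(weights @ np.concatenate([feat_a, feat_b]))

ranked = sorted(
    ((a, b, synergy_score(compounds[a], compounds[b]))
     for a, b in combinations(compounds, 2)),
    key=lambda t: t[2], reverse=True,
)
for a, b, score in ranked[:3]:
    print(f"{a} + {b}: predicted synergy {score:.2f}")
```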
Just a few years ago, discussions of “data ethics” and “AI ethics” were reserved for
nonprofit organizations and academics. Today the biggest tech companies in the
world — Microsoft, Facebook, Twitter, Google, and more — are putting together
fast-growing teams to tackle the ethical problems that arise from the widespread
collection, analysis, and use of massive troves of data, particularly when that data
is used to train machine learning models, aka AI.
Companies need a plan for mitigating risk — how to use data and develop AI
products without falling into ethical pitfalls along the way. Just like other risk-
management strategies, an operationalized approach to data and AI ethics must
systematically and exhaustively identify ethical risks throughout the organization,
from IT to HR to marketing to product and beyond.
What Not to Do
Putting the larger tech companies to the side, there are three standard
approaches to data and AI ethical risk mitigation, none of which bear fruit.
First, there is the academic approach. Academics — and I speak from 15 years
of experience as a former professor of philosophy — are fantastic at rigorous and
systematic inquiry. Those academics who are ethicists (typically found in
philosophy departments) are adept at spotting ethical problems, their sources,
and how to think through them. But while academic ethicists might seem like a
perfect match, given the need for systematic identification and mitigation of
ethical risks, they unfortunately tend to ask different questions than businesses.
For the most part, academics ask, “Should we do this? Would it be good for
society overall? Does it conduce to human flourishing?” Businesses, on the other
hand, tend to ask, “Given that we are going to do this, how can we do it without
making ourselves vulnerable to ethical risks?”
The result is academic treatments that do not speak to the highly particular, concrete
uses of data and AI, and that therefore offer no clear directives for the developers on
the ground or for the senior leaders who need to identify and choose among risk
mitigation strategies.
AI ethics does not come in a box. Given the varying values of companies across
dozens of industries, a data and AI ethics program must be tailored to the specific
business and regulatory needs that are relevant to the company. However, here
are seven steps towards building a customized, operationalized, scalable, and
sustainable data and AI ethics program.