
Information Systems Management

Artificial Intelligence and Machine Learning in Healthcare

Abstract:
This study investigates the use of Artificial Intelligence in healthcare. Obstacles to AI deployment include the difficulty of data collection, worries about automation, exploitation of information by third-party organisations such as insurers, lack of accountability, and biases in algorithms arising from biases in the underlying data. In India, there are additional implementation issues, such as a lack of infrastructure and weak data security. Possible solutions include government regulation to ensure the quality of healthcare delivered, ensuring that the data collected is free of bias, and overcoming the fear of automation through the creation of new jobs. In rural areas, front-line personnel may lack the ability to scrutinise an algorithm's judgement, and over-reliance on decision-support systems can lead to complacency and errors when AI output is implemented directly. Hybrid models, in which AI assists doctors alongside their existing knowledge rather than replacing them, are therefore recommended.

Keywords: Artificial Intelligence, Machine Learning, Healthcare, India

Table of Contents

Abstract
List of Figures
1. Introduction
2. History and Facts
3. Current Research
4. Major challenges for implementation
   4.1 Data Collection
   4.2 Automation
   4.3 Misuse of information
   4.4 Ethical concerns
   4.5 Accountability
   4.6 Regulations
   4.7 Algorithmic bias
   4.8 Privacy concerns
5. Significant challenges in the context of India
6. Paradoxes present in the use of AI in healthcare
7. Differences in the opinion of Experts
8. Way-outs
9. Contradictions in the use of AI in Healthcare
10. Trade-offs and Way forward
11. References

List of Figures:

Figure 1 Different Data Types Used In Artificial Intelligence
Figure 2 Top 10 diseases where AI is used.
Figure 3 Flowchart of data processing used in making decisions.
Figure 4 Expected revenue from use of Artificial Intelligence in Healthcare.
Figure 5 AI in healthcare divided according to regions.

1. Introduction
Artificial Intelligence has recently gained popularity in the field of healthcare. Concerns have also
been raised that AI may eventually replace human doctors. In my opinion, human doctors will not be
replaced by AI in the near future. Still, AI can assist doctors in making better decisions and can even
replace human judgement in fields like radiology. The rapidly increasing availability of healthcare data, as
well as the rapid growth of big data analytic tools, has made the use of AI in healthcare more
likely. AI approaches can extract useful information from large amounts of data, which can be used to
assist doctors in making decisions.

The rising usage of artificial intelligence and machine learning, and the ability to extract
useful facts from data, give us the insights we need to make better decisions. Thanks to these tools,
stakeholders in healthcare can use analytical approaches not only to analyse historical data but also to
estimate future results and inform current decisions.

Doctors have traditionally relied on a limited quantity of data gleaned from their previous experience
treating patients. However, with the help of data from many sources, it is now possible to gain a full and
thorough picture of a patient's health. The application of advanced procedures to data will supply us with
the correct information at the precise moment and location. This is especially crucial in healthcare, where
each decision might mean the difference between life and death for a patient.

2. History and Facts


Artificial Intelligence was first employed in health care when the Dendral project began at Stanford
University in the United States of America in 1973. Initially, it was used to help chemists identify
novel organic compounds by analysing mass spectra and applying knowledge of chemistry. This laid the
groundwork for the next system, MYCIN, which identified the bacteria causing infections and recommended
antibiotics at dosages appropriate for the patient's weight. MYCIN had a simple inference engine and a
knowledge base of roughly 600 rules, with physicians answering simple yes or no questions. In the
end, the algorithm would generate a list of possible bacteria ranked from high to low
probability, as well as a recommended course of therapy. MYCIN, on the other hand, was rarely
used by physicians, not because of performance limitations, but because of moral and legal concerns
about using computers in a person's treatment. By demonstrating the potential of a representation- and
reasoning-based approach, MYCIN nevertheless had a lasting impact.

With the rising use of microcomputers and new levels of network connectivity in the 1990s, the
next breakthrough occurred. Scholars and developers recognised during this time a lack of data that could
assist physicians, indicating that AI in health care needed better sources of information. In 2011 came
DeepQA, a technology that used natural language processing to extract plausible answers from
unstructured input; rather than reasoning forward from data to conclusions, it worked backwards from a
question to gather supporting evidence. Data from patients' records can be used to obtain evidence-based
medical answers with this technology. Pharmabot was later developed in 2015, among other inventions, to
assist in medication education for paediatric patients and their parents. Since then, artificial intelligence
has progressed far beyond the experimental stage.

FIGURE 1 Different Data Types Used In Artificial Intelligence

FIGURE 2 Top 10 diseases where AI is used.

3. Current Research
Almost all healthcare specialities have progressed in their utilisation of AI. Due to the novel
coronavirus, the United States is expected to invest almost $2 billion in AI-related healthcare research
over the next five years, nearly 3.8 times what it spent the previous year.

Image processing techniques are primarily used in dermatology. Deep learning has also been used to
investigate how skin cancer cells can be recognised.
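As a rough illustration of the kind of approach mentioned above, the following minimal sketch (not taken from any study cited here) shows how a pretrained convolutional network might be adapted to classify skin-lesion images as benign or malignant. The folder name, class labels, and single training pass are hypothetical assumptions for demonstration only.

```python
# Minimal sketch: fine-tuning a pretrained CNN for skin-lesion classification.
# Assumes a hypothetical folder "lesions/" with "benign/" and "malignant/" subfolders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),          # input size expected by ResNet
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("lesions/", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: benign vs. malignant

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass, for illustration only
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice such models are trained on large, carefully curated dermatology datasets and validated clinically; the sketch only conveys the general shape of the technique.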

In radiology, AI has been utilised to discover and treat disorders with the use of Computed
Tomography (CT) and Magnetic Resonance Imaging (MRI).

The use of AI in psychiatry is now in the proof-of-concept stage. In this case, a chatbot was used to
imitate human interaction in order to assess the severity of a patient's depression.

In telemedicine, wearable devices are being developed to monitor the condition of patients. When
this data is compared with previously collected data, AI can alert patients to potentially dangerous scenarios.
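A minimal sketch of this idea, using hypothetical heart-rate readings rather than clinical values: new measurements from a wearable are compared against the patient's previously collected baseline, and values far outside the usual range trigger an alert.

```python
# Sketch: flag wearable readings that deviate strongly from a patient's own baseline.
# The readings and the 3-sigma threshold are illustrative assumptions, not clinical values.
from statistics import mean, stdev

baseline_hr = [72, 75, 70, 68, 74, 71, 73, 69]   # previously collected resting heart rates
new_readings = [70, 72, 118, 74]                  # latest data streamed from the wearable

mu, sigma = mean(baseline_hr), stdev(baseline_hr)

for hr in new_readings:
    z = (hr - mu) / sigma                         # how unusual is this reading for this patient?
    if abs(z) > 3:                                # simple 3-sigma anomaly rule
        print(f"Alert: heart rate {hr} bpm is unusual for this patient (z = {z:.1f})")
```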

In the formulation of new pharmaceuticals, a molecule was developed by AI in one year for a drug
used in obsessive-compulsive disorder, although the development of new molecules typically takes five
years. Such breakthroughs with the use of AI and enhanced speed can aid in the creation of new drugs.

FIGURE 3 Flowchart of data processing used in making decisions.

4. Major challenges for implementation

4.1 Data Collection:


AI, in general, necessitates a large amount of data. If the data is not preserved with care, this usually
compromises the privacy of the patients. Customers do not like the fact that information is shared. "63
percent of the population is uneasy with exposing their personal data in order to advance artificial
intelligence technology," according to a survey conducted by researchers in the United Kingdom. As a
result, even with the desire to advance AI technology, patients are unwilling to reveal their personal
information.

Electronic health records, insurance companies, and user-generated information such as fitness monitors
are all sources of data. This data is widely dispersed and difficult to collect and feed into the system in a
form AI can use effectively. Even when data is obtained from the above sources, patients regularly
change service providers and seek insurance from multiple firms, resulting in data being split across
numerous formats. This dispersion of information causes inaccuracies and raises the cost of data
collection.
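The fragmentation described above is essentially a data-harmonisation problem. Below is a minimal sketch, with made-up field names and records, of mapping data from two providers that use different formats into one common schema before it is fed to an AI system; real integrations typically rely on standards such as FHIR.

```python
# Sketch: harmonising patient records held in different formats by different providers.
# Field names and records are hypothetical illustrations only.
records_provider_a = [{"patient_id": "P1", "dob": "1980-04-02", "hba1c_pct": 6.1}]
records_provider_b = [{"id": "P1", "date_of_birth": "02/04/1980", "HbA1c": "6.3%"}]

def from_provider_a(rec):
    return {"patient": rec["patient_id"], "dob": rec["dob"], "hba1c": rec["hba1c_pct"]}

def from_provider_b(rec):
    day, month, year = rec["date_of_birth"].split("/")
    return {
        "patient": rec["id"],
        "dob": f"{year}-{month}-{day}",                 # normalise the date format
        "hba1c": float(rec["HbA1c"].rstrip("%")),       # strip the unit suffix
    }

unified = [from_provider_a(r) for r in records_provider_a] + \
          [from_provider_b(r) for r in records_provider_b]
print(unified)
```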

4.2. Automation
Doctors may be able to provide better treatment as a result of automation, but support employees may
be replaced by machines. Radiology, for example, is likely to become increasingly automated.
Many researchers are concerned that the widespread use of AI would result in a loss of human intelligence
and competence over time, causing hospitals to lose the ability to detect and correct AI faults. This also
suggests that relying too heavily on AI is not a smart idea, and the government may refuse to allow AI to
be used in crucial sectors like healthcare.

According to research, job automation is likely; nevertheless, various factors other than the
technology itself, including the expense of computerisation, may limit any employment loss. Beyond job
replacement and government policy, automation has both benefits and drawbacks. Because of the
aforementioned variables, job losses may be limited to less than 4%.

4.3. Misuse of information
Sharing personal health information with other institutions, such as insurance companies, can be
discriminatory, and it can even be used by banks to assess creditworthiness. This flow of information to
organisations outside the healthcare sector has the potential to have a disastrous impact on the entire
system, leading to discrimination in employment and in access to other social benefits. Cyberattacks on
healthcare organisations could be used to steal existing data or create fictitious healthcare data. In certain
cases, ransomware attacks on hospitals have resulted in large payments in bitcoin to unlock the
hospital's systems.

4.4. Ethical concerns


In the past, only people made health-care decisions, and the use of machines to make judgments using
AI may raise questions about accountability, transparency, permission, and privacy.

Given today's technologies, transparency may be the most difficult issue to address. A number of AI
algorithms are nearly impossible to comprehend and describe. If a patient is told that an image has led
to a diagnosis of cancer, he or she is likely to want to know how and why, and even clinicians
who are familiar with ML algorithms may be unable to provide an explanation.

Errors may always arise when AI software is used in patient therapy, and it may be impossible to
assign blame for those errors. In some circumstances, persons getting healthcare may feel more at ease
when the information is presented by a clinician rather than by software that isn't always sympathetic.
With the deployment of AI in healthcare, there is a good chance that significant ethical and
technological developments will occur. Hospitals, the government, and lawmakers must act responsibly
and implement governance structures to mitigate the harmful consequences. It appears that positions in
healthcare that are primarily concerned with patient data, pathology, and radiology, rather than direct
interaction with patients, will be the ones automated.

4.5. Accountability
One of the hot topics now being debated is who is responsible in the event of a misdiagnosis.

Current situation in the US:

If AI software makes a recommendation that could hurt a patient, the doctor who is advising the
patient may be held accountable for malpractice. Doctors must ensure that the patient receives the
appropriate therapy, so a decision made on the basis of an AI endorsement remains the doctor's own. This
applies in particular when the software does not ensure that the doctor reviews its recommendations. At
present, the use of AI does not appear to be the standard of care, and clinicians can only use AI to validate
decisions and assist existing decision-making processes, rather than follow it blindly, because of liability
concerns.

4.6. Regulations
Regulatory rules and international standards for software have emerged in two ways over the last
decade.
1. As standalone software that is itself a medical device.
2. As software built into a physical device.

This has given software companies the tools they need to evaluate compliance with medical device
guidelines and sell their products on the market. However, AI may introduce new risks that are not
currently addressed by existing software rules. To ensure the security and performance of AI solutions
now on the market, a variety of approaches will be required. The present software rules can serve as a
starting point while these new approaches are being developed.

The Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR) are two
European regulations that cover a variety of broad requirements that may apply to software. According to
the documents, these include "generic obligations of manufacturers, such as risk management, clinical
performance evaluation, quality management, technical documentation, unique device identification,
post-market surveillance and corrective actions; requirements regarding design and manufacture, including
device construction, interaction with the environment, diagnostic and measuring functions, active and
connected devices; and information supplied with the device."

EU laws also include explicit software guidelines, such as avoiding adverse interactions between IT
networks and software, as well as compliance requirements for electronic programmable systems. The EU
published a report on the safety and liability of Artificial Intelligence and the Internet of Things in February
2020. In its evaluation of current technology, the Commission has proposed that "Europe become a world
leader in AI, IoT, and robotics." According to the report, "a clear and predictable legal framework addressing
the technological issues is essential" in order to achieve that goal.

The Food and Drug Administration in the United States of America recently published a paper
proposing a future regulatory framework for software used as a medical device. It builds on the FDA's
existing premarket requirements and, in its guidance on software modifications, incorporates risk
categorisation principles from the IMDRF, as well as the FDA's benefit-risk framework and risk
management standards.

Other countries, such as China through its National Medical Products Administration, have created
frameworks to assist doctors in making decisions using Artificial Intelligence software. Regulatory agencies
in Japan and South Korea have also issued guidance on AI in healthcare.

4.7. Algorithmic bias


In the use of AI in healthcare, there are risks of bias and inequality. AI systems learn from the
data humans provide, and that data may carry a variety of biases into the system.
For example, if the data provided to the algorithm comes from a single medical college, the resulting
algorithm will not function effectively for people outside the population that the data represents, meaning
it will treat patients who are under-represented in the data less effectively. Even if
the data supplied is comprehensive and covers all segments of the population, there may be biases due to
existing prejudices in healthcare.

In the United States, for example, Black Americans receive less pain medication than white Americans.
As a result of this skewed data, an algorithm may prescribe Black Americans lower-dose medications. Bias
can also arise where algorithms optimise for hospital earnings, for example by allocating fewer resources
to patients whose care generates less revenue.

The following three types of bias can be identified (a short per-group evaluation sketch follows the list):

1. Model bias, in which the model fits the more prevalent groups in the data rather than the
under-represented groups.
2. Model variance, which occurs when results differ because of a lack of data from
under-represented groups.
3. Outcome noise, which occurs when unobserved variables influence the outcomes and therefore the
algorithm's predictions.
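The sketch below makes the first two effects visible with synthetic data (scikit-learn assumed available): measuring a model's accuracy separately for each demographic group shows how performance on an under-represented group can lag well behind the overall figure.

```python
# Sketch: per-group evaluation to surface bias against under-represented groups.
# The synthetic data and group labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_major, n_minor = 900, 100                       # the majority group dominates the training data
X = rng.normal(size=(n_major + n_minor, 3))
group = np.array(["majority"] * n_major + ["minority"] * n_minor)
# The outcome depends on different features in the two groups.
y = np.where(group == "majority", X[:, 0] > 0, X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)            # one model fitted to the pooled data

for g in ("majority", "minority"):
    mask = group == g
    acc = accuracy_score(y[mask], model.predict(X[mask]))
    print(f"{g:9s} accuracy: {acc:.2f}")          # minority-group accuracy is typically much lower
```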

4.8. Privacy concerns


As a result of the necessity to collect large amounts of data from patients, there are worries about the
privacy of such data, and many lawsuits have been filed over information sharing between healthcare
institutions and AI companies. There may also be instances where AI reveals information that the patient
does not want to share with anyone, and that information is misused. For example, AI could determine
that a person has Parkinson's disease based on mouse or smartphone movement. Such information sharing
may violate patients' privacy, which is a cause for concern when it reaches third parties such as banks or
insurance firms.

5. Significant challenges in the context of India:


Datasets are a vital component of AI implementation in healthcare. India's healthcare system is
disjointed and its records incomplete. AI also requires longitudinal data, but in India people frequently
switch doctors and are not committed to any particular institution; customers are not loyal to even the
largest hospital chains. Because front-line workers store data in handwritten records and doctors use
handwritten prescriptions, digitisation is weak and non-standardised, making data collection particularly
challenging in India.

Collecting data from the states may also be an issue, because states are often hesitant to share
information with the central government, as it may imply poor governance; the resulting misrepresentation
of data can lead to a defective algorithm and likely misdiagnosis. For data, current AI studies rely on a
small number of hospitals or research organisations. Many biases will be present in this historical data,
such as women from lower castes being denied healthcare, and senior citizens will be under-represented.
Such data, with its many blind spots, may result in misdiagnosis. This has already happened at Manipal
Hospitals, which partnered with IBM to treat cancer patients; doctors remarked that the results were not
appropriate for Indian patients because the data fed into the system came from other countries.

Infrastructure and cost are further challenges for emerging countries such as India, where the price will
be quite high given the large amount of data collected, the computing power required, and the storage
space needed. Cloud computing infrastructure is typically hosted on servers located outside the country,
and a growing number of start-ups are establishing computing infrastructure in foreign nations. This could
be a concern because keeping information in other nations could invite a slew of government restrictions.
Large businesses like Google and Microsoft have an advantage because they hold a lot of data, and the
absence of regulation in India makes it simpler for them to acquire more. "Google had previously attempted
to collect data from US medical organizations for developing AI solutions for eyecare but was unable to
gain consent beyond what was already in the public domain," according to researcher Seema Singh. "It then
went to India, where it has established a number of data-sharing partnerships with Indian eye hospitals."

In India, data security is also an issue. The transfer of information outside healthcare to other
organisations such as banks and insurance companies can lead to discrimination and raises several questions
about data security. Previously, the Aadhaar system suffered several data breaches, and the government has
made virtually no effort to strengthen data security in response. Cyberattacks on hospitals might
result in the disclosure of sensitive medical information. For instance, according to a 2018 report,
Mahatma Gandhi Memorial Hospital, a trust-run hospital in Mumbai, was hit by a ransomware attack in
which hospital administrators found their systems locked and received an encrypted message from the
attackers demanding a ransom payment in bitcoin in return for unlocking the systems; the hospital lost 15
days of billing and patient-account data, although it did not incur any additional financial loss.

Currently, artificial intelligence (AI) is employed to assist doctors in making decisions. The idea is not
to entirely replace doctors with AI software, but to keep humans in the loop so that any algorithmic flaws
may be detected. The primary question is who will examine the authenticity of the system's proposals and
help hold people accountable. In rural areas, front-line personnel may lack the ability to scrutinise the
algorithm's judgement. Over-reliance on decision-support systems can lead to complacency and blunders
as a result of blindly obeying the AI system. In India, these issues are exacerbated by a lack of standard
norms and strict laws.

6. Paradoxes present in the use of AI in healthcare.
The usage of AI will have a significant impact on how patients and doctors interact, but this will
happen without the patient's input. This occurs because the transition to AI does not happen in one fell
swoop or with any dramatic changes, and the patient is largely uninvolved in the process.

With the rising usage of imaging scanners, hospitals will be inundated with information. Humans find
it difficult to translate this data into usable knowledge. AI has the potential to play a significant role in
resolving this issue. Humans and AI may collaborate, and doctors in radiology, in particular, must accept
robots as members of their team. This will aid in automation, allowing doctors to focus more on urgent
instances where the human touch is required and finding solutions. AI algorithms are capable of quickly
grasping the current situation and assisting in decision-making, as well as generating decisions on their
own. Algorithms can also be used to automate more repetitive processes. Some algorithms can gain direct
access to data without the need for human intervention.

AI offers immense benefits in healthcare, yet the appropriate legal regulations and policies are not yet
in place. "Precision medicine is here," stated the director of the University of North Carolina Charlotte.
"We're experiencing it."

Dublin, a physician and patient, on the other hand, advised that there could be several pitfalls to avoid,
stating: "Our healthcare system is dysfunctional and inefficient, and while precision medicine is a
fantastic tool, layering it on top of a failing system can exacerbate the problem. We need to be really
proactive when we're thinking about how to integrate this into our delivery system because health
inequalities can get worse."

The decision-making dilemma is a paradox in which AI can produce diverse conclusions for the same
input. As a result, we cannot totally rely on AI decisions. We must ensure that the method utilized to make
decisions is the best method available.
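One way to make this dilemma concrete is to check whether models trained under slightly different conditions still agree on the same input. The sketch below (synthetic data, scikit-learn assumed available) flags cases where retrained models disagree, which is exactly where human review is most needed.

```python
# Sketch: measuring how stable an AI decision is across retrained models.
# Synthetic data only; disagreement between runs signals an unreliable decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)   # deliberately noisy labels

patient = X[:1]                                   # the same single input every time
votes = []
for seed in range(10):                            # retrain with different random seeds
    clf = RandomForestClassifier(n_estimators=50, random_state=seed).fit(X, y)
    votes.append(int(clf.predict(patient)[0]))

agreement = votes.count(max(set(votes), key=votes.count)) / len(votes)
print(f"Predictions across runs: {votes}, agreement = {agreement:.0%}")
if agreement < 1.0:
    print("Models disagree for this input; refer the case for human review.")
```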

7. Differences in the opinion of Experts

Experts' opinions in favor of AI in healthcare:

• Ability to deduce meaning from existing data sets and improve the quality of health care.
• Routine chores such as record keeping and data entry can be automated so that doctors can concentrate on diagnosing patients.
• Wearables like the Fit Watch can help users keep track of their personal health using AI.
• AI can prevent human errors, such as giving a patient the wrong prescription, which might put the patient's life in jeopardy.
• AI may also be used as a virtual assistant that helps surgeons make better decisions by offering data from prior comparable surgeries.

Experts' opinions against the use of AI in healthcare:

• Employee training could be tough for healthcare staff who are already employed.
• Privacy problems may arise if data is released to third parties, which might have a catastrophic effect on patients.
• Failure to hold people responsible: someone should be held accountable if there is any prejudice in the algorithm.
• Ethical problems about who is responsible and whether a machine's choice should be valued.
• Data privacy and misuse of information could lead to national security issues, so government regulation becomes a concern.

8. Way-outs
The following are some of the solutions proposed by professionals in support of the use of artificial
intelligence in healthcare to address the issues it poses.

Against the fear of automation in healthcare, there is an opportunity for alternative careers to be
created around building and improving new AI technologies. However, greater use of AI software in
healthcare does not necessarily imply lower costs for patient diagnosis in the future.

The need to preserve patient privacy makes data gathering and availability a serious obstacle
to the use of technology in healthcare. One answer to this challenge is for the government to
assume responsibility for providing data infrastructure, such as creating standards for Electronic Health
Records and, if necessary, providing support for the data collection that healthcare systems now lack.
Another option is for the government to invest directly in the development of high-quality datasets. Some
governments, such as those of the United States and the United Kingdom, have already adopted such measures.
Such investments should be directed toward gathering data from a large number of patients, while
preserving privacy and building confidence.

For this to happen, there needs to be a quality assurance institution in place. The Food and Drug
Administration (FDA) in the United States, for example, is in charge of overseeing new products that reach the
market. Consumer AI products such as the Fit Watch are currently exempt from FDA regulation because they
do not perform any medical tasks and are commonly used at home. Increased regulation of such products may
help to ensure quality, and both software developers and hardware makers must be held accountable.

To combat algorithmic bias, doctors must be made more aware of the problem by encouraging them
to participate in algorithm development. Doctors' engagement helps researchers ensure that the correct
program is set up and that bias is addressed before AI programs are used. The algorithms must be
developed in the context of the global community, and program testing must be carried out on a
population representative of the area where the AI system will be used. A thorough analysis must include
all of the population's characteristics, such as age, sex, ethnicity, and location, so that
the algorithm is not prejudiced against any of these traits.
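A minimal sketch of this kind of representation check, with hypothetical column names, counts, and census shares: before an algorithm is trained or deployed, the dataset's composition is compared against the population it is meant to serve, and under-represented groups are flagged.

```python
# Sketch: auditing whether a dataset is representative of the target population.
# Attribute names, records, and population shares are hypothetical.
from collections import Counter

dataset = [
    {"sex": "F", "age_band": "60+"}, {"sex": "M", "age_band": "18-39"},
    {"sex": "M", "age_band": "18-39"}, {"sex": "M", "age_band": "40-59"},
    {"sex": "F", "age_band": "40-59"}, {"sex": "M", "age_band": "18-39"},
]
population_share = {"sex": {"F": 0.50, "M": 0.50},
                    "age_band": {"18-39": 0.40, "40-59": 0.35, "60+": 0.25}}

for attribute, expected in population_share.items():
    counts = Counter(r[attribute] for r in dataset)
    for value, share in expected.items():
        observed = counts.get(value, 0) / len(dataset)
        flag = "  <-- under-represented" if observed < 0.5 * share else ""
        print(f"{attribute}={value}: dataset {observed:.0%} vs population {share:.0%}{flag}")
```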

Pilot projects must be implemented in order to gain a deeper understanding of the new AI algorithms
and products so that the costs and benefits, as well as any potential downsides, may be assessed more
thoroughly.

9. Contradictions in the use of AI in Healthcare.
The employment of AI in healthcare is riddled with paradoxes. Because of the potential for
automation, most experts oppose the deployment of AI. Doctors may be able to provide better healthcare
as a result of automation, but support employees may be replaced. Radiology, for example, is likely
to become increasingly automated. Researchers are concerned that the overuse of AI would result in a
loss of human intelligence and competence over time, leaving hospitals unable to detect and correct AI
flaws. This also implies that relying too heavily on AI is unwise, and the government may refuse to
approve its usage.

In terms of ethical considerations, experts oppose the use of AI in healthcare because of the
potential for errors to arise when AI software is used in patient treatment, and it may not be possible to
hold anyone accountable for those failures. People receiving healthcare may feel more at ease if the
information is given by a clinician rather than by software that isn't always sympathetic. With the
employment of AI in healthcare, there is a great chance of encountering a slew of ethical and technological
developments. Hospitals, the government, and lawmakers must all act responsibly and implement
governance structures to mitigate the bad consequences. Jobs in healthcare that deal with patient data,
pathology, and radiology, rather than those that have more direct contact with patients, look to be the ones
that will be automated.

Experts have emphasised that AI systems normally learn from the data we offer, yet the data we
provide may contain many biases that generate biases in the system. For example, if the data provided
to the algorithm comes from a single medical college, the resulting algorithm will not function effectively
for people outside the population that the data represents, meaning it will treat patients who are
under-represented in the data less effectively.

Even when the data supplied covers all segments of the population, biases may occur due to
pre-existing prejudices in healthcare. In the United States, for example, Black Americans receive less pain
medication than white Americans. As a result of such skewed statistics, an algorithm may prescribe Black
Americans lower doses of medication. Bias can also arise where algorithms optimise for hospital revenue,
for example by allocating fewer resources to patients whose care generates less revenue.

10. Trade-offs and Way forward
In the near future, AI will play a significant role in healthcare. Precision medicine, which is highly
sought after in the healthcare business, can be achieved through the application of machine learning. However,
many previous efforts to treat patients and provide recommendations to doctors have encountered
numerous problems. We may expect Artificial Intelligence to enter this industry and be used across
treatments. For example, in the near future, the usage of AI for reviewing images, such as in radiology,
and also in the research of viruses, will increase. Jobs that are repetitive, such as answering frequently
asked questions or gathering critical information from a patient by speech, may be replaced by
computers in the near future.

One of the most significant obstacles to the application of artificial intelligence in the healthcare
industry is ensuring adoption in everyday tasks, rather than ensuring capability. To be used, artificial
intelligence must first be regulated and free of errors and biases, and then integrated with existing health
records in the industry. This must also be done in a standard manner, doctors must be fully aware of the
systems, and ultimately the systems must be kept up to date. The aforementioned issues can be resolved,
but it will take time, by which point the technology itself will have matured. As a result, we
can anticipate some application of machine learning in the healthcare industry within the next six years,
and more widespread use within the next ten to twelve years.

We may predict that artificial intelligence will not replace human doctors, but that the human touch
will be directed more toward patient care. In a few years, jobs will be more focused on human talents
such as persuasion and empathy. Healthcare employees who refuse to work with the integration of AI in
healthcare could lose their careers in the future.

Currently, artificial intelligence (AI) can be utilised to assist doctors in making decisions. The idea is
not to entirely replace doctors with AI software, but to keep humans in the loop so that any algorithmic
flaws may be detected. In rural areas, front-line personnel may lack the ability to analyse the
algorithm's judgement, and over-reliance on decision-support systems might lead to complacency and
blunders as a result of blindly obeying the AI system.

AI in healthcare can obviously increase the efficiency of the Indian healthcare system. However,
there are other areas of concern in the system, including a lack of government funding, inadequate
infrastructure, and the socio-cultural climate, and artificial intelligence will not be able to solve the problems
outlined. Even if the technology factors are addressed, this does not guarantee that adoption will take
place. Artificial Intelligence use in India might vary from one place to another due to a lack of
infrastructure, largely unregulated hospitals, and the diverse capacities of individual states. It also
implies that well-established private hospitals would be among the first to implement AI; such
institutions typically operate in metropolitan areas, serving a small segment of the population
who can afford such treatment. Even now, in remote areas, hospitals rarely use invoicing and
billing software.

Overall, the effectiveness of these tools will be determined by our ability to identify the problem and
then find an accurate solution to it. At present, solutions are driven by technological challenges, but they
must also account for the limits of the setting in which they are used. For example, software that is real-time
sensitive, meaning its outputs are updated every second, cannot be used in rural locations where
there is no basic internet access. Because there are so many differences between the user and the software
developer, finding the proper form of intervention is more difficult. Before AI is deployed in the healthcare
field, issues such as privacy and responsibility must be addressed.

Hybrid models, in which AI assists doctors alongside their existing knowledge rather than replacing
them, are the most promising path. Such software can assist in identifying risk factors, planning operations,
and developing contingency plans, but the ultimate decision on patient care rests with the doctor. This type
of hybrid paradigm can aid AI adoption and speed up the process. AI provides us with a plethora of
opportunities to improve healthcare and deliver services to patients in a seamless manner.
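The following is a minimal sketch of such a hybrid workflow, with hypothetical risk scores, thresholds, and field names: the AI component only proposes a recommendation, and nothing is recorded for the patient until a named clinician confirms or overrides it.

```python
# Sketch: a hybrid decision-support flow where the doctor always makes the final call.
# Risk scores, the 0.7 threshold, and the review step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    risk_score: float          # produced by some upstream model
    suggestion: str

def ai_suggest(patient_id: str, risk_score: float) -> Recommendation:
    suggestion = "refer for further imaging" if risk_score >= 0.7 else "routine follow-up"
    return Recommendation(patient_id, risk_score, suggestion)

def clinician_review(rec: Recommendation, accept: bool, doctor: str, override: str = "") -> dict:
    # The final plan is whatever the doctor signs off on, never the raw AI output.
    final_plan = rec.suggestion if accept else override
    return {"patient": rec.patient_id, "ai_suggestion": rec.suggestion,
            "final_plan": final_plan, "approved_by": doctor}

rec = ai_suggest("P42", risk_score=0.82)
decision = clinician_review(rec, accept=False, doctor="Dr. Rao",
                            override="repeat test before imaging")
print(decision)
```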

FIGURE 4 Expected revenue from use of Artificial Intelligence in Healthcare.

FIGURE 5 AI in healthcare divided according to regions.

11. References
• https://reader.elsevier.com/reader/sd/pii/S0016510720344667?token=41B9073D0941E322C90BB14B88975B40E3E722D7093B0C40BF3716ACA827A74B3CEF85FC2495C2F408A6BE54D9D448C9
• https://addepto.com/artificial-intelligence-and-big-data-in-healthcare/#:~:text=History%20of%20AI%20in%20healthcare&text=The%20first%20attempts%20to%20implement,Intelligence%20in%20the%20healthcare%20system.
• https://en.wikipedia.org/wiki/Artificial_intelligence_in_healthcare#History
• https://www.chathamhouse.org/2020/07/artificial-intelligence-healthcare-insights-india-0/3-ai-healthcare-india-applications
• https://www.thinkautomation.com/bots-and-ai/the-paradoxes-in-ai/
• https://www.sciencedirect.com/science/article/pii/S0016510720344667
• https://www.datasciencecentral.com/profiles/blogs/how-ai-in-healthcare-is-changing-the-industry
• https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-healthcare-market-54679303.html

