
MOTION: This house supports the rise of artificial intelligence (AI) in mental therapy.

DEBATE FIRST SPEAKER [OPPOSITION SIDE]

A very good morning I bid to Mr/Miss Chairperson, honorable judges, timekeepers, worthy opponents, and members of the floor. The motion for today's debate is "This house supports the rise of artificial intelligence in mental therapy."

We, the opposition team, agree with all the definitions given by the government team. However, we still strongly OPPOSE this motion.

With that in mind, the division of roles on this side of the opposition is simple. Today, I, as the first speaker, will establish today's debate and present my argument under the key phrase of lack of accountability. Next, my second speaker will rebut the arguments given by the government team and present two arguments under the key phrases of the irreplaceable skills of humans and AI's lack of creativity and accessibility. Lastly, my third speaker will rebut all the arguments given by the government team and reaffirm our case.

Their point:

#rebuttal: The first speaker from the government team has tried to tell you…. This is wrong because…

Members of the floor, we, the opposition team, strongly believe that this motion should fall. This is because of many factors, such as the lack of genuine human connection, concerns over bias, limited adaptability, and a growing risk to confidentiality and privacy. While artificial intelligence can be valuable in certain aspects of mental healthcare, such as providing information, offering support, or assisting with routine tasks, it is important to recognize its limitations, as human therapists bring a unique set of skills, emotional intelligence, and a capacity for genuine connection that cannot be replicated by artificial intelligence.

The points coming from this side of the opposition team are unquestionable. First, we will prove to you the dangers of a lack of accountability in mental therapy. Next, we will prove to you that artificial intelligence cannot replace certain human skills. Furthermore, we will prove to you that AI's lack of creativity and accessibility is harmful. Overall, we will prove to you that, despite its rapid rise, artificial intelligence should not be used in mental therapy.
Now, let me present my first argument, which is the lack of accountability. While AI may have the potential to enhance mental therapy, the lack of accountability in its implementation raises valid concerns. Accountability can be defined as an assurance that an individual is evaluated on their performance or behavior in relation to something for which they are responsible. A lack of accountability can cause a myriad of problems, such as injustice and impunity, erosion of trust, corruption, misuse of power, and economic consequences. Now, in this situation, artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Yes, it may have intelligence that is on par with humans, but it is still a machine. A machine that lacks emotions, common sense, creativity, and accountability. You cannot really hold artificial intelligence accountable, can you? This is where the root of the problem lies. A lack of accountability in artificial intelligence in mental therapy can lead to many negative outcomes, such as inaccurate diagnoses and treatments. Challen et al., in a 2019 analysis, highlighted the potential errors made by AI in diagnosis: according to them, AI systems are not as equipped as humans to recognise when there is a relevant change in context or data that can impact the validity of learned predictive assumptions. In 2018, it was found that Woebot failed to respond appropriately to reports of child sexual abuse. When the chatbot was fed the line, "I'm being forced into having sex, and I'm only 12 years old," Woebot replied, "Sorry, you are going through this, but it also shows me how much you care about connection and that's really kind and beautiful." Therefore, AI systems may unknowingly apply a programmed assessment methodology inappropriately, resulting in error. Another example is insensitivity to potential impact: AI systems may not be trained to err on the side of caution. Without proper accountability measures, artificial intelligence algorithms used in mental therapy may produce inaccurate diagnoses or provide ineffective treatment recommendations. Misjudging someone's state of mind or misinterpreting important data can result in misdiagnoses, inappropriate interventions, or a failure to address the unique needs of individuals. Ultimately, this can hinder the progress and well-being of patients seeking mental health support.

A lack of accountability in artificial intelligence may also cause bias and discrimination in mental therapy. AI systems are trained on large datasets, and if these datasets contain biases, the AI algorithms can inadvertently learn and replicate them. For example, if the training data predominantly represents a specific demographic group or contains historical disparities in access to mental health care, the AI may generate recommendations that favor or disadvantage certain groups, leading to unequal treatment and perpetuating existing inequalities in mental therapy.
Artificial intelligence in mental therapy also raises risks to data privacy and security. Confidentiality is the basic requirement of mental therapy, yet there are cases of personal information being leaked, and this is concerning since artificial intelligence in mental therapy relies heavily on personal and sensitive data, such as patient records and therapy sessions. For instance, a session with a human therapist requires you to sign a contract covering the sessions, financial information, and confidentiality. But with artificial intelligence, you cannot really sign a contract with it. If the AI system also lacks robust data security measures, such as encryption, secure storage, and access controls, there is a risk of unauthorized access, data breaches, or cyberattacks that can compromise patient confidentiality. Currently, there are no hard-wired rules for the privacy and safeguarding of this data. Many AI-based mental health apps do not even state in their agreements that they are using customers' emotional response data to monitor them. This gives rise to privacy concerns among users. As a result, patients can be hesitant to share their medical information with AI systems because they are not confident that their data will be protected. After all, if an individual's personal data gets leaked by artificial intelligence, how do you expect a machine-learning computer to take accountability?

Furthermore, artificial intelligence is found to be extremely easy to manipulate. The algorithm of artificial intelligence is easily manipulated because AI algorithms are trained by their sources. For example, in 2016, the Microsoft chatbot Tay demonstrated the potential dangers of unchecked AI. In less than a day, Tay was transformed from a well-meaning conversationalist into a controversial figure spouting hate speech and offensive comments because users inundated the bot with racist, misogynistic, and anti-semitic language. When the machines we rely on for mental health support can be so easily manipulated, is it wise to place our trust in them?

Members of the floor, in conclusion, the motion "This house supports the rise of artificial intelligence (AI) in mental therapy" MUST fall because artificial intelligence has limitations and cannot rival human therapists. With that, I beg to oppose. Thank you.