
Reply speech [opposition team]

I, as the first speaker of the opposition team, am going to deliver the reply speech for my team.

Due to the limited time, I would like to elaborate deeply on one main clash rather than
superficially state several factors. We, the opposition team, adopt the stance that the rapid
growth of Artificial Intelligence in mental therapy has its shortcomings: for example, data
privacy, social issues, ethical issues, hacking risks, and developer issues are among the
obstacles to successfully implementing AI in the medical sector. We, the opposition team,
have pointed out AI's lack of accountability. In the context of this debate, the lack of
standardized guidance around AI governance, together with the complexity of deep learning and
machine learning models, has made it difficult for experts to decide who is to blame when an AI
system goes wrong. One of the issues associated with AI is who owns the private information
that we reveal to a therapy chatbot.
While human therapists are bound by privacy laws, are these same protections waived when we
mindlessly agree to the “terms and conditions” required to install and use an app? The
opposition team has also pointed out AI's inability to understand the diversity of human
nature. AI can
perform many tasks with high accuracy and efficiency, but it still lacks the ability to think
creatively and come up with original ideas that resonate with human emotions and experiences.
Besides, AI can only function based on the data it receives; anything beyond that is more than
it can handle, because machines are not built that way. So, when the data fed into the machine
does not cover a new area of work, or its algorithm does not account for unforeseen
circumstances, the machine becomes useless. AI
offers tremendous potential for improving mental health care, BUT it is not a magic wand. After
all, isn’t the goal of mental health care to help us all feel a little more human? According to
Shruthi S, from YourDOST, an online counseling platform, “We find that people prefer talking to
actual humans, and also in a physical setting.” Next, the opposition team has pointed out AI’s
lack of creativity. Computer programs developed by artificial intelligence are designed to be
logical and systematic. This means that they cannot be impulsive or spontaneous like human
creativity. AI is programmed to process information in a certain way and achieve a particular
result. It cannot deviate from these instructions, and its actions are predictable. Besides,
therapy isn’t just about the talking that goes on inside the room. For people with social
anxiety or agoraphobia, leaving the house to visit the clinic may provide a kind of exposure
therapy that could be the most valuable part of treatment. Simple acts such as saying ‘Hi’ to
someone on the way to the clinic or taking a bus ride to the hospital can themselves be
therapeutic for these patients. AI CANNOT provide these essential meta-layers of psychotherapy.

The Government, on the other hand, has tried to explain that...

Therefore, due to the superiority of our arguments, the motion 'This house supports the rise of
artificial intelligence (AI) in mental therapy' should FALL! With that,
thank you.
