
Ethical Consultancy Group, Inc.
100 West 49th Street
Vancouver, BC V5Y 2Z6

March 20, 2024

Mr. Corporate Client
Customer Service Representative
kokocares.org
San Francisco, CA

Dear Mr. Client,

We are writing to you after analyzing the ethical problem facing your organization. This issue is critical to your organization's reputation, and we would like to suggest a solution. Before presenting our recommendation, we believe it is essential to revisit a brief summary of the ethical problem and its dimensions. We have summarized the facts and identified the key ethical dimensions below.

Summary

A co-founder of Koko, which provides users with a free mental health service, is facing backlash over his recent tweets. Rob Morris admitted that the company used ChatGPT to respond to users' messages. Morris tweeted that the company tested "a co-pilot approach with humans supervising AI as needed". Morris intended to use ChatGPT to improve the overall experience for users; it seemed like the right thing to do, since it would make the service more efficient and reduce response times for people who may not be able to wait long. In another tweet on the same day, Morris stated that "Koko users were not initially informed the responses were developed by a bot, And once they learned it was from a bot it didn't work". Morris mentioned that the company quickly removed the feature from the platform once he understood that some people do not want to speak to a chatbot. He also mentioned that messages from ChatGPT were rated higher than human-written messages. He later tweeted that the users who received responses written with the help of ChatGPT had opted in or agreed to the terms.

Ethical Dimension

Koko's use of ChatGPT to generate prompt, efficient responses for people seeking mental health support online may seem like a technological step forward, but it is deeply unethical from a human experience standpoint. Humans can supervise the conversation all they want, but handling critical, sensitive, and unpredictable personal issues is not something that can be taught to or coded into technology. AI-generated responses may come close to human dialogue, but they can never truly replicate the nuance and authentic warmth that humans develop over decades of lived experience. Mental health issues are complex, and empathy cannot be reduced to code. Most people who are struggling want to be heard by another human being and to feel comforted, supported, and understood, which AI cannot do. Each participant who used Koko's services should have been informed of, and required to read and consent to, the fact that the responses they received would be generated with the help of AI. Even though Koko has since informed people about the service, that consent should still be obtained in writing before a person can gain access.


Many mental health professionals are also concerned that this type of platform may set the field back. People may feel betrayed or duped, which can place significant emotional and mental strain on their wellbeing and make many of them wary of seeking out the help they need. Another ethical issue is that it is not clear whether the people overseeing the conversations are qualified to help those dealing with mental health issues. Participants also may not be matched with the same person each time, even though forming a bond that comforts and creates a safe space for communication is vital in such circumstances.

Suggestion

We suggest informing customers, before they start using the service, that a chatbot is being tested on the website. Doing so avoids confusion. The main problem in this case is that customers were not informed in advance that they might be talking with an AI-based chatbot when they expected to talk to a real human. As long as customers acknowledge that they will be talking with the chatbot and agree to it, there is no problem. Indeed, as a study shows, customer satisfaction is higher when talking with the chatbot, and there are benefits such as much shorter reply times, so this AI technology should be used wisely.

Sincerely,

Group 8
