Chatbots – Key Legal Issues

Andrea Trost; Nicola Benz

Chatbot technology is disrupting the ways in which companies interact and engage with their customers. As software has grown more sophisticated and artificial intelligence and natural language processing have advanced, the use of chatbots has increased significantly in the last few years and is expected to grow further. In this blog post, we provide an overview of the legal considerations connected with the use of chatbots.

What Exactly Are Chatbots?

Chatbots are software programs that can simulate a conversation with a user in natural language, mainly through messaging applications, websites and mobile applications. Simple chatbots (so-called clickbots) respond to keywords with answers that are pre-programmed into their systems. Smart chatbots are designed to understand and respond to a conversation in a natural, human-like manner. To do this, smart chatbots are equipped with artificial intelligence and machine learning. Through interactions with people and by accessing knowledge databases, smart chatbots become smarter and more personable with each conversation. This blog post will focus on smart chatbots.
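To make the distinction concrete, the following is a purely illustrative sketch of the keyword matching behind a simple clickbot; all keywords and answers are invented for illustration and do not reflect any particular product:

```python
# Illustrative sketch of a simple keyword-based "clickbot": answers are
# pre-programmed and looked up by keyword, with no language understanding.
# All keywords and responses below are hypothetical examples.

RESPONSES = {
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    "price": "You can find our current prices on our website.",
    "contact": "You can reach our support team at support@example.com.",
}

FALLBACK = "Sorry, I did not understand that. A human agent can help you further."

def reply(message: str) -> str:
    """Return the pre-programmed answer for the first matching keyword."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))
```

A smart chatbot replaces the keyword lookup with a trained language model, but the surrounding legal questions discussed below apply to both.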

Where Are Chatbots Used and Why?

Chatbots are used in a variety of contexts, such as online customer support, e-commerce purchase processes, online bookings, customer feedback and marketing. For example, by using chatbots in online customer support, companies can provide extensive, always-available customer assistance while at the same time saving costs. From a marketing perspective, companies can use chatbots to increase customer engagement and monitor consumer data.

Chatbots are increasingly used in regulated industries, such as financial or healthcare services. Banks have introduced chatbots to perform tasks ranging from collecting basic customer information and checking account balances to offering users suggestions on how to save money. Where chatbots are used in a regulated industry, they must be programmed to comply with the applicable regulatory framework. For example, where a chatbot collects information from a bank customer, the operator must ensure that the chatbot complies with banking secrecy laws.

Do Customers Have to Be Informed About the Use of Chatbots?

It is not always obvious to a chatbot user that he or she is talking to a piece of software rather than an actual person, unless this is clearly indicated with a disclaimer on the website where the chatbot is used or by information provided through the chatbot itself. There is no express legal obligation in Switzerland to inform customers about the use of artificial intelligence.[1] However, if customers are not informed that they are interacting with a chatbot, the Federal Act against Unfair Competition (UWG) could apply. This is especially relevant in situations where it matters to customers that they are interacting with an actual person. The UWG states that any conduct or business practice which is deceptive or otherwise contrary to the principle of good faith, and which affects the relationship between competitors or between supplier and purchaser, is unfair and unlawful. For example, a company could act deceptively within the meaning of the UWG if it operates a chatbot which provides financial advice to customers and describes itself as a knowledgeable person with the necessary experience, without indicating that it is actually a chatbot. In this case, the customer would expect to be advised by a human being and not a chatbot.
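In practice, informing the customer "through the chatbot itself" can be as simple as a disclosure at the start of every conversation. The following is a minimal sketch, assuming a generic messaging setup; the wording, function names and escalation mechanism are invented for illustration:

```python
# Hypothetical sketch: the chatbot discloses its nature as the very first
# message and offers a route to a human, reducing the risk of deception
# under the UWG. Names and wording are illustrative assumptions only.

BOT_DISCLOSURE = (
    "Hi! I am a chatbot, not a human advisor. "
    "Type 'agent' at any time to talk to a member of our team."
)

def start_conversation(send) -> None:
    """Send the disclosure before any other message in the session."""
    send(BOT_DISCLOSURE)

def handle_message(message: str, send, escalate) -> None:
    """Hand over to a human when the customer asks for one."""
    if "agent" in message.lower():
        escalate()  # e.g. route the session to a human support agent
        send("Connecting you to a member of our team.")
    else:
        send("How can I help you today?")

# Example usage with print standing in for the message transport:
start_conversation(print)
handle_message("I would like to speak to an agent", print, escalate=lambda: None)
```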

What About Data Protection?

Chatbots regularly process personal data during their interactions with users. Therefore, the Data Protection Act (DPA) and the Data Protection Ordinance apply. The operator who makes the chatbot accessible to the customer has to ensure that the chatbot complies with the DPA and the relevant data processing principles, namely the principles of legality, good faith, correctness, proportionality, purpose limitation, security and transparency. For example, in order to comply with the principle of proportionality, the chatbot operator has to make sure that the chatbot only stores customer data for as long as required for a specific purpose. The transparency principle requires that the collection of personal data, and in particular the purpose of its processing, is evident to the data subject. Chatbot operators should therefore have a privacy policy on their website informing users about the personal data collected through the chatbot, how it is stored, for what purposes it is used and how it is exchanged with third parties.
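One way to operationalise proportionality and purpose limitation is to tag every stored record with the purpose it was collected for and a retention deadline, and to purge expired records routinely. The sketch below is a simplified assumption about how such a data store might look; the schema and retention periods are illustrative, not legal advice:

```python
# Illustrative sketch of purpose limitation and proportionality in a
# chatbot's data store: each record carries its collection purpose and a
# retention period, and expired records are deleted. The field names and
# retention logic are hypothetical assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ChatRecord:
    user_id: str
    message: str
    purpose: str          # e.g. "customer_support"
    collected_at: datetime
    retention: timedelta  # how long this purpose justifies storage

def purge_expired(records: list[ChatRecord]) -> list[ChatRecord]:
    """Keep only records still within their purpose-specific retention period."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.collected_at + r.retention > now]
```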

What if Chatbots Provide Wrong Information to a Customer?

If a chatbot provides wrong information or inaccurate advice to a customer, the customer could make a wrong decision and, as a result, suffer a financial loss or even bodily harm. However, since chatbots are software programs which do not have their own legal personality, chatbots themselves cannot be liable to the customer. Instead, the company which operates the chatbot and makes it accessible to the customer could become liable for the financial loss based on contract or tort. In the event of bodily harm, the manufacturer of the chatbot might also become liable under the Product Liability Act if the bodily harm was caused by a defect in the chatbot software. Liability based on contract requires that the financial loss (with or without bodily harm) was caused by a breach of a contractual obligation by the chatbot operator and that the chatbot operator acted with intent or negligence. The specific contract in question will have to be examined, but the provision of wrong or inaccurate information through a chatbot most likely constitutes a breach of contract. The chatbot operator would act negligently, for example, if it failed to properly test the chatbot, failed to correct known or apparent errors or failed to ensure ongoing checks on the flawless functioning of the chatbot. Alternatively, or if there is no contract between the chatbot operator and the customer, the chatbot operator could become liable based on tort. For pure financial losses without any bodily harm, liability based on tort requires that the chatbot operator has breached a written or unwritten rule which aims to protect the financial assets of the customer.

What if a Chatbot Becomes Abusive?

There is a risk that chatbots which are provided with incorrect or incomplete training data, or which are opened up to public access, could misbehave by giving abusive responses to a customer. For example, in 2016 Microsoft’s chatbot Tay started to tweet highly offensive and abusive content within a few hours of its release on Twitter. If a chatbot provides abusive or offensive responses to a customer, the personality rights of the customer could be unlawfully infringed and, as a result, the customer could bring a claim for damages and satisfaction against the chatbot operator. In addition to the infringement of customers’ personality rights, a misbehaving chatbot can negatively affect the company’s image and reputation.

Recommendations

Chatbots have many advantages for companies. In order to reduce the legal risks associated with them, companies should adequately inform their customers about the use of chatbots, either by a disclaimer on their website or by information provided through the chatbot itself, and should have a privacy notice in place which informs customers about the data collected through the chatbot. Liability and reputational risks connected with the chatbot may be reduced by testing and reviewing the chatbot thoroughly through random conversations and by ensuring that the chatbot contains an intelligent censoring mechanism and built-in oversight. It is further helpful for companies to have an internal policy in place which covers the permitted activities of the chatbot, the extent of information fed to it, data collection and processing, the maintenance of the chatbot and the monitoring of chatbot activities, including triggers for human intervention.
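A censoring mechanism with a trigger for human intervention could, in its simplest form, review every drafted response before it reaches the customer. The sketch below is one possible shape of such a layer; the blocklist and escalation hook are hypothetical placeholders, and a production filter would be considerably more sophisticated:

```python
# Hypothetical sketch of an output "censoring" layer with a human-review
# trigger, along the lines recommended above. The blocklist contents and
# the escalate callback are illustrative assumptions.

BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # maintained by the operator

def review_response(draft: str, escalate) -> str:
    """Filter a drafted chatbot response before it is sent to the customer."""
    lowered = draft.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        escalate(draft)  # hand the conversation to a human for review
        return "Let me connect you with a member of our team."
    return draft

# Example usage, logging flagged drafts for human review:
safe = review_response("some model output", escalate=print)
```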

[1] However, on 22 May 2019 the Organisation for Economic Co-operation and Development (OECD), of which Switzerland is a member state, issued recommendations on the use of artificial intelligence, stating that a responsible approach to artificial intelligence involves informing data subjects about their interaction with artificial intelligence. Although the OECD recommendations are not legally binding on member states, they carry moral force and member states are expected to try to implement them.
