
Andrei Anghelescu

Master II, PPE


University of Bucharest

Trust Relationships in Human-AI Interaction


- Contemporary Economic Debates -
May 2019

Trust, it's complicated

Trust is indispensable in our day-to-day activities. It is one of the primary requirements in any interaction, whether a one-time exchange of information or business, or a series of prolonged interactions, a sort of "long-term relationship". Relationships of trust occur between two or more rational entities, and we are used to discussing matters of trust between humans, but nowadays we have to acknowledge that humans are not the only rational agents around. Throughout the day we may interact mostly with our fellow human beings, but if we stop and think for a moment, they are not the only rational entities we choose to trust. For example, you are searching for that perfect breakfast place nearby and, once you have found it, you quickly pop the address into your favourite navigation app, which in turn "thinks" about the best route to get you there. Sure, if you know your way around the neighbourhood, having lived there for six years, you may know a faster shortcut from A to B off the top of your head. But if B is largely unknown to you, you will have no problem letting an algorithm decide your path that morning. You have put your trust in the rational agent that "lives" inside the app on your device, and found the interaction ultimately beneficial.

What made you trust that the application would be able to provide a proper route for this specific interaction? Maybe the fact that two weeks earlier you used it to arrive on time for a scheduled interview, or that you often find yourself lost in places you barely know how to navigate, and the navigator gets you back on track. Repeated interactions, in which the results were mostly positive, made you trust this non-human rational agent with the decisions regarding your daily pathfinding. But I think a more interesting question emerges from this: what made you trust the artificial intelligence in the first place, the first time, when you had freshly downloaded the app to your mobile device? This is an interesting aspect to analyse, as human trustworthiness is different from machine trustworthiness. When we talk about trusting another person, there are many things to take into consideration. First of all, the initial interaction you have with somebody can influence your decision to cooperate or not, to trust him in your exchange or to keep your distance and find another way of achieving your goal. It might be that you see something in him, a first impression that drives you away; he just makes you feel like you should not trust him. Maybe it's that he seems grumpy or rude, which makes you approach him with caution, even though your acquaintance told you that he's a nice person. You may suspect that your potential partner has an interest in deceiving you for his own benefit. In interpersonal relationships this feeling of "I can/I can't trust this person" is not always based on rational analysis; it is actually a very fast heuristic judgement made by our brain, based on social cues such as the person's facial features and non-verbal communication.1 There is another aspect that differs between human-machine relationships and human-human ones: after establishing a trust relationship with somebody, that relationship can go through constant changes and adjustments from both parties. People deceive and purposely break one another's trust, usually for their own benefit. But they are also able to actively work on mitigating their lack of trustworthiness, and to forgive those who have breached their trust. In fact, for maintaining a long-term trust relationship, the most beneficial arrangement seems to be one in which the agents are able to forgive some wrongdoings and adjust the level of trust in their partners, rather than breaking the relationship entirely.

1 Klapper, Dotsch, van Rooij, Wigboldus, Do we spontaneously form stable trustworthiness impressions from facial appearance?, Journal of Personality and Social Psychology, Vol. 111(5), November 2016, pp. 655-664
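
As a toy illustration of this forgiving-adjustment dynamic (my own sketch, not drawn from the cited literature; all names and constants are illustrative assumptions), trust can be modelled as a score that rises gradually with positive interactions and drops sharply, but not to zero, after a betrayal, leaving room for recovery:

    # Hypothetical toy model: trust adjusts instead of breaking outright.
    def update_trust(trust: float, cooperated: bool) -> float:
        """Return a new trust level in [0, 1] after one interaction."""
        if cooperated:
            # Positive interactions build trust gradually.
            return trust + 0.1 * (1.0 - trust)
        # A betrayal halves the existing trust but leaves a remainder,
        # so forgiveness and rebuilding remain possible.
        return trust * 0.5

    trust = 0.5
    for cooperated in [True, True, True, False, True, True]:
        trust = update_trust(trust, cooperated)
        print(f"{'cooperation' if cooperated else 'betrayal':>12}: trust = {trust:.2f}")

Run on the sequence above, trust climbs to about 0.64, falls to about 0.32 after the betrayal, and then begins recovering: exactly the "adjust rather than break" pattern described.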

Human-AI trust issues

Now, in the case of human-machine relationships, the situation regarding trust is different than with our human counterparts. First of all, the mechanisms of trust acquisition are different: there are no social cues from which we might deduce untrustworthiness, so the relationship is built on a foundation of reputation (we might trust the application or service because we know that a lot of other people trust it, or we might trust the technology because we trust the company or people that develop it). What is fundamentally different, though, is that rational agents governed by artificial intelligence are built in such a way that they never have an interest in deceiving you for their own purposes; their goal is actually to help the human agent achieve its goals. In this respect AI agents are more akin to a tool than to an actual partner. Sure, such an agent benefits from all its human interactions by becoming better at what it is supposed to do, but there are mechanisms in place that prevent it from deliberately harming its partners and purposefully breaking their trust. That does not mean, however, that there is no way in which it can fail us. Take, for example, spam filters: natural language processing systems that detect and automatically filter unwanted emails. We rely on them to keep us safe from malware, phishing forms and unwanted bulk advertisements, and they get better by the day, relying on information gathered from millions of daily users and automatically adjusting their filtering to keep up with spammers' techniques. But, from time to time, the filter might put one of your important emails in the "Spam" folder, genuinely "thinking" it was spam. In terms of a formal relationship, you could say that this is a breach of your trust in a professional interaction. But the algorithm could not have done that to you deliberately; it could not have moved that important email about the interview you were scheduled for on purpose, to see you fail and increase its own chances of nailing the job. It was simply a mistake, and you are the one responsible for it: you put too much trust in the interaction, thinking the system almost infallible because you were used to it being right 99% of the time. In this case, the problem in human-AI relationships is having too much trust, forgetting that these repeated interactions are nothing more than you using a tool to achieve your purposes. Of course, it is a highly advanced and sophisticated tool, specially designed to learn how to do its job properly and more efficiently, but it is still simply a tool.
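
To make the mechanism concrete, here is a minimal sketch of the statistical idea at the core of many such filters, a naive Bayes word-frequency score; this is my own illustrative toy with made-up training data, not the architecture of any production filter, which combines far more signals:

    from collections import Counter
    import math

    # Toy labelled data: (message, is_spam). Real filters learn from
    # millions of messages and use many features beyond raw words.
    TRAINING = [
        ("win a free prize now", True),
        ("cheap pills free offer", True),
        ("meeting scheduled for monday", False),
        ("your interview is confirmed", False),
    ]

    def train(examples):
        """Count word frequencies separately for spam and legitimate mail."""
        spam, ham = Counter(), Counter()
        for text, is_spam in examples:
            (spam if is_spam else ham).update(text.split())
        return spam, ham

    def spam_score(text, spam, ham):
        """Log-odds that a message is spam, with add-one smoothing;
        positive means 'more likely spam'."""
        score = 0.0
        for word in text.split():
            p_spam = (spam[word] + 1) / (sum(spam.values()) + len(spam))
            p_ham = (ham[word] + 1) / (sum(ham.values()) + len(ham))
            score += math.log(p_spam / p_ham)
        return score

    spam_counts, ham_counts = train(TRAINING)
    print(spam_score("free prize inside", spam_counts, ham_counts))    # positive: filtered
    print(spam_score("interview on monday", spam_counts, ham_counts))  # negative: kept

The failure mode described above is visible even in this toy: a legitimate message that happens to share vocabulary with past spam can tip the score the wrong way, with no intent involved anywhere.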

There are always chances of error, especially when we try to automate things. Sure, in the majority of cases automation is much more efficient than having the exact same job done entirely by humans, but we must not forget that, if it fails, the wrong action gets automated. And as small an inconvenience as a missed email is, we must also think about the other scenarios in which we entrust technology with things of much greater importance, sometimes even with our lives. When your GPS navigator fails, the stakes are much lower than when AI-assisted surgical equipment malfunctions during an operation. For this reason I think it is important to approach trust relationships with AI technology differently, in accordance with the importance of the tasks the systems are supposed to automate or assist. While people do not actively think about this when using the low-risk, high-reward tools I mentioned before, they tend to approach higher-risk AI tools cautiously, with low levels of trust, because these tools partially or totally replace activities which humans have always felt to be under their total control, such as providing medical diagnosis and care, giving financial advice or driving a vehicle. These categories of activities are very complex to automate and strip humans of the feeling of being in charge and able to influence the outcome; they were also fundamentally interpersonal activities, which is perhaps why there are such strong opinions about automated vehicles and medical devices. The truth is, these technologies are still in their infancy, and maybe it is for the best to be somewhat skeptical about using them; but this does not mean we should not use them at all, as that would actually slow down their development. Artificial intelligence and machine learning mostly rely on gathering and processing data, which means it is desirable for them to be used by as many people as possible, gathering more and more information and thus improving their reliability and stability; as the saying goes, "practice makes perfect", and that is especially true for an entity able to practice without getting tired and, on top of that, with a huge amount of processing power.

When talking about trust in machines, I think the issue is much more complex than blindly trusting them or refusing to use them at all. There is a full spectrum of grey zones in where people place themselves with regard to letting a virtual rational agent automate a part of their lives. The potential of such technologies is huge, and it is now, while they are still in development, that we need to start building this trust. We have seen that AI is indeed powerful, besting some of our sharpest minds at games such as chess and Go, and most recently even at complex real-time video games such as Dota 2 and StarCraft II. Of course, not all AI systems are the same, as they are highly specialised and adapted to a certain niche, but the governing principles are broadly the same, and I think this is the proper time to address and regulate them, for better implementation in our future. We must acknowledge that there are differences between one type and another, and I think these differences are best seen in the impact a potential failure has on human life. When discussing AI liability, things are much more difficult, because if human lives are lost or there is property damage as a result of automation failure, somebody needs to be held accountable; it cannot be the AI per se, because even though it is a rational agent and the damage is a direct result of an error in its judgement, it is still a non-human entity, with no legal liability, at least under current law. Does this mean that those in charge of its development and administration are the ones to be held accountable? Policymakers and developers should bear higher liability the higher the risks of automation failure are, because while the occasional wrong route your navigator takes you on is only a minor impediment and annoyance in your daily routine, a highway pile-up generated by an autonomous vehicle gone rogue might actually kill people. But, as of today, deaths resulting from autonomous vehicles are still rare and the legal problems largely unexplored, with the companies usually managing to settle the matter with the deceased's family.

Since 2016, when the first death related to an autonomous vehicle was recorded, it is still not clear where the blame should fall; the truth is that automakers such as Tesla and GM do not actually advertise their cars as fully autonomous, but as capable of full automation in the future. There are still grey areas in assessing legal responsibility, as the accidents are usually attributed to user error. Sure, people expect Tesla's "Autopilot" to be able to fully handle the vehicle, and at first it seems that it actually can: the car can handle all driving functions, keep and switch lanes properly even in sharp curves, and react quickly to dangerous situations. But the user is still required to maintain attention and is even regularly prompted to keep his hands on the steering wheel. This means that the user, although heavily assisted by AI, is still liable for accidents, with various visual and auditory warnings prompting him to take action in critical situations. The technology is very much in its first years and, although Tesla has claimed that its chips and sensors are fully capable of driving the car on their own, without any human assistance, this capability is still software-locked2; big tech companies are aware that they cannot yet release fully autonomous vehicles to the public, and they are taking steps to push development of both the technology and its legal regulation. In this respect, I see holding the user accountable as a strategy for avoiding legal trouble, and rightly so; early adopters must be aware of the restricted capabilities of their car and see the AI as an assistance tool rather than a personal chauffeur. Another aspect to keep in mind is that autonomous vehicles become safer as the number of other autonomous vehicles they interact with in traffic grows. By removing as much human involvement as possible, and by constantly communicating decisions to one another, the vehicles substantially reduce the number of variables, which makes for easier management and traffic flow and thus increases safety. Little by little we will get to that point, but for now our main goal should be to regulate these technologies and put in place a multitude of standards, so that we safely get more people to dip their toes in the vast ocean that the human-machine relationships of the future are going to be.

2 Robert Ferris, Elon Musk: Tesla will have all its self-driving car features by the end of the year, CNBC Online, published February 2019

Intelligent machines, still a long way to go

To sustain adequate development strategies for AI tools and services, we need as much user input and as many usage statistics as possible, and this can only be gathered by observing how people position themselves in terms of acceptance and usability. An important factor in the adoption of any technology is the user's initial trust in it. People tend to put their trust in a product only once they are familiar with it. But artificial intelligence is something that actually scares many people. It may be that the concept itself seems too abstract for the average consumer, but I think it also has something to do with the way science fiction literature and films have portrayed the technology from the 19th century to the present day. AI is usually discussed as a new, emerging, potentially dangerous technology. This might be, of course, because of the way mainstream media and culture have portrayed it: when you think "Artificial Intelligence", the first thing that pops into your head may be the Terminator or HAL 9000.

This is problematic, as your first contact with a technology has an impact on your desire to learn more about it and actually use it.3 An apocalypse scenario in which the robots become sentient is very unlikely to occur, as AI is by its nature highly specialised in achieving the particular goals it was tasked with, constrained by the algorithms put in place by its developers. Of course, when choosing to undergo, let's say, cosmetic surgery, you might benefit from AI-assisted equipment without even knowing it. You trust the doctors and, by extension, the tools they are using; the trust exchange is still a human-human one. It's the same type of relationship you have with your accountant or financial adviser. However, nowadays you can actually get financial advice from an AI. So it shouldn't be any different from a human advisor, right? Well, this is one of the problems: people do not seem bothered when they do not know they are interacting with an artificial rational agent, but when they have to decide whether they would ever buy an automated vehicle the feeling changes, as the user now faces a decision that implies a direct relationship.

3 Keng Siau, Weiyu Wang, Building trust in Artificial Intelligence, Machine Learning, and Robotics, Cutter Business Technology Journal, Vol. 31(2), March 2018, pp. 47-53

Because not many people have had first-hand experience with self-driving cars, most of their knowledge about them comes from news outlets. Sure, there have been more and more articles about the positive aspects of AI technologies, much of the coverage tied to Silicon Valley start-ups, but these largely fly under the radar and do little to improve people's perception of these technologies. However, given that almost everybody is interested in hearing about sensational and controversial things, accidents involving automated vehicles tend to capture most readers' attention. Hearing that a technology you know little about and already trust poorly was "responsible for" or "involved in" a car crash might not only confirm your bias that these things are not really safe, but also deepen your fear of driving, or being driven in, one of them any time soon. The majority of people currently using, or considering upgrading to, a car with some degree of automation are enthusiasts, people already interested in the technology and in the direction it is heading. There is also a niche of people who adopt AVs because they are also fully electric vehicles, but that is beyond the scope of this paper, so I will focus only on interest in automation technology.

Of course, early adopters are very useful for the development of technologies in general, but even more so for the automotive industry, as they provide data and usage statistics for vehicles running preliminary versions of software intended for eventual full automation; we might even call them beta testers and, to a certain extent, trend influencers, as it is through their feedback that the software evolves and transforms. This technology being so new and impressive, seeing somebody drive around in a car such as a Tesla or an Acura will definitely catch bystanders' eyes. Many online personalities and technology reporters have adopted self-driving technology and aid its acceptance and popularisation among mainstream drivers. However, while all these aspects help increase overall trust in the technology, the number of people who would actually want to ride in a fully autonomous vehicle tomorrow is low, fluctuating between 20% and 40% in the United States4, arguably the main driver of AV technology and one of the world's biggest markets for vehicles in general. In an ongoing yearly survey conducted by the American Automobile Association (AAA), respondents were asked to assess how likely they would be to ride in a fully automated vehicle, an item which correlates with their trust in such vehicles. We can observe in fig. 1 that there are several spikes in user aversion, most notably a jump of 10% from December 2017 to April 2018.5 As I mentioned above, I think it is likely that this aversion is mostly driven by news reporting on AVs getting into accidents. In March 2018, an experimental, fully self-driving Uber vehicle was responsible for fatally striking a 49-year-old woman who was crossing the road in an area without a pedestrian crossing.6 The car was in fully automated mode and had a person assigned as a safety monitor behind the wheel, who was there only as a backup in case of machine error. In this case the car did not seem to react fast enough to the pedestrian, and the safety monitor was not attentive in the moments prior to the impact. Following this event, Uber halted all public-road AV testing for nine months. The event was heavily covered in the media before the official investigation was over, with coverage largely attributing guilt to the car's inability to predict and avoid the impact in time.

[Fig. 1: AAA yearly survey results, 2016-2019 - respondents' reported likelihood of riding in a fully automated vehicle]

4 American Automobile Association, Automated Vehicle Survey - Phase IV, NewsRoom.AAA.com, published March 2019
5 Idem
6 Sam Levin, Julia C. Wong, Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian, The Guardian Online, published March 2018

We can see a correlation between the April 2018 decrease in trust and the accident that resulted in a fatality. This was in fact the first pedestrian death in history caused by a self-driving vehicle, and the event intensified discussions on the subject of regulation, but also the negative press coverage of Uber and of the technology itself. We can also see that 2017 was marked by a large increase in trust, of almost 15%, which is interesting given that in June 2017 an investigation opened in May 2016, surrounding the death of a driver who was using Tesla's Autopilot, was concluded. The result of the investigation was that the death occurred due to user error, the driver having failed to comply with the car's warnings to re-engage with the vehicle, over-relying on the autonomous function.7 There is also a small decrease in trust, of 2%, from January 2016 to January 2017, but it would be hard to establish a correlation between the aforementioned 2016 accident and this decrease; immediately following the crash, the company released a press statement focused mostly on damage control and on defending the system's reputation as a life-saver when properly used.

7 Brian Fung, The driver who died in a Tesla crash using Autopilot ignored at least 7 safety warnings, The Washington Post Online, published June 2017

Tackling trust issues

The main issue with developing trust relationships around AI-assisted driving is that for most people this technology is either out of reach or something they do not even want to consider trying. Of course, it is something new, and people are going to fear what is unknown to them or to the world in general. But the majority of the population has absolutely no first-hand experience with automated driving, and expresses concerns based on the events reported by news outlets, which are most of the time about crashes and fatalities, because these are the things that actually stir debate, polarise discussion and, of course, sell publications. It seems that, before asking regular consumers to trust these technologies, the main issues must first be addressed by lawmakers and developers, as the consumer-machine trust relationship can largely be boiled down to the level of trust and the feeling of safety provided by both developers and lawmakers.

Now, having looked at the consumer side of the problem, I will turn my attention towards the businesses and institutions that can potentially solve it. For this, I will analyse the results of a study published in January 2019 by Perkins Coie, an international law firm specialised in transport law, and the Association for Unmanned Vehicle Systems International.8 They surveyed people working in the automotive industry, such as dealers, manufacturers and developers, as well as regulatory bodies at both the federal and state levels. Given that only professionals were questioned for this report, we can observe what their main concerns are about bringing AVs to market and what aspects need to be addressed in the future to smooth the transition from traditional cars; for this I have selected two specific questions from the survey: "What do you see as the biggest obstacle for AVs growth in the near future?" and "What are the top challenges in bringing AVs to market?" The companies that develop AI systems for third-party manufacturers, as well as those that manufacture the car and system in-house, all share an interest in pushing for regulation and standardisation, as these will greatly increase overall safety and establish automated vehicles as the norm for the daily commute.

8 Perkins Coie, The Association for Unmanned Vehicle Systems International, Autonomous Vehicles Survey Report, Perkins Coie LLP, published January 2019

We can see that infrastructure concerns over the following five years rank low among the respondents, which is understandable: the United States already has a well-built system of highways and, from what we are seeing in experimental driving on public roads, AVs are adapting well to current road systems. Sure, this will need to change in the more distant future; as the technology evolves and the roads become populated mainly by AVs, nationwide infrastructure changes and adaptations will be required, both for maximising efficiency and for increasing safety.

Investment costs are another concern, as developing this type of technology implies huge expenses for manufacturers. Because it is such a young industry sector, research and development are very expensive; on top of this there are huge costs related to prototyping and testing the cars in real-life scenarios. To run a profitable business you need to be able to offer the technology at an acceptable price, so that more and more consumers start transitioning towards automation. By buying into the initially expensive products, early adopters help companies slowly drive prices down and increase popularity, raising demand for the product; higher demand also ties in with lower production costs, as tooling is standardised for producing large volumes of the same machine. Safety concerns are the highest-ranking perceived obstacle, and rightly so; there are many aspects to consider when discussing traffic and user safety, and this transitional period is especially hard because a relatively small number of AVs will be using public roads alongside their human counterparts. Self-driving cars function better and better as the majority of traffic comes to consist of other self-driving vehicles. Human error is still something to take into consideration, even if we reach a state of perfect automation, because not everybody is going to transition to an AV within the first 10-15 years. There is also the aspect of consumers' readiness to adopt the technology, which I discussed in the previous paragraphs; 13% of respondents consider this an important obstacle to overcome, but in my opinion the problem will gradually be addressed through the regulation and standardisation of the technology. As the cars become more affordable and people start interacting with them in traffic, general acceptance levels will rise and curiosity will draw more customers into this market sector. When it comes to bringing fully autonomous vehicles to market, most professionals in the field have concerns regarding liability and user safety. Who is responsible in the case of AV-related damage or injury? How will insurance policies cover accidental damage that is not caused by human error? There are many ways in which blame can be shifted and, as previously stated, there should be higher liability for AI-assisted technology that poses a greater risk to human life. Even though, de facto, there is no human actually responsible for the potential accident, someone must take the blame. I see no scenario in which there is zero liability for any of the parties; consumers also look at producer warranties and insurance when deciding which products to buy, and a no-liability type of regulation would drive customers away from the market.

In my opinion, the car manufacturer should be the one held accountable for the failure of its own products; after all, it is the one that controls all the parameters and has all the data necessary to properly assess what went wrong and try to fix it for the future. This acceptance of responsibility would also increase overall trust in the technology, because it acts as a sort of warranty, telling the consumer that the company is confident enough in the capabilities of its AVs to cover the damages in case of failure. Admittedly, this would only apply to full automation and would not work for cases in which the driver is merely assisted by AI; there we might discuss split liability, or total driver liability due to improper use of the vehicle. However, we might imagine a time when the cars we drive will not only be able to drive themselves along predetermined routes, but will also choose whatever roads they think best suited to get us from A to B, and even choose who or what to hit in an ethical-dilemma scenario. This would mean that the AI makes its own decisions, which cannot really be controlled by the developer or manufacturer, and that the car itself, the rational agent, is the one to blame for its actions. How do we punish an artificial rational agent, then? Do we disable it? Do we destroy it? Although this is one scenario to consider, it pertains to a much too distant future, and I think it is better for now to explore legal options suited to the current state of technological development, while keeping an open mind about emerging problems.

Increasing trust by empowering the users

People tend not to want to follow evidence-based rules when they have to make a decision, even though in the majority of situations an AI or an evidence-based algorithm will produce much more accurate results and forecasts; it would therefore be wise for the user to choose an evidence-based algorithm over his own judgement.9 In a series of studies on people's willingness to use algorithms to produce forecasts, the researchers found that people have a tendency to rely on their own judgement, their gut feelings. In the first study, participants were told that they had to arrive at a certain result, a forecast, and were given the choice of either producing it on their own or using the advice of an algorithm. Initially they mostly agree to use the algorithm; however, after some experience with it, and after observing that algorithms can make mistakes, they choose not to use it anymore, even though these mistakes are much smaller and less frequent than those of human judgement. This is because people expect algorithms to be perfect, infallible, when in fact they are only designed to be better and more time-efficient than humans.

9 Berkeley Dietvorst, Joseph Simmons, Cade Massey, Overcoming Algorithm Aversion: People will use imperfect algorithms if they can (even slightly) modify them, Management Science, Vol. 64, No. 3, November 2016

The solution to this reluctance seems to be giving the users a feeling of control. If, for example, the user is told that the expected result is 24.03% but that he does not necessarily have to fully accept it, and he is given the option of slightly adjusting the automated process to obtain a slightly different result, this translates into a much higher acceptance rate for the algorithm.10 People are much more likely to use automated decision algorithms and incorporate their results in final decisions about real-world problems if they feel they can influence and control the forecasting agent, even slightly. This makes sense, given that most of the time the mathematics and decision paths behind these AI tools are hard to understand, so the result can seem "forcibly imposed" in a sense. Giving users control over the algorithm so that they actually use it is a good thing; there is, however, another aspect to keep in mind. The algorithm is already designed to give the best result or forecast it can, which means that the user's modifications will affect the final accuracy of the results; the more the user is able to interfere and modify, the worse the final result will be, and that ultimately defeats the purpose of using the algorithm to obtain a near-accurate result. In their research scenarios, the three authors found that the best way both to overcome this algorithm aversion and to maintain a good level of accuracy is to allow users to modify the algorithm's output, but to restrict the adjustment to a certain value, such as 5%. This is a good compromise: by adjusting the algorithm only within certain parameters, users can only make it worse by that amount, yet they still gain all the advantages that come with using the algorithm in their decision process. Ideally, people will acknowledge that the algorithm is "almost perfect" and use it only to guide their decision in a certain direction; since these algorithms are mainly used in software for financial companies or for forecasting whether a product will be a commercial success, pinpoint-accurate results are not necessarily a must. Users should view them as heuristic tools and work alongside them to reach better decisions about their products or services. It all seems to boil down to the way the choice of using an algorithm is presented at the first interaction; people tend to view these tools in a very black-and-white manner, either committing fully or not using them at all, and when presented with that choice users tend to ditch the algorithm altogether. That is why the mixture of the two approaches is so well received: people need to see that there is also a vast grey zone, and the best way to get them acquainted with it seems to be letting them tinker with the tool, customise it, basically giving them the feeling that "hey, I have an active role in using this tool; I am not just inputting data and waiting for an imperfect result".

10 Idem
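
As a minimal sketch of this design (my own illustration of the idea with made-up numbers, not the authors' materials), a forecasting tool might simply clamp the user's adjustment to a fixed band around the model's output:

    def adjustable_forecast(model_output: float, user_adjustment: float,
                            max_adjust: float = 0.05) -> float:
        """Combine an algorithmic forecast with a user tweak, clamped
        to +/- max_adjust (here 5 percentage points, echoing the
        restriction studied by Dietvorst and colleagues), so the user
        can make the forecast at most that much worse."""
        clamped = max(-max_adjust, min(max_adjust, user_adjustment))
        return model_output + clamped

    # The algorithm forecasts 24.03%; a 2-point nudge passes through,
    # while a 10-point nudge is cut back to the 5-point limit.
    print(f"{adjustable_forecast(0.2403, -0.02):.4f}")  # 0.2203
    print(f"{adjustable_forecast(0.2403, -0.10):.4f}")  # 0.1903

The user keeps the sense of an active role, while the worst-case loss of accuracy stays bounded by the width of the band.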

Now, we can see how this phenomenon applies to something like autonomous vehicles. As seen before, people have little trust when it comes to an initial interaction with them, mostly because they do not really know what the interaction demands of them, and because of fear of new technologies. In this regard, even though the fully automated setting is in the vast majority of cases the way you would want to use the car, given its low rate of failure compared to having only a human in control, giving users the option to take matters into their own hands at any time during autonomous operation will make them much more willing to use the technology. This has been the case with airplanes for decades: the majority of an airplane's functions and actions during flight are completely automated, the plane basically flying itself, but we still want a pilot and copilot there at all times, overseeing the automation and taking over if they see the need. The results of the studies above suggest that few people will ultimately tinker with the automation, because what actually counts is the feeling that they can take control whenever they want, and the outcome will be that more and more consumers take a step towards fully automated vehicles. In fact, it should be even easier to get new users into the market, as none of the cars available today are enabled for full automation; the user not only has the ability to take full control whenever he pleases, but is actually required to stay alert in case he is prompted to do so. It is interesting to note the study's finding that, once people accept to use the algorithm with the ability to modify it, few actually do so, and this can also be seen in the AVs used today: even though they are not fully autonomous, users tend to forget that and use them as if they were, sometimes with horrible accidents as the result of this negligence and ignorance.

Closing remarks

While people are still reluctant to let new technologies into their lives, as time passes and legislation evolves we will start to see increasing interest in the domain. Given that direct human-AI interactions are still in their infancy, the burden of optimising and marketing these tools falls mostly on the corporations and start-ups developing them, but also on the governments that are supposed to come up with legislation and solutions to regulate them and set future directions for their evolution.

The focus should first be on pushing proper legislation and setting technical standards that will help with inter-device communication and make management much easier once we start moving towards a more automated future. Interest in the field of AI trust development has grown substantially in the last decade, and it is interesting to observe how people choose to react within these relationships; from these studies we gather valuable information about users, their needs and their wishes, which further contributes to development. Governments are already integrating such systems into their institutions, as is the case in China and the USA, but it is very important to constantly observe and analyse the data and its uses, in order to make the necessary adjustments and updates in the interest of all the parties involved. We are going to see increased research into the ethical aspects of using these technologies, across a vast number of domains, setting the right path for future generations of developers and consumers. Sure, as with anything new, acceptance levels are currently low, and a multitude of factors contribute to this; but my belief is that things are moving in a positive direction, and they will only get better as the tools become more affordable for the average consumer.

Bibliography

Studies & Reports

❖ American Automobile Association, Automated Vehicle Survey - Phase IV, NewsRoom.AAA.com, published March 2019

❖ Berkeley Dietvorst, Joseph Simmons, Cade Massey, Overcoming Algorithm Aversion: People will use imperfect algorithms if they can (even slightly) modify them, Management Science, Vol. 64, No. 3, November 2016

❖ Perkins Coie, The Association for Unmanned Vehicle Systems International, Autonomous Vehicles Survey Report, Perkins Coie LLP, published January 2019

Articles

❖ Brian Fung, The driver who died in a Tesla crash using Autopilot ignored at least 7 safety warnings, The Washington Post Online, published June 2017

❖ Klapper, Dotsch, van Rooij, Wigboldus, Do we spontaneously form stable trustworthiness impressions from facial appearance?, Journal of Personality and Social Psychology, Vol. 111(5), November 2016, pp. 655-664

❖ Keng Siau, Weiyu Wang, Building trust in Artificial Intelligence, Machine Learning, and Robotics, Cutter Business Technology Journal, Vol. 31(2), March 2018, pp. 47-53

❖ Robert Ferris, Elon Musk: Tesla will have all its self-driving car features by the end of the year, CNBC Online, published February 2019

❖ Sam Levin, Julia C. Wong, Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian, The Guardian Online, published March 2018
