Trust in Artificial Intelligence
A global study
2023
uq.edu.au
KPMG.com.au
Citation
Gillespie, N., Lockey, S., Curtis, C., Pool, J., &
Akbari, A. (2023). Trust in Artificial Intelligence:
A Global Study. The University of Queensland
and KPMG Australia. doi:10.14264/00d3c94
KPMG Advisors
James Mabbott, Rita Fentener van Vlissingen,
Jessica Wyndham, and Richard Boele.
Acknowledgements
We are grateful for the insightful input, expertise
and feedback on this research provided by
Dr Ali Akbari, Dr Ian Opperman, Rossana Bianchi,
Professor Shazia Sadiq, Mike Richmond, and
Dr Morteza Namvar, and members of the
Trust, Ethics and Governance Alliance at The
University of Queensland, particularly Dr Natalie
Smith, Associate Professor Martin Edwards,
Dr Shannon Colville and Alex Macdade.
Funding
This research was supported by an Australian
Government Research Support Package grant
provided to The University of Queensland AI
Collaboratory, and by the KPMG Chair in Trust
grant (ID 2018001776).
Acknowledgement of Country
The University of Queensland (UQ)
acknowledges the Traditional Owners and their
custodianship of the lands. We pay our respects
to their Ancestors and their descendants, who
continue cultural and spiritual connections
to Country. We recognise their valuable
contributions to Australian and global society.
© 2023 The University of Queensland ABN: 63 942 912 684 CRICOS Provider No: 00025B.
© 2023 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved. The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organisation.
Liability limited by a scheme approved under Professional Standards Legislation.
Contents
Executive summary 02
Introduction 07
TRUST IN ARTIFICIAL INTELLIGENCE 2
Executive summary
Artificial Intelligence (AI) has become a ubiquitous part of everyday life and work.
AI is enabling rapid innovation that is transforming the way work is done and
how services are delivered. For example, generative AI tools such as ChatGPT
are having a profound impact. Given the many potential and realised benefits for
people, organisations and society, investment in AI continues to grow across all
sectors1, with organisations leveraging AI capabilities to improve predictions,
optimise products and services, augment innovation, enhance productivity and
efficiency, and lower costs, amongst other beneficial applications.
However, the use of AI also poses risks and challenges, raising concerns about
whether AI systems (inclusive of data, algorithms and applications) are worthy
of trust. These concerns have been fuelled by high profile cases of AI use
that were biased, discriminatory, manipulative, unlawful, or violated human
rights. Realising the benefits AI offers and the return on investment in these
technologies requires maintaining the public’s trust: people need to be confident
AI is being developed and used in a responsible and trustworthy manner.
Sustained acceptance and adoption of AI in society are founded on this trust.
This research is the first in-depth global examination of the public’s trust in, and attitudes towards, the use of AI, and of expectations for the management and governance of AI.
We surveyed over 17,000 people from 17 countries covering all global regions:
Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel,
Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom
(UK), and the United States of America (USA). These countries are leaders in
AI activity and readiness within their region. Each country sample is nationally
representative of the population based on age, gender, and regional distribution.
We asked survey respondents about trust and attitudes towards AI systems in
general, as well as AI use in the context of four application domains where AI is
rapidly being deployed and likely to impact many people: in healthcare, public safety
and security, human resources and consumer recommender applications.
The research provides comprehensive, timely, global insights into the public’s
trust and acceptance of AI systems, including who is trusted to develop,
use and govern AI, the perceived benefits and risks of AI use, community
expectations of the development, regulation and governance of AI, and how
organisations can support trust in their AI use. It also sheds light on how people
feel about the use of AI at work, current understanding and awareness of AI,
and the key drivers of trust in AI systems. We also explore changes in trust and
attitudes to AI over time.
Next, we summarise the key findings.
Most people are wary about trusting AI systems and have low or moderate acceptance of AI: however, trust and acceptance depend on the AI application

Across countries, three out of five people (61%) are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust. Trust is particularly low in Finland and Japan, where less than a quarter of people report trusting AI. In contrast, people in the emerging economies of Brazil, India, China and South Africa (BICS2) have the highest levels of trust, with the majority of people trusting AI systems.

People have more faith in AI systems to produce accurate and reliable output and provide helpful services, and are more sceptical about the safety, security and fairness of AI systems and the extent to which they uphold privacy rights.

Trust in AI systems is contextual and depends on the specific application or use case. Of the applications we examined, people are generally less trusting and accepting of AI use in human resources (i.e. for aiding hiring and promotion decisions), and more trusting of AI use in healthcare (i.e. for aiding medical diagnosis and treatment) where there is a direct benefit to them. People are generally more willing to rely on, than share information with, AI systems, particularly recommender systems (i.e. for personalising news, social media, and product recommendations) and security applications (i.e. for aiding public safety and security decisions).

Many people feel ambivalent about the use of AI, reporting optimism or excitement on the one hand, while simultaneously reporting worry or fear. Overall, two-thirds of people feel optimistic about the use of AI, while about half feel worried. While optimism and excitement are dominant emotions in many countries, particularly the BICS countries, fear and worry are dominant emotions for people in Australia, Canada, France, and Japan, with people in France the most fearful, worried, and outraged about AI.

People recognise the many benefits of AI, but only half believe the benefits outweigh the risks

People’s wariness and ambivalence towards AI can be partly explained by their mixed views of the benefits and risks. Most people (85%) believe AI results in a range of benefits, and think that ‘process’ benefits such as improved efficiency, innovation, effectiveness, resource utilisation and reduced costs are greater than the ‘people’ benefits of enhancing decision-making and improving outcomes for people. However, on average, only one in two people believe the benefits of AI outweigh the risks. People in the western countries and Japan are particularly unconvinced that the benefits outweigh the risks. In contrast, the majority of people in the BICS countries and Singapore believe the benefits outweigh the risks.

People perceive the risks of AI in a similar way across countries, with cybersecurity rated as the top risk globally

While there are differences in how the AI benefit-risk ratio is viewed, there is considerable consistency across countries in the way the risks of AI are perceived. Just under three-quarters (73%) of people across the globe report feeling concerned about the potential risks of AI. These risks include cybersecurity and privacy breaches, manipulation and harmful use, loss of jobs and deskilling, system failure, the erosion of human rights, and inaccurate or biased outcomes.

In all countries, people rated cybersecurity risks as their top one or two concerns, and bias as the lowest concern. Job loss due to automation is also a top concern in India and South Africa, and system failure ranks as a top concern in Japan, potentially reflecting their relatively heavy dependence on smart technology.

These findings reinforce the critical importance of protecting people’s data and privacy to secure and preserve trust, and supporting global approaches and international standards for managing and mitigating AI risks across countries.

There is strong global endorsement for the principles of trustworthy AI: trust is contingent on upholding and assuring these principles are in place

Our findings reveal strong global public support for the principles and related practices organisations deploying AI systems are expected to uphold in order to be trusted. Each of the Trustworthy AI principles originally proposed by the European Commission3 is viewed as highly important for trust across all 17 countries, with data privacy, security and governance viewed as most important in all countries. This demonstrates that people expect organisations deploying AI systems to uphold high standards of:

– data privacy, security and governance
– technical performance, accuracy and robustness
– fairness, non-discrimination and diversity
– human agency and oversight
– transparency and explainability
– accountability and contestability
– risk and impact mitigation
– AI literacy support
People expect these principles to be in place for each of the AI use applications we examined (e.g., Human Resources, Healthcare, Security, Recommender, and AI systems in general), suggesting their universal application. This strong public endorsement provides a blueprint for developing and using AI in a way that supports trust across the globe.

Organisations can directly build trust and consumer willingness to use AI systems by supporting and implementing assurance mechanisms that help people feel confident these principles are being upheld. Three out of four people would be more willing to trust an AI system when assurance mechanisms are in place that signal ethical and responsible use, such as monitoring system accuracy and reliability, independent AI ethics reviews, AI ethics certifications, adhering to standards, and AI codes of conduct. These mechanisms are particularly important given the current reliance on industry regulation and governance in many jurisdictions.

People are most confident in universities and defence organisations to develop, use and govern AI and least confident in government and commercial organisations

People have the most confidence in their national universities and research institutions, as well as their defence organisations, to develop, use and govern AI in the best interest of the public (76–82% confident). In contrast, they have the least confidence in governments and commercial organisations to do this. A third of people lack confidence in government and commercial organisations to develop, use and regulate AI. This is problematic given the increasing scope with which governments and commercial organisations are using AI, and the public’s expectation that these entities will responsibly govern and regulate its use. An implication is that government and business can partner with more trusted entities in the use and governance of AI.

There are significant differences across countries in people’s trust in their government to use and govern AI, with about half of people lacking confidence in their government in South Africa, Japan, the UK and the USA, whereas the majority in China, India and Singapore have high confidence in their government. This pattern mirrors people’s general trust in their governments: we found a strong association between people’s general trust in government, commercial organisations and other institutions and their confidence in these entities to use and govern AI. These findings suggest that taking action to strengthen trust in institutions generally is an important foundation for trust in specific AI activities.

People expect AI to be regulated with some form of external, independent oversight, but view current regulations and safeguards as inadequate

The large majority of people (71%) expect AI to be regulated. With the exception of India, the majority in all other countries see regulation as necessary. This finding corroborates prior surveys4 indicating strong desire for regulation of AI and is not surprising given most people (61%) believe the long-term impact of AI on society is uncertain and unpredictable.

People are broadly supportive of multiple forms of regulation, including regulation by government and existing regulators, a dedicated independent AI regulator, and co-regulation and industry regulation, with general agreement on the need for some form of external, independent oversight.

Despite the strong expectations of AI regulation, only two in five people believe current regulations, laws and safeguards are sufficient to make AI use safe. This aligns with previous surveys5 showing public dissatisfaction with the regulation of AI, and is problematic given the strong relationship between current safeguards and trust in AI demonstrated by our modelling. This highlights the importance of strengthening and communicating the regulatory and legal framework governing AI and data privacy.

There are, however, substantial country differences, with people in India and China most likely to believe appropriate safeguards are in place (74–80% agree), followed by Brazil and Singapore (52–53%). In contrast, people in Japan and South Korea are the least convinced (13–17% agree), as are the majority of people in western countries. These differences in the perceived adequacy of regulations may partly explain the higher trust and acceptance of AI among people in the BICS countries.

Most people are comfortable with the use of AI to augment work and inform managerial decision-making, but want humans to retain control

Most people are comfortable with the use of AI at work to augment and automate tasks, but are less comfortable when AI is focused on them as employees, for example for HR and people management (e.g. to monitor and evaluate employees, and support recruitment). On average, half the people are willing to trust AI at work, for example by relying on the output it provides. People in Australia, Canada, France and Germany are the least comfortable with the use of AI at work, while those in the BICS countries and Singapore are the most comfortable.
Most people view AI use in managerial decision-making as acceptable, and actually prefer AI involvement to sole human decision-making. However, the preferred option is either a 25%–75% or 50%–50% AI-human collaboration, with humans retaining more or equal control. This indicates a clear preference for AI to be used as a decision aid, and a lack of support for fully automated AI decision-making at work.

While about half believe AI will enhance their competence and autonomy at work, less than one in three people believe AI will create more jobs than it will eliminate. However, most managers believe the opposite – that AI will create jobs. This reflects a broader trend of managers being more comfortable, trusting and supportive of AI use at work than other employees, with manual workers the least comfortable and trusting of AI at work. Given managers are typically the drivers of AI adoption in organisations, these differing views may cause tensions in the implementation of AI at work.

A minority of people in western countries, Japan and South Korea report that their employing organisation invests in AI adoption, recognises efforts to integrate AI, or supports the responsible use of AI. This stands in contrast to a majority of people in the BICS countries and Singapore.

People want to learn more about AI but currently have low understanding

While 82% of people are aware of AI, one in two people report feeling they do not understand AI or when and how it is used. Understanding of AI is highest in China, India, South Korea, and Singapore. Two out of five people are unaware that AI enables common applications they use. For example, even though 87% of people use social media, 45% do not know AI is used in social media.

People who better understand AI are more likely to trust and accept it, and perceive greater benefits of AI use. This suggests understanding AI sets a foundation for trust. Most people across all countries (82%) want to know more about AI. Considered together, these findings suggest a strong need and appetite for public education on AI.

Younger generations, the university educated and managers are more trusting, accepting and generally hold more positive attitudes towards AI

Younger generations, the university educated, and managers show a consistent and distinctly more positive orientation towards AI across the findings, compared to older generations, those without a university education and non-managers. They are more trusting and accepting of AI systems, including their use at work, and are more likely to feel positive about AI and report using it. They have greater knowledge of AI, are better able to identify when AI is used, and have greater interest in learning about AI. They perceive more benefits of AI, but are the same as other groups in their perceptions of the risks of AI. They are more likely to believe AI will create jobs, but also more aware that AI can perform key aspects of their work. They are more confident in entities to develop, use and govern AI, and more likely to believe that current safeguards are sufficient to make AI use safe. It is noteworthy that we see very few meaningful differences across gender in trust and attitudes towards AI.

There are stark differences in trust and attitudes across countries: people in the emerging economies of Brazil, India, China, and South Africa are more trusting and accepting of AI and have more positive attitudes towards AI

A key insight from the survey is the stark differences in trust, attitudes and use of AI between people in the emerging economies of Brazil, India, China and South Africa and those in other countries.

People in the emerging economies are more trusting and accepting of AI and hold more positive feelings and attitudes towards AI than people in other countries. These differences held even when controlling for the effects of age and education. Singapore followed this positive orientation on several indicators, particularly comfort, trust and familiarity with the use of AI at work, adequacy of current AI regulation and governance, confidence in companies to use and govern AI, and the belief that AI will create jobs.

Our data suggests that this high trust is not blind to the risks. People in BICS countries and Singapore did not perceive the risks of AI, or the uncertain impact of AI on society, as any lower than people in other countries. Nor did they differ from other countries on the importance of the principles and practices required to ensure AI is trustworthy. Rather, a key differentiator is that most people in the BICS countries and Singapore believe the benefits of AI outweigh the risks, whereas a minority of people in western countries, such as Australia, Canada, France, Netherlands, the UK and USA, hold this view.

The higher trust and more positive attitudes in the BICS countries are likely due to the greater benefits afforded by technological advances and deployment in emerging economies and the increasingly important economic role of AI technologies in these countries.
This may encourage a growth mindset that motivates acceptance and use of technology as a means to accelerate economic progress, prosperity, and quality of life. An implication is that these countries may be uniquely positioned to rapidly accelerate innovation and technological advantage through AI. It is notable, however, that on international rankings these countries rank low on governance and regulation frameworks to ensure the ethical and responsible use of AI compared to western countries.6

AI awareness, understanding and trust in AI have increased over time, but institutional safeguards continue to lag

We had the opportunity to examine how trust and select attitudes towards AI compared with our 2020 Trust in AI survey data, which was based on representative sampling from five western countries (Australia, Canada, Germany, the UK and the USA).7 Comparisons were made between data from these five countries in 2020 and 2022 using equivalent measures over time.

This comparison suggests that trust in AI systems has increased in these countries over time, as has awareness of AI and understanding of AI use in common applications. However, there has been no increase in the perceived adequacy of institutional safeguards, such as regulation and laws to protect people from problems, despite most people in these countries perceiving such institutional safeguards as insufficient in 2020. Similarly, there was no increase in people’s confidence in government and business to develop, use or regulate AI, despite low levels of confidence in these entities. There was, however, an increase in the view that AI regulation is needed in two countries – the UK and USA.

These findings suggest the institutional safeguards governing AI are not keeping pace with expectations and technological uptake. In some jurisdictions, these findings may reflect a lack of communication and awareness of regulatory change.

Our modelling identified four distinct pathways that drive trust in AI systems:

1. an institutional pathway reflecting beliefs about the adequacy of current safeguards, regulations and laws to make AI use safe, and confidence in government and commercial organisations to develop, use and govern AI
2. a motivational pathway reflecting the perceived benefits of AI use
3. an uncertainty reduction pathway reflecting the need to address concerns about the risks associated with AI
4. a knowledge pathway reflecting people’s understanding of AI use and efficacy in using technology

Of these drivers, the institutional pathway had the strongest influence on trust, followed by the motivational pathway. These findings highlight the importance of developing adequate governance and regulatory mechanisms that safeguard people from the risks associated with AI use, and public confidence in entities to enact these safeguards, as well as ensuring AI is designed and used in a human-centric way to benefit people and support their understanding.

Pathways to strengthen public trust and acceptance

Collectively, the survey insights provide evidence-based pathways for strengthening the trustworthy and responsible use of AI systems, and the trusted adoption of AI in society. These insights are relevant for informing responsible AI strategy, practice and policy within business, government, and NGOs at a national level, as well as informing AI guidelines, standards and policy at the international and pan-governmental level.

There are a range of resources available to support organisations to embed the principles and practices of trustworthy AI into their everyday operations and put in place mechanisms that support stakeholder trust in the use of AI.9 While proactively investing in these trust foundations can be time and resource intensive, our research suggests it is critical for sustained acceptance and adoption of smart technologies over time, and hence a return on investment.
Introduction

AI is rapidly becoming a ubiquitous part of everyday life and continuing to transform the way we live and work.10 All sectors of the global economy are now embracing AI, with AI applications expanding and diversifying into domains ranging from transport, crop and service optimisation to the diagnosis and treatment of diseases and the protection of physical, financial, and cyber security, for example by fining distracted drivers, detecting credit card fraud, identifying children at risk, and enabling facial recognition.

While the benefits and promise of AI for society and business are undeniable, so too are the risks and challenges. These include the risk of codifying and reinforcing unfair biases, infringing on human rights such as privacy, spreading fake online content, deskilling and technological unemployment, and the risks stemming from mass surveillance technologies, critical AI failures and autonomous weapons. Even in cases where AI is developed to help people (e.g. to protect cybersecurity), there is the risk it can be used maliciously (e.g. for cyberattacks). These issues are causing public concern and raising questions about the trustworthiness and governance of AI systems.11

The public’s trust in AI technologies is vital for continued acceptance. If AI systems do not prove worthy of trust, their widespread acceptance and adoption will be hindered, and the potential societal and economic benefits will not be fully realised.12

Despite the central importance of trust, to date little is known about how trust in AI is experienced by people in different countries across the globe, or what influences this trust.13 In 2020, we conducted the first deep dive survey examining trust in AI systems across five western countries – Australia, Canada, Germany, the UK and the USA (Gillespie, Lockey & Curtis, 2021). The current study extends this focus on trust in AI by examining the perspectives of people representing 17 countries drawn from all global regions: the original five western countries in addition to Brazil, China, Estonia, Finland, France, India, Israel, Japan, Netherlands, Singapore, South Africa, and South Korea.

Our research aims to understand and quantify people’s trust in and attitudes towards AI, benchmark these attitudes over time, and explore similarities and differences across countries. Taking a global perspective is important given AI systems are not bounded by physical borders and are rapidly being deployed and used across the globe.

Our report is structured to provide evidence-based insights on the following questions about the public’s trust and acceptance of AI:

– To what extent do people trust AI systems?
– How do people perceive the benefits and risks of AI?
– Who is trusted to develop, use and govern AI?
– What expectations do people have about the development, governance and regulation of AI?
– How do people feel about the use of AI at work?
– How well do people understand AI?
– What are the key drivers of trust and acceptance of AI?
– How have trust and attitudes towards AI changed over time?

In the next section, we provide an overview of the research methodology used. In the concluding section, we draw on the survey insights to identify evidence-based pathways for strengthening the trusted and responsible use of AI systems, and discuss the implications for industry, government, universities, and non-government organisations (NGOs).
How we conducted the research
17 countries
Age
– Generation Z (18–25): 15%
– Millennial (26–41): 33%
– Generation X (42–57): 28%
– Baby Boomer and Silent Generation (58–91): 24%

Education
– Lower secondary school or less: 4%
– Upper secondary school: 23%
– Vocational or trade qualification: 24%
– Undergraduate degree: 35%
– Postgraduate degree: 14%

Occupation
– Professional & Skilled: 31%
– Manager: 24%
– Administrative (including army): 19%
– Service & Sales: 13%
– Manual: 13%

AI definition provided to respondents: ‘…tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions (OECD, 2019).’
Given that perceptions of AI systems can be influenced by the purpose and use case,19 survey questions asking about trust, attitudes and governance of AI systems referred to one of five AI use cases (randomly allocated): Healthcare AI (used to inform decisions about how to diagnose and treat patients), Security AI (used to inform decisions about public safety and security), Human Resource AI (used to inform decisions about hiring and promotion), Recommender AI (used to tailor services to consumers), or AI in general (i.e. AI systems in general).

These use cases were chosen as they represent domains where AI is being rapidly deployed and is likely to be used by, or impact, many people.

Before answering questions, respondents were provided with a description of the AI use case, including what it is used for, what it does and how it works. These descriptions are shown below, and were developed based on systems currently in use and input from domain experts working in healthcare, security, human resources, and recommender systems, respectively.
TOPIC ONE
To what extent do people trust AI systems?
…unwilling to trust AI systems

Three out of five people (61%) across countries are wary about trusting AI systems, reporting either ambivalence or an unwillingness to trust (see Figure 1). In contrast, 39% report that they are willing to trust AI systems.

Mirroring these findings, most people (67%) report low to moderate acceptance of AI. Only a third of people across countries report high acceptance. There is a strong association between trust in AI and acceptance of AI (correlation r=0.71, p<0.001).

'How willing are you to trust AI [specific application]?' [8 items]
Trust: 29% unwilling to trust / 32% ambivalent / 39% willing to trust
% Unwilling = 'Somewhat unwilling', 'Unwilling', or 'Completely unwilling'
% Ambivalent = 'Neither willing nor unwilling'
% Willing = 'Somewhat willing', 'Willing', or 'Completely willing'

'To what extent do you accept the use of AI [specific application]?' [3 items]
Acceptance: 29% low acceptance / 38% moderate / 33% high acceptance
% Low acceptance = 'Not at all' or 'Slightly'
% Moderate acceptance = 'Moderately'
% High acceptance = 'Highly' or 'Completely'
There are stark differences across countries: AI is most trusted and accepted in the emerging 'BICS' economies of Brazil, India, China, and South Africa

Our survey revealed stark differences in trust and acceptance of AI systems across countries. Figure 2 shows trust in AI is highest in India, China, Brazil, and South Africa, respectively. These countries are each part of the BRICS alliance of major emerging economies. We use the acronym 'BICS' in this report to denote the four countries of Brazil, India, China, and South Africa included in our survey that showed a distinctively different pattern of findings to the other countries.

In the BICS countries, most people (56–75%) trust AI systems, with people in India reporting the greatest willingness to trust, followed by China. In contrast, a minority of people in other countries report trusting AI, with the Finnish reporting the lowest trust (only 16%).

We see a similar pattern for acceptance of AI as we do for trust. The BICS countries are notably higher in their acceptance of AI, with 48–67% of people in these countries reporting high acceptance. Again, India and China lead the way, with 66–67% reporting high acceptance of AI, compared to only 18% in the Netherlands and Canada. There are low levels of AI acceptance across all Western countries, with Germany reporting the most acceptance (35% high acceptance).

The higher trust and acceptance of AI in the BICS countries is likely due to the accelerated uptake of AI in these countries, and the increasingly important economic role of emerging technologies.21 As discussed in the forthcoming sections of this report, people in the BICS countries are the most positive about AI, perceive the most benefits from it, and report the highest levels of AI adoption and use at work.
[Figure 2. Trust and acceptance of AI across countries: '% Willing to trust' and '% Willing to accept' shown for each of the 17 countries, from India (highest) to Finland (lowest). Individual country values not reproduced here.]
…the application: AI use in human resources is the least trusted and accepted

People's trust and acceptance of AI depends on the specific application or use case. There is a tendency for people to trust the use of AI in human resources (HR) the least (34% willing, M=3.9; see Figure 3), and the use of AI in healthcare diagnosis and treatment the most (44% willing, M=4.3).

This difference likely reflects the important direct benefit that increased precision of medical diagnosis and treatments affords people, combined with the high levels of trust in doctors in most countries.

'How willing are you to trust AI [specific application]?' [8 items]
(% Unwilling / % Ambivalent / % Willing)
– Healthcare AI: 25 / 31 / 44
– Security AI: 28 / 32 / 40
– AI in general: 27 / 34 / 39
– Recommender AI: 30 / 34 / 36
– HR AI: 35 / 31 / 34
% Unwilling = 'Somewhat', 'Mostly' or 'Completely unwilling' to trust
% Ambivalent = 'Neither willing nor unwilling' to trust
% Willing = 'Somewhat', 'Mostly' or 'Completely willing' to trust
This difference between applications is meaningful in some countries, but not all. Specifically, people in eight countries – Germany, the Netherlands, Finland, Estonia, Israel, South Korea, Japan, and China – report lower trust and acceptance of Human Resources AI than either Healthcare AI or AI in general (see Figure 4).
[Figure 4 chart: mean willingness to trust each AI application by country (axis range 3–5.6), for India, China, Brazil, South Africa, Singapore, Germany, Israel, South Korea, United States, Estonia, Netherlands, Canada, United Kingdom, Australia, Japan, France and Finland. Individual values not reproduced here.]
Recommender AI
– Rely on output: 23 / 35 / 42
– Share information: 38 / 30 / 32

HR AI
– Rely on output: 31 / 33 / 36
– Share information: 38 / 26 / 36
As shown in Figure 6, we see a similar pattern across countries in beliefs about the trustworthiness of AI systems as for AI trust and acceptance. People in the BICS countries hold much more positive beliefs about the trustworthiness of AI systems, compared to all other countries, with 79–93% viewing these systems as trustworthy. Indians again have the most positive views, with 93% agreeing that AI systems are trustworthy, followed by the Chinese (87%).

In contrast, people in western countries, as well as Japan, have the least favourable beliefs about the trustworthiness of AI, ranging from 49–58% viewing AI systems as trustworthy.

Figure 6. Perceptions of the trustworthiness of AI systems
'I believe [specific AI application] would: produce output that is accurate (ability) / have a positive impact on most people (humanity) / be safe and secure to use (integrity)' [14 items]
[Figure 6 chart: percentage viewing AI systems as trustworthy, by country (axis range 30–95), for India, China, Brazil, South Africa, Singapore, Israel, Estonia, South Korea, Japan, United Kingdom, United States, Canada, Germany, France, Netherlands, Finland and Australia. Individual values not reproduced here.]
On average, 63% of people across all countries and applications perceive AI systems as trustworthy. Perceptions of trustworthiness are typically higher than trusting intentions because trust involves risk and vulnerability (e.g. relying on AI output or sharing information with an AI system), whereas perceiving a system as trustworthy does not. There is a strong association between perceived trustworthiness and trust in AI systems (r=0.77, p<0.001).

In line with the findings for trust and acceptance, we found differences in trustworthiness across applications. In most countries, Human Resources AI is seen as less trustworthy than other AI applications.
…excited about AI: however, many also feel worried and fearful

We asked people the extent to which they feel a range of emotions about the AI applications. A majority of people report positive emotions such as feeling optimistic, excited, or relaxed about these AI systems. However, just under half the people also report feeling worried or fearful about the AI applications, and just under a quarter feel outraged (see Figure 7).

Further analysis22 revealed people commonly experience ambivalent feelings towards AI: 41% experience both high positive and negative emotions, for example feeling excited but also worried about AI. In contrast, 35% experience high positive emotions coupled with low negative emotions, and 16% have low positive emotions coupled with high negative emotions. Only 8% report feeling low positive and negative emotions towards AI.

[Figure 7 chart: 'In thinking about AI [specific application] to what extent do you feel…' Values shown (% moderate to high): 67%, 60%, 57%, 48%, 47%, 24%; emotion labels not reproduced here.]
People in the BICS countries feel most optimistic, excited and relaxed about AI

Figure 8 shows how people feel about AI in each country, ordered by the countries with people who felt most positive about AI. People in the BICS countries are the most optimistic, excited, and relaxed about AI, and people in Japan the least. Positive emotions were significantly stronger than negative emotions in the BICS countries as well as in Estonia, Finland, and Israel.

Fear or worry about AI was the dominant emotion experienced by people in Australia, Canada, France, and Japan, with people in France amongst the most fearful and worried. People generally did not feel much outrage towards AI.

While people in India are most likely to feel positive emotions about AI, they also have one of the highest levels of fear and are more likely to report outrage than people in other countries. This reinforces that in many countries, fear and worry about AI often coincide with optimism or excitement.

Figure 8. Emotions towards AI across countries
'In thinking about AI [specific application] to what extent do you feel…'
[Figure 8 chart: mean emotion ratings by country for India, China, Brazil, South Africa, Israel, Singapore, Estonia, Germany, France, Finland, South Korea, United States, Netherlands, Canada, Australia, Japan and United Kingdom, on a 5-point scale (1 = Not at all, 2 = Slightly, 3 = Moderately, 4 = Very, 5 = Extremely), with countries sorted by the 'Excited' category. Individual values not reproduced here.]
Younger generations, the university educated, and managers are more trusting and accepting of AI systems, and more likely to feel positive emotions

As shown in Figure 9 and through statistical analyses, younger people, notably Generation Z and Millennials, are more trusting and accepting of AI than older generations, and view AI systems as more trustworthy.

These generational effects held across most countries and are particularly pronounced in Australia and the USA. For example, in Australia, 25% of older generations trust AI compared to 42% of Gen X and Millennials, and 13% of older generations accept AI compared to 34% of Gen Z and Millennials. In contrast, in South Korea and China, we see a reversal of this pattern, with older generations more trusting of AI than younger generations. For example, in South Korea, 23% of Gen Z and Millennials are willing to trust AI, compared to 44% of Baby Boomers and older generations.

People with a university education are also more trusting and accepting of AI than those without a university degree, and hold more positive views of the trustworthiness of AI. This difference was particularly evident in Australia, with 42% of university educated Australians willing to trust AI, compared to 27% of Australians without a university education.

Managers are also more trusting and accepting of AI, and perceive it as more trustworthy, than people in other occupations. In addition, younger generations, those with a university education, and managers are more likely to feel positive emotions about AI. There are no generational, educational, or occupational differences in the experience of negative emotions about AI.

It is noteworthy that there are no meaningful differences across men, women and other genders in trust, acceptance, or emotions towards AI; however, in a few countries (the USA, Singapore, Israel, and South Korea), men were more trusting or accepting of AI, and reported more positive emotions, than other genders.
Figure 9. Trust and acceptance of AI systems by generation and education
(% willing to trust / % high acceptance)
– University education: 46 / 40
– No university education: 32 / 27
– Manager: 54 / 48
– Professional & Skilled: 42 / 37
– Administrative and Service/Sales: 37 / 31
– Manual: 32 / 27
TOPIC TWO
How do people perceive the benefits and risks of AI?
People expect AI will deliver a range of benefits but perceive more process benefits than benefits to people

Figure 10: The perceived benefits of AI use
'To what extent do you expect these potential benefits from the use of AI [specific application]?' (% Low / % Moderate / % High; values not reproduced here)
['To what extent do you expect these potential benefits from the use of AI [specific application]?' Chart legend: Enhanced decision-making; Enhancing what people can do; Improved effectiveness; Reduced costs; Better use of resources; Improved outcomes for people; Enhanced precision or personalisation; Innovation; Improved efficiency. Values not reproduced here.]
People are concerned about a range of potential risks from AI use, particularly cybersecurity risks

While people expect significant benefits from AI, the large majority (73%) also perceive significant potential risks from AI. People who perceive more risks of AI use are somewhat less trusting of AI systems (r=-0.25, p<0.001).

Cybersecurity risk (e.g. from hacking or malware) is the dominant concern, raised by 84% of people. Other risks of moderate to very large concern raised by more than two-thirds of people (68–77%) include manipulative or harmful use of AI, job loss and deskilling, loss of privacy, system failure, undermining of human rights, and inaccurate outcomes.

In comparison, people are less concerned about the risk of bias from AI use. However, bias is still a concern for the majority of people (58%). This may reflect that the general public perceive AI systems as less biased than humans, or alternatively, are less aware of the potential risk of bias from AI systems.

Figure 12: The perceived risks of AI use
'How concerned are you about these potential risks of AI use [specific application]?'
(% Low / % Moderate / % High)
– Overall risks: 27 / 29 / 44
– Cybersecurity risks: 16 / 24 / 60
– Manipulation or harmful use: 24 / 26 / 50
– Job loss due to automation: 23 / 27 / 50
– Loss of privacy: 25 / 28 / 47
– System failure: 24 / 31 / 45
– Deskilling: 27 / 30 / 43
– Human rights being undermined: 30 / 29 / 41
– Inaccurate outcome: 32 / 40 / 28
– Potential for bias: 42 / 30 / 28
% Low = 'Not at all' or 'To a small extent'
% Moderate = 'To a moderate extent'
% High = 'To a large extent' or 'To a very large extent'
To complement these quantitative findings, we asked people: 'What concerns you most about the use of AI [specific application]?' Thematic analysis of this open-ended data23 reinforced that people across all countries are concerned about each of the risks shown in Figure 12, including concerns about:

– privacy breaches, cybersecurity attacks and hacking
– manipulation and harmful use, including misuse by service providers and governments (e.g. to monitor or control)
– job loss, technological unemployment and deskilling
– inaccurate outcomes and recommendations, and poor or biased decisions
– system failure or malfunction causing harm
– the loss of human control and agency, and loss of human judgement in decision-making, resulting in unintended consequences (including AI 'taking over').

The qualitative data further highlighted concerns around:

– a lack of transparency of when and how AI is being used, and how AI generates decisions and outcomes
– a lack of regulations, policies and governance to make AI use safe and ethical.

It is also important to note that some people report having no concerns.
People view the risks of AI in a comparable way across countries

In contrast to the distinct differences across countries in how people view the benefits of AI, there are many commonalities in how people from different countries perceive the risks of AI use.

In almost all countries, people are most concerned about cybersecurity risks. The exceptions are people in India and South Africa, who are most concerned with job loss due to automation, followed by cybersecurity risks. This concern about job loss may reflect the recent increase in AI-related activity in these two countries.24 While AI acceleration clearly has the potential to provide economic benefits to these countries, it may also result in job losses. In Japan, the top concerns are AI system failure (e.g. where the AI system malfunctions or goes offline) and cybersecurity, which may reflect the heavy dependence on smart technology in Japan.

We also see that across all countries, people are least concerned about the potential risk of bias from AI use, followed by inaccurate outcomes from AI use.

People in South Africa, South Korea and Brazil perceive the risks of AI as higher than people in most other countries do. In contrast, people in Germany perceive the potential risks of AI as lower than people in most other countries do.

Figure 13. The perceived risks of AI across countries
[Figure 13 chart: mean concern for each risk category (Potential for bias; Inaccurate outcome; Loss of privacy; System failure; Human rights being undermined; Manipulation or harmful use; Deskilling; Job loss due to automation; Cybersecurity risks) by country, on a 5-point scale, with countries sorted by 'Cybersecurity risks'. Individual values not reproduced here.]
People are divided about the …

Figure 14: The likelihood of risks impacting people (values not reproduced here)
One in two people believe the benefits of AI outweigh the risks

We asked people whether the benefits of AI outweigh the risks, both in relation to people in their country and to themselves personally. In both cases, half agree that the benefits of AI outweigh the risks, under a quarter (21–24%) disagree, and the remainder (26–29%) are neutral.

In several countries, people were less likely to believe the benefits of AI use in the Human Resources application outweigh the risks.
People are more likely to believe the benefits of AI outweigh the risks in the BICS countries and Singapore: people in other countries are more circumspect

Figure 15: Perceptions across countries that AI benefits outweigh risks
'Thinking about people in your country generally, to what extent do you agree the benefits of AI [specific application] outweigh the risks?' [2 items]
% Agree:
– China: 81
– Germany: 42
– Estonia: 42
– Japan: 42
– United States: 41
– United Kingdom: 40
– Netherlands: 40
– France: 40
(values for the remaining countries not reproduced here)
The university educated, younger generations, and managers perceive more benefits of AI, but there are no demographic differences in perceptions of risk

Younger generations, namely Gen Z and Millennials, view the benefits of AI more positively than people of the Baby Boomer generation or older (55% vs 37% high). People with a university education also view the benefits of AI more positively than people without a degree (56% vs 41% high), are more likely to view the benefits of AI as outweighing the risks (51% vs 38% agree), and are more likely to believe that one or more of the risks associated with AI will impact them personally (38% vs 29%).

Managers are also more likely to perceive benefits associated with AI than people in other occupations (62% vs 43–51% high), and more likely to believe the benefits of AI outweigh the risks (58% vs 36–47% agree).

There are no differences between men, women, and other genders in the perceived benefits of AI, and no differences in the perceived risks across generation, education or occupational groupings.
TOPIC THREE
Who is trusted to develop, use and govern AI?
People are most confident in universities and defence organisations to develop, use and govern AI in the best interests of the public

As shown in Figure 16, people have the most confidence in their national universities and research institutions, national defence forces, and international research organisations to develop and use AI in the best interests of the public, with 77–82% reporting moderate to complete confidence (Ms=3.4–3.5/5). Seventy-one percent report feeling confident in technology companies to develop and use AI (M=3.2/5).

People have the least confidence in government and commercial organisations (63% each, Ms=2.9–3.0), with a third of people reporting no or low confidence in government and commercial organisations to develop and use AI. A solution may be for these organisations to collaborate in AI development with more trusted entities, such as universities and research institutions.
Figure 16. Confidence in entities to develop and use AI
'How much confidence do you have in the following to develop and use AI in the best interests of the public?'
National universities 15 32 50
Security and defence forces 4 19 28 49
International research organisations 7 15 32 46
Technology companies 26 33 38
Government 4 33 30 33
Commercial organisations 34 35 28
There is a similar pattern regarding confidence in entities to regulate and govern AI in the best interest of the public (see Figure 17).25 People are more confident in national universities, international research organisations, as well as security and defence organisations (76–79% confidence, Ms=3.4/5 each) to regulate and govern AI than in other entities. People reported the least confidence in governments, technology, and commercial organisations (60–66%, Ms=2.9–3.0). About a third of people report no or low confidence in these entities to develop and regulate AI (see Figure 17).

When people are confident in entities to develop and govern AI, they are more likely to trust in AI systems (correlations ranging from 0.42 [defence forces] to 0.54 [technology companies], p<0.001).
Figure 17. Confidence in entities to regulate and govern AI
'How much confidence do you have in each of the following to regulate or govern AI in the best interests of the public?'
National universities 17 32 47
International research organisations 7 17 31 45
Security and defence forces 20 29 47
International organisations (e.g. ISO, UN) 6 20 32 42
A partnership or association of tech companies,
academics, and civil society groups 6 19 35 40
Existing agencies that
regulate or govern specific sectors 25 36 36
Technology companies 31 32 34
Government 33 30 34
Commercial organisations 36 34 26
*Research institutions, defence forces, existing agencies that govern specific sectors, and government were country specific
Countries vary in their confidence in entities to develop, use and govern AI

There is significant variation across countries in people's confidence in entities to develop, use and govern AI, particularly confidence in government and technology firms (see Figure 18).

A lack of confidence in government to develop, use and govern AI in the best interests of the public was reported by about half the people in South Africa (52%), the USA (49%), Japan (47%), and the UK (45%). In contrast, many people in China (86%), India (70%), and Singapore (60%) have high or complete confidence in their governments to develop, use and govern AI.

While confidence in technology companies to develop, use and govern AI is generally low in western countries and Israel, particularly Finland, Canada, and Australia (30–40% no or low confidence, M=2.7–2.8), it is comparatively high in the BICS countries (51–73% high or complete confidence, M=3.7–4.2).
Figure 18: Confidence in technology and government entities to develop, use and govern AI
[Chart: mean confidence in technology companies and in government to develop, use and govern AI, per country (5-point scale; values range from 2.4 to 4.4). Countries shown: India, China, South Africa, South Korea, Brazil, Singapore, Japan, Estonia, UK, Germany, France, Netherlands, USA, Australia, Israel, Canada, Finland.]
* Mean of 5 point scale amalgamating confidence to develop and use AI with confidence to regulate and govern AI
General trust partly explains confidence in entities to use and govern AI

These country differences in confidence in entities to use and govern AI can be partly explained by generalised trust towards these entities. General trust in an entity sets a foundation that influences domain-specific trust in the entity to perform particular actions.

There are very high correlations between general trust in government (to do the right thing and act competently) and confidence in government to develop/use AI (r=0.80, p<0.001) and to regulate/govern AI (r=0.82, p<0.001). The countries where people are most confident in their government to develop, use and govern AI, namely China, India, and Singapore, are also the countries with higher general trust in their governments (60–82% high trust, Ms=4.9–5.6/7). Similarly, the countries where people have the least confidence in government to develop, use and regulate AI also have low general trust in government, namely South Africa, the UK, the USA, and Japan (53–65% low trust, Ms=2.7–3.1).

We also find high correlations (ranging between 0.55 and 0.70, p<0.001) between general trust in universities and research institutions, security forces, and business and confidence in each of these entities to develop, use and govern AI.
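The correlations reported here are Pearson product-moment coefficients. As a minimal sketch of what such a coefficient measures, the following computes Pearson's r from paired values; the two series below are illustrative stand-ins (hypothetical country means, not data from the study):

```python
# Pearson's r: strength of the linear relationship between two paired
# series, e.g. general trust in government vs. confidence in government
# to govern AI. All data below is hypothetical, for illustration only.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-country means: 7-point general trust, 5-point confidence.
general_trust = [5.6, 5.2, 4.9, 3.1, 2.9, 2.7]
ai_confidence = [4.2, 3.9, 3.7, 2.8, 2.9, 2.5]
print(round(pearson_r(general_trust, ai_confidence), 2))
```

An r near 1 (as these illustrative series produce) indicates that countries ranking high on one measure rank high on the other, which is the pattern the report describes.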
Younger generations, the university educated, and managers are more confident in entities to develop, use and govern AI

Generation X and Millennials are more confident than Baby Boomers and older generations in all entities, except government, to develop and regulate AI in the best interest of the public (42% vs 28% high confidence).

People with a university education are more confident in some entities to develop and regulate AI, particularly government (38% vs 23% high confidence), defence forces (50% vs 40% high confidence), and international research and scientific organisations (49% vs 38% high confidence).

Managers are more confident than all other occupation groups in government (44% high confidence vs 20–33%), commercial organisations (35% vs 19–23%), and technology companies (44% vs 25–31%) to develop and regulate AI.
TOPIC FOUR
What do people expect of the management, governance and regulation of AI?
Sixty-one percent believe the societal impact of AI is uncertain

Most people (61%) believe the long-term impact of AI on society is uncertain and unpredictable (see Figure 19). The more uncertainty people perceive, the

Figure 19: Perception that the impact of AI on society is uncertain
'To what extent do you agree with the following statements…?
(1) The impact of AI [specific application] is unpredictable
(2) The long-term impact of AI [specific application] on society is uncertain
(3) There is a lot of uncertainty around AI [specific application]'
% Disagree % Neutral % Agree
Only two in five people believe current safeguards are sufficient to make AI use safe

The majority (61%) of people disagree or are unsure that current safeguards around AI (i.e. rules, regulations, and laws) are sufficient to make the use of AI safe and protect them from problems (see Figure 22). This pattern was strongest in the western countries together with Israel, Japan, and South Korea. Only 39% of people believe that there are sufficient structural assurances around AI use. This finding corroborates previous surveys27 reporting that people do not think current rules are effective in regulating AI, and is problematic given the strong relationship between current safeguards and trust in AI (r=0.66, p<0.001).

However, there are stark country differences. Most people in India (80%) and China (74%) believe appropriate safeguards are already in place, more than people in any other country. About half of people in Singapore (53%) and Brazil (52%) also believe current safeguards are sufficient. Conversely, less than one in five people in Japan (13%) and South Korea (17%) agree, with people in these countries rating current safeguards lower than all other countries.

The younger generations of Gen Z and Millennials (M=4.3) and Gen X (M=4.0) are more likely to believe there are sufficient safeguards in place to govern AI, compared to people in the Baby Boomer generation or older (M=3.8). Managers are also more likely to perceive sufficient safeguards than other occupations (M=4.6 vs 4.0–4.1).
Figure 22: Perception of current regulations, laws, and rules to make AI use safe
'(1) There are enough current safeguards to make me feel comfortable with the
use of AI [specific application]
(2) I feel assured that there are sufficient governance processes in place to
protect me from problems that may arise from the use of AI
[specific application]
(3) The current law helps me feel that the use of AI [specific application ] is safe
(4) I feel confident that there is adequate regulation of AI [specific application]'
% Disagree % Unsure % Agree
Whole sample 32 29 39
India 6 14 80
China 5 21 74
Singapore 17 30 53
Brazil 27 21 52
South Africa 26 30 44
Germany 29 32 39
Estonia 29 33 38
Israel 32 31 37
Australia 39 26 35
Netherlands 39 29 32
USA 43 27 30
UK 37 33 30
France 39 32 29
Canada 42 30 28
Finland 38 37 25
South Korea 51 32 17
Japan 51 36 13
Assurance mechanisms enhance trust in AI systems

In addition to external rules, laws, and safeguards, we also asked people whether a range of assurance mechanisms available to organisations would influence their trust. Three out of four people (75%) report they would be more willing to trust an AI system when assurance mechanisms are in place that support ethical and responsible use. These mechanisms include monitoring system accuracy and reliability, using an AI code of conduct, oversight by an independent AI ethical review board, adhering to standards for explainable and transparent AI, and an AI ethics certification (see Figure 23). These mechanisms increase perceptions of safeguards and reduce uncertainty.

Of the specific assurance mechanisms, four out of five people (80%) agree that system accuracy and reliability monitoring would enhance their trust, with fewer, but still two thirds (68%), agreeing that adherence to an AI ethics certification would enhance trust. The assurance mechanisms influence trust most strongly in the BICS countries and Singapore, and the least in Japan.

Figure 23. AI assurance mechanisms
Assurances composite 8 17 75
The accuracy and reliability of the system was monitored 7 13 80
The organisation using the system had an AI ethics code of conduct 10 17 73
The system was reviewed by an AI ethics board 9 18 73
It adhered to standards for explainable and transparent AI 9 18 73
There is strong global endorsement of the Trustworthy AI management and governance principles: each principle is important for trust globally

A proliferation of reports and guidance documents on the development and deployment of trustworthy, ethical AI has been produced, with considerable consensus emerging on these principles.28 One goal of this survey was to determine the extent to which these principles are important for people to trust in AI across the globe.

To answer this question, we asked about the importance of 16 practices reflecting the principles for trustworthy AI shown in Table 1. These principles primarily reflect the Principles for Trustworthy AI adopted by the European Union.29
These principles were endorsed globally, with almost all people (96–99%) across all countries surveyed viewing these eight principles, and the practices that underlie them, as moderately to extremely important for trust in AI systems (see Figure 24). This finding held in relation to all AI use cases examined (i.e. AI systems in general and in healthcare, human resources, security, and recommender systems), suggesting their universal relevance.

People in South Africa viewed the principles as more important for trust than people in all other countries, other than Brazil.

Collectively these findings indicate clear public endorsement of these principles and practices, and a blueprint for developing, using and governing AI in a way that supports trust across the globe.
Figure 24: Importance of the Principles for Trustworthy AI across countries
'How important are the following for you to trust AI [specific application]?'
Whole sample 23 74
South Africa 14 85
Brazil 17 81
India 19 80
Germany 20 78
Australia 5 18 77
UK 21 76
Israel 22 76
Finland 24 74
Estonia 24 74
Canada 4 23 73
USA 4 24 72
Netherlands 25 72
South Korea 26 72
Singapore 27 72
China 29 70
France 4 28 68
Japan 4 39 57
Data privacy, security and governance is most important in all countries

While all eight principles are deemed important (see Figure 25), data privacy, security and governance practices are considered the most important for trust in AI systems in all countries, except China where it was ranked second (see Figure 26). In contrast, AI literacy practices are rated the least important in many countries.

Figure 25: Importance of the Principles for Trustworthy AI
'How important are the following for you to trust AI [specific application]?'
% Low % Moderate % High
Data privacy, security & governance 4 15 81
Technical robustness & safety 4 18 78
Transparency & explainability 5 20 75
Risk & impact mitigation 5 21 74
Accountability & contestability 6 21 73
Human agency & oversight 5 23 72
Fairness, inclusion & non-discrimination 6 22 72
AI literacy 6 24 70
Figure 26: Importance of each principle for trust, by country
'How important are the following for you to trust AI [specific application]?'
[Chart: mean importance ratings (values range from about 3.6 to 4.6) for each of the eight principles (AI Literacy; Fairness, Inclusion & Non-discrimination; Accountability & Contestability; Human Agency & Oversight; Risk & Impact Mitigation; Transparency & Explainability; Technical Robustness & Safety; Data Privacy, Security & Governance) in South Africa, Brazil, Estonia, Australia, Finland, United Kingdom, Germany, Canada, India, Netherlands, Israel, United States, Singapore, France, South Korea, China, and Japan.]
*5 point scale: 1 = Not at all important, 5 = Extremely important
[Countries sorted by 'Data privacy, security & governance' category]
TOPIC FIVE
How do people feel about AI at work?
One in two people report using AI in their work

As shown in Figure 27, over half the surveyed people (54%) report using AI in their own work. A third report using AI rarely (19%) or occasionally (14%), and one in five (21%) report using it about half the time or more frequently. In comparison, a third of the people surveyed believe they never use AI, and a further 13% 'don't know', suggesting they do not have sufficient understanding to gauge whether they use AI in their work.

Figure 27: Perceived use of AI in work
'How often do you use AI in your work?' (% of whole sample)
Don't know 13
Never 33
Rarely (~10% or less) 19
Occasionally (~30%) 14
About half (~50%) 9
Often (~70%) 7
Always or almost always (~90% or more) 5
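The headline aggregates quoted above (54% report at least rare use; 21% use AI about half the time or more) follow directly from the Figure 27 distribution. A minimal arithmetic check, with category labels abbreviated from the figure:

```python
# Figure 27 response distribution (% of whole sample).
usage = {
    "Don't know": 13,
    "Never": 33,
    "Rarely": 19,              # ~10% of the time or less
    "Occasionally": 14,        # ~30%
    "About half": 9,           # ~50%
    "Often": 7,                # ~70%
    "Always or almost always": 5,  # ~90% or more
}

# Anyone reporting at least rare use counts as "using AI in their work".
users = sum(v for k, v in usage.items() if k not in ("Don't know", "Never"))

# "About half the time or more frequently" spans the top three categories.
half_or_more = usage["About half"] + usage["Often"] + usage["Always or almost always"]

print(users)         # 54
print(half_or_more)  # 21
```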
AI use for work is most commonly reported in the BICS countries and least common in western countries

As shown in Figure 28, there are clear differences across countries in people's reported use of AI in their work. While the majority in the BICS countries (71–90%) report using AI in their work, a minority of people in most western countries (32–41%) report using AI. Exceptions are Finland and Estonia, where 54–59% of people report using AI at work. Finland and Estonia are known for their efforts to educate people about AI through public literacy programs.

This pattern suggests that the low reporting of AI use at work in other western countries may partially reflect a general lack of understanding and awareness of when and where AI is embedded in everyday technologies (e.g. internet searches, email filters etc.; see next section). Indeed, there is a correlation between one's knowledge of when and where AI is being used and perceived use of AI at work (r=0.32, p<0.001).

Figure 28. Reported use of AI at work across countries
[Chart: reported frequency of AI use at work, from 'Never'/'Don't know' to 'Often (~70%)', per country. Countries shown: Netherlands, Canada, Japan, UK, Australia, USA, Germany, France, Israel, South Korea, Estonia, Finland, Singapore, South Africa, Brazil, India, China.]
People in the BICS countries and Singapore rate their employers as more supportive of AI and its responsible use than people in other countries

Figure 30: AI culture and support for responsible AI in my employing organisation
[Chart: mean country ratings (values shown span roughly 2.5–6) on two measures: 'AI culture is important' and 'Support for responsible AI'.]
Most people view AI use in managerial decision-making as acceptable, and prefer AI involvement to decision-making by humans only

To understand people's views of the use of AI in managerial decision-making, we asked people to choose the most acceptable weighting between human and AI involvement in decision-making.31

As shown in Figure 31, most people believe AI should have a role in managerial decision-making, with only a few people (8%) advocating for humans to have sole decision-making power. Almost half (45%) view a 75% human/25% AI split in decision-making as the most acceptable configuration. The next most popular proposal is a 50/50 split, advocated by a third of people (35%).

However, few people (12%) believe AI should be dominant in managerial decision-making. This demonstrates a lack of support for fully automated managerial decision-making or a dominance of AI over humans in decision-making.

People in BICS countries are more likely to view a 50/50 split as the most acceptable configuration, whereas in all other countries a 75% human/25% AI split was the most popular option.

Figure 31. Human-AI collaboration in managerial decision-making
'Which of the following proposals do you find most acceptable for human manager-AI collaboration in managerial
decision-making activities?'
[Chart: percentage of respondents selecting each human-AI weighting option; values shown: 35, 33, 32, 18, 11, 9, 9, 8, 3, 2, 1, 1.]
Most people are comfortable with AI use to augment and automate tasks but not for HR and people management activities

We asked respondents how comfortable they are with the use of AI for a range of activities at work. While over half (55%) feel comfortable with the use of AI at work, comfort varies depending on the purpose or application of the AI in the workplace.

Over half of people surveyed feel comfortable with the use of AI for organisation-focused purposes to augment and automate tasks. This includes tasks such as monitoring security, automating administrative, analytic, marketing, and physical tasks, and assisting with queries. People are less comfortable with the use of AI for informing organisational decision-making.

We also asked about the use of AI for employee-focused activities. Here we found a distinction between AI use for augmenting employees at work and AI use for human resource management purposes. Most people are comfortable with AI use for the purpose of augmenting employee performance and decision-making, for example by providing feedback on how to improve performance and supporting employees to make decisions. People are notably less comfortable with AI use for human resource management, such as to monitor and evaluate employees and to support recruitment and selection processes.

Figure 32. Comfort with the use of AI at work
'How comfortable are you with AI being used in the following ways at work?’
Organisation focused
Monitor organisational security 12 21 67
Automate administrative processes 14 23 63
Automate data analysis 13 24 63
Employee focused
Help employees perform tasks 13 22 65
Monitor employees 36 23 41
There are notable differences across countries. People in the BICS countries and Singapore are the most comfortable with AI use at work, with 67–87% comfortable across the various AI work applications (see Figure 33). This is consistent with the pattern of higher use of AI at work by people in these countries. In contrast, people in France, Germany, Japan, Australia, and Canada are the least comfortable (33–46% across AI work applications).

Almost half of people trust AI at work, with trust highest in the BICS countries

We asked people how willing they are to trust AI systems at work by using the systems for work purposes, relying on the information and decisions they provide, and allowing data and relevant information about themselves to be used.

We found almost half (48%) of people surveyed are willing to trust AI at work, with stark differences between countries. Trust is highest in the BICS countries, with two-thirds or more of people (66–87%) in these countries trusting AI at work, significantly more than all other countries. Singapore and Israel had the next highest levels of trust at work. In contrast, people are the least willing to trust AI systems at work in the western countries, Japan, and South Korea, with only 26–40% willing to trust. People's trust in AI at work is strongly associated with their trust in AI systems more broadly (r=0.71, p<0.001).

Figure 33: Trust and comfort with the use of AI at work across countries
[Chart: % willing to trust and % comfortable with AI use at work, per country; values range from 26 to 87. Countries shown: Australia, China, South Africa, Singapore, Germany, Canada, Japan, India, United States, Brazil, Israel, South Korea, France, Netherlands, United Kingdom, Estonia, Finland.]
Half of people believe AI will improve their competence and autonomy at work

We asked people how AI use would affect their sense of competency, autonomy, and relatedness at work. Prior research shows the fulfilment of these three basic psychological needs at work enhances people's wellbeing, motivation, and performance.32 In contrast, when these needs are unfulfilled, it can result in maladaptive behaviours and loss of motivation.

Almost one in two people believe the use of AI in their work would make them feel more competent and effective in their job (49%) and enhance their autonomy (48%) by giving them more choice in how they do their work. Almost two in five (38%) believe using AI would help them feel a sense of relatedness and connection with other people and groups at work, with a third unsure and a little under a third disagreeing.

People in the BICS countries had more positive views than people in other countries about how AI would impact their competency, autonomy, and relatedness at work. For example, over 80% of people in India believe AI will increase their effectiveness, choice, and connection to others at work. In contrast, in many western countries, a third or fewer people believed AI would result in these positive effects.
Fewer than one in three people believe AI will create more jobs than it will eliminate
Most people (71%) disagree or are unsure that AI will create more jobs than it will eliminate (see Figure 34). This finding supports prior surveys reporting concerns about job loss from AI and automation.33
People in China, India, Brazil, and Singapore are the most positive about job creation from AI and more positive than all other countries, yet there is significant variation even among these more optimistic countries. For example, about two-thirds (63–67%) of people in China and India, and 37–48% of people in Singapore and Brazil, agree AI will create more jobs than it will eliminate, compared to less than 30% in other countries.

Figure 34: Perceived impact of AI on jobs generally
Agree (29%) | Neutral (26%) | Disagree (45%)
As shown in Figure 35, people are split in their beliefs about whether AI will impact jobs in their area of work, with 42% believing AI will replace jobs in their area, 39% disagreeing and 19% unsure.
People in the BICS countries, as well as Singapore, South Korea, and Israel, are more likely to report that AI will replace jobs and key aspects of work in their area (48–74% agree).

Figure 35: Perceived impact of AI on my work
'To what extent do you agree...? (1) AI will replace jobs in my area of work; (2) Key aspects of my work could be performed by AI'
Agree (42%) | Unsure (19%) | Disagree (39%)
[Chart values by occupation, two measures each: Manager 65, 71; Professional and Skilled 50, 56; Administrative and Service/Sales 46, 53; Manual 36, 45.]
Younger generations and the university educated are more likely to believe AI will create jobs, even though it can replace aspects of their work
The younger generations of Gen Z and Millennials are more likely than Baby Boomers and older generations to believe key aspects of their work can be performed by AI (49% vs 34%), but also that AI will create more jobs than it will take away (34% vs 22% agree).
Similarly, people with a university education are more likely than those without a degree to believe key aspects of their work can be performed by AI (49% vs 37%) and that AI will create jobs (36% vs 22% agree).
TOPIC SIX
How well do people understand AI?
Four out of five people are aware of AI: awareness is highest in Asian countries and in Finland
Eighty-two percent of people across all countries had heard, read or seen something about AI.
As shown in Figure 38, awareness of AI varies across countries. Asian countries and Finland have the highest levels of AI awareness, with people from the Netherlands reporting the lowest awareness. The high rates in Finland compared to other western nations may partially reflect investment in public AI education created and championed in Finland.35

Figure 38: Awareness of AI across countries
'Have you heard, read or seen anything about AI?' (% Yes / % No)
Whole sample 82 / 18
South Korea 98 / 2
China 96 / 4
Finland 95 / 5
Singapore 94 / 6
India 91 / 9
Japan 86 / 14
Israel 84 / 16
Germany 84 / 16
Canada 81 / 19
South Africa 79 / 21
United Kingdom 78 / 22
Brazil 77 / 23
United States 75 / 25
Australia 75 / 25
Estonia 71 / 29
France 70 / 30
Netherlands 58 / 42

One in two people don't feel they understand AI or when it is used
Half of the people surveyed (see Figure 39) have low subjective knowledge of AI, reporting that they do not understand AI or when and how it is used.36 A smaller proportion (18%) report high subjective knowledge of AI, and a third report moderate understanding.
Subjective knowledge is associated with trust in AI (r=0.36, p<0.001), as well as with technology efficacy, one's self-assessed ability to use digital technologies effectively (r=0.34, p<0.001). Technology efficacy is also associated with trust (r=0.42, p<0.001).

Figure 39: Subjective knowledge of AI
Low (49%) | Moderate (33%) | High (18%)
Figure 40: Subjective knowledge of AI and interest in learning more about AI across countries
[Chart: % with moderate to high subjective knowledge and % with moderate to high interest in learning more about AI, by country.]
Moderate to high subjective knowledge = 'to a "moderate", "large" or "very large" extent'
Moderate to high interest in learning more about AI = 'to a "moderate", "large", or "very large" extent'
Most people want to learn more about AI, with the greatest desire in the BICS countries
Most people (82%) are interested in learning more about AI. Only 19% report no or low interest in learning more about AI (ranging from 4% of Chinese respondents to 45% of Japanese respondents).
People in the BICS countries, South Korea and Israel have the strongest desire to learn more about AI, with over 90% of people interested in these countries. Of the Western countries, people in Germany and Finland report the strongest interest.
People in Japan and Australia have notably lower interest in learning about AI compared to other countries. Only 55% of people in Japan and 65% of people in Australia expressed a desire to learn more about AI.
Two out of five people are unaware that AI enables common applications they use: 45% don't know AI is used in social media
As an indicator of people's objective awareness of AI use, we asked if the common technologies shown in Figure 41 use AI. Overall, we found that although 68% of people had used these common AI-enabled technologies, two in five people (41%) were unaware that these technologies use AI. There is a small association between objective awareness of AI use and trust in AI (r=0.20, p<0.001).

Figure 41: Use of AI technologies and awareness of their use
'For each technology below, please indicate if you have used it and if it uses AI' (% Who use this technology / % Unaware that technology uses AI*)
Aggregated 68 / 41
Chatbots and virtual assistants 69 / 25
Accommodation sharing apps 45 / 64
Ridesharing apps 48 / 59
* '% Unaware that technology uses AI' includes both 'No' and 'Don't know' responses.
The finding that 68% of people across countries report using AI-enabled applications demonstrates the high global penetration of AI into people's daily lives. However, there are notable differences across countries.

People with a better understanding of AI are more likely to perceive greater benefits
Subjective knowledge of when and where AI is being used is associated with perceived benefits of AI (r=0.37, p<0.001) but has only a very small association with perceived risks (r=0.04, p<0.001). A similar pattern emerges for objective awareness of AI with perceived benefits (r=0.22, p<0.001) and perceived risks (r=0.02, p=0.019).

Figure 42: Use of AI technologies and awareness of their use across countries
'For each technology below, please indicate if you have used it and if it uses AI' (% Unaware that technology uses AI / % Who use this technology*)
Israel 43 / 67
Finland 32 / 65
USA 51 / 63
Australia 43 / 63
South Korea 36 / 63
Canada 42 / 61
France 48 / 60
Netherlands 53 / 58
Japan 47 / 56
* '% Unaware that technology uses AI' and '% Who use this technology' reflect an average of 10 different technologies for each country.
Younger generations, the university educated, and managers are more knowledgeable of AI and more interested to learn about AI
Younger generations and those with a university education have greater knowledge of AI, both subjectively and objectively (i.e. are better able to identify when AI is used), and have greater interest in learning about AI, than older generations and those without a university education (see Figure 43).
Over half (57%) of Gen Z and Millennials report they understand AI, compared to only 39% of Baby Boomers and older generations. Gen Z and Millennials are also more likely to correctly identify AI use in common applications than Baby Boomers and older: 48% of older respondents are unable to identify AI use across applications, compared to 39% of younger respondents. Younger generations are also more likely to express interest in learning more about AI (86% vs 76%).
We see a similar pattern across education levels. Fifteen percent more people with a university education have heard of AI compared to people without a university education. Sixty-two percent of the university educated feel they understand AI, compared to only 40% of those without a university education. Those with a university education are also more likely to correctly identify AI use in common applications: 34% of those with a university education are unable to identify AI use across applications, compared to 48% of those without a university education. The university educated are also more likely to express interest in learning more about AI (86% vs 77%).
There is a significant gender gap in awareness of AI, and in subjective but not objective understanding of AI. Men are more likely to have heard of AI than women (87% vs 77%), and report more subjective knowledge about AI than women (60% vs 42%). This is one of the few gender differences found. But men and women do not differ in their ability to objectively identify AI use in common applications, or in their interest to learn more about AI.
Managers report having a stronger understanding of AI than other occupation groups (67% vs 40–58%), and are more aware of AI use across common applications than other occupations except professional and skilled workers. Managers are also more interested in learning more about AI than all other occupations except professional and skilled workers (88% vs 77–79%).

Figure 43: Demographic differences in subjective knowledge of AI and interest to learn about AI (% moderate to high subjective knowledge / % moderate to high interest)
University education 62 / 86
No university education 40 / 77
Men 60 / 84
Women 42 / 79
Manager 67 / 88
Professional and Skilled 58 / 86
Administration and Service/Sales 46 / 79
Manual 40 / 77
TOPIC SEVEN
What are the key drivers of trust and acceptance of AI?
Trust is central to AI acceptance
The model shows that trust is a key driver of AI acceptance (B=0.87),38 and explains 74% of the variance in acceptance. This finding empirically supports why trustworthy AI matters: if people perceive AI systems as trustworthy and are willing to trust them, then they are more likely to accept these technologies.
Trust acts as the central mechanism through which other drivers impact AI acceptance. It is important, therefore, to understand what influences trust in AI systems. To do this, we examine four distinct 'pathways to trust' – institutional, motivational, uncertainty reduction, and knowledge – and examine their comparative importance in predicting trust.39
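As an illustrative analogue only (the study used structural equation modelling; this sketch is an ordinary least-squares regression on synthetic data), "explains 74% of the variance" refers to R-squared: one minus the ratio of residual variance to total variance in the outcome.

```python
import random

random.seed(0)
n = 1000
trust = [random.gauss(0, 1) for _ in range(n)]
# Acceptance generated as a noisy linear function of trust (synthetic data).
acceptance = [0.87 * t + random.gauss(0, 0.5) for t in trust]

# Ordinary least-squares slope and intercept for acceptance ~ trust.
mt = sum(trust) / n
ma = sum(acceptance) / n
slope = (sum((t - mt) * (a - ma) for t, a in zip(trust, acceptance))
         / sum((t - mt) ** 2 for t in trust))
intercept = ma - slope * mt

# R-squared = 1 - residual variance / total variance.
pred = [intercept + slope * t for t in trust]
ss_res = sum((a - p) ** 2 for a, p in zip(acceptance, pred))
ss_tot = sum((a - ma) ** 2 for a in acceptance)
r_squared = 1 - ss_res / ss_tot
```

Because the noise level here is arbitrary, the resulting R-squared illustrates the calculation rather than reproducing the report's figure.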
Figure 44: Model of the key drivers of trust and acceptance of AI systems
[Path model diagram; only the knowledge pathway label survived extraction: 'Knowledge: understanding & efficacy, .12'. The pathway coefficients are reported in the accompanying text.]
Institutional safeguards and confidence are the strongest drivers of trust
The institutional pathway is grounded in the view that people often defer to, and expect, authoritative sources and institutional processes to provide assurance of the safety and reliability of new technologies and practices. As shown in the model, people are more trusting of AI systems when they believe current regulations and laws are sufficient to make AI use safe, and have confidence in the government, technology and commercial organisations to develop, use and govern AI. This institutional pathway is the strongest predictor of trust (B=0.50) and significantly more important than the other drivers.
Given our results show most people are unconvinced that current governance and regulations are adequate to protect people from problems associated with AI, a critical first step towards strengthening trust and the trustworthiness of AI is ensuring it is governed by an appropriate regulatory and legal framework.

The perceived benefits of AI motivate trust
The motivational pathway to trust is grounded in evidence that the more people perceive benefits from using AI, the more motivated they will be to trust AI. This motivational pathway is the second strongest predictor of trust (B=0.37) and is a stronger driver of trust than the perceived risks of AI. This helps explain why people are willing to use technologies that provide immediate benefits (e.g. convenience), despite concerns about potential risks.
This finding highlights the importance of designing and using AI systems in a way that delivers demonstrable benefits to people, in order for these systems to be trusted and accepted.

The perceived risks of AI create uncertainty and reduce trust
The uncertainty pathway is based on evidence that it is more difficult to trust in uncertain and risky contexts. The model shows that the more concerned people are about the perceived risks of AI use, the less likely they are to trust in AI systems (B=-0.14). This includes both technical risks associated with AI use (e.g. cybersecurity and privacy risks, risk of inaccurate or biased outcomes) and broader societal risks (e.g. manipulation, deskilling and job loss). This is the third strongest driver of trust.
This finding underscores the importance of ongoing action to mitigate the risks associated with AI at multiple levels, as well as communicating these risk mitigation strategies to reassure people.

Understanding AI and how to use technology influences trust
The knowledge pathway is based on evidence that knowledge and understanding enhance trust in technology. The model shows that people are more likely to trust and accept AI when they feel they understand when and how AI is used and are sufficiently skilled to use digital technologies (B=0.12). This knowledge pathway is the fourth driver of trust and highlights the importance of supporting people's technological and digital literacy and skills.
Country and demographic factors have a smaller impact on trust
After taking into account the institutional, motivational, uncertainty and knowledge drivers of trust in the model, we found four other factors have a small but meaningful influence on trust. People in the BICS countries (B=0.02), as well as younger generations (B=0.01), the university educated (B=0.01), and men (B=0.01), are more trusting of AI.
Collectively, the drivers in our model account for 84% of the variance in trust.
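The report derived these driver weights with structural equation modelling. As a rough sketch of the underlying idea of comparing standardized driver weights and overall variance explained, the snippet below fits an ordinary multiple regression to synthetic pathway scores, using the report's coefficients as assumed true weights; the data and noise level are invented, so the resulting R-squared will not match the report's 84%.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Synthetic standardized pathway scores (NOT the study's data):
# columns = institutional, motivational, uncertainty, knowledge.
X = rng.standard_normal((n, 4))
true_b = np.array([0.50, 0.37, -0.14, 0.12])  # weights taken from the report
trust = X @ true_b + 0.4 * rng.standard_normal(n)  # arbitrary noise level

# Ordinary least squares with an intercept column.
Xd = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(Xd, trust, rcond=None)

# R-squared: share of variance in trust explained by the four drivers.
pred = Xd @ coef
r_squared = 1 - np.sum((trust - pred) ** 2) / np.sum((trust - trust.mean()) ** 2)
```

With standardized predictors, the fitted coefficients recover the assumed weights, which is the sense in which the model lets the four pathways be ranked by strength.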
Qualitative insights on what enhances trust in AI systems
To supplement our understanding of the key drivers of trust, we asked people: What would enhance your trust in this AI system? People responded to this question in relation to one of the AI applications (i.e. Healthcare AI, Security AI, Recommender AI, Human Resources AI, or AI systems in general).
We conducted a thematic analysis to identify the key themes, supplemented by a word cloud to identify the most frequently mentioned terms (see Figure 45). This qualitative data reinforces the importance of the following pathways for trust:
– Knowledge and understanding of AI, including learning how AI works, and skills, experience and familiarity in using it.
– Risk mitigation by protecting the privacy and security of personal data and information, ensuring the accuracy and reliability of AI output and performance, and retaining human control and oversight.
– Institutional safeguards, particularly 'regulations', 'laws', 'rules', 'appropriate governance' and 'monitoring' mechanisms that 'guarantee' responsible use and prevent problems.
– Showing benefits to people and society, including improving the quality, effectiveness, efficiency and ease with which work is done, reducing costs, and 'improving the lives of people' and 'betterment' of society, including taking a people-centred approach.
– Demonstrating trustworthiness by being 'transparent' about how AI is being used and implemented, including how data and personal information is used and how outcomes are reached, and proving the 'trustworthiness' and 'safety' of AI systems 'over time'.
The qualitative analysis also revealed that a proportion of people feel 'nothing' can be done to enhance their trust, with some perceiving the technology as too risky.
TOPIC EIGHT
How have trust and attitudes towards AI changed over time?
Similarities in the drivers of trust across time and samples
In our 2020 report, we presented a model that included some similar drivers to the model reported in the previous section (e.g. institutional safeguards, familiarity with AI, AI uncertainty). These models differ in terms of: a) the inclusion of additional and refined measures in 2022 to account for the global nature of the survey and to better represent the theoretical pathways to trust; b) the use of a more advanced statistical modelling technique (structural equation modelling instead of path analysis); and c) the larger and more diverse sampling (17 countries across all global regions vs 5 western countries).
Despite these differences, we see several similarities in the findings between models. Institutional safeguards is the strongest predictor of trust in both models. Familiarity and knowledge of AI are also significant predictors of trust with similar impacts. We also see similarity in the predictors of AI uncertainty (2020) and perceived risks (2022), with both having a significant negative impact on trust. Trust is a strong predictor of acceptance in both models.
In addition, we re-ran the model presented in the previous section using the sample of respondents from the five western countries surveyed in both 2020 and 2022 (USA, UK, Canada, Australia and Germany). Given the different and more positive pattern of response for the BICS countries, we wanted to determine if the model and drivers of trust are consistent for these five western countries.
Results indicate that the model holds for these five countries. While the strength of relationships varies slightly, the four key pathways and drivers of trust remain the same. Specifically, the institutional path remained the strongest, followed by the motivational, uncertainty and knowledge paths. Trust remains a strong predictor of acceptance.
Trust and attitudes towards AI vary across countries: western countries, together with Japan, South Korea, and Israel, generally have lower trust and less positive attitudes than people in the emerging economies.

Across our findings, we see a general pattern of lower trust, greater ambivalence, and less positive views towards AI in western countries, Japan, South Korea and Israel, compared to people in the BICS countries and, to some extent, Singapore.

Our findings suggest that the more positive attitudes in the BICS countries do not reflect blind optimism or a lack of awareness of the potential risks of AI use. Rather, we see some evidence of the opposite, with people in Brazil and South Africa (together with South Korea) rating the risks of AI higher than other countries, and Indians and South Africans more likely to believe AI risks will impact people in their country.

Instead, our analysis suggests the varying levels of trust and acceptance across countries largely reflect three key factors:

– Differences in the perceived benefits of AI and the extent to which they outweigh potential risks: people in western countries and Japan are generally less convinced of the benefits of AI and, together with South Korea and Israel, less likely to believe the benefits of AI outweigh the risks, compared to people in the BICS countries and Singapore.

– Perceptions of institutional safeguards: there are differences across countries in the perceived adequacy of safeguards and regulations to make AI use safe, and in confidence in the institutions responsible for this. Fewer people in western countries, Japan, South Korea and Israel view current laws and regulations for safeguarding AI as sufficient, and they report less confidence in companies to develop, use and govern AI, compared to people in Brazil, India, China and Singapore.

– Familiarity and understanding of AI: people in western countries generally report less use of AI at work, and lower use and knowledge of AI in common applications, compared to people in the BICS countries and Singapore.

The key commonality across the BICS countries, despite differences in economic strength, institutional arrangements, and technological advancement, is the emerging nature of their economies. This may encourage a growth mindset in relation to the acceptance of technology, as it provides a means to accelerate economic progress and prosperity. An implication of the greater trust and acceptance of AI in the BICS countries (and to some extent Singapore), combined with the emerging nature of their economies, is that these countries may be uniquely positioned to accelerate innovation and technological advantage and offer a more supportive environment to attract businesses wishing to invest in AI development and innovation, supporting a competitive advantage. Over time, this may contribute to a disruption of traditional economic hegemonies.

However, although people in the BICS countries are more likely to perceive current AI regulations and laws as adequate, it is noteworthy that on international rankings these countries rank low on governance and regulation to ensure the ethical and responsible use of AI.41 In contrast, the European Union (EU) and Canada are viewed as leaders in AI and data governance and ethics. The EU's AI Act will set limits and conditions on the use of AI systems based on a risk classification, and restrict the types of AI products and services that can be developed and sold in the EU, which is likely to influence AI development and governance practices in other countries.

Sustained competitive advantage requires trust and acceptance to be preserved. We turn now to discuss the four pathways identified by our modelling for strengthening the responsible use of AI to secure and sustain public trust: the institutional, motivational, uncertainty reduction and knowledge pathways.
Survey procedure

The research was approved by and adhered to the Guidelines of the ethical review process of The University of Queensland and the National Statement on Ethical Conduct in Human Research.

The survey was divided into five sections, with questions in each section focused on the respondent's: 1) use and understanding of AI; 2) attitudes towards AI systems (including trust, acceptance, risks, benefits, impacts and emotions); 3) attitudes towards AI governance and management; 4) attitudes to the use of AI at work; and 5) education and individual differences (e.g. technology efficacy, disposition to value privacy). At the end of the survey, respondents were asked open-ended questions to understand what would enhance trust in AI and what concerns them most about the use of AI.

After completing the first section on use and understanding of AI, participants read the OECD's definition of AI. Questions in sections two and three of the survey, as well as the open-ended questions in section 5, referred to one of the AI use cases described on page 10, or referred to 'AI systems in general'. Respondents were randomly allocated to one of these five use cases, providing equivalent numbers of responses across use cases.

Before answering this subset of questions, respondents were provided with a brief description of the specific AI system, including what it is used for, what it does and how it works (see full descriptions on page 10). The research team developed these descriptions based on a range of in-use systems and input from domain experts working in healthcare, human resources, security, and recommender systems, respectively.

Survey measures, piloting and translations

Where possible, we used or adapted existing validated measures from academic research (e.g. Lu et al., 2019; McKnight et al., 2002; Pavlou et al., 2007; Zhang & Moffat, 2015) or from previous public attitude surveys (e.g. Eurobarometer, 2017; Ipsos, 2017; Zhang & Dafoe, 2019).

Trust in each specific AI application was measured using a reliable 8-item scale from Gillespie et al. (2021). Example items are: How willing are you to… rely on information provided by the AI system; depend on decisions made by the AI system; share relevant information about yourself to enable the AI system to perform a service or task for you; allow your data to be used by the AI system.

Perceived trustworthiness was measured using a 14-item measure assessing positive expectations towards the AI system, adapted from McKnight et al. (2002). Example items include: I believe [specific AI application] would: produce output that is accurate; have a positive impact on most people; be safe and secure to use.

Acceptance was measured using a reliable 3-item scale adapted from Zhang & Moffat (2015). Items included: To what extent do you accept the use of the AI application, approve of the use of the AI application, embrace the use of the AI application.

The psychometric properties of all multi-item constructs were assessed to examine reliability and dimensionality. All were found to be reliable, with Cronbach alphas ranging from 0.78 (uncertainty avoidance) to 0.93 (basic psychological needs). All constructs were unidimensional except for AI system trustworthiness, which is conceptualised and reported as three dimensions: 1) Ability (system perceived to perform accurately, reliably, and as intended); 2) Humanity (system perceived to create benefits, provide a helpful service, and have a positive impact); and 3) Integrity (system perceived to uphold ethical principles and data privacy rights, and operate in a fair and safe way).
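The scale-reliability check described above (Cronbach's alpha for each multi-item construct) can be sketched in a few lines. This is an illustrative re-implementation, not the authors' analysis code, and the respondent ratings below are invented for demonstration:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list of respondent scores per scale item,
    with all lists the same length (one entry per respondent).
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances.
    item_var_sum = sum(sample_var(item) for item in items)
    # Variance of each respondent's total score across items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    total_var = sample_var(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)


# Hypothetical ratings from five respondents on a 3-item, 5-point scale.
scale = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(f"alpha = {cronbach_alpha(scale):.2f}")  # prints "alpha = 0.89"
```

By convention, values around 0.7 and above indicate acceptable reliability; the report's scales fell between 0.78 and 0.93.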
We extensively piloted and refined the survey before full launch. To ensure survey equivalence across countries, we conducted translation and back-translation of the English version of the survey into the native languages dominant in each country, using separate professional translators. Respondents could also opt to complete the survey in English if preferred. The following languages were offered in their respective countries: French (France and Canada), German, Finnish, Estonian, Simplified Chinese, Japanese, Portuguese, Hebrew, Dutch, and Korean.

Reporting percentages and rounding

Most survey measures used either a 5- or 7-point Likert scale. When reporting percentages, we adopted the following cut-off values for reporting low, moderate and high values, unless otherwise reported.

– 5-point scales:
– Low = (mean) values ranging from 1.0 to 2.49
– Moderate = (mean) values ranging from 2.50 to 3.50
– High = (mean) values ranging from 3.51 to 5.0

Reporting differences between countries, applications, and people

Our reporting of between-country, between-application, between-people and within-person differences was informed by statistical analyses and adhered to well-established benchmarks for interpreting between- and within-subject effect sizes (see Cohen, 1988; Lakens, 2013).

We used one-way analysis of variance (ANOVA) to examine differences between countries, AI applications and people (e.g. generational differences). We took several steps to ensure the responsible reporting of only meaningful differences in the data. First, we adopted a stringent cut-off of p<0.01 to interpret statistical significance. Where there are statistically significant differences between groups (p<0.01), we examined the omega-squared effect size to determine the magnitude of difference between the groups. Differences with an effect size less than 0.01 were deemed practically insignificant and are not reported. Meaningful patterns of between-country findings that exceeded the 0.01 effect size are reported.43
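The reporting rule described above (one-way ANOVA, then an omega-squared effect size with a 0.01 cut-off) can be sketched as follows. This is an illustrative reconstruction under our reading of the method, not the authors' analysis code, and the group scores are invented; the p < 0.01 significance test would additionally be read from the F distribution (e.g. via scipy.stats.f.sf), which is omitted here to keep the sketch dependency-free:

```python
def one_way_anova(groups):
    """One-way ANOVA: returns (F statistic, omega-squared) for a list of groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between, df_within = k - 1, n - k
    ms_between = ss_between / df_between
    ms_within = ss_within / df_within

    f_stat = ms_between / ms_within
    # Omega-squared: a less biased effect-size estimate than eta-squared.
    omega_sq = (ss_between - df_between * ms_within) / (
        ss_between + ss_within + ms_within
    )
    return f_stat, omega_sq


# Hypothetical scores from two groups of respondents.
f_stat, omega_sq = one_way_anova([[1, 2, 3], [7, 8, 9]])
# Report the difference only if it is both statistically significant
# (p < 0.01, from the F distribution) and practically meaningful:
meaningful = omega_sq >= 0.01
```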
Appendix 2: Country samples
In this section, we describe the demographic profile of each country sample.
The demographic profile of each country sample was nationally representative on age, gender and regional location, within a 5% margin of error, based on official national statistics within each country.

Across countries, the gender balance was 51% women, 49% men and 1% other genders, with Estonia having the highest representation of women (53%) and India the lowest (47%).

The mean age across countries was 44 years and ranged from a mean of 36 years (India) to 51 years (Japan). In three countries (India, Israel and Estonia), the sample of respondents was slightly younger than the respective population average due to under-representation in the 55+ age bracket in these countries: Estonia (55+ expected: 34%, achieved: 14%), India (55+ expected: 18%, achieved: 10%) and Israel (55+ expected: 31%, achieved: 8%).

Country samples varied on education. Samples from emerging economies (Brazil, India, China, and South Africa) represented considerably more university-educated people than their respective general populations (using OECD 2021 education data as a comparison45). A higher representation of educated people is common in survey research from the BICS countries. For instance, Edelman (2022) and Ipsos (2022) both note that online samples in Brazil, India, China and South Africa are more educated, affluent, and urban than the general population.46

However, our analyses show that the more positive responses reported by people in BICS countries are not due to the higher education level of these samples. We examined differences between BICS and non-BICS countries on key indicators when controlling for the effects of education and age, using analysis of covariance (ANCOVA) tests. We found significant differences between country groupings on all key indicators, even when controlling for education and age, with higher levels of trust and more positive attitudes for people in BICS countries.

Furthermore, people without a university education in the BICS countries demonstrate higher mean values on key constructs than those without a university education in non-BICS countries. For example, people without a university education in BICS countries have more trust in AI (M=4.8 vs 3.8) and perceive more benefits of AI (M=3.8 vs 3.2) than those without a university education in all other countries. There are also no educational differences in trust in AI for China or South Africa.
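An ANCOVA of the kind described, testing for a group effect while controlling for covariates such as education and age, can be expressed as an F-test between two nested regression models. The sketch below is an illustrative reconstruction (not the study's analysis code) using numpy, with invented data:

```python
import numpy as np


def ancova_f(outcome, group, covariates):
    """F-statistic for a group effect after controlling for covariates.

    Fits a reduced model (intercept + covariates) and a full model
    (+ group dummies) by least squares, then compares residual sums of
    squares with the standard nested-model F-test.
    """
    y = np.asarray(outcome, dtype=float)
    n = len(y)
    x_reduced = np.column_stack(
        [np.ones(n)] + [np.asarray(c, dtype=float) for c in covariates]
    )
    levels = sorted(set(group))
    dummies = [
        np.array([1.0 if g == lev else 0.0 for g in group]) for lev in levels[1:]
    ]
    x_full = np.column_stack([x_reduced] + dummies)

    def rss(x):
        beta, *_ = np.linalg.lstsq(x, y, rcond=None)
        resid = y - x @ beta
        return float(resid @ resid)

    df1 = len(dummies)         # extra parameters in the full model
    df2 = n - x_full.shape[1]  # residual degrees of freedom
    return ((rss(x_reduced) - rss(x_full)) / df1) / (rss(x_full) / df2)


# Invented example: a BICS/non-BICS contrast with age as the covariate.
age = [25, 35, 45, 55, 25, 35, 45, 55]
grouping = ["non-BICS"] * 4 + ["BICS"] * 4
trust = [3.1, 3.3, 3.4, 3.7, 4.6, 4.8, 5.0, 5.1]
f_stat = ancova_f(trust, grouping, [age])  # large F => group effect remains
```

A large F here indicates the group difference survives adjustment for the covariates, mirroring the report's finding that BICS/non-BICS differences hold when controlling for education and age.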
Table A1:
The demographic profile for each country sample
Country        % Gender (W / M / O)   Mean Age (yrs)   % Generation (Z / M / X / BB)   % Education (<SS / SS / Qual / UG / PG)
Australia 51 48 1 47 14 30 23 33 4 26 28 31 11
Brazil 52 47 1 40 19 37 31 13 4 27 17 38 14
Canada 51 48 1 48 12 28 25 35 5 24 31 31 9
China 48 52 0 43 16 37 25 22 1 7 15 70 7
Estonia 53 46 1 38 23 41 24 12 8 29 24 24 15
Finland 51 48 1 45 14 30 29 27 11 24 29 22 14
France 50 49 1 46 13 28 30 29 10 23 24 26 17
Germany 51 49 0 46 11 30 31 28 5 14 59 13 9
India 47 53 0 36 26 43 25 6 1 6 4 40 49
Israel 47 52 1 38 20 44 26 10 1 25 25 34 15
Japan 50 50 0 51 9 20 33 38 2 29 17 47 5
Netherlands 50 49 1 46 14 30 28 28 10 28 26 23 13
Singapore 51 49 0 44 13 32 34 21 1 12 31 44 12
South Africa 51 48 1 39 19 43 23 15 1 31 26 35 7
South Korea 52 48 0 45 12 25 41 22 1 19 3 65 12
UK 50 49 1 46 13 33 26 28 2 27 29 30 12
USA 51 48 1 45 15 28 26 31 7 30 26 24 13
Notes: Gender: W = women, M = men, O = non-binary or other gender identity; Generation: Z = Generation Z, M = Millennials, X = Generation X, BB = Baby Boomers and older generations; Education: <SS = lower secondary school or less, SS = upper secondary school, Qual = vocational or trade qualification, UG = undergraduate degree, PG = postgraduate degree.
Country        Trust   Twthy   Accept   Benefits   Risks   Benefit-Risk   Current Safeguards   Subjective Knowledge
South Africa   4.7     5.3     3.4      3.8        3.5     4.7            4.3                  2.6
South Korea    4.0     4.8     3.1      3.7        3.5     4.4            3.4                  3.1
Notes: Trust = Trust in AI systems, Twthy = Perceived trustworthiness of AI systems, Accept = Acceptance of AI systems, Benefits = Perceived benefits of AI systems, Risks = Perceived risks of AI systems, Benefit-Risk = Perception that benefits of AI systems outweigh the risks, Current Safeguards = Perceived adequacy of current laws and regulations governing AI, Subjective Knowledge = Self-reported knowledge of AI.
Endnotes
1. McKinsey (2022). The state of AI in 2022 – and a half decade in review.

2. BRICS is the acronym used to describe the five major emerging economies of Brazil, Russia, India, China, and South Africa (see https://en.wikipedia.org/wiki/BRICS). We did not include Russia in our sampling due to the current invasion of Ukraine, and therefore use the acronym BICS in this report.

3. European Commission (2019). Ethical guidelines for trustworthy AI.

4. Eurobarometer (2017). Attitudes towards the impact of digitisation and automation on daily life (Report no. 460). Selwyn, N., Gallo Cordoba, B., Andrejevic, M., & Campbell, L. (2020). AI for Social Good? Australian public attitudes toward AI and society. Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends.

5. Eurobarometer (2019). Europeans and Artificial Intelligence. The European Consumer Organization (2020). Artificial Intelligence: what consumers say. Findings and policy recommendations of a multi-country survey on AI.

6. See the Government AI Readiness Index 2022. See also the Responsible AI Sub-Index of the Government AI Readiness Index 2020.

7. For a full description of the 2020 Trust in AI survey, see Gillespie, N., Lockey, S., & Curtis, C. (2021). Trust in Artificial Intelligence: A Five Country Study.

8. Our finding that trust influences AI acceptance supports prior research. For example, see the following studies: Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 1-13. Fan, W., Liu, J., Zhu, S., & Pardalos, P. M. (2020). Investigating the impacting factors for the healthcare professionals to adopt artificial intelligence-based medical diagnosis support system. Annals of Operations Research, 294(1), 567-592. Gillespie, N., Lockey, S., & Curtis, C. (2021). Trust in artificial intelligence: A five country study. The University of Queensland and KPMG Australia. Zhang, S., Meng, Z., Chen, B., Yang, X., & Zhao, X. (2021). Motivation, social emotion, and the acceptance of artificial intelligence virtual assistants: Trust-based mediating effects. Frontiers in Psychology, 12, 3441.

9. Gillespie, N., Curtis, C., Bianchi, R., Akbari, A., & Fentener van Vlissingen, R. (2020). Achieving Trustworthy AI: A Model for Trustworthy Artificial Intelligence. KPMG and The University of Queensland Report. Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST Trustworthy and Responsible AI, National Institute of Standards and Technology, Gaithersburg, MD [online]. OECD AI Policy Observatory Tools for Implementing Trustworthy AI.

10. Morning Consult (2022). While Tech Fawns Over AI, Consumers Need More Convincing. Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2021). Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research, 129, 911-926.

11. OECD (2019). Artificial Intelligence in Society.

12. Multiple international and pan-governmental organisations, including the OECD, the European Commission, and the G7 Innovation Ministers, note the importance of trust in AI and developing 'trustworthy' AI systems, to support continual AI adoption. This is also recognised in the AI roadmaps and strategic plans of the five countries examined in this report (see, for example, the UK AI Roadmap and the US National Artificial Intelligence Research and Strategic Development Plan).

13. Prior public attitude surveys have examined general acceptance and support of AI. For example, see: The European Consumer Organization (2020). Artificial Intelligence: what consumers say. Findings and policy recommendations of a multi-country survey on AI. Eurobarometer (2019). Europeans and Artificial Intelligence. Ipsos (2022). Global Opinions and Expectations about Artificial Intelligence. Morning Consult (2022). While Tech Fawns Over AI, Consumers Need More Convincing. Pew Research Center (2020). Science and Scientists Held in High Esteem Across Global Publics: Yet there is ambivalence in many publics over developments in AI, workplace automation, food science. Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends.

14. Data was collected from research panels sourced by Qualtrics, a global leader in survey panel provision.

15. The global regions of 1) East Asia, 2) Eastern Europe, 3) Latin America and the Caribbean, 4) Middle East and North Africa, 5) North America, 6) Pacific, 7) South and Central Asia, 8) Sub-Saharan Africa, and 9) Western Europe.

16. We focused primarily on the 2021 Government AI Readiness Index. This index ranks and provides a total score for 160 countries on AI readiness across three pillars: Government (e.g. existence of a national AI strategy, cyber-security), Technology (e.g. number of AI unicorns, R&D spending), and Data and Infrastructure (e.g. telecommunications infrastructure, households with internet access). We supplemented this with data from the 2021 Stanford AI Index, which examines country-level private investment in AI and acceleration in AI activity over time, to enable identification of countries that are rapidly emerging in AI in regions that historically lacked AI capacity and investment (e.g. South Africa, Brazil, and India). The countries selected maintained their ranking at or near the top for their region on the 2022 Government AI Readiness Index.

17. The Responsible AI Index ranks countries according to the readiness of their governments to use AI in a responsible way, according to four pillars that correspond with the OECD Principles on AI: Inclusivity, Accountability, Transparency and Privacy. It is a sub-index of the Government AI Readiness Index produced by Oxford Insights.

18. Occupational groupings sourced from the OECD International Standard Classifications of Occupations.

19. See, for example, Gillespie, N., Lockey, S., & Curtis, C. (2021). Trust in Artificial Intelligence: A Five Country Study.

20. This definition aligns with dominant interdisciplinary definitions of trust (e.g. Mayer et al., 1995; Rousseau et al., 1998), including trust in technological systems (see McKnight et al., 2002, 2011).

21. The 2021 Stanford AI Index reports accelerated use of AI in the major emerging economies, as well as the increasing economic importance of AI in these countries. Our pattern of findings aligns with two prior surveys reporting more positive sentiments towards AI in emerging economies (see Ipsos, 2022; Pew Research Center, 2020).

22. To determine if people experience ambivalent emotions towards AI, we categorised positive (e.g. optimistic, excited, and relaxed) and negative (e.g. fearful, worried, and outraged) emotions into high (ratings of 'moderately' and above, i.e. 3-5 on the five-point scale) and low (ratings of 'slightly' and below, i.e. 1-2 on the five-point scale). Participants experiencing one or more of the positive emotions 'moderately' or above were classified high for positive emotions, and similarly for negative emotions.
23. 27% of respondents opted to answer this open-ended question.

24. Stanford University (2021). Artificial Intelligence Index Report 2021.

25. We asked respondents to rate their confidence in twelve entities. Given similarities in the data across some of these entities, for simplicity we amalgamated responses to the following six entities into three entities: 'The federal government' and 'My state/province government' to form 'Federal/State/Provincial Government'; 'Independent regulatory bodies funded by the government' with 'Existing agencies that regulate and govern specific sectors' to form 'Existing regulatory agencies'; and 'Intergovernmental research organisations (e.g. CERN)' with 'Non-government scientific organisations (e.g. AAAI)' to form 'Intergovernmental and non-governmental research organisations'.

26. Eurobarometer (2017). Attitudes towards the impact of digitisation and automation on daily life (Report no. 460). Selwyn, N., Gallo Cordoba, B., Andrejevic, M., & Campbell, L. (2020). AI for Social Good? Australian public attitudes toward AI and society. Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends.

27. Eurobarometer (2019). Europeans and Artificial Intelligence. The European Consumer Organization (2020). Artificial Intelligence: what consumers say. Findings and policy recommendations of a multi-country survey on AI.

28. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. See the OECD AI Policy Observatory.

29. European Commission (2019). Ethical guidelines for trustworthy AI.

30. Ninety percent of the sample were either currently in work (67%) or had worked in the past (23%). This translated into a sample size of 15,409 (of the total 17,193) respondents completing questions about the use of AI at work.

31. We adapted a measure from Haesevoets, de Cremer, Dierckx & van Hiel (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119.

32. See Van den Broeck, A., Ferris, D. L., Chang, C.-H., & Rosen, C. C. (2016). A review of self-determination theory's basic psychological needs at work. Journal of Management, 42(5), 1195–1229. See also https://selfdeterminationtheory.org/topics/application-basic-psychological-needs/

33. Edelman AI (2019). 2019 Edelman AI Survey. Eurobarometer (2017). Attitudes towards the impact of digitisation and automation on daily life (Report no. 460). Retrieved from https://ec.europa.eu/; Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends.

34. We grouped into an "other" category people who are not currently in work but had worked previously (e.g. retirees, homemakers, the unemployed), those who could not be reliably coded into another category (e.g. "employee", "self-employed", "various", etc.), and those who selected "Other" but did not provide any details of their occupation. Given the miscellaneous nature of this category, we do not report on it.

35. For example, The Elements of AI course is a free online course created by the University of Helsinki and MinnaLearn. It has been completed by over 850,000 people.

36. Responses to the three items assessing subjective knowledge were aggregated to produce an overall score.

37. Structural equation modelling is a set of multivariate modelling techniques that has advantages over traditional multivariate techniques, including (a) the assessment of measurement error to provide less biased effect estimates, (b) the estimation of latent variables via observed variables, and (c) the ability to assess the fit of the model to the data. Our model fit the data well: χ2 (N = 17134, df = 1740) = 46803.72, p < .001; CFI: .93, TLI: .93, SRMR: .06, RMSEA: .04. For an accessible guide to the structural equation modelling process, see Kline, R. B. (2015). Principles and Practices of Structural Equation Modeling (4th ed.). Guilford Press: New York.

38. 'B' refers to the standardised beta coefficient, which indicates the strength of the effect of each independent variable (i.e. driver) on the dependent variable (i.e. outcome). Beta coefficients can be compared to indicate the relative strength of each independent variable.

39. This model differs from models in our prior reports due to the inclusion of different and new predictors (e.g. perceived benefits, perceived risks, technology efficacy), the use of a more advanced statistical modelling technique (structural equation modelling instead of path analysis), and the larger and more diverse sampling (17 countries across all global regions vs. 5 western countries).

40. In 2020, we asked questions about three AI use cases: AI in general, Healthcare AI and Human Resources AI. As such, we only include 2022 comparative survey data based on these same three use cases. While the samples collected in 2020 and 2022 are based on the same methodology and panel provider, they are independent of each other. As such, our analyses examine general trends rather than a longitudinal analysis of the same respondents.

41. See the Government AI Readiness Index 2022. See also the Responsible AI Sub-Index of the Government AI Readiness Index 2020.

42. https://www.ibm.com/watson/resources/ai-adoption

43. See Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Sage: London.

44. As a rule of thumb, a Hedges' g value of 0.2 is considered a small effect size, 0.5 a medium effect size, and 0.8 or larger a large effect size (see Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863). We chose a cut-off of 0.3 to ensure a practically meaningful and robust difference.

45. Comparative data sourced from https://data.oecd.org/eduatt/adult-education-level.htm#indicator-chart or from UNESCO where not available from OECD.

46. See Edelman (2022). Edelman Trust Barometer 2022. Ipsos (2022). Global Opinions and Expectations about Artificial Intelligence.
Key contacts
The University of Queensland
Professor Nicole Gillespie
KPMG Chair in Organisational Trust
Professor of Management, The University of Queensland
T: +61 7 3346 8076
E: n.gillespie1@uq.edu.au

Dr Steve Lockey
Research Fellow (Trust), The University of Queensland
T: +61 7 3443 1206
E: s.lockey@uq.edu.au

Dr Caitlin Curtis
Research Fellow (Trust), The University of Queensland
T: +61 7 3346 8083
E: c.curtis@uq.edu.au
KPMG
James Mabbott
National Leader, KPMG Futures
KPMG Australia
T: +61 2 9335 8527
E: jmabbott@kpmg.com.au

Rita Fentener van Vlissingen
Associate Director, ESG and Impact Relations
KPMG Australia
T: +61 2 9346 6366
E: ritafentener@kpmg.com.au

Jessica Wyndham
Associate Director, KPMG Banarra, Human Rights & Social Impact
KPMG Australia
T: +61 2 9335 8573
E: jwyndham@kpmg.com.au
Richard Boele
Chief Purpose Officer
Partner, KPMG Banarra,
Human Rights & Social Impact
KPMG Australia
T: +61 2 9346 585
E: rboele@kpmg.com.au
uq.edu.au
KPMG.com.au
The information contained in this document is of a general nature and is not intended to address the objectives, financial situation or needs of any particular individual or
entity. It is provided for information purposes only and does not constitute, nor should it be regarded in any manner whatsoever, as advice and is not intended to influence
a person in making a decision, including, if applicable, in relation to any financial product or an interest in a financial product. Although we endeavour to provide accurate
and timely information, there can be no guarantee that such information is accurate as of the date it is received or that it will continue to be accurate in the future. No one
should act on such information without appropriate professional advice after a thorough examination of the particular situation.
To the extent permissible by law, KPMG and its associated entities shall not be liable for any errors, omissions, defects or misrepresentations in the information or for
any loss or damage suffered by persons who use or rely on such information (including for reasons of negligence, negligent misstatement or otherwise).
©2023 KPMG, an Australian partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited,
a private English company limited by guarantee. All rights reserved.
The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organisation.