
A Roadmap for Responsible AI Leadership in Canada

September 2023
Contents
Introduction
The Challenge for Canada
Global Regulatory Landscape for AI
AI Leadership Principles
Acknowledgements
About the Council of Canadian Innovators


Introduction

Artificial Intelligence (AI) will be a defining challenge for policymakers in our time. As a new and genuinely general-purpose family of technologies, AI has the potential to reshape the economy and society in profound ways — on a scale equivalent to the Industrial Revolution and electrification. What is unlikely to change, however, are the fundamental economic structures that enable firms to be successful.

Waves of economic and technological change built the knowledge-based economy, where more and more value is derived from intangible assets like intellectual property rights and data. Today, network effects and vast data assets drive outsized success for a few firms. As a result, the innovation economy is characterized by superstar firms equipped with the data and IP assets they need to fend off competitors.

AI-driven businesses will thrive based on the freedom to operate granted by their control of key IP, as well as their access to data, replicating the winner-take-most pattern we've seen in previous generations of digital technology. Policymakers focused on creating lasting and inclusive prosperity should prioritize domestic Canadian firms, and embrace policies that support their growth to become global champions. By leveraging their success, we can realize broad-based gains for Canadians.

PREPARED BY
Laurent Carbonneau
Nicholas Schiavo
Wenny Jin

The Challenge
for Canada
The broad AI sector is currently valued at around $200 billion and is likely to expand to around $2 trillion by 2030.¹ Canada is well positioned in the industry in terms of highly qualified personnel and leading research, but Canadian companies face significant barriers while scaling.²
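For perspective, those figures imply compound growth of roughly 39 per cent per year over the rest of the decade. A minimal back-of-envelope sketch of that arithmetic, assuming a 2023 baseline of roughly $200 billion growing to $2 trillion by 2030:

```python
# Back-of-envelope check of the growth implied by the figures above,
# assuming a 2023 baseline of ~$200B growing to ~$2T by 2030 (7 years).
start_value = 200e9        # current market size, USD (approximate)
end_value = 2e12           # projected 2030 market size, USD (approximate)
years = 2030 - 2023

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 38.9%
```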
A lack of scaling Canadian companies means that many of the benefits created from public investment in research and training, including intellectual property, are accruing to firms outside of Canada – for example, nearly 75% of intellectual property rights (IPRs) generated through the federal government's AI Strategy are owned by foreign entities, including American tech giants such as Uber, Meta and Google.

The significant challenge in Canada will be constructing a policy and regulatory framework that encourages the rapid growth of domestic companies into global leaders. The Council of Canadian Innovators believes that scaling innovative companies need access to talent, capital and customers, as well as strong marketplace frameworks that enable success, including law and regulation.

Canada should create and implement a Responsible AI Leadership roadmap, based on four principles: cultivating citizen and consumer trust, regulatory clarity, agility, and an export focus to international rule- and standards-setting.

¹ https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
² Scale AI, "AI At Scale: How Canada Can Build an AI-Powered Economy," 2023, 3.
Global Regulatory
Landscape for AI
The Canadian federal government tabled the Artificial Intelligence
and Data Act (AIDA) as part of the broader Bill C-27, the Digital
Charter Implementation Act. If passed, AIDA will regulate the design,
development, and use of “high-impact” AI systems in the private
sector. Other countries have already taken steps to regulate AI. As Parliament and the government consider AIDA, they should weigh in their deliberations how other jurisdictions have approached regulating AI, especially to avoid pitfalls that add complexity and erode trust as international norms continue to evolve.

European Union

The European Union's AI Act passed a significant legislative milestone in June 2023, setting it on a path to coming into force in 2026. The EU Single Market is significant, and regulations and standards that apply in the European market often become global benchmarks.

The AI Act centres around four tiers for AI systems based on their risk to human safety, livelihoods and rights: unacceptable, high, limited, and low or minimal (the last of which is not regulated at all beyond existing privacy and consumer protection rules). The first category includes uses like 'social scoring' or constant facial recognition tracking in public, and such uses are simply banned.

The law focuses on 'high-risk' systems, including systems used in products covered by EU product safety laws, as well as a broad family of other use cases where automated decision-making is directly and significantly consequential to individuals, with the potential to unfairly discriminate or cause harm – including job applications, admission to educational institutions, and biometric identification.

High-risk systems require risk management measures to identify, evaluate and mitigate negative impacts, and providers must maintain public technical documentation and decision logs to show compliance. The law mandates human oversight and adequate cybersecurity, and providers of high-risk AI services must notify national governments that they are making them available. The EU will also maintain a union-wide database of high-risk systems.

AI systems that present limited risk will not be as tightly regulated. Instead, they will require simple notification and transparency to users.

The Act creates a limited carveout to promote innovation through the creation of regulatory sandboxes. It also creates a European AI Board, made up of national authorities and the EU Data Protection Supervisor, to advise the Commission on AI issues and promote best practices. Despite the fairly far-reaching requirements in the legislative text, the EU law takes an important step towards making compliance simpler: it allows for the development and recognition of standards to govern regulated activities rather than prescribing methods through regulation.
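To make the tiered structure concrete, the sketch below models the four tiers and the obligations described above as a simple lookup. It is an illustrative simplification, not the legal text; the tier labels and obligation wording are paraphrased from the description above.

```python
# Illustrative sketch only: a simplified model of the EU AI Act's four risk
# tiers as described in this section. Not the legal text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # e.g. social scoring; prohibited outright
    HIGH = "high"                   # e.g. hiring, admissions, biometric ID
    LIMITED = "limited"             # lighter notification/transparency duties
    MINIMAL = "minimal"             # no obligations beyond existing law

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "risk management to identify, evaluate and mitigate negative impacts",
        "public technical documentation and decision logs",
        "human oversight and adequate cybersecurity",
        "notification to national authorities; entry in the EU-wide database",
    ],
    RiskTier.LIMITED: ["notify users and provide basic transparency"],
    RiskTier.MINIMAL: ["existing privacy and consumer protection rules only"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print("-", duty)
```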
United Kingdom

In 2021, the UK government published its National AI Strategy, which sets out a plan for the responsible adoption of AI technologies. The high-level strategy focuses on ethics, transparency, and accountability. The UK also established an AI Council in 2019 to advise the government on AI policy and regulation, as well as the Centre for Data Ethics and Innovation, an independent advisory body that provides guidance on the responsible use of data-driven technologies.

In March 2023, the UK government published a white paper staking out what it called "A pro-innovation approach to AI regulation." The white paper sets out a "flexible" approach to regulating AI that is intended to both build public trust and make it easier for businesses to grow.

Rather than creating a new, dedicated regulatory body or single legal instrument for AI, the UK government is encouraging regulators and departments to tailor strategies for individual sectors, with the goal of maintaining support for innovation and adaptability. The white paper outlines five principles that regulators must consider to facilitate the safe and innovative use of AI: AI safety, transparency, fairness, accountability, and redress. The government has indicated that the door to legislation remains open should it be needed, and also recognizes the complementary role played by standards.

United States

US AI law is mostly being made at the state level — to date, there is no comprehensive federal law in place. California and Illinois have passed laws focused on data privacy and the use of AI.

Federal action is starting to take shape, however. In December 2022, the White House's Office of Science and Technology Policy released a blueprint for an AI Bill of Rights to define principles for the development and deployment of AI in the US. This document will guide future federal AI-related policy and could help to address some of the key challenges associated with AI development and deployment.

The US is also moving through the National Institute of Standards and Technology to create technical standards, like the AI Risk Management Framework.

The American government has also been proactive in working at the international level to secure commitments on AI standards and rules. For example, at the end of May 2023, the US and EU announced their intention to draft an AI code of conduct. In June 2023, the UK and US jointly announced the Atlantic Declaration, a broad bilateral agreement that includes (as part of a long list) a commitment to closer cooperation on AI regulation, ethical standards, and sharing best practices.

The White House also announced at the end of July 2023 that it had secured commitments on AI safety from significant private sector actors, including Alphabet, Amazon, Meta and Microsoft.

Lessons for Canada

The approach that Canada is taking, embodied in the Artificial Intelligence and Data Act, most closely resembles that of the European Union: both create, in legislation, categories of AI technologies to be regulated. Canada is also participating in several international AI governance fora, including the G7's Hiroshima Process.

Without the market size and international leadership weight of (especially) the US or the
European Union, Canada should take care to ensure that its eventual governance model does
not stray too far from the emerging global norm – an outlier policy mix in Canada would
drastically harm the efforts of Canadian-headquartered companies to scale globally and to
contribute to Canadian economic and productivity growth and innovation.

In creating a legal framework, CCI believes that Canada can succeed by adopting a strategy
of responsible AI leadership that leverages high trust, clear rules, fast action and global
leadership to pave the way for commercial success at home and abroad for Canadian
companies.
Responsible AI
Leadership
Principles
Canada has an opportunity to define itself
as a leader in responsible AI development
and deployment and export a flexible
approach that allows for innovation while
protecting citizen and user rights.

Put Trust First

Innovators in AI recognize that unlocking the economic potential of AI technology requires trust and buy-in from the public that their products and services are safe and produce fair results for end users.

Canada's AI framework should build trust and certainty for the public that products and services are safe and reliable, through a clear statement of user and citizen rights with regard to automated decision systems.

Build an institutional home for public interest technology expertise by creating an independent, public-facing Parliamentary Technology and Science Officer, as part of AIDA, to advise Parliament and Canadians about technological issues in the public interest, as a complement to the planned Artificial Intelligence and Data Commissioner housed within the executive.

Build trust for Canadians by including a preamble that enumerates AIDA's protections for citizens and users with regard to AI systems, such as protection from biased outputs or harms.

Clarity and Certainty

Innovators and investors are looking for rules that are clear and consistent, while recognizing that one size cannot fit all use cases, companies or end users.

The federal government's existing Directive on Automated Decision-Making, like the EU's AI Act, includes a tiered structure. This could be the basis for a private sector regulatory model, with a sliding scale of compliance and disclosure obligations based on self-assessed tier, with an accompanying mechanism for public complaints and audit. AI regulations should also allow for supervised 'sandboxes' for novel use cases. (A simple illustrative sketch of such a tiered model follows the recommendations below.)

Ensure that AI regulations, where regulations are the right approach, are sensitive to a range of uses and potential impacts, and potentially incorporate a tiered structure with corresponding rules and responsibilities for specific applications of AI.

Ensure that the rules and standards innovators must comply with are clear and easy to understand.

Allow for regulatory sandboxes or pilots for novel use cases.
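As an illustration only, the sketch below shows one way a self-assessed tier could map to a sliding scale of disclosure obligations, with a public-complaint hook that can trigger an audit. The tier names, obligations and thresholds are hypothetical and are not drawn from AIDA or the Directive.

```python
# Hypothetical sketch of a sliding-scale compliance model keyed to a
# self-assessed risk tier, with a public-complaint hook that can trigger
# an audit. Tier names, obligations and thresholds are illustrative only.
from dataclasses import dataclass, field

TIER_OBLIGATIONS = {
    "minimal":  ["plain-language notice that an automated system is in use"],
    "moderate": ["plain-language notice", "published system description"],
    "high":     ["plain-language notice", "published system description",
                 "impact assessment", "decision logs retained for audit"],
}

@dataclass
class AISystemRegistration:
    operator: str
    description: str
    self_assessed_tier: str                     # declared by the operator
    complaints: list[str] = field(default_factory=list)

    def obligations(self) -> list[str]:
        """Disclosure and compliance duties scale with the self-assessed tier."""
        return TIER_OBLIGATIONS[self.self_assessed_tier]

    def file_complaint(self, text: str) -> bool:
        """Record a public complaint; return True if an audit should be opened."""
        self.complaints.append(text)
        return self.self_assessed_tier == "high" or len(self.complaints) >= 3

if __name__ == "__main__":
    reg = AISystemRegistration("ExampleCo", "resume screening tool", "high")
    print(reg.obligations())
    print("audit triggered:", reg.file_complaint("possible biased outcome"))
```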

Develop at Speed

As major peer and competitor countries accelerate their efforts to build policy frameworks for the development and use of AI, Canada should move to be among the first countries with a durable and trusted framework, to establish a global brand as a leader in responsible AI.

Move up the schedule for regulatory development and implementation if AIDA passes (12 months after Royal Assent) and aim for a 'minimum viable product' approach that allows for flexibility and iteration.

Allow for and prioritize the development of standards for AI governance wherever such an approach would represent an improvement in speed over regulation while adequately protecting citizen and user rights.

Prioritize the creation or adoption of governance measures around higher-impact AI use cases or technologies first, and provide for an enforcement 'on-ramp' that is sensitive to industry learning curves.

Gear for Export

If Canada promotes rights for the public and certainty for innovators, and is among the first countries to publicize a clear and replicable set of rules, responsible AI leadership will be a competitive advantage for Canada and for Canadian companies. Just as GDPR increased consumer confidence and created a global gold standard for privacy, Canada's AI rules should aim to be the world's most copied.

Ensure that Canadian companies have simple means to get regulatory recognition in other markets by continuing to shape international policy direction and standards-setting through forums like the Global Partnership on Artificial Intelligence, the OECD AI Principles, and the G7 Hiroshima Process.

Ensure that our regulatory framework is clear and useful 'out of the box' to inspire other countries to use it.

Acknowledgements
This report was created in consultation and collaboration with CEOs and commercialization experts from Canada's AI ecosystem. We thank them for participating in roundtable discussions and interviews that have provided the necessary details to develop a credible roadmap for responsible AI in Canada.

Tara Dattani, Director of Legal, Ada
Nicole Janssen, Co-Founder & Co-CEO, AltaML
Humera Malik, CEO, Canvass Analytics
Dr. Alexandra Greenhill, Founder & CEO, Careteam Technologies
Ronak Shah, Privacy and AI Product Counsel, Cohere
Laure Lalot, Director, Legal Compliance, Coveo
Nabeil Alazzam, CEO, Forma.ai
Amir Sayegh, AVP, Data Product Discovery, Geotab
Rebecca Wellum, Vice President, Compliance, Geotab
Julia Culpeper, Senior Program Manager, Innovation Asset Collective
Mike McLean, CEO, Innovation Asset Collective
Peggy Chooi, Strategic IP Specialist, Innovation Asset Collective
Geoff MacGillivray, CTO, Magnet Forensics
Ehsan Mirdamadi, Partner and CEO, NuBinary
Sina Sadeghian, Co-Founder & CTO, NuBinary
Adolfo Klassen, CEO, Paladin AI
Ian Paterson, CEO, Plurilock
Yvan Couture, President & CEO, Primal
Sam Loesche, Head of Policy and Public Affairs, Waabi
Mathieu Letendre, Legal Advisor, Workleap

About the Council of
Canadian Innovators

Established in 2015, the Council of Canadian Innovators represents and works with over 150 of Canada's fastest-growing technology companies. Our members are the CEOs, founders, and top senior executives behind some of Canada's most successful 'scale-up' companies. All our members are job and wealth creators, investors, philanthropists, and experts in their fields of healthtech, cleantech, fintech, cybersecurity, AI and digital transformation. Companies in our portfolio are market leaders in their verticals, commercialize their technologies in over 190 countries, and generate between $10M and $750M in annual recurring revenue. We advocate on their behalf for government strategies that increase their access to skilled talent, strategic capital, and new customers, as well as expanded freedom to operate for their global pursuits of scale.

Learn more about our members and our initiatives at www.canadianinnovators.org.

Laurent Carbonneau
Director of Policy & Research
lcarbonneau@canadianinnovators.org

Nicholas Schiavo
Director of Federal Affairs
nschaivo@canadianinnovators.org

Wenny Jin
Research Assistant
