
INSTITUTO UNIVERSITARIO DE TECNOLOGÍA
DE ADMINISTRACIÓN INDUSTRIAL (IUTA)

PROGRAM: COMPUTER SCIENCE
SECTION: 282A3
SUBJECT: BASIC ENGLISH

Professor: Henry Abraham
Student: Ever Rodríguez, ID V-27.903.202

Caracas, April 2021


HOW TO IMPROVE CYBERSECURITY FOR ARTIFICIAL INTELLIGENCE

Editor's Note: This report from The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative is part of "AI Governance," a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, which were later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure.

Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks. Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling rapid adaptation of malware to adjust to restrictions imposed by countermeasures and security controls.
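To make the defensive use of machine learning concrete, the sketch below trains a simple unsupervised anomaly detector on network-flow features and flags traffic that departs from the learned baseline. This is a minimal illustration of the idea, not an example from the report: the feature set, values, and contamination rate are all assumptions chosen for demonstration.

    # Minimal sketch of ML-based attack detection (illustrative assumptions only).
    # Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Synthetic "normal" traffic: [bytes sent, duration in seconds, packet count].
    normal_flows = rng.normal(loc=[500.0, 1.0, 10.0],
                              scale=[100.0, 0.2, 2.0],
                              size=(1000, 3))

    # Fit the detector on traffic assumed to be benign.
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

    # Score new flows; a burst-like flow should stand out from the baseline.
    new_flows = np.array([
        [480.0, 1.1, 9.0],      # close to the baseline: should look normal
        [50000.0, 0.1, 900.0],  # huge burst in a short window: should be flagged
    ])
    print(detector.predict(new_flows))  # 1 = normal, -1 = flagged as anomalous

In a real deployment, a detector like this becomes an attack surface of its own, which is exactly the concern the next paragraph raises.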
A related but distinct set of issues deals with the question of how AI systems can themselves be secured, not just how they can be used to augment the security of our data and computer networks. The push to implement AI security solutions to respond to rapidly evolving threats makes the need to secure AI itself even more pressing: if we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences.

SECURING AI DECISION-MAKING SYSTEMS

One of the major security risks to AI systems is the potential for adversaries to compromise the integrity of their decision-making processes so that they do not make choices in the manner that their designers would expect or desire. One way to achieve this would be for adversaries to directly take control of an AI system so that they can decide what outputs the system generates and what decisions it makes. Alternatively, an attacker might try to influence those decisions more subtly and indirectly by delivering malicious inputs or training data to an AI model.
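As a concrete illustration of the "malicious training data" path, the sketch below simulates a label-flipping poisoning attack: an attacker who can corrupt a fraction of the training labels degrades the decisions the deployed model makes, without touching the model itself. The dataset, model, and poisoning rate are assumptions chosen for demonstration, not an example from the report.

    # Minimal sketch of label-flipping data poisoning (illustrative assumptions only).
    # Requires numpy and scikit-learn.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    # Baseline model trained on clean labels.
    clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # The attacker silently flips 30% of the training labels.
    rng = np.random.default_rng(0)
    flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
    y_poisoned = y_tr.copy()
    y_poisoned[flip] = 1 - y_poisoned[flip]
    poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

    # The poisoned model's decisions drift from the designer's intent;
    # its test accuracy typically drops even though nothing visibly "broke".
    print("clean accuracy:   ", clean_model.score(X_te, y_te))
    print("poisoned accuracy:", poisoned_model.score(X_te, y_te))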

POLICY PROPOSALS FOR AI SECURITY

In the past four years, there has been a rapid acceleration of government interest and policy proposals regarding artificial intelligence and security, with 27 governments publishing official AI plans or initiatives by 2019.