In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, which was later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure.

Concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling rapid adaptation of malware to adjust to restrictions imposed by countermeasures and security controls. This makes the need to secure AI itself even more pressing; if we rely on machine learning algorithms to detect and respond to cyberattacks, it is all the more important that those algorithms be protected from interference, compromise, or misuse. Increasing dependence on AI for critical functions and services will not only create greater incentives for attackers to target those algorithms, but also the potential for each successful attack to have more severe consequences.