
It has been argued that all the necessary ethical measures and legislation on artificial intelligence and robotic technology have already been adopted, and that additional legislation would only hinder their development. I do not agree with this statement, for the reasons set out below.

It is true that certain guidelines have already been issued, but that does not mean they are sufficient. Indeed, the European Commission has published ethics guidelines¹ that aim to preserve respect for human dignity, democracy, equality, the rule of law and human rights when artificial intelligence is developed or applied.

Broadly speaking, these ethics guidelines² propose human oversight, together with systems that are robust and resilient to possible manipulation attempts and backed by adequate contingency plans. They also seek to guarantee the privacy of citizens' data, which is a fundamental legal principle.

Furthermore, they state that it must be guaranteed that the algorithms used carry no direct or indirect discriminatory biases, and that the social and environmental impact they generate must be taken into account. They also propose that both applied artificial intelligence and its results be accountable to external and internal auditors.

¹ High-Level Expert Group on Artificial Intelligence, ‘Draft Ethics Guidelines for Trustworthy AI’ (2018).
² ibid.
