Íñigo Vicente Hernández

Bias in AI – Inês Saavedra


Bias in AI is an ongoing topic: it is not hard to find examples, from sexism to racial bias, in which AI replicates harmful human behaviour.
The main questions are: Where does this bias come from? And how can we tackle this problem?
AI becomes biased in three principal ways: through data generation and collection, through human bias, and through algorithmic bias.
Some real-world examples include: racial bias against Black patients in a widely used US healthcare algorithm, gender bias against women in Amazon's hiring algorithm, and racial bias against Black defendants in the COMPAS court system algorithm.
Some of these biases can be tackled by:
- Collecting data from different countries and different perspectives.
- Adding more diversity not just in data but in the teams developing the AIs.
- Adjusting the datasets to the proper context.
- Minimizing the historical bias of the data collection.
- Promoting ethics and diversity, for example by offering courses in psychology and human development.
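One concrete way to "adjust the dataset", as suggested above, is to reweight training samples so that under-represented groups count as much as over-represented ones. The sketch below is a minimal, hypothetical illustration of inverse-frequency reweighting; the field name `group` and the weighting scheme are my own assumptions, not drawn from the examples mentioned in this text.

```python
from collections import Counter

def reweight_by_group(samples, group_key):
    """Assign inverse-frequency weights so each group contributes
    equally to training (a simple bias-mitigation sketch)."""
    counts = Counter(s[group_key] for s in samples)
    n_groups = len(counts)
    total = len(samples)
    # weight = total / (n_groups * group_count): groups end up balanced
    return [total / (n_groups * counts[s[group_key]]) for s in samples]

# Toy dataset skewed toward group "A" (hypothetical labels)
data = [{"group": "A"}] * 3 + [{"group": "B"}]
weights = reweight_by_group(data, "group")
# Majority-group samples receive lower weight than the minority sample,
# so each group's total weight is equal.
```

With three "A" samples and one "B" sample, each "A" sample gets weight 4/6 and the "B" sample gets weight 2, so both groups sum to the same total influence.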
Nevertheless, there are limitations to bias reduction: for example, the complexity and black-box nature of AI algorithms, or the ethical questions that arise when regulating sensitive topics, which have no definitive right answers.
I personally believe that addressing these biases is a complex problem, not just because of current biases but because of those to come. We as humans are biased by nature: we simplify everything around us so that we can process and understand the world. In making these simplifications, we introduce biases into the way we perceive the world, and these biases affect our daily lives, including AI.
On the other hand, I think there is hope: ethics experts can help us avoid these human biases. As for the mathematical biases, I think more research is needed to minimize them, but I am optimistic about these advances.