Neural Network

We learned that we can use our data to train the model, building a network
of neurons that can successfully carry out a task.

The data that is fed to the system can carry human bias, as mentioned in
Vaibhav's article. A few things I can think of that could help eliminate it
are:

1. Collecting and using diverse training data that is representative of the
population the model will be applied to; this will not work, however, when
only a smaller dataset is available.

2. Using fair and transparent algorithms that are less prone to bias.

3. Auditing and monitoring the model using fairness metrics to measure the
model's performance on different demographic groups, and making
adjustments as necessary.

4. Using explainable AI methods to understand how the model is making its
decisions, which makes it possible to detect whether the model is relying on
sensitive features.
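To make point 3 concrete, here is a minimal sketch of auditing a model with one common fairness metric, the demographic parity difference: the largest gap in positive-prediction rates across demographic groups. The group names and prediction values below are made up purely for illustration.

```python
def positive_rate(predictions):
    """Fraction of predictions that are positive (== 1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate across groups.

    0.0 means every group receives positive predictions at the
    same rate; larger values indicate a bigger disparity.
    """
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs, split by a sensitive attribute:
preds = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],  # 5/8 positive
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],  # 2/8 positive
}

gap = demographic_parity_difference(preds)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

If the gap exceeds some threshold the team has agreed on, that would be the signal to make the adjustments mentioned above, such as rebalancing the training data or reweighting examples.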

That said, it is important to note that there is no one-size-fits-all
solution to bias in machine learning models, and a combination of different
strategies may be needed to ensure fairness and mitigate bias in the final
model.

What are your thoughts on how we can remove such bias?
