Creating a code of ethics is the first step in developing ethical AI. This code should
outline the values and principles that your AI system should follow. The code should
be created with relevant stakeholders, such as employees, customers, and industry
experts. This will ensure that the code reflects the values and needs of all parties
involved.
Ensuring that the data used to train your AI system is diverse and inclusive is crucial
to avoid perpetuating biases. Biased data can lead to discriminatory outcomes that
harm individuals or groups. Therefore, it is essential that the training data is
representative of different genders, races, ethnicities, and other diverse factors.
Communicate clearly with people (externally and internally) about what AI can do and
its challenges. It is possible to use AI for the wrong reasons, so organizations need to
figure out the proper purposes for using AI and how to stay within predefined ethical
boundaries. Everyone across the organization needs to understand what AI is, how it
can be used, and its ethical challenges.
2. Be transparent.
This is one of the biggest things I stress with every organization I work with. Every
organization needs to be open and honest (both internally and externally) about how
they use AI.
As much as possible, organizations need to make sure the data they're using is not
biased.
For instance, several widely used facial-image data sets have contained far more
white faces than non-white faces, so AI systems trained on them worked better on
white faces than on non-white ones.
Creating better data sets and better algorithms is not just an opportunity to use AI
ethically – it’s also a way to try to address some racial and gender biases in the world
on a larger scale.
4. Make it explainable.
When we use modern AI tools like deep learning, they can be “black boxes” where
humans don’t understand the decision-making processes within their algorithms.
Companies feed them data, the AIs learn from that data, and then they produce a
decision, but often no one can fully explain how that decision was reached.
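One common way to peek inside such a black box is a model-agnostic technique like permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below is illustrative only, with a stand-in "model" and made-up data, not a specific tool from the article:

```python
import random

def permutation_importance(model, X, y, n_features, seed=0):
    """Score each feature by the accuracy drop when its column is shuffled.

    A large drop means the model relied heavily on that feature; a drop
    near zero means the feature barely influenced its decisions.
    """
    rng = random.Random(seed)
    base = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for j in range(n_features):
        shuffled_col = [row[j] for row in X]
        rng.shuffle(shuffled_col)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, shuffled_col)]
        perm = sum(model(row) == label for row, label in zip(X_perm, y)) / len(y)
        drops.append(base - perm)
    return drops

# Stand-in "black box": in reality it only looks at feature 0
model = lambda row: int(row[0] > 0)
X = [[1, 5], [-1, 5], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
```

Because the toy model ignores feature 1, shuffling that column leaves accuracy unchanged, which is exactly the kind of insight that helps humans understand what an opaque model is actually using.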
5. Make it inclusive.
At the moment, far too many of the people working on AI are white men. We need to
ensure the people building the AI systems of the future are as diverse as our world.
There is some progress in bringing more women and people of color (POC) into the
field so that the AI being built truly represents our society as a whole, but it has
to go much further.
Solution
For the situation above, the best solution is to include training and testing data that is
broad and diverse enough to cover all use cases. More specifically, in the linked
example above, images of Black women (as well as people of other ethnicities) should
have been incorporated into the AI's training and testing data. This way, the system
could better recognize people of various colors, races, and ethnicities.
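A practical first step toward this kind of coverage is auditing the demographic distribution of a training set before training. A minimal sketch, assuming per-example demographic annotations exist; the group names and 10% threshold here are illustrative assumptions, not values from the article:

```python
from collections import Counter

def audit_group_balance(labels, min_share=0.10):
    """Report each demographic group's share of the dataset and flag
    groups that fall below a minimum-representation threshold."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "underrepresented": share < min_share,
        }
    return report

# Hypothetical per-image demographic annotations
annotations = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(audit_group_balance(annotations))
```

Flagged groups would then be targets for additional data collection before the model is trained or retested.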
While some factors may always be outside your sphere of influence, you will always
have control over your own behavior. You can control what data you give an AI system,
and you can implement safeguards such as company-wide training that supports the best
possible data privacy and protection. Companies should also ensure that any third-
party providers they work with take these protective measures seriously.
Solution
You must develop robust protocols to ensure data protection and
user security. This could include the appointment of privacy officers, ongoing privacy
impact assessments, and more thorough product planning during initial development.
In addition, employees should be trained to effectively protect data within systems
while adhering to the strictest data privacy regulations.
Finally, you can implement anonymization techniques and data encryption to ensure
that personal data in AI models is always kept confidential and secure. For example,
you can apply modification techniques such as encryption and word or character
substitution to guard data. This may sound like a slight shift, but it can have a
tremendous impact.
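The substitution idea can be as simple as replacing direct identifiers with salted-hash tokens before data ever reaches a model; for stronger guarantees you would layer real encryption from a vetted library on top. A minimal pseudonymization sketch, where the field names and salt are illustrative assumptions:

```python
import hashlib

def pseudonymize(record, sensitive_fields, salt="rotate-me"):
    """Replace sensitive field values with salted-hash tokens.

    Records stay linkable (the same input yields the same token) without
    exposing the original identifying values to the AI pipeline.
    """
    safe = dict(record)
    for field in sensitive_fields:
        if field in safe:
            digest = hashlib.sha256((salt + str(safe[field])).encode()).hexdigest()
            safe[field] = f"anon_{digest[:12]}"
    return safe

user = {"name": "Jane Doe", "email": "jane@example.com", "age": 34}
print(pseudonymize(user, sensitive_fields=["name", "email"]))
```

Note that this is pseudonymization rather than full anonymization: keeping the tokens deterministic preserves the ability to join records, so the salt itself must be protected and rotated as part of the data-protection protocols described above.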