
Link for 1: 10 Steps to More Ethical Artificial Intelligence (inclusioncloud.com)

Artificial Intelligence has become increasingly popular across industries, from manufacturing to healthcare and finance. At the same time, there has been growing concern about the ethical implications as AI plays an increasingly decisive role in people's daily lives.

1. Develop a code of ethics

Creating a code of ethics is the first step in developing ethical AI. This code should outline the values and principles that your AI system should follow, and it should be created in collaboration with relevant stakeholders, such as employees, customers, and industry experts. This will ensure that the code reflects the values and needs of all parties involved.

2. Ensure diversity and inclusion

Ensuring that the data used to train your AI system is diverse and inclusive is crucial to avoid perpetuating biases. Biased training data can lead to discriminatory outcomes that harm individuals or groups. Therefore, it is essential to ensure that the data used is representative of different genders, races, ethnicities, and other diverse factors.
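
As a concrete illustration, one simple check is to audit how each demographic group is represented in the training data. Below is a minimal Python sketch; the file name, the "ethnicity" column, and the 5% threshold are all hypothetical placeholders, not part of the original article.

import pandas as pd

# Hypothetical training set with a demographic annotation per record.
df = pd.read_csv("training_data.csv")  # assumed to have an "ethnicity" column

# Share of each group in the training data.
group_share = df["ethnicity"].value_counts(normalize=True)
print(group_share)

# Flag groups that fall below an assumed 5% representation floor.
underrepresented = group_share[group_share < 0.05]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))

A check like this catches only headline imbalances; genuinely inclusive data also requires reviewing how the data was collected and labelled.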

3. Monitor the AI system

Regular monitoring of the AI system is essential to ensure that it is performing as intended and not causing harm. This includes regular testing, auditing, and analysis of the system. Monitoring also involves identifying and addressing any errors or issues that may arise. This will help ensure that the AI system continues to function ethically.
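
To make the monitoring step concrete, here is a minimal Python sketch of a rolling-accuracy monitor for a deployed model. The window size and accuracy floor are assumptions for illustration; a real system would also track fairness metrics and input drift.

from collections import deque

class AccuracyMonitor:
    """Tracks the rolling accuracy of a deployed model and flags degradation."""

    def __init__(self, window=1000, min_accuracy=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy      # assumed acceptable floor

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def healthy(self):
        if not self.outcomes:
            return True  # no labelled feedback yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        if accuracy < self.min_accuracy:
            print(f"ALERT: rolling accuracy {accuracy:.2%} is below the floor")
            return False
        return True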

Link for 2: How Do We Use Artificial Intelligence Ethically? (forbes.com)

1. Start with education and awareness about AI.

Communicate clearly with people (externally and internally) about what AI can do and
its challenges. It is possible to use AI for the wrong reasons, so organizations need to
figure out the proper purposes for using AI and how to stay within predefined ethical
boundaries. Everyone across the organization needs to understand what AI is, how it
can be used, and its ethical challenges.

2. Be transparent.

This is one of the biggest things I stress with every organization I work with. Every
organization needs to be open and honest (both internally and externally) about how
they use AI.

3. Control for bias.

As much as possible, organizations need to make sure the data they're using is not
biased.
For instance, several widely used facial-image datasets have contained far more white faces than non-white faces, so AIs trained on that data worked better on white faces than on non-white ones.

Creating better data sets and better algorithms is not just an opportunity to use AI
ethically – it’s also a way to try to address some racial and gender biases in the world
on a larger scale.
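
One way to surface this kind of disparity in practice is to score the model separately for each group and compare. The sketch below assumes a hypothetical evaluation file with y_true, y_pred, and group columns; large accuracy gaps between groups are a warning sign.

import pandas as pd

# Hypothetical evaluation results: true labels, predictions, demographic group.
results = pd.read_csv("eval_results.csv")  # columns: y_true, y_pred, group

# Accuracy per group; a large spread suggests the model is biased.
per_group = results.groupby("group").apply(
    lambda g: (g["y_true"] == g["y_pred"]).mean()
)
print(per_group)
print("Accuracy gap between best and worst group:", per_group.max() - per_group.min())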

4. Make it explainable.

Can your artificial intelligence algorithms be explained?

When we use modern AI tools like deep learning, they can be “black boxes” where
humans don’t understand the decision-making processes within their algorithms.
Companies feed them data, the AIs learn from that data, and then they make a
decision.
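
While deep models remain hard to open up, model-agnostic tools can at least show which inputs drive a decision. As a hedged sketch, here is scikit-learn's permutation importance applied to a synthetic classifier; the data and model are stand-ins for illustration, not a recipe for explaining any specific production system.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much shuffling each feature hurts the score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")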

5. Make it inclusive.

At the moment, we have far too many male, white people working on AI. We need to
ensure the people building the AI systems of the future are as diverse as our world.
There is some progress in bringing in more women and people of color (POC) to
make sure the AI you’re building truly represents our society as a whole, but that has
to go far further.

Link for 3: 6 Ethical Considerations of Artificial Intelligence | Upwork

If an AI algorithm has been trained to recognize faces using primarily pictures of white people, it may be more likely to identify a white person than a person of a different race. This can create discrimination and racial bias against other groups of people. For example, a black reporter in New York City recently discovered that an AI-powered facial recognition system couldn't detect her face.

Solution

For the situation above, the best solution is to include training and testing data that is broad and diverse enough to cover all use cases. More specifically, in the example above, images of black women (as well as other ethnicities) should have been incorporated into the AI's training and testing data. This way, the system could better recognize people of various colors, races, and ethnicities.
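
One practical way to guarantee that every group appears in both the training and the testing data is a stratified split. The sketch below is illustrative: the file and column names are assumptions, and stratification only helps once the groups are present in the data at all.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical face dataset with a demographic annotation per image.
data = pd.read_csv("faces.csv")  # columns: image_path, label, ethnicity

# Stratifying on the demographic column keeps every group represented
# proportionally in both splits.
train_df, test_df = train_test_split(
    data, test_size=0.2, stratify=data["ethnicity"], random_state=0
)
print(train_df["ethnicity"].value_counts(normalize=True))
print(test_df["ethnicity"].value_counts(normalize=True))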

In April 2023, a group of engineers at Samsung accidentally leaked sensitive company information to ChatGPT. Their goal was to improve their source code, but now OpenAI has access to all of the data they shared. As a result, Samsung has put stricter policies in place about the use of AI and is working on its own AI systems for future employee use.

While some factors may always be outside your sphere of influence, you will always
have control over your behavior. You can control what data you give AI while
implementing safeguards such as training across your company to support the best
possible data privacy and protection. Companies should also ensure that any third-party providers they work with take these protective measures seriously.
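
As one concrete safeguard of the kind the Samsung incident motivates, prompts can be screened for obviously sensitive patterns before they ever leave the company. The Python sketch below is a minimal illustration; the patterns are examples only and nowhere near exhaustive.

import re

# Illustrative patterns only; a real deployment needs a far broader set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"(?:api|secret)[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace anything matching a sensitive pattern before it leaves the company."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text

prompt = "Please review: api_key = sk-12345, contact jane.doe@example.com"
print(redact(prompt))  # now safer to send to an external AI service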

Solution

You must develop robust data protection protocols to ensure user security. This could include appointing privacy officers, conducting ongoing privacy impact assessments, and planning products more thoroughly during initial development. In addition, employees should be trained to protect data effectively within systems while adhering to the strictest data privacy regulations.

Finally, you can implement anonymization techniques and data encryption to ensure that personal data in AI models is always kept confidential and secure. For example, you can apply modification techniques such as encryption and word or character substitution to guard data. This may sound like a slight shift, but it can have a tremendous impact.
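
The paragraph above can be made concrete with a short sketch combining the two techniques it names: symmetric encryption (here via the cryptography package) for free text, and salted hashing as a simple substitution for direct identifiers. Field names and the salt handling are assumptions for illustration.

import hashlib
from cryptography.fernet import Fernet

# Symmetric key; in practice it would live in a key management service,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"name": "Jane Doe", "notes": "customer reported a billing issue"}

# Encrypt free-text fields so they stay confidential at rest.
encrypted_notes = fernet.encrypt(record["notes"].encode())

# Pseudonymize the identifier with a salted hash (a form of substitution).
SALT = b"example-salt"  # assumed to be stored separately in practice
pseudonym = hashlib.sha256(SALT + record["name"].encode()).hexdigest()[:12]

safe_record = {"name": pseudonym, "notes": encrypted_notes}
print(safe_record)

# Only systems holding the key can recover the original text.
print(fernet.decrypt(encrypted_notes).decode())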

What is the goal of this task?

Understand the ethical considerations of AI systems.

Science and technology

Prompt based on text

Help me gain knowledge about the ethical considerations of AI systems.

Model response

Artificial intelligence has become an integral part of many industries. As AI grows, ethical concerns about AI also increase. To use and develop AI ethically, we must create a code of ethics that sets out the values and principles to be followed while using and developing an AI system ((1)). Diverse and comprehensive data should be used to train AI in order to avoid biased results, i.e., results that harm specific individuals or communities ((1)). AI systems should be inclusive, as the future is diverse ((2)). The AI system should be regularly maintained and monitored, with any bugs and errors that arise fixed promptly ((1)).
AI can be used for harmful purposes, so establishing ethical boundaries is essential. This can be done by educating people and spreading awareness about the proper use of AI ((2)). Organizations must be transparent and open about how they use their AI systems ((2)). AI systems should be explainable, so that people can understand how they reach their decisions ((2)).
AI trained mostly on images of white people, rather than other races, creates racial bias. Broad and diverse testing should therefore be done during AI system training ((3)). AI-related errors can also be minimized by involving humans during AI training ((3)). Strict policies are needed regarding the privacy of sensitive information; for example, privacy officers can be appointed to help protect data and revise data privacy regulations ((3)). Data encryption can also be used to ensure confidentiality ((3)).
