
TANAUAN CITY COLLEGE
Tanauan City, Batangas
COURSE: CPEN18 – CPE LAWS AND PROFESSIONAL PRACTICE
PROFESSOR: Engr. Rionel B. Caldo
CATEGORY: Case Study # 1
RATING:

Name: Santos, Mark Angel V. Year/Section: BSCpE – 3A


CPEN18 – CpE Laws and Professional Practices February 6, 2024

Machine Experiment No. 3


Ethical Dilemmas in AI Hiring Systems: Navigating Engineering, Social Justice, and Human
Rights Traditions

I. PROBLEM SPECIFICATION

Imagine a scenario where a multinational technology company is developing a cutting-edge artificial


intelligence (AI) system for automated hiring processes. The company claims that the AI system, based
on advanced algorithms, can select candidates more efficiently and objectively. However, reports
suggest that the system exhibits biases, disproportionately favoring certain demographics while
disadvantaging others. The biases seem to be perpetuating existing social inequalities rather than
promoting fairness in hiring practices.

In light of the learning outcomes related to engineering and social justice, human rights traditions, and
the application of social justice frames in engineering and philosophy, critically analyze and discuss
the following:

1. Identify the engineering and social justice implications:

• How do the biases in the AI hiring system align with the principles of social justice in the
context of employment and human rights?

Ans: The biases in the AI hiring system go against fairness and the principles of social
justice, especially in the context of employment and human rights. Social justice holds that
everyone should have a fair chance of getting a job, whatever their background or identity. If
this AI system favors some individuals over others, it is not fair: people might not get hired
simply because of their status or who they are, which violates the idea of equal opportunity.

• What ethical concerns arise from the potential discriminatory impact of the technology on
diverse groups?

Ans: Discrimination means treating people unfairly because of their status or who they are.
If the AI hiring system is biased, it could unfairly disadvantage certain groups of
people, such as women, people with disabilities, or members of minority groups.

This is an issue because it means these people could lose job opportunities simply because of
their gender or race, which is unfair and goes against the ethical principle of treating everyone
fairly and equally. It can also make it even harder for people who already face challenges to find
jobs. This unfairness needs to be corrected so that everyone gets a fair shot.

2. Evaluate the situation from the perspective of human rights traditions:

• How do the reported biases in the AI system relate to human rights traditions and principles?

Ans: The biases in the AI system go against human rights principles. Human rights hold that
everyone should be treated equally and given the same opportunities, no matter who they are. If
the AI system favors some groups over others when hiring, it breaks this rule by denying
everyone an equal chance. Discriminating on the basis of race or gender is a clear violation
of human rights because it prevents people from being treated fairly and from having the same job
opportunities.

• In what ways might the deployment of this technology infringe upon individuals' rights and
contribute to social injustices in the professional sphere?

Ans: The use of this biased technology can harm people's rights and worsen injustice in the
professional sphere in several ways. First, it perpetuates existing unfairness by making it harder for
some groups to get hired, denying them equal job opportunities. This not only treats
people unfairly but also reinforces divisions in society based on race or gender. It also
undermines meritocracy, the idea that people should be judged on their skills rather than their
identity. Letting biased algorithms decide who gets hired means the technology
is not fair or equal, which is essential for a just professional world.

3. Examine the role of social justice frames in engineering and philosophy:

• How can social justice frames be applied to assess and rectify the biases in the AI system?

Ans: Social justice frames help us assess and correct unfairness in AI by focusing on fairness and
equality. Engineers can examine the AI's rules and training data to find bias related to race,
gender, or socioeconomic status, and then adjust them so that everyone is treated fairly.
Including viewpoints and ideas from marginalized groups can also make the AI system better
and more fair.
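As a concrete illustration of "looking at the AI's rules to find unfairness," a minimal fairness audit can compare selection rates across demographic groups, for example using the four-fifths (80%) rule commonly used to flag adverse impact. The record layout and group labels below are hypothetical assumptions, not part of the case:

```python
# Hypothetical fairness audit: compare per-group hiring rates and apply the
# four-fifths (80%) rule. The "group"/"hired" fields are illustrative only.
from collections import defaultdict

def selection_rates(candidates):
    """Compute the hiring rate for each demographic group."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for c in candidates:
        total[c["group"]] += 1
        hired[c["group"]] += c["hired"]
    return {g: hired[g] / total[g] for g in total}

def disparate_impact(candidates):
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 is a common red flag for adverse impact."""
    rates = selection_rates(candidates)
    return min(rates.values()) / max(rates.values())

# Toy data: group A is hired at 3/4, group B at only 1/4.
sample = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 1}, {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
print(round(disparate_impact(sample), 2))  # 0.33 -> well below the 0.8 threshold
```

A ratio this far below 0.8 would signal that the system disadvantages one group and needs investigation, which is exactly the kind of check a social justice frame asks engineers to perform.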

• Discuss the ethical responsibilities of engineers and professionals in addressing social justice
concerns, particularly in the development and deployment of advanced technologies like AI.

Ans: Engineers and professionals have a responsibility to address social justice issues,
especially when developing and deploying advanced technologies like AI. They must anticipate any
unfairness or harmful effects their creations might have and try to prevent them before they happen.
Keeping fairness, equality, and respect for people at the forefront of their work is
essential. This could mean conducting careful audits,

consulting diverse stakeholders for their opinions, and making decisions transparent and
accountable. By prioritizing fairness in their work, engineers and professionals can help
make the world a fairer and better place for everyone.

4. Propose strategies for incorporating social justice in engineering practices:

• Suggest concrete steps that the company and its engineering team could take to rectify the
biases in the AI system, ensuring fair and equitable hiring practices.

Ans: To fix the biases in the AI system and ensure fair hiring, the company and its
engineering team can take several steps. First, they should audit the AI's algorithms
thoroughly to find where the bias comes from. Then, they can improve the data used to train
the AI so that it represents many different kinds of people and does not contain unfair
patterns. Regular testing and monitoring can help catch and correct biases as they appear. It is
also important to involve diverse stakeholders, such as ethicists, social scientists, and
members of marginalized groups, when building and reviewing the AI. This helps ensure it is fair
and inclusive. Finally, ongoing oversight and transparency about how decisions are
made are essential for people to trust the system.
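One of the steps above, improving the training data so it does not carry unfair patterns, can be sketched as a simple reweighing pass: each training record is weighted so that group membership becomes statistically independent of the hiring label before the model is retrained. This is only an illustrative sketch of one known preprocessing technique (reweighing), and the field names are assumptions:

```python
# Minimal sketch of data reweighing: weight each record by
# P(group) * P(label) / P(group, label), so every (group, label) cell
# contributes as if group and label were independent. Field names are assumed.
from collections import Counter

def reweigh(records):
    """Return one weight per record that balances groups across outcomes."""
    n = len(records)
    group_n = Counter(r["group"] for r in records)
    label_n = Counter(r["hired"] for r in records)
    cell_n = Counter((r["group"], r["hired"]) for r in records)
    return [
        (group_n[r["group"]] / n) * (label_n[r["hired"]] / n)
        / (cell_n[(r["group"], r["hired"])] / n)
        for r in records
    ]

# Toy data: group A was hired more often than group B in the historical data.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0},
    {"group": "B", "hired": 1}, {"group": "B", "hired": 0},
    {"group": "B", "hired": 0},
]
weights = reweigh(data)  # under-represented cells get weights above 1.0
```

Training on the weighted data reduces the model's incentive to learn the historical association between group and hiring outcome, which is one practical way to act on the audit findings.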

• Explore how a philosophical framework emphasizing social justice can guide engineers and
professionals in creating technologies that align with ethical and human rights standards.

Ans: A philosophical framework that focuses on social justice can guide engineers and
professionals when they create technologies that meet ethical and human rights standards. This
framework would give importance to principles such as fairness, equality, and protecting
individual rights during the design, development, and use of technologies. Engineers can use
philosophical ideas like Rawlsian justice, which stresses fairness in distributing social goods,
to help them make decisions. By carefully considering how their creations might affect society
through a philosophical perspective, engineers can ensure that their technologies promote
social justice and improve human well-being.

This case study encourages students to apply their understanding of engineering and social justice,
human rights traditions, and the role of social justice frames in the context of a real-world scenario,
emphasizing the ethical considerations and responsibilities of engineers in shaping a more just and
equitable society.

II. RESULTS AND DISCUSSION

The results of the case study reveal that the AI system developed by the multinational
technology company exhibits biases in its automated hiring processes. Even though it is
supposed to be efficient and objective, the system unfairly favors some groups and puts others at
a disadvantage. This perpetuates existing social inequalities instead of making hiring

fair for everyone. Reports suggest these biases come from the algorithms the system follows,
which may not have been trained well enough to recognize and correct biases in the data it learned from.

This scenario highlights the critical intersection of engineering and social justice. While
technological advancements such as AI systems can offer efficiency and automation, they must
also be developed with careful consideration of ethical implications and societal impacts. The
case shows how important it is to incorporate human rights principles when designing and
deploying technology, so that fairness and equality are always respected. It also underscores
that engineers and developers must actively find and fix biases in what they build, and
keep evaluating and improving AI systems to ensure everyone is treated fairly. Finally, applying
social justice frames in engineering and philosophy makes us consider how new technology
affects everyone and the standards we use to build it.

III. CONCLUSION

In conclusion, the case study reveals that the AI system designed for automated hiring
processes by the multinational technology company is not achieving its intended goal of
fairness. Instead, it's perpetuating biases and inequalities. This situation underscores the
importance of considering social justice and human rights principles in technological
development. Engineers and developers need to actively work to address biases in their
creations to ensure fair outcomes for everyone. Moving forward, it's crucial to continue
evaluating and refining AI systems to promote equality and fairness in hiring practices and
beyond. Ultimately, technology should be a tool for positive change, and it's up to us to make
sure it serves everyone fairly.
