
Aaron Bourdeaux
CST 300 Writing Lab
04 February 2023
Ethical Use of AI in the Hiring Process

Companies have begun using artificial intelligence algorithms to streamline the hiring

process with the aim of identifying talented people in a way that is faster, cheaper, and ultimately

more effective than older methods. These algorithms can be used to screen resumes, analyze

interviews, and collect personal data from social media in order to determine whether a job

candidate is a good fit for a role (Dattner et al., 2019). Ideally, these algorithms should benefit

both companies and prospective job candidates equally. However, AI bias, inaccuracy,

discrimination, and infringement of privacy are all realistic problems that may be

introduced when using AI in this way. Before plunging forward with the use of AI in the hiring

process, it is important to consider the ethical implications of doing so.

Issue

Over the last two decades, advancements in AI algorithms have enabled companies to

improve their approach to recruiting. These algorithms have automated several recruiting

processes which used to be costly and time-consuming, such as resume-filtering and talent

identification (Dennison, 2022). As such, companies are keen to continue to improve on these

algorithms. However, due to issues of data privacy, bias, and potential discrimination, job

candidates are wary of the possibility that they may be taken advantage of in this process

(Dennison, 2022).

As technology evolves at an accelerated rate, so too do the capabilities of AI. With these

growing capabilities, the use of AI in the hiring process will likely expand, becoming more

effective and reliable. For everyone involved in the hiring process to benefit from these

advancements, careful ethical consideration is necessary. Conversely, if companies move



forward with developing these AI solutions without properly considering the ethics of doing so,

the new norms of the job market may be characterized by discrimination and a general disrespect

for privacy.

Stakeholder Analysis

Stakeholder #1: Companies

Values: It is no secret that companies generally prioritize profit and sustainable growth.

As such, the values of companies include utility, efficiency, and responsibility, because each of

these values lends itself to a company's success.

Position: Companies that are making use of AI in their hiring processes are doing so to

save time, cut costs, and hire effective employees (Johar, 2022). When a company is hiring

for a new role, the tasks of searching for candidates, filtering resumes, conducting effective

interviews, and ultimately hiring a solid fit for the role can be difficult. With the amount of

exposure afforded by the internet, the number of candidates applying for a single role can be

staggering. This costly process can go on for months, eating up plenty of time and money for the

company. This process can also result in the new hire either quitting or otherwise being

ineffective in their role. All of these outcomes are less than ideal for the company, especially so

when the use of AI offers them a solution that can potentially mitigate all of these issues

(Dennison, 2022).

Claims: To support their position, the first stakeholder makes the claim of policy that AI

should be used to make the hiring process more effective, affordable, and efficient. They also

make the claim of fact that AI can be used to benefit both companies and candidates equally.

Stakeholder #2: Job Candidates



Values: On the opposing side of the argument, job candidates fear the consequences of

misused AI in the hiring process. This group values privacy, transparency, non-discrimination,

and fairness, and as such its members are wary of a perceived imbalance of power that would

disregard these values in the name of efficiency.

Position: There are several concerns that characterize this group’s position. For one,

effective AI relies on large amounts of data, which must be collected from a wide variety of

sources. These candidates perceive the use of AI to dig through their personal information as a

sacrifice in personal privacy. Furthermore, AI bias has the potential to introduce discrimination

into the hiring process without adequate liability. An AI algorithm may pick up on a prospective

candidate’s race or sexuality, and on that basis might determine a candidate’s level of talent. If a

human were discriminating on this basis it would be illegal, but because of the opaque nature of

the technology, such discrimination may go undetected (Dennison, 2022).

Claims: In order to support their position, this group makes the claim of fact that

companies cannot be trusted to adequately address these ethical concerns. They also make the

claim of cause that if AI continues to be used in the hiring process, then job candidates will be

subject to intrusions of privacy and discrimination.

Argument Question

Should AI continue to be used in the hiring process?

Stakeholder #1: Companies

The first group of stakeholders argues in favor of continuing to use AI in the hiring

process, arguing that everyone involved stands to benefit. The foundation of this argument rests

on the ethical framework of Utilitarianism. This framework as it is understood today was

organized and popularized by Jeremy Bentham. Bentham, who lived during the

late 18th and early 19th centuries, was driven by the idea that his theorizing and writing could

increase public happiness, and through this process he organized the system of ethical thought

that would come to be known as Utilitarianism (Scarre, 1996, p. 72). Through the ethical

framework of Utilitarianism, the moral rightness of a decision is determined by the overall

benefit, happiness, or utility produced by that decision (Scarre, 1996, p. 74).

Companies in favor of using AI in the hiring process make their argument through a

Utilitarian lens. They make the claim of fact that everyone involved in the hiring process will be

happier and benefit from greater utility due to the inclusion of AI. Automating the processing

and filtering of resumes frees up recruiters for other work, saving the company both time and

costs associated with labor (Johar, 2022). Supposing that the AI algorithm is well-configured,

this process also ends up being much more effective in determining a viable pool of candidates,

as the constraints of human error are avoided. As such, the time a company takes to reach

decisions on new applicants is greatly decreased, allowing candidates to hear back from

prospective employers promptly. In certain cases, candidates also benefit from resume-scanning

algorithms, which allow them to simply upload their resumes without having to manually

enter all of their relevant information (Johar, 2022). As this argument holds, the utility of both

companies and candidates is improved by saving everyone time. Another aspect of this side’s

argument contends that AI will improve talent identification, thereby improving the overall

satisfaction of both companies and new hires. AI can use resume data in conjunction with data

gathered from interviews to determine whether a candidate will be a genuinely effective fit for

the role (Dattner et al., 2019). This should mean that candidates will ultimately be happier in said

role than in a role that does not make adequate use of their skillset, all other factors being equal.

An individual who is happy and effective in their role is less likely to quit or be fired, meaning

that the company does not have to deal with the time and expense of a vacant position. As a

product of this process, utility is maximized for both company and candidate.

The use of AI in the hiring process clearly benefits companies. Improvements

to this process have saved companies time and money, and have freed up their employees to

work on other tasks. If the use of AI in the hiring process were suddenly banned, companies

would have to revert to older, manual methods of sorting through applicant information, which

would cost them time and money, ultimately hurting profits and growth.

Stakeholder #2: Job Candidates

The second group of stakeholders argues against the continued use of AI in the hiring

process. This argument is made through the ethical framework of Kantian Ethics, a framework

founded upon the work of the 18th-century German philosopher Immanuel Kant. The moral

rightness of a decision from a Kantian perspective depends on it being motivated by a good will

and, above all, a sense of duty, regardless of the outcome of the decision (Kim & Schönecker,

2022, p. 152). Within the framework of Kantian ethics, other important principles include

respect for autonomy and universalizability. Put simply, universalizability is the principle that an

action can only be right if it could hypothetically be taken by everyone in accordance with good

will.

Job candidates against the use of AI in the hiring process rely on Kantian ethics as the

foundation for their argument. Through a Kantian perspective, this group makes the claim of fact

that companies are not acting from good will when choosing to employ AI solutions in the hiring

process because their intentions are primarily self-interested, at the expense of ethical concerns

such as data privacy or the potential for discrimination. This stakeholder group argues that these

concerns should not be brushed aside so easily and goes into detail about both issues to support

their argument.

In regard to data privacy, this group points to the fact that AI is evolving at an accelerated

pace, and that its evolution is fueled by the accumulation of massive amounts of data (Kerry,

2020). Companies scrape candidate data from resumes, social media, interviews, and phone

calls, data that is then used to discern deeper insights about these candidates. This means that to apply

and interview at a company entails a massive sacrifice of data privacy for candidates. This

stakeholder group maintains that companies are not acting with respect to Kantian

universalizability in this case. Companies use their position of power to demand transparency

into candidates’ histories without reciprocating that transparency. Candidates have neither the

resources nor the opportunity to dig up all relevant information about the companies that are

using these technologies. In a sense, these companies are acting from hypocrisy, which is not in

alignment with Kantian duty or good will.

In terms of discrimination, this side of the argument points out that humans frequently

introduce their biases into the data sets with which they train AI algorithms, which can heavily

influence the decisions that these algorithms make (Manyika et al., 2019). With regard to the

hiring process, the introduction of AI bias can mean that discrimination is happening without

anyone to hold accountable. The second stakeholder argues that companies know that they

should not ask candidates about their race, sexual orientation, or disabilities, but take little

issue with the fact that AI can discern these features about candidates based on related data

(Dattner et al., 2019). Therefore, given that companies are aware of the issue but choose to

proceed anyway, they are not acting in accordance with their Kantian duty.
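The mechanism described above can be sketched in a few lines of code. The candidate records, feature names, and scoring rule below are invented purely for illustration; real hiring models are far more complex, but the failure mode is the same: a proxy feature correlated with past biased decisions receives a strong weight even though it says nothing about competence.

```python
# Toy illustration (not any real hiring system) of how bias in historical
# training data surfaces through a proxy feature. All data is invented.

# Past hiring decisions: the proxy feature "womens_chess_club" correlates
# with a protected attribute and with biased human rejections (label 0).
training_data = [
    ({"python": 1, "womens_chess_club": 1}, 0),  # rejected
    ({"python": 0, "womens_chess_club": 1}, 0),  # rejected
    ({"python": 1, "womens_chess_club": 0}, 1),  # hired
    ({"python": 1, "womens_chess_club": 0}, 1),  # hired
]

def feature_weights(data):
    """Naive score per feature: P(hired | feature present) - P(hired overall)."""
    overall = sum(label for _, label in data) / len(data)
    weights = {}
    features = {f for feats, _ in data for f in feats}
    for f in features:
        present = [label for feats, label in data if feats.get(f)]
        if present:
            weights[f] = sum(present) / len(present) - overall
    return weights

weights = feature_weights(training_data)
# The proxy feature receives a strongly negative weight even though it says
# nothing about job competence: the model has "learned" the human bias.
print(weights)
```

Because no human explicitly told the model to penalize the proxy feature, the resulting discrimination is easy to miss unless someone audits the learned weights.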

With the continued use of AI in the hiring process, candidates will be subject to further

invasion of their data privacy. Furthermore, certain individuals may be subject to discrimination

and may lose out on opportunities that would otherwise be available to them. Overall, the

technology causes an imbalance of power in favor of companies, a precedent which may have

as-yet-unknown consequences for candidates in the future. Conversely, if the use of AI in the

hiring process is halted, candidates will avoid these potential issues altogether.

Student Position

Personally, I believe that AI should continue to be used in the hiring process with a few

caveats. First and foremost, AI tools should be used by individuals who understand and account

for the shortcomings of such technology. AI should also be developed to avoid potential

discrimination.

While my position leans in favor of that of the first stakeholder, I agree and disagree with

elements from both sides of the argument. On the first side of the argument, it is obvious to me

that AI can be used to improve efficiency and affordability of the hiring process. I also agree that

there is strong potential for job candidates to benefit by saving time, better understanding their

personal strengths and weaknesses, and gaining insights into their career

goals (Dattner et al., 2019). However, I do not believe that companies and candidates will benefit

equally. Given that companies control the technology, AI in the hiring process will

always be used primarily to benefit these companies.

In relation to the other side of the argument, I agree that companies are primarily profit-

driven and that there are issues inherent within AI that need to be addressed. However, I disagree

with the claim that the use of AI is inherently bad because it makes use of personal data. The

technological world we live in today relies on the collection of personal data, and I

believe that that is unlikely to change. Many large companies already have massive amounts of

data compiled on anyone who uses the internet. I take little ethical issue with using Google for

personal decision-making, so it would be hypocritical to suddenly take issue with sharing a

fraction of this data through resumes and social media posts. Ultimately, I also disagree with the

idea that the potential ethical concerns raised by the second stakeholder mean that AI should not

be used in the hiring process.

To approach this issue ethically, I believe that companies should be allowed to continue

to use AI in the hiring process under a set of regulations. For one, they must develop AI that

meets certain ethical criteria to avoid discrimination and respect personal privacy. Second, these

companies must employ a specialist whose role involves interpreting AI conclusions as

suggestions rather than decisions. Lastly, these companies must disclose to candidates the full

extent to which AI is used in their hiring process, from start to end. Under these regulations, I

believe it is much more likely for AI in the hiring process to evolve in such a way that companies

and candidates can fully experience its benefits.



References

Dattner, B., Chamorro-Premuzic, T., Buchband, R., & Schettler, L. (2019, April 25). The legal

and ethical implications of using AI in hiring. Harvard Business Review. Retrieved

January 23, 2023, from https://hbr.org/2019/04/the-legal-and-ethical-implications-of-

using-ai-in-hiring

Dennison, K. (2022, June 27). Are AI recruitment tools ethical and efficient? The pros and cons

of ATS. Forbes. Retrieved February 6, 2023, from

https://www.forbes.com/sites/karadennison/2022/06/27/are-ai-recruitment-tools-ethical-

and-efficient-the-pros-and-cons-of-ats/?sh=5ebf55892e4f

Johar, V. (2022, June 10). AI in hiring: a tool for recruiters. Forbes. Retrieved January 30, 2023,

from https://www.forbes.com/sites/forbesbusinesscouncil/2022/06/10/artificial-

intelligence-in-hiring-a-tool-for-recruiters/?sh=7c6b9ac93200

Kerry, C. (2020, February 10). Protecting privacy in an AI-driven world. Brookings. Retrieved

February 11, 2023, from https://www.brookings.edu/research/protecting-privacy-in-an-ai-

driven-world/

Kim, K. & Schönecker, D. (2022). Kant and AI. Berlin, Boston: De Gruyter. Retrieved January

30, 2023, from https://doi.org/10.1515/9783110706611

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). What do we do about the biases in AI?

Harvard Business Review. Retrieved February 10, 2023, from

https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

Scarre, G. (1996). Utilitarianism. Taylor & Francis Group. Retrieved January 30, 2023, from

https://ebookcentral.proquest.com/lib/csumb/detail.action?docID=169174
