
PART III: ROGERIAN ARGUMENT

An Intensive Essay Presenting Common Ground and a Potential Solution

For Discrimination in AI

Ethical Guidelines vs. Increasing Diversity

Kaylee Lynn Sweet

Barr: Dual Enrollment English Composition II – Period 3

Thursday, April 8, 2021


“The effectiveness of guidelines or ethical codes is almost zero and they do not

change the behavior of professionals from the tech community” (Hagendorff, 2020)
PART III: ROGERIAN ARGUMENT

An Intensive Essay Presenting Common Ground and a Potential Solution

For Discrimination in AI

I. Preface: Topic + Subtitle Overview and Cover Photo/Collage

II. Key Quotation Page

III. Introduction and Thesis


A. Cursory Topic Information: Presenting The Problem
B. Laying the Foundation For Discussion
C. Interwoven Quotations
D. Minimum of Three Lenses / Perspectives / Viewpoints
E. Thesis Statement

IV. Presenting The Problem

A. Stakeholder I – Viewpoints
B. Stakeholder II – Viewpoints
C. Interpretations / Interwoven Perspectives / Deeper Meanings

V. Various Approaches Taken Toward Solving The Problem


A. Innovative Viewpoint I Toward The Problem
1. Text Evidence Must Be Referenced (Works Cited & Bibliography)
2. Interpretations / Interwoven Perspectives / Deeper Meanings
B. Innovative Viewpoint II Toward The Problem
1. Text Evidence Must Be Referenced (Works Cited & Bibliography)
2. Interpretations / Interwoven Perspectives / Deeper Meanings

C. Innovative Viewpoint III Toward The Problem


1. Text Evidence Must Be Referenced (Works Cited & Bibliography)
2. Interpretations / Interwoven Perspectives / Deeper Meanings

VI. Elaborations Upon The Value of Opposing Positions: Collective Personal Reflections of Viewpoints I + II + III and Common Ground Rationales
A. What conclusions may be drawn from reading and applying the material?
B. Text Evidence (Any Combination of Sources)
C. Recap Multiple Lenses / Perspectives
D. Interpretations / Interwoven Perspectives / Deeper Meanings

VII. Closing Applications of Rogerian Argument To Today’s Society and Conclusions
A. Topic Rephrasing
B. Common Ground / Modern Applications
C. Common Ground Collective Ideas Revisited
D. Final Text Evidence (Any Combination of Sources)
E. Final Interpretations / Interwoven Perspectives / Deeper Meanings
PART III: ROGERIAN ARGUMENT

An Intensive Essay Presenting Common Ground and a Potential Solution

For Discrimination in AI

Artificial Intelligence (AI) programs can be applied to a vast set of industries,

increasing efficiency and accuracy of many tasks. While AI has many promising benefits

and applications, AI programs have been found to discriminate against users or

individuals when making decisions. It seems counterintuitive that the programs that are

meant to increase efficiency can be biased, but the biases can stem from several

stages of development, including the data, the algorithm, and the developer. With AI being

implemented into complex industries, the consequences of using biased, discriminating

algorithms increase. For example, AI is being used in the judicial system as a prediction

technique that predicts future criminal activity, and court decisions are made with that in

mind. Of course, solutions to discrimination have been proposed. On the technical side,

there are already bias mitigation softwares (often build-in to the AI) used to spot bias in

decisions made by the AI program. Increasing explainability and transparency would

allow for more opportunities to point out flaws that challenge an AI program’s ethicality

(Rossi, 2019). However, despite such solutions having been proposed, discrimination is

still a prominent issue in AI. At this point, it is crucial to evaluate the effectiveness of

such solutions and to act on those findings. Two of the most commonly implemented

solutions are creating ethical guidelines and increasing diversity in the AI community.
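To illustrate what one of these built-in bias checks might look like, the short sketch below compares how often a decision-making program gives a favorable outcome to each demographic group. It is a minimal illustration, not code from any actual bias mitigation tool; the function names, group labels, and sample decisions are all hypothetical.

```python
# A minimal sketch of one common bias check: comparing the rate of favorable
# decisions an AI system gives to each demographic group. The group labels and
# decisions below are hypothetical placeholders, not data from any real system.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the fraction of favorable (1) decisions for each group."""
    favorable = defaultdict(int)
    total = defaultdict(int)
    for decision, group in zip(decisions, groups):
        total[group] += 1
        favorable[group] += decision
    return {g: favorable[g] / total[g] for g in total}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical outputs from a decision-making model (1 = favorable outcome).
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions, groups))  # 0.5, a large gap worth investigating
```

In this invented example, a gap that large between groups would be a signal that the program’s decisions deserve closer review before deployment.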
Many companies create ethical guidelines in response to concerns about the

ethical soundness of the AI technologies they use or release. These ethical guidelines give general rules for how developers and other employees of the company should build and deploy its AI programs. With discrimination in AI becoming a

prominent issue, AI4People under the Atomium - European Institute for Science, Media,

and Democracy has stepped up to the plate. Started in June of 2017, the organization aims to create a common public space for laying out the founding principles, policies,

and practices on which to build a good AI society. As a result of AI4People’s activity,

one of its key members, Prof. Luciano Floridi, presented a unified ethical framework for

AI: “AI4People’s Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.” AI4People takes a preventative approach to discrimination in AI; with the creation of a universally recognized, specific ethical framework, those creating AI under the outlined principles will create fair, non-discriminating technology. Furthermore, the organization has established seven AI4People committees, which will work toward creating common, internationally recognized, industry-specific

frameworks: Automotive, Banking & Finance, Energy, Healthcare, Insurance, Legal

Services Industry, and Media & Technology. Each of these subcommittees will provide

more industry-specific recommendations rather than the general rules that many

companies offer.

A second approach to discrimination in AI is increasing diversity in the AI

community. With more diverse AI teams that include a range of identities on all

spectrums, the technologies can be created with a broader societal context in mind and from various perspectives. The hope is that perspectives from groups who have been historically discriminated against will help point out ways in which a program might become biased, such as training on a predominantly white dataset or neglecting to test the algorithm on women. The follow-up question to this solution is how diversity can actually be increased. AI4ALL is an organization that works to increase diversity in AI by offering AI

courses to historically underrepresented groups. AI4ALL provides summer programs

through several universities known for their technology programs including Carnegie

Mellon University, Princeton University, the Georgia Institute of Technology, and more. The summer programs aim to serve groups historically excluded from AI: Indigenous peoples; Black, Hispanic or Latinx, Pacific Islander, and Southeast Asian people; trans and non-binary people; two-spirit people; cis women and girls; lesbian, gay, bisexual, and queer people; students who demonstrate financial need; and future first-generation college students. Clearly, AI4ALL targets all underrepresented groups and seeks to

broaden AI’s diversity in all spectrums of identity. According to AI4ALL’s website, the

organization has “impacted 12,300 people in all 50 states around the world through our

programs and our alumni outreach”. Educating underrepresented groups about AI and

inspiring them to pursue careers in AI is the first step to increasing diversity.

Both the preventative ethical approach and the increasing diversity approach are

valid means of reducing discrimination in AI, but the ethical approach is simply not as effective. The problem with ethical guidelines as a solution to AI discrimination is that many of them are ineffective: they do not go into enough detail to actually change how important concerns such as explainability and transparency are handled, and workers under those guidelines have reported that they do not
adhere to them strictly. More specifically, “The effectiveness of guidelines or ethical

codes is almost zero and they do not change the behavior of professionals from the

tech community” (Hagendorff, 2020). While guidelines are important to building ethical

AI programs, this alone will not be enough to combat the complex issue of

discrimination. The root causes of discrimination have been found to be in the

developer, the training data sets, and/or the algorithm. The fact that there are several causes of discrimination in AI is just one reason why this issue is such a daunting one; bias in AI is often a result of implicit bias, an automatic stereotype or prejudice that affects our opinions (Lin, 2020). Implicit bias is unintentional and hides under the norms of society, and it often makes its way into AI without the developers realizing it. Therefore,

even with the most detailed ethical guidelines, AI will reflect and further perpetuate the

implicit biases of society. However, this does not mean that developers and companies

that implement AI should stop striving to create ethical machines.

Increasing diversity is a more effective approach to reducing discrimination in AI

than the ethical guidelines approach. Between the examples of AI4People and AI4ALL, AI4ALL already shows promising results through the sheer reach of its programs. Although AI4People’s guidelines have been widely recognized, there is no

guarantee that any, let alone all, AI programs will be made with such guidelines in mind.

More diverse teams will be more likely to have an ethical mindset from the beginning, since doing so is in their own best interest, and the programs can undergo the critique of multiple perspectives. A common issue with AI is the failure to test programs on diverse groups of people, as in face recognition software. Face recognition applications were reported to have difficulty identifying non-Caucasian faces, and voice recognition systems could not recognize a woman’s voice as accurately as a man’s. If the programs were tested with a diverse set of consumers in mind, these issues could have been spotted and corrected. Even more efficiently, if the development team were itself diverse, accounting for such diversity would be the automatic choice. However, diversity itself is not a

catch-all solution. As mentioned above, implicit bias is a tricky problem and can arise even within diverse AI communities. Therefore, reducing discrimination in AI should be an optimized combination of the many solutions, including diversity and AI ethics. The diversity solution still has a limitation: there are not many ways to increase diversity itself besides AI4ALL’s approach of educating the historically underrepresented. Even then, the students and alumni of AI4ALL are not guaranteed to follow through with a career in AI, nor are they guaranteed to create ethical AI programs, as they could still be just as likely to discriminate against other groups. By pairing more robust, enforced ethical guidelines with the active hiring of diverse teams on AI projects, these two opposing solutions can work in tandem to reduce discrimination in AI with greater efficiency.
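To make the idea of testing on diverse groups more concrete, the sketch below reports a recognition model’s accuracy separately for each group, so that gaps like the ones described above become visible. The data, group names, and results are invented for illustration; a real evaluation would use an actual, demographically labeled test set.

```python
# A minimal sketch of per-group testing, using hypothetical labels and
# predictions rather than output from any real recognition system.
# Reporting accuracy per subgroup makes disparities visible.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return recognition accuracy for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data for a recognition model.
y_true = ["match", "match", "no match", "match", "match", "no match", "match", "match"]
y_pred = ["match", "match", "no match", "match", "no match", "no match", "no match", "match"]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'men': 1.0, 'women': 0.5} -> the model performs much worse for one group
```

A team that ran even this simple kind of check with a diverse test set in mind would catch the disparity before releasing the product.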

In conclusion, discrimination and bias are pressing issues in AI, especially with the expanding applications of AI. Several algorithms have been found to make biased decisions based on race, gender, and other factors without this being the intention of the developers. Bias in AI is a difficult problem to solve because it reflects the implicit biases of society, and there are multiple possible causes of the bias in each algorithm (the data, the algorithm, and the developer). As AI is implemented in situations with

increasingly higher stakes, it is important to analyze whether the solutions to

discrimination in AI are sufficient. On one hand, several developers and companies that use AI are proposing ethical guidelines to be incorporated into the development of the algorithms. These guidelines cover important topics that work toward eliminating bias, such as explainability, transparency, and bias mitigation. On the other hand, it has been shown that these ethical guidelines are not effective, and even after their implementation, biased outputs from these algorithms persist. Increasing diversity is a more effective solution to discrimination in AI, as the teams creating the programs will have a

broader societal context in mind, but even this approach has its limitations. Therefore,

AI communities should seek to create more robust ethical guidelines and actively hire

diverse teams to work on the programs. AI technologies can and will be applied to

almost every field, and will increase the efficiency and accuracy of many processes.

This fast pace of growth is why it is important to ensure that our approaches to reducing

discrimination in AI will be enough to keep up.


Works Cited

Cover Page Picture. https://redshift.autodesk.com/diversity-in-artificial-intelligence/. Accessed 8 Apr. 2021.

AI4ALL, ai-4-all.org/.

“AI4People - Atomium.” EISMD, www.eismd.eu/ai4people/.

“AI4People Summit: Towards the 7 AI Global Frameworks.” AI4People 2020

Summit, 27 Oct. 2020, ai4people.eu/.

Floridi, Luciano, et al. “AI4People—An Ethical Framework for a Good AI Society:

Opportunities, Risks, Principles, and Recommendations.” Minds and Machines,

vol. 28, no. 4, 2018, pp. 689–707, doi:10.1007/s11023-018-9482-5.

Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds

& Machines, vol. 30, no. 1, Mar. 2020, pp. 99–120. EBSCOhost,

doi:10.1007/s11023-020-09517-8.

Lin, Ying-Tung, et al. “Engineering Equity: How AI Can Help Reduce the Harm of

Implicit Bias.” Philosophy & Technology, July 2020, pp. 1–26. EBSCOhost,

doi:10.1007/s13347-020-00406-7.

Rossi, Francesca. “Building Trust in Artificial Intelligence.” Journal of

International Affairs, vol. 72, no. 1, Fall/Winter2019 2019, pp. 127–133.

EBSCOhost, search.ebscohost.com/login.aspx?

direct=true&db=a9h&AN=134748798&site=ehost-live&scope=site.
BIBLIOGRAPHY

Cover Page Picture. https://redshift.autodesk.com/diversity-in-artificial-intelligence/. Accessed 8 Apr. 2021.

AI4ALL, ai-4-all.org/.

“AI4People - Atomium.” EISMD, www.eismd.eu/ai4people/.

“AI4People Summit: Towards the 7 AI Global Frameworks.” AI4People 2020

Summit, 27 Oct. 2020, ai4people.eu/.

Floridi, Luciano, et al. “AI4People—An Ethical Framework for a Good AI Society:

Opportunities, Risks, Principles, and Recommendations.” Minds and Machines,

vol. 28, no. 4, 2018, pp. 689–707, doi:10.1007/s11023-018-9482-5.

Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds

& Machines, vol. 30, no. 1, Mar. 2020, pp. 99–120. EBSCOhost,

doi:10.1007/s11023-020-09517-8.

Lin, Ying-Tung, et al. “Engineering Equity: How AI Can Help Reduce the Harm of

Implicit Bias.” Philosophy & Technology, July 2020, pp. 1–26. EBSCOhost,

doi:10.1007/s13347-020-00406-7.

Rossi, Francesca. “Building Trust in Artificial Intelligence.” Journal of

International Affairs, vol. 72, no. 1, Fall/Winter2019 2019, pp. 127–133.

EBSCOhost, search.ebscohost.com/login.aspx?

direct=true&db=a9h&AN=134748798&site=ehost-live&scope=site.
Word Count Screenshot
