
CSC300 Tutorial

Race & Computing


Race & Computing

● Current research, designs, and industry standards within computing contribute to racial disparities.
● This tutorial will focus on Critical Race Theory for HCI.
“The New Jim Code”: “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective or progressive than the discriminatory systems of a previous era.”

● Ruha Benjamin in Race After Technology: Abolitionist Tools for the New Jim Code
● Codes act as narratives - racial codes are born to facilitate social control
○ “Codes, [...], operate within powerful systems of meaning that render some things visible, others invisible, and create a vast array of distortions and dangers.”
● Jim Crow refers to an era, a region, laws, institutions, customs, and a code of behaviour that upholds white supremacy
○ What happens when cultural coding gets embedded in software code?
Critical Race Theory for HCI

● Racism is ordinary. It is not an occasional occurrence; it is pervasive.
● Race and racism are socially constructed. They do not represent biological truths.
● Identity is intersectional. Every person has unique (and sometimes conflicting) experiences.
● Interest convergence hinders anti-racism movements. Groups that benefit from racism are reluctant to dismantle it unless doing so also benefits them.
● Liberalism hinders anti-racism movements. Liberalism’s take on equality is often associated with “colour-blindness”.
● Voices of marginalized groups provide unique perspectives.
Example: Workplace Terms

● GitHub made strides to replace the terminology used for its branches, renaming the default branch from “master” to “main”.
● This has encouraged other tech companies to rethink the terminology they use in their dev environments.
Example: Workplace Terms

● Some criticism has suggested that renaming such terms is a small matter and that terms like “slave” have lost their charged meaning in the tech community.
● However, critical race theory for HCI holds that racism is ordinary and pervasive, so even seemingly small choices of terminology should not be dismissed.
Example: Cell Phone Cameras

● A key selling point of the Google Pixel 6 is its Real Tone technology, designed to better capture darker skin tones.
● This problem goes back to the era of film cameras, when Kodak film was optimized to capture white skin tones.
● Google brought in a team of Black, Hispanic, and Asian photographers and videographers to help train its AI software, an example of including marginalized groups in the design process.
Big Tech Commitments

● Following mass protests in 2020, major tech companies made commitments to anti-racism.
● Apple
○ 2020: Pledged to invest $100 million in opportunities for communities of colour and to increase spending with Black-owned suppliers.
○ 2021: Created numerous projects under the $100 million commitment, including an academy to help Black coders gain skills.
○ Note: Black employees at Apple remained at 9% from 2016 through 2021, with Black representation in leadership at 4%.
Big Tech Commitments

● Following mass protests in 2020, major tech companies made commitments to anti-racism.
● Google
○ 2020: Made numerous financial pledges towards equity and opportunity, along with commitments to improve company diversity.
○ 2021: Met many of its financial pledges and increased hiring of Black workers.
○ Note: Controversy arose in its HBCU-partnered program, with disorganization, cultural clashes, and microaggressions experienced by students; this was addressed after an exposé.
Big Tech Commitments

● Tech layoffs of 2023:
○ Seem to be disproportionately affecting women and mid-career talent.
○ “Women and Latino workers represent 46.64% and 11.49%, respectively, of the tech layoffs from September to December 2022, while those segments make up 39.09% and 9.96%, respectively, of the entire industry”.
○ Companies have also cut diversity, equity, and inclusion budgets.
Bias in Algorithms: Justice System

[Figures: distributions of risk scores for Black defendants and for white defendants]

● Predictive policing tools have also proven to perpetuate systemic racism.
● Algorithms within predictive policing tools do not use race as a factor in their training models, but do use other socioeconomic and geographical characteristics like zip codes and education background.
○ These can serve as proxies for racial bias in the data, as racism is systemic (a small sketch of this proxy effect follows).
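A minimal sketch of the proxy effect, using synthetic data and hypothetical variable names (not drawn from any real policing tool): even when race is excluded from the model's features, a correlated attribute such as zip code can let the model reproduce the disparity baked into the historical labels.

```python
# Sketch (synthetic data): race is never given to the model, yet a correlated
# proxy (zip code) carries the racial signal from biased historical labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                                # 0/1, never shown to the model
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)    # segregation: 90% aligned with race
education = rng.normal(0, 1, n)

# Historical labels reflect biased over-policing of group 1, not true behaviour.
label = (rng.random(n) < 0.2 + 0.3 * race).astype(int)

X = np.column_stack([zip_code, education])                  # race itself is *not* a feature
model = LogisticRegression().fit(X, label)

scores = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", scores[race == 0].mean())
print("mean predicted risk, group 1:", scores[race == 1].mean())
# The gap persists because zip code stands in for race.
```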
Bias in Algorithms: Healthcare

● Hospitals and insurers use algorithms to help determine and manage care for patients in the US.
● One algorithm was less likely to refer Black patients (compared to white patients who were equally sick) to programmes that would serve their medical needs.
Bias in Algorithms: Speech Recognition

● Speech recognition is part of numerous technologies and is frequently used in voice assistants; these systems rely on algorithms for natural language processing (NLP).
● In a study, the systems misidentified words about 19 percent of the time with white participants, but with Black participants, mistakes jumped to 35 percent (see the metric sketch below).
○ Best Performance: Microsoft
■ About 15 percent of words from white participants were misidentified, and 27 percent from Black participants.
○ Worst Performance: Apple
■ Failed 23 percent of the time with audio from white participants and 45 percent of the time with Black participants.
● “Companies like Google may have trouble gathering the right data, and they may not be motivated enough to gather it.”
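As a rough illustration of how such per-group comparisons are made, here is a minimal sketch (hypothetical transcripts, not data from the study) that computes word error rate, the standard metric behind figures like those above, for two groups of utterances.

```python
# Sketch: word error rate (WER) = word-level edit distance between the reference
# transcript and the recognizer's output, divided by the number of reference words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

# Hypothetical per-group samples: (reference transcript, recognizer output).
samples = {
    "group_a": [("turn on the kitchen lights", "turn on the kitchen lights")],
    "group_b": [("turn on the kitchen lights", "turn on the chicken nights")],
}
for group, pairs in samples.items():
    wer = sum(word_error_rate(r, h) for r, h in pairs) / len(pairs)
    print(f"{group}: WER = {wer:.0%}")
```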
Bias in Algorithms & Critical Race Theory
● Algorithms used in the justice system and healthcare industries violate principles of critical race theory, which can deepen existing systemic racism.
● Training models are often built by engineers who do not come from marginalized communities.
● We need to recognize that private industry choices in tech are public policy decisions.
● Solutions:
○ Some companies have employed auditors who can determine whether their training data and models contain bias (a simple disparity check is sketched below).
○ Data 4 Black Lives: A movement created to stimulate positive change for Black communities using data science.
○ Algorithmic Justice League: A group that fights bias in automated systems.
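A minimal sketch of one such audit check, using synthetic data and stand-in model outputs (the variable names and the 0.8 threshold are illustrative, not taken from any particular auditing tool): compare positive-prediction rates across groups and report their ratio, often called disparate impact.

```python
# Sketch: demographic-parity audit on a model's predictions, grouped by a
# protected attribute that is used only for auditing, not for prediction.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)                                      # protected attribute
predictions = (rng.random(n) < 0.25 + 0.15 * group).astype(int)    # stand-in model outputs

rate_0 = predictions[group == 0].mean()
rate_1 = predictions[group == 1].mean()
print(f"positive rate, group 0: {rate_0:.2%}")
print(f"positive rate, group 1: {rate_1:.2%}")
print(f"disparate impact ratio: {min(rate_0, rate_1) / max(rate_0, rate_1):.2f}")
# A ratio well below 1.0 (commonly, below 0.8) flags the model for closer review;
# a fuller audit would also compare error rates conditioned on true outcomes.
```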
Discussion for Quercus
Read the following blog post from Facebook AI about their new dataset that addresses differences in skin color: Shedding light on fairness in AI with a new data set.

Facebook AI has built and open-sourced a new, unique data set called Casual Conversations, consisting of 45,186 videos of participants having non-scripted conversations. It serves as a tool for AI researchers to surface useful signals that may help them evaluate the fairness of their computer vision and audio models across subgroups of age, gender, apparent skin tone, and ambient lighting.

The data set involves paid individuals who explicitly provided their age and gender themselves, as opposed to information labeled by third parties or estimated using ML models.

Discuss the following questions:

● How is it different from previous approaches (think of ImageNet)?

● Why is self-identification important?

● What are the limits of using the skin tone scale to classify skin color?

● Is race being erased?

● Is this a positive or a negative choice?

● Which aspects of Critical Race Theory of HCI does this case study speak to? Which aspects does it neglect to consider, if any?

Total submission: one page (.doc or .pdf) submitted to Quercus, or a brief in-tutorial presentation
