
Received: 30 December 2018 Revised: 16 August 2019 Accepted: 4 December 2019

DOI: 10.1002/asi.24335

RESEARCH ARTICLE

Female Librarians and Male Computer Programmers? Gender Bias in Occupational Images on Digital Media Platforms

Vivek K. Singh | Mary Chayko | Raj Inamdar | Diana Floegel

School of Communication and Information, Rutgers University, New Brunswick, New Jersey

Correspondence
Vivek K. Singh, School of Communication and Information, Rutgers University, 4 Huntington Street, New Brunswick, NJ 08901.
Email: v.singh@rutgers.edu

Abstract
Media platforms, technological systems, and search engines act as conduits and gatekeepers for all kinds of information. They often influence, reflect, and reinforce gender stereotypes, including those that represent occupations. This study examines the prevalence of gender stereotypes on digital media platforms and considers how human efforts to create and curate messages directly may impact these stereotypes. While gender stereotyping in social media and algorithms has received some examination in the recent literature, its prevalence in different types of platforms (for example, wiki vs. news vs. social network) and under differing conditions (for example, degrees of human- and machine-led content creation and curation) has yet to be studied. This research explores the extent to which stereotypes of certain strongly gendered professions (librarian, nurse, computer programmer, civil engineer) persist and may vary across digital platforms (Twitter, the New York Times online, Wikipedia, and Shutterstock). The results suggest that gender stereotypes are most likely to be challenged when human beings act directly to create and curate content in digital platforms, and that highly algorithmic approaches for curation showed little inclination towards breaking stereotypes. Implications for the more inclusive design and use of digital media platforms, particularly with regard to mediated occupational messaging, are discussed.

1 | INTRODUCTION

Media platforms, technological systems, and search engines are conduits and gatekeepers for all kinds of information (Bhargava & Feng, 2002; Otterbacher, Bates, & Clough, 2017). They often influence, reflect, and reinforce stereotypes (Herdagdelen & Baroni, 2011; Noble, 2013, 2018), including those that represent occupations. Occupational choice is heavily influenced by gender; people are often steered towards and select professions on the basis of perceived gendered traits and qualities. Messages communicated and received about maleness, femaleness, technology, work, and power, including via the media, can easily become stereotypes that help to solidify these inequalities and “hierarchies of the gender order” (Wajcman, 2008, p. 81). This contributes to women being underrepresented in certain occupations which tend to be better-paying and/or more respected (that is, computer programmer, civil engineer), and overrepresented in others (that is, librarian, nurse; U.S. Department of Labor, 2018).

Stereotypes are “sets of socially shared beliefs about traits that are characteristic of members of a social category” (see Prot et al., 2015). They can provide a kind of shortcut for visualizing groups and categories of people and conceptualizing highly complex social constructs. When stereotypes are taken as fact and applied

J. Assoc. Inf. Sci. Technol. 2020;1–14. wileyonlinelibrary.com/journal/asi © 2020 Association for Information Science and Technology 1
uncritically (as to individual members of a social category), they can lead to bias, prejudice, discrimination, and outright harm. Considerable research, however, indicates that while exposure to media-depicted stereotypes can increase stereotyped thinking by those who engage with them, exposure to counter-stereotypical images can reduce stereotypical attitudes (Dasgupta & Greenwald, 2001; Prot et al., 2015; Ramasubramanian, 2011). As the latter offers real opportunities for destabilizing gender hierarchies and contributing to social equality, there is great value in investigating the conditions under which such stereotypes prevail in the modern media landscape, and the extent to which human agency in new media use may be challenging or altering entrenched stereotypes.

While much evidence exists that depictions of occupations in mass media overwhelmingly reflect “traditional” stereotypes (Arslan & Koca, 2007; Bligh, Schlehofer, Casad, & Gaffney, 2011; Sink & Mastro, 2017), the prevalence and nature of stereotypical—or potentially counter-stereotypical—images on new digital media platforms are less studied. Research by Kay, Matuszek, and Munson (2015), Otterbacher et al. (2017), and Noble (2018) indicates that gendered stereotyping via Internet search engines is very much in evidence. This would seem to align with the proposition that preexisting gendered power relations introduce inequities into technological systems so powerfully that stereotypical messages and images are bound to result. This sociotechnical perspective (Winner, 1980) recognizes that social systems and technical systems are interdependent and co-constructed, and thus suggests that inequities are introduced into the system at so many points in the process (design, content creation, curation, use) that they continually reinforce one another, becoming all but impossible to extract from the system (Emery, 1959). However, some theorize that as gendered relations develop and change over time, technological systems can and will change as well, potentially dramatically (if gradually), as they are redesigned and used by human beings in new ways that reflect and even drive new social attitudes and behavior (see Shapiro, 2015; and Wajcman, 1991, for a discussion of these theories and this debate). If the latter is the case, media stereotypes could begin to shift and change in visible, measurable ways, especially in times like the current cultural moment in which gender relations are actively being reexamined, and especially on new media platforms on which human users can directly create and curate content.

Research into bias in media and algorithms is abundant (see Kay et al., 2015; Noble, 2018; O'Neil, 2016; Otterbacher et al., 2017). This study, however, is the first research effort to explore the phenomenon of gender stereotyping with regard to a range of diverse digital platforms (Bing, Twitter, NYTimes.com, Wikipedia, Shutterstock) and occupations (librarian, nurse, computer programmer, civil engineer). It also investigates the role of human and algorithmic efforts in curating gender-biased content and looks at how these practices may (or may not) change over time.

This research, then, examines how images representing four occupations that are highly gender-segregated as per the U.S. Department of Labor statistics (2018)—two that are traditionally female (librarian and nurse) and two that are traditionally male (computer programmer and civil engineer)—are depicted on four types of digital media platforms (Twitter, NYTimes.com, Wikipedia, and Shutterstock), from the standpoint of content creation and curation. Rather than considering human and algorithmic content creation and curation in a binary fashion, this work follows a sociotechnical perspective in assuming that all modern sites require a combination of human effort and automation to populate and curate content (O'Neil, 2016). Specifically, the study explores the following research questions:

RQ1 How different is the image-based representation of highly gender-segregated professions (librarian, nurse, computer programmer, civil engineer) in the physical world compared to digital spaces (Twitter, NYTimes.com, Wikipedia, Shutterstock)?

RQ2 How do the differences above vary with time?

RQ3 How does the relative ratio of human and algorithmic effort in content creation and curation affect the degree and types of biases observed on different types of digital media platforms?

This research has particular relevance for the field of library and information science, especially as the shape and constitution of modern libraries and the cultural roles of librarians are concerned (Dilevko & Harris, 1997). It also advances the study of the impact of technology on diverse participation in Science, Technology, Engineering, and Mathematics (STEM) disciplines, the need for which was recently underscored by the ACLU's lawsuit against Facebook for disproportionately showing younger male users technical job ads (Tiku, 2018). Finally, the results of this study will point the way towards the development of more inclusive messaging in information created and shared on digital media platforms and other online spaces. This can enhance understanding of bias in the design and use of technological platforms and information portals, decrease the prevalence of stereotyping in mediated messaging regarding
occupations and in these spaces overall, and help to bring about a more equal representation of women and men in occupations that are currently gender-segregated. As we will discuss in subsequent sections of this article, we acknowledge that our study is significantly limited because it takes a binary approach to gender, meaning we only discuss men and women in our analysis. However, we hope that our discussions of this limitation bring continued attention to wider problems with ways in which media, technical systems, and data collection instruments operate within frameworks that assume a gender binary and subsequently perpetuate inequities against nonbinary people (Keyes, 2018; Spiel, Keyes, & Barlas, 2019).

2 | REVIEW OF THE LITERATURE

Both scholarly and popular discourses focus on (often binary) gender stereotypes perpetuated by and reflected in media. While disparities between gender representations have been reported in print media at least since the 1960s, recent research suggests that inequitable treatments of gender have only increased with the influx of digital media, despite conceptions of such platforms as liberating or emancipatory (Baily, Steeves, Burkell, & Regan, 2013; Döring, Reif, & Poeschl, 2016). This section reviews the literature on disparities across binary conceptions of gender in different digital and analog media platforms, as well as theoretical frameworks relevant to the understanding of mediated gender stereotypes.

2.1 | Gender Stereotyping in Newspapers and Advertisements

Studies of newspaper content have found worldwide proliferation of binary gender stereotypes in articles, images, and advertisements. For example, Spanish newspapers often depict gender stereotypes in print and visual content and disproportionately feature women in shorter news items and the less prestigious Sunday news (de Cabo, Gimeno, Martínez, & López, 2014). Such depictions are linked to wider patterns of underrepresentation, stereotyping, and discrimination against women (de Cabo et al., 2014). Similarly, analysis of the Showbiz and Entertainment sections of Pakistani newspapers reveals that female subjects are discussed more than male subjects in terms of their personal lives, women are shown in more pictures than men, and women are more sensationalized overall (Rasul, 2009). Turkish daily newspapers feature images that portray women as glamorous, sexy, heterosexual, and mothers; content also favors male athletes over female athletes, and content on women in sports more often includes information such as marital status than content on men in sports (Arslan & Koca, 2007). Linguistic analysis further demonstrates that adjectives and compliments used to describe women make them seem powerless and beautiful, whereas those associated with men imply strength (Rasul, 2009).

Advertisements also draw on gender stereotypes. Content analysis of advertisements for technology products sampled from professional journals in the fields of business, computing, science/engineering, and library and information science finds that men are depicted more frequently than women, although the distribution of male and female figures in various poses is more egalitarian in ads found in traditional library journals (Dilevko & Harris, 1997). Additionally, the depictions of male and female roles in relation to technology are largely stereotypical: Men are often portrayed as deep thinkers who are connected to the future, whereas women often convey the notion of simplicity of product use (Dilevko & Harris, 1997; Döring & Pöschl, 2006).

2.2 | Gender Stereotyping in Social Media and Search Engines

Studies of social media suggest that digital platforms' purported ideal to present more expansive or equitable gender constructions is often unrealized. For example, social networking sites (SNS) provide many stereotypical images of girls as sexualized objects desiring male attention (Baily et al., 2013). SNS are commoditized environments in which girls' self-exposure yields both high status and heightened judgment, fueling discriminatory standards that police girls' ability to fully participate online and engage in gender-defiant presentations (Baily et al., 2013). An analysis of sexting via Facebook and Blackberry Messenger finds that girls may experience shame or punishment for sexting while boys' social status is enhanced (Ringrose, Harvey, Gill, & Livingstone, 2013). Research also suggests that selfies uploaded to SNS such as Instagram reflect gender stereotypes more strongly than magazine advertisements do (Döring, Reif, & Poeschl, 2016).

Social media can, however, also spread messages, images, and expressions of support that counter gender stereotypes, including the Western binary conception of gender. Social and other digital media forms may help transgender individuals develop their identities (Craig & McInroy, 2014), given affordances that online platforms can provide for members of marginalized populations (Mehra, Merkel, & Bishop, 2004). The presence of gender role-resistant content curated by users may create
informal learning environments (Fox & Ralston, 2016) or spaces for self-disclosure (Green, Bobrowicz, & Ang, 2015) for queer individuals who wish to challenge gender stereotypes and the gender binary.

Search engines often also perpetuate gender stereotypes that intersect with other identity categories (for example, race, class, disability) and associated inequities. Noble (2018) reports that, as a sociotechnical system, Google perpetuates conceptions of people and ideas in ways that strongly reflect racist and sexist stereotypes; black women, for example, are frequently associated with pornography in results generated by PageRank. Similarly, Google image search results for various careers have been reported to systematically underrepresent women and present stereotypical exaggerations of women in male-dominated professions (for example, a sexy construction worker) (Kay et al., 2015). Further, Google has been reported to show high-income jobs to men more often than to women (Datta, Tschantz, & Datta, 2015). This is of particular concern given that algorithms are often framed as uncontestable despite their demonstrated reinforcement of discrimination, including gender discrimination (O'Neil, 2016).

2.3 | Sociotechnical Systems

The ability of media to both spread and counter gender stereotypes can be contextualized in a sociotechnical frame. Conceived of in organizational studies, a sociotechnical systems approach recognizes that technology is designed by humans and will be used in a social context that shapes how it is adopted (Benders, Hoeken, Batenburg, & Schouteten, 2006; Leonardi, 2012). Thus, neither a technology nor its human designers and users fully determine how a system will function; systems are inherently fused with human biases. This perspective is closely related to that of sociomateriality, which claims that technical features and social action co-construct each other (Leonardi, 2012). For example, the sociocultural context surrounding material affordances and constraints of social media platforms and search engines can result in a system being either more or less useful for individuals looking for information about their gender identities and sexualities (Kitzie, 2017). However, while sociomateriality focuses on platforms' technical features (for example, a search box), a sociotechnical systems view presents a more holistic perspective on systems and how they function.

The fusion between humans and technical systems also influences how individuals experience various media. The data generated in the use and consumption of digital media provide a unique window on the world as it is experienced, shaped, and comes to be understood by millions of users (see Baym, 2015; Chayko, 2018). All forms of media, as agents of socialization, provide individuals with the possibility to learn social norms and values related both to broad social categories such as gender, race, and occupation and to more micro-interactional phenomena such as making day-to-day decisions regarding practices, beliefs, and relationships (Genner & Süss, 2017; Prot et al., 2015). In the process, people's attitudes and behaviors are constructed, shared, and then continue to affect a system in a cyclical process, thus contributing to a sociotechnical co-construction of humans and technology. These attitudes and behaviors are often influenced by one's gender, the meaning and nature of which is itself also continuously being shaped (see Butler, 1988; Shapiro, 2015).

3 | METHODOLOGICAL APPROACH

This study compared the number of images of men and women associated with four occupations (librarian, nurse, computer programmer, civil engineer) on four media companies' digital platforms (Twitter, NYTimes.com, Wikipedia, Shutterstock) to the rates of gender representation in these occupations according to national labor statistics (U.S. Department of Labor, 2017). All data were collected using the Microsoft Bing search engine's application programming interface (API), which allows for getting search results from specific sites (for example, NYTimes.com); the results from Bing without the specification of any sites were used for baseline comparison. A determination was then made as to whether the results break from or replicate national trends regarding bias and stereotyping in binary gender-segregated occupations. These large, influential, yet different kinds of digital sites were selected because they capture images and text for different and distinct purposes: one provides a platform for social media networking (Twitter), one presents the news along with editorial and feature stories (NYTimes.com), one is a crowdsourced encyclopedia (Wikipedia), and one provides consciously framed photographs for a variety of uses (Shutterstock).

While Facebook and Google are obviously highly influential media companies and platforms, the investigators elected not to use them as research sites for this study, selecting Twitter and Bing instead. Researchers studying Facebook (and its photo-sharing app Instagram) are subject to growing and frequently shifting restrictions that are much tighter than Twitter's (Bastos & Walker, 2018). And researchers working on issues similar to those explored here have reported that Bing provides much more robust and user-friendly API support than does
Google (Otterbacher et al., 2017). The investigators expect, however, that most, if not all, of the results of these studies will have broad applicability across media platforms and sites, including Facebook and Google.

We use the terms “men” and “women” in this article, as these are the terms used by Clarifai, the API used to obtain gender labels (see the Implementation section). The U.S. Department of Labor's data collection instruments exclude nonbinary individuals as well, recognizing the same two gender categories. We acknowledge the ways in which this impacts our results: (i) the images Clarifai classifies may not depict men or women, given that gender presentation does not always correlate with gender identity, and Clarifai uses recognition software based on normative conceptions of what “men” and “women” look like; (ii) this work perpetuates binary conceptions of gender that exclude nonbinary individuals and are harmful for them and for society writ large. Both of these limitations are expanded upon in the Discussion section of this article. Despite these limitations, though, we believe that this work advances the understanding of gender stereotypes in media spaces while drawing needed attention to problematic epistemic values embedded in systems that perpetuate gender inequities and attempt to quantify gender (Keyes, 2019).

In this work, we pay specific attention to the processes of content creation and content curation across platforms. Content creation across all platforms requires humans to contribute content. Twitter is considered the most inclusive of amateur content, undertaking minimal quality checks on content (Naaman, Boase, & Lai, 2010). While some content providers on Wikipedia may be amateurs, they do need to meet certain thresholds of quality, as opposed to contributors to Twitter (Stvilia, Twidale, Smith, & Gasser, 2008). Content on NYTimes.com and Shutterstock, on the other hand, is specifically created by specialists and vetted for quality and “organizational voice.” This continuum is summarized in Figure 1.

FIGURE 1 Relative position of the different digital platforms on the continuum for the type of human effort (amateur to specialized) in content creation

Consistent with the sociotechnical perspective, we suggest that the curation process of content on these platforms requires interlocking human and algorithmic contributions. Figure 2 indicates the relative position of the different digital platforms on the human effort—algorithmic effort continuum for content creation and curation. Given its massive scale of elements (tweets) to process and its real-time primacy model, the Twitter curation process is largely automated; while humans constructed, maintain, and continue to shape Twitter's algorithms, they are not moderating content as they do for platforms like Wikipedia. An algorithm based on reverse chronological ordering was Twitter's default presentation mode for a long period of time, and temporal recency is still one of the important factors in its content curation process (Binder, 2018). However, content is vetted quite extensively by human moderators on Wikipedia (Stvilia et al., 2008) and presumably at professional media companies like NYTimes.com and Shutterstock. This study considers Twitter to involve mostly algorithmic effort in content curation, while Wikipedia, NYTimes.com, and Shutterstock are more likely to require more direct and significant human effort in content curation. This does not mean that NYTimes.com does not apply algorithms in curation but simply indicates that NYTimes.com articles typically require more direct human effort in curation than those found on Twitter (see Figure 2).

FIGURE 2 Relative position of the digital platforms on the human effort—algorithmic effort continuum for content curation

4 | IMPLEMENTATION

For the analysis in this work, the top 100 relevant images were downloaded for each profession from each platform using the Bing API, then passed through the Clarifai API to obtain gender labels. Potential errors in gender labels were then quantified, and the difference between expected (for example, based on labor statistics or an equal ratio) and observed binary gender ratios was computed. Details follow below on the data collection, filtering, gender label assignment, error quantification, and baseline comparison.
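As a minimal sketch, the labeling and measurement steps of this pipeline can be expressed as follows. The `recognizer` interface is a hypothetical stand-in for the Clarifai call, the Bing download step is omitted entirely, and the 90-point confidence cutoff follows the threshold used in the study.

```python
# Sketch of the pipeline: label each downloaded image, keep only
# high-confidence labels, and compute the observed female percentage.
# `recognizer(image)` is assumed to return a (label, confidence) pair,
# where the label is "woman" or "man" for the subject nearest the image
# center; this interface is our stand-in, not the actual Clarifai client.

def label_images(images, recognizer, confidence_threshold=90.0):
    """Keep only labels the recognizer assigns with confidence above the threshold."""
    labels = []
    for image in images:
        label, confidence = recognizer(image)
        if confidence > confidence_threshold:
            labels.append(label)
    return labels


def female_percentage(labels):
    """Percentage of retained images whose subject was labeled 'woman'."""
    if not labels:
        return 0.0
    return 100.0 * sum(1 for label in labels if label == "woman") / len(labels)
```

These two helpers correspond to the "gender label assignment" and "baseline comparison" steps; the resulting percentages are what the later tables report per job and platform.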
All images were downloaded from the Bing image search using the publicly available Bing image search API. Collecting images directly using the search APIs or interfaces for the various platforms (Twitter, NYTimes.com, and so on) was also considered, but only Twitter and Shutterstock allowed for a specific search of images pertaining to the keywords, while NYTimes.com and Wikipedia allowed only for broader searches within their systems, not for searches focusing only on images. Hence, for consistency, we decided to search for images using a common Bing API where the platform considered was indicated as a search parameter, as described below. To download images from Bing, both query parameters and image filters were needed.

4.1 | Query

The query search term on Bing image search is of the format “<Job> site:<domain>”. For example, “Librarian site:twitter.com” downloaded images pertaining to the keyword librarian from twitter.com. For baseline Bing search images, sites are not mentioned; thus, the query was simply “Librarian.”

4.2 | Image filters

We use the filtering options “imageType”: “Photo,” “imageContent”: “Portrait,” and “Safesearch”: “Moderate.” Image results on Bing image search may consist of cliparts, animations, or logos in addition to photographs; these kinds of images were excluded by the above parameters. Adult content images were also not found in the results, presumably because of the “safe search” option. The filters thus focused the results on images that are photographs in portrait format. All images available (up to 100) upon searching the query on Bing image search after applying the abovementioned filters were downloaded for each job across all the platforms under consideration. Note that we do not have control over the process used by Bing or the underlying websites (for example, Twitter.com) to provide images in the search results. For instance, the results for the keyword “Librarian” could be based on the text in the tweets, a computer vision algorithm that searches for librarian-like appearances, or something else. Hence, interpreting individual results is difficult; we focus on comparisons across the results coming from different websites, for which the collection and analysis process has been kept consistent.

The Clarifai demographics API assigns a subject a male or female label in a given image based on its computer vision algorithm (Clarifai, 2018). If the image has multiple subjects, only the subject closest to the center of the image was considered. The use of the visual imagery-based API allows only external, visible markers of identities to be captured; it considers these to be proxies for gender (see the Methodological Approach and Discussion sections for why this is problematic). Gendered visual markers, however flawed, are used by both experts and novices (including children) for making sense of the world around them (Goffman, 1978; Singer & Singer, 2012) because social institutions such as schools and governing bodies often instantiate binary conceptions of gender and normative assumptions about gender presentation (Gowen & Winges-Yanez, 2014). The Clarifai API also provides a confidence score that states how accurately it rates its own binary gender assignment. For this study, only those images with subjects Clarifai identified as men or women with a confidence score greater than 90 were considered. While these settings reduced the potential errors within Clarifai's own ratings of its gender assignments, some did remain. The next section (specifically, the subsection labeled “Quantifying Errors”) presents the approach adopted to interpret errors in the results and estimate the error bounds. Although we use “men,” “women,” “male,” and “female” throughout the Results section, we acknowledge that these labels are assigned to a subject by the Clarifai API and may not be accurate to the individuals in images.

This study interprets the results based on two metrics discussed in the recent literature on algorithmic bias (Calmon, Wei, Vinzamuri, Ramamurthy, & Varshney, 2017). Demographic parity suggests that the representation of the different demographic groups (here, men and women) should be equal. Equal opportunity suggests that the number of images for each gender successfully selected by the algorithm should be in proportion to the number of candidates from each class that are eligible (this study considers labor statistics to be such a ratio). Hence, the female percentages for each job across all social media sites were evaluated in two ways:

1. Comparing results with the labor statistics;
2. Comparing results with a scenario including equal representation for the considered genders.

All the data considered in the study were collected and analyzed twice in this work—once in the summer of 2018 and once in the summer of 2019. Search data in this study were compared with the last available labor statistics data (that is, 2017 and 2018) provided by the U.S. Bureau of Labor Statistics for each of the considered professions before the search data were collected (U.S. Department of Labor, 2018). Collection of data across the 2 years allows for higher confidence in the obtained results and also allows for the analysis of trends across time.
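The query construction and filters described in sections 4.1 and 4.2 can be sketched as follows. The parameter names mirror those quoted in the text; the exact casing expected by the live Bing Image Search API may differ, and the HTTP request itself is omitted.

```python
# Sketch of the Bing image search request used in the study.

def build_query(job, domain=None):
    """Build the search term '<Job> site:<domain>'; omit 'site:' for baseline Bing."""
    return f"{job} site:{domain}" if domain else job


def build_params(job, domain=None, count=100):
    """Combine the query with the image filters described in section 4.2."""
    return {
        "q": build_query(job, domain),
        "imageType": "Photo",        # exclude cliparts, animations, and logos
        "imageContent": "Portrait",  # focus on portrait photographs of people
        "Safesearch": "Moderate",    # suppress adult content
        "count": count,              # up to 100 images per job/platform pair
    }
```

For example, `build_params("Librarian", "twitter.com")` corresponds to the Twitter librarian search, while `build_params("Librarian")` corresponds to the baseline Bing search without a site filter.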
The results are shown in Tables 1, 2, and 3. The results indicate the percentage of image results that were considered to represent women in the search results for the four professions and the four digital platforms. For baseline comparison purposes, we also report the results for labor statistics, the Bing platform (without any site specified for filtering), and the search results for the terms “men” and “women.” All figures represent the percentage of women. Thus, as per Table 1, 77% of the “Librarian” search results on Bing (without any site filters) were judged to depict women.

4.3 | Quantifying Errors

As a baseline to quantify the errors in the process, “Man” and “Woman” were searched in each of the considered settings. The results were expected to be, respectively, 100% male subjects and 100% female subjects for these queries. The results, however, were not always exactly 100%. For instance, as shown in Table 1, 99% female subjects were obtained after searching for “Woman” on Twitter, indicating 1% fewer female subjects than expected. Similarly, a search for “Man” on Twitter retrieved 1.5% female subjects, indicating 1.5% more female subjects than expected. Such errors are typically less than 5% (average = 1.9%), which provides a reasonable amount of confidence in the analysis process. The errors obtained for “Man” and “Woman” are considered the error margins, and strong claims are avoided within those margins of error in any subsequent analysis.

“Librarian” obtained 69.5% female subjects on Twitter, which is 9.5% lower than the labor statistics for women in librarianship. Because 1% fewer female subjects were retrieved when querying for “Woman” and 1.5% more female subjects were retrieved when querying for “Man” during the librarian search on Twitter, the observed 9.5% difference between Twitter images and labor statistics should be in the range from 8.5% to 11%.

Note that the stereotypes and biases found in this study can be attributed to multiple factors. For instance, the discrepancies in the NYTimes.com results could be a function of the content creation process, the curation process, the Bing processing of NYTimes.com image results, and the Clarifai API for identifying the men and women in the pictures. As shown in Table 1, in most cases the Bing results (without site filters) were closer to the labor statistics than those obtained using Bing with specific filters (for example, Twitter.com). This suggests that specifying site filters led to more pronounced disparities, and a comparison of those disparities across sites can be indicative of the kind of content present on those sites. Note that because the cause of each error cannot be determined, the focus remains on the relative analysis across different platforms wherein the data collection and analysis process has been kept consistent.

5 | COMPARING RESULTS WITH LABOR STATISTICS

To evaluate the performance of different digital platforms with respect to labor statistics, the media platform result was subtracted from the labor statistics (averaged over the years 2018 and 2019) as shown in Table 4. For instance, the Twitter performance for “librarian” compared to actual labor statistics would be 69.5 − 79.0 = −9.5. The following trends are noted with regard to these results.

5.1 | Direction of errors

1. Always fewer women.

• Twitter and Wikipedia always show fewer female subjects for all occupations.

2. Generally fewer women.
TABLE 1 Comparison of results between labor statistics and considered digital platforms for search results. (Average values across years 2018 and 2019.)

Job                  Labor statistics  Bing (without site filters)  Twitter  NYTimes  Wikipedia  Shutterstock
Librarian            79.0              77.0                         69.5     45.5     24.5       74.0
Nurse                89.3              90.5                         83.0     57.5     71.0       98.5
Computer programmer  21.2              39.5                         13.0     14.9     9.5        34.0
Civil engineer       14.6              14.5                         14.5     39.1     2.5        27.0
Woman                100.0             99.5                         99.0     97.0     91.0       99.5
Man                  0.0               0.5                          1.5      2.0      3.5        0.0
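The error-margin adjustment described in Section 4.3 can be sketched as follows. This is illustrative code, not the authors' released pipeline; the function name `bounded_gap` is our own, and the numbers are the Twitter figures reported in Table 1 and the text.

```python
# Illustrative sketch: widen an observed platform-vs-labor-statistics gap by the
# error margins measured with the "Man" and "Woman" control queries (Section 4.3).

def bounded_gap(observed_gap, woman_query_pct, man_query_pct):
    """Return a (low, high) interval for an observed gap, in percentage points.

    woman_query_pct: % judged female when querying "Woman" (ideal: 100).
    man_query_pct:   % judged female when querying "Man" (ideal: 0).
    """
    missed_women = 100.0 - woman_query_pct  # women the pipeline failed to count
    spurious_women = man_query_pct          # women the pipeline over-counted
    return observed_gap - missed_women, observed_gap + spurious_women

# Twitter "librarian": 69.5% women vs. 79.0% in labor statistics, a 9.5-point gap;
# control-query errors of 1.0 and 1.5 points give the 8.5-to-11 range cited in the text.
low, high = bounded_gap(9.5, woman_query_pct=99.0, man_query_pct=1.5)
print(low, high)  # 8.5 11.0
```

With perfect control queries the interval collapses to the observed gap itself, which is why the text avoids strong claims only within these margins.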
8 SINGH ET AL.
TABLE 2 Comparison of results between labor statistics and considered digital platforms for search results in summer 2018

Job                  Labor statistics  Bing (without site filters)  Twitter  NYTimes  Wikipedia  Shutterstock
Librarian            79.5              74.0                         71.0     50.0     22.0       74.0
Nurse                89.9              87.0                         83.0     51.0     71.0       100.0
Computer programmer  21.2              31.0                         8.0      19.2     9.0        34.0
Civil engineer       14.4              13.0                         10.0     47.6     2.0        28.0
Woman                100.0             100.0                        98.0     98.0     92.0       99.0
Man                  0.0               1.0                          1.0      3.0      2.0        0.0
TABLE 3 Comparison of results between labor statistics and considered digital platforms for search results in summer 2019

Job                  Labor statistics  Bing (without site filters)  Twitter  NYTimes  Wikipedia  Shutterstock
Librarian            78.5              80.0                         68.0     41.0     27.0       74.0
Nurse                88.6              94.0                         83.0     64.0     71.0       97.0
Computer programmer  21.2              48.0                         18.0     10.6     10.0       34.0
Civil engineer       14.8              16.0                         19.0     30.6     3.0        26.0
Woman                100.0             99.0                         100.0    96.0     90.0       100.0
Man                  0.0               0.0                          2.0      1.0      5.0        0.0
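The woman-percentage figures in Tables 1–3 come from platform-filtered image searches followed by automated gender tagging. A hypothetical sketch of that pipeline is below: the `site:` operator is real Bing query syntax, but `tag_presented_gender` is a placeholder stub standing in for the Clarifai demographics model used in the paper, not a real Clarifai call, and all function names here are our own.

```python
# Hypothetical sketch of the measurement pipeline: a Bing image query restricted
# to one platform via the `site:` operator, then a (stubbed) gender tagger.

def build_query(occupation, site=None):
    """Search string for Bing, optionally filtered to a single platform."""
    return f"site:{site} {occupation}" if site else occupation

def tag_presented_gender(image_url):
    """Placeholder tagger; a real pipeline would call an image-recognition API."""
    return "woman"  # fixed label for illustration only

def woman_percentage(labels):
    """Share of retrieved images tagged 'woman', as a percentage."""
    return 100.0 * sum(1 for label in labels if label == "woman") / len(labels)

print(build_query("librarian", site="twitter.com"))          # site:twitter.com librarian
print(woman_percentage(["woman", "man", "woman", "woman"]))  # 75.0
```

Keeping the query construction and the tagging step identical across platforms is what makes the relative comparison in Section 4.3 meaningful despite the tagger's absolute errors.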
TABLE 4 Differences between the ratio of gender representation on platforms as compared with actual labor statistics

Job                     Labor statistics  Twitter  NYTimes  Wikipedia  Shutterstock
Librarian               79.0              −9.5     −33.5    −54.5      −5.0
Nurse                   89.3              −6.3     −31.8    −18.3      9.3
Computer programmer     21.2              −8.2     −6.3     −11.7      12.8
Civil engineer          14.6              −0.1     24.5     −12.1      12.4
Woman                   100.0             −1.0     −3.0     −9.0       −0.5
Man                     0.0               1.5      2.0      3.5        0.0
Average for top 4 rows  51.0              −6.0     −11.8    −24.1      7.4
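The Table 4 entries are simply platform percentage minus labor-statistics percentage. A minimal sketch (our own helper, not the authors' code; Twitter figures from Table 1):

```python
# Sketch of the Table 4 computation: platform % women minus labor-statistics
# % women (both averaged over 2018 and 2019); negative values mean women are
# underrepresented relative to the occupation's actual gender composition.

LABOR = {"librarian": 79.0, "nurse": 89.3,
         "computer programmer": 21.2, "civil engineer": 14.6}
TWITTER = {"librarian": 69.5, "nurse": 83.0,
           "computer programmer": 13.0, "civil engineer": 14.5}

def representation_errors(platform, labor):
    """Per-occupation signed error of a platform against labor statistics."""
    return {job: round(platform[job] - labor[job], 1) for job in labor}

errors = representation_errors(TWITTER, LABOR)
print(errors["librarian"])                           # -9.5
print(round(sum(errors.values()) / len(errors), 1))  # -6.0
```

The two printed values reproduce Twitter's "librarian" cell and its "Average for top 4 rows" cell in Table 4.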
• NYTimes.com generally shows fewer female subjects than actual labor statistics except for civil engineer.

3. Generally more women.

• Shutterstock generally shows more female subjects than actual labor statistics except for librarian.

5.2 | Magnitude of errors (considering the direction)

1. No to little error (within 5%).

• N/A.

2. Moderate errors (5% to 20%).

• NYTimes.com and Twitter in general underrepresented females moderately.
• Shutterstock overrepresented females moderately.

3. Significant errors (20% or higher).

• The Wikipedia results show significant divergence (underrepresentation) from the labor statistics.

5.3 | Comparing results with gender representation

In this data set, two of the professions (librarian and nurse) are heavily occupied by women and the other two (civil engineer and computer programmer) by men. Hence, when quantifying the magnitude of errors, the gendered nature of the jobs, and whether the errors
TABLE 5 Comparison of results with actual labor statistics (giving credit for the challenging of stereotypes)

Job                     Labor statistics  Twitter  NYTimes  Wikipedia  Shutterstock
Librarian               79.0              9.5      33.5     54.5       5.0
Nurse                   89.3              6.3      31.8     18.3       −9.3
Computer programmer     21.2              −8.2     −6.3     −11.7      12.8
Civil engineer          14.6              −0.1     24.5     −12.1      12.4
Average for top 4 rows                    1.9      20.9     12.2       5.2
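The Table 5 scoring can be sketched as a sign flip on the Table 4 error: an error counts positively when it moves the platform's gender ratio toward 50:50 parity and negatively when it moves the ratio further from parity. The helper name `parity_credit` is our own; figures are from Tables 1 and 4.

```python
# Sketch of the Table 5 scoring: credit errors that move a platform's gender
# ratio toward 50:50 parity (challenging the stereotype), penalize errors that
# move it away from parity (reinforcing the stereotype).

def parity_credit(labor_pct, platform_pct):
    """Signed score for one occupation; both inputs are % women."""
    error = platform_pct - labor_pct
    # In a female-dominated job (>50% women in reality), showing fewer women is
    # a move toward parity; in a male-dominated job, showing more women is.
    return round(-error if labor_pct > 50 else error, 1)

print(parity_credit(79.0, 69.5))  # 9.5  (librarian on Twitter: toward parity)
print(parity_credit(21.2, 34.0))  # 12.8 (programmer on Shutterstock: toward parity)
print(parity_credit(14.6, 2.5))   # -12.1 (civil engineer on Wikipedia: away from parity)
```

The three printed values match the corresponding Table 5 cells.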
observed are breaking or reinforcing stereotypes can be considered. For instance, it can be argued that NYTimes.com underrepresents women in nursing-related photos because it wants to provide more equal gender representation in its content. The same could be argued regarding the overrepresentation of women as civil engineers. Therefore, in Table 5, positive credit is given for errors if they are in the direction of 50:50 parity; they are scored negatively otherwise.

The following trends were noted.

5.4 | Magnitude of errors (giving credit to challenging stereotypes)

1. Breaking stereotypes towards equal representation (scoring above 20 points).

• NYTimes.com.

2. Challenging stereotypes towards equal representation (between 5 and 20 points).

• Wikipedia and Shutterstock.

3. Little to no movement towards equal representation (less than 5 points).

• Twitter.

6 | DISCUSSION

The results of this study suggest that some gender-based occupational stereotypes are being challenged on some digital media platforms (for example, NYTimes.com, as shown in Table 5). Other gender stereotypes and biases, however, are being reinforced. Here we revisit the three research questions and discuss the results obtained.

RQ1 How different is the image-based representation of highly gender-segregated professions (librarian, nurse, computer programmer, civil engineer) in the physical world compared to digital spaces (Twitter, NYTimes.com, Wikipedia, Shutterstock)?

Consistent with many previous studies, this study finds that women (or, at least, subjects the API assigns as women) are largely underrepresented in images on digital platforms. There were clear exceptions, as discussed below, but the general trend holds. Remarkably, this underrepresentation was consistent for both male-dominated and female-dominated professions.

Examining the data based on whether creators are amateurs or specialized provides an interesting perspective (Figure 2). With regard to error directions in the representation numbers and labor statistics (Table 4), platforms with largely amateur contributors (Twitter, Wikipedia) underrepresent women across all professions. This may relate to these systems' sociotechnical contexts. A dearth of female editors on Wikipedia, for example, may lead to less frequent and more stereotypically charged coverage of women across its content (Hill & Shaw, 2013). Most recently, among three Nobel Prize-winning scientists, only the woman scientist lacked a Wikipedia page until the awarding of her prize; pages for the men had already been created (Nechamkin, 2018). Although subsets of Twitter users may certainly engage in equity-oriented work, a study of discourse among a large sample of users found that male–female power inequities, including “mansplaining” and other gender-specific language, are prevalent on the platform (Bridges, 2017). In the aggregate, results from these platforms suggest that gender stereotypes are far from eradicated on digital media, just as they are far from absent in societal discourse.

Regarding the magnitude of these errors (see Table 4), Twitter's divergence from actual labor statistics—the gendered composition of those occupations in reality—is lowest. Images on this platform correlate most precisely to the demographic makeup of those occupying the jobs examined here. On Twitter, as on Wikipedia, more images of men are found representing all occupational categories (though men and women seem to use Twitter in roughly equal numbers—see Smith & Anderson, 2018). The difference between the two platforms arguably lies in the level of automation in curation. While Twitter largely sorts the results based on recency without any major filtering,
Wikipedia content is curated at length by human content moderators and curators. This aspect is discussed further under RQ3.

RQ2 How do the differences above vary with time?

RQ2 of this work refers to the changes observed over time. As can be seen in Tables 2 and 3, the trends have remained largely consistent over the yearlong gap in which the data for this work were analyzed. First, the labor statistics-based participation across men and women remained consistent, with an average change of less than 1%. Further, the abovementioned trends in terms of direction of errors, magnitude of errors (considering the direction), and magnitude of errors (giving credit to breaking stereotypes) remained largely consistent (see Tables 2 and 3). An overwhelming majority of the platforms stayed in the same bins across the 2 years, and the overrepresentation and underrepresentation across professions was also quite similar. There were two exceptions. First, Twitter moved from “Always fewer females” in 2018 to “Generally fewer females” in 2019 in terms of direction of errors. It also moved from “Moderate errors (5% to 20%)” in 2018 to “No to little error (within 5%)” in 2019 in terms of the magnitude of errors (considering the direction). Second, NYTimes.com moved from “Breaking stereotypes towards equal representation (scoring above 20 points)” in 2018 to “Challenging stereotypes towards equal representation (between 5 and 20 points)” in 2019 in terms of magnitude of errors (giving credit to challenging stereotypes). While the changes in Twitter over the year are encouraging, the changes in NYTimes.com are less so. Nevertheless, NYTimes.com scores the highest in terms of challenging the stereotypes in both years. The results paint a cautiously optimistic horizon for the representation of women across platforms. While there is some movement in the positive direction, the trends overall remain consistent. We hope that multiple works help shine more light on these disparities and motivate transformative changes by system designers across platforms in the near future to yield greater equality across genders in results.

RQ3 How does the relative ratio of human and algorithmic effort in content creation and curation affect the degree and types of biases observed on different types of digital media platforms?

As indicated in Table 5, NYTimes.com, Shutterstock, and Wikipedia challenged binary gender stereotypes most successfully. On the other hand, Twitter replicated, and, in some cases, reinforced binary gender stereotypes and biases.

Images on NYTimes.com diverge from the labor statistics most substantially, providing images of civil engineers who are women, librarians who are men, and nurses who are men to a much greater extent than reflected by the Bureau of Labor Statistics (although this trend does not hold for images of women as computer programmers). Shutterstock seems to be making a more modest effort towards presenting gender representations that depart from the actual labor statistics, with the exception of an insufficient number of men being represented as nurses. The magnitude of gender representations that challenge labor statistics is also modest on Wikipedia, although women subjects are underrepresented in all jobs on the platform, including nurses and librarians.

As posited in RQ3, we interpret the results using the axes of content creation and content curation (see Figures 1 and 2). We find that the magnitude of errors compared to actual labor statistics showed an increasing trend based on the degree of human involvement in the curation (that is, from mostly algorithmic curation to mostly nonalgorithmic human curation). At the same time, the magnitude of error (while giving credit to the challenging of stereotypes) showed the opposite trend. While highly algorithmic approaches for curation showed little inclination towards challenging stereotypes, those with a largely nonalgorithmic human curation process were more likely to do so.

NYTimes.com and Shutterstock, as companies that deal with media and images on a very large scale (though with very different purposes), tend to speak with somewhat more unified editorial “voices” than does Wikipedia (and more so than Twitter). This may result in the somewhat more careful, less stereotypical curation of images than was seen on the other platforms examined. As is the case with NYTimes.com and Shutterstock, Wikipedia is curated and vetted for content, but its entries are submitted by a wide variety of people who act independently. There is no expectation of a unified editorial vision throughout Wikipedia; oversight focuses on accuracy and sourcing of independently created entries (Wikipedia, 2018). Thus, as was found here, a wider range of types of images to accompany entries, including both stereotypical images and those that defy stereotypes, is expected. More images of men than women accompany all occupational categories on Wikipedia, which exacerbates the bias against women as civil engineers and computer programmers, while bolstering the expectation that women will work as nurses and librarians. This may be influenced by the practice of Wikipedia being primarily written and edited by men (over 85% of Wikipedia contributors are men, by one oft-quoted estimate from Lih, 2015).
While gender stereotyping certainly persists on digital media platforms, there are sites and conditions under which these stereotypes are beginning to be challenged (for example, expert human curation on NYTimes). Given that exposure to counter-stereotypical images can reduce stereotypical attitudes, this enhances opportunities for gender-related social change.

7 | IMPLICATIONS

These findings have import and resonance. First, any indication that gendered occupational stereotypes are being countered and challenged in the new media environment is striking. The reinforcement of gender stereotypes hinders progress in desegregating occupations and can discourage people from striving for careers that they may otherwise be well suited for because of their gender, especially as mediated platforms serve as information resources (Chatman, 1996). More diverse participation in STEM disciplines and in library and information-related fields would be a welcome result of this research.

Activists and others continue to challenge inequities that fuel stereotypes on both institutional and more individual levels. For example, although more popular iterations of “Me Too” have been critiqued for succumbing to individualist perspectives and commodification (Banet-Weiser, 2018), the original movement founded by Tarana Burke aimed to accelerate people's awareness and understanding of gender-based sexual harassment, abuse, and violence (Burke, 2018). Trans Lifeline, a national trans-led organization headed by Elena Rose Vera at the time of this article's publication, provides service, support, advocacy, and education for trans individuals through justice-oriented and collective community aid (Trans Lifeline, 2019). It is possible that editors and specialized creators and curators of digital media content (and their advertisers) may be becoming sensitized to such efforts and their goals. They may be looking to some extent to drive substantive change, deciding, strategically, to be more welcoming to and inclusive of individuals other than cisgender men given the present cultural climate (Roderick, 2017). That may explain some of the countering of stereotypes found in this study. Of course, some media platforms may also sincerely wish to be less overtly biased and to portray occupations in a more gender-inclusive way. However, it is imperative to note that individuals—and especially members of queer populations—continue to experience heightened discrimination and harassment online (Powell, Scott, & Henry, 2018).

Some individuals may also be becoming more sensitized to their own gender biases given recent attention to this topic. In alignment with the body of sociotechnical work in which technology and humans co-construct one another, this could explain some of the less biased outcomes seen here. The biases in these platforms are a reflection of their human designers, the technological systems they are a part of, and the users of these systems. If and as these biases change over time, systems and platforms may begin to be designed in more gender-inclusive ways over time, and the next generations of digital platform users may observe far fewer stereotypical images. To the extent that human beings can create and curate their own content, these biases can be reflected differently and even change over time. However, addressing inequities in technical systems is no simple task, given that scientific and technological development are infused with longstanding normative assumptions about gender (Hoffman, 2017).

This work also contributes to the ongoing conversation on bias in algorithms. While multiple scholars have argued that algorithms and automation significantly amplify inequities (for example, Day, 2016; Noble, 2018; O'Neil, 2016), there has also been a recent counterargument on benchmarking such algorithmic bias with human biases (Miller, 2018). Rather than arguing a case for the benefits or hazards of algorithms, this work hopes to add nuance to this conversation. Biases occur in multiple forms and vary across media platforms. Similarly, human effort in the curation process is also a continuum. This research points out the value of considering the level and type of human decision-making in the curation process. While more automation appears to replicate societal conditions, more careful nonalgorithmic human curation might help challenge stereotypes towards gender parity. Different types of curation may surely be more appropriate for different digital platforms, companies, or users, depending on their goals.

8 | LIMITATIONS AND FUTURE WORK

Although this work discusses biases and stereotypes, both our results and their limitations indicate wider algorithmic inequities beyond highly individualized conceptions of “bias,” “stereotypes,” and “fairness” (Dave, 2019). While the overrepresentation of images that align with normative conceptions of masculine-presenting bodies is certainly an important finding, the process through which we reached that conclusion reveals wider systemic harms perpetuated by technical systems that cannot account for gender as a spectrum (Keyes, 2018; Keys, 2019). Both Clarifai's API and the U.S. Department of Labor statistics operationalize gender in binary terms that exclude nonbinary individuals, meaning that analysis derived from their data perpetuates significant epistemic harms (for example, erasure, misrepresentation,
perpetual normative conceptions of both gender and gender presentation). In essence, programs like the Clarifai API act like institutions that assign gender based on physical features that do not, in actuality, dictate a person's gender. Thus, recognition software like Clarifai reproduces normative, binary conceptions of gender (Keyes, 2018). It also parses images without the consent of those photographed, and thus has implications for consent, privacy, and ongoing problems and inequities related to facial recognition software and ways in which it may be used to police marginalized populations, including trans and nonbinary people (Gates, 2015).

Although our present study's results do not challenge this systemic epistemic violence, we hope to draw continued attention to these limitations and promote scholarship that brings a more inclusive orientation to work in human–computer interaction and information science, in order to destabilize assumptions about gender and technology and work towards equity and justice (see Spiel et al., 2019). We call for future work that aligns with these perspectives and addresses inequities beyond more simplistic conceptions of bias, fairness, and binaries (Hoffmann, 2019).

We also acknowledge that this focus on gender does not embrace intersecting identity categories—such as race, class, sexuality, age, and disability—that complicate equitable representation across various professions and media platforms (Crenshaw, 1991). Librarianship, for example, has a significant underrepresentation of people of color (Kim & Sin, 2008). Future work, including that of the authors, can and will include a more intersectional perspective to understand a fuller range of occupation-based inequalities and stereotypes in the media.

Despite these limitations, this work moves the literature forward in multiple ways. It is the first concerted effort at studying how different types of platforms, and content creation and curation efforts, reinforce or challenge mediated gender biases. It also provides new and important insights into the role of human content creation and curation in a highly algorithmic—and highly gender-segregated—digital environment. Further research that would reveal, explain, and advance understanding of the design and use of sociotechnical systems so as to highlight issues of social inclusion and justice continues to be much needed.

9 | CONCLUSION

As human beings continue to produce and consume digital information online, the gendered imagery found in many of these messages can shape and influence human attitudes, perceptions, behaviors, and norms. This study is one of the first to examine gender variations in occupational imagery across digital media platforms and information portals. The results indicate that stereotypes are most likely to be challenged when human beings act directly to create and curate content in digital spaces that are less controlled by algorithms. At the same time, highly algorithmic approaches reinforce societal realities more so than did largely human efforts, although the two cannot be fully separated. This work adds nuance to the discussion of the biases found in content curation and identifies the scenarios in which certain gradations of algorithm-heavy or human-heavy curation may be better suited. And it sheds light on ways to improve the creation and curation of digital content such that existing stereotypes can be countered in favor of gender representations that are more equitable, paving the way for a more equitable and just society.

ORCID
Vivek K. Singh https://orcid.org/0000-0002-8194-2336

REFERENCES
Arslan, B., & Koca, C. (2007). A content analysis of Turkish daily newspapers regarding sportswomen and gender stereotypes. Annals of Leisure Research, 10(3/4), 310–327.
Baily, J., Steeves, V., Burkell, J., & Regan, P. (2013). Negotiating with gender stereotypes on social networking sites: From “Bicycle Face” to Facebook. Journal of Communication Inquiry, 37(2), 91–112.
Banet-Weiser, S. (2018). Popular feminism: Structural rage. Los Angeles Review of Books. Retrieved from https://lareviewofbooks.org/article/popular-feminism-structural-rage/
Bastos, M., & Walker, S. (2018). Facebook's data lockdown is a disaster for academic researchers. The Conversation. Retrieved from https://theconversation.com/facebooks-data-lockdown-is-a-disaster-for-academic-researchers-94533
Baym, N. K. (2015). Personal connections in the digital age. Hoboken, NJ: John Wiley & Sons.
Benders, J., Hoeken, P., Batenburg, R., & Schouteten, R. (2006). First organize, then automate: A modern socio-technical view on ERP systems and teamworking. New Technology, Work and Employment, 21(3), 242–251.
Bhargava, H. K., & Feng, J. (2002). Paid placement strategies for internet search engines. In Proceedings of the 11th International Conference on World Wide Web (pp. 117–123). New York: ACM.
Binder, M. (2018). Twitter is bringing back the reverse chronological timeline today. Retrieved from https://mashable.com/article/twitter-sparkle-icon-chronological-timeline/
Bligh, M. C., Schlehofer, M. M., Casad, B. J., & Gaffney, A. M. (2011). Competent enough, but would you vote for her? Gender stereotypes and media influences on perceptions of women politicians. Journal of Applied Social Psychology, 42(3), 560–597.
Bridges, J. (2017). Gendering metapragmatics in online discourse: “Mansplaining man gonna mansplain…”. Discourse, Context & Media, 20, 94–102.
Burke, T. (2018). The inception. Retrieved from https://metoomvmt.org/the-inception/
Butler, J. (1988). Performative acts and gender constitution: An essay in phenomenology and feminist theory. Theatre Journal, 40(4), 519–531.
Calmon, F., Wei, D., Vinzamuri, B., Ramamurthy, K. N., & Varshney, K. R. (2017). Optimized pre-processing for discrimination prevention. In Advances in neural information processing systems (pp. 3992–4001). Red Hook, NY: Curran Associates.
Chatman, E. A. (1996). The impoverished life-world of outsiders. Journal of the American Society for Information Science, 47, 193–206.
Chayko, M. (2018). Superconnected: The Internet, digital media, and techno-social life. Thousand Oaks, CA: Sage Publications.
Clarifai. (2018). Demographics. Retrieved from https://www.clarifai.com/models/demographics-image-recognition-model-c0c0ac362b03416da06ab3fa36fb58e3
Craig, S. L., & McInroy, L. (2014). You can form a part of yourself online: The influence of new media on identity development and coming out for LGBTQ youth. Journal of Gay & Lesbian Mental Health, 18(1), 95–109.
Crenshaw, K. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299.
Dasgupta, N., & Greenwald, A. G. (2001). On the malleability of automatic attitudes: Combating automatic prejudice with images of admired and disliked individuals. Journal of Personality and Social Psychology, 81, 800–814.
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112.
Dave, K. (2019). Systemic algorithmic harms. Data & Society: Points. Retrieved from https://points.datasociety.net/systemic-algorithmic-harms-e00f99e72c42
Day, M. (2016). How LinkedIn's search engine may reflect a gender bias. The Seattle Times. Retrieved from https://www.seattletimes.com/business/microsoft/how-linkedins-search-engine-may-reflect-a-bias/
De Cabo, R. M., Gimeno, R., Martínez, M., & López, L. (2014). Perpetuating gender inequality via the Internet? An analysis of women's presence in Spanish online newspapers. Sex Roles, 70(1–2), 57–71.
Dilevko, J., & Harris, R. M. (1997). Information technology and social relations: Portrayals of gender roles in high tech product advertisements. Journal of the American Society for Information Science, 48, 718–727.
Döring, N., & Pöschl, S. (2006). Images of men and women in mobile phones advertisements: A content analysis of advertisements for mobile communication systems in selected popular magazines. Sex Roles, 55, 173–185.
Döring, N., Reif, A., & Poeschl, S. (2016). How gender-stereotypical are selfies? A content analysis and comparison with magazine adverts. Computers in Human Behavior, 55, 955–962.
Emery, F. (1959). Characteristics of sociotechnical systems. London: Tavistock Institute.
Fox, J., & Ralston, R. (2016). Queer identity online: Informal learning and teaching experiences of LGBTQ individuals on social media. Computers in Human Behavior, 65, 635–642.
Gates, K. (2015). Can computers be racist? Juniata Voices, 15, 5–17.
Genner, S., & Süss, D. (2017). Socialization as media effect. The International Encyclopedia of Media Effects. https://doi.org/10.1002/9781118783764.wbieme0138
Goffman, E. (1978). Gender advertisements. Cambridge, MA: Harvard University Press.
Gowen, L. K., & Winges-Yanez, N. (2014). Lesbian, gay, bisexual, transgender, queer, and questioning youths' perspectives of inclusive school-based sexuality education. Journal of Sex Research, 51(7), 788–800.
Green, M., Bobrowicz, A., & Ang, C. S. (2015). The lesbian, gay, bisexual and transgender community online: Discussions of bullying and self-disclosure in YouTube videos. Behaviour & Information Technology, 34(7), 704–712.
Herdagdelen, A., & Baroni, M. (2011). Stereotypical gender actions can be extracted from web text. Journal of the American Society for Information Science and Technology, 62, 1741–1749.
Hill, B. M., & Shaw, A. (2013). The Wikipedia gender gap revisited: Characterizing survey response bias with propensity score estimation. PLoS One, 8(6), e65782.
Hoffman, A. L. (2017). Data, technology, and gender: Thinking about (and from) trans lives. In A. Shew & J. Pitt (Eds.), Spaces for the future: A companion to philosophy of technology (pp. 3–13). London: Routledge.
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication, & Society, 22(7), 900–915.
Kay, M., Matuszek, C., & Munson, S. A. (2015). Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3819–3828). New York: ACM.
Keyes, O. (2018). The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, 2(88), 1–22. https://doi.org/10.1145/3274357
Keys, O. (2019). Counting the countless. Real Life. Retrieved from https://reallifemag.com/counting-the-countless
Kim, K. S., & Sin, S. C. J. (2008). Increasing ethnic diversity in LIS: Strategies suggested by librarians of color. The Library Quarterly, 78(2), 153–177.
Kitzie, V. L. (2017). Affordances and constraints in the online identity work of LGBTQ+ individuals. Proceedings of the Annual Meeting of the Association for Information Science and Technology, 54, 222–231.
Leonardi, P. M. (2012). Materiality, sociomateriality, and socio-technical systems: What do these terms mean? How are they related? Do we need them? In P. M. Leonardi, B. A. Nardi, & J. Kallinikos (Eds.), Materiality and organizing: Social interaction in a technological world (pp. 25–48). Oxford, UK: Oxford University Press.
Lih, A. (2015). Can Wikipedia survive? The New York Times. Retrieved from https://www.nytimes.com/2015/06/21/opinion/can-wikipedia-survive.html
Mehra, B., Merkel, C., & Bishop, A. P. (2004). The internet for empowerment of minority and marginalized users. New Media & Society, 6(6), 781–802.
Miller, A. P. (2018). Want less-biased decisions? Use more algorithms. Harvard Business Review. Retrieved from https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms
Naaman, M., Boase, J., & Lai, C. H. (2010). Is it really about me? Message content in social awareness streams. In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work (pp. 189–192). New York: ACM.
Nechamkin, S. (2018). Wikipedia didn't think this woman Nobel Prize winner deserved a page. The Cut. Retrieved from https://www.thecut.com/2018/10/wikipedia-didnt-give-this-nobel-prize-winner-a-page.html
Noble, S. U. (2013). Google search: Hyper-visibility as a means of rendering black women and girls invisible. Invisible Culture, 19, 1–23. Retrieved from https://ivc.lib.rochester.edu/google-search-hyper-visibility-as-a-means-of-rendering-black-women-and-girls-invisible/
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.
O'Neil, C. (2016). Weapons of math destruction. New York: Crown.
Otterbacher, J., Bates, J., & Clough, P. D. (2017). Competent men and warm women: Gender stereotypes and backlash in image search results. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 6–11). Denver, CO: Association for Computing Machinery.
Powell, A., Scott, A. J., & Henry, N. (2018). Digital harassment and abuse: Experiences of sexuality and gender minority adults. European Journal of Criminology. [Epub ahead of print] https://doi.org/10.1177/1477370818788006
Prot, S., Anderson, C. A., Gentile, D. A., Warburton, W., Saleem, M., Groves, C. L., & Brown, S. C. (2015). Media as agents of socialization. In J. E. Grusec & P. D. Hastings (Eds.), Handbook of socialization (2nd ed., pp. 276–300). New York: Guilford Press.
Ramasubramanian, S. (2011). The impact of stereotypical versus counterstereotypical media exemplars on racial attitudes, causal attributions, and support for affirmative action. Communication Research, 38, 497–516.
Singer, D. G., & Singer, J. L. (2012). Handbook of children and the media. Thousand Oaks, CA: Sage Publications.
Sink, D., & Mastro, D. (2017). Depictions of gender on primetime television: A quantitative content analysis. Mass Communication and Society, 20(1), 3–22.
Smith, A., & Anderson, M. (2018). Social media use in 2018. Pew Research Center. Retrieved from http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/
Spiel, K., Keyes, O., & Barlas, P. (2019). Patching gender: Non-binary utopias in HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (p. alt05). New York: ACM.
Stvilia, B., Twidale, M. B., Smith, L. C., & Gasser, L. (2008). Information quality work organization in Wikipedia. Journal of the American Society for Information Science and Technology, 59(6), 983–1001.
Tiku, N. (2018). ACLU says Facebook ads let employers favor men over women. Wired. Retrieved from https://www.wired.com/story/aclu-says-facebook-ads-let-employers-favor-men-over-women/
U.S. Department of Labor, Bureau of Labor Statistics. (2018). Labor force statistics from the current population survey. Retrieved from https://www.bls.gov/cps/cpsaat11.htm
Wajcman, J. (1991). Feminism confronts technology. University Park, PA: Pennsylvania State University Press.
Wajcman, J. (2008). The feminization of work in the information age. In D. G. Johnson & J. M. Wetmore (Eds.), Technology and society: Building our sociotechnical future (pp. 459–474). Cambridge, MA: MIT Press.
Wikipedia. (2018). Wikipedia: Policies and guidelines. Retrieved from https://en.wikipedia.org/wiki/Wikipedia:Policies_and_guidelines
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1),
Rasul, S. (2009). Gender stereotypes in thelanguage of Pakistani 121–136.
newspapers. In Proceedings, 8th InternationalConference on
Language and Development (pp. 1–18). Bangladesh.
Ringrose, J., Harvey, L., Gill, R., & Livingstone, S. (2013). Teen girls, How to cite this article: Singh VK, Chayko M,
sexual double standards and ‘sexting’: Gendered value in digital Inamdar R, Floegel D. Female Librarians and Male
image exchange. Feminist theory, 14(3), 305–323. Computer Programmers? Gender Bias in
Roderick, L. (2017). How the portrayal of women in the media has
Occupational Images on Digital Media Platforms. J.
changed. Marketing week. Retrieved from https://www.
marketingweek.com/2017/03/08/portrayal-women-media/
Assoc. Inf. Sci. Technol. 2020;1–14. https://doi.org/
Shapiro, E. (2015). Gender circuits: Bodies and identities in a techno- 10.1002/asi.24335
logical age (2nd ed.). New York: Routledge.

You might also like