
AI Art and its Impact on Artists

Harry Jiang, Independent Researcher, Canada (hhj@alumni.cmu.edu)
Lauren Brown, Artist, USA (labillustration@gmail.com)
Jessica Cheng, Artist, Canada (chengelingart@gmail.com)
Anonymous Artist, Artist, USA
Mehtab Khan, Yale Law, USA (mehtab.khan@yale.edu)
Abhishek Gupta, Montréal AI Ethics Institute, Canada (abhishek@montrealethics.ai)
Deja Workman, Penn State University, USA (dqw5409@psu.edu)
Alex Hanna, The Distributed AI Research Institute, USA (alex@dair-institute.org)
Jonathan Flowers, California State University, Northridge, USA (johnathan.flowers@csun.edu)
Timnit Gebru, The Distributed AI Research Institute, USA (timnit@dair-institute.org)

ABSTRACT

The last 3 years have resulted in machine learning (ML)-based image generators with the ability to output consistently higher quality images based on natural language prompts as inputs. As a result, many popular commercial "generative AI Art" products have entered the market, making generative AI an estimated $48B industry [125]. However, many professional artists have spoken up about the harms they have experienced due to the proliferation of large scale image generators trained on image/text pairs from the Internet. In this paper, we review some of these harms, which include reputational damage, economic loss, plagiarism, and copyright infringement. To guard against these issues while reaping the potential benefits of image generators, we provide recommendations such as regulation that forces organizations to disclose their training data, and tools that help artists prevent their content from being used as training data without their consent.

ACM Reference Format:
Harry Jiang, Lauren Brown, Jessica Cheng, Anonymous Artist, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Jonathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In AAAI/ACM Conference on AI, Ethics, and Society (AIES '23), August 08–10, 2023, Montréal, QC, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3600211.3604681

This work is licensed under a Creative Commons Attribution International 4.0 License.
AIES '23, August 08–10, 2023, Montréal, QC, Canada
© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0231-0/23/08.
https://doi.org/10.1145/3600211.3604681

1 INTRODUCTION

In the two years since the publication of [18], which outlines the dangers of large language models (LLMs), multimodal generative artificial intelligence (AI) systems with text, images, videos, voice, and music as inputs and/or outputs have quickly proliferated into the mainstream, making the generative AI industry worth an estimated $48B [125]. Tools like Midjourney [78], Stable Diffusion [5], and DALL-E [91] that take in text as input and output images, as well as image-to-image based tools like Lensa [97] which output altered versions of the input images, have tens of millions of daily users [47, 127]. However, while these products have captured the public's imagination, arguably to a much larger extent than any prior AI system, they have also resulted in tangible harm, with more to come if the ethical concerns they pose are not addressed now. In this paper, we outline some of these concerns, focusing our discussion on the impact of image based generative AI systems, i.e. tools that take text, images, or a combination of both text and images as inputs, and output images. While other works have summarized some of the potential harms of generative AI systems more generally [18, 28, 29], we focus our discussion on the impacts of these systems on the art community, which has arguably been one of the biggest casualties (Section 4) [40, 138].

As we argue in Section 3, image based generative AI systems, which we call image generators throughout this paper, are not artists. We make this argument by first establishing that art is a uniquely human endeavor, using perspectives from philosophies of art and aesthetics. We further discuss how anthropomorphizing image generators and describing them as merely being "inspired" by their training data, like artists are inspired by other artists, is not only misguided but also harmful. Ascribing agency to image generators diminishes the complexity of human creativity, robs artists of credit (and in many cases compensation), and transfers


accountability from the organizations creating image generators, and the practices of these organizations which should be scrutinized, to the image generators themselves.

While companies like Midjourney, Stability AI and Open AI who produce image generators are valued at billions of dollars and are raising hundreds of millions of dollars^1, their products are flooding the market with content that is being used to compete with and displace artists. In Section 4, we discuss the impact of these products on working artists, including the chilling effect on cultural production and consumption as a whole. Merely open sourcing image generators does not solve these problems, as open models would still enable people to plagiarize artists' works and impersonate their style for uses that the artists have not consented to.

^1 https://www.nytimes.com/2023/01/23/business/microsoft-chatgpt-artificial-intelligence.html

In Section 5, we provide a summary of the relevant legal questions pertaining to image generators. While there have been legal developments around the world, we focus our analysis on the US, where a number of lawsuits have been filed by artists challenging the use of image generators [129]. Given that copyright has been the most frequently invoked law in such cases [28], we provide an overview discussing the relevance of US copyright law in protecting artists, and conclude that it is largely unequipped to tackle many of the types of harms posed by these systems to content creators. As we discuss in Section 6, the AI research community has enabled the aforementioned harms through data laundering, with for-profit corporations partnering with academic institutions that help them gather training data for commercial purposes while increasing their chances of courts finding these uses to be "fair use".

We end our discussion with proposals for new tools and regulations that tackle some of the harms discussed in this paper, as well as encouraging the AI community to align themselves with those harmed by these systems rather than with the powerful entities driving the proliferation of generative AI models trained on the free labor of content creators.

2 LITERATURE REVIEW

2.1 Background on Image Generation

We define "generative artificial intelligence (AI)" to encompass machine learning products that feature models whose output spaces overlap in part or in full with their input spaces during training, though not necessarily during inference. While generative AI systems are based on generative models, which statistically aim to model the joint distribution p(x, y) between a feature space and an output space [85], we distinguish between "generative AI" systems and generative models, as the latter can also be used in classification systems. This paper focuses on products whose stated output space is composed, in part or in full, of visual data (i.e. images), which will be referred to as image generators; similarly, the scope of art discussed within this work is largely limited to the fields of visual art. We consider two different applications in the context of inference, text-to-image and image-to-image, though more recent multimodal pretrained model architectures are usually capable of both (and often necessitate both).

Early approaches to image synthesis aimed to achieve texture synthesis, i.e. modifying an existing image to copy the texture of another image [38, 95, 120]. In the deep learning era of computer vision (2012 until now), Convolutional Neural Networks (CNNs) enabled the recognition of a large number of latent attributes that do not conform to arbitrary statistical forms, unlike early works in texture synthesis [69].

In addition to CNNs, another architectural element of note is the variational autoencoder (VAE), like the one used in Yan et al. [135]. VAEs, which use two mirrored neural network components to map the input space to a latent space (encoder) and vice versa (decoder) [68], set the stage for the development of generative models, which significantly widened the capacity of image synthesis. A key element of VAEs is the reconstruction loss function, which allows an ML system to explicitly define its training objective as the re-creation of input features, with the expectation that the model can generalize beyond the training set during inference. VAEs enabled the creation of image generation models such as VQ-VAE-2 [104] and are components of many subsequent models.

The next major breakthrough was the generative adversarial network (GAN), which employs two models trained simultaneously [58]. Unlike conventional neural networks such as VAEs, which directly and asymmetrically measure the divergence between a distribution known to be a reference and one known to be a hypothesis, GANs indirectly measure the divergence between two distributions of masked origin through the intermediary of the discriminator. The introduction of conditional losses in [82] made GANs the dominant architecture in image generation due to the ability to inform outputs with text tags as auxiliary information; the paper itself used a handwriting generator trained on the MNIST dataset [35] as a demonstration. With GANs came the first large-scale image generating models, allowing for output sizes of up to 512 × 512 [24, 61, 65].

The adaptation of approaches from natural language processing (NLP), such as transformers, further enabled complex text as input for text-to-image models [42]. In [30], OpenAI adapted the architecture of GPT-2, a large language model (LLM), to output a series of pixel values that could be rearranged into a recognizable image. This research led to the original DALL-E [103], a tool that outputs a 256 × 256 RGB image based on natural language prompts, this time using the GPT-3 architecture [25].

In the last 3 years, the use of GANs for image generation has been overtaken by diffusion models, which take inspiration from fluid dynamics [37, 86, 116, 118]. These models work by repeatedly applying Gaussian noise to an image (imitating the diffusion process of fluids or heat), and then denoising the result in equally many steps [118]. In a departure from GANs' implicit modeling, diffusion models return to using a reconstruction loss.

In 2022, Rombach et al. released the Stable Diffusion model [4, 106], which uses a conditional latent space based on text and images, in this case via a pretrained model by OpenAI called CLIP [99]. This allowed for models that are not confined to natural language understanding (NLU)-based architectures and can generate high-quality images based on natural language prompts. In the same year, OpenAI released DALL-E 2 [91], which has a similar model architecture [102] but a training dataset that is opaque to the public.
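The distinction drawn in Section 2.1 — generative models estimate the joint distribution p(x, y), yet can still be used in classification systems — can be illustrated with a toy sketch. The class-conditional Gaussian model, the 1-D data, and all numbers below are illustrative assumptions, not taken from any system discussed in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: two classes with different means.
x0 = rng.normal(-2.0, 1.0, 500)   # class y = 0
x1 = rng.normal(+2.0, 1.0, 500)   # class y = 1

# "Generative" fit: estimate p(x | y) and p(y), i.e. the joint p(x, y).
mu = np.array([x0.mean(), x1.mean()])
sigma = np.array([x0.std(), x1.std()])
prior = np.array([0.5, 0.5])

def sample(n):
    """Generation: draw (x, y) pairs from the learned joint distribution."""
    y = rng.choice(2, size=n, p=prior)
    return rng.normal(mu[y], sigma[y]), y

def classify(x):
    """Classification: apply Bayes' rule to the very same joint model."""
    # Unnormalized p(y) * p(x | y) for each class.
    lik = prior * np.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma
    return int(np.argmax(lik))

xs, ys = sample(10)        # new synthetic data from the model
label = classify(-2.5)     # Bayes-rule decision using the same model
```

The same fitted object both samples new data and labels existing data, which is why the paper's definition scopes "generative AI" to products whose output space overlaps their training input space, rather than to generative models per se.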

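The forward-noising half of the diffusion process described in Section 2.1 can be sketched in a few lines of numpy. The linear variance schedule, step count, and image size below are illustrative assumptions rather than the settings of any particular model:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Noise image x0 up to step t using the closed form of q(x_t | x_0):
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Illustrative linear schedule over T = 1000 steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
rng = np.random.default_rng(0)

x0 = rng.uniform(-1.0, 1.0, size=(8, 8))   # stand-in for a normalized image
x_noisy = forward_diffuse(x0, T - 1, betas, rng)
# By the final step alpha_bar is nearly zero, so x_noisy is close to pure
# Gaussian noise; a trained denoiser would then reverse this step by step,
# optimized with a reconstruction loss as described above.
```

Training a denoiser to invert this process is what the cited diffusion papers do; the sketch only shows why the model's target at each step is a re-creation of the input, i.e. a reconstruction objective.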

In addition to different model architectures, massive image datasets such as JFT-300M (300M images) [124] have helped improve image generation performance. The current crop of image generators, primarily those based on Stable Diffusion, are pretrained on LAION [109] or its variants, which are subsets of the original 5B dataset. The dataset consists of 5.85 billion CLIP-filtered image-text pairs, of which 2.32B contain English language text. An exploration of a subset of LAION can be found at [11].

2.2 Products for Image Generation

The advent of Stable Diffusion and related models has resulted in a proliferation of commercial and non-commercial image generation tools that use them. Stability AI's Stable Diffusion [5] and its commercial product Dream Studio^2, OpenAI's DALL-E 2 [91], and Midjourney [78] are the most popular systems built on diffusion models, with StarryAI [122], Hotpot.ai [94], NightCafe [123], and Imagen [108] being a few others. Established art software company Adobe has also released its image generator product, Adobe Firefly [3], which the company says is trained on Adobe Stock images, images in the public domain, and those under open licensing. The ecosystem is large and expanding, including organizations like Fotor [45], Dream by WOMBO [133], Images.AI [128], Craiyon [71], ArtBreeder [9], Photosonic [134], Deep Dream Generator [55], Runway ML [107], CFSpark [46], MyHeritage Time Machine [73], and Lensa [97]. While some advertise the model architectures they use, such as StableCog [121] using diffusion-based techniques, others provide little to no detail. For example, while the CEO of Stability AI has written that Midjourney used Stable Diffusion in past releases^3, Midjourney does not disclose underlying model information for its current releases, only mentioning "a brand-new AI architecture designed by Midjourney" in describing its releases since November 2022 [79].

^2 https://dreamstudio.com
^3 https://web.archive.org/web/20220823032632/https://twitter.com/EMostaque/status/1561917541743841280, referring to V3

Most of the products identified above emerged as specific commercial offerings for users to generate images by providing text prompts. There are other services that have been introduced as features in existing products, such as synthetic images in Canva [26], Shutterstock [113], and Adobe Stock Images [2], which seek to augment their stock image offerings with synthetic images. On the other hand, companies like Getty Images took a stance against including synthetic images in their portfolio of offerings in 2022 [130], although NVIDIA announced a collaboration with them in 2023 to develop image generators [76]. Open source efforts in the space have focused on using Stable Diffusion and other open-source variants to create plugins for Photoshop [7], Unreal Engine [43], and GIMP [20]. Some groups, such as Unstable Diffusion, are explicitly focused on generating not-safe-for-work (NSFW) content [59].

3 IMAGE GENERATORS ARE NOT ARTISTS

Many researchers have pointed out the issues that arise from the anthropomorphization of AI systems, including shifting responsibility from the people and organizations that build these systems to the artifacts they build, as if those artifacts have agency of their own [13, 16, 39]. This anthropomorphization is readily apparent in descriptions of image generators as if they are artists [39], even going as far as to claim that the image generators are "inspired" by the data they are trained on, similar to how artists are inspired by other artists' works [66]. In this section, we discuss why such arguments are misguided and harmful.

Following philosophers of art and aesthetics from varied disciplines (e.g. Chinese and Japanese Philosophy, American Pragmatism, and Africana Philosophy), we define art as a uniquely human endeavor connected specifically to human culture and experience [6, 36, 62, 74, 75, 88, 93]. Most philosophers of art and aesthetics argue that while non-human entities can have aesthetic experiences and express affect, a work of art is a cultural product that uses the resources of a culture to embody that experience in a form that all who stand before it can see. On this view, art refers to a process that makes use of external materials or the body to make present experience in an intensified form. Further, this process must be controlled by a sensitivity to the attitude of the perceiver insofar as the product is intended to be enjoyed by an audience. The artwork, therefore, is the result of a process that is controlled for some end and is not simply the result of a spontaneous activity ([36] pp. 54, 55). This control over the process of production is what marks the unique contribution of humanity: while art is grounded in the very activities of living, it is the human recognition of cause and effect that transforms activities once performed under organic pressures into activities done for the sake of eliciting some response from a viewer. As an example, a robin might sing and a peacock might dance, but these things are performed under the organic pressures of seeking a mate. In humans, song and dance are disconnected from the organic pressures of life and serve purposes beyond the mere satisfaction and expression of organic pressures, serving cultural purposes instead. In brief, art is a form of communication: it communicates.

In contrast, the outputs of artifacts like image generators are not framed for enjoyment because they merely imitate the technical process, and then only those technical processes embodied in the works that make up the training dataset. The image generator has no understanding of the perspective of the audience or the experience that the output is intended to communicate to this audience. At best, the output of image generators is aesthetic, in that it can be appreciated or enjoyed, but it is not artistic or art itself. Thus, "Mere perfection in execution, judged in its own terms in isolation, can probably be attained better by a machine than by human art. By itself, it is at most technique. . . To be truly artistic, a work must also be esthetic—that is, framed for enjoyed receptive perception." ([36] pp. 54).

Thus, art is a uniquely human activity, as opposed to something that can be done by an artifact. While image generators have to be trained by repeatedly being shown the "right" output, using many examples of the desired target, and explicitly defining an objective function over which to optimize, humans do not have such rigid instructions. In fact, while image generators have been shown to memorize their data and can output almost exact replicas of images from their training set under certain conditions [27, 117], as artist Karla Ortiz writes, artists' styles are so unique to them that it is very difficult for one artist to copy another's work [92]. The very few artists who are able to do this copying are known for this skill [92]. An artist's 'personal style' is like their handwriting, authentic to them, and they develop this style (their personal voice


and unique visual language) over years and through their lived experiences [92].

The adoption of any particular style of art, personal or otherwise, is a result of the ways in which the individual is in transaction with their cultural environment, such that they take up the customs, beliefs, meanings, and habits, including those habits of aesthetic production, supplied by the larger culture. As philosopher John Dewey argues, an artistic style is developed through interaction with a cultural environment rather than bare mimicry or extrapolation from direct examples supplied by a data set [23]. Steven Zapata argues, "our art 'creates' us as artists as much as we create it" [138]. This experience is unique to each human being by virtue of the different cultural environments that furnish the broader set of habits, dispositions towards action, that enabled the development of anything called a personal style through how an individual took up those habits and deployed them intelligently.

Finally, an image generator is trained to generate images from prompts by mapping images and texts into a lower dimensional representation in a latent space [58, 68, 106]. This latent space is learned during the model's training process. Once the model is trained, this latent space is fixed and can only change through training from scratch or fine-tuning on additional examples of image-text pairs [57]. In contrast, human inspiration changes continuously with new experiences, and a human's relationship with their lived experiences evolves over time. Most importantly, these experiences are not limited to additional artistic training or viewing of images. Rather, humans perform abstract interpretations between representational and imaginary subjects, topics, and of course, personal feelings and experiences that an artifact cannot have.

Let's look at Katsuhiro Otomo's seminal Akira as an example. Otomo notes that he created these images by drawing inspiration from his own teenage years, thinking about a rebuilding world, foreign political influence, and an uncertain future after World War II [12]. Similarly, Claude Monet created his defining Nymphéas [Water Lilies] series during the last 30 years of his life, after the loss of his son in 1914 [63]. As shown by both these artists, and many others, the human experience both defines and inspires creation across an artist's personal lifetime. Each individual's art is unique to their life experiences. Otomo's Akira is a fundamentally different form of artwork than Monet's Nymphéas [Water Lilies] series, not simply due to their different stylistic and pictorial media, but due to the way in which each artist's work was an expression of a cultural inheritance that shaped the unique experiences that gave rise to their particular art forms. While image generators can imitate the stylistic habits, the "unique voices," of a given artist, they cannot develop their own particular styles because they lack the kinds of experiences and cultural inheritances that structure every creative act. Even when provided with a human-written prompt, the sampling of a probability distribution conditional on a string of text does not present a synthesis of concepts, emotion, and experience.

In conclusion, image generators are not artists: they require human aims and purposes to direct their "production" or "reproduction," and it is these human aims and purposes that shape the directions in which their outputs are produced. However, many people describe image generators as if these artifacts themselves are artists, which devalues artists' works, robs them of credit and compensation, and ascribes accountability to the image generators rather than holding the entities that create them accountable. In [39], Epstein et al. performed a study with participants on Amazon Mechanical Turk to assess the impact of anthropomorphization of image generators, finding a relationship between the manner in which participants assign credit and accountability to stakeholders involved in training and producing image generators, and the level of anthropomorphization. They advise "artists, computer scientists, and the media at large to be aware of the power of their words, and for the public to be discerning in the narratives they consume."

4 IMPACT ON ARTISTS

The proliferation of image generators poses a number of harms to artists, chief among them being economic loss due to corporations aiming to automate them away. In this section, we summarize some of these harms, including the impact of artists' styles being mimicked without their consent and, in some cases, used for nefarious purposes. We close with a discussion of how image generators stand to perpetuate hegemonic views and stereotyping in the creative world, and the chilling effects of these technologies on artists as well as on overall cultural production and consumption.

4.1 Economic Loss

While artists hone their craft over years of practice, observation, and schooling, having to spend time and resources to pay for supplies, books, and tutorials, companies like Stability AI are using their works without compensation while raising billions from venture capitalists to compete with them in the same market^4. Leaders of companies like Open AI and Stability AI have openly stated that they expect generative AI systems to replace creatives imminently^5,6. Stability AI CEO Emad Mostaque has even accused artists of wanting to have a "monopoly on visual communications" and "skill segregation"^7. To the contrary, current image generation business models like those of Midjourney, Open AI and Stability AI stand to centralize power in the hands of a few corporations located in Western nations, while disenfranchising artists around the world.

^4 https://techcrunch.com/2022/10/17/stability-ai-the-startup-behind-stable-diffusion-raises-101m/
^5 https://web.archive.org/web/20220912045000/https://twitter.com/sama/status/1484950632331034625, https://web.archive.org/web/20220122181741/https://twitter.com/sama/status/1484952151222722562
^6 https://web.archive.org/web/20230811193157/https://twitter.com/emostaque/status/1591436813750906882
^7 https://web.archive.org/web/20230224175654/https://twitter.com/mollycrabapple/status/1606148326814089217

It is now possible for anyone to create hundreds of images in minutes, compile a children's book in an hour^8, and put together a project for a successful Kickstarter campaign in a fraction of the time it takes an actual artist^9. Although many of these images do not have the full depth of expression of a human, commercial image generators flood the market with acceptable imagery that can supplant the demand for artists in practice. This has already resulted in job losses for artists, with companies like Netflix Japan using image generators for animation, blaming a "labor shortage" in the anime industry for not hiring artists [32].

^8 https://www.youtube.com/watch?app=desktop&v=ZbVRYqsntDY
^9 https://web.archive.org/web/20230124003305/https://twitter.com/spiridude/status/1616476006444826625


One of the more high profile cases of the labor impact can be seen in the title sequence of Marvel Studios' 2023 TV series Secret Invasion, which uses a montage of generated imagery [81]. While prior productions from the studio feature between 5 (She-Hulk: Attorney at Law^10) and 9 (Hawkeye^11) artists and illustrators for their title sequences, Secret Invasion has only one "Sagans Carle" credited as "AI Technical Director"^12. This labor displacement is evident across creative industries. For instance, according to an article on Rest of World, a Chinese gaming industry recruiter has noticed a 70% drop in illustrator jobs, in part due to the widespread use of image generators [139]; another studio in China is reported to have laid off a third of its character design illustrators [139].

^10 https://ondisneyplus.disney.com/show/she-hulk
^11 https://ondisneyplus.disney.com/show/hawkeye
^12 https://www.disneyplus.com/series/invasion-secreta/3iHQtD1BDpgN

In addition to displacing the jobs of studio artists, the noise caused by the amount of AI-generated content will likely be devastating for self-employed artists in particular. This has become evident in the literary world with the advent of LLM-based tools like ChatGPT^13. Recently, Clarkesworld, a popular science fiction magazine, temporarily closed open submissions after being overwhelmed by the number of ChatGPT-generated submissions they received [31]. They announced that they will instead only solicit works from known authors, which disadvantages writers who are not already well known. It is not difficult to extrapolate such a result to visual art venues that receive too many AI-generated images. Contrary to "democratizing art," this reduces the number of artists who can share their works and receive recognition.

^13 https://openai.com/blog/chatgpt

Regardless of their objections, some working artists have started to report having to use image generators to avoid losing their jobs, further normalizing their commercial use [139]. Artists have also reported being approached by companies producing image generators to work on modifying the outputs of their systems^14. This type of work reduces hard-earned years of skill and artistic eye to simple cleanup work, with no agency for creative decisions. In spite of these issues, creatives in executive roles, who can be isolated from the realities of most working artists, may gravitate towards using these tools without considering the effects on the industry at large, such as a reduction in the economic earning power of many working artists. For instance, the director of Secret Invasion had editorial control in deciding whether to use image generators^15, and chose to replace illustrators' works with image generated content.

^14 https://www.facebook.com/story.php?story_fbid=pfbid02L9Qkj6Bnidy6zL7hRjvQ9MuYLQF3jSUXcGLRjjgZhxH1LysnV4DZRUgMyhLMvKxGl&id=882110175
^15 https://www.polygon.com/23767640/ai-mcu-secret-invasion-opening-credits

With the increasing barriers and job losses for creatives because of image generators, the pursuit of art could be relegated to the independently wealthy and those who can afford to develop their artistic skills while working a full-time job. This will disproportionately harm the development of artists from marginalized communities, like disabled artists and artists with dependents.

4.2 Digital Artwork Forgery

As discussed in Section 2, image generators are trained using billions of image-text pairs obtained from the Internet. Stable Diffusion V2, for instance, is trained using the publicly available LAION-5B dataset [106, 109]. Although the creators of LAION-5B have not provided a way for people to browse the dataset, various artists have reported finding their works in the training data without their consent or attribution [11]. Open AI has not shared the dataset that its image generator, DALL-E, was trained on, making it impossible to know the extent to which their training data contains copyright protected images. Using a tool^16 built by Simon Willison, which allowed people to search 0.5% of the training data for Stable Diffusion V1.1, i.e. 12 million of 2.3 billion instances from LAION 2B [109], artists like Karen Hallion^17,18 found out that their copyrighted images were used as training data without their consent [11]. And as noted in Section 3, image generators like Stable Diffusion have been shown to memorize images, outputting replicas of iconic photographs and paintings by artists [27, 92].

^16 https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images
^17 https://web.archive.org/web/20230811043246/https://twitter.com/Khallion/status/1615464905565429760
^18 https://web.archive.org/web/20230117153958/https://twitter.com/shoomlah/status/1615215285526757381

This type of digital forgery causes a number of harms to artists, many of whom are already struggling to support themselves and can only perform their artistic work while holding other "day" jobs [70]. First, as discussed in Section 4.1, using artists' works without compensation adds to the already precarious positions that the majority of professional artists are in [70, 92, 138]. In addition to the lack of compensation, using artists' works without their consent can cause them reputational damage and trauma. Users of image generated art can mimic an artist's style by finetuning models like Stable Diffusion on specific artists' images, with companies like Wombo even offering services to generate art in the style tied to specific groups of artists like Studio Ghibli [133]. A number of artists have described this practice as "invasive" and noted the manner in which it causes them reputational damage. After a Reddit user posted images generated using artist Hollie Mengert's name as a prompt, Mengert mentioned that "it felt invasive that my name was on this tool, I didn't know anything about it and wasn't asked about it."^19 She further noted her frustration with having her name associated with images that do not represent her style except at "the most surface-level."

^19 https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/

This type of invasive style mimicry can have more severe consequences if an artist's style is mimicked for nefarious purposes such as harassment, hate speech, and genocide denial. In her New York Times op-ed [8], artist Sarah Andersen writes about how, even before the advent of image generators, people edited her work "to reflect violently racist messages advocating genocide and Holocaust denial, complete with swastikas and the introduction of people getting pushed into ovens. The images proliferated online, with sites like Twitter and Reddit rarely taking them down." She adds that "Through the bombardment of my social media with these images, the alt-right created a shadow version of me, a version that advocated neo-Nazi ideology. . . I received outraged messages and had to contact my publisher to make my stance against this ultraclear." She underscores how this issue is exacerbated by the advent of image generators, writing "The notion that someone could type my name into a generator and produce an image in my style immediately disturbed me. . . I felt violated" [8]. As we discussed in Section 3, an

367
AIES ’23, August 08–10, 2023, Montréal, QC, Canada

artist’s style is their unique voice, formed through their life experiences. Echoing Hollie Mengert’s point about the invasive nature of style mimicry, Andersen adds: “The way I draw is the complex culmination of my education, the comics I devoured as a child and the many small choices that make up the sum of my life. The details are often more personal than people realise.” Thus, tools trained on artists’ works which allow users to mimic their style without their consent or compensation can cause significant reputational damage by impersonating artists and spreading messages that they do not endorse.

4.3 Hegemonic Views and Stereotyping

Beyond the appropriation of individual identities, image generators have been shown to appropriate and distort the identities of groups, encode biases, and reinforce stereotypes [87, 98, 119]. Introducing In/Visible, an exhibition exploring the intersection of AI and art, Senegalese artist Linda Dounia Rebeiz writes: “Any Black person using AI today can confidently attest that it doesn’t actually know them, that its conceptualization of their reality is a fragmentary, perhaps even violent, picture... Black people are accustomed to being unseen. When we are seen, we are accustomed to being misrepresented. Too often, we have seen our realities ignored, distorted, or fabricated. These warped realities, often political instruments of exclusion, follow us around like shadows that we can never quite shake off” [64]. In an interview, the artist gives examples of stereotypes perpetuated through image generators. For instance, she notes that the images generated by Dall-E 2 pertaining to her hometown Dakar were wildly inaccurate, depicting ruins and desert instead of a growing coastal city [114]. Similarly, US-based artist Stephanie Dinkins discusses encountering significant distortions when prompting image generators to generate images of Black women [114].

There are already cases of people producing images embodying their view of other populations. In a 2018 New Yorker article, Lauren Michelle Jackson writes about a white British photographer, Cameron-James Wilson, who created a dark skinned synthetic model which he called “Shudu Gram,” the “World’s first Digital Supermodel” [77]. The synthetic model, which he created using a free 3D modeling software called DAZ3D20, first appeared on Instagram wearing “iindzila, the neck rings associated with the Ndebele people of South Africa” [77]. Wilson licensed the image to various entities such as Balmain21 and Ellesse22, many of whom were criticized for their lack of diversity in hiring [96]. Now, without compensation to any Ndebele people, magazines like Vogue23 profit off of an idealized conception of someone from that community, imagined in the mind of a white man who is compensated for creating that image. Writer Francesca Sobande writes that this is another iteration of “the objectification of Black people, and the commodification of Blackness” [115]. Five years later, on March 6, 2023, entrepreneur Danny Postma announced the launch of a company, Deep Agency, that rents image generated synthetic models as a service24, making the type of practice described by Jackson more likely to occur at scale.

Due to these questions of who gets to use (and profit from) these tools by representing which cultures in what way, participants from Pakistan, India and Bangladesh surveyed in [98] raised “concerns about artist attribution, commodification, and the consequences of separating certain art forms from their traditional roots,” with some questioning which cultural products should be included in the training set of image generators. To expose these issues, Quadri et al. recommend further examination of the cultural harms posed by image generators, including perpetuating cultural hegemony, erasure or stereotyping [98].

4.4 Chilling Effects on Cultural Production and Consumption

The harms discussed in the prior sections have created a chilling effect among artists, who, as artist Steven Zapata notes, are already a traumatized community with many members struggling to make ends meet [137]. First, students who foresee image generators replacing artists have become demoralized and dissuaded from honing their craft and developing their style [138]. Second, both new and current artists are becoming increasingly reluctant to share their works and perspectives, in an attempt to protect themselves from the mass scraping and training of their life’s works [92, 138]. Independent artists today share their work on social media platforms and crowdfunding campaigns, and sell tutorials, tools, and resources to other artists on various sites or at art-centric trade shows25. For most artists, gaining enough visibility on any of these platforms (online or in person) is extremely competitive, taking them years to build an audience and fanbase to sell their work and eventually have the ability to support themselves26. Thus, having less visibility in an attempt to protect themselves from unethical practices by corporations profiting from their work further reduces their ability to receive compensation for their work.

Artists’ reluctance to share their work and teach others also reduces the ability of prospective artists to learn from experienced ones, limiting the creativity of humans as a whole. If we, as humanity, rely solely on AI-generated works to provide us with the media we consume, the words we read, and the art we see, we would be heading towards an ouroboros where nothing new is truly created, a stale perpetuation of the past. The authors of [18] warn of a similar feedback loop for future generations of large language models trained on the outputs of prior ones and on static data that does not reflect social change.

In his 1916 book titled Art, Clive Bell writes “The starting-point for all systems of aesthetics must be the personal experience of a peculiar emotion. The objects that provoke this emotion we call works of art” [15]. As Steven Zapata notes, we need to “protect

20 https://www.daz3d.com/
21 https://projects.balmain.com/us/balmain/balmains-new-virtual-army
22 https://hypebae.com/2019/2/ellesse-ss19-campaign-shudu-virtual-cgi-digital-influencer-model
23 https://www.vogue.com.au/fashion/trends/meet-shudu-the-digital-supermodel-who-is-changing-the-face-of-fashion-one-campaign-at-a-time/news-story/80a96d3d70043ed2629b5c0bc03701c1
24 https://www.deepagency.com/
25 https://www.muddycolors.com/2019/09/results-of-the-artist-income-goals-survey-2019/
26 https://news.artnet.com/art-world/artist-financial-stability-survey-1300895
the creative human spirit... Making art is one of the best ways to investigate one of the ways you are influenced, and the way to send how you’re influenced to other people. If we don’t curb this, this influence can come from AI, AI that can’t discern boundaries, and influence feelings. Let’s not let it happen” [137].

5 AI ART AND US COPYRIGHT LAW

Given the speed at which image generators have been adopted and their impact, countries around the world are grappling with what policies to enact in response. In particular, there is a lot of uncertainty about whether using copyrighted materials to train image generators is copyright infringement. Some governmental bodies, like the EU, will require companies to “document and make publicly available a summary of the use of training data protected under copyright law”27 [44], which could trigger copyright lawsuits if it becomes possible to identify specific instances of copyright infringement [72]. However, it is not clear what the scope of this law is and whether it requires an itemized list of what is included in the training data, or only a summary of other key information.

While a number of artists have filed class action lawsuits in the US against companies providing commercial image generation tools [129], image generators represent a dynamic between artists and large-scale companies appropriating their work that has previously not been examined in US copyright law [56]. This is due to the unprecedented scale at which artists’ works are being used to create image generators, the recent proliferation of publicly available image generators trained on that content, and the level to which the output of the image generators threatens to displace artists. Furthermore, this dynamic is distinct because of the data collection practices by which image generators are developed in the first place [67].

While some of the harms discussed in Section 4 overlap with the rights protected by US copyright law, others do not. There are also a number of unanswered legal questions when it comes to determining the ways in which copyright law applies to image generators and both the inputs and outputs that go into creating these tools. Hence, US copyright law is largely unequipped to tackle many of the types of harms posed by these systems to content creators. This lack of certainty about whether copyright applies means that the companies producing these tools can do so largely without accountability, unless they are sued for specific violations of copyright law. And waiting for court determinations on their lawsuits means that artists may not be able to get recourse until the cases are resolved. In this section, we highlight specific parts of US copyright law that may be a source of uncertainty and tension for artists and the companies using their work. We conclude that there are gaps in the law that do not take into account the social and economic harm to artists.

5.1 Authorship

Thus far, no works created by an image generator have been given copyright protection, and authorship is limited to human creators. The US Copyright Office recently affirmed this position by declining to recognize the copyrightability of works that were created by an image generator [90]. In the US, the mere effort required to create a piece of art work does not, on its own, render the resulting work protected by copyright law, meaning that the number of prompts or hours poured into the creation of an image using text-to-image generators will not on its own qualify the work as copyrightable.28 Moreover, the prompts themselves may be protectable if they are independently creative, and the resulting work may be copyrightable if the prompts were part of an active process by which the human creator exercised judgment by selecting, arranging, or designing the work29. US law also requires that the creator of the work be the source of the creativity and inventiveness of the work, and the Copyright Office noted that image generators produce images in an “unpredictable” way [90] and thus cannot be considered creative or inventive.

These dimensions of what it means to be an “author” under copyright law, as well as how the law understands the process of creativity, mean that image generators on their own cannot create copyright protected works. How artists interact with these tools would determine the legal status of the output they create. Given this uncertainty about the legal status of the image generators’ outputs, we can direct our policy attention to the inputs that go into creating the tools. There is an opportunity here to exercise more caution in the ex-ante processes of the tools’ development such that the artists whose works are used to create the tools are not harmed, which we discuss in Section 7.

5.2 Fair Use

Fair use is a doctrine in copyright law that permits the unauthorized or unlicensed use of copyrighted works, whether to make copies, to distribute, or to create derivative works. Whether something constitutes fair use is determined on a case-by-case basis, and the analysis is structured around four factors30. Most relevant for artists and generative AI systems are factors 1 and 4, which look at the purpose or character of the use and its impact on the market [14]. Part of the first factor includes the question of whether the use is commercial and “transformative”. Commercial use usually weighs against finding fair use. If the use is found to be transformative, however, it can be considered fair use even for commercial purposes, but not always31. This is in part due to the fourth factor, which examines whether a use is a threat to the market of the original creator’s work.

The question of fair use arises at two points within the image generation ecosystem. First is when the images used to train the models are copyrighted, and especially when the copyright holders are small-scale artists. These small-scale artists could have an interest in not allowing their work to be used to create synthetic images, not only because image generators could be used to produce works resembling theirs, but because of issues around consent and misuse of their works for harassment, disinformation and hate speech as described in Section 4.2. Artists may not want to participate in the creation of an infrastructure that facilitates other informational harms, even if the image generator is not creating works resembling

27 https://www.euaiact.com/
28 Bellsouth Advertising & Pub. v. Donnelley Inf. Pub., 933 F.2d 952 (11th Cir. 1991)
29 Feist Publications, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340 (1991)
30 https://www.govinfo.gov/app/details/USCODE-2011-title17/USCODE-2011-title17-chap1-sec107
31 Fox News Network, LLC v. TVEyes, Inc., No. 15-3885 (2d Cir. 2018)
theirs. In this case, their concerns wouldn’t be ones that can be addressed through fair use.

Moreover, small-scale artists may not want to pursue copyright infringement claims because of a lack of resources to participate in prolonged legal battles against powerful companies that claim such copying is fair use. This again means that copyright law is not the most effective recourse for them, and the question of fair use may not be addressed in a case involving small-scale artists. The complaint32 filed by Getty Images (a large and well-resourced copyright holder) against Stability AI is illustrative of the resources needed to assert copyright claims against companies producing image generators and the power differential that exists between small-scale copyright holders and these companies. Copyright infringement is a concern for small-scale artists, but the overall system by which image generators normalize appropriation of art at the input stage is a problem that is beyond the scope of fair use considerations.

The second point where fair use is a question is if an image generator is used to create works that are similar to a human artist’s, and as a result compete with the human artist’s market, as we describe in Section 4.1. In such instances, the fourth factor may weigh heavily against finding fair use. If the work is being used in ways that displace the artist’s market share, or prevent them from receiving appropriate attribution and compensation, there is a clear harm in place, which may be addressed through copyright law.

5.3 Derivative Works and Moral Rights

When companies design their products around “mimicking” the style of an artist, it becomes difficult to justify the company’s use as fair use [49]. In such instances, there is a clear connection between the company’s product and the intended outcome being harm to the market for the original artist’s work.

Such mimicking or use of an artist’s work and style may also be covered under moral rights in copyright law. Moral rights vest in “visual art”, such as paintings and photographs, and protect the creator’s personal and reputational interest in their work by preventing the distortion or defacement of the original work [80]. The scope of moral rights in US copyright law is narrowly constructed for various policy reasons [89], but this area of copyright law may need more attention as artists try to articulate the harms they face.

6 SHORTCOMINGS OF THE AI RESEARCH COMMUNITY

In the previous section, we provided a brief analysis of US copyright law that may be relevant to artists’ fight against the harms they face due to the proliferation of image generators. In this section, we discuss how academic researchers’ partnerships with corporations help the latter sidestep some of these laws aimed at protecting creators. In her paper titled The Steep Cost of Capture, whistleblower Meredith Whittaker writes about the level to which academic AI research has been captured by corporate interests [132]. In The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity, Mohamed and Moustafa Abdalla liken this capture to the tobacco and fossil fuel industries, noting that corporations fund academics aligned with their goals, the same way that tobacco companies funded doctors that claimed that cigarettes did not cause cancer [1]. In her article “You are not a stochastic parrot,” Liz Weil notes “The membrane between academia and industry is permeable almost everywhere; the membrane is practically nonexistent at Stanford, a school so entangled with tech that it can be hard to tell where the university ends and the businesses begin” [131]. This corporate entanglement means that the academic research agenda is increasingly being set by researchers who align themselves with powerful corporate interests [51, 52, 132].

6.1 Data Laundering

One result of this corporate-academic partnership has been data laundering [34]. Similar to money laundering, where business fronts are created to move money around while obfuscating the source of illicit funds, researchers have argued that companies use data laundering to obtain data through nonprofits, which is then used by for-profit organizations [10].

The LAION dataset used to train Stability AI’s Stable Diffusion model, which is also used in their commercial DreamStudio product, is one such example33. While LAION is a nonprofit organization, the paper announcing the LAION-5B dataset notes that Stability AI CEO Emad Mostaque “provided financial and computation support for open source datasets and models” [109]. The dataset’s associated datasheet further answers the question “Who funded the creation of this dataset” with “This work was sponsored by Hugging Face and Stability AI.” As we discussed in Section 5.2, while US copyright law is not fully equipped to resolve disputes related to image generated content, companies are more likely to be granted fair use exceptions in US copyright law if they claim that the dataset was gathered for research purposes, even if they end up using it for commercial products. According to the US Copyright Office, “Courts look at how the party claiming fair use is using the copyrighted work, and are more likely to find that nonprofit educational and noncommercial uses are fair.”34 This allows corporations like Stability AI to raise $101M in funding at a $1B valuation35, using datasets that contain artists’ works without their consent or attribution. The accountability for the dataset’s creation and maintenance, on the other hand, including copyright or privacy issues, is shifted to the nonprofit that collected it. Thus, while there is at present no legal distinction between data laundering and the normative data mining practices in the machine learning communities, this question needs more attention when the issue of fair use discussed in Section 5.2 arises in the context of image generators.

6.2 Power, ML Fairness, and AI Ethics

In The Moral Character of Cryptographic Work, cryptographer Philip Rogaway notes that the cryptographic community bears the responsibility of failing to stop the rise of surveillance [105]. One of the main reasons for this disconnect, according to him, is that cryptographers fail to take into account how power affects their analyses, and have a “politically detached posture,” writing “if power is anywhere in the picture, it is in the abstract capacities

32 https://news.bloomberglaw.com/ip-law/getty-images-sues-stability-ai-over-art-generator-ip-violations
33 https://stability.ai/blog/stablediffusion2-1-release7-dec-2022
34 https://www.copyright.gov/fair-use/
35 https://techcrunch.com/2022/10/17/stability-ai-the-startup-behind-stable-diffusion-raises-101m/
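The data laundering concern above is concrete at the level of individual records: scraped image-text datasets pair an image URL with a caption, and captions frequently name the artist. As a rough, hypothetical sketch of how inclusion could be probed — the "url"/"caption" field names are illustrative stand-ins, not the actual LAION-5B schema, and a real audit would query billions of rows rather than an in-memory list — checking whether an artist is named in a dataset's captions takes only a few lines:

```python
# Illustrative sketch: probing an image-text dataset for mentions of an
# artist. Record fields are hypothetical, not the real LAION-5B columns.

def find_artist_mentions(records, artist_name):
    """Return URLs of records whose caption mentions the artist's name."""
    needle = artist_name.lower()
    return [r["url"] for r in records if needle in r["caption"].lower()]

sample = [
    {"url": "https://img.example/1.jpg", "caption": "Oil landscape by Jane Doe"},
    {"url": "https://img.example/2.jpg", "caption": "a photo of a dog"},
]
print(find_artist_mentions(sample, "Jane Doe"))  # → ['https://img.example/1.jpg']
```

Community tools in this spirit already exist (e.g., the laion-aesthetic browser linked in footnote 16); the point of the sketch is only that caption-level probing is technically trivial once the data is disclosed, which is why the disclosure mandates discussed in Section 7 matter.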
of notional adversaries or, in a different branch of our field, the power expenditure, measured in watts, for some hardware.” With a few exceptions, the machine learning fairness and AI ethics communities have similarly failed to stop the harms caused by image generators proliferated by powerful entities, due to their disproportionate focus on abstract concepts like defining fairness metrics [84, 110, 136], rather than preventing harm to various communities. We urge the machine learning and AI ethics research communities to orient their focus towards preventing and mitigating harms caused to marginalized communities, in order to prevent further casualties, of which the art community is only one.

7 RECOMMENDATIONS TO PROTECT ARTISTS

To fight back against the harms they have already experienced, artists have filed class action lawsuits in the US against Midjourney, Stability AI and DeviantArt [129], organized protests, boycotted online services like ArtStation that allowed image generated content on their platforms36, and continue to raise awareness about the impact of image generators on their communities [92, 137, 138]. However, as discussed in Section 5, the US courts can take years to issue a decision, during which more artists would be harmed, and current US copyright law is ill equipped to protect artists. Because of this, artists themselves have suggested a number of regulations to protect them.

A letter to members of The Costume Designers Guild, Local 892, a union of professional costume designers, assistant costume designers, and illustrators working in film, television, commercials and other media37, suggests legislation to allow “using AI derived imagery strictly for reference purposes and making it unacceptable to hand over a fully AI generated work as a finished concept” [126]. Visual artists who paint in a more representational style usually work from photo reference or build sculptures to understand how lighting works, for example, using stock/licensed photography and assets, or the artist’s own work38. This would allow artists to use image generators to provide inspiration in the way that nature, for example, is a source of inspiration to many artists. The art collective Arte es Ética suggests having a metric to quantify the amount of human interaction with an image generator to determine whether or not a generated image is copyrightable, with a 25% or less interaction level being uncopyrightable [41].

While these proposals may address the issue of economic loss, they do not stop the use of artists’ work for training image generators without their consent or compensation. Additional regulations proposed by Arte es Ética address this issue by recommending legislation that requires the explicit consent of content creators before their material is used for generative AI models [41]. In order to do this, they suggest having detection and filtering algorithms to ensure that uploaded content belongs to creators who have consented to their work being licensed or opted in for use as training data. Similar to [18]’s recommendation that synthetic texts generated by LLMs be “watermarked and thus detectable,” Arte es Ética suggests that each image carry “a digital signature” in its metadata, which is disclosed along with the generated image. Regulation that mandates that organizations disclose their training data, at the very least to specific bodies that can verify that people’s images were not used without their consent, is needed in order to enforce the opt-in requirements artists are demanding. Such a mandate will likely exist outside of conventional copyright requirements. However, algorithmic accountability regimes and recently proposed laws like the Algorithmic Accountability Act of 2022 in the US [111], or the transparency requirements of the EU’s AI Act that would require datasheets [54] or similar data documentation [44], may be preliminarily useful in instituting disclosure requirements for companies.

However, most of these existing measures require individuals to prove harm, rather than placing the onus on organizations to show lack of harm before proliferating their products. There need to be pathways toward better accountability of the entities and stakeholders that create the image generators in the first place, rather than placing additional burdens on artists to prove that they have been harmed. While auditing, reporting, and transparency are well-known possible proposals for regulating AI in general [17, 22, 54, 83, 100], formulating sector and industry specific proposals is essential when it comes to effective governance [100], and is what will be needed for image generators and art.

Regulation, even if successfully passed, takes a long time to be enforced, however, and is by its very nature reactive. As artist Steven Zapata asks: “What are we going to do... to prevent this recurring over and over again” [137]? This is a fundamental question that requires us to understand why we are in a position where prominent machine learning researchers have used their skills to disenfranchise artists. One answer is the corporate capture of AI research that we discussed in Section 6. To combat this capture, computer scientist Timnit Gebru suggests having government research funding that is not tied to the military, in order to have “alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them” [51].

A few researchers in machine learning have come to the defense of artists, but they are much smaller in number than those working on image generators without attempting to mitigate their harms. For instance, University of Chicago student Shawn Shan and his collaborators, advised by security professor Ben Y. Zhao, created a tool called Glaze that allows artists to add perturbations to their images which would prevent diffusion model based generators from being used to mimic their styles [112]. The researchers collaborated with 1,000 artists, going to town halls and creating surveys to understand their concerns. While building Glaze, Shan et al. measured their success by how well the tool addressed the artists’ concerns. This is an example of research that is conducted in service of specific groups, using a process that identifies stakeholders and values that should be incorporated in the work, rather than the current trend of claiming to build models with “general” capabilities that do not perform specific tasks in well defined domains [53, 101]. We echo [18]’s recommendations to use methodologies like value sensitive design and design justice [33, 48] to identify stakeholders and their values, and to work on systems that meaningfully incorporate them. These processes encourage researchers and practitioners to consult with visual artists and build tools that make their lives

36 https://www.theverge.com/2022/12/23/23523864/artstation-removing-anti-ai-protest-artwork-censorship
37 https://www.costumedesignersguild.com/
38 https://cynthia-sheppard.squarespace.com/#/burn-out/
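The “digital signature” disclosure that Arte es Ética proposes could be prototyped in a few lines. The sketch below is offered only as an illustration of the idea: it emits a provenance record for a generated image, using an HMAC over the image bytes as a stand-in for a real signature scheme. The field names and key handling are our assumptions, not an existing provenance standard.

```python
# Illustrative sketch only: a provenance record for a generated image.
# An HMAC stands in for a real public-key signature; field names are
# hypothetical, not an existing provenance standard.
import hashlib
import hmac
import json

def provenance_record(image_bytes: bytes, generator: str, key: bytes) -> str:
    """Produce a JSON disclosure bound to these exact image bytes."""
    tag = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    return json.dumps({"generator": generator, "content-hmac": tag})

def verify(record: str, image_bytes: bytes, key: bytes) -> bool:
    """Check that a disclosure matches the image it accompanies."""
    expected = hmac.new(key, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(json.loads(record)["content-hmac"], expected)

rec = provenance_record(b"\x89PNG fake image bytes", "example-generator", b"provider-key")
print(verify(rec, b"\x89PNG fake image bytes", b"provider-key"))  # → True
```

A deployed scheme would need asymmetric signatures, so that anyone — not only the key holder — could verify a disclosure, and would embed the record in the image’s metadata as the proposal envisions rather than alongside it.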
easier, rather than claiming to create tools that “democratize art” without consulting them, as a number of artists have noted [40, 60].

In summary, we advocate for regulation that prevents organizations from using people’s content to train image generators without their consent, for funding for AI research that is not entangled with corporate interests, and for task-specific work in well defined domains that serves specific communities. It is much easier to accomplish these goals if machine learning researchers are trained in a manner that helps them understand how technology interacts with power, rather than the “view from nowhere” stance that has been critiqued by feminist scholars, which teaches scientists and engineers that their work is neutral [50, 105]. We thus advocate for a computer science education system that stresses the manner in which power interacts with technology [19, 105].

8 CONCLUSION

In this paper, we have reviewed the chilling impact of image generators on the art community, ranging from economic loss to reputational damage and stereotyping. We summarized recommendations to protect artists, including new regulation that prohibits training image generators on artists’ works without opt-in consent, and specific tools that help artists protect against style mimicry. Our work is rooted in our argument that art is a uniquely human endeavor, and we question who its further commodification will benefit. As artist Steven Zapata asks, “How can we get clear on the things we do not want to forfeit to automation?” [137]

Image generators can still be a medium of artistic expression when their training data is not created from artists’ unpaid labor, their proliferation is not meant to supplant humans, and the speed of content creation is not what is prioritized. One such example is the work of artist Anna Ridler, who created a piece called Mosaic Virus in 201939, generating her own training data by taking photos of 10,000 tulips, which itself is a work of art she titled Myriad (Tulips). She then trained a GAN based image generator with this data, creating a video where the appearance of a tulip is

REFERENCES
[1] Mohamed Abdalla and Moustafa Abdalla. 2021. The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21). Association for Computing Machinery, New York, NY, USA, 287–297. https://doi.org/10.1145/3461702.3462563
[2] Adobe. 2022. Generative AI Content. Retrieved March 13, 2023 from https://helpx.adobe.com/ca/stock/contributor/help/generative-ai-content.html
[3] Adobe. 2023. Generative AI for creatives - Adobe Firefly. Retrieved July 6, 2023 from https://www.adobe.com/sensei/generative-ai/firefly.html
[4] Stability AI. 2022. Stable Diffusion Launch Announcement. Retrieved March 13, 2023 from https://stability.ai/blog/stable-diffusion-announcement
[5] Stability AI. 2022. Stable Diffusion Public Release. https://stability.ai/blog/stable-diffusion-public-release/
[6] Thomas M Alexander. 2013. The Human Eros: Eco-ontology and the Aesthetics of Existence. Fordham University Press.
[7] Abdullah Alfaraj. 2023. Auto-Photoshop-StableDiffusion-Plugin. Retrieved March 13, 2023 from https://github.com/AbdullahAlfaraj/Auto-Photoshop-StableDiffusion-Plugin
[8] Sarah Andersen. 2022. The Alt-Right Manipulated My Comic. Then A.I. Claimed It. New York Times (Dec. 31, 2022). https://www.nytimes.com/2022/12/31/opinion/sarah-andersen-how-algorithim-took-my-work.html
[9] Artbreeder. 2022. artbreeder. Retrieved March 13, 2023 from https://www.artbreeder.com/
[10] Andy Baio. 2021. AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability. Waxy (2021).
[11] Andy Baio. 2022. Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion’s Image Generator. Retrieved July 6, 2023 from https://waxy.org/2022/08/exploring-12-million-of-the-images-used-to-train-stable-diffusions-image-generator/
[12] Ollie Barder. 2017. Katsuhiro Otomo On Creating ’Akira’ And Designing The Coolest Bike In All Of Manga And Anime. Forbes (2017).
[13] Alexis T Baria and Keith Cross. 2021. The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor. arXiv preprint arXiv:2107.14042 (2021).
[14] Barton Beebe. 2008. An Empirical Study of U.S. Copyright Fair Use Opinions, 1978-2005. University of Pennsylvania Law Review 156, 3 (2008). https://scholarship.law.upenn.edu/penn_law_review/vol156/iss3/1/
[15] Clive Bell. 1916. Art. Chatto & Windus.
[16] Emily M Bender. 2022. Resisting Dehumanization in the Age of “AI”. (2022).
[17] Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics 6 (2018), 587–604.
[18] Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922
[19] Ruha Benjamin. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.
[20] blueturtleai. 2023. gimp-stable-diffusion. Retrieved March 13, 2023 from https://github.com/blueturtleai/gimp-stable-diffusion
controlled by the price of bitcoin, “becoming more striped as the [21] Ruby Boddington. 2019. Anna Ridler uses AI to turn 10,000 tulips into a video
price of bitcoin goes up—it was these same coveted stripes that controlled by bitcoin. It’s Nice That (2019).
once triggered tulip mania...a 17th-century phenomenon which [22] Karen L Boyd. 2021. Datasheets for datasets help ml engineers notice and
understand ethical issues in training data. Proceedings of the ACM on Human-
saw the price of tulip bulbs rise and crash...It is often held up as Computer Interaction 5, CSCW2 (2021), 1–27.
one of the first recorded instances of a speculative bubble" [21]. If [23] Jo Ann Boydston. 2008. The Middle Works of John Dewey, Volume 9, 1899-1924:
we orient the goal of image generation tools to enhance human Democracy and Education 1916. Southern Illinois University Press.
[24] Andrew Brock, Jeff Donahue, and Karen Simonyan. 2018. Large Scale GAN
creativity rather than attempt to supplant it, we can have works of Training for High Fidelity Natural Image Synthesis. https://doi.org/10.48550/A
art like those of Anna Ridler that explore its use as a new medium, RXIV.1809.11096
[25] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan,
and not those that appropriate artists’ work without their consent Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda
or compensation. Askell, et al. 2020. Language models are few-shot learners. Advances in neural
information processing systems 33 (2020), 1877–1901.
[26] Canva. 2022. Text to Image - Canva Apps. Retrieved March 13, 2023 from
ACKNOWLEDGMENTS https://www.canva.com/apps/text-to-image
Thank you to Emily Denton, Énora Mercier, Jessie Lam, Karla Ortiz, [27] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag,
Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. 2023. Extracting
Neil Turkewitz, Steven Zapata, and the anonymous peer reviewers Training Data from Diffusion Models. https://doi.org/10.48550/ARXIV.2301.13
for giving us invaluable feedback. 188
[28] Electronic Privacy Information Center. 2023. Generating Harms. Generative
AI’s Impact & Paths Forward. Retrieved July 6, 2023 from https://epic.org/doc
REFERENCES uments/generating-harms-generative-ais-impact-paths-forward/
[1] Mohamed Abdalla and Moustafa Abdalla. 2021. The Grey Hoodie Project: Big [29] Chen Chen, Jie Fu, and Lingjuan Lyu. 2023. A Pathway Towards Responsible
Tobacco, Big Tech, and the Threat on Academic Integrity. In Proceedings of AI Generated Content. https://doi.org/10.48550/ARXIV.2303.01325
the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) [30] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan,
and Ilya Sutskever. 2020. Generative Pretraining From Pixels. In Proceedings of
the 37th International Conference on Machine Learning (Proceedings of Machine
39 http://annaridler.com/mosaic-virus

372
AI Art and its Impact on Artists AIES ’23, August 08–10, 2023, Montréal, QC, Canada

Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR, 1691–1703. https://proceedings.mlr.press/v119/chen20s.html
[31] Neil Clarke. 2023. A Concerning Trend. (2023).
[32] Samantha Cole. 2023. Netflix Made an Anime Using AI Due to a ’Labor Shortage,’ and Fans Are Pissed. Vice (2023).
[33] Sasha Costanza-Chock. 2020. Design justice: Community-led practices to build the worlds we need. The MIT Press.
[34] Subhasish Dasgupta. 2005. Encyclopedia of virtual communities and technologies. IGI Global.
[35] Li Deng. 2012. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine 29, 6 (2012), 141–142.
[36] John Dewey. 1934. Art as Experience. Penguin Group (USA) Inc.
[37] Prafulla Dhariwal and Alex Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. https://doi.org/10.48550/ARXIV.2105.05233
[38] Alexei A. Efros and William T. Freeman. 2001. Image Quilting for Texture Synthesis and Transfer. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’01). Association for Computing Machinery, New York, NY, USA, 341–346. https://doi.org/10.1145/383259.383296
[39] Ziv Epstein, Sydney Levine, David G Rand, and Iyad Rahwan. 2020. Who gets credit for AI-generated art? iScience 23, 9 (2020).
[40] Arte es Ética. 2023. Arte es Ética presente en la ACM FAccT Conference 2023. Retrieved July 6, 2023 from https://arteesetica.org/arte-es-etica-presente-en-la-acm-facct-conference-2023/
[41] Arte es Ética. 2023. Un manifiesto para comprender y regular la I.A. generativa. (April 2023). https://arteesetica.org
[42] Patrick Esser, Robin Rombach, and Björn Ommer. 2020. Taming Transformers for High-Resolution Image Synthesis. https://doi.org/10.48550/ARXIV.2012.09841
[43] Alberto Cerdá Estarelles. 2023. ue-stable-diffusion. Retrieved March 13, 2023 from https://github.com/albertotrunk/ue-stable-diffusion
[44] European Commission. 2021. The Artificial Intelligence Act. (2021).
[45] Everimaging. 2022. Fotor. Retrieved March 13, 2023 from https://www.fotor.com/features/ai-image-generator/
[46] Creative Fabrica. 2022. CF Spark AI Art Generator. Retrieved March 13, 2023 from https://www.creativefabrica.com/spark/ai-image-generator/
[47] Mureji Fatunde and Crystal Tse. 2022. Stability AI Raises Seed Round at $1 Billion Value. Bloomberg (Oct. 17, 2022). https://www.bloomberg.com/news/articles/2022-10-17/digital-media-firm-stability-ai-raises-funds-at-1-billion-value
[48] Batya Friedman. 1996. Value-sensitive design. Interactions 3, 6 (1996), 16–23.
[49] Thania Garcia. 2023. David Guetta Replicated Eminem’s Voice in a Song Using Artificial Intelligence. Variety (Feb. 8, 2023). https://variety.com/2023/music/news/david-guetta-eminem-artificial-intelligence-1235516924/
[50] Timnit Gebru. 2019. Chapter on race and gender. Oxford Handbook on AI Ethics. Available from: http://arxiv.org/abs/1908.06165 [Accessed 28th April 2020] (2019).
[51] Timnit Gebru. 2021. For truly ethical AI, its research must be independent from big tech. The Guardian (2021).
[52] Timnit Gebru. 2022. Effective Altruism is Pushing a Dangerous Brand of ‘AI Safety’. Wired (2022).
[53] Timnit Gebru. 2023. Eugenics and The Promise of Utopia through Artificial General Intelligence.
[54] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM 64, 12 (2021), 86–92.
[55] Deep Dream Generator. 2022. Deep Dream Generator. Retrieved March 13, 2023 from https://deepdreamgenerator.com/
[56] Jessica Gillotte. 2020. Copyright Infringement in AI-Generated Artworks. UC Davis Law Review 53, 5 (2020). https://ssrn.com/abstract=3657423
[57] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press. http://www.deeplearningbook.org
[58] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative Adversarial Networks. https://doi.org/10.48550/ARXIV.1406.2661
[59] Abhishek Gupta. 2022. Unstable Diffusion: Ethical challenges and some ways forward. Retrieved March 13, 2023 from https://montrealethics.ai/unstable-diffusion-ethical-challenges-and-some-ways-forward/
[60] The Distributed AI Research Institute. 2023. Stochastic Parrots Day: What’s Next? A Call to Action. Retrieved July 6, 2023 from https://peertube.dair-institute.org/w/a2KCCzj2nyfrq5FvJbL9jY
[61] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2016. Image-to-Image Translation with Conditional Adversarial Networks. https://doi.org/10.48550/ARXIV.1611.07004
[62] Philip J Ivanhoe. 2000. Confucian moral self cultivation. Hackett Publishing.
[63] Nina Kalitina and Nathalia Brodskaya. 2012. Claude Monet. Parkstone International.
[64] KALTBLUT. 2023. Feral File presents In/Visible: Where AI Meets Artistic Diversity, A Fascinating Encounter of Limitations and Storytelling. Retrieved July 6, 2023 from https://www.kaltblut-magazine.com/feral-file-presents-in-visible-where-ai-meets-artistic-diversity-a-fascinating-encounter-of-limitations-and-storytelling/
[65] Tero Karras, Samuli Laine, and Timo Aila. 2018. A Style-Based Generator Architecture for Generative Adversarial Networks. https://doi.org/10.48550/ARXIV.1812.04948
[66] Kevin Kelly. 2022. Picture Limitless Creativity at Your Fingertips. Wired (2022).
[67] Mehtab Khan and Alex Hanna. 2022. The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability. (Sept. 13, 2022). https://doi.org/10.2139/ssrn.4217148
[68] Diederik P Kingma and Max Welling. 2013. Auto-Encoding Variational Bayes. https://doi.org/10.48550/ARXIV.1312.6114
[69] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems, F. Pereira, C.J. Burges, L. Bottou, and K.Q. Weinberger (Eds.), Vol. 25. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
[70] Jennifer C Lena and Danielle J Lindemann. 2014. Who is an artist? New data for an old question. Poetics 43 (2014), 70–85.
[71] Craiyon LLC. 2022. Craiyon. Retrieved March 13, 2023 from https://www.craiyon.com/
[72] Freshfields Bruckhaus Deringer LLP. 2023. Has copyright caught up with the AI Act? Retrieved July 6, 2023 from https://www.lexology.com/library/detail.aspx?g=d9820844-8983-4aec-88d7-66e385627b4a#:~:text=Among%20the%20amendments%20to%20the,used%20for%20training%20those%20models
[73] MyHeritage Ltd. 2022. AI Time Machine. Retrieved March 13, 2023 from https://www.myheritage.com/ai-time-machine
[74] Michele Marra. 1995. Japanese Aesthetics: The construction of meaning. Philosophy East and West (1995), 367–386.
[75] Michael F Marra. 1999. Modern Japanese aesthetics: a reader. University of Hawaii Press.
[76] Rick Merritt. 2023. Getty Images bans AI-generated content over fears of legal challenges. NVIDIA Blog (March 21, 2023). https://blogs.nvidia.com/blog/2023/03/21/generative-ai-getty-images/
[77] Lauren Michele Jackson. 2018. Shudu Gram Is a White Man’s Digital Projection of Real-Life Black Womanhood. New Yorker (2018).
[78] Midjourney. 2022. Midjourney. Retrieved March 13, 2023 from https://www.midjourney.com/home/
[79] Midjourney. 2023. Version. Retrieved July 7, 2023 from https://docs.midjourney.com/docs/model-versions
[80] Martin Miernicki and Irene Ng (Huang Ying). 2021. Artificial intelligence and moral rights. AI & SOCIETY 36, 1 (March 1, 2021), 319–329. https://doi.org/10.1007/s00146-020-01027-6
[81] Zosha Millman. 2023. Yes, Secret Invasion’s opening credits scene is AI-made — here’s why. Polygon (June 22, 2023). https://www.polygon.com/23767640/ai-mcu-secret-invasion-opening-credits
[82] Mehdi Mirza and Simon Osindero. 2014. Conditional Generative Adversarial Nets. https://doi.org/10.48550/ARXIV.1411.1784
[83] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 220–229.
[84] Arvind Narayanan. 2018. 21 fairness definitions and their politics. Tutorial presented at the Conference on Fairness, Accountability, and Transparency.
[85] Andrew Ng and Michael Jordan. 2001. On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems, T. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Vol. 14. MIT Press. https://proceedings.neurips.cc/paper/2001/file/7b7a53e239400a13bd6be6c91c4f6c4e-Paper.pdf
[86] Alex Nichol and Prafulla Dhariwal. 2021. Improved Denoising Diffusion Probabilistic Models. https://doi.org/10.48550/ARXIV.2102.09672
[87] Leonardo Nicoletti and Dina Bass. 2023. Humans Are Biased. Generative AI Is Even Worse. Bloomberg (2023). https://www.bloomberg.com/graphics/2023-generative-ai-bias/
[88] Nkiru Nzegwu. 2005. Art and community: A social conception of beauty and individuality. A Companion to African Philosophy (2005), 415–424.
[89] United States Copyright Office. 2019. Authors, Attribution, and Integrity: Examining Moral Rights in the United States. https://www.copyright.gov/policy/moralrights/full-report.pdf
[90] United States Copyright Office. 2023. Re: Zarya of the Dawn (Registration # VAu001480196). https://www.copyright.gov/docs/zarya-of-the-dawn.pdf
[91] OpenAI. 2022. DALL-E 2. Retrieved March 13, 2023 from https://openai.com/product/dall-e-2
[92] Karla Ortiz. 2022. Why AI Models are not inspired like humans. (Dec. 10, 2022). https://www.kortizblog.com/blog/why-ai-models-are-not-inspired-like-humans
[93] Stephen Owen. 2020. Readings in Chinese literary thought. Vol. 30. BRILL.
[94] Panabee, LLC. 2022. Hotpot. Retrieved March 13, 2023 from https://hotpot.ai/
[95] Javier Portilla and Eero P. Simoncelli. 2000. A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients. International Journal of
Computer Vision 40, 1 (Oct. 1, 2000), 49–70. https://doi.org/10.1023/A:1026553619983
[96] Anshuman Prasad, Pushkala Prasad, and Raza Mir. 2011. ‘One mirror in another’: Managing diversity and the discourse of fashion. Human Relations 64, 5 (2011), 703–724.
[97] Prisma Labs, Inc. 2022. Lensa. Retrieved March 13, 2023 from https://prisma-ai.com/lensa
[98] Rida Qadri, Renee Shelby, Cynthia L. Bennett, and Emily Denton. 2023. AI’s Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia. In 2023 ACM Conference on Fairness, Accountability, and Transparency. ACM. https://doi.org/10.1145/3593013.3594016
[99] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. https://doi.org/10.48550/ARXIV.2103.00020
[100] Inioluwa Deborah Raji. 2022. From Algorithmic Audits to Actual Accountability: Overcoming Practical Roadblocks on the Path to Meaningful Audit Interventions for AI Governance. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society. 5–5.
[101] Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. 2021. AI and the everything in the whole wide world benchmark. arXiv preprint arXiv:2111.15366 (2021).
[102] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. https://doi.org/10.48550/ARXIV.2204.06125
[103] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-Shot Text-to-Image Generation. https://doi.org/10.48550/ARXIV.2102.12092
[104] Ali Razavi, Aaron van den Oord, and Oriol Vinyals. 2019. Generating Diverse High-Fidelity Images with VQ-VAE-2. https://doi.org/10.48550/ARXIV.1906.00446
[105] Phillip Rogaway. 2015. The moral character of cryptographic work. Cryptology ePrint Archive (2015).
[106] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2021. High-Resolution Image Synthesis with Latent Diffusion Models. https://doi.org/10.48550/ARXIV.2112.10752
[107] Runway AI, Inc. 2022. Runway. Retrieved March 13, 2023 from https://runwayml.com/
[108] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. https://doi.org/10.48550/ARXIV.2205.11487
[109] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, and Jenia Jitsev. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. https://doi.org/10.48550/ARXIV.2210.08402
[110] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59–68.
[111] Sen. Ron Wyden. 2022. The Algorithmic Accountability Act of 2022. (2022).
[112] Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y Zhao. 2023. GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models. arXiv preprint arXiv:2302.04222 (2023).
[113] Shutterstock. 2022. Shutterstock. Retrieved March 13, 2023 from https://www.shutterstock.com/
[114] Zachary Small. 2023. Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History. New York Times (July 4, 2023). https://www.nytimes.com/2023/07/04/arts/design/black-artists-bias-ai.html
[115] Francesca Sobande. 2021. CGI influencers are here. Who’s profiting from them should give you pause. Fast Company (2021).
[116] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep Unsupervised Learning using Nonequilibrium Thermodynamics. https://doi.org/10.48550/ARXIV.1503.03585
[117] Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2022. Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models. https://doi.org/10.48550/ARXIV.2212.03860
[118] Jiaming Song, Chenlin Meng, and Stefano Ermon. 2020. Denoising Diffusion Implicit Models. https://doi.org/10.48550/ARXIV.2010.02502
[119] Ramya Srinivasan and Kanji Uchino. 2021. Biases in generative art: A causal look from the lens of art history. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 41–51.
[120] A. Srivastava, A.B. Lee, E.P. Simoncelli, and S.-C. Zhu. 2003. On Advances in Statistical Modeling of Natural Images. Journal of Mathematical Imaging and Vision 18, 1, 17–33. https://doi.org/10.1023/A:1021889010444
[121] Stablecog, Inc. 2022. Stablecog. Retrieved March 13, 2023 from https://stablecog.com/
[122] starryai. 2022. starryai. Retrieved March 13, 2023 from https://starryai.com/
[123] NightCafe Studio. 2022. AI Art Generator. Retrieved March 13, 2023 from https://nightcafe.studio/
[124] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. 2017. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. https://doi.org/10.48550/ARXIV.1707.02968
[125] Kentaro Takeda, Akira Oikawa, and Yukiko Une. 2023. Generative AI mania brings billions of dollars to developers. Nikkei Asia (March 4, 2023). https://asia.nikkei.com/Spotlight/Datawatch/Generative-AI-mania-brings-billions-of-dollars-to-developers
[126] The Costume Designers Guild, Local 892. 2022. A Letter to Membership. (2022).
[127] Nitasha Tiku. 2022. AI can now create any image in seconds, bringing wonder and danger. Washington Post (Sept. 28, 2022). https://www.washingtonpost.com/technology/interactive/2022/artificial-intelligence-images-dall-e/
[128] Unite.AI. 2022. Images.AI. Retrieved March 13, 2023 from https://images.ai/
[129] James Vincent. 2022. AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit. The Verge (2022).
[130] James Vincent. 2022. Getty Images bans AI-generated content over fears of legal challenges. The Verge (Sept. 21, 2022). https://www.theverge.com/2022/9/21/23364696/getty-images-ai-ban-generated-artwork-illustration-copyright
[131] Liz Weil. 2023. You are not a stochastic parrot. New York Magazine (2023).
[132] Meredith Whittaker. 2021. The steep cost of capture. Interactions 28, 6 (2021), 50–55.
[133] WOMBO. 2022. Dream by WOMBO. Retrieved March 13, 2023 from https://dream.ai/
[134] Writesonic. 2022. Photosonic. Retrieved March 13, 2023 from https://photosonic.writesonic.com/
[135] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2015. Attribute2Image: Conditional Image Generation from Visual Attributes. https://doi.org/10.48550/ARXIV.1512.00570
[136] Meg Young, Michael Katell, and PM Krafft. 2022. Confronting Power and Corporate Capture at the FAccT Conference. In 2022 ACM Conference on Fairness, Accountability, and Transparency. 1375–1386.
[137] Steven Zapata. 2023. AI/ML Media Advocacy Summit Keynote: Steven Zapata. https://www.youtube.com/clip/UgkxNwkkN9SnPpSJlnbkDK4q-NgxnaAyPHLL Presented through the Concept Art Association YouTube Channel.
[138] Steven Zapata. 2023. Who Makes the Art in Utopia? https://www.youtube.com/clip/UgkxNwkkN9SnPpSJlnbkDK4q-NgxnaAyPHLL Presented at TEDxBerkeley, Berkeley, CA.
[139] Viola Zhou. 2023. AI is already taking video game illustrators’ jobs in China. Rest of World (April 11, 2023). https://restofworld.org/2023/ai-image-china-video-game-layoffs/
