
The 2023 Art-ificial Dark Age: Four Perspectives

“I used to paint, to celebrate truth and encourage people to seek truth themselves.”

Steve Henderson, a landscape painter based in Central Washington, dedicated nearly seven decades of his life to capturing the essence of light, life, and joy in art.

However, he finds that the profession he once loved has changed.

“Now, my paintings are scraped off the Internet without permission and used to generate personal income.”

In May of 2023, Henderson woke up to variations of his art everywhere on the Internet.

Since Stability AI’s release of Stable Diffusion, netizens have repurposed the model to produce images in the style of any artist. All that’s needed is a collection of a hundred or so images.
A portrait of the artist Steve Henderson.

Thus, Henderson, who showcased more than 3,000 paintings on his site, was an obvious first victim.

Nameless Internet-dwellers scraped hundreds of his paintings posted online to train the AI to regurgitate
images in his signature style: landscapes, seascapes, romantic, and figurative work.

“I couldn’t believe my eyes that this is happening, and it is happening to me,” Henderson stated.

Henderson is just one of almost 300 million artists whose work was, in short, taken without consent and fed into machine learning datasets.

In fact, those impressionistic art pieces you’ve seen across the Internet? There is a fifty percent chance they originated from an AI algorithm trained on Henderson’s art.

In the August 2023 Stable Diffusion dataset analysis conducted by British programmer Simon Willison, it was found that 600 million artistic pieces were fed into models and commercially distributed without consent.

For instance, type in “fairy with green dress and glowing pink wings Steve Henderson,” and presto
Magico!—the system generates multiple images encapsulating Henderson’s hand, skill, and art.

The images that StableDiffusion generated when typing in a prompt involving the keywords “Steve Henderson.”

These open-source programs, producing innumerable images in Henderson’s style, are constructed by collecting pieces from the Internet without permission or a thread of attribution to the creators.

And artists such as Henderson have had enough.

“The issue to me isn’t just style,” Henderson says, “It’s taking the art of a living artist, putting it into
databases, and reselling it. In a few years, my genuine work will be lost in a flood of AI. That’s what’s
concerning.”

Further concerning is the opacity of these AI art databases: OpenAI claimed to train DALL-E 2 on hundreds of millions of images, yet refused to release a single thread of proprietary data. Similarly, Stable Diffusion’s team has blocked any sort of opt-out choice for artists to remove their works from the Stable Diffusion database, and has notoriously ignored appeals.

A portrait of the artist Karla Ortiz.

Artists of this era have thus chosen to take action. Ever since Karla Ortiz, an illustrator based in San Francisco, found her work in Stable Diffusion’s data set, she has been raising public awareness of AI art and commercial copyright issues.
“It’s not just artists,” Ortiz maintained. “It’s photographers, models, actors and actresses, directors,
cinematographers. Any sort of visual professional is having to deal with this particular issue right now.”

The issue goes beyond a loss of income for these artists, however. It violates the bond between creators’ personal identities and their art, Ortiz argues.

“It’s been just a month. What about in a year?” stresses Polish artist Greg Rutkowski, whose name has been used in AI prompts 93,000 times. “I probably won’t be able to find my work out there because the Internet will be flooded with AI art.”

In this Artificial Dark Age, where silenced artists, denied credit, and losses of $15,000 in artists’ wages are the norm, data protection and privacy issues add yet another vast realm of controversy.

The Expert Perspective

Holly Willis, Ph.D., is a Professor at the University of Southern California’s School of Cinematic Arts, affiliated with the divisions of Media and AI. She currently works as the co-director of the USC Center for AI, specializing in the realm of AI and Media.

A portrait of Professor Holly Willis, PhD.

Willis has been working alongside AI for the entirety of the USC Generative Center for AI’s existence, which was formed with 10 million dollars in seed money and the input of innumerable experts from across media, computer science, and film.
However, when asked about the legality of AI distribution, she answered, “They definitely don’t follow
ethical, human copyright protocols. This is a big issue in the industry right now. While AI is currently
perfectly legal in realms like cinema, moral misuse is another issue. Likely, what will happen is copyright
laws being altered to adhere to these new uses of art.”

Willis elaborated: “It’s the opposite of AI’s intent, knowing users of, say, Stable Diffusion, are misusing these tools. The well-being of artists is most crucial to these services, which could make worlds of difference in cinema.”

However, the issue of the six hundred million copyrighted images still in Stable Diffusion’s database remained pervasive.

Stability AI found itself in the crosshairs of numerous legal cases, consisting of allegations that the company infringed on the intellectual property rights of countless artists through its tools trained on copyrighted, web-scraped images. Furthermore, even stock suppliers, namely Getty Images, have taken the issue to court over Stability AI’s use of their images without permission or compensation.

Thus, a burning question arises: what marks the limit of this copyright issue?

The Copyright Issue

“I won, and I didn’t break any rules,” says Jason Allen, who won a monetary prize with an AI-generated artwork.

What are these rules exactly?

California intellectual property attorney Lawrence Townsend answered.

A portrait of intellectual property lawyer Lawrence Townsend.


“The class action lawsuit is tricky. It seems obvious to claim copyright, knowing copyrighted pictures
were commercially used, but the complaint encapsulates multiple issues,” Townsend stated.

“First, diffusion requires images to be copied and remade as the model is tested. This is unlicensed use of
protected works, and we can see that image generators essentially call back to the dataset and mash
together millions of bits of millions of images to create a requested, named image. The artists’ argument
is that this resulting product is a derivative work, that is, a work not significantly transformed from the
source, and a key standard in ‘fair use.’”
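Townsend’s description of images being “copied and remade” corresponds loosely to how diffusion models are trained: each training image is noised and the model learns to reverse the process. A minimal sketch of that forward-and-reverse step, with a toy array standing in for a scraped image and a made-up schedule value (purely illustrative, not the article’s or Stability AI’s actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.random((8, 8))            # toy stand-in for a scraped training image
alpha_bar = 0.3                    # cumulative noise-schedule value at some step t
eps = rng.standard_normal((8, 8))  # Gaussian noise

# Forward process q(x_t | x_0): the noised copy the model sees during training
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# A perfectly trained model would predict eps; given that prediction,
# the original training image is recoverable from the noised copy.
x0_hat = (x_t - np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha_bar)
assert np.allclose(x0_hat, x0)
```

The point of the sketch is the legal crux Townsend raises: training requires making and transforming copies of every source image, whatever the final output looks like.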

He then turned to the class action suit, which was brought to protect human artists, and to the argument that an artist’s name in a text prompt ties the generated image to that artist’s work. This labels the output derivative.

“Stability AI’s argument against human artists was that identifying specific creative, protected choices are
measured against the allegedly derivative work,” Townsend elaborated.

This showcase of the two stances sheds light on a new question: what does the court say?

The United States Copyright Office released its statement of policy on AI-generated works in mid-March.

The statement declared that components of a work generated by AI are not eligible for copyright protection.

After bringing brief relief to artists who feared for their work’s value, however, the issue persisted.

Indeed, tracking down plagiarists has proven nearly impossible.

Most anonymous users disappear as soon as legality enters the picture, and the damage is still done.

What We Can Do

In the four months since the Copyright Office’s decision, the world has realized that the law is far from a magical fix for the moral issues of AI art.

Thus, the public is faced with a burning issue: what can we do?

Artist and coder Mathew Dryhurst took the first step in taking action.

“AI has a data problem. It requires large amounts of curated data to train models, and as of yet there are
few examples where the providers of that data are compensated for it,” Dryhurst says.

A portrait of artist and technological researcher Mathew Dryhurst.

“I built HaveIBeenTrained to give people the ability to see whether their work or likeness is being used to
train AI systems, and to opt-out of the process should they like. I’m also working on building opt-in tools
to allow for the licensing of datasets.”

A website showcase of “Have I been Trained” created by Dryhurst.

What Dryhurst started was an unofficial opt-out system in response to the lack of artists’ freedom in AI
art platforms including Stable Diffusion.

“Our API now delivers 1.4 billion opt-outs to AI trainers. We verify who produced the work, and simply
ask AI developers to omit those works from their processes.”

He, like countless others, believes an opt-out system is an effective start: it allows creators to determine what they want to be affiliated with and, at the same time, gives them the option to offer their own models with assurances that the same data is not available elsewhere.
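The opt-out flow Dryhurst describes can be sketched in a few lines. Note that the function names and the exact-hash matching here are illustrative assumptions, not the real HaveIBeenTrained API, which verifies authorship and in practice matches works far more robustly than a byte-for-byte hash:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Hypothetical stable identifier for a work; real systems would use
    # perceptual hashing so that resized or re-encoded copies still match.
    return hashlib.sha256(image_bytes).hexdigest()

def filter_training_set(images: list[bytes], opt_outs: set[str]) -> list[bytes]:
    # Drop any image whose fingerprint appears in the opt-out list
    # before it ever reaches the training pipeline.
    return [img for img in images if fingerprint(img) not in opt_outs]

# Usage: an artist opts out one work; the trainer's dataset shrinks accordingly.
works = [b"seascape.png", b"fairy.png", b"portrait.png"]
opted_out = {fingerprint(b"fairy.png")}
kept = filter_training_set(works, opted_out)
assert kept == [b"seascape.png", b"portrait.png"]
```

The design burden this sketch makes visible is the one Dryhurst names: the system only works if AI developers actually consult the opt-out list before training.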

“I hope initiatives like this can help guide sensible policy that gives creators options to thrive in this new
era, and doesn’t unnecessarily stymie AI development.”

Dryhurst ended on a note of hope—hope for action from anyone driven by moral duty.

He hopes for a future where AI coexists with human artists on a basis of personal strengths.
