
Artificial Intelligence, Artists, and Art: Attitudes Toward

Artwork Produced by Humans vs. Artificial Intelligence

JOO-WHA HONG and NATHANIEL MING CURRAN, University of Southern California


Annenberg School for Communication and Journalism, Los Angeles

This study examines how people perceive artwork created by artificial intelligence (AI) and how presumed
knowledge of an artist’s identity (Human vs. AI) affects individuals’ evaluation of art. Drawing on Schema
theory and theory of Computers Are Social Actors (CASA), this study used a survey-experiment that con-
trolled for the identity of the artist (AI vs. Human) and presented participants with two types of artworks
(AI-created vs. Human-created). After seeing images of six artworks created by either AI or human artists,
participants (n = 288) were asked to evaluate the artistic value using a validated scale commonly employed
among art professionals. The study found that human-created artworks and AI-created artworks were not
judged to be equivalent in their artistic value. Additionally, knowing that a piece of art was created by AI
did not, in general, influence participants’ evaluation of art pieces’ artistic value. However, having a schema
that AI cannot make art significantly influenced evaluation. Implications of the findings for application and
theory are discussed.
CCS Concepts: • Applied computing → Media arts;
Additional Key Words and Phrases: Artificial intelligence, creativity, art, CASA, schema theory, human-
computer interaction, human-machine communication
ACM Reference format:
Joo-Wha Hong and Nathaniel Ming Curran. 2019. Artificial Intelligence, Artists, and Art: Attitudes Toward
Artwork Produced by Humans vs. Artificial Intelligence. ACM Trans. Multimedia Comput. Commun. Appl. 15,
2s, Article 58 (July 2019), 16 pages.
https://doi.org/10.1145/3326337

1 INTRODUCTION
Artificial intelligence (AI) is no longer a hypothetical technology but is instead a technology that
inundates our daily lives through personal assistants like Siri, algorithm-based suggestive searches
on Google, and self-driving cars [1]. In light of its increasing importance, AI is a topic of heated
discussion. Some believe that AI will make human life safer and more prosperous, while others
argue that AI could become uncontrollable and eventually threaten human societies [2]. Other
worries are more prosaic, such as concerns about how the labor market will be affected by AI,
with many fearing that AI will render many human jobs obsolete.

This research is funded by the USC Graduate School and an Annenberg Doctoral Student Summer Research Fellowship.
Authors' addresses: J.-W. Hong and N. M. Curran, University of Southern California Annenberg School for Communication
and Journalism, Los Angeles, CA 90089 USA; emails: {joowhaho, ncurran}@usc.edu.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored.
Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires
prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2019 Association for Computing Machinery.
1551-6857/2019/07-ART58 $15.00
https://doi.org/10.1145/3326337

ACM Trans. Multimedia Comput. Commun. Appl., Vol. 15, No. 2s, Article 58. Publication date: July 2019.

This is clearly not an unfounded concern, as this is already a trend in call centers, assembly lines,
and the fast food industry.
Creative endeavors are another arena in which AI seems poised to outperform humans. For example,
in 2016, Google DeepMind's AlphaGo, a Go-playing computer program, successfully mastered
the notoriously complex game and was quickly lauded for its “creativity” [3]. Traditional art is an-
other domain in which AI has demonstrated rapid improvement. This has important implications,
because art, and specifically painting, has been for millennia regarded around much of the world
as the pinnacle of human creativity. In the West, painting has been viewed as imbued with reli-
gious symbolism, and has been typically seen as representing humanity’s most pure and artistic
expression. Advances in AI artwork thus necessarily complicate contemporary understandings of
creativity and aesthetic beauty in art.

1.1 Turing Test


The present research inquires how artwork created by AI is evaluated and whether such evaluation
differs from the evaluation of artwork created by human artists. In this regard, this project is
theoretically linked to research concerned with and subsumed under the neologism, “Turing Test.”
The “Turing Test” refers to a thought experiment posed by the AI pioneer Alan Turing that asked
whether an AI could “fool” a judge as to its identity, AI or human.
Participants in this research project can be conceptualized as themselves engaging in a sort of
“Turing Test,” in which they are evaluating a piece of art for its “human” merits. Here it should be
noted that participants in the original parlor game on which the Turing Test was based sought to
identify not whether a fellow player was an AI or a human, but whether the player was a man or a woman. More recent
observations about the socially constructed nature of gender suggest that the entire test may
itself be rather tautological. Regardless, the idea has captured the imagination of both academics
and the public ever since and serves as a plotline in many contemporary television shows and
movies (e.g., HBO's Westworld, Ex Machina, the Terminator series, etc.).
This project could be conceived of as pushing the Turing Test one step further, in that it asks the
question of how participants’ reception of a piece of art is changed by knowledge of its creator’s
identity. Rather than asking participants (as in the original parlor game) whether the concealed
figure is a man or a woman, it asks the fundamentally more important question of how their
evaluation of the individual changes on the basis of that information.

1.2 Artificial Intelligence and Art


Coeckelbergh argues AI-generated products can be associated with the concept of “art,” fulfilling
both objective and subjective criteria [4]. If there are objective criteria that determine art, then it
follows that AI can easily be built to create products that suit those criteria. If whether a product
can be deemed "art" relies on a subjective judgment, then anything, including AI-generated
products, has a chance to be deemed art. Therefore, the question "Can AI create art?"
should be differentiated from the question “Can AI create art that is good and worthy?” [5]. Thus,
instead of asking whether products created by AI should be included in the traditional definition
of art, one question asked in this study is whether products created by AI can be positioned and
accepted equally as artworks created by human artists, and if so, then how does knowledge of the
artist’s identity (AI or human) affect participants’ evaluation of the artwork?
In the field of art, the role of non-humans in creative processes is receiving attention as that role
becomes increasingly crucial to creative practice [6]. There are now growing attempts to measure
subjective appreciation of art [7]. Heuristic and empirical approaches to art-creating AI can provide
unexpected insights, and this study contributes to such scholarship. It is also expected that this


inquiry into art created by artificial intelligence has implications for future studies related to AI’s
creativity in general, which in turn has implications for overall attitudes toward human creativity
as well. In this vein, Bostrom and Yudkowsky point out that once machines become better than
humans at something, the capabilities necessary for success in that realm are no longer viewed
as representing true "intelligence" [8]. If "creativity" is viewed as an innately human capability, then
how might we be forced to reconceptualize our understanding of artwork when AI produces more
aesthetically pleasing artwork or passes the artistic “Turing Test?”
Artificial intelligence’s pursuit of art is not limited to visual arts. Currently, there are studies on
building artificial intelligence to create music and poems [9–11]. Some might say these creative
products created by AI merely imitate human work. However, we should consider that even human
creative products started from the imitation of others, since humans are also imitative creatures
[12, 13].

1.3 Evaluating AI Artwork


Twenty years ago, there was already research that examined AI creativity in the domain of painting [14].
The implications of AI in art creation have more recently gained attention from the field of
digital art [15]. There have been many attempts to create art using AI, and some of these ambitions
have been realized by Google's DeepDream project and Creative Adversarial Networks (CAN).
DeepDream is a program using a convolutional neural network to analyze patterns and shapes in
a given image and generate a new hallucinogenic image [16]. According to Elgammal et al., Cre-
ative Adversarial Networks (CAN) is an AI program that generates art by “maximizing deviation
from established styles and minimizing deviation from art distribution” [17]. One interesting find-
ing from CAN development was that people could not distinguish CAN-generated artwork from
human-created artwork and that raters actually gave a higher rating to CAN-generated artworks
based on how novel, aesthetically appealing, intentional, visually structured, communicative, and
inspiring they were.
Chamberlain et al. conducted an important and conceptually related study of people’s attitudes
toward art produced by non-human agents (computer and robot). Their study found that while
people could not discern human-created artwork from computer-generated artwork, they held biases
against computer-generated art, though this bias was reversed when participants were shown
anthropomorphized agents creating the art [18]. Importantly, their study argued that beliefs about
whether AI can be creative are crucial to differential judgments of AI artwork versus human
artwork. However, unlike the present study, which asked participants to evaluate artwork using
multiple criteria, Chamberlain et al.’s study merely asked respondents how attractive given art
pieces were. Considering the subjectivity of attractiveness in art evaluation, which specific fac-
tors influenced participants’ aesthetic evaluation of the artwork could not be discerned from the
study. As mentioned in their paper, a limitation of such studies is that due to subjectivity in art
assessment, and the coarse scales used to evaluate the art, the conclusions from such studies are
limited to the artwork used in the experiment and cannot be generalized. Moreover, their evalua-
tion was done based on participants’ presumption that a given artwork was produced by AI or a
human rather than based on participants being explicitly informed of the creator’s identity (i.e., AI
or human), which arguably plays a larger—and more interesting—role in the ongoing discussions
around AI and creativity.
In light of these previous works of scholarship, this study will focus on external factors in AI art-
work evaluations, such as bias or expectation, with the view that the different perceptions toward
AI artwork may not be due solely to the qualities of the image itself, but are related to people’s
overall attitudes toward AI as well. This study applies an objective scale to measure what factors


are crucial in assessing artwork when the identity of the creators (whether AI or human)
is declared beforehand.

1.4 Schema Theory


Schema Theory provides a useful theoretical framework for understanding audience perceptions
of art based on the identity of the artist. A schema is “an active processing data structure that or-
ganizes memory and guides perception, performance, and thought” [19]. Schemata about art, for
example, would include knowledge about art concepts, our perceptions of what makes art more
or less artistic, art we have viewed and enjoyed or not, situations in which we have viewed art,
and so forth. Humans also have schemata that include stereotypes about artificial intelligence and
the creativity of AI. According to Dixon, “These stereotypes are part of an associative network of
related opinion nodes or schemas that are linked in memory and activating one node in network
spreads to other linked nodes” (p. 163) [20]. Schemata, based on prior experience, can be acti-
vated when we are interpreting new information. Thus, it is possible to say that schema and bias
(or stereotype) function similarly in the cognitive process. Schemata function as heuristics that
allow us to make decisions when presented with new information by drawing on our previous
experiences. Schema theory is a useful theory to illustrate how stereotypes may affect cognitive
processing: For example, when we view someone of another race, a schema may be activated that
affects how we process information about that person. Not surprisingly, schema theory is widely
used in media influence studies, where researchers are interested in how biased media
portrayals of certain ethnicities influence media users' perceptions.
Because art is a medium that conveys messages, schema theory is applicable to research focused
on artwork. Previous research has suggested that visuals are particularly effective in triggering
schema, and thus this theory is applicable for understanding how stereotypes regarding artificial
intelligence alter perceivers’ views toward artificial intelligence’s artwork. McCarthy points out
that there are people who would doubt that AI is truly capable of "humanlike" performance, even if
their performances are objectively indistinguishable (p. 1180) [21]. Similarly, even if AI-created art-
works are indistinguishable from human-created artworks, people may still believe that AI cannot
make art, due to their belief that art is that which is created by humans. Hence, this study examines
how people react differently to artwork labeled as created by AI compared to artwork labeled
as created by human artists. When artwork is created by two different entities, how their artwork
is evaluated likely varies based not only on objective differences in the composition but also based
on the viewer’s stereotypes/beliefs about the artist.
Previous studies about AI artwork showed a negative bias against artificial intelligence produced
art [17, 18]. Thus, the first hypothesis is derived from the argument that people are apt to give a
lower rating to artwork if the artwork is labeled as AI-created artwork.

Hypothesis 1. Artworks that are identified as created by AI artists (attributed AI identity) receive
a lower rating on artistic value compared to artworks that are identified as created by human artists
(attributed human identity).

1.5 Computers Are Social Actors (CASA)


Among many arguments and disputes related to the acceptance of AI-created artworks, one
approach is to treat art as a social interaction involving communication and speculate whether
AI can perform as a social actor [22]. According to Nass and Moon, people tend to perform
social behaviors and apply social rules without thought when interacting with computers [23].
Moreover, people tend to treat computers as entities independent from their programmers and
having their own source of information [24]. These views are subsumed under the theory known


as Computers Are Social Actors (CASA). CASA has been expanded and applied to studies about
interactions between human and artificial intelligence and seeks to understand how behaviors
and personality traits change when interacting with AI [25]. Because CASA theory suggests that
people unconsciously apply social norms/attitudes toward interactions with AI, it is expected
that people would perform similarly when evaluating AI-created artworks compared with human
artwork. This suggests that a scale used for evaluating human-created artworks in the professional
field of art is applicable to AI-created art evaluations.
While the negative bias from the identity of an artist (AI or human) may influence the evaluation
of artwork, it remains to be seen whether human-created artworks and AI-created artworks are
judged to be equivalent in their artistic value. Based on a previous AI-created art study that found
that people cannot distinguish between human- and AI-created artworks, it was presumed that
there is no significant difference between people’s evaluation of the two types of artworks, unless
primed as to their identities [17]. However, a standard null-hypothesis test cannot establish such
similarity, because failing to reject the null hypothesis does not demonstrate equivalence. Therefore,
this question was analyzed using an equivalence test instead.
Hypothesis 2. AI-created artworks and human-created artworks are judged to be equivalent in
their artistic value.
Rather than concealing the identity of artists, this study used a method of providing both correct
and incorrect identities of the artists. This method was employed in order to examine the influence
of bias about the identity of artists on the evaluation of their artwork. If the evaluation of both
AI-created artworks and human-created artworks change based on the identification of the artists,
then it indicates that evaluation is not dependent solely upon the aesthetic value of the artwork
but is instead a byproduct of the art creator’s identity.
Finally, this study looked at the interaction between the attributed identity of artists and the
actual identity of artists, to see whether congruence between attributed identity and actual identity
has any influence on the evaluation of art pieces. This illustrates whether the deception used to
attribute identity to the artists has any influence on the evaluation of the artwork.
Hypothesis 3. The discrepancy in artistic value between AI-created artworks and human-created
artworks is influenced by the attributed identity of the artists.

1.6 Theory of Mind and the Intentional Stance


Because this study is concerned with the way that people evaluate artwork based on the presumed
identity of its creator (AI vs. human), it draws on insights from the literature on Theory of Mind
and the “intentional stance” [26]. Theory of Mind refers to “knowing that other people know, want,
feel, or believe things” (p. 38) [27], while the notion of intentional stance similarly examines how
we ascribe intentionality to different systems/agents. Work drawing on these theories explores the
ways in which humans assess the behavior of other agents based on how/whether we assign them
mental states: rationality, desire, and so on. For example, we tend to attribute certain intentions
to a human’s action that we generally do not to a computer’s; we evaluate the former’s actions as
mentalistic and the latter’s as mechanistic [28].
In their foundational work, Heider and Simmel (1944) demonstrated that people are able
to socially relate to and anthropomorphize simple two-dimensional triangle shapes, based on
the shapes’ movements [29]. Advances in technology have provided both more complex ac-
tors/interlocutors (such as AI) and also more sophisticated tools for measuring how people relate
to agentic others and thus how the intentional stance functions. Thus, recent work has examined
how knowledge of an agent's identity (human vs. machine) affects individuals' responses to that


agent [30–32]. For example, different parts of the brain have been found to become activated based
on whether participants believe they are interacting with a human, a humanoid-robot, or a ran-
domly acting computer [30], and fMRI studies have identified parts of the brain that are associated
with interacting with intentional vs. non-intentional agents, i.e., minds vs. machines [31].
These studies lend some support for the hypothesis that participants will evaluate AI-attributed
art differently than they do human-attributed art. Indeed, such a result might be expected given
that previous studies have found that believing that one is interacting with a human vs. a machine
results in the activation of different parts of the brain [30] and even affects the sensory processing
of information [33].

2 METHOD
2.1 Participants
Initially, 330 participants were recruited for this study, using Amazon Mechanical Turk (MTurk).
In addition to excluding participants who omitted any question in the survey, participants who
reported recognizing any of the provided images were also removed from the study, to ensure
participants lacked preexisting bias about the images, leaving 288 participants from the 330 initially
recruited. Using MTurk allowed for recruiting people from diverse socio-economic and ethnic
backgrounds [34]. However, ethnicity was not evenly distributed, with the majority of participants
being Caucasian. Ages ranged from 21 to 76 years old (M = 37.66, SD = 11.22). Of the
288 participants, 162 were male and 126 were female. Table 1 contains a summary of participants’
demographic information.

2.2 Procedures
First, four groups, each containing 72 randomly assigned participants, were formed based on the
real identities of artists (AI vs. Human) and attributed identities of artists (AI vs. Human). The
groups were: (A) AI artist (real) x AI artist (attributed), (B) human artist (real) x AI artist (attrib-
uted), (C) human artist (real) x human artist (attributed), and (D) AI artist (real) x human artist
(attributed). There were three types (two images per type) of AI-created artworks and three types
of human-created artworks. The three types of AI-created artworks were based on different AI art
generators. These included Google's art-generating AI DeepDream and a generator from the Creative
Adversarial Networks (CAN) study mentioned above, as well as Aaron, a computer program that
has been producing artwork since the 1970s [35, 36]. These particular art generators were selected
based on the first author’s familiarity with them, and to provide a diverse range of AI-produced
images, as well as to allow for generalizability beyond a single art generator. Human-created artworks
were selected through Google image searches and paired with the AI-created artworks based on
ostensible similarity in composition, style, or theme, after being initially evaluated by the first
author in consultation with a non-artist, non-author colleague. For instance, images created by
DeepDream are described in Wikipedia using the key terms "convolutional" and "hallucinogenic" [37],
so those two terms were used in the Google image search for matching human-created images.
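The balanced 2 × 2 random assignment described above can be sketched as follows. This is a hypothetical illustration, not the study's actual recruitment procedure; the function name, group labels, and seed are our own:

```python
import random

def assign_groups(participant_ids, seed=0):
    """Randomly assign participants to the four (real x attributed) conditions.

    Groups: A = AI real x AI attributed, B = human real x AI attributed,
    C = human real x human attributed, D = AI real x human attributed.
    Shuffling then dealing round-robin yields equal-sized groups."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    groups = {"A": [], "B": [], "C": [], "D": []}
    for i, pid in enumerate(ids):
        groups["ABCD"[i % 4]].append(pid)
    return groups
```

With 288 participants, each of the four cells receives exactly 72, as in the study's design.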
All participants who saw AI-created artwork were shown an identical set of six images, and those
in the human-created condition were shown a different set of six images. That is, each participant
saw six images, all either AI- or human-created. Before being shown the images, those in the
“attributed-AI group” were told that the images they were viewing were created by AI. However,


Table 1. Demographic Information of Participants

Groups Variable N %
Ethnicity* Caucasian 207 71.9
Hispanic/Latino 40 13.9
Black or African American 24 8.3
Asian 30 10.4
Other 5 1.7
Total 306 106.2
Education Less than high school degree 0 0.0
High school graduate or equivalent 34 11.8
Some college but no degree 60 20.8
Associate degree in college (2-year) 38 13.2
Bachelor’s degree in college (4-year) 110 38.2
Master’s degree 37 12.8
Doctoral degree 3 1.0
Professional degree (JD, MD) 6 2.1
Total 288 100
Income Less than $20,000 41 18.7
$20,000 to $35,000 65 16.9
$35,000 to $50,000 63 19.4
$50,000 to $75,000 59 23.9
$75,000 to $100,000 36 10.9
More than $100,000 24 10.2
Total 288 100.0
Note: * Number for “ethnicity” was more than the number of participants as mul-
tiple selections were possible.

the attributed identity of human artists was not given before showing images to the human-created
group. Instead, a question was asked after participants evaluated the images: “During evaluations,
did you consider non-human artists?” Then, participants who answered “Yes” were removed from
the data set. Participants were screened in this manner, because telling participants that “The
following images are created by human artists” may have revealed the intention of the experiment
and introduced demand characteristics, provoking participants to provide answers
they believed fit the intention of the study rather than their own opinions [38]. Also, for
both attributed AI and human artist groups, the questions “Have you seen this artwork before?”
and “Do you have specific information about this art and its creator?” were asked after each
evaluation, and participants who answered “Yes” for either one of these questions were removed.
Such measures were conducted to ensure that all participants had the same preexisting knowledge
and lack of bias and resulted in a final total of 288 completed surveys. Two example images used
in the study are shown in Figures 1 and 2 (see Appendix 1 for the list of artwork information).
All participants were asked to evaluate the given artwork on the same set of dependent variables.
The dependent variables were chosen from those actually used among art studios, which consist
of criteria related to originality, the degree of improvement or growth, composition, development
of personal style, experimentation or risk-taking, expression, successful communication of idea,
and aesthetic value (see Appendix 2 for the list of questions) [39]. Participants were shown one
image at a time and then asked to evaluate that image according to eight criteria, each measured
on a 5-point Likert scale (1 = Lowest, 5 = Highest). After evaluating the artwork, participants


Fig. 1. Sample AI-created image (CAN). Fig. 2. Sample human-created image (Gillian Lindsay).

were asked a binary yes/no question about whether they believed AI could make art.
During the statistical analysis of the results, alpha was set at the 0.05 level. After the survey ended,
participants were debriefed and told the actual purpose of the study and the real identity of the
creator of each artwork.
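The scoring procedure lends itself to a simple aggregation. The sketch below is a hypothetical helper (not part of the study materials; the function name and array layout are our own) that collapses each participant's eight 5-point ratings across the six images into a single artistic-value score:

```python
import numpy as np

def artistic_value(ratings):
    """Collapse Likert ratings into one artistic-value score per participant.

    ratings: array of shape (participants, 6 images, 8 criteria), each entry
    a 1-5 rating. Returns the mean over all images and criteria."""
    ratings = np.asarray(ratings, dtype=float)
    if ratings.shape[1:] != (6, 8):
        raise ValueError("expected 6 images x 8 criteria per participant")
    if ratings.min() < 1 or ratings.max() > 5:
        raise ValueError("ratings must lie on the 1-5 Likert scale")
    return ratings.mean(axis=(1, 2))
```

Averaging over both images and criteria mirrors how the main effects in the Results section are "taken as the mean of the scores on all eight criteria."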

3 RESULTS
To verify the efficacy of the manipulation of the identity of AI and human artists, the outcomes
from the question “In the perspective of the identity, how similar do you think you are to the
creator of shown images?" were analyzed using an independent-samples t-test. The manipulation
check showed a significant difference between human artists (M = 3.30, SD = 1.44) and
AI artists (M = 2.78, SD = 1.57); t (286) = 2.94, p = 0.004, d = 0.345. This manipulation check was
conducted with the assumption that people would feel more similarity with human artists than AI
and to test to what degree participants considered the identity of the artist while evaluating the
art piece. The results show that participants distinguished AI artists and human artists, helping to
support the validity of the data from the manipulation for this experiment.
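A manipulation check of this form can be reproduced in outline. The sketch below is illustrative only (the data and function name are hypothetical, and the paper does not state which software computed its t-tests); it pairs an equal-variance independent-samples t-test with Cohen's d from the pooled standard deviation:

```python
import numpy as np
from scipy import stats

def ttest_with_cohens_d(x, y):
    """Independent-samples t-test (equal variances) plus Cohen's d."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t, p = stats.ttest_ind(x, y)  # Student's t; df = nx + ny - 2
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    d = (x.mean() - y.mean()) / pooled_sd  # standardized mean difference
    return t, p, d
```

The same routine underlies the per-criterion comparisons reported later in the Results.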
A two-way analysis of variance (ANOVA) was conducted to examine the influence of two variables, real
identities and attributed identities of artists, on the evaluation of artworks. Real identities and
attributed identities consisted of two levels, respectively (AI and human). Based on Levene’s test
for equality of variances, the assumption of homogeneity of variances was met, F (3, 284) = 1.25,
p = 0.290, d = 0.132. All effects were statistically non-significant at the 0.05 significance level. The
main effect for attributed identity of artists, taken as the mean of the scores on all eight criteria,
yielded an F ratio of F (1, 284) = 0.325, p = 0.569, d = 0.083, indicating a non-significant difference
between the evaluation of artworks that were presumed to be created by AI artists (M = 3.13,
SD = 0.64) and human artists (M = 3.18, SD = 0.56). Thus, Hypothesis 1 about the influence of
different identity of artists on the evaluation of their artworks was unsupported. The main effect
for actual identity of artists yielded an F ratio of F (1, 284) = 3.435, p = 0.065, d = 0.235, indicating


Table 2. Descriptive Statistics for Evaluation of Artworks Based


on Real and Attributed Identity of Artists

Real Identity Attributed Identity M SD N


AI artist AI artist 3.10 0.65 75
Human artist 3.09 0.57 80
Human artist AI artist 3.18 0.62 59
Human artist 3.27 0.54 74

a marginally non-significant difference between the evaluation of artworks created by AI artists (M = 3.09, SD =


0.61) and human artists (M = 3.23, SD = 0.58). Also, the interaction effect between the attributed
identity of artists and actual identity of artists on the evaluation of given artworks was found to
be non-significant, F (1, 284) = 0.445, p = 0.505, d = 0.079; thus, Hypothesis 3 was not supported.
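For readers who want to see the mechanics behind these F ratios, the following sketch computes a balanced 2 × 2 between-subjects ANOVA (factor A = real identity, factor B = attributed identity) from scratch. It is a simplified, hypothetical illustration assuming equal cell sizes, as in the original random assignment; the study's actual analysis would have used standard statistical software:

```python
import numpy as np
from scipy import stats

def two_way_anova_balanced(scores):
    """Two-way ANOVA for a balanced 2x2 between-subjects design.

    scores[a][b] is a 1-D array of ratings for level a of factor A and
    level b of factor B; all four cells must have equal size n.
    Returns {effect: (F, p)} for "A", "B", and "AxB"."""
    scores = np.asarray(scores, dtype=float)  # shape (2, 2, n)
    n = scores.shape[2]
    grand = scores.mean()
    cell = scores.mean(axis=2)         # 2x2 matrix of cell means
    row = scores.mean(axis=(1, 2))     # factor A marginal means
    col = scores.mean(axis=(0, 2))     # factor B marginal means
    ss_a = 2 * n * np.sum((row - grand) ** 2)
    ss_b = 2 * n * np.sum((col - grand) ** 2)
    ss_ab = n * np.sum((cell - row[:, None] - col[None, :] + grand) ** 2)
    ss_err = np.sum((scores - cell[:, :, None]) ** 2)  # within-cell variation
    df_err = 4 * (n - 1)
    ms_err = ss_err / df_err
    out = {}
    for name, ss in [("A", ss_a), ("B", ss_b), ("AxB", ss_ab)]:
        F = ss / ms_err                # each effect has df = 1
        out[name] = (F, stats.f.sf(F, 1, df_err))
    return out
```

In the paper's unbalanced final sample (cell sizes 75, 80, 59, 74 after exclusions), software would instead fit the model by least squares, but the logic of partitioning the sums of squares is the same.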
The R package TOSTER was used to conduct two one-sided tests (TOST) to assess the similarity
between human-created artworks and AI-created artworks. An equivalence test was conducted in
addition to a t-test because the purpose of an equivalence test is to identify similarity between
variables, while t-tests are intended to measure their difference. In other words, a p-value from an
equivalence test under 0.05 indicates two variables are statistically equivalent. The result from the
equivalence test was non-significant, t (286) = −1.56, p = 0.94, implying that Hypothesis 2, which
stated that AI-created artwork and human-created artwork are equivalent in artistic value, was not
supported. The results indicate that human-created artworks received higher evaluation scores than
AI-created artworks regardless of whether participants were told the artworks were created by AI
artists or by human artists. Table 2 provides a summary of descriptive
statistics for the analyzed data on the evaluation of artworks.
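The TOST logic can be sketched as follows. This is a from-scratch illustration (the study itself used the R package TOSTER); the equivalence bounds and data below are hypothetical, since the bounds actually used are not reported in this excerpt:

```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, high):
    """Two one-sided tests (TOST) for equivalence of two independent means.

    low/high: equivalence bounds on the raw mean difference (low < 0 < high).
    Returns the larger of the two one-sided p-values; a value below 0.05
    rejects the hypothesis that the groups differ by more than the bounds."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    se = pooled_sd * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    t_lower = (diff - low) / se   # one-sided test of H0: diff <= low
    t_upper = (diff - high) / se  # one-sided test of H0: diff >= high
    p_lower = stats.t.sf(t_lower, df)
    p_upper = stats.t.cdf(t_upper, df)
    return max(p_lower, p_upper)
```

A non-significant TOST result, as reported here (p = 0.94), means equivalence could not be demonstrated: the observed difference cannot be confined within the chosen bounds.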
For a more fine-grained understanding of the results, the score for each variable in the scale—originality, degree of improvement or growth, composition, development of personal style, degree of expression, experimentation and risk-taking, aesthetic value, and successful communication of ideas—was compared using t-tests. First, the variables were compared based on the presumed identity of the artists (attributed identity). Only the variable "Development of Personal Style" showed a significant difference between participants who were told the images were created by AI artists (M = 3.19, SD = 0.69) and those told they were created by human artists (M = 3.35, SD = 0.67); t(286) = −1.98, p = 0.04, d = 0.235. The other variables in the scale showed non-significant results, suggesting that "Development of Personal Style" is the only evaluation criterion on which participants' schemas about AI creating artworks produced a distinct response. Table 3 shows mean comparisons for each variable in the artwork evaluation scale between the two attributed identities of artists.
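As a sketch of these per-variable comparisons, the pair of statistics reported for each criterion (an independent-samples t and Cohen's d with a pooled standard deviation) can be computed as follows; the rating vectors passed in are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

def t_and_cohens_d(x, y):
    """Independent-samples t-test plus Cohen's d (pooled-SD version),
    the two statistics reported for each evaluation criterion."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t, p = stats.ttest_ind(x, y)  # equal-variance Student's t
    n1, n2 = len(x), len(y)
    sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1))
                 / (n1 + n2 - 2))
    d = (x.mean() - y.mean()) / sp
    return t, p, d
```

With the sign convention used in Tables 3 and 4 (AI minus human), a negative t or d indicates the AI condition was rated lower.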
Next, each variable was compared between human-created and AI-created artworks (real identity), regardless of which artist identity participants were told. Several variables showed significant results. "Composition" showed the largest difference between artworks created by AI artists (M = 3.34, SD = 0.65) and human artists (M = 3.63, SD = 0.72); t(286) = −3.57, p < 0.001, d = 0.423. "Degree of Expression" also showed a significant difference between artworks created by AI artists (M = 3.22, SD = 0.70) and human artists (M = 3.41, SD = 0.66); t(286) = −2.28, p = 0.02, d = 0.279. "Aesthetic Value" was the third variable with a significant difference between AI artists (M = 3.16, SD = 0.61) and human artists (M = 3.34, SD = 0.63); t(286) = −2.41, p = 0.02, d = 0.290. The outcomes for the other variables were not significant. Table 4 shows mean comparisons for each variable in the artwork evaluation scale between artworks created by human and AI artists.

ACM Trans. Multimedia Comput. Commun. Appl., Vol. 15, No. 2s, Article 58. Publication date: July 2019.
58:10 J.-W. Hong and N. M. Curran

Table 3. Sample Descriptives Using t-tests for Evaluation of Artworks by Attributed Identity of Artists

                                            Attributed Identity
Variables                          AI Artists     Human Artists   t       df
Originality                        3.27 (0.65)    3.30 (0.66)     −0.45   286
Degree of Improvement or Growth    2.71 (0.92)    2.69 (0.78)     0.20    286
Composition                        3.44 (0.73)    3.50 (0.66)     −0.81   286
Development of Personal Style      3.20 (0.69)    3.35 (0.67)     −1.98*  286
Degree of Expression               3.28 (0.71)    3.34 (0.67)     −0.75   286
Experimentation or Risk Taking     3.12 (0.67)    3.14 (0.65)     −0.18   286
Aesthetic Value                    3.22 (0.65)    3.26 (0.60)     −0.54   286
Successful Communication of Ideas  2.90 (0.82)    2.90 (0.75)     −0.03   286

Note: * = p < 0.05. Standard deviations appear in parentheses.

Table 4. Sample Descriptives Using t-tests for Evaluation of Artworks Created by Human and AI Artists

                                              Type of Artworks
Variables                          AI-Created     Human-Created   t        df
Originality                        3.24 (0.66)    3.35 (0.66)     −1.31    286
Degree of Improvement or Growth    2.65 (0.88)    2.76 (0.82)     −1.22    286
Composition                        3.34 (0.65)    3.63 (0.72)     −3.57*** 286
Development of Personal Style      3.22 (0.70)    3.33 (0.66)     −1.36    286
Degree of Expression               3.23 (0.70)    3.41 (0.66)     −2.28*   286
Experimentation or Risk Taking     3.12 (0.67)    3.13 (0.65)     −0.07    286
Aesthetic Value                    3.16 (0.60)    3.33 (0.62)     −2.41*   286
Successful Communication of Ideas  2.86 (0.82)    2.94 (0.73)     −0.87    286

Note: * = p < 0.05, *** = p < 0.001. Standard deviations appear in parentheses.

An independent-samples t-test was conducted to examine the influence of perceptions about AI creating art on the evaluation of AI-created artworks. First, only data from participants who were told the images were created by an AI art generator were used. There was a significant difference between participants who held the perception that AI cannot make art (M = 2.81, SD = 0.59) and those who did not (M = 3.26, SD = 0.61); t(132) = 3.86, p < 0.001, d = 0.750. The same analysis was then repeated with data only from participants who were told the images were created by human artists. Here, the difference between those holding the negative perception of AI creating art (M = 3.10, SD = 0.60) and those who did not (M = 3.23, SD = 0.54) was non-significant; t(152) = 1.33, p = 0.19, d = 0.228. These results illustrate that the negative perception of AI creating art strongly influences the evaluation of artworks when people believe those artworks were created by AI. Additionally, a 2 × 2 ANOVA testing the relationship between the attributed identity of artists and the perception of AI creating art found a significant interaction effect, F(3, 284) = 4.87, p = 0.028, d = 0.261. Furthermore, multiple independent-samples t-tests were conducted to compare the score for each variable in the art evaluation scale based on the negative perception of AI creating art, again using only data from participants who were told they were viewing images created by AI. Every variable showed a significant difference based on the negative perception, except for "Aesthetic Value." This finding suggests that the aesthetic value of artworks is less influenced by the negative perception toward AI creating


Table 5. Sample Descriptives Using t-tests for Evaluation of Artworks Based on the Perception Toward AI-made Art

                                       Perception of AI creating art
Variables                          Negative       Positive        t        df
Originality                        2.96 (0.64)    3.39 (0.62)     3.52**   132
Degree of Improvement or Growth    2.45 (0.84)    2.81 (0.94)     2.03*    132
Composition                        3.03 (0.67)    3.59 (0.70)     4.14***  132
Development of Personal Style      2.80 (0.68)    3.33 (0.65)     4.17***  132
Degree of Expression               2.97 (0.69)    3.40 (0.68)     3.23**   132
Experimentation or Risk Taking     2.84 (0.63)    3.22 (0.66)     3.03**   132
Aesthetic Value                    3.19 (0.58)    3.23 (0.67)     0.29     132
Successful Communication of Ideas  2.57 (0.63)    3.02 (0.85)     3.29**   132

Note: * = p < 0.05, ** = p < 0.01, *** = p < 0.001. Standard deviations appear in parentheses.

art. Table 5 shows mean comparisons of each variable in the artwork evaluation scale between
participants who held the belief that “AI can make art” and those who did not hold such a belief.
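The interaction test reported above can be sketched as follows. This is a minimal illustration that assumes a balanced design for simplicity (the study's cell sizes were in fact unequal, so standard ANOVA software handling unbalanced data would be used in practice); the factor labels and data are hypothetical.

```python
import numpy as np
from scipy import stats

def interaction_F(cells):
    """F-test for the A x B interaction in a balanced 2x2 ANOVA.

    `cells[i][j]` holds the scores for level i of factor A (e.g. attributed
    identity) and level j of factor B (e.g. perception of AI creating art).
    """
    cells = [[np.asarray(c, float) for c in row] for row in cells]
    n = len(cells[0][0])  # per-cell sample size (assumed equal: balanced)
    cm = np.array([[c.mean() for c in row] for row in cells])  # cell means
    grand = cm.mean()
    a_means = cm.mean(axis=1)  # marginal means of factor A
    b_means = cm.mean(axis=0)  # marginal means of factor B
    ss_within = sum(((c - c.mean()) ** 2).sum() for row in cells for c in row)
    ss_ab = n * sum((cm[i, j] - a_means[i] - b_means[j] + grand) ** 2
                    for i in range(2) for j in range(2))
    df_within = 4 * (n - 1)
    F = ss_ab / (ss_within / df_within)  # the interaction has 1 df in a 2x2
    p = stats.f.sf(F, 1, df_within)
    return F, p
```

A significant interaction here means the effect of one factor (e.g. holding the negative perception) depends on the level of the other (e.g. which artist identity was attributed), matching the pattern reported above.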

4 DISCUSSION
Results from this survey experiment indicate clear differences in the evaluation of human-created and AI-created artworks, and these differences appear to be driven by human-created artworks receiving significantly higher ratings for "composition," "degree of expression," and "aesthetic value." These variables can be seen either as strengths of human artists or as the goals that AI art generators must meet to produce worthy art products. This finding thus offers evidence against the suggestion by Elgammal et al. that images created by AI and human artists cannot be distinguished [17]. That is, this article argues that discrepancies between AI-created and human-created artworks may persist, given the dissimilar aesthetic ratings of the art pieces. Elgammal et al.'s study directly asked participants to judge whether given pieces were created by a human or a computer. However, asking the question directly may itself induce bias about the artwork, yielding answers that do not reflect participants' true attitudes. Additionally, the present study had sufficient statistical power to support its results, while the study by Elgammal and colleagues used fewer than 20 participants, far too few to support claims of generalizability. Hence, AI artists appear not to have completely passed the Turing Test yet.
Differences between AI-created and human-created artworks may not be distinctive at first sight, but careful observation using criteria from the art field allows the two types of artworks to be distinguished. Once the aspects of AI-created art that need improvement become clear, it will be easier for AI artists to match human-level art creation; at present, however, there appear to be objective differences between the art produced by AI and by humans.
Interestingly, acknowledgment of the identity of the artist, whether AI or human, did not influence the evaluation of the artworks. However, people holding the stereotype that "AI cannot produce art" gave significantly lower ratings than people without that stereotype. This result illustrates that possessing a particular schema regarding artificial intelligence influences evaluation of the artwork, but the stereotype must be specific. In other words, the word "AI" alone does not trigger the stereotype that "AI cannot produce art," but possessing such a stereotype does influence the evaluation of artistic value. Even though acknowledgment of the artists' identity did not influence the evaluation of artistic value in general, the variable "development of personal style" showed a significant difference between the attributed identities. It is possible to conjecture that


a stereotype people hold about AI is relevant or similar to the notion that AI cannot develop its own style. From the viewpoint of CASA, the negative attitude toward "AI developing its own style" is one obstacle to AI being accepted as a social actor. Moreover, this variable may be a starting point for tracing other stereotypes about artificial intelligence. Finally, unlike the negative perception of "AI creating art," which produced significant differences in the evaluation of artistic value both in aggregate and for individual variables, "aesthetic value" was the only variable that was not influenced. Thus, it can be assumed that the evaluation of aesthetic value is done independently of bias related to the artwork and its artist.
This study had several limitations. For example, Mechanical Turk has previously come under criticism, with some scholars going so far as to suggest that its use may "undermine key assumptions of experimental research methods" [40]. Attempts were made to mitigate some of these concerns: the survey-experiment included control questions to ensure that participants were unfamiliar with both the purpose of the study and the experimental stimuli used, and the results of participants who failed these checks were removed. One additional limitation that was difficult to mitigate is that the validated aesthetic scale used was adopted from the art world and was thus developed for use with a specific population. Despite these limitations, the project's general methodological rigor was high, and efforts were made to mitigate unavoidable limitations, including the issues of external validity that attend all experiment-based research.
As research into AI continues, there will undoubtedly be more research addressing the technical aspects of AI art creation [41, 42]. It is important that such research be informed not only by technical perspectives but by humanistic perspectives as well. For instance, measuring aesthetic value requires nuanced consideration of stimulus, personality, and situation. The aesthetics of AI-created art can thus be better understood when those aspects are considered, rather than focusing solely on the technical competence of an AI artist [43]. Indeed, despite the high technical proficiency of the AI art generators considered here, the results of this study indicate that AI art has yet to pass the "Turing Test" for art: people consistently rated the human-created art higher in a number of categories, despite the superficial similarity of the two sets of works.
This study also has implications for CASA theory, because art is itself a communicative activity and art pieces function as a communicative medium. This communicative function of art is captured by Richard Lloyd, head of Christie's Prints and Multiples department, in his reference to the importance of the "link with someone on the other side" when discussing the sale of AI art [44]. AI artists may be more likely than other AI entities to be treated as social actors due to the strong affective link between art and authorship.
This study contributes to understanding public perceptions of AI in a novel domain: art. In particular, it has implications for understanding how people's preconceptions about AI affect their attitudes toward AI performing a "creative" task. This is an especially important site of inquiry because attitudes toward AI art, as evidenced by this study, have implications for understandings of art and creativity in general. As AI continues to advance and excel in activities like art, these questions will likely grow in importance.

APPENDICES
APPENDIX 1
AI-created Artworks

(1) [CAN 1] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Maz-
zone. 2017. CAN: Creative Adversarial Networks, Generating “Art” by Learning About


Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068. Retrieved from
https://arxiv.org/pdf/1706.07068.pdf.
(2) [CAN 2] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Maz-
zone. 2017. CAN: Creative Adversarial Networks, Generating “Art” by Learning About
Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068. Retrieved from
https://arxiv.org/pdf/1706.07068.pdf.
(3) [Deep Dream 1] Juan “Zeno” Sanchez Ramos. [n.d.] Jennifer Ouellette. This Is Your
Brain on Google’s Deep Dream Neural Network. Gizmodo. Web. 7 Sep 2015. Retrieved
from https://gizmodo.com/this-is-your-brain-on-googles-deep-dream-neural-network-
1728947099.
(4) [Deep Dream 2] Memo Akten. 2015. All watched over by machines of loving grace: Deep-
dream edition. Web. Retrieved from http://www.memo.tv/portfolio/all-watched-over-
by-machines-of-loving-grace-deepdream-edition/.
(5) [AARON 1] AARON. 2004. 040502. Richard Moss. Creative AI: The robots that would be
painters. New Atlas. Web. 16 Feb 2015. Retrieved from https://newatlas.com/creative-ai-
algorithmic-art-painting-fool-aaron/36106/.
(6) [AARON 2] AARON. 1992. Aaron with Decorative Panel. Harold Cohen. 1995. The further
exploits of AARON, Painter. Stanford Humanities Review. 4, 2 (1992), 141–158. Retrieved
from https://web.stanford.edu/group/SHR/4-2/text/cohen.html.
Human-created Artworks
(1) [CAN 1 counterpart] Gillian Lindsay. [n.d.] Light Imitating Art: Backlight. Gillian
Lindsay homepage. Web. Retrieved from https://gillianlindsay.ca/artwork/663012-Light-
Imitating-Art-Backlight.html.
(2) [CAN 2 counterpart] Stripes [Digital image]. [N.p., n.d.] JPG file. Online access
unavailable.
(3) [Deep Dream 1 counterpart] Dragon-of-Midnight. [n.d.] Hallucination. Deviant Art.
Web. Retrieved from https://www.deviantart.com/dragon-of-midnight/art/Hallucination-
167476769.
(4) [Deep Dream 2 counterpart] Wolfgang Beyer. 2005. Mandel zoom 11 satellite double spi-
ral, Wikipedia. Web. Retrieved from https://en.wikipedia.org/wiki/File:Mandel_zoom_11_
satellite_double_spiral.jpg.
(5) [AARON 1 counterpart] Freedesignfile. [n.d.] Colored oil paint art backgrounds vector.
Free vector. Web. Retrieved from https://all-free-download.com/free-vector/download/
colored-oil-paint-art-backgrounds-vector_581521.html.
(6) [AARON 2 counterpart] Françoise Nielly. [n.d.] Untitled 693. Françoise Nielly homepage.
Web. Retrieved from https://www.francoise-nielly.com//index.php/galerie.

APPENDIX 2
Based on your impression of this artwork, please rate it on the following criteria (Lowest = 1, Highest = 5).

Originality

1 2 3 4 5
    


Degree of Improvement or Growth (“I feel I learned something new”)

1 2 3 4 5
    

Composition (i.e., use of space)

1 2 3 4 5
    

Development of personal style

1 2 3 4 5
    

Degree of Expression

1 2 3 4 5
    

Experimentation or Risk Taking (change from previous works)

1 2 3 4 5
    

Aesthetic Value

1 2 3 4 5
    

Successful Communication of Ideas

1 2 3 4 5
    

ACKNOWLEDGMENT
Thanks to Gillian Lindsay.

REFERENCES
[1] Robert L. Adams. 2017. 10 Powerful examples of artificial intelligence in use today. Forbes. Retrieved from
https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-artificial-intelligence-in-use-
today/#658cdafc420d.
[2] Michael Shermer. 2017. Why artificial intelligence is not an existential threat. Skeptic 22, 2 (2017), 29.


[3] Matt McFarland. 2016. What AlphaGo’s sly move says about machine creativity; Google’s machine is leaving
the smartest humans in the dust. The Washington Post. Retrieved from https://www.washingtonpost.com/news/
innovations/wp/2016/03/15/what-alphagos-sly-move-says-about-machine-creativity/?utm_term=.e213e59a2038.
[4] Mark Coeckelbergh. 2017. Can machines create art? Philos. Technol. 30, 3 (2017), 285–303.
[5] Flash Qfiasco. 2018. Review of Garry Kasparov with Mig Greengard, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (John Murray, London, 2017). Artific. Intell. 260 (2018),
36–41.
[6] Mark Cypher. 2017. Unpacking collaboration: Non-human agency in the ebb and flow of practice-based visual art
research. J. Vis. Art Pract. 16, 2 (2017), 119–130.
[7] Annukka Lindell and Julia Mueller. 2011. Can science account for taste? Psychological insights into art appreciation.
J. Cogn. Psychol. 23, 4 (2011), 453–475.
[8] Nick Bostrom and Eliezer Yudkowsky. 2014. The ethics of artificial intelligence. In The Cambridge Handbook of Arti-
ficial Intelligence. 316–334.
[9] Eduardo R. Miranda. 1995. Artificial intelligence and music: An artificial intelligence approach to sound design. Com-
put. Music J. 19, 2 (1995), 59.
[10] Samuel Gibbs. 2016. Google AI project writes poetry which could make a Vogon proud. The Guardian. Retrieved from
https://www.theguardian.com/technology/2016/may/17/googles-ai-write-poetry-stark-dramatic-vogons.
[11] Roger Dannenberg. 2006. Computer models of musical creativity. Artific. Intell. 170, 18 (2006), 1218–1221.
[12] Tony E. Jackson. 2017. Imitative identity, imitative art, and AI: Artificial intelligence. Mosaic Interdisc. Crit. J. 50, 2
(2017), 47–63.
[13] Sherry Turkle. 2005. The Second Self: Computers and the Human Spirit. MIT Press.
[14] Margaret A. Boden. 1998. Creativity and artificial intelligence. Artific. Intell. 103, 1 (1998), 347–356.
[15] Yu Yu. 2016. Research on digital art creation based on artificial intelligence. Revista Ibérica De Sistemas E Tecnologias
De Informação 18B (2016), 116–126.
[16] Gilberto Marzano and Alessandro Novembre. 2017. Machines that dream: A new challenge in behavioral-basic ro-
botics. Procedia Comput. Sci. 104 (2017), 146–151.
[17] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. 2017. CAN: Creative adversarial net-
works, generating “Art” by learning about styles and deviating from style norms. arXiv preprint. arXiv:1706.07068.
[18] Rebecca Chamberlain, Caitlin Mullin, Bram Scheerlinck, and Johan Wagemans. 2017. Putting the art in artificial:
Aesthetic responses to computer-generated art. Psychol. Aesthet. Creat. Arts. DOI:10.1037/aca0000136.
[19] Donald A. Norman and David E. Rumelhart. 1981. The LNR approach to human information processing. Cognition
10, 1 (1981), 235–240.
[20] Travis L. Dixon. 2006. Psychological reactions to crime news portrayals of black criminals: Understanding the mod-
erating roles of prior news viewing and stereotype endorsement. Commun. Monographs 73, 2 (2006), 162–187.
[21] John McCarthy. 2007. From here to human-level AI. Artific. Intell. 171, 18 (2007), 1174–1182.
[22] Aaron Hertzmann. 2018. Can computers create art? Arts 7, 2 (2018), 18.
[23] Clifford Nass and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. J. Soc. Issues 56,
1 (2000), 81–103.
[24] Shyam S. Sundar and Clifford Nass. 2000. Source orientation in human-computer interaction: Programmer, networker,
or independent social actor. Commun. Res. 27, 6 (2000), 683–703.
[25] Yi Mou and Kun Xu. 2017. The media inequality: Comparing the initial human-human and human-AI social interac-
tions. Comput. Hum. Behav. 72 (2017), 432–440.
[26] Daniel C. Dennett. 1987. The Intentional Stance. MIT Press.
[27] Simon Baron-Cohen, Alan M. Leslie, and Uta Frith. 1985. Does the autistic child have a “theory of mind”? Cognition
21, 1 (1985), 37–46.
[28] Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Ebru Baykara, and Agnieszka Wykowska. 2018. Do we adopt
the intentional stance towards humanoid robots? Psyarxiv. Retrieved from https://psyarxiv.com/6smkq/.
[29] Fritz Heider and Marianne Simmel. 1944. An experimental study of apparent behavior. Amer. J. Psychol. 57, 2 (1944),
243–259.
[30] Thierry Chaminade, Delphine Rosset, David Da Fonseca, Bruno Nazarian, Ewald Lutscher, Gordon Cheng, and
Christine Deruelle. 2012. How do we think machines think? An fMRI study of alleged competition with an artifi-
cial intelligence. Front. Hum. Neurosci. 6 (2012), 103.
[31] James K. Rilling, Alan G. Sanfey, Jessica A. Aronson, Leigh E. Nystrom, and Jonathan D. Cohen. 2004. The neural
correlates of theory of mind within interpersonal interactions. Neuroimage 22, 4 (2004), 1694–1703.
[32] Sam Thellman, Annika Silvervarg, and Tom Ziemke. 2017. Folk-psychological interpretation of human vs. humanoid
robot behavior: Exploring the intentional stance toward robots. Front. Psychol. 8 (2017), 1962.


[33] Agnieszka Wykowska, Eva Wiese, Aaron Prosser, and Hermann J. Müller. 2014. Beliefs about the minds of others
influence how we process sensory information. PLoS One 9, 4 (2014), e94339.
[34] Krista Casler, Lydia Bickel, and Elizabeth Hackett. 2013. Separate but equal? A comparison of participants and data
gathered via Amazon’s MTurk, social media, and face-to-face behavioral testing. Comput. Hum. Behav. 29, 6 (2013),
2156–2160.
[35] Cade Metz. 2016. Google's Artificial Brain Is Pumping Out Trippy-And Pricey-Art. Wired. Retrieved from https://
www.wired.com/2016/02/googles-artificial-intelligence-gets-first-art-show/.
[36] Jane Wakefield. 2015. Intelligent Machines: AI art is taking on the experts. BBC. Retrieved from http://www.bbc.com/
news/technology-33677271.
[37] DeepDream. n.d. Retrieved from https://en.wikipedia.org/wiki/DeepDream.
[38] Monte M. Page. 1974. Demand Characteristics and the Classical Conditioning of Attitudes Experiment. J. Personal.
Soc. Psychol. 30, 4 (1974), 468–476.
[39] F. Robert Sabol. 2006. Identifying exemplary criteria to evaluate studio products in art education. Art Educat. 59, 6
(2006), 6–11.
[40] Jesse Chandler, Pam Mueller, and Gabriele Paolacci. 2014. Nonnaïveté among Amazon Mechanical Turk workers:
Consequences and solutions for behavioral researchers. Behav. Res. Methods 46, 1 (2014), 112–130.
[41] Manfred Eppe, Ewen Maclean, Roberto Confalonieri, Oliver Kutz, Marco Schorlemmer, Enric Plaza, and Kai-Uwe
Kühnberger. 2018. A computational framework for conceptual blending. Artific. Intell. 256 (2018), 105–129.
[42] Christoph Walther. 1994. On proving the termination of algorithms by machine. Artific. Intell. 71, 1 (1994), 101–157.
[43] Thomas Jacobsen. 2006. Bridging the arts and sciences: A framework for the psychology of aesthetics. Leonardo 39,
2 (2006), 155–162.
[44] Meilan Solly. 2018. Christie’s will be the first auction house to sell art made by artificial intelligence. In Smithsonian
Magazine. Retrieved from https://www.smithsonianmag.com/smart-news/christies-will-be-first-auction-house-
sell-art-made-artificial-intelligence-180970086/.

Received August 2018; revised February 2019; accepted April 2019
