
International Review of Law, Computers & Technology

ISSN: 1360-0869 (Print) 1364-6885 (Online)

The role of automated technology in the creation
of copyright works: the challenges of artificial intelligence

Jesus Manuel Niebla Zatarain

To cite this article: Jesus Manuel Niebla Zatarain (2017) The role of automated technology in the
creation of copyright works: the challenges of artificial intelligence, International Review of Law,
Computers & Technology, 31:1, 91-104, DOI: 10.1080/13600869.2017.1275273


Published online: 22 Feb 2017.


VOL. 31, NO. 1, 91–104

The role of automated technology in the creation of copyright

works: the challenges of artificial intelligence
Jesus Manuel Niebla Zatarain
Doctoral Research Student, School of Law, SCRIPT Centre, University of Edinburgh, Edinburgh, UK

ABSTRACT
Technology today has an increasingly relevant role in areas traditionally considered restricted to humans. This position has been changing due to the increasing capacity of devices to carry out complex tasks without the need for any direct human intervention at all. An example of this can be found in devices that emulate, to a certain extent, human creative processes such as the conception of artistic works. These devices present a new (and, from the commercial point of view, interesting) reality: one where the human element is no longer considered irreplaceable in the creational stage of artistic works. This has a direct impact on the legal framework that surrounds these works, raising the question of whether the current legal framework of copyright is still capable of effectively protecting and incentivising human-generated works. In the following article, this situation will be explored through a description of the potential impact of technology on human-generated work and the position of current legal jurisdictions on this emerging landscape.

KEYWORDS: copyright; legal informatics; artificial legal intelligence

1. Introduction
Technology and copyright law possess a symbiotic relationship that has remained practically
untouched for more than a century. Nevertheless, constant developments in technology in
the twentieth century, especially in intelligent technology, have forced us to reconsider this
relationship in recent years. Traditionally, technical developments were used as a mere
extension of human creativity and invention; as tools that help humans express intention.
The arrival of automated technology, however, could modify this situation.
In this article, the role that intelligent technology plays (or could play) in relation to the
creation of copyright works will be addressed in the context of the digital environment.
Through this, the potential effects on the legal, commercial and technological framework
will be explored and incorporated into a solution that departs from the traditional purely
legal approach by embracing, rather than competing with, technological advancement.

2. Dealing with copyright doomsday: the arrival of Qentis

The automated creation of potentially copyrighted works is a notion that has been around for a
long time. However, it was not until relatively recently that technical tools have enabled

CONTACT Jesus Manuel Niebla Zatarain

© 2017 Informa UK Limited, trading as Taylor & Francis Group

creators to generate works automatically in a volume that could be considered commercially valuable. This was precisely the approach taken by the Russian company Qentis, which stated in 2014 that it had found a way to create every possible text that could be written under a certain length:
Qentis has generated and deployed 97.42% of all possible texts of ten to four hundred words
in length. (Qentis)

Later, the same company claimed to be expanding to other types of material:

Qentis is also working on other copyrighted material such as images, sound and 3D items. The
company has grown to a team of one hundred and twenty associates mostly focused on web
storage and management capacity.

By 2020 every possible photograph will have been created and registered by Qentis. Text
content generation advances even faster (New Company Claims It Uses Algorithms To
Create Content Faster Than Creators Can, Making All Future Creations Infringing). This
auto-generation of works shakes the very core of the copyright business model and
attacks the traditional conception of elements such as the role of author and the impor-
tance of the human element in the creation of artistic works.
This company has shown what proper data management can achieve by addressing
the large volume of information already stored on the Internet. Its business model is actually very simple: they now own every artistic work recently created or about to be made:
The Qentis Corporation works with a powerful network of international law firms that rep-
resent our clients. The law firms notify authors, bloggers, news corporations, publishers and
website owners whenever we feel they have breached the copyrights of our clients. As
Qentis approaches 100 percent of content generation, all content owners will eventually
have to pay royalties to our clients or face massive lawsuits. (Qentis)

To prove its potential suitability, the project showed that the lyrics of the song Applause by
Lady Gaga had been 'created' by Qentis in 2009, four years before the song was released to
the public in 2013. The text can be found under the tag TEXT HDOL-92265-UZS-2300616:36:

You stand here waiting for me to bang the gong

To crash the critic ask, Is it right or is it wrong?

Being far away from you, I found the vein, put it here

I love the applause, applause, applause

I love the applause-plause

I love the applause-plause

For the way that you cheer and scream for me

Applause, applause, applause

Under this new scenario Qentis (at least in theory) becomes the de facto owner of every
potential work that can ever qualify for copyright protection anywhere in the world
(Komuves et al. 2015).

Despite the doomsday scenario described above, this experiment ended up being
just another work created by Michael Marcovici (Manifesto). Marcovici is an artist who
aims to show the role that creativity and copyright have in relation to technology, includ-
ing shining a light on the potentially unwanted results of their interaction. His work pro-
vides a view of the potential scenario that may occur if the unregulated use of
computational techniques is allowed to be applied to the realization of artistic works.
It is worth pointing out that there are several flaws in the current concept of Qentis
itself, as has been pointed out by Komuves et al. (2015): creating a bulk data system, in
which every potential human creation can be contained, would demand such processing
capacity that it would take hundreds of years to finish. Also, even after this process is
finished, another monumental task would be to retrieve any given type of work: a volume
of data this large will be just as difficult to operate as it was to produce.
Nonetheless, irrespective of its many operational challenges, Qentis does provide an
interesting approach to the use of automated technology in relation to copyright works and
highlights some of the key issues involved.
In the following sections, several aspects of this relationship with intelligent technology
will be addressed. To begin with, however, a description of the history of the area will first
be necessary.

3. Automated technology and its relation with copyright law

Regardless of its apparent novelty, the idea of implementing automated technology to
create artistic material has been present for quite some time now.
Early examples refer us to Simrock's musical dice game, which goes back as far as 1792.
This mechanical device was capable of creating up to 45 trillion sets of waltzes, and
despite its primitive composition (its internal mechanics were based on a dice game) its
efficiency in producing new yet similar material was such that it could not be matched
by any contemporary human author (Searle 1980).
With the arrival of artificial intelligence, a new lease of life was given to this approach,
and some began to believe that its impact would reach the market. For example, a Cybernetic
Spatiodynamic Sculpture known as CYSP was equipped with several sensors and based
its performance upon scanning a determined platform to gather its shape, form and
dimensions before then exploring the possible combinations of those inputs (cyberneticzoo.com, A History of Cybernetic Animals and Early Robots).
Similarly, automated technology was used to create attractive combinations of space
and colours, with an early example of this being Sims (1991). Sims proposed the use of
a small set of mathematical functions and local information to determine the colour of
the pixels. This influenced future works such as Hart's (Hart's Abstract Art), who based
his work on controlling evolving colours and forms to create non-traditional, yet beautiful,
images. Another relevant project is the Painting Fool. This aims not only to reproduce the
human cognitive process taking place while a painter is creating a work but also to
understand and apply elements such as colour and relief, using simulations of natural
media such as paints, pastels and pencils in the same manner as a human painter would.
Another implementation of automated technology is the creation of literature and
poetry. The classic example for this is RACTER (Hartman 1996), created in 1985, which is

credited as the first automated device to create a book. This book, a poetry anthology
called The Policeman's Beard is Half Constructed, was based on randomly chosen
words from a hand-crafted lexicon used to fill the gaps provided in a template-based
grammar structure. A sample of the book is presented in the following lines:
A hot and torrid bloom which

Fans wise flames and begs to be

Redeemed by forces black and strong

Will now oppose my naked will

And force me into regions of despair.

The question or condition is interesting. Nevertheless to

embarrass Benton will enrage Helene. Clearly they watch their

affairs. They recognize that doves wing and dogs bark, at all

events they try to aid each other in inciting these creatures of

fantasy. They dream of dogs and jackals riding down some hedge

studded turnpike and this widens their famished and crazy

dreams. (Chamberlain 1984)
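The template-and-lexicon mechanism described above can be illustrated with a minimal sketch; the lexicon entries and templates below are invented for illustration and are not taken from RACTER itself:

```python
import random

# A hand-crafted lexicon, keyed by grammatical slot (illustrative only).
LEXICON = {
    "ADJ": ["hot", "torrid", "wise", "black", "famished"],
    "NOUN": ["bloom", "flame", "dove", "jackal", "turnpike"],
    "VERB": ["begs", "opposes", "watches", "dreams"],
}

# Template-based grammar: fixed text with slots filled at random.
TEMPLATES = [
    "A {ADJ} and {ADJ} {NOUN} which {VERB} to be redeemed",
    "They dream of {NOUN}s riding down some {ADJ} {NOUN}",
]

def generate(rng: random.Random) -> str:
    """Pick a template and fill each slot with a random lexicon entry."""
    template = rng.choice(TEMPLATES)
    out = []
    i = 0
    while i < len(template):
        if template[i] == "{":
            j = template.index("}", i)          # find the slot's closing brace
            out.append(rng.choice(LEXICON[template[i + 1:j]]))
            i = j + 1
        else:
            out.append(template[i])
            i += 1
    return "".join(out)

print(generate(random.Random(0)))
```

Note that each occurrence of a slot is filled independently, so a repeated `{ADJ}` can yield two different adjectives, which is part of what gives such output its accidental strangeness.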

The success of RACTER incentivised the development of research that aimed to reproduce
the cognitive processes used to create literature; RACTER's book was widely praised as one
of the earliest and most successful computer-generated artistic works. As part of this, the
field of Natural Language Generation (NLG) developed and grew, creating a field at the
intersection of artificial intelligence and linguistics that focuses on computer systems
which can produce understandable texts in a particular language (Reiter, Dale, and Feng 2000).
In relation to musical works, stochastic music is the best classic example. Created by the
Romanian-born composer Iannis Xenakis in the 1950s, this introduced probability theory
into music composition. Ten years after its original inception, its developer incorporated
the use of computers, which allowed for the automation and acceleration of stochastic
operations (Serra 1993). This methodology was based on decomposing three basic
elements of music: melody, rhythm and tonal accompaniment.
A more modern example is Evolutionary Music Composition (Jensen 2011), the branch of
evolutionary computation that aims to create musical works. This approach is based on
the implementation of Genetic Algorithms (GA) (Holland 1975), which process the elements
contained in a musical work to provide an outcome based on the best possible combination
of genotypes (Ip, Law, and Kwong 2005). Such a process retains only those elements
considered most suitable, discarding those that fall below a pre-established fitness standard.
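As a rough sketch of how such a GA operates, consider evolving a short melody toward a target contour; the note encoding, fitness function and parameters below are invented for illustration and are far simpler than those used in real evolutionary composition systems:

```python
import random

rng = random.Random(42)
TARGET = [60, 62, 64, 65, 67, 65, 64, 62]  # MIDI pitches: an illustrative target contour

def fitness(genotype):
    """Higher is better: negative distance from the target melody."""
    return -sum(abs(a - b) for a, b in zip(genotype, TARGET))

def mutate(genotype):
    """Randomly nudge one note by a semitone or two."""
    g = list(genotype)
    i = rng.randrange(len(g))
    g[i] += rng.choice([-2, -1, 1, 2])
    return g

def crossover(a, b):
    """One-point crossover of two parent melodies."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=200):
    population = [[rng.randrange(55, 72) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]   # discard below-standard genotypes
        children = [mutate(crossover(rng.choice(survivors), rng.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

The truncation step (keeping only the top half of each generation) is the "discarding below the pre-established standard" described above; real systems replace the toy distance metric with musical fitness measures or human feedback.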
Another implementation is GenJam (Biles 1994), which selects the best combinations of
an already existing song to create potential jazz solos. In this case, variations are created
based on the best possible combinations of the provided outcomes. As part of this, GA
parameters operate by managing large amounts of information and providing potentially
relevant data combinations. Here, each generation (an improvement of a data set) is
measured to assess the feasibility of the improvement, creating a new subsequent
generation based on the already existing material.
The previous cases demonstrate the impact that technology has had on the creation of
artistic works to date, providing an early description of human cognitive processes. However,
the human element was not excluded from the creational process: its presence was
needed to provide the necessary input in each of the above approaches. This final
requirement is, however, now being threatened by the arrival of automated technology. Now,
creational processes can be performed directly by the device, removing the human
element from the equation entirely. In the following lines, the initial insertion of this
newer technology into the news industry will be described.

3.1. The process becomes fully automated: have we opened Pandora's box?
Artificial intelligence provides a new scenario in which the creation of copyrightable works
differs greatly from the traditional one. This new method is the result of several research
projects that aimed to analyse the suitability of reproducing human cognitive processes
via computers (Sawyer 2011). Among its many potential uses, this area of AI can, under
certain conditions, overcome the human element in terms of efficiency. However, AI
operating in this fashion has also been compared to Pandora's Box: human curiosity and
the desire to make tasks more efficient have led to the creation of devices that may,
some argue, present a threat to human presence.
From a practical point of view, such an approach is certainly seductive from the indus-
trial perspective. Autonomous devices have no working time; they do not get tired; and
they can in theory produce works of better quality than those made by humans.
Opponents have argued that this will lead to a slow but steady displacement of the
human element from tasks that were traditionally considered an expression of our nature.
There is a reason for this concern. Machines are in fact becoming smarter (MIT Scientists
Confirm that the AI Pandora's Box Has Been Opened). An example of this is ConceptNet,
part of the Open Mind Common Sense project (MIT Media Lab), which was developed to
explore social interactions and identify the capacity for unforeseen potential social replies
such as word reasoning and vocabulary interaction. The results were astonishing: it scored
60 while its human counterpart barely made it to 50. While there are certain variables that
need to be taken into consideration (such as the fact that the test aimed to measure its
capacity relative to the cognition of children, which may suggest that the result does not
pose a threat), it nonetheless provides an interesting example of exploration in this field.
Intelligent devices are currently being designed to reproduce and understand human
cognitive processes, covering every human task regardless of the age of the subject.
This has not been ignored by the industry when it comes to potential commercial appli-
cations. A good example of this is tailored news services (Guide to Automated Journalism).
The news industry has traditionally been inclined to implement technological tools to
manage enormous amounts of information, but in recent times it has turned to artificial
intelligence due to its multiple applications and high efficiency.
As Hamilton and Turner (2009) stated, in recent years ubiquitous computation has
transformed the landscape of journalism. It has undermined business models, rebalanced
the relative power of reporters and audiences, and accelerated the delivery of information
worldwide. While
computational journalism cannot transform the business situation of contemporary journalism,
it can create new tools that may reduce the cost of watchdog reporting in certain circumstances,
take better advantage of the new information environment and ultimately help
sustain watchdog work during the technological sea change now under way. (Hamilton
and Turner 2009)

These devices begin by managing large volumes of data and processing them under
certain directives: for example, following pre-defined patterns relating to topic, area,
person, location, etc. This demands the capacity to manage large amounts of data,
which is handled by analytical computational tools and new statistical methods to
control and measure large sets of information, so-called 'Big Data'. This allows the
news service to deliver accurate, relevant and up-to-date information about a particular
topic pre-selected by the customer, without any human interaction required.
From a technological point of view, these devices rely on two key capabilities. The first of
these is the capacity to assimilate external input, decompose it and extract relevant yet
general elements to create an outcome. To illustrate this approach in a newspaper
setting, the following example output is provided:
A shallow magnitude 4.7 earthquake was reported Monday morning five miles from West-
wood, California, according to the U.S. Geological Survey. The temblor occurred at 6:25 a.m.
Pacific time at a depth of 5.0 miles. According to the USGS, the epicentre was six miles
from Beverly Hills, California, seven miles from Universal City, California, seven miles from
Santa Monica, California and 348 miles from Sacramento, California. In the past ten days,
there have been no earthquakes magnitude 3.0 and greater centered nearby. (The First
News Report on the L. A. Earthquakes Was Written by a Robot)

Relevant information about the author is provided a few lines later:

This information comes from the USGS Earthquake Notification Service and this post was
created by an algorithm written by the author.

This was created by Quakebot (Earthquakes), software developed by the Los Angeles
Times to provide in situ information about earthquakes. Regardless of its apparent simplicity,
this text provides tailored and accurate information about a specific and relevant event:
precisely what many experts consider the future of this industry will be. Similar approaches
have also been applied to other types of news, such as Crimespotting, which aims to
develop statistical information about crime rates in the San Francisco and Oakland
areas in California.
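A Quakebot-style generator can be sketched as a pipeline from structured feed data to a filled text template; the field names and the 'shallow' threshold below are hypothetical, not the Los Angeles Times' actual code:

```python
# Hypothetical structured record, as might arrive from an earthquake notification feed.
quake = {
    "magnitude": 4.7,
    "depth_miles": 5.0,
    "time": "6:25 a.m. Pacific time",
    "day": "Monday morning",
    "nearest_place": "Westwood, California",
    "distance_miles": 5,
}

def depth_label(depth_miles: float) -> str:
    """Editorial rule (illustrative): quakes shallower than 6.2 miles are 'shallow'."""
    return "shallow " if depth_miles < 6.2 else ""

def render(q: dict) -> str:
    """Fill a fixed report template from the structured record."""
    return (
        f"A {depth_label(q['depth_miles'])}magnitude {q['magnitude']} earthquake "
        f"was reported {q['day']} {q['distance_miles']} miles from "
        f"{q['nearest_place']}, according to the U.S. Geological Survey. "
        f"The temblor occurred at {q['time']} at a depth of {q['depth_miles']} miles."
    )

print(render(quake))
```

The creative choices here (what counts as 'shallow', which facts to foreground) are made once by a human editor when the template is written; every subsequent story is produced without further human involvement, which is precisely what raises the authorship question discussed below.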
The second key element involves taking advantage of several relevant features of
cyberspace, such as online delivery, connection to relevant repository sites, the ongoing
addition of new and potentially relevant information and, most importantly, a seemingly
endless source of potential new clients. In this scenario, the technical benefits overlap with
economic ones, creating an innovative phenomenon in which automation becomes a far
more attractive option than the human journalist.
The presence of autonomous technology in the commercial market is a reality that
needs to be addressed. Its effect, however, goes beyond the purely economic, since it
also has the potential to generate content that qualifies for copyright protection. This is

one of the key issues that legal science aims to solve. With the increasing use of intelligent
technology to create potentially copyrightable material, how should the law proceed? In
the following lines, the position of three main jurisdictions will be addressed in relation to
protection of material generated by machines.

4. The legal effect of the creation of automated works

As has been described throughout this paper, artificial intelligence holds the capacity to
produce new works and is in fact being used already by different industries to create
material based on pre-existing sources.
This approach, however, presents a series of doubts around the legal status of these
works. In particular, copyright protection will be in doubt because machines do not
perform strictly cognitive creative processes; they merely aim to detect certain features
before processing them to create new works. Material created by automated devices
thus has very little chance of aspiring to legal protection, as the device fails to understand
even the basic concepts of expression. Machines simply look for features that allow them to
interact with a work without knowing exactly what their function is in the emulation of
creativity. This has been, to date, the main barrier preventing any serious attempt to
provide legal recognition to works created by an AI.
Among national jurisdictions, the case of the UK stands out. Considered a pioneer in
the management of technology within the artistic field, the Copyright, Designs and
Patents Act 1988 (c.48) states that '[i]n the case of a literary, dramatic, musical or artistic
work which is computer-generated, the author shall be taken to be the person by
whom the arrangements necessary for the creation of the work are undertaken'. In
other words, computer-generated works can qualify for copyright protection.
Crucially, this legal framework does not remove the human element entirely; it merely
moves it to another point of the creation process. While the human element does not
perform the cognitive process, it still provides the input needed to perform this. As
such, copyright protection is awarded to the person providing the input.
This approach, however, has a serious limitation when it comes into contact with the
leading edge of computer science techniques. Advances are being made towards fully
automated systems: Devices that are able to operate based on the particularities of the
environment alone to produce new works. In this increasingly viable scenario, the role
of the human element is relegated to a secondary, even disposable, position, and yet
in this scenario the current UK legal framework does not provide copyright protection for
the resulting works.
For the European Union the panorama of protection is even narrower. According to
Directive 2009/24/EC of the European Parliament and of the Council on Legal Protection
of Computer Programs (Article 1, Section 3), a work should be protected 'in the sense that it
is the author's own intellectual creation. No other criteria shall be applied to determine its
eligibility for protection.' This makes evident the need for human intellectual effort to be
present in the creation of a work, and leaves no room for works created independently.
This, however, may not be the case in the near future. As part of the 'What-If Machine'
project (What-If Machine), which aims to replicate human creativity through the
implementation of software, outcomes have already been produced: the world's first
computer-generated musical was scheduled to run from 22 February
to 5 March 2016 at the Arts Theatre in London's West End (World's First Computer-
Generated Musical to Debut in London).
The US Copyright Office (Compendium of US Copyright Office Practices) follows a
similar path to its European counterpart: '[a]s discussed in Section 306, the Copyright
Act protects original works of authorship', 17 U.S.C. § 102(a). To receive the label of
authorship, a work has to be created by a human. Later, it states that '[t]he Office will
not register works produced by nature, animals, or plants' and, to clarify this, it even
provides examples such as a photograph taken by a monkey or a mural painted by an
elephant. Some lines later, it is stated explicitly that the Office will not register works
'produced by a machine or mere mechanical process that operates randomly or automatically
without any creative input from a human author'. For example, '[a] claim based on a
mechanical weaving process that randomly produces irregular shapes in the fabric
without any discernible pattern' would not be granted copyright.
The position of the US Copyright Office denies the possibility of legal protection
without human presence and even clarifies that such a right cannot be extended to any
other life form or to mechanical devices. It protects the human element as the sole entity
capable of producing, at least in its classical conception, art. Beyond this, the concept is
not defined and is largely left to local interpretation, creating confusion and delaying
the adoption of an actual, generally accepted definition. National courts are left
to their own interpretations, which they provide based on their own national law.
Needless to say, this has not helped the common understanding of the term and has
had repercussions in litigation.
It would appear therefore that copyright currently still protects the position of the
human being as the sole entity capable of providing the cognitive requirements (in the
strict sense) necessary to be granted the label of creator. Computers, by contrast, are
not capable of performing spontaneous thought or imitating improvised cognitive
functions, leaving them adrift from the legal conception of creator. This has
only served to delay the opening of Pandora's Box and shelters the idea that there are
still opportunities to contain its potentially unwanted outcomes.
In relation to the legal status of advanced computer-generated works, however, a key
element is contained in the digital environment. A brief description of its role in the
cognitive process of these devices will therefore be delivered next.

5. Understanding the environment: the role of the input in automated law

So far, technology has evolved to the point where intelligent devices can operate almost
fully autonomously. In the case of copyright works, they can already create pieces based on
pre-established goals, giving rise to the possibility of machines producing legal
consequences. These devices therefore need to possess the necessary skills and capacity to
deal with the eventualities of their (evolving) capabilities.
Let us suppose, for a moment, the case of an intelligent agent (Russell et al. 2003) that
has been set to gather an artistic work with certain characteristics: In this case, a fantasy
text that contains dragons. Ideally, the device should be capable of accessing open
access (Open Access Scholarly Information Source) sites to search for potentially relevant
material. Once it finds the site, it must then be capable of deducing the rules that govern

not only its access to the material but also what uses of the material are legally permitted.
However, it might be the case that the agent also finds relevant material in two other
storage places under two separate legal regimes: the first a privately owned site and
the second a business-type site. The first offers access and use as long as the author is
properly cited in the new work, whilst the latter denies free access, imposing a fee to
see the works contained there and restricting in all cases a reader's ability to use them
to create new works.
Given these three potential scenarios, the device thus needs to perform legal reasoning
based on the specifications of these sites to decide which is most likely to offer a
legally compliant option. In order to do this, the device needs a legal reasoning capability
that can contain and process the elements relevant to each of the three different
situations, including identifying the relevant environmental data.
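The reasoning step described above can be sketched as a simple rule evaluation over machine-readable licence terms; the term fields and the three sources below are hypothetical encodings of the scenarios just described:

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    open_access: bool           # can the agent read the material without paying?
    reuse_allowed: bool         # may the material be used in a new work?
    attribution_required: bool  # must the original author be cited?

# The three regimes from the scenario: open access, private CC-BY-style, commercial.
SOURCES = [
    Source("open-access repository", True, True, False),
    Source("private site (attribution terms)", True, True, True),
    Source("commercial site (paywalled, no reuse)", False, False, False),
]

def compliant_options(sources, goal="create_new_work"):
    """Return sources whose terms permit the intended use, with any obligations."""
    options = []
    for s in sources:
        if goal == "create_new_work" and s.open_access and s.reuse_allowed:
            obligations = ["cite the author"] if s.attribution_required else []
            options.append((s.name, obligations))
    return options

for name, obligations in compliant_options(SOURCES):
    print(name, "->", obligations or "no obligations")
```

A real agent would of course have to derive these boolean terms from licence text or metadata rather than receive them pre-encoded, which is where the environmental-data problem discussed next comes in.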
A potential solution for this is the use of a reasoning technique that is capable of incor-
porating environmental information into the construction of legal reasoning. This is the
approach presented in the Extended Mind method (Clark and Chalmers 1998). This meth-
odology is based on active externalism, which states that the cognitive process contained
in the reasoning module of the device would create an adapted response instead of selecting a pre-defined one contained in the device's database. This allows the device to detect
any relevant aspect contained in the environment and to use these within its reasoning
process to create an accurate legal outcome. Indeed this is the scenario already
implemented in the news generation example discussed previously.
This approach provides a suitable platform that can be used in the embodied version of
these devices. Let us assume the situation of a visually impaired person; one who enjoys
art and decides to visit a museum. This person has an assistive robot (Harmo et al. 2005)
whose main task is to aid him in performing daily tasks. Nevertheless, when the user takes
this device to the museum, a potential new scenario is created: copyright law sets
several rules regarding access to the museum material, which can have a direct effect
on how the device will be able to process it.
In this case, these legal elements could potentially be contained in the environment in
the form of data. For example, the device could be presented with a symbol on a wall
(say, as a common scenario, a camera with a diagonal line crossing it) which means that no
visual capturing of the elements contained in that part of the exhibition is allowed. Yet
a few metres on, the same symbol could be displayed without the diagonal line,
indicating that image processing is now permitted.
In scenarios such as this one, the device needs to be able to adapt its reasoning process
to the external input. In the case of the first exhibition, for example, the device understands
that it cannot take images directly from the works, so it would fall back on looking
for other input that could contain relevant information, such as Quick Response Codes
(QR codes; What is a QR Code) or electronic tags. After detecting a legal source of
information, the robot can then deliver the service to its owner in a legally compliant way. In
the second scenario the device could simply gather its information from the work directly.
Similarly, in the case of an embodied generator of artistic material a potential situation
could be that of a device that aims to take photographs of different open areas to create
visual art based on them. Some of these images may, by chance, contain the faces of
bystanders, producing a potential legal situation involving privacy law. In this situation
the device needs to be able to recognise when the intended image depicts a
general scene in which the presence of people is merely incidental and non-intentional,
and behave accordingly, avoiding unwanted legal consequences.
These examples show the potential implications of intelligent technology and its
capacity to produce legal situations. Unlike humans, these devices are, however, capable
of performing a full legal assessment of their situation in real time. However, this may
present an operational difficulty: how big does the knowledge base need to be in order
to have the capability to process situations that differ so much from each other?
This could be achieved through two different routes. First, by providing the device with
the relevant knowledge for a particular situation in the form of a plug-in: a piece of software
that contains relevant information related to a particular situation. This approach provides
an advantage in closed scenarios such as the museum, where administrators can predefine
the operative rules that these devices should follow. For devices that interact in open
scenarios, operative instructions can be found in gadgets located in the environment, such as
pins located in a person's clothing which provide information about how their wearer
should (or should not) appear in the image that the device aims to capture.
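The plug-in route can be sketched as a device that swaps in a predefined rule set on entering an environment; the module names and rules below are hypothetical:

```python
# Each 'plug-in' is a named rule set that an environment's administrator predefines
# and the device loads on arrival (names and rules illustrative only).
PLUGINS = {
    "museum_gallery_a": {"image_capture": False, "qr_lookup": True},
    "museum_gallery_b": {"image_capture": True, "qr_lookup": True},
    "public_street": {"image_capture": True, "faces_incidental_only": True},
}

class Device:
    def __init__(self):
        self.rules = {}

    def load_plugin(self, environment: str):
        """Swap in the rule set predefined for the current environment."""
        self.rules = PLUGINS.get(environment, {})

    def may(self, action: str) -> bool:
        # Default-deny: an action not covered by the loaded rules is not permitted.
        return self.rules.get(action, False)

d = Device()
d.load_plugin("museum_gallery_a")
print(d.may("image_capture"))  # gallery A forbids direct capture
d.load_plugin("museum_gallery_b")
print(d.may("image_capture"))  # gallery B permits it
```

The default-deny fallback mirrors the compliance posture described in the text: when the environment provides no rule for an action, the device treats it as not permitted rather than guessing.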
This provides a potential solution, based on environmental characteristics, to the legal
challenges faced by these devices. It deals with the current situation of legal compliance
but ensures that the thinking process stops being restricted to a set of pre-defined
rules and instead becomes inclusive, detecting and understanding the role that
external elements have on the legal consequences of actions. This method can be
enhanced by developments in other areas, such as indicators in movement detection
systems (Ullman and Richards 1984), situated cognition (Suchman 1986), studies of real-
world-robotics (Beer 1989) and dynamical approaches to child development (Thelen
and Smith 1996) as well as research into the cognitive properties of collectives of
agents (Hutchins 1995).

6. Copyright enforcement through autonomous technology: an inclusive approach

As has been mentioned above, the interaction of technology with copyright law from an enforcement perspective has come a long way. Departing from the notions provided by traditional Digital Rights Management (DRM) devices, new developments are now being conceived that have the capacity to implement law based on the particularities of the licences that apply to each work and in each situation. The implementation environment has changed: now devices need to have the capacity to interact with external data. This sets the rules for a new scenario, one in which interactions related to copyright management are no longer restricted to humans but are now performed directly by devices. Under this paradigm a new question related to the legal use of these works is presented: is the material gathered going to be reproduced by the device, or is it going to be used to create new works?
Ironically, this new situation seems to free technology from some of the traditional negative conceptions that DRM has attracted since the early days of its implementation. Now, these devices can be provided with a set of rules through which the device can evaluate the licence status of a work and decide whether it is compatible with the user's intended use or not: a far cry from DRM, which simply denied use in the absence of definitive proof of legality. Nevertheless, this presents a new challenge: devices need to be able to properly manage any given situation by understanding its unique particularities. In other words, devices have to be capable of performing legal reasoning as discussed above. Processes based on the traditional conception of legal AI, however, do not fit the particularities of the digital environment, where sheer volume can lead to a potential overload of the reasoning process, causing a decrease in operative performance and unnecessary consumption of operative resources.
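The contrast with DRM's blanket denial can be made concrete with a small sketch. The licence vocabulary below is an assumption, loosely modelled on Creative Commons terms; it is not a real licence-parsing library, and the function name is invented for illustration.

```python
# Sketch: rule-based licence evaluation, in contrast to blanket DRM denial.
# The term tables below are illustrative assumptions, not real licence data.

LICENCE_TERMS = {
    "CC-BY":    {"reproduce": True,  "derive": True,  "commercial": True},
    "CC-BY-NC": {"reproduce": True,  "derive": True,  "commercial": False},
    "CC-BY-ND": {"reproduce": True,  "derive": False, "commercial": True},
    "all-rights-reserved": {"reproduce": False, "derive": False, "commercial": False},
}

def use_permitted(licence: str, intended_uses: set[str]) -> bool:
    """True only if every intended use is permitted by the licence."""
    terms = LICENCE_TERMS.get(licence)
    if terms is None:
        return False  # unknown licence: refuse, as a DRM system would
    return all(terms.get(use, False) for use in intended_uses)

# A device wanting to build a new work from gathered material:
print(use_permitted("CC-BY-NC", {"reproduce", "derive"}))  # permitted
print(use_permitted("CC-BY-ND", {"reproduce", "derive"}))  # ND blocks derivation
```

Unlike DRM, the refusal here is targeted: only the incompatible use is blocked, while uses the licence does permit go ahead.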
An interesting proposal to address this is provided by Oberle et al. (2012). They suggest the separation of the design environment from the application environment. The first contains a description of the scenario, including all the actions that can take place there and, most importantly, their legal relevance. The second contains the coding necessary to perform isolated actions of reasoning within this environment and, through this, to achieve operative and legal efficiency. As an example, imagine the following case:
A convenience store aims to provide a fully autonomous service to its clients. Here, the payment device receives external information and decides whether to authorize the transaction. As part of this, information is gathered by the device, some of which is of a private nature: the computer knows that James Doe bought groceries and cosmetics on January 12, 2016, and paid with his credit card. As a consequence, the device is capable of providing a legal assessment based on the nature of the data it has just received. This, however, despite its capacity to provide legal accuracy, is not suitable for an implementation that is constantly interacting with customers, due to the sheer mass of transactions (and therefore processing) required.
To solve this, this author proposes that the technological implementation should be based on the nature of the interaction itself (design environment), being capable of identifying those processes that require legal assessment (operative environment). This allows the implementation to make a crucial distinction: in the physical world, we do not perform deep, complex legal reasoning every time we pay for something with our credit card. Instead, we decide based on the particularities of the individual event whether a payment is safe or not. This is the key element provided by the design environment: it provides the implementation with relevant operative management to proceed without depending on unnecessary legal reasoning in every single case. Instead it merely focuses on those relevant instances where such operations are necessary.
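The separation described above can be illustrated with a toy sketch. The action table, function names and store scenario are all invented for this illustration; they are not drawn from Oberle et al.'s actual system. The point is only the shape of the split: legal relevance is declared once at design time, and the runtime invokes the (expensive) legal-reasoning step solely for the actions so flagged.

```python
# Toy sketch of the design/operative environment split.
# Design environment: the scenario's actions, annotated once with their
# legal relevance. All names here are illustrative assumptions.
STORE_ACTIONS = {
    "scan_item":          {"legally_relevant": False},
    "total_basket":       {"legally_relevant": False},
    "charge_card":        {"legally_relevant": True},   # payment data
    "store_purchase_log": {"legally_relevant": True},   # personal data
}

def full_legal_assessment(action: str) -> str:
    # Stand-in for an expensive legal-reasoning step.
    return f"{action}: assessed for compliance"

def run(actions: list[str]) -> list[str]:
    # Operative environment: reason only where the design says it matters.
    # Unknown actions default to legally relevant, erring on caution.
    log = []
    for a in actions:
        if STORE_ACTIONS.get(a, {"legally_relevant": True})["legally_relevant"]:
            log.append(full_legal_assessment(a))
        else:
            log.append(f"{a}: executed without legal reasoning")
    return log

for line in run(["scan_item", "scan_item", "total_basket", "charge_card"]):
    print(line)
```

Only one of the four actions in this trace triggers legal reasoning, which is exactly the efficiency gain the design environment is meant to deliver.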
Currently this approach is not yet practically feasible in large-scale environments such as the management of copyrighted works. As this involves the law interacting directly with humans in scenarios that vary constantly, it demands the capacity to adapt to unforeseen events, a feature that is not present in the scenario above. However, this approach provides some interesting elements and ideas for developing an efficient approach towards the management of digital works that offer potential for the future.

7. Conclusions
Technology has been constantly present in the creation of copyrightable material. However, it has historically been considered a mere tool: an instrument that allows human creativity to expand and to develop new concepts and creations. Beyond the unreal threat that Qentis poses lies the actual need to take a deep look at the potential scenarios that may arise as a direct consequence of managing artistic material through automated technology that has the capacity to generate new works.

One of the early reasons to implement traditional DRM devices was to secure legal compliance for digital objects. This scenario changes when robots come onto the scene: instead of being merely restrictive tools, these devices can operate based on pre-defined parameters that ensure both managerial legal compliance and the capacity to design a proper licensing scheme for the newly created work.
Such operation, however, might not always be a direct process, and the device needs to have the capacity to adapt to elements contained in the environment. This would be the case for a device that is given the task of finding a literary work containing a licence compatible with the use intended by the user. This apparently straightforward process might be affected by the site that contains the work, which the device needs to be capable of understanding in order to provide a fully law-compliant outcome. Similarly, where the access regulations of a particular jurisdiction differ from those of the jurisdiction the device comes from, this needs to be accounted for.
As can be seen, legal reasoning should be capable of adapting to external features that could inform the legal status of the material gathered. This can also affect operative efficiency: non-essential legal processes may cause a device to slow down and become too resource-consuming to be considered efficient. A proposal to develop the proper cognitive features for such a device can be based on the already mentioned Extended Mind. Here the reasoning process is able to adapt to the particularities of the environment, allowing the device to detect and include any legally relevant elements that belong to neither the operative instructions of the intelligent device nor the licence of the work. This would potentially allow these devices to operate fully automated and yet still in compliance with the law.
The implementation of automated devices to create new artistic works, however, needs to be addressed carefully. Even having stated that under most jurisdictions they do not generate works that attract copyright status, they still present an interesting option to the industry from an economic point of view. There are instances such as the device developed by Professor Philip Parker, where an artificial author is capable of creating up to 200,000 short stories yearly. A current version of this software aims to assist human writers with corrections and compositions for publishing ('Computers are Writing Novels. Read a Few Samples Here').
In situations such as this one, what should the role of the aspiring human author be? It is well known that, during their initial attempts to earn a place within the artistic market, economic difficulties and industry filters ended up discouraging many potential creators. The mass implementation of technological tools such as the ones described here would make this process anything but easier for them. To protect the human element, and especially to avoid ceding one of the most important human features to machines, the law should provide a regulatory framework through which incentives to prefer human-made work are provided to the industry. Complementary to this, human authors should be given a preferential legal status, leaving artificially created works in a less favourable position and, it is hoped, thus avoiding opening the Pandora's Box that AI could represent for the creative industries.

Notes

1. Watchdog reporting or watchdog journalism refers to informing the public about goings-on in institutions and society, usually in the form of investigations that will benefit the public. To know more, see Waisbord (2013).

Funding

This work was funded under Mexico's National Council of Science and Technology (CONACYT) scholarship, register 218517.

References

Beer, Randall. 1989. Intelligence as Adaptive Behavior. New York: Academic Press.
Biles, John. 1994. "GenJam: A Genetic Algorithm for Generating Jazz Solos." In Proceedings of the International Computer Music Conference, 131–131. International Computer Music Association.
Chamberlain, William. 1984. The Policeman's Beard is Half Constructed. New York: Warner Books.
Clark, Andy, and David Chalmers. 1998. "The Extended Mind." Analysis 58 (1): 7–19.
Hamilton, James T., and Fred Turner. 2009. "Accountability Through Algorithm: Developing the Field of Computational Journalism." In Report from the Center for Advanced Study in the Behavioral Sciences, Summer Workshop, 27–41.
Harmo, Panu, Tapio Taipalus, Jere Knuuttila, José Vallet, and Aarne Halme. 2005. "Needs and Solutions – Home Automation and Service Robots for the Elderly and Disabled." In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 3201–3203. IEEE.
Hartman, Charles O. 1996. Virtual Muse: Experiments in Computer Poetry. Hanover: University Press of New England, 32.
Holland, John H. 1975. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Ann Arbor, MI: University of Michigan Press, 22.
Hutchins, Edwin. 1995. Cognition in the Wild. Cambridge, MA: MIT Press, 155.
Ip, Horace Ho-Shing, Ken C. K. Law, and Belton Kwong. 2005. "Cyber Composer: Hand Gesture-driven Intelligent Music Composition and Generation." In 11th International Multimedia Modelling Conference, 47. IEEE.
Jensen, Johannes Høydahl. 2011. "Evolutionary Music Composition: A Quantitative Approach." Master Diss., Norwegian University of Science and Technology, 7.
Komuves, D., J. Niebla, B. Schafer, and L. Diver. 2015. "Monkeying Around with Copyright – Animals, AIs and Authorship in Law." CREATe Working Paper 2015/02: 1.
Oberle, Daniel, Felix Drefs, Richard Wacker, Christian Baumann, and Oliver Raabe. 2012. "Engineering Compliant Software: Advising Developers by Automating Legal Reasoning." SCRIPTed 9 (2): 280–313.
Reiter, Ehud, Robert Dale, and Zhiwei Feng. 2000. Building Natural Language Generation Systems. Vol. 33. Cambridge: Cambridge University Press, 1.
Russell, Stuart Jonathan, Peter Norvig, John F. Canny, Jitendra M. Malik, and Douglas D. Edwards. 2003. Artificial Intelligence: A Modern Approach. Vol. 2. Upper Saddle River: Prentice Hall, 31.
Sawyer, R. Keith. 2011. Explaining Creativity: The Science of Human Innovation. New York: Oxford University Press, 114–138.
Searle, John R. 1980. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 (3): 417–457.
Serra, Marie-Hélène. 1993. "Stochastic Composition and Stochastic Timbre: Gendy3 by Iannis Xenakis." Perspectives of New Music 31 (1): 237.
Sims, Karl. 1991. "Artificial Evolution for Computer Graphics." ACM 25 (4): 319–328.
Suchman, Lucy. 1986. Plans and Situated Actions. New York: Cambridge University Press.
Thelen, Esther, and Linda B. Smith. 1996. A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press, 348–351.
Ullman, Shimon, and Whitman Richards. 1984. Image Understanding: Advances in Computational Vision. Westport, CT: Greenwood Publishing Group.
Waisbord, Silvio Ricardo. 2013. Watchdog Journalism in South America: News, Accountability, and Democracy. New York: Columbia University Press.

Electronic References
"Big Data: The Next Frontier for Innovation, Competition, and Productivity." McKinsey Global Institute. Last accessed December 6, 2016.
"Crimespotting." Last accessed December 4, 2016.
"Compendium of US Copyright Office Practices." US Copyright Office. Last modified December 22, 2014.
"Computers are Writing Novels. Read a Few Samples Here." Business Insider UK. Last accessed December 6, 2016.
"A History of Cybernetic Animals and Early Robots." Last accessed December 6, 2016.
"Earthquakes." Los Angeles Times. Last accessed November 3, 2016.
"Editorial Analytics: How News Media Are Developing and Using Audience Data and Metrics." Oxford University Reuters Institute for the Study of Journalism. Available at SSRN 2739328: 12. Last accessed December 6, 2016.
"Guide to Automated Journalism." Tow Center for Digital Journalism. Last accessed December 4, 2016.
"Manifesto." Last accessed December 6, 2016.
"New Company Claims It Uses Algorithms To Create Content Faster Than Creators Can, Making All Future Creations Infringing." TechDirt. Last accessed December 6, 2016. https://www.techdirt.
"Hart's Abstract Art." David Augustus Hart. Last accessed December 6, 2016.
"MIT Media Lab." Last accessed November 7, 2016.
"MIT Scientists Confirm that the AI Pandora's Box Has Been Opened." Upriser. Last accessed December 4, 2016.
"Open Access Scholarly Information Source." Last accessed December 6, 2016. http://www.
"Qentis." Last accessed December 6, 2016.
"The First News Report on the L.A. Earthquakes Was Written by a Robot." Slate. Last accessed November 4, 2016.
"What if Machine." Community Research and Development Information Service. Project. Last accessed December 6, 2016.
"What is a QR Code." Last accessed December 6, 2016.
"World's First Computer-Generated Musical to Debut in London." The Guardian. Last accessed December 6, 2016.