
I THE US

(a) United States Patent and Trademark Office decisions

On 29 July 2019, US Patent Application No. 16/524,350, entitled ‘Devices and Methods for
Attracting Enhanced Attention’, was filed at the United States Patent and Trademark Office
(USPTO). The application named a single inventor, with the given name ‘DABUS’ and the family
name ‘(Invention generated by artificial intelligence)’, and named Stephen L Thaler as the
applicant, the assignee, and the legal representative of DABUS.

As a result of the information listed in the Application Data Sheet (ADS), on 8
August 2019 the USPTO issued a Notice to File Missing Parts, which stated that the ADS ‘did
not identify each inventor by his or her legal name’. Following this, on 29 August 2019, a
Petition under 37 CFR 1.181 was filed, requesting supervisory review and vacation of the 8
August 2019 Notice. The Petitioner asserted that the claimed invention in the ‘350
Application was developed by a creativity machine named DABUS, which was ‘trained with
general information in the field of endeavour to independently create the invention’.1 The
Petitioner further argued that DABUS should be named as the inventor and that inventorship
in general should not be limited to ‘natural persons’. The USPTO dismissed the 29 August
2019 Petition on 17 December 2019, and a further Petition requesting reconsideration was
filed on 20 January 2020.

The USPTO issued a decision on the Petition under 37 CFR 1.181 on 27 April 2020,2
refusing to vacate the Notice. Engaging in an interpretive exercise, the USPTO stated that Title
35 of the U.S. Code explicitly and consistently refers to inventors as natural persons. For example,
35 U.S.C. § 101 uses phrases such as ‘whoever invents…’, which implies that a natural person
is intended as the subject. As the USPTO pointed out, interpreting the term ‘inventor’ as
including non-human machines would contradict, and is precluded by, the plain reading of
the patent statutes.3 Furthermore, 35 U.S.C. § 115 uses terms
such as ‘person’, ‘individual’, ‘himself’, and ‘herself’.4 The USPTO ultimately decided that

1 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 3-4.
2 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020).
3 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 4.
4 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 4.
the plain-language reading of the statutes and the complementary case law were instructive, and
that an AI machine cannot be an inventor for the purposes of the law.5

In supporting its decision, the USPTO referred to decisions of various US courts
(concerning state and corporate inventorship), which held that a key element of inventivity is
the mental act of conceiving an invention in the mind, something which only a natural
person is capable of.

The US Court of Appeals for the Federal Circuit, for example, held in University of Utah v
Max-Planck-Gesellschaft zur Forderung der Wissenschaften E.V6 that states could not be
inventors, because conception is a mental act that only a natural person can perform.

The USPTO found further support in the Manual of Patent Examining Procedure
(MPEP), which likewise treats conception as a prerequisite of inventivity.7 The MPEP
defines conception as ‘the complete performance of the mental part of the inventive act’, and
states that conception is ‘the formation in the mind of the inventor…’.8

In the context of states and corporate personalities, the impossibility of performing a
mental act is relatively clear: states and corporations, as juristic persons, acquire legal status
from their relationship with the persons who act on their behalf. However, as AI becomes
increasingly ‘independent’ and ‘inventive’, as described above, it is questionable whether a
form of legal personality should be extended to AI machines.

The USPTO essentially ruled that inventivity consists of the mental act of conception,
something which can only take place in a mind. This is problematic because it assumes that
both the mind and the capacity to conceive are exclusively human traits, an assumption that is
increasingly being challenged.

Whilst the original idea of intellectual property may point to property being the
product of intellectual labour, intellect, intelligence, or consciousness may no longer be
exclusively human traits, or require human involvement. One of the reasons humans are
afforded legal status is our possession of capacities such as intellectual ability, understanding,
appreciation, the ability to form intentions, experience and the capacity to learn, and
consciousness. Humans with restricted or reduced capacities have limited legal capacity:
children, who are presumed to have reduced intellectual capacity, experience, and
understanding, are absolved of certain legal capacities, and people who are mentally
incapacitated do not possess full legal standing, as they are no longer conscious beings
capable of performance, let alone of forming intentions or thoughts. If legal status tracks
these capacities, then machines that are capable of the same should acquire a moral claim to a
form of legal status capable of bearing some, if not all (when able to replicate all human
traits), rights and obligations, such as inventorship rights. See the ‘Autonomy versus
automation’ discussion below for the arguments for and against.

5 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 7.
6 No. 12-1540 (Fed. Cir. 2013).
7 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 6.
8 United States Patent and Trademark Office Decision on Petition, In re Application No. 16/524,350 (27 April 2020) 6.

Since the decision, the USPTO has published, in October 2020, the ‘Public Views on
Artificial Intelligence and Intellectual Property Policy’,9 for which industry experts from all
fields, as well as the general public, had been invited in 2019 to comment on the
challenges that increasingly developed AI poses to patent law, in particular the issues
of AI inventorship and the ownership of patents.10 Interestingly, a majority of the
comments stated that AI is capable of equalling and surpassing human intelligence in the near
future.11 However, as this has not yet eventuated, humans remain integral to the innovation
process involving AI.

II EUROPEAN PATENT OFFICE AND UKIPO

On 24 July 2019,12 applications for patents EP18275163 and EP18275174 were filed, listing
Stephen Thaler as the applicant and DABUS as the inventor, with Thaler claiming the right to
the prospective patents as successor in title.13 The applicant contended before the European
Patent Office (EPO) that the invention contained in the patent was the result of DABUS
identifying the novelty of the invention before a natural person did, and that granting

9 United States Patent and Trademark Office, ‘Public Views on Artificial Intelligence and Intellectual Property Policy’ (2020).
10 A conference was held prior to this engagement: United States Patent and Trademark Office, ‘Artificial Intelligence: Intellectual Property Considerations’ (7 October 2019), available at https://www.uspto.gov/about-us/events/artificial-intelligence-intellectual-property-policy-considerations, accessed on 2 October 2020.
11 United States Patent and Trademark Office, ‘Public Views on Artificial Intelligence and Intellectual Property Policy’ (2020) i-ii.
12 EPO decision of 27 January 2020 on EP 18 275 163 para 1; EPO decision of 27 January 2020 on EP 18 275 174 para 1.
13 EPO decision of 27 January 2020 on EP 18 275 163 para 4; EPO decision of 27 January 2020 on EP 18 275 174 para 4.
inventorship to DABUS would consequently fulfil the aims of patent law, namely
disclosure, the facilitation of innovation, and incentivisation.

The applicant argued that the legislators of the European Patent Convention (EPC) did
not intend to exclude AI-generated inventions from patentability.14 Importantly, the applicant
argued that DABUS was the actual devisor of the invention (the UK equivalent of the US
concept of conception) and that it is trite law that the actual devisor must be listed.
Furthermore, failure to list the actual inventor is a criminal offence in some jurisdictions in
which patents were sought.15

On 27 January 2020, the EPO released its full reasons for refusing patents
EP18275163 and EP18275174.16 The EPO stated that the applications, which listed DABUS
as the sole inventor, did not meet the requirements of Article 81 and Rule 19 of the EPC,
under which the inventor must be a natural person, which a machine is not.17 The EPO also
indicated that the name of a machine, DABUS, does not satisfy Rule 19(1) of the EPC, as
names given to objects do not serve the same function as names given to persons: they
neither identify them nor enable them to exercise rights that stem from their personality.18

The EPO went on to reason that the EPC does not allow entities lacking legal
personality, that is, anything other than natural persons, legal persons, and quasi-legal
entities, to be applicants or inventors.19 Based on the wording of the EPC, inventorship in
particular is reserved for natural persons only: inventorship rights for juristic persons were
contemplated by the legislators, but were not included in the final version of the EPC.20

The EPO noted that the granting of the title of inventor did in fact confer
complementary rights on the inventor.21 This was problematic, as AI systems do not have
14 The applicant argued that patentability requirements are exclusively encapsulated in Articles 52 and 57 of the EPC, in accordance with the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) and the Strasbourg Agreement. He argued that a procedural requirement under Rule 19(1) of the EPC cannot create a substantive exclusion from patentability for inventions made by AI. EPO decision of 27 January 2020 on EP 18 275 163 para 11; EPO decision of 27 January 2020 on EP 18 275 174 para 12.
15 EPO decision of 27 January 2020 on EP 18 275 163 para 12; EPO decision of 27 January 2020 on EP 18 275 174 para 13.
16 EPO decision of 27 January 2020 on EP 18 275 163; EPO decision of 27 January 2020 on EP 18 275 174.
17 EPO decision of 27 January 2020 on EP 18 275 163 para 8; EPO decision of 27 January 2020 on EP 18 275 174 para 9.
18 EPO decision of 27 January 2020 on EP 18 275 163 para 22; EPO decision of 27 January 2020 on EP 18 275 174 para 23.
19 EPO decision of 27 January 2020 on EP 18 275 163 para 24; EPO decision of 27 January 2020 on EP 18 275 174 para 25.
20 Travaux Préparatoires, document IV/4860/61-F 18.
21 These include the inventor’s right, vis-à-vis the applicant for or proprietor of a European patent, to be mentioned as such before the EPO (Article 62 of the EPC); the right to be designated in the European patent application (Article 81 of the EPC); the right to be notified of the designation (Rule 19(3) of the EPC); the right
legal status. The EPO further held that non-natural juristic persons are afforded legal
personality on the basis of legal fiction, created directly by legislation or developed through
jurisprudence. Whilst natural persons are afforded legal status simply by being natural
persons, AI currently has no legal status, as no such status has been created by legislation or
jurisprudence. As a result, AI machines can acquire neither the status of inventor nor the
rights associated with inventorship.22

Furthermore, the EPO rejected the applicant’s contention that he had acquired the
patent rights from DABUS as the machine’s employer.23 The EPO reasoned that AI systems
cannot be employed, nor can they hold or transfer title rights: they are incapable of being
party to an employment agreement and are instead owned by a natural person. The owner of
an AI system may own the output of that system,24 but the EPO noted that ownership is not
the same as inventorship or the rights flowing from it.

III SOUTH AFRICA

Formal examination poses a challenge to AI inventorship

Before evaluating the patentability of inventions under section 25 of the South African
Patents Act 57 of 1978, there are other challenges to AI inventivity that relate to formal
examination. This section aims to highlight these current and future challenges.

(1) Do AI systems qualify for inventorship?

(a) Inventorship definition

South African Patents Act 57 of 1978 (Patent Act)

to be mentioned as inventor in the published European patent application and the European patent specification
(Rule 20(1) of the EPC); and, in the event of a dispute with the applicant or proprietor of the patent, the right to
be mentioned even against the wishes of the applicant or proprietor if a national court has issued a final decision
whereby the applicant or proprietor is required to designate him as inventor (Rule 20(2) of the EPC). The
inventor’s legal position is further protected by Article 60(1) of the EPC which vests with the inventor the initial
right to the European patent and foresees that the inventor can transfer this right to a successor in title. EPO
decision of 27 January 2020 on EP 18 275 163 para 26; EPO decision of 27 January 2020 on EP 18 275 174
para 27.
22 EPO decision of 27 January 2020 on EP 18 275 163 para 27; EPO decision of 27 January 2020 on EP 18 275 174 para 28.
23 EPO decision of 27 January 2020 on EP 18 275 163 para 30; EPO decision of 27 January 2020 on EP 18 275 174 para 31.
24 EPO decision of 27 January 2020 on EP 18 275 163 para 30; EPO decision of 27 January 2020 on EP 18 275 174 para 31.
In South African patent law, a ‘person’, for the purposes of ‘applicant’ or ‘patentee’,
includes, in terms of section 2 of the Patents Act 57 of 1978 (Patent Act), both natural
persons and juristic persons, such as companies registered in terms of the Companies Act 71
of 2008 (Companies Act). Both juristic persons and natural persons can thus apply for, and
hold, patents.

The current Patent Act does not define ‘inventor’. However, the previous Patents Act 37
of 1952 defined ‘inventor’ in section 1 as including ‘the legal representative of a deceased
inventor or of an inventor who is a person under disability, but does not include a
communicatee’.25 Burrell suggests that this old definition should apply under the current
Patent Act. Importantly, even if this definition of ‘inventor’ is adopted, it is broad and would
not of itself preclude non-natural persons or AI from being inventors. However, in the
absence of enabling legislation or an enabling provision, AI cannot acquire even limited legal
status, and hence cannot bear rights, titles, or obligations.

Examining the Patent Act as a whole, there are further challenges to including AI
systems as inventors. Section 10(1)(a) of the Patent Act, for instance, requires the names and
addresses of inventors, applicants, and patentees to be kept on record at the Patent Office.
Under the Companies Act, a company must have a legal address, registered as the physical
address of an office (the principal office if there is more than one). In the case of AI, as there
is no enabling legislation equivalent to the Companies Act, there is no legal address. For AI
systems contained in immovable hardware, this could be resolved by citing the place where
the hardware is situated as the legal address. Difficulties arise, however, in the case of AI
systems that are enclosed in movable hardware, or which are not enclosed in hardware at all
and instead function in a networked or cloud environment spanning multiple geographic
locations – or, in the future, systems that are not stored on Earth, or which are duplicated and
in the control of multiple users concurrently.

Sections 27(1) and (2) of the Patent Act confer rights on inventors, enabling them to
apply for and hold patents, and providing for the assignment or sharing of title rights. As
mentioned, there is no enabling legislation that grants AI systems any form of legal status
that would allow them to bear such rights or obligations.

Section 28 of the Patent Act deals with disputes between parties regarding rights in
inventions and patents, whilst section 62 deals with grounds for the revocation of patents.
Inventorship, much like entitlement to a patent, can currently only be disputed by parties who
bring the matter before the Commissioner; the Patent Act does not empower the
Commissioner to challenge the accuracy of applications. Whilst an application naming an AI
as the inventor can be refused during formal examination if the examiner realises that the
listed inventor is an AI machine (AI names in the future may not be as apparent, or may be a
mononym such as DABUS), this leaves the door open for fraudulent behaviour: someone ‘in
control’ of the AI system could claim inventorship and thus be granted undeserved patent
rights.

25 Section 1 of the Patents Act 37 of 1952.

(b) Does the first and true inventor test pose a challenge to AI inventorship?

There is South African case law that aids in determining who, or what, acquires inventorship
rights. In the 1887 case of Hay v African Gold Recovery Co, which was decided under the old
Law 6 of 1887 of the Zuid-Afrikaansche Republiek, Morice J held that –

‘the words ‘first and true inventor’ are not to be taken in the artificial sense of the English Law,
but in their natural sense. They are not to be limited to persons within the State; nor can
‘inventor’ carry the meaning of ‘importer’. The ‘first and true inventor’ signifies that the person
so described made the discovery himself, and that he did so before anyone else in any part of
the world’.

The concept of ‘the first and true inventor’ originated in English law and has thus formed part
of our law. It first appeared in the Statute of Monopolies 1623,26 which, in section VI,27 first
used this phrase to denote those entitled to inventorship rights.

In the English case of University of Southampton’s Application [2006] RPC 31 (CA),
Laddie J formulated a two-step test to identify the first and true inventor. He noted that –

‘First it is necessary to identify the inventive concept or concepts in the patent or patent
application. Secondly, it is necessary to identify who came up with the inventive concept or
concepts. He or they are the inventors. Thirdly, a person is not an inventor merely because he
‘contributes to a claim’. His contribution must be to the formulation of the inventive concept’.

26 21 Jac 1 c 3.
27 Section VI, entitled ‘Proviso for future patents for 14 years or less, for new inventions’, reads: ‘Provided alsoe That any Declaracion before mencioned shall not extend to any tres Patents and Graunt of Privilege for the tearme of fowerteene yeares or under, hereafter to be made of the sole working or makinge of any manner of new Manufactures within this Realme, to the true and first Inventor and Inventors of such Manufactures, which others at the tyme of makinge such tres Patents and Graunts shall not use, soe as alsoe they be not contrary to the Lawe nor mischievous to the State, by raisinge prices of Commodities at home, or hurt of Trade, or generallie inconvenient; the said fourteene yeares to be [accomplished] from the date of the first tres Patents or Grant of such priviledge hereafter to be made, but that the same shall be of such force as they should be if this Act had never byn made, and of none other’ (own emphasis).
Step one is straightforward; step two, however, speaks to the identity of the person who first
created the inventive concept – in other words, the first person who conceptualised or devised
the invention. This requirement still forms part of English law today, as per section 7(3) of the
UK Patents Act 1977.

This devisor test was enshrined in South African law in Galison
Manufacturing (Pty) Ltd v Set Point Industrial Technology (Pty) Ltd and Shock Proof
Investments 82 (Pty) Ltd (ZACCP) (unreported case no 98/4753, 30-1-2009) (Galison
Manufacturing), where the court held that the proper approach to an ownership dispute under
section 28 of the Patent Act is as follows28 –

‘The task of the court is to identify the inventive concept of the patent or application
and identify who devised it … [t]he court is not concerned with issues of validity or
inventiveness: merely with the concept as described’ (own emphasis).

The test for inventorship in our law is thus the same as the English approach: the devisor
of the invention is the one entitled to inventorship rights, and that conceptualisation must also
be inventive. The question then becomes whether this mental requirement precludes AI
systems from acquiring inventorship.

(i) Inventorship requires conceptualisation or devising

The jurisprudence clearly envisages conceptualisation, or devising, as something that is
limited to people, through the use of words such as ‘he’ or ‘she’, denoting that it is a purely
human trait. The case law, however, is old, and the courts at the time could not have imagined
that technology would advance to this degree. Given the advances in technology, and its
potential, the language used should not restrict us to the idea that devising is purely a human
trait. The first step is then to determine the content of devising, or conceptualisation, and
whether AI can, or will be able to, perform this function or something similar.

The prevailing notion is that only humans are capable of devising, as this ability is
exclusive to the mind, which, in turn, is a domain and experience created only by the human
brain. Related to this is the state of consciousness, which is deemed a product of the mind and
thus an exclusively human state and experience.

There is no agreed-upon definition of consciousness. Ask a doctor in the ICU, and
consciousness is determined by brain activity on an MRI scan and the patient’s GCS
(Glasgow Coma Scale) rating. Ask a neuroscientist, and the seat of consciousness lies in the
brain’s neocortex, with the conscious state achieved through interactions between the cortex
and the thalamus, in which the cortex ‘reflects’ upon the information presented by the
thalamus. Ask a philosopher, and the answer is a state of knowing oneself, one’s desires, and
the factors that have shaped one’s identity. Ask a lawyer, and the answer relates to animus
contrahendi, or whether one is incapax. Ask a student, and they would tell you it is being
‘woke’.

28 Galison Manufacturing para 14.

These differing definitions are mentioned to emphasise that there are many ways in
which consciousness can be interpreted, and there may be ways in which AI machines could
attain a form of consciousness that we do not yet understand. Various theories of
consciousness could be applied, such as integrated information theory, which examines
whether something is conscious, and to what degree, based on mathematical models that do
not require a brain per se. Conceptualisation may be possible in domains that exist outside
the human mind and, as such, may not be a purely human trait.

(ii) Intention, foresight, or planning is necessary for devising

The term ‘devise’ in contemporary language often connotes the idea of intending to do
something. In patent law, if the approach in University of Southampton’s Application is
followed, the term means the formulation of the inventive concept. This does not mean that
intention, or the formulation of the inventive concept, must be present from the beginning of
the inventive process: accidental or coincidental inventions are still patentable. What this
turns on is the appreciation or understanding of the inventiveness of the invention, which can
come after the invention has been conceptualised or ‘created’. For example, suppose an AI
system uses data on chemical compounds that alter the pili of Mycobacterium tuberculosis
(the bonding structures of the bacterium that facilitate cell infection), and as a result produces
as output a novel chemical compound formula that the AI recognises as novel and effective
(akin to producing the technical effect requirement as per US patent law), and which is
subsequently understood by humans to be both novel and effective in fighting off the
bacterium. In this example, that post-output recognition or understanding would be sufficient
to satisfy the devising requirement. A real example of this is Thaler’s claims in the disputed
EPO patents discussed above.
(iii) The devised concept must be inventive

Without tackling the substantive provisions and tests for inventiveness in this piece, the concept of
automation is evaluated here. One of the current arguments against granting inventorship rights to
AI systems is that any output produced is not actually inventive, as it is the result of (1) automation
and (2) the large role that the human programmer or developer of the AI system plays in the process.

To understand the human-computer interaction in the process of developing an invention, it is
helpful to outline the main stages of computational problem solving, with a closer look at those
components most closely related to the design of the problem-solving mechanism.

Computational problem solving is a process that involves:29 (1) problem formulation; (2)
abstraction and modelling; (3) the design of a new, or the adjustment of an existing, algorithm; (4)
programming; (5) data preparation; (6) execution; and (7) the interpretation and communication of results.

(1) Problem formulation

This is the very first step in AI system development: the formulation of the problem, or the
question that the AI system will be asked to compute. In doing this, the developer sets
parameters for the AI and directly creates the requirement to solve a problem. The developer
is thus not only taking the initiative by introducing a problem, but also shaping the way in
which it will be solved by setting the parameters of the problem.

(2) Abstraction and modelling

The problem formulated earlier is now reduced to essential characteristics that can be used to
determine which data is necessary, and to enable the understanding and solving of the
problem.31 Encoded into this are the mathematical correlations (in the form of equations,
logic, rules, frames of reference, and nets) necessary for the computations32 that move from
inputs to outputs. These correlations control how input is dealt with and thus what the output
is.33 This is known as computational modelling. The reduction is based on the perception of
the developer, who decides what the essential elements are, what the conditions of a solution
are, which possibilities are viable and which are not, and which values are used and what
their weight may be (for example, in setting ethical rules for the solution, the developer
creates a parameter of ethics within which the AI system must function).

(3) The design of a new or adjustment of an existing algorithm

The algorithm is the key to solving the problem. It is designed as an explicitly defined set of
instructions, in code form, determining how input is treated in achieving the output.34 The
29 Daria Kim, ‘“AI-Generated Inventions”: Time to Get the Record Straight?’ (2020) 69(5) GRUR International 443, 449.
31 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/682579/computational-modelling-blackett-review.pdf 112.
32 Jeannette M Wing, ‘Computational Thinking and Thinking about Computing’ (2008) 366 Phil Trans R Soc A 3717, 3719. According to Wing, ‘[m]echanisation is possible due to our precise notations and models’.
33 Dietmar P F Moeller, Mathematical and Computational Modeling and Simulation: Fundamentals and Case Studies (Springer 2004) 5; Angel Garrido, ‘Mathematics and Artificial Intelligence, Two Branches of the Same Tree’ (2010) 2(2) Procedia – Social and Behavioral Sciences 1133, 1134.
34 Daria Kim, ‘“AI-Generated Inventions”: Time to Get the Record Straight?’ (2020) 69(5) GRUR International 443, 450.
development and training of an algorithm is commonly said to require the most ingenuity in
the development process of computational systems.35

(4) Programming

This involves coding the developed algorithm for hardware, such as a computer, which can
then perform the computations.36

(5) Data preparation

This is usually done by persons who did not develop the algorithm. The development of
computational modelling systems usually involves many persons with different expertise;
data scientists, for example, are typically hired to determine which data is necessary and
which is not. This raises a further inventorship issue: the large group of persons involved in
the development of such a system may have as much of a claim to inventorship rights as the
end user who runs the computation.
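The division of labour across the seven stages can be illustrated with a deliberately small sketch. Everything here is invented for illustration: the candidate names, features, and weights are hypothetical stand-ins, not any real inventive system.

```python
# Illustrative sketch of the seven stages of computational problem solving.
# All names, features, and weights below are hypothetical.

# (1) Problem formulation: "which candidate compound scores best?"
candidates = ["A", "B", "C"]

# (2) Abstraction and modelling: reduce each candidate to two numeric
#     features chosen by the developer, with developer-chosen weights --
#     the developer's perception of what matters.
features = {"A": (0.9, 0.2), "B": (0.4, 0.8), "C": (0.6, 0.6)}
weights = (0.7, 0.3)  # parameters set by the human developer

# (3) Algorithm design: a scoring rule mapping input features to an output.
def score(feat, w):
    return w[0] * feat[0] + w[1] * feat[1]

# (4) Programming: the rule is already encoded above as executable code.

# (5) Data preparation: assemble the inputs in the form the algorithm expects.
data = [(name, features[name]) for name in candidates]

# (6) Execution: mechanically apply the algorithm to the prepared data.
results = {name: score(feat, weights) for name, feat in data}

# (7) Interpretation and communication of results.
best = max(results, key=results.get)
print(best, round(results[best], 2))  # prints: A 0.69
```

On this sketch, every choice with any claim to ingenuity sits in stages (1)-(5); stages (6)-(7) are entirely mechanical once the earlier stages are fixed.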

From the above, it is clear that much human involvement is currently necessary to create an
AI system. One could argue that the system simply runs an exponential number of simulations
based on the parameters and instructions coded into it by humans, which is simply an automated
process devoid of any true inventiveness. If solving a problem is what makes something inventive (in
English law, the problem-solution approach is used in determining whether the substantive
requirement of inventiveness is met), then the question becomes: at what point is the solution
obvious? If we accept that steps (1)-(5) are where the problem is actually solved, because it is there
that the problem is reduced into a form in which it can be solved, then the solution arrived at in
steps (6)-(7) is obvious and hence not inventive; the latter steps are simply crunching the numbers.
Devising, arguably, has taken place within steps (1)-(5), by all those involved.

In reality, however, the human innovative process is not much different from this more efficient
form of inventing. Consider lab work, much of which is performed under instruction to solve a
particular problem. The solver is provided with the tools, given the requisite skills training and the
current up-to-date information about the problem, and advised on how to proceed. Essentially, the
lab work is just running simulations to see which work and which do not, adapting methods along
the way to solve any issues that arise. This is how deep learning models work. Consider once more
the creativity machine: Dr Thaler provided the machine with data on existing toothbrush designs and
their effectiveness, and from this information the machine created the world’s first crossed-bristle
design. Whilst Dr Thaler may have decided what data to feed the machine, the machine on its own
assigned the weights determining which aspects of the data were more important to the solution.
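That last point, the human selecting the data while the machine assigns the weights, can be sketched in a minimal way. This is not Thaler’s creativity machine; it is a toy gradient-descent regression with invented data, meant only to show a machine arriving at weights that no human specified.

```python
# Minimal sketch (hypothetical, not any real inventive system): the human
# selects the training data; the machine derives its own weights from it.

# Human contribution: choosing which data to feed the machine.
# Each sample: (feature_1, feature_2) -> observed effectiveness.
# (The hidden relationship in this invented data is y = 3*x1 + 2.5*x2.)
samples = [((1.0, 2.0), 8.0), ((2.0, 1.0), 8.5),
           ((3.0, 3.0), 16.5), ((1.0, 0.0), 3.0)]

# Machine contribution: assigning weights by gradient descent on the error.
w = [0.0, 0.0]
lr = 0.02
for _ in range(5000):
    g = [0.0, 0.0]
    for (x1, x2), y in samples:
        err = (w[0] * x1 + w[1] * x2) - y
        g[0] += 2 * err * x1
        g[1] += 2 * err * x2
    w[0] -= lr * g[0] / len(samples)
    w[1] -= lr * g[1] / len(samples)

# The weights the machine arrived at on its own, without a human coding them.
print([round(v, 2) for v in w])
```

The human never states that the first feature matters more; the machine infers the weighting from the data it was given, which is the shape of the argument made about the toothbrush design above.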

Autonomy versus automation and the future

One of the critiques of AI inventivity is that output is simply a result of automation, as described
above. There are various types of automation; in the world of AI machines and robotics, however,
automation is commonly defined as a process involving machines, tools, devices, or systems
developed by humans to perform a function without human involvement, or with minimal
involvement (Springer Handbook of Automation (Springer 2009) part A, 14). The root of automation
(much like autonomy) developed from the Greek word automatos, which means to act
independently or of one’s own volition. Automation encompasses not only the ability to perform
physical acts of labour, but also cognitive functions such as mathematical problem solving (e.g. a
calculator performing functions). Within the machine learning environment, automation is linked to
developing and using algorithms and other techniques to automate solutions to complex problems
that conventional programming may find difficult (Gopinath Rebala, Ajay Ravi and Sanjay Churiwala,
An Introduction to Machine Learning (Springer 2019) 1).

35 John Paul Mueller and Luca Massaron, Deep Learning For Dummies (John Wiley & Sons 2019) 272.
36 Daria Kim, ‘“AI-Generated Inventions”: Time to Get the Record Straight?’ (2020) 69(5) GRUR International 443, 449.

The word automation can therefore be differentiated from autonomy in both its philosophical and
legal meanings. Whilst there are various concepts, tests and definitions surrounding the value
of autonomy, the widely agreed meaning is one of authenticity of choice. The capacity
for autonomous choice is an important consideration in attaching rights and obligations to living
beings. For a machine system to be autonomous, at the very least, it would need to possess a sense of
self, subjectivity, desires and the ability to alter its behaviour and make choices in
accordance with its authentic self.

This could take the form of the machine reprogramming itself to achieve an end that it desires,
perhaps even going against the initial programming of its human developer. At the
moment, machines can programme themselves to optimise their functions, but only within the
programming parameters that were ingrained into them by their human developers. Machines have
adapted themselves in other ways to improve their functionality and efficiency, such as creating a
new language for machines to communicate with one another.

Is AI autonomy possible in the future? It depends on what autonomy means for AI systems. The
human understanding of autonomy as mentioned above may be very different to what autonomy or
autonomous choice is to an AI system. There may be values that exist in domains that humans
cannot comprehend.

In the future, however, as Nick Bostrom postulates, there may come a time when machines
optimise themselves in ways that go against their initial programming (an extreme example being
that AI could determine that humans are not efficient and that, in order to optimise its
function, humans may need to take a back seat or even be removed from the planet entirely).

Possible consequences if inventorship is not granted to AI systems, and possible solutions

Without inventorship rights, an invention may not be patentable, especially where
the AI was solely responsible for inventing. One possible solution is suggested by
King v SA Weather Service 2009 (3) SA 13 (SCA); [2009] 2 All SA 31 (SCA), where copyright was
disputed. In that case, it was held that if a work is created by an employee within the course and
scope of their employment, then the copyright in that work belongs to the employer. This is an
attractive approach, which could render the invention patentable and vest the rights in the
'controller' of the AI system. AI systems are not employable as they do not possess legal status, so
the matter would instead have to be approached from an ownership perspective. The owner of the AI
system would have utilised the AI to create the invention, and hence is entitled to the rights flowing
from it. The AI system is thus a simple tool. Assignment issues do arise on this approach: what if the
controller of the AI is different from the owner of the AI? What if an AI system has multiple users at
once? These issues could be solved with user licence agreements, as Abbott suggests.

Legal forecasting

Whilst Neuralink has thus far been disappointing, some of the aims Musk has for this
technology raise very interesting questions for patent law. Suppose a technology (such as a chip)
were created that implants itself in the brain, uses the body's tools of perception both to
absorb and create data, and produces an inventive output that is recognised by the brain. In
that case, the identity of the inventor becomes far less clear. In this scenario, the brain and the
body operate more like hardware for the software that created the novel invention.
