The feasibility of artificial consciousness through the lens of neuroscience

Jaan Aru (1), Matthew E. Larkum (2) & James M. Shine (3)

1 Institute of Computer Science, University of Tartu, Estonia
2 Institute of Biology, Humboldt University Berlin, Germany
3 Brain and Mind Center, The University of Sydney, Sydney, Australia

Abstract
Interactions with large language models have led to the suggestion that these models
may be conscious. From the perspective of neuroscience, this position is difficult to
defend. For one, the architecture of large language models is missing key features of
the thalamocortical system that have been linked to conscious awareness in
mammals. Secondly, the inputs to large language models lack the embodied,
embedded information content characteristic of our sensory contact with the world
around us. Finally, while the first two limitations could be overcome by future AI
systems, the third may be harder to bridge in the near future: we argue that
consciousness might depend on having ‘skin in the game’, in that the existence of the
system depends on its actions, which is not true for present-day artificial
intelligence.
Large language models and consciousness

Over the course of documented human history (and probably long before), humans
have speculated about how and why we are conscious. Why is it that we experience
the world around us, as well as our own thoughts, memories and plans? And how
does the organisation of our brain, shaped as it is over evolutionary time [1] and
steeped in social and cultural factors [2], coordinate the activity of its constituent
neurons and glia to allow us to experience anything at all? Recent advances in
neuroimaging have enabled neuroscientists to speculate about how these
mechanisms might arise from the seemingly endless complexity of the nervous
system [3]. In the last few years, a new player has entered the arena – Large
Language Models (LLMs). Through their competence and ability to converse with
us, which in humans is indicative of being awake and conscious, LLMs have forced
us to refine our understanding of what it means to understand, to have agency and
even to be conscious.

LLMs are sophisticated, multi-layer artificial neural networks whose weights are
trained on hundreds of billions of words from natural language conversations
between awake, aware humans. Through text-based queries, users who interact with
LLMs are provided with a fascinating language-based simulation. If you take the
time to use these systems, it is hard not to be swayed by the apparent depth and
quality of the internal machinations in the network. Ask it a question, and it will
provide you with an answer that drips with the kinds of nuance we typically
associate with conscious thought. As a discerning, conscious agent yourself, it’s
tempting to conclude that the genesis of the response arose from a similarly
conscious being – one that thinks, feels, reasons and experiences. Using this type of
“Turing test” as a benchmark, many users are tempted to conclude that LLMs are
conscious [4], which in turn raises a host of moral quandaries, such as whether it is
ethical to continue to develop LLMs that could be on the precipice of conscious
awareness.

This perspective is often bolstered by the fact that the architecture of LLMs is loosely
inspired by features of brains (Fig. 1) – the only objects to which we can currently
(and confidently) attribute consciousness. However, while early generations of
artificial neural networks were designed as a simplified version of the cerebral cortex
[5], modern LLMs have been highly engineered and fit to purpose in ways that do
not retain deep homology with the known structure of the brain. Indeed, many of
the circuit features that render LLMs computationally powerful have strikingly
different architectures from the systems to which we currently ascribe causal power
in the production and shaping of consciousness in mammals [3]. For instance, most
theories of consciousness would assign a central role in conscious processing to
thalamocortical [6–11] and arousal systems [12–14], both features that are
architecturally lacking in LLMs. It is in principle possible for future LLMs to
approximate the crucial computations of the brain, such as global broadcasting
[15,16] or context-dependent signal augmentation [6,7,17]; however, at this stage,
these features appear to be unrelated to the remarkable capacities of modern LLMs.

Figure 1 – Macroscopic topological differences between mammalian brains and large language
models. Left – a heuristic map of the major connections between macro-scale brain structures: dark
blue – cerebral cortex; light blue – thalamus; purple – basal ganglia; orange – cerebellum; red –
ascending arousal system (colours in the diagram are recreated in the cartoon within the inset). Right
– a schematic depicting the basic architecture of a large language model.

One might ask why it is so important for the architecture of LLMs to mimic features
of the brain. The primary reason is that the only version of consciousness that we can
currently be absolutely sure of arises from brains embedded within complex bodies.
This claim could be narrowed further to humans, though many of the systems-level
features thought to be important for subjective consciousness are pervasive across
phylogeny, stretching back through mammals [7,18,19] and even to invertebrates [20]. We
will return to this point, but first we must clarify precisely what we mean by the term
‘consciousness’.

Consciousness and Neuronal Computation

Typically, people rely on interactive language-based conversations to develop an
intuitive sense about whether LLMs are conscious or not. Although these
conversations are remarkable, they are not formal objective measures of
consciousness and do not constitute prima facie evidence for conscious agency. The
advent of LLMs has demanded a re-evaluation of whether we can indeed infer
consciousness directly from interactions with other agents. Thus, there is an
emerging view that the criteria for attributing human-like intelligence need to be
re-assessed [21]. To make sensible progress on this matter, we need to better
understand what exactly people think and assume when talking about
consciousness.

There are different meanings associated with the word “consciousness”: neurologists
often refer to levels of consciousness (e.g., the fact that you are [or are not] conscious;
i.e., state of consciousness; Fig. 2), whereas psychologists often interrogate the
contents of consciousness (e.g., consciousness of something; i.e., content of
consciousness) [11,15]. Furthermore, there is a distinction between different contents
of consciousness: our experiences can be described as primarily phenomenal [22]
(e.g., experiential, the sight/smell of an apple; or the feel of your arm) or more
cognitive (i.e., how we access and report conscious experiences [22]; or how we
manipulate abstract ideas and concepts, such as our sense of self, or ephemeral ideas
such as “justice” or “freedom”; Fig. 2).

Figure 2 – Intransitive vs. Transitive Consciousness. Different states of consciousness characterised
according to Arousal (state of consciousness; x-axis) and Awareness (content of consciousness; y-axis).
While awake, our phenomenal consciousness can involve relatively concrete perceptual experiences
(such as our perception of our body or objects in the world that we can interact with), but also far
more abstract concepts (such as love and justice) that don’t have a requisite phenomenal experience.

There is a relatively sizeable literature on the neural correlates of consciousness
[10,23,24], with many different theories about the neural mechanisms that underlie
conscious processing [3,10,24]. Despite the different views, there is a consensus that
consciousness is supported by ongoing processing within the dense, re-entrant
thalamocortical network that forms the core of our brains [6–8,10,19,24–26]. One
theory that builds on this consensus and perhaps best encapsulates the level of
biological detail relevant for the discussion about consciousness in LLMs is
Dendritic Integration Theory (DIT; [6]; Fig. 3).

In DIT, it is proposed that the levels and contents of consciousness arise at the level
of deep pyramidal neurons, which are large excitatory neurons that hold a central
position in both thalamocortical and corticocortical loops [6]. Unlike other theories of
consciousness, DIT focuses on a key physiological characteristic of thick-tufted, deep
pyramidal cells whose bodies sit in layer 5 of the cerebral cortex (L5TT cells) –
namely, that L5TT cells have two major compartments (Fig. 3, orange and red
cylinders) that process categorically distinct types of information: the basal
compartment (red) processes externally-grounded information, whereas the apical
compartment (orange) processes internally-generated information. At rest, these two
compartments are separated from one another, allowing information to flow
through a network formed by the basal (red) compartments, unencumbered by the
modulatory influence of the apical (orange) compartments. Without this modulation,
this processing remains unconscious. Crucially, the thalamus – a deep yet
highly-connected subcortical hub [27] – controls a biochemical switch that allows the
apical network to interact with the basal network, integrating contextually
modulated cortical processing into conscious experience [6,11]. In DIT, the level of
consciousness is determined by whether the process as a whole is active (or silent),
whereas the contents of consciousness relate to whether a particular pyramidal
neuron (or set of neurons) becomes active, relative to some other set of neurons.

Figure 3 – The neural mechanisms of consciousness. Dendritic Integration Theory (DIT) associates
consciousness with the subset of thick-tufted layer 5 pyramidal neurons (L5TT; right) that are
burst-firing, which occurs when depolarisation of the cell body via basal dendrites (red) coincides
temporally with descending cortical inputs to apical dendrites (orange), particularly in the presence
of gating inputs from higher-order, matrix-type thalamus (blue). Worked example (left): when inputs
hit the retina, they contain information content that drives feed-forward basal activity in the ventral
visual stream; however, only the patterns that are coherent with that information content will be
augmented (e.g., bilateral arrows and simultaneously active apical/basal compartments in the ventral
stream). The top-down prediction of these features from frontal or parietal cortex can augment certain
features of the input stream, causing them to stand out from the background while leaving others
inactive (e.g., the light red basal compartment in the primary visual area). Note that some predicted features may
not be augmented, and the primary stimulus can suggest potential associations that are not realised
(e.g., light red in basal compartment of a parietal neuron). The thalamus (light blue; dotted line) plays
a crucial role in shaping/gating the contents of consciousness. Note that other areas, such as the
hypothalamus, ascending arousal system and superior colliculus play crucial roles in this process, but
are omitted for clarity.

DIT makes a compelling (and now confirmed) prediction: artificially severing access
to the internal model (i.e., decoupling the top, apical compartment), which is
apparently achieved by various anaesthetic agents [6,28], should prevent conscious
experience, despite ongoing neural spiking activity in sensorimotor cortical areas. In
other words, whether activity in sensorimotor areas leads to conscious perception is
determined in part by whether the “right” activity is generated in the top-down
stream of information that projects to the sensorimotor cortex. This aligns closely
with the view that conscious perception is a kind of “controlled hallucination” [29],
in which the hallucination in question corresponds to the top-down information that
predicts (and must match) the bottom-up sensory information stream.
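
To make the gating logic described above concrete, the following is a minimal, deliberately cartoonish sketch of the coincidence-detection idea (in Python). It is not a biophysical model and is not part of DIT itself; the class name, thresholds and the way anaesthesia is represented are our own illustrative assumptions.

```python
# A deliberately cartoonish sketch of the DIT-style coincidence-detection logic
# described above. Thresholds and names are illustrative assumptions, not
# parameters of the theory or of any published model.

from dataclasses import dataclass

@dataclass
class ToyL5TTNeuron:
    basal_threshold: float = 0.5   # feed-forward drive needed for ordinary spiking
    apical_threshold: float = 0.5  # top-down drive needed for apical amplification

    def respond(self, basal_drive: float, apical_drive: float,
                thalamic_gate_open: bool, anaesthetised: bool = False) -> str:
        """Return a qualitative firing mode for one time step."""
        # Anaesthesia is represented here as decoupling the apical compartment,
        # so top-down input can no longer influence the output (cf. [28]).
        apical_effective = apical_drive if (thalamic_gate_open and not anaesthetised) else 0.0

        if basal_drive < self.basal_threshold:
            return "silent"
        if apical_effective >= self.apical_threshold:
            return "burst firing (basal and apical inputs coincide; thalamic gate open)"
        return "regular spiking (feed-forward only; remains unconscious in DIT terms)"

neuron = ToyL5TTNeuron()
print(neuron.respond(basal_drive=0.8, apical_drive=0.9, thalamic_gate_open=True))
print(neuron.respond(basal_drive=0.8, apical_drive=0.9, thalamic_gate_open=True,
                     anaesthetised=True))
```

Under this toy logic, decoupling the apical compartment leaves feed-forward spiking intact while abolishing burst firing, mirroring the prediction that anaesthesia abolishes conscious experience without silencing sensorimotor cortex.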

Importantly (and also somewhat trivially), the architecture of LLMs is devoid of
these features: there is no equivalent in LLMs to dual-compartment pyramidal
neurons, nor a centralised thalamic architecture, nor the many arms of the ascending
arousal system [7,8,14,19,25,30]. Without these features, it is effectively impossible
for LLMs to broadcast their activity into a global workspace [24] that maximises
information content [31] in a way that leads to the recursive, self-reflective activity
[32] required to support a complex sense of self [7,33]. In other words, LLMs are
missing the very features of brains that are currently hypothesized to support
conscious awareness. Although we are not arguing that the human brain is the only
architecture capable of supporting conscious awareness, the evidence from
neurobiology suggests that very specific architectural principles (i.e., more than
simple connections between integrate-and-fire neurons) are responsible for
mammalian consciousness. Leading theories of consciousness, such as Integrated
Information Theory [31], the Global Neuronal Workspace Theory [24] and our own
DIT [6], emphasize neurobiological complexity, positing that consciousness emerges
from highly interconnected architectures. According to DIT, for instance, the loss of
consciousness under anaesthesia results from the catastrophic loss of
interconnectivity brought about by removing feedback information en bloc
[6]. According to the best evidence from nature, therefore, we are hesitant to ascribe
phenomenal consciousness to LLMs, which are topologically extremely simple in
comparison.
The Impoverished Umwelt of an LLM
A distinct topological architecture is not the only problem associated with ascribing
consciousness to LLMs. Importantly, as currently designed, LLMs do not even have
access to the kinds of information that we process to support our conscious
awareness. The portion of the world that is perceptually ‘available’ to an organism
has been described as its “Umwelt” (from the German ‘environment’ [34]). For
instance, human retinas respond to wavelengths of light ranging from ~380-740
nanometers, which we perceive as a spectrum from blue to red (Fig. 3). Without
technological augmentation, we are not sensitive to light outside this narrow band –
in the infrared (>740 nm) or ultraviolet (<380 nm) ranges. We have a similarly
constrained Umwelt in the auditory (we can perceive tones between ~20 and 20,000 Hz),
somatosensory (we can differentiate stimuli as little as ~1 mm apart on some parts of
our body) and vestibular domains (yoked to the three-dimensional structure of our
semicircular canals). Other species can detect other portions of the electromagnetic
spectrum. For instance, honeybees can see light in the ultraviolet range [35], and
some snakes can detect infrared radiation in addition to more traditional visual light
cues [36] – that is, the bodies and brains of other animals place different constraints
on their sensitivity to the sensory world around them.
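
As a toy illustration of this ‘filter’ idea, the sketch below simply checks whether a physical stimulus falls inside the approximate human sensory bands quoted above; the dictionary, function name and exact band edges are our own simplifications for illustration.

```python
# A toy illustration of the 'Umwelt as filter' idea, using the approximate
# human sensory bands quoted in the text. These ranges and names are our own
# simplifications, not a model of perception.

HUMAN_UMWELT = {
    "vision_nm": (380.0, 740.0),     # visible light, in nanometres
    "audition_hz": (20.0, 20000.0),  # audible tones, in hertz
}

def within_umwelt(modality: str, value: float) -> bool:
    """Return True if a physical stimulus falls inside the human band."""
    low, high = HUMAN_UMWELT[modality]
    return low <= value <= high

# Ultraviolet (~350 nm) is visible to honeybees but outside our band; infrared
# (~1000 nm) is detectable by some snakes but not by us; ultrasound is audible
# to bats but not to humans.
print(within_umwelt("vision_nm", 350.0))      # False
print(within_umwelt("vision_nm", 550.0))      # True (green light)
print(within_umwelt("audition_hz", 30000.0))  # False
```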

Crucially, our Umwelt defines our modes of experience – it provides the filter that
shapes the information that can impact our phenomenal conscious awareness. James
Gibson referred to this information – that which is pragmatically available to us – as a
set of “affordances” [19,37–39]. Our unique set of affordances has been shaped over
evolutionary time to contain echoes of previously useful shortcuts and tricks that are
directly yoked to our perceptual experience of our own Umwelt, which in turn
directly constrains our phenomenal capacities. For instance, bright lights in the
visible spectrum trigger cascades that innervate the spinothalamic system, which we
in turn experience as pain [40]; however, similarly intense bursts in the infrared or
ultraviolet spectrum fail to evoke the same discomforting sequelae.

What, if anything, is the Umwelt of an LLM? In other words, what kinds of
affordances does an LLM have access to? By the very nature of its design, an LLM is
only ever presented with binary-coded patterns that are fed into the transformer
architectures comprising the inner workings of these artificial neural networks
[41,42]. The Umwelt of an LLM – the
information afforded to it – is written in a completely different language to the
information that bombards our brain from the moment we open our eyes. This
informational stream has a highly non-linear, complex structure that LLMs are quite
obviously capable of parsing. Crucially, however, the information stream does not
itself make any robust contact with the world as it is. LLMs are infinitely more
different from humans in this regard than we are from bats, and we’re pretty darned
different from bats [43]. Our affordances not only provide access to direct
experience of the physical world, they also inextricably link us to its consequences,
so that we, like bats, pay a price for misinterpreting real-world signals.
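
To make this point tangible, the short sketch below shows what a language model’s input stream looks like at the lowest level. We use a raw byte-level encoding purely for illustration; production systems use learned sub-word tokenisers, but the principle, that text arrives as a bare sequence of integers, is the same.

```python
# A minimal illustration of the point above: all a language model ever
# 'receives' is a sequence of integers. Here we use a raw byte-level encoding
# for simplicity; real LLMs use learned sub-word tokenisers.

sentence = "The apple smells sweet and will crunch when bitten."
token_ids = list(sentence.encode("utf-8"))

print(token_ids[:12])  # e.g. [84, 104, 101, 32, 97, 112, 112, 108, 101, ...]

# Nothing in these integers carries the wavelength of the apple's redness or
# the mechanics of its crunch; any link to the world is inherited indirectly
# from the humans who produced the training text.
```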

While it has been argued that perceptual and cognitive aspects of consciousness are
inextricably linked [44,45], individuals with aphantasia provide an existence
disproof of this argument [46]. Aphantasia is a subtle idiosyncratic perceptual trait
wherein individuals are incapable of voluntarily generating mental
imagery – for instance, when asked to imagine a purple dinosaur balancing on a
yellow beach-ball with their eyes closed, they simply experience the black colour of
the back of their eyelids (note: author JMS is a card-carrying aphantasic).
Fascinatingly, despite this impaired capacity for mental imagery, aphantasics are
perfectly capable of conducting tasks consistent with cognitive consciousness (e.g.,
spatial working memory) without a modicum of accompanying phenomenal
imagery. Although individuals with aphantasia are
phenomenally conscious when they (unsuccessfully) attempt to imagine an item “in
their mind’s eye” (i.e., they are conscious of the backs of their eyelids), they do not
integrate the cognitive and experiential features of their consciousness into a unified
whole. In other words, they are living proof that phenomenal and cognitive
consciousness are dissociable entities.

Figure 4 – The curious case of aphantasia. When an individual perceives an apple (left), they
experience both the primary features of the apple while also anticipating the crunch the apple will
make when they bite into it. In the case of imagination (middle), an individual can perceive a weaker
visual experience of an apple, and also anticipate its crunchiness. In the case of aphantasia (right), the
individual has no phenomenal visual experience of the apple, yet is still capable of anticipating the
crunchiness.

This dissociation provides a crucial distinction that exposes the idiosyncrasies of
LLMs: namely, we propose that LLMs are capable of mimicking the signatures of
cognitive consciousness, but without any capacity for phenomenal, experiential
content. Importantly, while both forms of consciousness rely on informational
relationships, the differences between the two forms of consciousness are telling.
Phenomenal consciousness relates to an analogue mapping between the nervous
system and patterns of information in the environment (e.g., photons or
sound-waves) or in the body (e.g., the machinations of different organ systems). In
contrast, the relationships in cognitive consciousness do not rely on analogue
mappings between a sensor/effector and features of the world, but rather on the
more abstract inter-feature relationships indicative of the concepts in question.
Crucially, the informational structure of our language is inextricably linked to the
same structure that is present in cognitive consciousness – these two entities are
couched in the same informational relationships. For this reason, language-based
descriptions of cognitive consciousness can provide a robust simulation of the
‘shape’ of cognitive consciousness. In contrast, this is simply not the case for
phenomenal consciousness – the script of which is not written in language, but
rather in experiences.
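
A loose sketch of this contrast, under our own simplifying assumptions (the function, the hand-made ‘embeddings’ and their numerical values are invented purely for illustration): an analogue mapping stays anchored to a physical magnitude, whereas a relational representation carries only the pattern of similarities among symbols.

```python
# A loose sketch of the contrast drawn above. The values below are invented for
# illustration; they are not data, and no claim is made about real neurons or
# real language-model embeddings.

import math

def photoreceptor_response(luminance: float) -> float:
    """Toy 'analogue' mapping: output varies continuously with a physical quantity."""
    return math.log1p(luminance)  # compressive, but still anchored to the stimulus

# Toy 'relational' representation: the only content is how items relate to
# one another; nothing anchors these numbers to photons or sound waves.
embeddings = {
    "apple":   [0.9, 0.1, 0.0],
    "pear":    [0.8, 0.2, 0.1],
    "justice": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

print(photoreceptor_response(100.0))                       # tied to a magnitude
print(cosine(embeddings["apple"], embeddings["pear"]))     # high inter-item similarity
print(cosine(embeddings["apple"], embeddings["justice"]))  # low inter-item similarity
```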

The Importance of Skin in the Game


The previous two arguments can be applied to present-day LLMs and help to
demonstrate that they do not have consciousness. However, future AI systems
(including LLMs) might be equipped with different types of inputs, with a global
workspace [16] and perhaps with something equivalent to computations happening
on the dendrites, and might thereby overcome these two arguments. Our third
argument, however, will also apply to future LLMs and to AI systems with a different
architecture: perhaps consciousness is related to specific processes within living
organisms [47–50].

Our main goal is not to convince readers that consciousness can only arise in living
systems, but rather to draw attention to the fact that the common assumption that
consciousness can be captured within computer software might be misleading. For
instance, real biological neurons may have properties that cannot be re-instantiated
within simulations. In particular, a clear but neglected difference between AI
systems (such as LLMs) and brains is that biological neurons exist as real physical
biological entities, whereas ‘neurons’ in deep neural networks and LLMs are just
pieces of code. One cannot take an artificial neuron out of the software – it exists
only within a simulation. In contrast, when a biological neuron is studied under in
vitro conditions, it still exhibits plasticity and can interact with other neurons
through neurotransmission. Furthermore, inside these cells there is no code either, but a
further cascade of real, physical intracellular complexity. For instance, consider the
Krebs cycle that underlies cellular respiration, a key process in maintaining cellular
homeostasis. Cellular respiration enables cells to convert the energy stored in
organic molecules into a form of energy that can be utilized by the cell; however,
this process is not compressible into software. In other
words, capturing cellular respiration in a computer simulation will not keep the cell
alive: processes like cellular respiration need to happen with real physical
molecules.

Another way of stating this is that living systems differ from software and machines
[47,48,51]. Sometimes it is assumed that the difference is mainly in embodiment –
LLMs do not have a body and they do not interact with the world through their
body – but the argument that living systems are different from software and
machines is not so much about embodiment, but rather about having “skin in the
game” [52]. Having skin in the game means, in simple terms, that the organism has
something to lose. An LLM could claim in a conversation that it does not want to be
shut down, but an LLM does not have skin in the game as there is no real
consequence to the software when it is actually shut down. In contrast, in biology the
system has something to lose on several levels [53]. It cannot stop its living
processes, as otherwise it will die. As the philosopher Hans Jonas has said: “The organism has to
keep going, because to be going is its very existence” [54]. If cellular respiration stops, the
cell dies; if cells die, organs fail; if organs fail, the organism will soon die. The system
has skin in the game across levels of processing [53] and these levels are not
independent from each other. There is complex causality between real physical
events at the microscale and consciousness. As of now, there is no scientifically
validated argument for why only living systems should have consciousness (but see
[47–50]); however, given the above, we have to at least entertain the possibility that
consciousness is linked to this complex “skin in the game” process underlying life.

Perhaps consciousness cannot be captured within software because consciousness, as we
know it, is seemingly inextricably linked to life, biology, biochemistry and the
multi-level organisation of the nervous system. Attempting to “biopsy”
consciousness and remove it from these embodied dwellings may be as nonsensical
as attempting to remove the Krebs cycle from a cell. In this light, consciousness is
more akin to a process performed by a living being with a sophisticated nervous
system. A sufficiently sophisticated computer program running a similar computation
may mimic features of consciousness, such as the ability to ‘join the dots’ between
concepts, to abstract away irrelevant details, or even to complete complex tasks that
seemingly require anticipation, counterfactual reasoning and even theory-of-mind;
however, these are all, in our estimation, mere echoes of cognitive consciousness that
inevitably miss the key feature of phenomenal consciousness. That is, they lack the
self-evidencing [55] algorithmic core of agentic beings that affords them conscious
capacities in the first place.

Concluding remarks
Here, we have attempted to provide a systems neuroscientist's perspective on LLMs.
We conclude that, while fascinating and alluring, LLMs are not conscious and are
not likely to become capable of consciousness in their current form. Firstly, we argued that the
topological architecture of LLMs, while highly sophisticated, is sufficiently different
from the neurobiological details of circuits empirically linked to consciousness in
mammals that there is no a priori reason to conclude that they are even capable of
phenomenal conscious awareness (Fig. 1). Secondly, we detailed the vast differences
between the Umwelt of mammals – the ‘slice’ of the external world that they can
perceive – and the Umwelt of LLMs, with the latter being highly impoverished and
limited to keystrokes, rather than the electromagnetic spectrum. Importantly, we
have major reasons to doubt that LLMs would be conscious if we did feed them
visual, auditory or even somatosensory information – both because their
organisation is not consistent with the known mechanisms of consciousness in the
brain, and because they don’t have ‘skin in the game’. That is, LLMs are
not biological agents, and hence have no reason to care about the implications
of their actions. In toto, we believe that these three arguments make it extremely
unlikely that LLMs, in their current form, have the capacity for phenomenal
consciousness; rather, they represent sophisticated simulacra that echo the signatures
of the cognitive aspects of consciousness (Fig. 4), filtered as they are through the
language we use to communicate with one another [21].

Rather than representing a deflationary account, we foresee a number of very useful
implications from this perspective. For one, we should worry much less about any
potential moral quandary regarding sentience in LLMs (Box 1). In addition, we
believe that a refined understanding of the similarities and differences in the
topological architecture of LLMs and mature brains provides opportunities for
advancing progress in both machine learning (by mimicking features of brain
organisation) and neuroscience (by learning how simple distributed systems can
process elaborate information streams). For these reasons, we are optimistic that
future collaborative efforts between machine learning and systems neuroscience
have the potential to rapidly improve our understanding of how our brains make us
conscious.
Box 1: LLMs and moral competence - do we have a moral quandary?

The moral competence of LLMs is already a thorny issue, given that they
can in principle produce statements that aid and abet malevolent purposes on
the part of the human user. Although this is a large and interesting topic, it is not the
main question we would like to address. Rather, in this essay we want to ask:
does it really matter whether LLMs are conscious? If LLMs can match and even exceed our
expectations in terms of getting superficially human-like responses that are useful
and informative, is there any need to speculate about what an LLM experiences?
Some argue forcefully that, from a moral perspective, it does [56]. According to this view, we
should carefully consider the ethical implications for any conscious entity, including
artificial intelligence, principally because it is assumed that some experiences could
be negative and that in this case the AI could also suffer. At this point, it is claimed,
we should care or at least be cautious about whether an AI really does suffer.

In this regard, we predict that LLMs do not (and will not) have experiences that can
be considered suffering in any sense that should matter to human society. One way
of arguing this point appeals to the notion of “skin in the game” [52], which
emphasizes the importance of personal investment and engagement in moral
decision-making, and suggests that those who have a personal stake in an issue are
more competent to make ethical judgments than those who do not [52]. Here, we
would argue that not having the capacity for phenomenal consciousness would
preclude suffering and therefore personal investment. This reasoning also extends to
the common disincentives used for legal matters in which LLMs could become
entangled, such as contracts, libel and the like, which are commonly penalized with
sanctions ranging from monetary compensation to incarceration. Without personal
investment on the part of the LLM, these disincentives would not be taken seriously by
injured parties and would therefore likely destabilize the rule of law.
Acknowledgements
We would like to thank Jakob Hohwy, Kadi Tulver, Christopher Whyte and Gabriel
Wainstein for their helpful comments on the manuscript.
References

1. Cisek, P. (2019) Resynthesizing behavior through phylogenetic refinement. Attention,
Perception & Psychophysics 26, 535
2. Park, C.L. (2010) Making sense of the meaning literature: An integrative review of
meaning making and its effects on adjustment to stressful life events. Psychological
Bulletin 136, 257–301
3. Seth, A.K. and Bayne, T. (2022) Theories of consciousness. Nat Rev Neurosci 23, 439–452
4. Chalmers, D.J. (manuscript) Could a large language model be conscious?
5. Rumelhart, D.E., ed. (1999) Parallel Distributed Processing, Vol. 1: Foundations (12th
printing), MIT Press
6. Aru, J. et al. (2020) Cellular Mechanisms of Conscious Processing. Trends in Cognitive
Sciences 24, 814–825
7. Shine, J.M. (2021) The thalamus integrates the macrosystems of the brain to facilitate
complex, adaptive brain network dynamics. Progress in Neurobiology 199, 101951
8. Llinás, R. and Ribary, U. (2001) Consciousness and the brain. The thalamocortical
dialogue in health and disease. Annals of the New York Academy of Sciences 929,
166–175
9. Tasserie, J. et al. (2022) Deep brain stimulation of the thalamus restores signatures of
consciousness in a nonhuman primate model. Sci. Adv. 8, eabl5547
10. Koch, C. et al. (2016) Neural correlates of consciousness: progress and problems. Nat Rev
Neurosci 17, 307–321
11. Aru, J. et al. (2019) Coupling the State and Contents of Consciousness. Front. Syst.
Neurosci. 13, 43
12. Parvizi, J. and Damasio, A. (2001) Consciousness and the brainstem. Cognition 79,
135–160
13. Fischer, D.B. et al. (2016) A human brain network derived from coma-causing brainstem
lesions. Neurology 87, 2427–2434
14. Shine, J.M. (2023) Neuromodulatory control of complex adaptive dynamics in the brain.
Interface focus 13, 20220079
15. Dehaene, S. et al. (2017) What is consciousness, and could machines have it? Science
358, 486–492
16. VanRullen, R. and Kanai, R. (2021) Deep learning and the Global Workspace Theory.
Trends in Neurosciences 44, 692–704
17. Larkum, M. (2013) A cellular mechanism for cortical associations: an organizing principle
for the cerebral cortex. Trends in Neurosciences 36, 141–151
18. Merker, B. (2007) Consciousness without a cerebral cortex: A challenge for neuroscience
and medicine. Behav Brain Sci 30, 63–81
19. Shine, J.M. (2022) Adaptively navigating affordance landscapes: How interactions
between the superior colliculus and thalamus coordinate complex, adaptive behaviour.
Neuroscience & Biobehavioral Reviews 143, 104921
20. Barron, A.B. and Klein, C. (2016) What insects can tell us about the origins of
consciousness. Proc Natl Acad Sci USA 113, 4900–4908
21. Mitchell, M. and Krakauer, D.C. (2023) The debate over understanding in AI’s large
language models. Proc. Natl. Acad. Sci. U.S.A. 120, e2215907120
22. Block, N. (1995) On a confusion about a function of consciousness. Behav Brain Sci 18,
227–247
23. Koch, C. (2018) What Is Consciousness? Nature 557, S8–S12
24. Mashour, G.A. et al. (2020) Conscious Processing and the Global Neuronal Workspace
Hypothesis. Neuron 105, 776–798
25. Jones, E.G. (2001) The thalamic matrix and thalamocortical synchrony. Trends in
Neurosciences 24, 595–601
26. Redinbaugh, M.J. et al. (2020) Thalamus Modulates Consciousness via Layer-Specific
Control of Cortex. Neuron 106, 66-75.e12
27. Bell, P.T. and Shine, J.M. (2016) Subcortical contributions to large-scale network
communication. Neuroscience & Biobehavioral Reviews 71, 313–322
28. Suzuki, M. and Larkum, M.E. (2020) General Anesthesia Decouples Cortical Pyramidal
Neurons. Cell 180, 666-676.e13
29. Seth, A.K. (2021) Being you: a new science of consciousness, Faber & Faber
30. Jones, B.E. (2020) Arousal and sleep circuits. Neuropsychopharmacol. 45, 6–20
31. Tononi, G. et al. (2016) Integrated information theory: from consciousness to its physical
substrate. Nat Rev Neurosci 17, 450–461
32. Lau, H. and Rosenthal, D. (2011) Empirical support for higher-order theories of conscious
awareness. Trends in Cognitive Sciences 15, 365–373
33. Metzinger, T. (2009) The ego tunnel: the science of the mind and the myth of the self,
BasicBooks
34. Uexküll, J. von (2010) A foray into the worlds of animals and humans: with A theory of
meaning, (1st University of Minnesota Press ed.), University of Minnesota Press
35. Wakakuwa, M. et al. (2007) Spectral Organization of Ommatidia in Flower-visiting
Insects†. Photochemistry and Photobiology 83, 27–34
36. Chen, Q. et al. (2012) Reduced Performance of Prey Targeting in Pit Vipers with
Contralaterally Occluded Infrared and Visual Senses. PLoS ONE 7, e34989
37. Gibson, J.J. (1966) The Senses Considered as Perceptual Systems., Houghton Mifflin,
Boston.
38. Greeno, J.G. (1994) Gibson’s affordances. Psychological Review 101, 336–342
39. Pezzulo, G. and Cisek, P. (2016) Navigating the Affordance Landscape: Feedback Control
as a Process Model of Behavior and Cognition. Trends in Cognitive Sciences 20, 414–424
40. Digre, K.B. and Brennan, K.C. (2012) Shedding Light on Photophobia. Journal of
Neuro-Ophthalmology 32, 68–81
41. Vaswani, A. et al. (2017) Attention Is All You Need. DOI: 10.48550/ARXIV.1706.03762
42. Brown, T.B. et al. (2020) Language Models are Few-Shot Learners. DOI:
10.48550/ARXIV.2005.14165
43. Nagel, T. (1974) What Is It Like to Be a Bat? The Philosophical Review 83, 435
44. Brown, R. et al. (2012) On Whether the Higher-Order Thought Theory of Consciousness
Entails Cognitive Phenomenology, Or: What Is It Like to Think That One Thinks That P?:
Philosophical Topics 40, 1–12
45. Carruthers, P. and Veillet, B. (2011) The Case Against Cognitive Phenomenology. In
Cognitive Phenomenology (Bayne, T. and Montague, M., eds), pp. 35–56, Oxford
University Press
46. Keogh, R. et al. (2021) Aphantasia: The science of visual imagery extremes. In Handbook
of Clinical Neurology 178, pp. 277–296, Elsevier
47. Thompson, E. (2010) Mind in life: biology, phenomenology, and the sciences of mind,
(First Harvard University Press paperback edition.), The Belknap Press of Harvard
University Press
48. Deacon, T.W. (2012) Incomplete nature: how mind emerged from matter, (1st ed.), W.W.
Norton & Co
49. Seth, A.K. and Tsakiris, M. (2018) Being a Beast Machine: The Somatic Basis of Selfhood.
Trends in Cognitive Sciences 22, 969–981
50. Seth, A. (2021) Being you: a new science of consciousness, Penguin
51. Weber, A. and Varela, F.J. (2002) Life after Kant: Natural purposes and the autopoietic
foundations of biological individuality. Phenomenology and the Cognitive Sciences 1, 97–125
52. Taleb, N.N. (2018) Skin in the game: hidden asymmetries in daily life, (First edition.),
Random House
53. Man, K. and Damasio, A. (2019) Homeostasis and soft robotics in the design of feeling
machines. Nat Mach Intell 1, 446–452
54. Jonas, H. (2001) The phenomenon of life: toward a philosophical biology, Northwestern
University Press
55. Hohwy, J. (2016) The Self-Evidencing Brain: The Self-Evidencing Brain. Noûs 50, 259–285
56. Metzinger, T. (2021) Artificial Suffering: An Argument for a Global Moratorium on
Synthetic Phenomenology. J. AI. Consci. 08, 43–66.
