Questions: Is the human brain a computer? Can computers think? How are minds like
computers?
Basics
What is a computer? – small portable computers (phones); desktop computers; wrist
computers (smartwatches – Fitbit); embedded computers like pacemakers; standard
four-function calculators; Wi-Fi routers
Every computer is different, yet they share common attributes – a unified way to model
computers and computation:
- Machines receive input from an external source, and input is provided sequentially
- Input is provided one discrete unit at a time (in both cases – calculator and abacus)
- Each input causes the device to change configuration
- There is a mechanism for reading the answer off after performing the calculation,
based on the configuration of the device
Intuition: finite automata can only keep track of finitely many different pieces of information
– they cannot solve problems that require remembering one of infinitely many different
options, like checking arithmetic expressions for correctness, determining whether two DNA
strands are complementary, or searching a user-provided text database for a user-provided
string. This can be formalized using the Myhill-Nerode theorem – the idea is made
mathematically precise using the equivalence classes of a particular equivalence relation.
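The attributes above can be sketched as a tiny finite automaton. This is a minimal illustration (the state names and helper function are assumptions, not from the notes): input arrives one discrete symbol at a time, each symbol changes the configuration, and the answer is read off the final configuration. Note that the machine tracks only one bit of information (the parity of 1s) – exactly the kind of finite memory the intuition describes.

```python
# Minimal deterministic finite automaton (DFA) sketch: a transition table,
# a start state, and a set of accepting states.
def run_dfa(transitions, start, accepting, inp):
    state = start
    for symbol in inp:                        # input consumed one unit at a time
        state = transitions[(state, symbol)]  # each input changes the configuration
    return state in accepting                 # read the answer off the configuration

# DFA accepting binary strings with an even number of 1s (two states suffice).
even_ones = {('even', '0'): 'even', ('even', '1'): 'odd',
             ('odd', '0'): 'odd',   ('odd', '1'): 'even'}

print(run_dfa(even_ones, 'even', {'even'}, '1101'))  # → False (three 1s)
print(run_dfa(even_ones, 'even', {'even'}, '1001'))  # → True  (two 1s)
```

A problem like checking balanced parentheses cannot be handled this way, because no finite set of states can remember an unbounded nesting depth.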
1936 story – Alan Turing – can you automate mathematics? Can you write some kind of
program, or build some kind of computing device, that given a mathematical statement as
input answers whether the theorem is true or not?
- Computation is driven by a finite state control – the machine doing the calculation
has only finite size
- Input is provided as a finite-length sequence of characters – the input could be written
down on finitely many sheets of paper
- The computer has access to a large working memory – as the calculation unfolds, it has
access to as much auxiliary storage space as it needs
Similar to a finite automaton, but with access to additional memory as and when required.
The machine's memory is an infinite tape (a one-dimensional line of symbols extending in
both directions), which is subdivided into tape cells.
In principle the machine has access to the whole tape and can use it as needed, but at any
point in time it can read only one tape cell; this cell is marked by a tape head.
At each step, the machine changes state a la finite automata, updates the symbol under the
tape head, and moves the tape head left or right.
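The step rule above can be sketched in a few lines. This is an illustrative simulator, not anything from the notes: the tape is a dict indexed by integers (so it is unbounded in both directions), blanks are written as '_', and the example rule table simply inverts every bit.

```python
# Minimal Turing machine sketch. rules maps (state, symbol) to
# (new_state, symbol_to_write, head_move).
def run_tm(rules, start, halt, tape_str):
    tape = {i: c for i, c in enumerate(tape_str)}
    state, head = start, 0
    while state != halt:
        symbol = tape.get(head, '_')                 # '_' marks a blank cell
        state, write, move = rules[(state, symbol)]  # change state a la finite automata
        tape[head] = write                           # update symbol under the head
        head += 1 if move == 'R' else -1             # move the head left or right
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, '_') for i in range(lo, hi + 1)).strip('_')

# Example machine: invert every bit, halting at the first blank.
invert = {('s', '0'): ('s', '1', 'R'),
          ('s', '1'): ('s', '0', 'R'),
          ('s', '_'): ('halt', '_', 'R')}

print(run_tm(invert, 's', 'halt', '1011'))  # → '0100'
```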
- Turing’s paper appeared in 1936, before modern computers – so they do not
generally work the way Turing described
- Many other models of general-purpose computers have been proposed:
o Random access machine – electronic computers – instead of an infinite tape, it
has an infinite collection of memory locations
o Unrestricted grammars – model syntactic structures
o Lambda calculus – function composition (Church – lambda calculus)
o Others: cellular automata, aperiodic tilings
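The lambda-calculus idea of computing purely by function composition can be glimpsed with Church numerals, here rendered as Python lambdas (the encoding is standard; the helper names are illustrative):

```python
# Church numerals: a number n is "apply a function f, n times".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Interpret a Church numeral by counting applications of +1 starting at 0.
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # → 5
```

Everything here is function application; no built-in arithmetic is used until the final interpretation step.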
Despite all these different models, the Turing machine has remained the standard: every
model of computation proposed so far is either weaker than it, equivalent to it, or impossible
under the laws of physics.
Church-Turing Thesis – a hypothesis about the nature of computation in our universe – states
that any model of computation that could be physically realised is either weaker than or
equivalent to the Turing machine. Turing machines capture what computation with unbounded
memory really is.
Prof. Ryan Williams – “It is not a theorem – it is a falsifiable scientific hypothesis and it has
been thoroughly tested”
Emergent Properties – an emergent property of a system is a property that arises out of
smaller pieces yet does not seem to exist in any individual piece.
Examples: individual neurons at work lead to consciousness, love and ennui; individual
atoms obey the laws of quantum mechanics and interact, yet combining them makes iPhones
and pumpkin pies
1. Universality: there exists a universal computer – a device that can perform any
computation that could be done by any feasible computing system
2. Self-reference: the idea that computing machines can ask questions about themselves
Does a computer program always terminate after a finite amount of time has elapsed, or does
it enter an infinite loop? – this problem cannot be solved by computers – a fundamental
limitation.
It cannot be solved because of self-reference – a time-travel-style paradox [if the checker
says the program terminates, it loops and never finishes; if it says the program loops, it
terminates immediately]
- Clever self-reference arguments are powerful and can be used to rule out all sorts of
problems from the realm of solvability by computers. Examples: if software claims to
be a voting machine, we cannot actually determine that it is one; if it claims not to
steal user data, we cannot determine that it doesn't; nor can we determine whether it is
the most power-efficient version of the software.
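The paradox in brackets above can be made concrete. This sketch (function names are assumptions) models what the self-referential program would do given any verdict a hypothetical halting decider could return about it: it always does the opposite, so no consistent verdict exists.

```python
# Sketch of the self-reference argument behind the halting problem.
# Suppose a decider `halts(prog)` existed; the paradoxical program defeats it
# by doing the opposite of whatever the decider predicts about it.
def paradox_given(predicted_to_halt):
    # What the paradoxical program actually does, given the decider's verdict
    # about itself: loop forever if told "halts", halt at once if told "loops".
    return not predicted_to_halt  # True means it actually halts

# Neither verdict is consistent with the program's actual behaviour:
for verdict in (True, False):
    actually_halts = paradox_given(verdict)
    print(verdict, actually_halts, verdict == actually_halts)  # never equal
```

Since every possible answer the decider gives is contradicted, the decider cannot exist.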
Haugeland – Basic idea of cognitive science is that intelligent beings are semantic engines –
automatic formal systems with interpretations under which they consistently make sense
Formal systems –
1. checkers has a definite set of pieces; definite rules as to how to move; a starting
position; a definite set of possible aims [what are the tokens; what is the starting
position; given a position, what moves are allowed]
2. frog-jumper game – a frog jumps over another, which is removed, etc. [the starting
position can vary; definite set of frogs; moves are specific jumps]
3. Logic gate simulator – rewriting strings as other strings – if a string matches the top
of a rule, it can be rewritten as the bottom
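The third example can be sketched as a tiny string-rewriting system. The particular rule table (evaluating AND/OR over 0/1 strings) is an assumed illustration, not from the notes; the point is that the rules match and rewrite substrings in a purely meaning-blind way.

```python
# A tiny string-rewriting formal system in the spirit of the logic gate
# simulator: each rule says "this substring may be rewritten as that one".
RULES = [('0&0', '0'), ('0&1', '0'), ('1&0', '0'), ('1&1', '1'),
         ('0|0', '0'), ('0|1', '1'), ('1|0', '1'), ('1|1', '1')]

def rewrite(s):
    # Apply the first matching rule anywhere in the string, until none apply.
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            if pattern in s:
                s = s.replace(pattern, replacement, 1)
                changed = True
                break
    return s

print(rewrite('1&1|0'))  # → '1'
```

Nothing in the code "knows" about truth values; only under an outside interpretation do the rewrites amount to evaluating Boolean expressions.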
Key features:
Automatic Formal system - A machine that plays chess, creates Boolean circuits, generates
theorems, ..., by itself ...
Control problem – when many moves are legal, an automatic formal system has to choose
among multiple legal moves
- Clever play involves selecting good moves, not just legal ones
- Rules of thumb for selecting moves are called heuristics
- Newell & Simon: the keys to intelligence are symbols and search – formulate the
problem, generate candidate solutions, and test them – minds as automatic formal systems
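The Newell & Simon loop – formulate, generate, test – can be sketched on a toy problem. The problem choice (subset-sum) and the heuristic (try larger numbers first) are illustrative assumptions; the heuristic is a rule of thumb for ordering candidates, not part of the problem's definition.

```python
# Generate-and-test sketch of "symbols and search": candidates are generated
# symbolically and each is tested against the goal condition.
from itertools import combinations

def subset_with_sum(numbers, target):
    # Heuristic: consider larger numbers first, to reach large targets sooner.
    ordered = sorted(numbers, reverse=True)
    for size in range(1, len(ordered) + 1):
        for candidate in combinations(ordered, size):  # generate
            if sum(candidate) == target:               # test
                return candidate
    return None

print(subset_with_sum([3, 9, 8, 4, 5], 13))  # → (9, 4)
```

Legal candidates are plentiful; the heuristic's job is to surface good ones early, which is exactly the control problem described above.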
Levels of Description – a computer is complex layers of virtual machines, compiled into other
machines, compiled into others, compiled into others – because some universal machines are
cheaper to build, while others are more convenient to use.
Key lesson: ' ... a particular physical object can be, at one and the same time, any number of
different machines. There is no single correct answer to the question: which machine is that
(really)?' Which machine is currently implemented? – the laptop is implementing an infinite
number of machines, because there are infinitely many different kinds of automatic formal
systems that could be rendered equivalent to it.
- Idea: AFS are built and structured so that meaning-blind rule manipulation leads to
meaningful results when interpreted by some outside criteria
- Example: good choices, correct choices.
- “If you take care of the syntax, the semantics takes care of itself” – Haugeland’s vision
- Cognitive science reconciles meaningful thought with meaningless physics
Pragmatics - 'The basic idea of cognitive science is that intelligent beings are semantic
engines - in other words, automatic formal systems with interpretations under which they
consistently make sense."
- This would mean: we are automatic formal systems, and the activities we engage in
as automatic formal systems are not intrinsically meaningful – they can only be given
meaning by interpretation from outside
- Computational abilities are not intrinsic to an AFS – it has to be interpreted as
performing a computation
- As a rule, many interpretations are possible, some sensible and some not
- Very strange models of computation are possible (John Searle) – extremely
implausible
"You insist that there is something that a machine can't do. If you will tell me precisely what
it is that a machine cannot do, then I can always make a machine which will do just that." -
John von Neumann
Less obvious (cf. recent AI) – coping with uncertainty: “Brains are prediction machines, not
Turing machines” is a false opposition – rather, “Brains are prediction machines, which are a
certain kind of well-programmed Turing machine”; also perception, creativity and intuition
(Turing), and language use
Strong: • 'all mental processes are computational; there is nothing more to cognition'
Weak: • 'some kinds of mental processes are well modeled as computations'; 'treating thinking
as computation is a useful working hypothesis for theorizing'
The theoretical limitations of computers provide no useful dividing line between human
beings and machines. As far as we know, the brain is a kind of computer, and thought is just a
complex computation. Perhaps this conclusion sounds harsh to you, but in my view it takes
nothing away from the wonder or value of human thought.... To me, life and thought are both
made all the more wonderful by the realization that they emerge from simple, understandable
parts. I do not feel diminished by my kinship to Turing’s machine.
LEVELS OF ANALYSIS
GOFAI – Good old fashioned artificial intelligence – style of cognitive science – symbol
driven cognitive science
Vision of this approach – cognitive representations which have a kind of symbolic structure
and that they can be transformed by application of rules
Objections: if there is a language of thought, with symbols, where can we find them? In
neuroscience? What about the rules? Does successful application of rules imply intelligence
and understanding, or only a simulation of intelligence?
"First, we can ask if we understand what the system does at the computational level: what is
the problem it is seeking to solve via computation? We can ask how the system performs this
task algorithmically: what processes does it employ to manipulate internal representations?
Finally, we can seek to understand how the system implements the above algorithms at a
physical level. What are the characteristics of the underlying implementation (in the case of
neurons, ion channels, synaptic conductances, neural connectivity, and so on) that give rise
to the execution of the algorithm? Note that at each level, we could conceive of multiple
plausible solutions for the level below. This view demands for an understanding at all levels,
and thus sets the bar for "understanding" considerably higher."
== Start by trying to understand what problem the system is trying to solve (computational
level); then what internal representations it is using to try to solve the problem (algorithmic
level); then, given particular representations and processes, how those are actually
implemented in the microprocessor. The suggestion is to start at the top, solve that problem
first, and then examine the solution space at the lower level; if we start at the bottom, there
are too many candidates and too many kinds of mapping.
Marr levels
"Almost never can a complex system of any kind be understood as a simple extrapolation
from the properties of its elementary Components" – David Marr (Vision book)
Not a claim that brain has powers that go above and beyond parts of brain and the way they
interact with each other
It is a claim about what we can know by analysing the parts of the brain
- [I]t is the top level, the level of computational theory, which is critically important
from an information-processing point of view. The reason for this is that the nature of
the computations that underlie perception depends more upon the computational
problems that have to be solved than upon the particular hardware in which their
solutions are implemented. To phrase the matter another way, an algorithm is likely to
be understood more readily by understanding the nature of the problem being solved
than by examining the mechanism (and the hardware) in which it is embodied.
- Marr : '[T]rying to understand perception by studying only neurons is like trying to
understand bird flight by studying only feathers : it just cannot be done. In order to
understand bird flight, we have to understand aerodynamics; only then do the
structure of feathers and the different shapes of birds' wings make sense.'
- 'The ability to reduce everything to simple fundamental laws does not imply the
ability to start from these laws and reconstruct the universe.' (P.W. Anderson)
What this means for cognitive science - "The three levels at which any machine carrying out
an information-processing task must be understood"
1. Computational: the systems solve the same problem – the same input-output mapping. Ask
what is being computed, and why.
2. Algorithmic: the systems do the same things in different ways. Look to runtime and other
traces of intermediate computations.
3. Implementation: the systems do the same things, in the same ways, on different hardware.
Examine the hardware.
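The distinction between the computational and algorithmic levels can be sketched with a toy example (the task and function names are illustrative): two systems compute the same function (same computational-level description) by different algorithms, distinguishable by runtime and intermediate traces, not by input-output behaviour.

```python
# Same computational-level task -- sum the integers 1..n -- two algorithms.
def sum_by_loop(n):
    total = 0
    for i in range(1, n + 1):  # n additions: a linear-time algorithm
        total += i
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2    # Gauss's formula: a constant-time algorithm

print(sum_by_loop(100), sum_by_formula(100))  # → 5050 5050
```

Either algorithm could in turn be implemented on many different kinds of hardware, which is the third, implementation level.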
Pylkkanen & McElree (2007): ► Coercion activity in brain areas associated with social
cognition ► Coercion activity in brain areas associated with language ► Involves social
reasoning (pragmatics), not grammar