
Mindware - Chapters 2-3

Tuesday, January 29, 2019 3:01 PM

The idea is to slowly build up a picture of the features of various types of functionalism. We will also
get really clear on Clark's views and his objections to these various positions.

Recap of last time:


- Two key features of the mind
○ Reason-respecting transitions
▪ We have good reason to go from one belief to another
▪ We perform actions that connect rationally with our beliefs
○ Conscious experience
▪ Qualia: the subjective experience of being; the experiences of the senses
▪ This is distinct from merely sensing and reacting to, for example, coffee. A machine
could do that without tasting the coffee.
- How can we then understand the mind as a physical system that is reason-respecting?
○ Two pieces to this
▪ Advances in logic made this project intellectually feasible. Through formal logic, we
have a structural account of reason, which encodes its semantic content as syntactic
symbols.
▪ Advances in computation give us the physical means to enact such a system in the
outside world. A machine can conduct symbolic processes that correspond to the
inferences of formal logic, arithmetic, etc.
□ e.g. we can specify a set of rules that can perform addition on a string of
symbols that represents two numbers (a minimal sketch appears at the end of this recap)
○ These two pieces (logic and computation) do not address the question of conscious
experience, however
- If the mind is like a program, couldn't it then be "run" on machines made of other substances, like
silicon? (pp. 23-25)
○ Objections to this
▪ Simulation is not the same thing as instantiation
▪ Emotions are fundamentally mediated through chemicals
▪ Such a machine would not be conscious
○ Searle's thought experiment
▪ Imagine we slowly replace someone's neurons with tiny machines until all the meat is
gone. These machines do exactly the same thing that neurons do. Would you then
slowly lose all of your conscious experience on the inside, while retaining the same
behaviors on the outside (becoming, in effect, a p-zombie)?
□ Clark doesn't like this argument
 Pizza order argument (p. 24)
◊ A pizza order is an informational phenomenon which can come in
many media. However, the pizza itself is not informational: in order
to have a pizza, you have to have an actual pizza.
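
Returning to the addition example above: here is a minimal sketch of a purely syntactic rule system. The unary encoding and rule format are invented for illustration; the machine only matches and replaces symbols, yet the result tracks the arithmetic.

```python
# Unary encoding (an assumption for this sketch): the number n is written as
# n copies of '1', and a sum is written like "111+11" (i.e. 3 + 2).

REWRITE_RULES = [
    ("1+", "+1"),   # shuttle one stroke across the plus sign
    ("+", ""),      # once no strokes remain on the left, drop the '+'
]

def rewrite_once(s: str) -> str:
    """Apply the first matching rule once; return s unchanged if none match."""
    for pattern, replacement in REWRITE_RULES:
        if pattern in s:
            return s.replace(pattern, replacement, 1)
    return s

def run(s: str) -> str:
    """Keep rewriting until no rule applies (a fixed point)."""
    while (next_s := rewrite_once(s)) != s:
        s = next_s
    return s

print(run("111+11"))  # "11111": the system has 'computed' 3 + 2 = 5 by symbol shuffling
```

Nothing in the rules mentions numbers; the correspondence to addition lives entirely in how we interpret the symbol strings.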

--------------------------------------------------------------------------------------------------------------------------------

Chapter 2:

A classical AI (physical symbol system) approach to mind


- Thinking is understood as rule-governed transitions among concepts that neatly match our
deliberative level of thought.
- As long as our algorithms are rich enough, this will constitute thinking.

- For example, a chess program has symbols that represent the spaces, pieces, and positions on the
board, and it also has a set of rules representing the possible moves and a process to determine
the optimal move. It might think at exactly the same level that you would, deliberatively.
- These algorithms are semantically transparent, such that we can follow their reasoning step-by-
step and recognize it as corresponding to our own
- Possible strengths of this position
○ It could be a sufficient condition for thinking
○ It could be a necessary condition for thinking

What is it like to be in pain on a deliberative level? How would you make a "pain" program? On a
functionalist view, any physical system that enacted this program would be in pain.
- What inputs?
○ Physical damage to the body
- What kinds of internal effects?
○ Desire to avoid pain
○ Can be overridden by other internal states (e.g. desiring to finish a marathon)
▪ Will interact with lots of other different "programs" or modules in the brain
- What external effects?
○ Response to avoid or escape the apparent source of pain
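
A toy sketch of the functional-role idea just listed, with invented class names and thresholds: the "pain" state is specified only by its typical causes (bodily damage), its interactions with other internal states (it can be overridden by a goal), and its typical behavioral effects.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    in_pain: bool = False
    goals: list = field(default_factory=list)   # competing internal states

    def sense(self, tissue_damage: float) -> None:
        """Input condition: sufficient bodily damage tends to produce the pain state."""
        if tissue_damage > 0.3:                 # arbitrary threshold, purely illustrative
            self.in_pain = True

    def act(self) -> str:
        """Output condition: pain tends to cause escape behavior, unless another
        internal state (e.g. wanting to finish a marathon) overrides it."""
        if self.in_pain and "finish_marathon" not in self.goals:
            return "withdraw from the source of damage"
        return "carry on"

runner = Agent(goals=["finish_marathon"])
runner.sense(tissue_damage=0.6)
print(runner.in_pain, runner.act())   # True carry on -- the pain state is overridden by a goal
```

On the functionalist view, anything that instantiated this causal profile (at the right grain) would thereby be in pain; note that the sketch says nothing about what pain feels like, which is exactly the worry about qualia.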
A program for believing that today is Tuesday
- What inputs?
- What kinds of internal or external effects?
○ Consequences to other beliefs (e.g. tomorrow is Wednesday)
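
A similar toy sketch for the belief case, again with invented names: the point is just that adopting one belief carries consequences for other beliefs.

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

class Believer:
    def __init__(self):
        self.beliefs = set()

    def adopt(self, belief: str) -> None:
        """Adding a belief triggers crude consequence-drawing over the other beliefs."""
        self.beliefs.add(belief)
        self._infer()

    def _infer(self) -> None:
        # 'today is X' commits the believer to 'tomorrow is Y'
        for i, day in enumerate(DAYS):
            if f"today is {day}" in self.beliefs:
                self.beliefs.add(f"tomorrow is {DAYS[(i + 1) % 7]}")

b = Believer()
b.adopt("today is Tuesday")
print(b.beliefs)   # contains "today is Tuesday" and "tomorrow is Wednesday"
```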

SOAR is an AI that encodes knowledge as a script (pp. 31-33)

Assumptions behind the PSS view (and SOAR)


1. All thinking is captured at the level of software (as opposed to lower levels like hardware)
2. There is a single unified format of information (e.g. the scripts given to SOAR)
3. Intelligence is a matter of moving through problem spaces (e.g. possible moves in tic tac toe)
4. Programs are written at the level of deliberation, and we could recognize their workings as
particular thoughts (semantic transparency)

Problem spaces
- For example, there are certain goal states in Tic Tac Toe (three in a row)
- In order to get to these goal states, you can branch out different trees of possible moves
- Intelligence is a matter of selecting the possible move that maximizes the chances of winning
(reasoning the correct way through the problem space)
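
A minimal sketch of the problem-space picture, with the tic-tac-toe details deliberately elided: states, a move (successor) function that branches the tree, a goal test, and a search that selects a path to a goal state.

```python
def search(state, moves, is_goal, path=()):
    """Depth-first search through a problem space: follow branching moves until a
    goal state is reached; return the move sequence, or None if no path exists."""
    if is_goal(state):
        return list(path)
    for move, next_state in moves(state):
        result = search(next_state, moves, is_goal, path + (move,))
        if result is not None:
            return result
    return None

# Toy instantiation (not tic-tac-toe): states are numbers, the available moves
# add 1 or 2, and the goal state is 4.
moves = lambda n: [("+1", n + 1), ("+2", n + 2)] if n < 4 else []
print(search(0, moves, is_goal=lambda n: n == 4))   # e.g. ['+1', '+1', '+1', '+1']
```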

How would you work through a Tic Tac Toe problem space?
You might follow these rules for winning tic tac toe
- If you can win, win
- If you can block a loss, do it
- Go center
- Go corner
- Go edge
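
The rule list above, transcribed as an ordered sequence of checks; the 3x3 board representation (a list of nine cells holding 'X', 'O', or None) and the helper name are assumptions for this sketch. Each step is semantically transparent: it reads off as a recognizable piece of deliberation.

```python
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def completing_move(board, player):
    """Return a cell that would give `player` three in a row, if one exists."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None

def choose_move(board, me, opponent):
    if (m := completing_move(board, me)) is not None:        # if you can win, win
        return m
    if (m := completing_move(board, opponent)) is not None:  # if you can block a loss, do it
        return m
    if board[4] is None:                                     # go center
        return 4
    for corner in (0, 2, 6, 8):                              # go corner
        if board[corner] is None:
            return corner
    return next((i for i in (1, 3, 5, 7) if board[i] is None), None)  # go edge

board = ['X', None, None, None, 'O', None, None, None, 'X']
print(choose_move(board, me='O', opponent='X'))   # no win or block available, so a corner: 2
```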

Objections to the functionalist view of the mind:


- Searle's Chinese Room (p. 34)
○ Searle's response
▪ Give up on functionalism--syntactic structures cannot actually have semantic content
○ Clark's response (p. 39)
▪ We must have a finer grained instantiation of understanding Chinese for the room to
"understand" something, which may not be done at the level of conscious
deliberation (i.e. a program that really "understands" Chinese must break assumption
4)

One of the motivations of functionalism over identity theory is that mental states are multiply realizable
across many "hardwares". But this seems to run counter to Clark's view. Maybe consciousness must be
physically instantiated?

Everyday coping objection


- Presses on assumptions 3 and 4
- We have a ton of richness in our everyday lives. Isn't life too complicated to be written as a
semantically transparent program?
○ Two responses
▪ We've just got to make a bigger program
▪ No matter how big the program is, it's not ever going to be able to replicate human
understanding
□ This is Dreyfus' response. He supports this by saying that experts rely on
intuition, while novices think very consciously about rules (p. 38)
□ Points towards connectionism and artificial neural networks (we'll discuss these
next week)

Another objection to assumption 3: we do not live in finite problem spaces; instead there is a seemingly
infinite collection of possibilities.

A rejection of assumption 2: there are many streams of consciousness instead of one single unified
conscious experience (pp. 40-41)

--------------------------------------------------------------------------------------------------------------------------------

Chapter 3:

Folk psychology, propositional attitude psychology


- The idea is that we make sense of other people's behavior by attributing propositional attitudes to
them
- You can get a better explanation of someone dialing 911 by using folk psychology than by using
neurobiology, physics, etc.

Propositional attitudes
- Beliefs, desires, hopes, fears, etc.
- Examples: Jake believes that today is Tuesday; Jake desires the time to be 6:00 PM.

What lessons can we draw from the success of folk psychology?


- Three broad camps:
○ Fodor (realist)
▪ Language of thought
▪ The sentences behind your belief are in your head in some form
▪ If you believe that dogs bark, then inside your head there are mental representations
(symbolic tokens) of 'dog' and 'bark', and these are brought together by an
operation that combines them
▪ Under this view, the reason that folk psychology is successful is that it tracks causally
potent symbols in your mind
▪ A realism like traditional scientific realism. (Why think there are electrons? Because
they explain and predict a lot of things. Why think thought is like language? Because
propositional attitude psychology explains and predicts a lot of things.)
○ Churchland (eliminativist)
▪ Agrees with Fodor that folk psychology has realist ambitions
▪ Believes, like Fodor, that folk psychology is like a scientific theory. Disagrees with Fodor
in that he thinks it is not an adequate theory.
□ He says that folk psychology is unsuccessful because it only works in certain
cases (i.e. normally functioning adults)
□ Behavioral economics shows that people are predictably irrational
□ The history of folk psychology gives cause for concern--it has not altered or
progressed over the years. For example, our folk theory that the earth is flat
didn't turn out too well.
□ Folk psychology has not shown signs of being translatable into the language of
neurobiology
○ Dennett (instrumentalist)
▪ Rejects the claim that folk psychology is in the business of making causal claims
▪ Instead, what we're doing with folk psychology is interpreting behavior as falling
within a certain pattern
□ This involves examining several different stances of analysis (interpretations) of
behavior
 Physical stance
◊ Assume that you're subject to the laws of physics
◊ E.g. the mechanisms of the alarm clock will cause it to sound at a
particular time
 Design stance
◊ Assume that you're designed towards a purpose
◊ E.g. an alarm clock will sound at a particular time to wake someone
up
 Intentional stance
◊ Assume that you're rational
□ Folk psychology involves taking an intentional stance towards other people
