
Learning & Behavior (PSYC 3130) 

Learning & Behavior by Paul Chance, 7th edition (Cengage) 

Chapter 1: Learning to change 

Natural Selection 
Evolved behavior 
Reflexes 
Modal Action Patterns (MAP) 
General behavior traits 
Limits of natural selection 
Habituation 
Nature vs Nurture 

Chapter 2: this chapter is dumb 

Chapter 3: Pavlovian Conditioning 

There are two types of reflexes 
Basic conditioning 
Higher order conditioning 
Measuring Pavlovian Learning 
Problems with attempting to measure Pavlovian Learning 
Pseudoconditioning 
Variables affecting Pavlovian Conditioning 
How the CS and US are paired 
CS-US Contingency 
CS-US Contiguity 
Stimulus Features 
Prior experience with CS and US 
Number of CS-US Pairings 
Other variables 
Extinction of conditional responses 
Theories of why conditioning happens 
Stimulus Substitution Theory 
Preparatory Response Theory 

Chapter 5: Operant Learning - Reinforcement 

Thorndike 1911: Law of Effect 
Types of Operant Learning 
Positive reinforcement 
Negative reinforcement 
Kinds of Reinforcement 
Primary reinforcement 
Secondary reinforcement 
Natural reinforcement 
Contrived reinforcement 
Variables that Affect Operant Learning 
Contingency 
Contiguity 
Reinforcer characteristics 
Behavior characteristics 
Motivating Operations 
Those that increase the effectiveness of a consequence 
Those that decrease the effectiveness 
Other Variables 
Theories of Positive Reinforcement 
Hull’s Drive Reduction Theory 
Relative Value Theory & the Premack Principle 
Response-Deprivation Theory 
Theories of Avoidance 
Two Process Theory 
One Process Theory 

Chapter 6 - Reinforcement: Beyond Habit 

Shaping New Behavior 
Insightful Problem Solving 
Creativity & Problem Solving 
Superstition 
Helplessness 

Chapter 7 - Schedules of Reinforcement 

Beginning 
Simple Schedules 
Continuous Reinforcement 
Fixed Ratio 
Variable Ratio 
Fixed Interval 
Variable Interval 
Fixed Duration 
Variable Duration 
Noncontingent Reinforcement 
Fixed Time Schedule 
Variable Time Schedule 
Progressive Ratio Schedule 
Extinction 
Stretching the Ratio 
Compound Schedules 
Multiple Schedule 
Mixed Schedule 
Chain Schedule 
Tandem Schedule 
Cooperative Schedules 
Concurrent Schedules 
The Partial Reinforcement Effect 
Discrimination Hypothesis 
Frustration Hypothesis 
Sequential Hypothesis 
Response Unit Hypothesis 

Chapter 8 - Operant Learning: Punishment 

Types of Punishment 
Positive Punishment 
Negative Punishment 
Variables Affecting Punishment 
Contingency 
Contiguity 
Punisher Intensity 
Introductory Level of Punisher 
Reinforcement of the Punished Behavior 
Motivating Operations 
Other variables 
Theories of Punishment 
Two Process Theory 
One-Process Theory 
Problems with Punishment 
Alternatives to Punishment 

Chapter 10: Observational Learning 

Types of Observational Learning 
Social Observational Learning 
Asocial Observational Learning 
Imitation 
Variables Affecting Observational Learning 
Difficulty of the Task 
Skilled vs Unskilled Model 
Characteristics of the Model 
Characteristics of the Observer 
Consequences of Observed Acts 
Consequences of Observer Behavior 
Theories of Observational Learning 
Bandura’s Social Cognitive Theory 
Operant Learning Model 
Applications of Observational Learning 
Education 
Social Change 

Chapter 11: Generalization, Discrimination and Stimulus Control 

Generalization 
Discrimination 
Stimulus Control 
Generalization, Discrimination and Stimulus Control in the Analysis of Behavior 
Mental Rotation as Generalization 
Concept Formation as Discrimination Learning 
Smoking Relapse as Stimulus Control 
Theories of Generalization and Discrimination 
Pavlov’s Theory 
Spence’s Theory 
Lashley-Wade Theory 

Chapter 12: Forgetting 

Defining Forgetting 
Measuring Forgetting 
Sources of Forgetting 
Degree of Learning 
Prior Learning 
Subsequent Learning 
Changes in Context 
Applications 
Eyewitness Testimony 
Learning to Remember 

Chapter 13: The Limits of Learning 

Physical Characteristics 
Nonheritability of Learned Behavior 
Heredity and Learning Ability 
Neurological Damage and Learning 
Critical Periods 
Preparedness and Learning 

Chapter 1: Learning to change 

Natural Selection 
- the process whereby organisms better adapted to their environment tend 
to survive and produce more offspring.  
- The theory of its action was first fully expounded by Charles Darwin, and it 
is now regarded as the main process that brings about evolution. 

Evolved behavior 

- Reflexes 
- Relationship between a specific event and a simple response to that 
event 
- They are either present at birth or appear at predictable stages of life 
- They may serve to protect the individual from injury 
- Examples of reflexes in humans 
- Withdrawing a limb from a painful object 
- Pupillary reflex 
- Sneeze 
- Vomit reflex 
- Rooting reflex in babies 
- Modal Action Patterns (MAP) 
- Series of related acts found in all or nearly all members of a species 
- They have strong genetic basis 
- Little variability between individuals in species 
- Often elicited by an event called a ​releaser 
- They involve the entire organism not just one or a few muscles or 
glands 
- They are unthinking; there is no reasoning behind them 
- Examples of MAPs 
- Rattlesnake shakes its rattle if faced with harm 
- House cat arches its back and hisses when faced with dogs 
- Male peacock attracts a female by spreading his tail and 
shaking it 
- General behavior traits 
- Defined as the tendency to engage in a certain kind of behavior 
- e.g. the tendency to be shy, aggressive, anxious, or compulsive 
- They occur in a wide variety of situations 
- Heredity does play a role in behavior traits 
Limits of natural selection 
- It is very slow (occurs over generations) 
- Limited in value for coping with abrupt changes 

Habituation 
- Reduction in the intensity or probability of a reflex response as a result of 
repeatedly evoking the response 
- e.g. cats exposed to loud noises; their reaction declined the more often the noise occurred 
- Many things can affect habituation 
- Loudness of the sound 
- Variations in the quality of the sound 
- Number of times the sound occurs 
- Time interval between repeated sounds  

Nature vs Nurture 
- I’m not gonna write notes on nature vs nurture I’ve already taken notes on 
it in 5 otHER COURSES IM SO DONE WITH THIS MAJOR 
 
Chapter 2: this chapter is dumb 

 
Chapter 3: Pavlovian Conditioning 

- There are two types of reflexes 

- Unconditional reflexes 
- Inborn and permanent; found in all members of a 
species 
- Conditional reflexes 
- Not present at birth 
- Acquired through experience 
- Impermanent 
- Also called conditioned reflexes 
- Basic conditioning 
- Present a neutral stimulus that is to become the CS 
- Pair it with the US (present them together) 
- The response becomes linked to the CS and is then a CR 
- An example 
- Clap while giving the dog food 
- The dog will salivate 
- Clap without giving food 
- The dog will salivate at the clap 
- Higher order conditioning 
- Pair a neutral stimulus with a well-established conditioned stimulus 
- No US is needed; the neutral stimulus becomes a CS in its own right 

Measuring Pavlovian Learning 


- You can measure the amount of learning by measuring latency of response 
- This means measuring the interval between the beginning of the 
conditioned stimulus and the first appearance of response 
- You can use test trials 
- This means you present the conditioned stimulus alone to see if the 
response still happens 
- Measure the intensity or strength of the conditioned response 
- See how much saliva for example drops in Pavlov’s experiment  

Problems with attempting to measure Pavlovian Learning 

- Pseudoconditioning 
- The tendency of a neutral stimulus to elicit a response after an 
unconditioned stimulus has elicited a reflex response, even 
without any pairing 
- If a nurse coughs when giving you an injection and you wince, 
you might wince the second time the nurse coughs, even 
with no injection 
Variables affecting Pavlovian Conditioning 

- How the CS and US are paired 


- Trace conditioning 
- CS begins and ends before US appears 
- e.g. buzzer sound and THEN a puff of air in the eye to condition 
a blink when the buzzer sounds 
- Delay conditioning 
- The US appears before the CS disappears 
- Buzzer sounds for 5 seconds and the puff of air happens 
during the 5 seconds 
- Simultaneous conditioning 
- The US and CS coincide exactly 
- Ring a bell and blow a puff of air at the exact same time 
- Backward conditioning 
- The CS follows the US 
- Puff of air followed by the sound of a buzzer 
- It is very difficult to produce a conditioned response with the 
backward procedure 
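The four pairing arrangements above differ only in the relative timing of the CS and the US. A minimal sketch (the function name and the time units are illustrative assumptions, not from the text; it simplifies "simultaneous" to matching onsets):

```python
def pairing_type(cs_start, cs_end, us_start):
    """Classify a CS-US pairing by the relative timing of the two stimuli."""
    if cs_start == us_start:
        return "simultaneous"  # CS and US coincide
    if cs_start > us_start:
        return "backward"      # CS follows the US
    if cs_end <= us_start:
        return "trace"         # CS begins and ends before the US appears
    return "delay"             # US appears before the CS disappears

print(pairing_type(0, 2, 5))  # trace: buzzer ends, then the air puff
```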
- CS-US Contingency 
- A contingency is an if-then statement 
- X is contingent (dependent) on Y happening 
- The amount of learning depends on how reliably the conditioned 
stimulus predicts the unconditioned stimulus 
- If the CS does not always precede the US, learning does 
not occur as readily 
- Contingency is essential to Pavlovian learning 
- CS-US Contiguity 
- The closeness in time or space between two events 
- Contiguity in Pavlovian conditioning means the interval between 
the conditioned stimulus and the unconditioned stimulus 
- Sometimes called the Interstimulus interval (ISI) 
- The shorter the ISI, the quicker conditioning occurs 
- However, there are exceptions 
- Eye blink conditioning needs to be within half a second 
- Taste aversion conditioning can be with intervals of several 
hours 
- It also varies with the level of stress 
- Stimulus Features 
- Some stimuli are better for conditioning than others 
- Compound stimuli are better for conditioning 
- A compound stimulus is two or more stimuli presented at the 
same time, e.g. a red light and a buzzer together 
- However, some stimuli will overshadow others 
- Strong stimuli overshadow weak ones 
- Intensity of the stimulus is also very important (stronger is better) 
- But a stimulus can be too intense 
- e.g. a very bright light can itself elicit blinking, which 
may interfere with learning 
- Prior experience with CS and US 
- A dog that has heard a bell repeatedly before would be more 
difficult to condition to a bell sound 
- The appearance of a stimulus without the unconditioned stimulus 
interferes with the ability for the stimulus to become a CS later 
- This is called latent inhibition 
- Novel stimuli (never before seen) are easier to make into conditional 
stimuli 
- Blocking (Kamin blocking) 
- Failure of a new stimulus to become a CS when it is 
presented as part of a compound that already contains 
a stimulus established to bring about the conditioned 
response; the established stimulus “blocks” conditioning 
to the new one 
- Number of CS-US Pairings 
- Conditioning usually follows a decelerating curve 
- Conditioning occurs very rapidly and strength of conditioned 
response increases rapidly in the beginning, then stabilizes 
- “The sooner the individual adapts, the better” 
- Other variables 
- The younger the subject, the easier to condition 
- Temperament also affects conditioning 
- More excitable dogs in trials learn faster 
- Stress also affects conditioning 
- Higher stress = easier to condition 

Extinction of conditional responses 


- Once the conditional response is established it can be maintained 
indefinitely so long as the unconditioned stimulus comes back every now 
and then 
- If the conditioned stimulus comes around repeatedly without the 
unconditioned stimulus, the response becomes weaker and weaker 
- The procedure of repeatedly presenting the conditioned stimulus alone is 
called extinction 
- Forgetting = deterioration in performance over time; extinction = the 
contingency is dissolved 
- The extinction procedure resembles conditioning, but without the US 
- Spontaneous recovery = after extinction and a rest period, presenting the 
conditioned stimulus again elicits the response (the dog salivates once 
more) 
Theories of why conditioning happens 

Stimulus Substitution Theory 


- The conditioned stimulus simply replaces the unconditioned stimulus to 
evoke the response 
- A piece of evidence for this is that Pavlov turned on a lamp when giving 
food, and after repetition, the dog began to lick the lamp, which supports 
the idea that the dog replaced the food with the lamp  
- There is evidence which does not support this theory 
- The conditioned response is weaker and less reliable than the 
unconditioned response 

Preparatory Response Theory 


- The unconditioned response is a response designed to deal with an 
unconditioned stimulus (e.g. salivation for food), but the conditional 
response prepares the organism for the unconditioned stimulus that is 
about to appear 

Chapter 5: Operant Learning - Reinforcement 

Thorndike 1911: Law of Effect 


- There is a relationship between behavior and its consequences 
- This is called the law of effect 
- Responses to a situation are followed by either satisfaction or 
annoyance 
- When followed by satisfaction, the animal’s connection to that 
behavior strengthens, and vice versa 

Types of Operant Learning 

- Positive reinforcement 
- The consequence of doing something is the appearance of (or 
increase in intensity of) a stimulus 
- This stimulus is something that someone would want 
- Sometimes called reward learning 
- Negative reinforcement 
- Behavior is strengthened by the removal or decrease in intensity of a 
stimulus 
- This leads to escape learning or escape-avoidance learning 
- Both positive and negative reinforcement strengthen behavior 

Kinds of Reinforcement 

- Primary reinforcement 
- Reinforcers that appear to be innately effective 
- They are not dependent on learning experiences 
- They are often called unconditioned reinforcers 
- Examples are food, water, sexual stimulation, sleep, social contact, 
environmental control 
- You can be satiated by them 
- Secondary reinforcement 
- Not innate, but the result of learning 
- Examples are praise, recognition, smiles, applause 
- Called conditioned reinforcers 
- Somewhat weaker than primary reinforcers 
- But they satiate much more slowly 
- And they are much easier to deliver immediately 
 

- Natural reinforcement 
- Events that follow from a behavior 
- If you pedal the bike, the bike moves forward 
- You climb the stairs and you reach the top 
- Contrived reinforcement 
- Events provided with the purpose of modifying behavior 
- Giving a child a cookie if they say cookie 

Variables that Affect Operant Learning 

- Contingency 
- The degree of correlation between a behavior and its consequence 
- Reinforcers are contingent on many aspects of behavior 
- Contiguity 
- Refers to the gap in time between a behavior and the reinforcing 
consequence 
- The shorter the interval, the faster learning occurs 
- Reinforcer characteristics 
- Some reinforcers work better than others 
- A large reinforcer can counteract a delay in reinforcement 
- However, the larger the reinforcer already is, the less benefit each 
further increase provides 
- Knowing which reinforcers are preferred can improve the 
effectiveness of a reinforcement procedure 
- Behavior characteristics 
- Behavior that involves smooth muscles and glands is harder to 
reinforce than behavior involving skeletal muscles 
- Evolved tendencies can make the reinforcement of behavior more 
or less difficult 
- For example getting a bird to peck a disc would be easier if 
the bird is a pigeon not a hawk because pigeons peck by 
nature but hawks rip their food apart  
- Motivating Operations 
- Defined as anything that changes the effectiveness of a 
consequence 
- There are two kinds 
- Those that increase the effectiveness of a consequence 
- Called ‘establishing operations’ 
- Example: depriving an animal of food makes the 
consequence of giving food as a reinforcer more 
effective 
- The greater the level of deprivation, the more 
effective the reinforcer 
- Pain and fear can both be establishing operations as 
well as deprivations 
- Those that decrease the effectiveness 
- Called ‘abolishing operations’ 
- For example, drugs that reduce the reinforcing effects 
of food 
- Can help people lose weight 
- Or drugs that reduce the reinforcing power of nicotine 
or heroin 
- Can help people quit their addiction 
- Other Variables 
- Competing contingencies 
- The effect of reinforcing a behavior will be very different if the 
behavior also produces punishing consequences or if 
reinforcers are simultaneously available for other kinds of 
behavior 
Theories of Positive Reinforcement 

Hull’s Drive Reduction Theory 


- Animals and people behave because of motivational states called drives 
- There is no evidence that secondary reinforcers reduce physiological drives  
- I.e, giving positive praise does not make you less hungry or thirsty 
- Hull suggested that secondary reinforcers derive their reinforcing powers 
from (and therefore are dependent on their association with) 
drive-reducing primary reinforcements 
- This is not a perfect theory because not all reinforcers can be classified 
easily into primary or secondary 
- Also, there are too many reinforcers which neither reduce drives nor get 
their reinforcing properties from their association with primary reinforcers 

Relative Value Theory & the Premack Principle 

- Reinforcers are usually thought of as stimuli but Premack began to think of 
them as behavior 
- I.e, usually the reinforcer is the delivery of food to a rat but if we 
consider the reinforcer to be eating food, then it becomes a behavior 
not a stimulus 
- Premack thought that some behaviors are more likely to occur than other 
behaviors 
- I.e, a rat is more likely to eat than to press a lever given the 
opportunity 
- Different kinds of behavior have different values 
- These ‘relative values’ determine the reinforcing properties of behavior 
- This is called the ‘relative value theory’ 
- Premack suggested measuring the amount of time a participant engages 
in both activities when given a choice between them 
- According to him, reinforcement involves a relation between two 
behaviors, in which one reinforces the other 
- Therefore the more probable response will reinforce the less probable one 
- This is known as the Premack principle 
- High probability behavior reinforces low probability behavior 
 
 
 
Response-Deprivation Theory 
- A variation of the relative value theory; sometimes called the equilibrium 
theory or the response-restriction theory 
- Behavior becomes reinforcing when the individual is prevented from 
engaging in the behavior at its normal frequency 
- If we prevent a rat from drinking water after it has had access to 
water and has established a routine, it will engage in behaviors that 
provide access to water 
- Drinking will become reinforcing 
- A behavior is reinforcing to the extent that the individual has been 
prevented from performing that behavior at its normal rate 
 

Theories of Avoidance 

Two Process Theory 


- Two kinds of learning experiences are involved in avoidance learning: 
Pavlovian and operant 
- For example, if a light comes on right before a dog receives a shock, the 
light becomes a fear-eliciting CS (Pavlovian); jumping a hurdle escapes the 
shock, and the escape is negatively reinforcing (operant) 
- As trials continue, the dog will keep jumping at the light even when no 
shock follows, because jumping also reduces the fear the light arouses 

One Process Theory 


- Avoidance only involves one process: operant learning 
- Both escape and avoidance behavior are reinforced by a reduction in 
aversive stimulation 
- You can stop unnecessary avoidance behavior by discontinuing the 
aversive stimulus and also preventing the avoidance response from 
occurring, e.g. making it impossible for the dog to jump the hurdle 
once the shocks have completely stopped 
Chapter 6 - Reinforcement: Beyond Habit 

Shaping New Behavior 


- You cannot reinforce a behavior that does not occur 
- What should you do if you want a rat to press a lever to reinforce it, but the 
rat never presses the lever? 
- One answer is to reinforce something that is like lever pressing 
- The reinforcement of successive approximations of a desired behavior is 
called shaping 
- Shaping makes it possible to establish behavior in a few minutes that never 
or rarely occurs spontaneously 
- Shaping naturally occurs and people often shape undesirable behavior 
without noticing 
- Shaping explains the appearance of new forms of behavior 
- It parallels the process by which natural selection works 
- Behavior chain: performing a number of acts in a particular sequence 
- Teaching an animal or person to perform a behavior chain is called 
chaining 
- The first step in chaining is to break the task down into components 
(task analysis) 
- You can do forward chaining or backward chaining 
- Forward chaining: the trainer reinforces performance of the first 
link in the chain, then the first two links, and so on to the end 
- Backward chaining: training starts with the last link in the chain 
and works backwards 

Insightful Problem Solving 


- Problem: a situation in which reinforcement is available but the behavior 
necessary to produce it is not 
- Trial and error: accidental success 
- Insight: a solution that occurs without the benefit of learning 

Creativity & Problem Solving 


- Malia (a porpoise) was given a fish when she performed a novel behavior 
- To get novel behavior you have to reinforce novel behavior 
- She had learned to be creative 
- Small brained species can also show creativity 
- Novel behaviors were observed in pigeons 
- It can also work on people 
- Reinforcement can however make people less creative 
- Rewards undermine creativity if the reward is promised beforehand 
 
Superstition 
- Superstitious behavior is any behavior that occurs repeatedly even though 
it does not produce the reinforcers that maintain it 
- When the reinforcer arrives, the animal is doing something, and this 
behavior is what is accidentally reinforced 

Helplessness 
- When faced with a series of difficult challenges, some people make a weak 
effort to cope and then, when faced with failure, give up 
- Learned helplessness is when an inescapable negative stimulus teaches 
the subject to stop trying 
- They had literally learned to be helpless 
- Further research found that it was not the prior experience of shock that 
mattered, but its inescapability 
- Immunization training produces resilience 
- Rats in a trial that had previously been able to escape shocks by pressing a 
lever kept responding at a constant rate and refused to give up 
- Reinforcing a high level of work effort and persistence increases the 
tendency to work hard at difficult tasks for a prolonged period 
- This is called learned industriousness 
 

 
Chapter 7 - Schedules of Reinforcement 

Beginning 
- rules describing the contingency between behavior and reinforcement are 
called schedules of reinforcement 
- Different reinforcement patterns produce varying patterns of behavior 
- Difference in productivity is likely the result of different schedules of 
reinforcement 
- Learning can mean the appearance of new behavior or a change in 
the pattern of existing behavior 

Simple Schedules 

Continuous Reinforcement 
- A behavior is reinforced each time it occurs 
- Rat receives food every time it presses a lever 
- Continuous reinforcement leads to rapid increases in the rate of behavior 
- Rare in the natural environment 
- When reinforcement is given on some occasions but not others, it is on an 
intermittent schedule 

Fixed Ratio 
- A behavior is reinforced when it has occurred a fixed number of times 
- After a behavior has been shaped, the experimenter may switch to a 
schedule reinforcing every third occurrence of the behavior 
- There is a ratio of three lever presses to every reinforcement (FR 3) 
- Animals on fixed ratio schedules perform at a high rate and there are often 
pauses after reinforcements 
- These are called post-reinforcement pauses 
- The more work required to earn a reinforcer, the longer the 
post-reinforcement pause 
- More accurately called pre-ratio pauses or between-ratio pauses 

Variable Ratio 
- The number of responses required for reinforcement varies around some average 
- Instead of reinforcing every fifth response, you can reinforce after the 
second, eighth, fifth, fourth, etc. 
- Produces steady performance at run rates similar to fixed ratio schedules 
- Post-reinforcement pauses appear less often than in fixed ratio schedules 
- Variable ratio schedules are common in natural environments 
Fixed Interval 
- The behavior under study is reinforced the first time it occurs after a 
constant interval 
- For example, food is given for the first disc peck after a 5-second 
interval has elapsed 
- Produces post-reinforcement pauses 
- Produces a scallop-shaped cumulative record 
- Does not reinforce steady performance 

Variable Interval 
- Instead of reinforcing the behavior after a fixed interval, reinforce it after a 
varied interval 
- Instead of reinforcing disc pecking after 5 seconds, we can reinforce it after 
2, 8, 6, 4, etc. 
- The length of the interval during which responding is not reinforced varies 

Fixed Duration 
- Reinforcement is contingent on the continuous performance of a behavior 
for some period of time 
- A child who practices piano for 30 minutes gets a reinforcement 

Variable Duration 
- Required period of performance varies about some average 
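The ratio and interval rules above can be sketched as small decision functions: each takes one response (and, for the interval case, the current time) and reports whether that response earns a reinforcer. The function names and the VR randomization are illustrative assumptions, not from the text:

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True  # reinforcer delivered
        return False
    return respond

def variable_ratio(mean_n):
    """VR: reinforce after a response count that varies around mean_n."""
    count, target = 0, random.randint(1, 2 * mean_n - 1)
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

def fixed_interval(interval):
    """FI: reinforce the first response after `interval` time units elapse."""
    last = 0.0
    def respond(now):
        nonlocal last
        if now - last >= interval:
            last = now
            return True
        return False
    return respond

fr3 = fixed_ratio(3)
print([fr3() for _ in range(6)])  # [False, False, True, False, False, True]
```

In this sketch the VR requirement is drawn uniformly between 1 and 2×mean−1 so that it averages mean_n; an actual experiment would typically use a predetermined list of ratios with the desired average.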
 
 

Noncontingent Reinforcement 

Fixed Time Schedule 


- Reinforcer is delivered after a given period of time regardless of what 
behavior occurs 

Variable Time Schedule 


- Reinforcer is delivered periodically at irregular intervals regardless of 
behavior varying about some average 

Progressive Ratio Schedule 


- Requirement for reinforcement increases in a predetermined way  
- Progression is either arithmetic or geometric 
- Whatever form the progression takes, it continues until the behavior 
stops abruptly 
- The requirement at which this happens is called the break point 
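The two progressions can be made concrete with a short sketch (the starting value and step sizes are illustrative, not from the text): an arithmetic progression adds a constant to the requirement each time, a geometric one multiplies it by a constant.

```python
def arithmetic_progression(start, step, trials):
    """FR requirement grows by a constant step, e.g. 5, 10, 15, 20, ..."""
    return [start + step * i for i in range(trials)]

def geometric_progression(start, factor, trials):
    """FR requirement is multiplied by a constant factor, e.g. 5, 10, 20, 40, ..."""
    return [start * factor ** i for i in range(trials)]

print(arithmetic_progression(5, 5, 4))  # [5, 10, 15, 20]
print(geometric_progression(5, 2, 4))   # [5, 10, 20, 40]
# The subject works through these rising requirements until responding
# stops abruptly; the requirement in force at that point is the break point.
```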
Extinction 
- In extinction, a previously reinforced behavior is never again followed by 
reinforcers 
- Since no reinforcer is ever provided, extinction is strictly not a 
reinforcement schedule, though it can be thought of as an FR schedule 
requiring an infinite number of responses per reinforcement 
- The overall effect of extinction is to reduce the frequency of the behavior 
- The immediate effect is often an abrupt increase 
- This is called an extinction burst 
- Another effect is resurgence 
- This is when you reinforce a behavior, extinguish it, reinforce 
another, extinguish it, and the first behavior appears again 

Stretching the Ratio 


- How can you get a person or a rat to press a lever hundreds of times for a 
very small reinforcement? 
- The answer is shaping 
- An experimenter starts with a CRF schedule, and then raises the ratio to FR 
3, and then 5, then 8, and so on.  
- This is called stretching the ratio 
- Stretching the ratio too quickly can disrupt performance (ratio strain) 
 
 
 
 

Compound Schedules 

Multiple Schedule 
- A behavior is under the influence of two or more simple schedules, each 
associated with a particular stimulus 
- e.g. two reinforcement schedules alternate, with the change signaled by 
the color of a light 

Mixed Schedule 
- Same as a multiple schedule but that there are no stimuli associated with 
the change in reinforcement 
- There would be no clear indication that the schedule has changed 

Chain Schedule 
- Reinforcement is delivered only on completion of the last in a series of 
schedules 
- A distinctive stimulus signals each change from one schedule to the next 
- The bird, for example, receives reinforcement only when it completes the 
last link of the chain 

Tandem Schedule 
- Identical to a chain schedule except that there is no distinctive event that 
signals the end of one schedule and the beginning of the next 

Cooperative Schedules 
- The reinforcement that one subject out of a pair or group gets is partly 
dependent on the behavior of the other subject 
- Group of five birds might receive food when the group as a whole produces 
100 disc pecks provided that each bird pecks the disc at least 10 times 
- Used with people but often inefficient  
- The reinforcement is not contingent on how the work is shared, but on 
what the group as a whole produces 

Concurrent Schedules 
- Two or more schedules are available at once 
- A pigeon may have the option of pecking a red disk on a VR 50 schedule or 
a yellow disc on a VR 20 schedule 
- They involve a choice 
- The animal would choose the yellow disc 

The Partial Reinforcement Effect 


- Tendency of behavior that has been maintained on an intermittent 
schedule to be more resistant to extinction than behavior that has been on 
continuous reinforcement 
- This is called the partial reinforcement effect (PRE) 
- The thinner the reinforcement schedule, the greater the number of lever 
presses during extinction 
- Human beings also show resistance to extinction following intermittent 
reinforcement 
- It is paradoxical because the law of effect implies that unreinforced 
responses should weaken the tendency to press, not make it stronger 

Discrimination Hypothesis 
- Extinction takes longer after intermittent reinforcement because it is 
harder to distinguish between extinction and an intermittent schedule 
than between extinction and continuous reinforcement 
- Discriminating between extinction and a VR 100 schedule would take 
longer because in that schedule, a behavior would occur 150 or more times 
before producing reinforcement 
- The discrimination explanation of the PRE proposes that behavior 
extinguishes more slowly after intermittent reinforcement than after 
continuous reinforcement because the difference between CRF and 
extinction is greater than the difference between an intermittent schedule 
and extinction 

Frustration Hypothesis 
- Nonreinforcement of previously reinforced behavior is frustrating 
- Anything that reduces frustration will be reinforcing 
- There is no frustration in continuous reinforcement because there is no 
non-reinforcement 
- But when the behavior is placed on extinction, there is plenty of 
frustration 
- With each non-reinforced act, frustration builds 
- Any behavior that reduces an aversive state is likely to be negatively 
reinforced 
- During extinction, frustration may be reduced by not performing the 
behavior 
- When a behavior is reinforced intermittently, there are periods of 
non-reinforcement and frustration so the individual continues to perform 
during the periods of frustration 
- Therefore, lever pressing while frustrated is reinforced 
- The emotional state called frustration becomes a cue or signal for pressing 
the lever 

Sequential Hypothesis 
- PRE happens due to differences in the sequence of cues during training 
- During training, each performance of a behavior is followed by one of two 
events: either reinforcement or non-reinforcement 
- In continuous reinforcement, all lever presses are reinforced 
- During extinction, no lever presses are reinforced, so an important cue for 
lever pressing is absent 
- Extinction proceeds rapidly after continuous reinforcement because an 
important cue is missing 
- This is different in intermittent reinforcement 
- Some lever presses are followed by reinforcement and some by 
non-reinforcement 
- The sequence of reinforcement and non-reinforcement therefore 
becomes a signal for pressing the lever 
- The thinner the reinforcement schedule, the more resistant a rat will 
be to extinction because a long stretch of non-reinforced lever 
pressing has become the cue for more pressing 

Response Unit Hypothesis 


- To understand the PRE we have to think differently about the behavior 
being intermittently reinforced 
- In CRF: 
- Each time the rat presses the lever far enough to activate the 
recorder, it receives food 
- If we change it to an FR 2, it goes from press-reward to 
press-press-reward 
- If lever pressing is on an FR 2 schedule, the unit of behavior being 
reinforced is 2 lever presses 
- Mowrer and Jones argued that the PRE is an illusion 
- In the CRF group, rats produced an average of 128 response units during extinction 
- In the FR 2 group, the response unit was 2 lever presses, and rats produced an average of 94 response units (188 presses / 2) 
- In the FR 3 group, the response unit was 3 lever presses, and rats produced an average of 71.8 response units (215.5 presses / 3) 
- Behavior on intermittent reinforcement only seems to be resistant 
to extinction because we fail to take into account the response units 
required for reinforcement 
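The arithmetic behind this re-analysis can be sketched in a few lines of Python; the press counts are the averages quoted above, and the helper function is ours, not Mowrer and Jones's:

```python
# Re-analysis of extinction data in the spirit of Mowrer & Jones:
# divide total lever presses during extinction by the number of
# presses required per reinforcer (the "response unit").

def response_units(total_presses: float, unit_size: int) -> float:
    """Convert raw lever presses into response units."""
    return total_presses / unit_size

# Average presses during extinction (figures quoted in the notes).
groups = {
    "CRF (unit = 1)": (128.0, 1),
    "FR 2 (unit = 2)": (188.0, 2),
    "FR 3 (unit = 3)": (215.5, 3),
}

for name, (presses, unit) in groups.items():
    units = response_units(presses, unit)
    print(f"{name}: {presses} presses -> {units:.1f} response units")
```

Counted this way, resistance to extinction actually falls as the ratio requirement grows (128 > 94 > 71.8), which is the point of the hypothesis.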
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 8 - Operant Learning: Punishment 

Types of Punishment 

Positive Punishment 
- A behavior is weakened by the appearance of, or an increase in the intensity of, a stimulus 
- Examples: reprimands, electric shock, physical blows 

Negative Punishment  
- Behavior is weakened by the removal of or decrease in intensity of a 
stimulus 
- Fines 
- Taking away privileges  
- Also called penalty training 

Variables Affecting Punishment 

Contingency 
- The degree to which punishment weakens a behavior changes based on 
the degree to which a punishing event is dependent on that behavior 

Contiguity 
- The interval between a behavior and a punishing consequence is also very 
important 
- The longer the delay, the less effective the punisher is 
- Delays reduce the effectiveness of punishment because during the delay 
interval, other behaviors occur 

Punisher Intensity 
- Clear relationship between the intensity of a punisher and its effects 
- Electric shock studies make this relationship easiest to see 
- The greater the intensity of the punishing stimulus, the greater is the 
reduction of the punished responses 

Introductory Level of Punisher 


- Is it better to start with a strong aversive or with a weaker aversive and 
gradually increase the intensity? 
- Using an effective level of punishment from the very beginning is 
extremely important 
- Punished behavior persists if you start with a weak punisher 
- In the end a greater punishment is needed to suppress the behavior 
- The goal is to begin with a punisher that is intense enough to suppress the 
behavior at the outset 
- Beginning with a strong aversive has its own problem: it is not obvious in advance which level will be effective 

Reinforcement of the Punished Behavior 


- Unwanted behavior is almost certainly being reinforced by something 
- The effectiveness of a punishment depends on the frequency, amount and 
quality of reinforcers the behavior produces 
- Success of efforts to punish behavior will depend on the reinforcing 
consequences of the behavior 

Motivating Operations 
- Food is more reinforcing when an animal is hungry 
- If an unwanted behavior is maintained by food reinforcement, reducing 
the level of food deprivation would make punishment more effective 

Other variables 
- Qualitative features of the punisher can be important 
- A high pitched sound can be more effective than a low pitched sound 
- Different variables interact in complex ways 
- Punishment, like reinforcement, is more complicated than it appears 
 

Theories of Punishment 
- Early theories of punishment proposed that response suppression was due to the disruptive effects of aversive stimuli 
- When a rat is shocked it might jump, freeze or run around, and this is 
incompatible with lever pressing, so lever pressing will decline 
- Other research undermined this explanation by producing two key 
findings 
- Effects of punishment are not as transient as Skinner thought if 
sufficiently strong aversives are used 
- Punishment has a greater suppressive effect on behavior than does 
aversive stimulation that is independent of behavior 
- If punishment reduces behavior rates merely because it evokes 
incompatible behavior, then it should make no difference whether the 
aversive stimuli used are contingent on behavior 
- But it does make a difference 
- Some rats received shocks contingent on lever pressing; others received the same number of shocks independently of their behavior 
- The noncontingent shocks did suppress lever pressing, but far less than the contingent shocks did 
- Therefore, the disruption theory does not explain the discrepancy between 
contingent and noncontingent aversives 
- There are two leading theories, the two-process and one-process 
 

Two Process Theory 


- Punishment involves both Pavlovian and operant procedures 
- If a rat presses a lever and receives a shock, the lever is paired with the 
shock 
- The lever then becomes a CS eliciting the same emotional response aroused by the shock (fear) 
- The shock is aversive → the lever becomes aversive 
- The rat may escape the lever by moving away from it 
- Moving away from the lever is reinforced by a reduction of fear 

One-Process Theory 
- Only one process (operant conditioning) is involved 
- Punishment weakens behavior in the same manner that reinforcement 
strengthens it 
- High probability behavior reinforces low probability behavior, therefore low 
probability behavior should punish high probability behavior 
- If a hungry rat is made to run following eating, it will eat less 
- The low probability behavior (running) suppresses high probability 
behavior (eating) 
- One-process theorists conclude that Thorndike was right 
- Punishment and reinforcement have essentially the same effects on 
behavior 

Problems with Punishment 


- Punishment is effective 
- Rapid and substantial reduction in unwanted behavior 
- No need to continue the practice for days or weeks, it works immediately 
- Has beneficial side effects 
- Autistic children became more sociable, cooperative and 
affectionate 
- Has problems, though 
- Escape, aggression, apathy, abuse and imitation of the punisher 
- Escape by cheating or lying  
- Suicide 
- Attack those who punish 
- Aggression in general 
- Potential for abuse, whether accidental or purposeful 
- Those who are punished tend to imitate the punisher 
 
Alternatives to Punishment  
- Response prevention 
- Instead of punishing a behavior, prevent it from happening 
- Extinction 
- Adult attention is usually the reinforcer of bad behavior in children 
- Extinction is effective but it involves an extinction burst 
- Also provokes emotional outbursts 
- Hard to implement outside the laboratory  
- Differential reinforcement 
- Combines nonreinforcement of an unwanted behavior with 
reinforcement of another behavior 
- Differential reinforcement of alternative behavior (DRA) gives another way of obtaining the same reinforcer 
- Ignore a child making noise but reinforce when she colors in a 
coloring book 
- Differential reinforcement of incompatible behavior (DRI); reinforce a 
behavior that is incompatible with the unwanted behavior 
- Instead of punishing presses on lever A, reinforce the behavior 
of standing away from lever A 
- Differential reinforcement of Low Rate (DRL) 
- Behavior is reinforced, but only if it occurs at a low rate 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 10: Observational Learning 
 
Types of Observational Learning 

Social Observational Learning  


- Observing the behavior of another individual (a model) and the consequences of that behavior 
- If the consequences strengthen the observer's tendency to behave similarly, the behavior has been vicariously reinforced; if they weaken it, it has been vicariously punished 

Asocial Observational Learning 


- Learning from observed events in the absence of a model 
- This is called a ghost condition - where no person causes the movement 
 
Imitation 
- Behaving in a way that resembles the behavior of a model 
- To imitate is to perform an observed act, whether or not it is currently being modeled 
- The imitated acts may not have been rewarded at all 
- Over-imitation: a tendency to imitate irrelevant acts 
- Some believe that by over-imitating we ensure success as a species 
- It may be learned because we are taught to imitate from infancy 
- A general tendency to imitate can itself be reinforced 
- This is called generalized imitation 
 
Variables Affecting Observational Learning 

Difficulty of the Task 


- The more difficult a task, the less learning is likely to occur during 
observation 

Skilled vs Unskilled Model 


- There are models which are skilled (e.g., watching someone shoot foul shots in basketball) and models which are unskilled, or learning, models (watching someone learn to shoot foul shots in basketball) 
- Some researchers get better results with skilled models; other researchers have found the two equally effective 
- Researchers do not yet know which is more effective 
Characteristics of the Model 
- Human observers learn more from models who are attractive, likable and 
prestigious 
- General appearance affects how we learn from observing 
- The more attractive or attention-grabbing a model, the more likely we are 
to learn 
- Models who are attractive, powerful or popular are more likely to be 
imitated 

Characteristics of the Observer 


- Humans benefit from observational learning more than any other species does 
- Learning history changes level of observational learning 
- Language skills are important also 
- Age is sometimes a factor 
- Influence of age varies with gender 
- Developmental age is more important than chronological age for obvious 
reasons  

Consequences of Observed Acts 


- In one study, children who saw aggressive behavior praised became more aggressive, and children who saw aggressive behavior criticized became less aggressive 
- Consequences of observed events that are not modeled are also important 

Consequences of Observer Behavior 


- If observing pays off, we spend more time doing it 
- Consequences of imitating are also powerful 
- If a behavior produces one set of consequences for a model and a 
different consequence for an observer, the observer consequence 
will win 
- Children learn to imitate when imitating works, and not to imitate when it does not 
 
Theories of Observational Learning 

Bandura’s Social Cognitive Theory 


- Identified four kinds of cognitive processes: attentional, retentional, motor-reproductive and motivational 
- Attentional processes have to do with the individual directing his attention to the relevant aspects of the model's behavior and its consequences (observational learning is self-directed) 
- Retentional processes involve representing the modeled behavior in some way to aid recall, e.g., observing a model take a particular route and then encoding it in images 
- Motor-reproductive processes consist of using the symbolic representations formed during retention to guide action, e.g., using the images to move through the route 
- Motivational processes are about evaluating the consequences of imitating the modeled behavior. Consequences are important because of their effects on our expectations about the outcomes of behavior 

Operant Learning Model 


- You can treat observational learning as a variant of operant learning 
- Modeled behavior and consequences serve as cues that similar behavior 
will be reinforced or punished in the observer 
- We learn to imitate acts that have positive consequences and avoid the 
ones that have negative consequences because it pays off 
 
Applications of Observational Learning 

Education 
- Observational learning plays a large role in education  
- Very important in acquiring a first language 
- Plays a large role in classroom learning  
- It can allow students to learn through observation what is being taught to other students 

Social Change 
- Learning is an individual phenomenon but we can learn from someone 
else solving a problem 
- Social transmission of learning happens through observational learning 
- Very important in both animal and human societies 
- Plays a part in helping societies deal with problems 
 
 
 
 
 
 
 
 
 
 
 
Chapter 11: Generalization, Discrimination and 
Stimulus Control 
 
Generalization 
- The tendency for effects of a learning experience to spread (sometimes 
called transfer) 
- There are four kinds of generalization 
- Across people (vicarious generalization) 
- Generalization of learning experiences of a model to those of 
an observer 
- Across time (response maintenance) 
- Generalization of behavior over time; its failure is forgetting 
- Across behaviors (response generalization) 
- Tendency for changes in behavior to spread to other 
behaviors 
- If a rat receives food after pressing a lever with its right foot, it might press the lever with its left foot 
- If a child is rewarded for expressing willingness to share a toy, she is more likely to actually share toys 
- Across situations (stimulus generalization) 
- Tendency for changes in behavior in one situation to spread 
to other situations 
- Tendency to respond to stimuli not present during training 
- A dog may salivate to a tuning fork at 1,000 cps, and may also salivate to one of 900 or 1,100 cps even though it was never exposed to those tones before 
- Conditional response spreads to stimuli different from the 
conditioned stimuli 
- Generalization gradient 
- Shows the tendency for a behavior to occur in situations that differ 
systematically from the conditioned stimulus 
- How to increase generalization of training effects 
- Provide training in a wide variety of settings 
- Vary the consequences (kind, amount and schedule of reinforcers) 
- Reinforce generalization when it occurs 
 
Discrimination 
- Stimulus discrimination is the tendency for behavior to occur in certain situations but not in others 
- The more discrimination, the less generalization and vice versa 
- Generalization gradients reflect the degree of discrimination 
- Any procedure for establishing discrimination is called discrimination training 
- Discrimination training can take many different forms 
- Simultaneous discrimination training is where the discriminative stimuli are presented at the same time 
- Successive discrimination training is where the discriminative stimuli alternate, usually randomly 
- In matching to sample, the task is to select from two or more alternatives the stimulus that matches a standard 
- There is a variation where the bird may be required to peck the alternative that differs from the sample (this is called oddity matching) 
- Errors can be reduced through errorless discrimination training 
- The S- (the stimulus signaling non-reinforcement) is presented in very weak form and for short periods of time, and is gradually faded in 
- Improved performance in discrimination training as a result of different consequences is called the differential outcomes effect 
- Practical applications of discrimination learning 
- Helpful in learning new languages 
- Training animals to help humans with tasks 
 
Stimulus Control 
 
- When discrimination training brings behavior under the influence of discriminative stimuli, the behavior is said to be under stimulus control 
- Stimulus control can be exerted by a complex array of stimuli 
- Understanding the control that the environment exerts over your behavior can give you the power to change that environment 
- It can work against us but we can also use it to our advantage 
 
Generalization, Discrimination and Stimulus Control in the 
Analysis of Behavior 

Mental Rotation as Generalization 


- People were shown letters that had been rotated from their upright position and asked whether each letter was mirror-reversed or not 
- The greater the rotation, the longer it took people to answer 
- The usual interpretation is that people mentally rotate an internal representation of the letter until it is upright and then judge whether it is mirrored 
- Viewed as generalization: people respond fastest to the training stimulus (the upright letter), and the less a test stimulus resembles it, the slower the response 
- The subjective experience of mental rotation does not explain differences 
in reaction times 
 
Concept Formation as Discrimination Learning 
- The word concept usually refers to any class whose members share one or more defining features 
- It is a name for a kind of behavior, i.e., one does not have a concept but demonstrates conceptual behavior by acting in a certain way 
- They require both generalization and discrimination 
- You must generalize in the conceptual class and discriminate between it 
and other classes 
- One way concepts are learned is through discrimination training 
- Kenneth Spence (1937) taught chimps to find food under white metal 
covers that differed in size, and they learned the concept “larger 
than” 
- Wolfgang Kohler (1939) trained chickens to select the lighter of two 
gray squares, and they learned the concept “lighter than” 
- Herrnstein & Loveland (1964) trained pigeons to pick pictures that 
had people in them  

Smoking Relapse as Stimulus Control 


- Smoking is reinforced 73,000 times a year in a pack-a-day smoker 
- Environmental factors, including drug-associated stimuli and social pressure, are important influences on initiation, use patterns, quitting and relapse 
- There is stimulus control over smoking to a large degree 
- Because tobacco use and the reinforcing effects of nicotine have frequently occurred together in particular situations, those situations have become discriminative stimuli for having a cigarette 
- Because smokers typically smoke throughout the day, many different 
situations become discriminative stimuli for smoking 
- Smoking in situations previously associated with smoking seems 
particularly likely to lead to an abrupt return to regular smoking 
- There are two basic approaches to preventing relapse  
- The former smoker can avoid situations in which he or she often smoked in the past, so that those situations cannot evoke smoking 
- This is basically impossible 
- The smoker can undergo training to reduce the control these 
situations have over his or her behavior  
- The former smoker can gradually expose themselves to 
situations where they used to smoke and resist the urge 
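As an aside, the 73,000-reinforcements-per-year figure quoted at the start of this section is simple arithmetic; a minimal sketch, assuming each puff counts as one reinforcement and about 10 puffs per cigarette (the notes give only the yearly total, so the puff count is an assumption):

```python
# Where the ~73,000 reinforcements per year could come from for a
# pack-a-day smoker, counting each puff as one reinforcement.
# The 10-puffs-per-cigarette figure is assumed for illustration.

cigarettes_per_day = 20   # one pack
puffs_per_cigarette = 10  # assumed
days_per_year = 365

reinforcements_per_year = cigarettes_per_day * puffs_per_cigarette * days_per_year
print(reinforcements_per_year)  # 73000
```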
 
 
Theories of Generalization and Discrimination 

Pavlov’s Theory 
- Discrimination training produces physiological changes in the brain 
- It establishes an area of excitation associated with the CS+ and an area of 
inhibition associated with the CS-  
- If a novel stimulus is similar to the CS+, it will excite an area of the brain 
near the CS+ area 
- There was no independent validation of these brain processes, so the explanation was circular 

Spence’s Theory 
- Pairing a CS+ with a US results in an increased tendency to respond to the 
CS+ and to stimuli resembling the CS+ 
- The generalization gradient that results is called an excitatory gradient 
- Presenting a CS- without the US results in a decreased tendency to respond to the CS- and to stimuli resembling the CS- 
- The generalization gradient that results is called an inhibitory gradient 
- The tendency to respond to any given stimulus was the result of the 
interaction of the increased and decreased tendencies to respond  
- ******* ADD FURTHER NOTES 
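Spence's interaction of gradients can be sketched numerically; everything below (the triangular gradient shape, the CS+ and CS- values, the peaks and spreads) is an illustrative assumption, not data from the text:

```python
# Spence's idea in miniature: the net tendency to respond to a stimulus
# is the excitatory gradient around the CS+ minus the inhibitory
# gradient around the CS-. All numbers here are made up for illustration.

def gradient(stimulus: float, center: float, peak: float, spread: float) -> float:
    """Simple triangular generalization gradient centered on `center`."""
    return max(0.0, peak * (1 - abs(stimulus - center) / spread))

def net_tendency(stimulus: float) -> float:
    excitation = gradient(stimulus, center=550.0, peak=10.0, spread=100.0)  # CS+ at 550
    inhibition = gradient(stimulus, center=500.0, peak=6.0, spread=55.0)    # CS- at 500
    return excitation - inhibition

# With these assumed values, net responding peaks at 555 rather than at
# the CS+ itself, a peak-shift-like pattern predicted by the theory.
for s in (500, 550, 555, 600):
    print(s, round(net_tendency(s), 2))
```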
 

Lashley-Wade Theory 
- Karl Lashley and M Wade (1946) proposed an approach to generalization 
and discrimination that differs from Pavlov and Spence 
- They argued that generalization gradients depend on prior experience 
with stimuli similar to those used in testing 
- Discrimination training increases the steepness of the generalization 
gradient because it teaches the animal to tell the difference between the 
Sd and other stimuli 
- The generalization gradient is not usually flat, even in the absence of 
training 
- Lashley and Wade explain this by saying that the animal has undergone 
discrimination training in the course of its everyday life 
- If an animal is prevented from having any experience with a certain kind of 
stimulus such as color, its behavior following training will be affected 
- This was tested and results were ambiguous  
- ******* ADD FURTHER NOTES 
 
 
 
 
Chapter 12: Forgetting 
Defining Forgetting 
- Defined as a deterioration in the performance of learned behavior 
following a retention interval 
- The phrase retention interval means a period during which learning or practice of the behavior does not occur 
- Some scientists argue that deterioration is the wrong word; the behavior merely changes 
- There is no such thing as forgetting, some say, only changes in behavior due to changes in the environment: when the stimuli that evoke the behavior are absent, it is simply not performed 
- Forgetting concerns a deterioration in measurable behavior, not 
neurological structures 
 
Measuring Forgetting 
- Free recall: the individual is given the opportunity to perform a previously learned behavior 
- Example can be to ask a student to recite a poem they learned 
- Can also be used to study animal forgetting 
- Free recall does not recognize that not all information is lost 
- Prompted/cued recall: presenting hints or prompts to increase the 
likelihood that the behavior will be produced 
- These prompts were not present during training 
- You can also measure forgetting by seeing how many prompts are 
needed to produce the behavior 
- Animal behavior can be studied with prompted recall  
- Relearning method: measures forgetting in terms of the amount of training required to reach the previous level of performance 
- It is also called the savings method 
- The greater the savings, the less the forgetting 
- Can be used to study forgetting in animals 
- Recognition: the participant has to identify the material previously learned 
- This is done by presenting the participant with the original learning 
materials as well as some new material 
- Delayed matching to sample (DMTS): the subject is taught to match a sample; then, after a specified delay between the sample and the alternatives (the retention interval), the subject tries to match to sample 
- More often used in animal research 
- Extinction method: put the behavior on extinction; if extinction proceeds more rapidly than it would have immediately after training, forgetting has occurred 
- Gradient degradation: forgetting measured as a flattening of a 
generalization gradient 
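The relearning (savings) method above is commonly scored as a percentage; a minimal sketch using the conventional Ebbinghaus-style savings formula (the formula and the numbers are illustrative, not taken from these notes):

```python
# Savings score for the relearning method: the fewer trials (or less
# time) needed to relearn, the greater the savings and the less the
# forgetting. The percentage formula is the conventional
# Ebbinghaus-style savings score.

def savings_percent(original_trials: float, relearning_trials: float) -> float:
    """Percentage of the original training 'saved' at relearning."""
    return 100.0 * (original_trials - relearning_trials) / original_trials

# Hypothetical example: 20 trials to learn a list, 5 trials to relearn it.
print(savings_percent(20, 5))  # 75.0 -> substantial savings, little forgetting
```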
Sources of Forgetting 
- The neurological record of learning breaks down or decays with the 
passage of time 
- Ebbinghaus (1885) found that it took him longer to relearn lists of nonsense 
syllables after a long retention interval than a short one 
- Forgetting increases with the passage of time 
- McGeoch (1932) argues that the passage of time does not cause forgetting 
because time is not an event and therefore cannot be said to cause other 
events 

Degree of Learning 
- The better something is learned, the more slowly it is forgotten 
- The greater the amount of overlearning, the less forgetting 

Prior Learning 
- Forgetting occurs rapidly when we learn unrelated words, random digits 
and nonsense syllables 
- More meaningful material is easier to hold onto 
- Previous learning can however interfere with recall and this is called 
proactive interference 
- Paired associate learning - invented by Mary Calkins near the end of the 19th century 
- Objective is for the person to learn a list of word pairs, such as 
hungry-beautiful, so that when given the first word (hungry), the 
participant produces the second (beautiful)  
- Previous learning about how stories are constructed interfered with 
recalling a different sort of story (Bartlett 1932) 

Subsequent Learning 
- Jenkins & Dallenbach (1924) got students to learn lists of nonsense syllables 
- Researchers tested students for forgetting after one, two, four or 
eight hours of sleep or wakefulness 
- They forgot less after a period of sleep than after a period of 
activity  
- Other research shows periods of inactivity produce less forgetting than 
comparable periods of activity 
- What we learn increases forgetting of previous learning 
- This is called r​ etroactive interference 

Changes in Context 
- Context refers to stimuli present during learning that are not directly relevant to what is learned 
- Changes in the context in which learning occurred affect forgetting 
- Learning inevitably occurs within a particular context 
- These stimuli then act as cues that evoke the behavior learned in that 
context 
- Performance suffers because of cue-dependent forgetting 
 
Applications 

Eyewitness Testimony 
- Loftus found that the use of the word "smashed" resulted in higher estimates of car speed than the use of the word "hit" 
- The use of the article "the" instead of "a" implies that there was a hat, and therefore reporting that you saw a hat might be reinforced 
- Trial lawyers and some law enforcement officers now know that eyewitness 
testimony is of questionable value 

Learning to Remember 
- Overlearn: there is a strong inverse relationship between the degree of learning and the rate of forgetting. To forget less, learn more 
- Practice with feedback: positive feedback tells you what you got right, negative feedback tells you what you got wrong 
- Distribute practice: spread your learning over a period of time; researchers are still not sure how far apart practice sessions should be distributed 
- Test yourself: periodic testing improves retention, and some evidence suggests that testing can be more effective than studying in reducing forgetting 
- Use mnemonics: a mnemonic is any device used to aid recall, for example rhyming 
- Use context cues: study under conditions that closely resemble the conditions under which testing will take place, and study in a variety of situations 
- Take a problem-solving approach: treat remembering as a problem to solve by generating your own cues 
 
 
 
 
 
 
 
 
 
 
 
 
 
Chapter 13: The Limits of Learning 
Physical Characteristics 
- The structure of an animal’s body makes certain kinds of behavior possible 
and other behaviors impossible 
- Gardner & Gardner showed that the failure of chimpanzees to learn to speak may be due more to anatomical limitations than to learning ability 
- Physical characteristics set important but not always obvious limits on 
what an individual can learn 
 
Nonheritability of Learned Behavior 
- Another limitation of learning is that learned behavior is not inherited 
- Reflexes and fixed action patterns are passed on from generation to 
generation  
- Behaviors acquired through learning die with the individual 
- McDougall tried to prove that if each generation learned something, the 
offspring should find it easier to learn, and his research convinced him that 
his hypothesis held true 
- Other psychologists ran similar experiments with better controls and did 
not find the same results 
- The nonheritability of learned behavior is one of the severest of all its limitations 
- Learned behavior is not passed on to future generations 
 
Heredity and Learning Ability 
- There are genetic differences among species in the capacity for learning 
- Wolves did better than dogs in solving problems even though they are 
almost genetically identical 
- Domestication has relieved pressure toward intelligence in the dog 
- There are differences in learning abilities within a given species partly due 
to heredity 
- Heredity is not the sole determinant, learning history also has a powerful 
effect on learning ability 
 
Neurological Damage and Learning 
- Prenatal exposure to alcohol and other drugs can interfere with 
neurological development resulting in limited learning ability 
- Damage is not often apparent at birth 
- Neurotoxins are also a threat to learning ability after birth, they damage 
neural tissues 
- Head injury can also diminish learning ability 
- Malnutrition can also prevent normal neurological development and result 
in reduced learning  
 
Critical Periods 
- Animals are especially likely to learn a particular kind of behavior at one 
point in their lives and these stages for optimum learning are referred to as 
critical periods 
- Many animals are likely to form an attachment to their mothers due to a 
critical period soon after birth 
- If they don’t form this attachment if the mother is unavailable, the young 
animal will become attached to any moving object that passes by 
- This is called imprinting 
- John Paul Scott (1958) showed that social behavior of dogs depends on 
experiences during certain critical periods 
- Maternal behavior may also be learned during critical periods 
- If you take a lamb away from sheep during the first ten days of its 
life, it won’t be a good mother later on 
- It is possible that there is a critical period in infancy or early childhood for 
learning to care about others 
- There is evidence that the first 12 years of life is critical for learning 
language 
 
Preparedness and Learning 
- Researchers began to notice that an animal might learn readily in one 
situation but be stupid in another situation 
- The Brelands theorized that innate tendencies interfere with learning, facilitating it in one situation and inhibiting it in another 
- This tendency of an animal to revert to a fixed action pattern is called 
instinctive drift  
- Autoshaping is when a behavior emerges without having been deliberately shaped by the experimenter 
- Tendencies can be characterized as a continuum of preparedness: an animal comes to a learning situation genetically prepared to learn (in which case learning proceeds quickly), unprepared (in which case learning proceeds steadily but more slowly), or contraprepared (in which case the course of learning is slow and irregular) 
- Seligman proposed that humans are prepared to acquire certain fears 
- People are far more likely to fear sharks, spiders, snakes and dogs 
than they are lambs, trees, houses and cars 
- People are far more likely to form strong attachments to some 
objects rather than others, i.e many kids have security blankets, not 
security shoes 
