
CDN ED Psychology Themes and Variations 3rd Edition


Chapter Six:
Learning
Chapter Outline

Learning Objectives ........................................................................................................................................ 166


Key Concepts: Why is this Chapter Important to Psychologists? .................................................................. 166
Student Motivation: Why Should Students Care? .......................................................................................... 166
Barriers to Learning: What are Common Student Misconceptions and Stumbling Blocks? .......................... 166
Reflections on Teaching: How Can I Assess My Own “Performance”? ........................................................ 166
Lecture/Discussion Topic: Key Themes in Chapter 6 ................................................................................... 167
Psyk.Trek Modules and Simulation ............................................................................................................... 167
Demonstration/Activity: Defining Learning .................................................................................................. 169
Demonstration/Activity: Showing Students Prior Examples of Classical Conditioning ................................ 169
Demonstration/Activity: Classically Conditioning Students in Class ............................................................ 170
Demonstration/Activity: Rehearsing Classical Conditioning Concepts ......................................................... 171
Lecture/Discussion Topic: Inhibitory Classical Conditioning ....................................................................... 171
Lecture/Discussion Topic: Biological Constraints on Learning .................................................................... 173
Lecture/Discussion Topic: Preparedness in Learning .................................................................................... 174
Lecture/Discussion Topic: Cognitive Interpretations of Classical Conditioning ........................................... 175
Demonstration/Activity: Classical and Instrumental Conditioning of Planaria ............................................. 176
Lecture/Discussion Topic: Instrumental versus Operant Conditioning ......................................................... 176
Demonstration/Activity: Shaping in the Classroom ...................................................................................... 179
Demonstration/Activity: Shaping a Gerbil .................................................................................................... 179
Lecture/Discussion Topic: Generalization and Discrimination ..................................................................... 179
Demonstration/Activity: Reinforcement in Operant Conditioning ................................................................ 180
Demonstration/Activity: Rehearsing Operant Concepts ................................................................................ 180
Lecture/Discussion Topic: Negative Reinforcement versus Punishment ....................................................... 181
Lecture/Discussion Topic: Why Doesn’t Punishment Work? ....................................................................... 182

6: LEARNING 165
Lecture/Discussion Topic: Behaviour Modification ....................................................................................... 183
Demonstration/Activity: Planning and Evaluating Behaviour Modification Strategies ................................. 183
Demonstration/Activity: Applying Self-Modification Strategies ................................................................... 184
References for Additional Demonstrations/Activities .................................................................................... 185
Suggested Readings for Chapter 6 .................................................................................................................. 186
Handout Masters (HM) ................................................................................................................................... 187
Transparency Masters (TM) ............................................................................................................................ 192

166 ENRICHED INSTRUCTOR’S MANUAL


LEARNING OBJECTIVES

After completing this chapter, students should be able to


 Describe the sequence of events that are necessary for classical conditioning to occur.
 Describe the important elements that are necessary for classical conditioning to occur.
 List the types of emotional responses that are controlled by classical conditioning.
 Explain how conditioned responses are acquired and weakened.
 List the key components of operant conditioning.
 Describe how organisms acquire new responses through operant conditioning.
 Describe how primary and secondary reinforcers differ.

KEY CONCEPTS: WHY IS THIS CHAPTER IMPORTANT TO PSYCHOLOGISTS?

 Learning theory is an important component of many areas of psychology, including
developmental psychology, neuroscience, clinical psychology, sports psychology, etc.
Having an understanding of how behaviour can be shaped is important for psychologists
and other professionals.
 Psychologists can use learning theory methods to help motivate students and help them
learn important concepts related to psychology.

STUDENT MOTIVATION: WHY SHOULD STUDENTS CARE?

 This chapter provides a good foundation for further study in various areas of psychology
including developmental psychology, neuroscience, clinical psychology, sports
psychology, etc.
 Students will learn the methods necessary to shape behaviour. This information can be
used by students in different situations throughout their lives. They can learn to
motivate themselves and become successful at school, in their careers and with their
own health goals.
 Learning theory can provide useful tools for understanding how and when to reward or
punish a child’s behaviour. This is not to suggest that a child and a pet are equivalent, but the
same methods can be used to shape a pet’s behaviour.

BARRIERS TO LEARNING: WHAT ARE COMMON STUDENT MISCONCEPTIONS AND STUMBLING
BLOCKS?

 Most of the students’ prior experience with learning theories will be related to Pavlov
and classical conditioning. Students often struggle with understanding the similarities
and differences between classical and operant conditioning.
 Many students have difficulty accepting the fact that animal research is important and
valuable. Students are interested in issues related to the ethical treatment of animals.
 It is sometimes difficult for students to accept the fact that we are similar to animals and
that the research we perform on animals can be generalized to humans.

REFLECTIONS ON TEACHING: HOW CAN I ASSESS MY OWN “PERFORMANCE”?

Checklist for Instructor Self-Assessment


1. What worked? What didn’t?
2. Were students engaged? Were they focused or did they go off on tangents?
3. Did my assessments suggest that they understood the key concepts?
4. What should I do differently next time?
5. How can I gather student feedback?



LECTURE/DISCUSSION TOPIC: KEY THEMES IN CHAPTER 6
Theme 3 (Psychology evolves in a sociohistorical context) is stressed in Chapter 6. Learning principles and theories
have applications and influence far beyond the field of psychology. People around the world use learning principles
daily without realizing that they are doing so and without even knowing the specific principles they are using. For
example, almost everyone has seen a trained animal act. Many have noticed that the animal receives a “treat” after it
performs the desired behaviour—an example of the use of positive reinforcement. Parents use reinforcement with their
children quite often. You probably learned to tie your shoes and make your bed under the influence of positive
reinforcement. Although your parents did not continue to reinforce you every time you tied your shoes or made your
bed, the behaviour did not disappear. This example illustrates the learning of behaviour under continuous reinforcement
and its maintenance with partial reinforcement. Unfortunately, the lack of understanding of learning principles leads to
their misapplication, causing learning to be less effective than it could be. A good example is the use of punishment.
Punishment is often applied in a way that actually weakens its effects. As society has become more open to accepting
advice from psychologists, learning principles have been applied in a wider variety of situations. Few education majors
graduate without taking a course that deals in part, or exclusively, with behaviour management principles. Most of these
courses consist primarily of principles of learning, reinforcement, and punishment, direct from the psychology lab.
Theme 6 (Heredity and environment jointly influence behaviour) is also emphasized in Chapter 6. Early work by the
behaviourists found considerable evidence to support the notion that environment affects learning. There is a long
history of research studying the effects of such variables as number of learning trials, reinforcement schedules, types of
stimuli, and the timing of stimuli on learning. This research has found many different environmental factors that affect
how much or how well we learn. In more recent years, research has shown that biological factors also have important
influences on learning. The Weiten & McCann text mentions instinctive drift, taste aversions, and preparedness as
instances in which traditional behaviouristic explanations of learning have given way to explanations based on biological
factors. The old behaviouristic notion that any organism can learn any response under any conditions has been shown to
be false. Our biological predispositions can dictate what we learn and under what conditions.
If you wish to continue this train of thought, you can also emphasize Theme 2 (Psychology is theoretically diverse).
This belief in diversity allowed new ideas to surface and either replace or amend behaviouristic notions. Behaviourism
was a powerful and dominant influence during the first half of the 20th century, but even it was not strong enough to
keep cognitive explanations of learning processes and principles from arising. It is this theoretical diversity that keeps
psychology strong as a discipline and raises hope that we will someday have more complete answers about a variety of
phenomena.

PSYK.TREK MODULES AND SIMULATION


Unit 5 of Psyk.Trek contains six modules that you can use in the Learning chapter. In addition, Simulation 4 (Shaping in
Operant Conditioning) allows students to shape a rat.
Module 5a (Overview of Classical Conditioning) presents Pavlov’s demonstration, terminology, and classical
conditioning in everyday life. Students get to see a video clip of Pavlov and one of the dogs in his research. The section
on conditioning in everyday life and the quiz at the end of the module will assist students in applying these concepts
more broadly than just to salivating dogs.
Module 5b (Basic Processes in Classical Conditioning) reviews the processes of acquisition and extinction,
spontaneous recovery, generalization and discrimination, and higher-order conditioning. The highlight is a video clip of
Watson and Rayner working with Little Albert. Watson wears a Santa Claus mask and appears to chase Little Albert to
obtain a fear reaction.
Module 5c (Overview of Operant Conditioning) introduces terminology and procedures, acquisition and shaping,
and extinction. To solidly reinforce these concepts, students should complete Simulation 4 soon after this module.
Module 5d (Schedules of Reinforcement) presents continuous versus intermittent reinforcement, types of intermittent
schedules, and the effects of intermittent schedules. This module is mostly review, but the examples of partial schedules
in the real world should be helpful.

Module 5e (Reinforcement and Punishment) deals with positive reinforcement, negative reinforcement, and
punishment. The module clearly differentiates negative reinforcement from punishment, so it may be helpful to your
students. You probably make the same points in lecture, but repeated practice on this difficult discrimination may help.
Module 5f (Avoidance and Escape Learning) presents escape learning, avoidance learning, and the two-process
theory of avoidance. The module shows how avoidance learning is built on previous escape learning and provides a nice
computer sequence of avoidance learning. Mowrer’s two-process theory of avoidance may not be a concept you cover,
so be prepared for questions from your students concerning it.

Simulation 4 (Shaping in Operant Conditioning) gives students a chance to shape Morphy the rat to press a bar to
criterion (15 presses). Morphy emits a variety of behaviours in a variety of locations in a Skinner box. Students must be
alert to dispense reinforcement quickly before Morphy moves or emits a different behaviour. Students may learn best
from this simulation if they complete it more than once. Students will probably enjoy trying to lower the time that it
takes them to train Morphy to criterion.
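The contingency students must enforce can be sketched as a toy simulation of shaping by successive approximations. This is a hypothetical illustration, not Psyk.Trek’s actual implementation; the behaviour repertoire, distance values, and response weights are all invented for the example:

```python
import random

random.seed(1)

# Hypothetical repertoire: each behaviour's "distance" from the target response.
BEHAVIOURS = {"grooming": 3, "rearing": 2, "approaching bar": 1, "pressing bar": 0}

def shape(target_presses=15):
    """Shape a simulated rat by reinforcing successive approximations.

    Only behaviours at least as close to the target as the current criterion
    earn reinforcement, and each reinforcement tightens the criterion.
    Returns the number of trials needed to reach 15 bar presses.
    """
    criterion = 3                      # start by reinforcing anything at all
    weights = {b: 1.0 for b in BEHAVIOURS}
    presses = trials = 0
    while presses < target_presses:
        trials += 1
        # Behaviours are emitted in proportion to their current strength.
        b = random.choices(list(weights), weights=list(weights.values()))[0]
        if BEHAVIOURS[b] <= criterion:
            weights[b] += 1.0                      # reinforcement strengthens b
            criterion = max(0, BEHAVIOURS[b] - 1)  # demand a closer approximation
            if b == "pressing bar":
                presses += 1
    return trials
```

Running `shape()` with different seeds mirrors the suggestion above that students complete the simulation more than once and try to lower the time it takes to train to criterion.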

DEMONSTRATION/ACTIVITY: DEFINING LEARNING


Rocklin (1987) presented an exercise about defining learning to be used in a learning course. Minor adaptation of the
exercise makes it applicable to the introductory psychology course.
Give students copies of Rocklin’s list of 10 events/behaviours (HM 6-1). Have them discuss each event, with the
class as a whole or in smaller groups, to decide whether or not it exemplifies learning. It is likely that you will get
widespread disagreement on several of the items.
After this discussion, Rocklin suggested that you present the class with Hilgard and Bower’s definition of learning:
Learning refers to the change in a subject’s behaviour to a given situation brought about by his repeated
experiences in that situation, provided that the behaviour change cannot be explained on the basis of native
response tendencies, maturation, or temporary states of the subject (e.g., fatigue, drugs, etc.). (1975, p. 17)
This definition focuses on observable behaviour and allows you to develop the notion of behavioural approaches to
learning. It is an unusual definition in that it defines by excluding certain types of events or behaviours. Rocklin noted
that deciding whether or not an event represents learning is typically easier after hearing Hilgard and Bower’s definition.
You can use this point to emphasize the importance of operational definitions in psychology. Rocklin reported that there
is still disagreement over the computer examples (he included two on his list of events).

Hilgard, E. R., & Bower, G. H. (1975). Theories of learning (4th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Rocklin, T. (1987). Defining learning: Two classroom activities. Teaching of Psychology, 14, 228–229.

DEMONSTRATION/ACTIVITY:

SHOWING STUDENTS PRIOR EXAMPLES OF CLASSICAL CONDITIONING


After hearing about Pavlov’s work, students often jump to the erroneous conclusion that classical conditioning occurs
only in lower animals. A good way to disabuse them of this impression is to convince them that they have been
classically conditioned.
Smith (1987) developed such an exercise revolving around the distinctive few notes of music from the movie Jaws.
Have your students close their eyes and imagine that they are at the beach. Talk for 1 or 2 minutes about sitting at the
beach in the hot sun and about wanting to go for a swim to cool off. Guide them into the water to splash about. Then
have them go deeper into the surf. At this time, surreptitiously start a tape on which you have recorded the Jaws music.
You will hear immediate gasps and laughs of recognition from many students. The beauty of this demonstration is that
students can describe in their own words what happened during the movie to classically condition their reaction to the
music—even if you have not yet discussed classical conditioning. After you provide them with the proper terminology,
students can figure out that the unconditioned stimulus was the shark, the unconditioned response was fear or disgust, the
conditioned stimulus was the music, and the conditioned response was a similar feeling of fear or disgust.



If you wish to go into more depth, you can point out that the shark is actually a CS because of previous conditioning
about sharks (fear of sharks is not an innate reaction, although a disgust reaction to blood and carnage may be).
Considering the shark to be a CS allows you to talk about higher-order conditioning, in which a new CS (music) is
paired with the old CS (shark) to condition us to the new CS.
Vernoy (1987) presented another demonstration to show students that they have been conditioned. All you need for
this activity is several balloons and a needle (the larger the better—Vernoy recommended buying a large magician’s
needle). Hand out several inflated balloons in class, and walk around popping them with the needle. Notice that students
flinch as you pop the balloons and then begin to flinch as you approach the balloons with the needle. The needle serves
as the neutral stimulus (although it may already be a CS relative to balloons) that becomes a CS through its repeated
pairing with the UCS, the noise made by the popping balloons. After conditioning the class to the noise, pick up a
balloon and pierce it without popping it. This can be done by inserting the needle at one of the spots where there is little
tension on the balloon—at the nipple or near the knot. Note that the students flinch as you put the needle near and on the
balloon.
You can follow one or both of these demonstrations with a class discussion, asking students to relate other examples
of classical conditioning that have taken place in their lives. If they have problems coming up with examples, you might
prime them by asking if particular smells remind them of someone, something, or somewhere.

Smith, R. A. (1987). Jaws: Demonstrating classical conditioning. In V. P. Makosky, L. G. Whittemore, & A. M. Rogers (Eds.),
Activities handbook for the teaching of psychology: Vol. 2 (pp. 65–66). Washington, DC: American Psychological Association.
Vernoy, M. W. (1987). Demonstrating classical conditioning in introductory psychology: Needles do not always make balloons pop!
Teaching of Psychology, 14, 176–177.

DEMONSTRATION/ACTIVITY:

CLASSICALLY CONDITIONING STUDENTS IN CLASS

Sparrow and Fernald (1989) developed a technique for classically conditioning students during a class session and for
demonstrating other related concepts such as generalization, discrimination, and spontaneous recovery. They built a
conditioner that consisted of a light with a dimmer switch, a siren, and a buzzer (details are included in their article).
However, a light with a dimmer switch and a compressed-air horn should suffice to provide much the same
demonstration.
Sparrow and Fernald took four steps to classically condition the class:
1. Illuminate the light at a middle range on the dimmer switch. The light serves as the originally neutral stimulus,
and students should show essentially no response to it.
2. Sound the siren or horn several times. This stimulus should be sufficiently loud to evoke a startle response in
the students. Thus the sound serves as the UCS to elicit the UCR (startle response).
3. Pair the light with the sound about 10 times (CS paired with UCS).
4. Demonstrate a CR by presenting the light alone several times. Students should show a small startle response if
conditioned.
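The acquisition in steps 3 and 4 can be illustrated with the Rescorla-Wagner model, a standard account of classical conditioning, though not one Sparrow and Fernald themselves invoke; the learning-rate and asymptote values below are arbitrary choices:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength of the light (CS) across light-sound pairings.

    On each pairing, V changes in proportion to the prediction error
    (lam - V), so growth is rapid at first and levels off near asymptote.
    """
    v, history = 0.0, []
    for _ in range(trials):
        v += alpha * (lam - v)
        history.append(v)
    return history

strengths = rescorla_wagner(10)   # roughly the 10 pairings used in step 3
# By the final pairing V is close to the asymptote, which is why the
# light alone (step 4) now evokes a small startle response.
```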
Sparrow and Fernald advised diagramming these four steps for the class after the demonstration, using classical
conditioning terms and actual stimuli, to ensure that students make the connection between the demonstration and the
conditioning process.
If students are simply asked whether a startle response occurred, demand characteristics may influence their reports.
Sparrow and Fernald suggested two ways to circumvent this potential problem:
• Have some students serve as confederates and observe the other students to note their responses as they are
being conditioned. Of course, it is still possible that the observers could be influenced by expectancies.
• The startle response of a student (or perhaps several) could be monitored with a galvanic skin response (GSR)
meter. Gibb (1983) advocated the use of a GSR meter with a clear back and front for classical conditioning
demonstrations in class. You can place the clear meter on an overhead projector so that the class can see the
GSR readings.

After conditioning has taken place, generalization can be demonstrated by varying the intensity of the light with
the dimmer switch. Students will likely continue to show a startle response to the altered stimulus. These
generalization trials will tend to weaken the CR somewhat. Discrimination can be demonstrated by presenting the
original CS and a brighter and dimmer light about 10 times each, pairing the sound only with the original CS.
Announce that there will be six more light presentations, and ask students to carefully monitor their reaction to each
one. Randomly present each of the three intensity lights twice. If students have learned to discriminate, a startle
response should occur only to the original CS. Extinction is shown by presenting the light alone 10 to 15 times. The
startle response should die out. After several minutes have passed, presentation of the light will result in some
startle, thus demonstrating spontaneous recovery. Sparrow and Fernald used the buzzer to demonstrate the notion of
higher-order conditioning, although they were not able to create higher-order conditioning in their students.
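The pattern students should report across the dimmer, original, and brighter settings can be pictured as a generalization gradient: response strength falls off with distance from the trained intensity. A minimal sketch, assuming a Gaussian gradient whose shape and width are purely illustrative:

```python
import math

def cr_strength(intensity, trained=0.5, width=0.15, peak=1.0):
    """Gaussian generalization gradient centred on the trained light intensity."""
    return peak * math.exp(-((intensity - trained) ** 2) / (2 * width ** 2))

# Dimmer, original, and brighter settings on the dimmer switch:
for setting in (0.3, 0.5, 0.7):
    print(f"intensity {setting:.1f}: CR strength {cr_strength(setting):.2f}")
```

The startle response is strongest at the trained intensity and weaker, though not absent, at the altered settings, which is what the generalization trials demonstrate; discrimination training then sharpens this gradient.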
Sparrow and Fernald recommended this procedure because it recreates Pavlov’s procedures fairly accurately and
avoids some problems inherent in other descriptions of classical conditioning exercises. For a similar activity using a
water gun, see Shenker (1999).

Gibb, G. D. (1983). Making classical conditioning understandable through a demonstration technique. Teaching of Psychology, 10,
112–113.
Shenker, J. I. (1999). Classical conditioning: An all-purpose demonstration using a toy watergun. In L. T. Benjamin, Jr., B. F.
Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 163–165).
Washington, DC: American Psychological Association.
Sparrow, J., & Fernald, P. (1989). Teaching and demonstrating classical conditioning. Teaching of Psychology, 16, 204–206.

DEMONSTRATION/ACTIVITY:

REHEARSING CLASSICAL CONDITIONING CONCEPTS


Students often find the four components of classical conditioning—conditioned stimulus, unconditioned stimulus,
conditioned response, and unconditioned response—confusing, even after covering them in class. Often, they simply
memorize these concepts and get them correct as long as they deal with Pavlov’s salivating dogs. If you ask them to
apply those concepts to real-life situations, however, they often become confused and fail. Assuming that you are
interested in having your students learn the concepts and be able to apply them, you can help them by giving them
situations on which they can practice determining how the classical conditioning components apply.
HM 6-2 contains four scenarios that involve classical conditioning. In each case, students should determine the
conditioned stimulus, unconditioned stimulus, conditioned response, and unconditioned response. You can use this
exercise in a variety of different ways, depending on your teaching preference. Students could work on the HM
individually or in groups, in class, or at home.
You may wish to enlarge the exercise to include additional classical conditioning concepts such as acquisition,
extinction, spontaneous recovery, higher-order conditioning, generalization, and discrimination. This exercise gives you
a chance to reinforce the concepts that students often struggle with.

LECTURE/DISCUSSION TOPIC: INHIBITORY CLASSICAL CONDITIONING


Weiten & McCann’s discussion of classical conditioning in the text centers on excitatory conditioning, a situation in
which a CS signals the occurrence of a UCS. Pavlov’s dogs learned that the presentation of a bell signalled the
presentation of meat powder. Similarly, we may learn that the sight of blue or red flashing lights in our rear-view mirror
signals the presentation of a traffic ticket. However, excitatory conditioning is not the only form of classical conditioning
that takes place. Inhibitory conditioning, a situation in which a CS signals the absence of a UCS that the organism
expects, is also an important type of classical conditioning. Just as Pavlov’s dogs learned that the bell and meat powder
were associated, they also learned that silence and the absence of meat powder were associated. Thus, the dogs were
unlikely to salivate during silence. Inhibitory conditioning is sometimes hard to conceptualize, because it may involve
the absence of a CS that has been learned, as mentioned above. However, suppose Pavlov presented a light when the bell
was not present. Inhibitory conditioning would occur to the light; the dogs would learn not to salivate in the presence of
the light.



Two conditions help to increase the probability of inhibitory conditioning occurring (Purdy, Markham, Schwartz, &
Gordon, 2001):
• If the two CSs are presented in a discrimination situation, inhibitory conditioning occurs more readily. It is
easier to learn that a light signals the absence of shock if you are also presented with a bell that signals the
occurrence of shock.
• A long interval between the CS and UCS may result in inhibitory conditioning rather than excitatory
conditioning. Apparently, in this case, the organism learns that the CS is followed by no UCS, simply
because of the length of the interval. Thus, trace conditioning procedures may be particularly ineffective
in producing excitatory conditioning.

There are also two conditions that students often mistakenly assume will produce inhibitory conditioning that
do not (Purdy, Markham, Schwartz, & Gordon, 2001):
• Random presentations of the CS and UCS result in some pairings of the CS and UCS and in some solitary
presentations of the CS and of the UCS. In this situation, no conditioning (either excitatory or inhibitory)
occurs. Thus, no relationship between the CS and UCS is learned.
• Presentation of the CS alone is not sufficient to result in inhibitory conditioning. The expectation of a UCS is
necessary. If exposure to the CS alone occurs before conditioning begins, conditioning (both excitatory and
inhibitory) is slowed. Apparently the organism learns that the CS signals nothing.
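The first condition above (discrimination training) falls naturally out of the Rescorla-Wagner model, in which stimuli presented in compound share a single prediction error. The sketch below is illustrative only; the parameter values are arbitrary, and the text does not commit to this particular model:

```python
ALPHA = 0.2                       # learning rate
LAM_SHOCK, LAM_NONE = 1.0, 0.0    # asymptotes for shock and no-shock trials

v = {"bell": 0.0, "light": 0.0}   # associative strengths

for _ in range(200):
    # Excitatory trial: bell alone is followed by shock.
    v["bell"] += ALPHA * (LAM_SHOCK - v["bell"])
    # Inhibitory trial: bell + light is followed by nothing; the compound's
    # prediction error is shared by both stimuli.
    error = LAM_NONE - (v["bell"] + v["light"])
    v["bell"] += ALPHA * error
    v["light"] += ALPHA * error

# The light ends up with negative associative strength: it has become a
# conditioned inhibitor, signalling the absence of an otherwise expected shock.
```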

Another difficulty in conceptualizing inhibitory conditioning is the fact that it is difficult to distinguish between
inhibitory conditioning and no learning at all. For example, if an organism does not respond to a given stimulus, it is not
clear whether inhibitory conditioning is being displayed or whether the organism has never learned anything about the
stimulus. Purdy, Markham, Schwartz, and Gordon (2001) noted that there are two methods for determining whether or
not inhibitory conditioning has taken place: the retardation and summation tests.
In the retardation test, a CS is presented in the absence of an expected UCS. Thus, the CS should come to produce
the withholding or suppression of a response. After the inhibitory conditioning, the situation is reversed. The CS is used
in an excitatory conditioning paradigm; it is used as a signal for the occurrence of the UCS. If previous inhibitory
conditioning has taken place, the CS will be difficult to condition in an excitatory fashion, more difficult than a neutral
stimulus. Thus, the inhibitory conditioning put this CS at a disadvantage for excitatory conditioning rather than merely
making it the zero point. In the laboratory, an animal might learn that a bell signals the presentation of a shock and a
light signals the absence of shock. At some later point, the light is then paired with shock. The learning of this
association will be slower than pairing the shock with a new neutral stimulus. Imagine trying to learn that a police car
driving by is now a signal to speed up rather than a signal to slow down. After years of experience, slowing down at the
sight of police cars would be a difficult response to overcome.
The summation test is somewhat simpler. In this case, an inhibitory CS and an excitatory CS are presented at the
same time. If the inhibitory CS actually produces inhibition, it should reduce or eliminate the response that normally
would occur to the excitatory CS. Remember the example in which a bell is used to signal shock and a light to signal the
absence of shock. Suppose you present both the bell and light at the same time. The animal will probably be confused
and show a weaker fear response to the bell than would normally occur. Imagine driving down the street and coming to
a traffic signal on which both the red and green lights are lit at the same time. What do you do? You would probably
hesitate, which would reduce or eliminate the normal response to the green light.
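If responding to a compound is assumed to track the sum of the component associative strengths, the logic of the summation test reduces to a simple comparison; the strength values below are invented for illustration, not taken from any experiment:

```python
# Hypothetical post-training associative strengths:
v_bell = 0.9     # excitatory CS (signals shock)
v_light = -0.6   # inhibitory CS (signals absence of shock)

def predicted_fear(*strengths):
    """Assume compound responding sums associative strengths,
    floored at zero (no negative fear is expressed)."""
    return max(0.0, sum(strengths))

alone = predicted_fear(v_bell)              # responding to the bell by itself
compound = predicted_fear(v_bell, v_light)  # bell and light presented together
# The light passes the summation test if it reduces responding:
assert compound < alone
```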
Although students typically have more difficulty learning about inhibitory conditioning than excitatory conditioning,
comprehending inhibitory conditioning is important to understanding how discrimination occurs. In discrimination, we
typically have to learn to withhold a response to a stimulus that is somehow different from the stimulus to which we
should respond. Thus, we must experience inhibitory conditioning to the “wrong” stimulus in order for discrimination to
occur.

Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.

LECTURE/DISCUSSION TOPIC: BIOLOGICAL CONSTRAINTS ON LEARNING
Two major lines of research have undermined the behaviouristic interpretation of conditioning: biological constraints
and cognitive interpretations. A major tenet of behaviourists was the notion that laws of conditioning exist. Thus,
principles of learning could be developed that would apply to all organisms for all behaviours. It did not matter to
behaviourists whether they studied animals or humans. They chose animals because they could exert greater control over
animals, house them more conveniently, turn over generations more rapidly, study simpler organisms with simpler
methods—just to name a few reasons. Regardless, behaviourists did not believe that they were studying only one
particular animal or one specific behaviour. Behaviourists believed that they were learning general principles that would
apply across the board. For example, Skinner (1938) wrote, “The general topography of operant behaviour is not
important, because most if not all specific operants are conditioned. I suggest that the dynamic properties of operant
behaviour may be studied with a single reflex” (pp. 45–46). Pavlov (1927) wrote that “it is obvious that the reflex
activity of any effector organ can be chosen for the purpose of investigation, since signaling stimuli can get linked up
with any of the inborn reflexes” (p. 17). Any information that weakens this concept of generality, therefore, strikes at
one of the key principles of behaviourism.
The Weiten & McCann text summarizes several biological limitations on learning. This information implies that an
organism is not a blank slate when it approaches a learning situation. The biology and heredity of an organism have
placed certain limitations on the organism’s potential for learning.
One limitation mentioned in the text is instinctive drift. The Brelands, who found that raccoons would not let go of
coins, found other examples of animal misbehaviour (Breland & Breland, 1961). For example, one of the Brelands’ most
famous trained animal acts was the dancing chicken: A chicken comes out of a cage, gets up on a platform, and “dances”
to music. In actuality, they had tried to train a chicken to simply stand on a platform for a short period of time. However,
they “found that over 50% developed a very strong and pronounced scratch pattern, which tended to increase in
persistence as the time interval was lengthened” (p. 682). Therefore, the Brelands developed a new act around the
instinctive behaviour of the chicken. In another instance, they were able to train a chicken to play baseball by pulling a
loop that made a bat hit a ball—as long as the baseball field was in a cage. When they removed the cage for
photographic purposes, they ran into a problem with the chickens. Even well-trained chickens “became wildly excited
when the ball started to move. They would jump up on the playing field, chase the ball all over the field, even knock it
off on the floor and chase it around, pecking it in every direction, although they had never had access to the ball before”
(p. 683). The Brelands also trained a pig to pick up large wooden coins and put them in a piggy bank. After several
weeks, the behaviour deteriorated as the pigs began to drop the coins and root them. In all of these cases, instinctive
behaviours began to interfere with learned behaviours—thus the term instinctive drift.
Taste aversions are another good example of biological influences on learning. The text provides good coverage of
this topic, particularly of some of the classic studies in this area. However, students often have difficulty understanding
these results. Garcia and Koelling (1966) paired “bright-noisy-tasty” water (a light, a click, and a flavour were presented
when rats drank) with either shock or lithium chloride (a nausea-producing drug). After the negative stimulus was
administered, the rats were given preference testing between bright-noisy water and tasty water. They obtained the
results shown in TM 6-1. Rats who experienced nausea tended to avoid the tasty water, whereas rats who had been
shocked avoided the bright-noisy water. These results illustrate the concept of belongingness: External consequences
tend to be associated with external stimuli, and internal consequences tend to be associated with internal stimuli. It
seems clear that belongingness is simply another way of describing preparedness. Organisms are clearly prepared to
associate nausea with tastes in order to form taste aversions (see “Lecture/Discussion Topic: Preparedness in Learning”).
If you are wandering in the woods without food, eat some purple berries, and get sick later, you need to learn not to eat
the purple berries in case they are poisonous. Notice, however, that organisms are not prepared to associate nausea with
locale. Taste aversions are learned, not location aversions. When you get sick from eating a McDonald’s hamburger,
aversion occurs to the burgers, not to the restaurant.
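For computationally minded students, the cue-consequence pattern can be captured in a few lines. The sketch below recodes the qualitative pattern of Garcia and Koelling's results (not their actual data); the category labels and function name are our own illustrative choices:

```python
# Garcia and Koelling's (1966) cue-consequence design, recoded as a lookup:
# which cue-consequence pairings supported strong learning.
strong_learning = {
    ("taste", "nausea"): True,        # internal cue, internal consequence
    ("audiovisual", "shock"): True,   # external cue, external consequence
    ("taste", "shock"): False,        # mismatched: learning is weak
    ("audiovisual", "nausea"): False, # mismatched: learning is weak
}

def belongs(cue, consequence):
    """Belongingness: learning is strong only when cue and consequence
    are both internal or both external."""
    internal = {"taste", "nausea"}
    return (cue in internal) == (consequence in internal)

# The belongingness rule reproduces all four cells of the design:
for (cue, outcome), learned in strong_learning.items():
    assert belongs(cue, outcome) == learned
print("belongingness predicts all four cells")
```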

Breland, K., & Breland, M. (1961). The misbehavior of organisms. American Psychologist, 16, 681–684.
Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123–124.
Pavlov, I. (1927). Conditioned reflexes. Oxford, UK: Oxford University Press.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century-Crofts.

174 ENRICHED INSTRUCTOR’S MANUAL


LECTURE/DISCUSSION TOPIC: PREPAREDNESS IN LEARNING
Preparedness refers to the degree to which biology has made certain associations possible to learn. Seligman (1970) said
that an “organism may be more or less prepared by the evolution of its species to associate a given CS and US or a given
response with an outcome” (p. 408). Seligman’s notion is that preparedness is a continuum, and we observe differences
in the learning of various behaviours because they differ in their level of preparedness. Preparedness is measured simply
by looking at the amount of input (trials, pairings, or whatever) required before the desired output (response) occurs.
Seligman differentiated among prepared, unprepared, and contraprepared behaviours. If a response is made after the
first trial, Seligman would refer to the behaviour as prepared; if only a few trials are required, the behaviour is somewhat
prepared. If many trials are required, the behaviour is unprepared. If the response is never learned, or is learned only
after an exceptionally large number of trials, the response is contraprepared.
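Seligman's continuum amounts to a mapping from amount of input to a category. A minimal sketch, using illustrative trial thresholds that Seligman did not specify:

```python
def preparedness_category(trials_to_criterion, max_trials=1000):
    """Place a behaviour on Seligman's preparedness continuum.

    Input is the number of trials (pairings) needed before the desired
    response occurs; None means the response was never learned.
    The threshold values are illustrative assumptions only.
    """
    if trials_to_criterion is None or trials_to_criterion > max_trials:
        return "contraprepared"    # never learned, or only after extreme input
    if trials_to_criterion <= 1:
        return "prepared"          # response after a single trial
    if trials_to_criterion <= 10:
        return "somewhat prepared" # only a few trials required
    return "unprepared"            # many trials required

# Taste aversion: often learned in one trial
print(preparedness_category(1))     # prepared
# Typical laboratory bar pressing: many trials
print(preparedness_category(80))    # unprepared
# A cat scratching to escape a puzzle box: essentially never learned
print(preparedness_category(None))  # contraprepared
```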
Seligman used the concept of preparedness to explain the controversy between ethologists and learning theorists. He
maintained that ethologists tend to work with prepared behaviours, whereas psychologists are more likely to deal with
unprepared behaviours. This difference would account for the fact that various behaviours appear to operate under
totally different “laws” of learning. Seligman noted that “it is possible that different laws of learning may vary with the
dimension” (1970, p. 416).
Seligman explained the Garcia and Koelling results outlined in “Lecture/Discussion Topic: Biological Constraints
on Learning” in terms of preparedness: associating nausea with taste and shock with external stimuli are prepared
behaviours, whereas associating shock with taste and nausea with external stimuli are contraprepared behaviours. Preparedness can also be
seen in instrumental learning. Seligman (1970) pointed out that Thorndike ran into difficulty when he attempted to train
cats to engage in such behaviours as scratching or licking in order to escape his puzzle boxes. On the other hand, in
autoshaping paradigms, pigeons are found to peck at lighted disks even when the pecking is not related to food delivery.
Apparently scratching or licking to escape is contraprepared behaviour in cats, and pecking for food is prepared
behaviour in pigeons.
Of course, humans learn taste aversions. After describing taste aversions in class, you can ask for a show of hands
from students who have had the experience. You will find a sizable portion of students who avoid some particular food
at all costs. Many students can even relate the incident that caused their aversion.
Other examples of preparedness seem to occur in humans. Developing phobias to threatening objects or situations
may actually be a prepared behaviour related to adaptive survival behaviour, just as taste aversions appear to be.
Seligman (1970) proposed that language may be a prepared behaviour for humans. Skinner’s behaviouristic theory of
language acquisition, involving imitation and reinforcement (see text Chapter 8), is typically faulted for being too
restrictive. Many theorists do not believe that we learn language in the same trial-and-error fashion that rats learn to
press bars. As Seligman noted, “We do not need to arrange sets of linguistic contingencies carefully to get children to
speak and understand English” (1970, p. 414). For example, children make unique utterances that they obviously are not
imitating. According to Seligman, the early language development of children with deaf parents follows the same pattern
as that of children with hearing parents. This fact supports the notion of preparedness in language development, at least
early in life. It is only later, when interaction is important to language development, that children with deaf parents begin
to fall behind. Also, perhaps infant behaviours we often refer to as instinctive (for example, smiling, laughing, crying)
may simply be prepared behaviours. Any behaviour that is learned very easily and is virtually universal is a candidate for
falling in the prepared category—walking, for example.
Another apparent biological constraint, the critical period, seems to interact with preparedness. A critical period is a
time during which learning of a specific type takes place extremely easily, whereas that learning may not take place
outside of the critical period. In Seligman’s (1970) terminology, a critical period may be a time during which an
organism is prepared to learn a particular behaviour. Outside of the critical period, the organism may be unprepared or
even contraprepared to learn.
The most commonly cited example of behaviour affected by a critical period is imprinting. Imprinting, which takes
place in many avian species, refers to “the development of a social attachment to stimuli experienced during a sensitive
period of development” (Klein, 2002, p. 474). Ducks and geese, for example, have been found to imprint on moving
objects in their environment shortly after hatching. You probably remember the picture of Konrad Lorenz walking across a
field with a line of ducklings following him. Imprinting is affected by the age of the animal. The age of 13 to 16 hours is
the prime time for imprinting to occur in ducklings. Somewhat less imprinting occurs at the ages of 9 to 12 hours and 17
to 20 hours. Outside of this critical period, no imprinting seems to occur. This relationship is apparently not
unchangeable, as some evidence shows that imprinting can be extended beyond the critical period. However, that
evidence does not weaken the idea that there may be a period when learning is highly prepared, and other times when
learning is much more difficult. The idea of imprinting to the mother/caretaker for social attachment has been extended
to other species. Klein (2002) reported that the sensitive period for sheep and goats occurs at 2 to 3 hours after birth, at 3
to 6 months of age for primates, and at 6 to 12 months for humans.
Lenneberg (1967) applied the concept of sensitive periods to language acquisition, writing that language emerges
before three years of age due to “an interaction of maturation and self-programmed learning” (p. 158). He also felt that,
for children between the age of three and the early teens, “the possibility for primary language acquisition continues to
be good” (p. 158). However, after puberty, the ability to learn language is somewhat impaired. Lenneberg said that “the
brain behaves as if it had become set in its ways and primary, basic language skills not acquired by that time, except for
articulation, usually remain deficient for life” (p. 158).
Finally, Seligman (1970) used the concept of preparedness to explain some of the seemingly contradictory findings
concerning learning. He maintained that the typical laboratory study involves an organism learning an unprepared
behaviour. This type of learning requires repeated trials before learning is achieved and results in the standard negatively
accelerated learning curve. Other behaviours sometimes seem to violate these “laws” of learning generated from
unprepared behaviours. For example, taste aversions, phobias, and other such behaviours may be learned in one trial.
These do not violate the laws applied to the laboratory behaviours; they simply operate under a different set of laws.
Then there are the contraprepared behaviours. No matter how long or hard the experimenter tries, some behaviours
simply cannot be learned very well or at all. It is almost impossible to train a rat to press a bar in order to avoid shock,
for example. Thus, as mentioned above, looking for laws of learning could require three sets of laws—one each for
prepared, unprepared, and contraprepared behaviours.

Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Lenneberg, E. H. (1967). Biological foundations of language. New York: Wiley.
Seligman, M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77, 406–418.

LECTURE/DISCUSSION TOPIC: COGNITIVE INTERPRETATIONS OF CLASSICAL CONDITIONING


Early interpretations of conditioning were, of course, steeped in the behaviourist tradition. Pavlov’s original ideas about
contiguity essentially went unchallenged. According to contiguity theory, the necessary and sufficient condition for
classical conditioning to occur is that the CS and UCS occur close together (contiguous) in time. There is much
information that supports this view. For example, research shows that conditioning becomes stronger with more trials.
More trials allow more contiguous pairings of the CS and UCS. As the CS-UCS interval lengthens (and contiguity
decreases), conditioning becomes weaker. The data seemed to fit contiguity theory so well that no one questioned the
theory for almost half a century (Malone, 1990).
Gradually, however, evidence began to accumulate that did not fit with contiguity notions (Purdy, Markham,
Schwartz, & Gordon, 2001):
• Taste aversions violate the typical CS-UCS interval findings. Very strong conditioning occurs despite a
long CS-UCS interval.
• If contiguity is valid, the temporal arrangement of the CS and UCS should be of prime importance.
Interestingly, simultaneous conditioning (CS and UCS onset and offset occur at the same time) is relatively
ineffective, although this arrangement would seem to have the greatest contiguity. Short-delay conditioning (CS
onset precedes UCS, both end at the same time) leads to the strongest conditioning, which also does not seem
to be consistent with contiguity theory. However, trace conditioning (CS onset and offset occur before UCS)
results in poor conditioning, which makes sense given the contiguity position, because there is a gap between
the CS offset and the UCS onset. According to contiguity theory, it even seems that a backward pairing (UCS
presented before CS) might result in conditioning, because the two stimuli are presented contiguously.
However, backward conditioning is notoriously difficult to accomplish (Purdy, Markham, Schwartz, & Gordon, 2001).

• The very fact that inhibitory conditioning occurs casts doubt on contiguity theory. Remember that a CS paired
with the absence of an expected UCS results in an inhibitory CR. In such a case, the CS and UCS are explicitly
unpaired. Contiguity theory would predict no conditioning when the CS and UCS are not paired in time.
However, as noted in “Lecture/Discussion Topic: Inhibitory Classical Conditioning,” no conditioning is not the
same as inhibitory conditioning.
By the mid-1960s, evidence had accumulated that weakened contiguity theory. Then two researchers, working at
about the same time, developed ideas that have totally revised the way classical conditioning is viewed.
Rescorla (1968) receives credit for the research with the greatest impact on our view of classical conditioning.
Rescorla devised an experiment in which he manipulated contingency, or the predictive relationship between events,
while holding contiguity constant. He used a tone as the CS and a shock as the UCS. For example, one condition had a
probability of .4 that the shock would occur with the tone during a certain period. The probability that the shock would
occur alone differed for four groups (0, .1, .2, .4), although all four groups had equal contiguity (a 40% chance that the
tone and shock would occur together). One group performed in a situation with high contingency, therefore perfect
predictability (.4/0). The other groups had varying degrees of contingency, ranging from near perfect (.4/.1) to none
(.4/.4).
Despite the fact that all groups had equal contiguity, conditioning to the tone occurred only on the basis of
contingency. TM 6-2 shows that the .4/0 group showed strong conditioning, and the .4/.4 group showed no conditioning.
Rescorla used the conditioned emotional response paradigm, in which rats pressing a bar for food are periodically
presented with the CS. Rats that were strongly conditioned totally suppressed the bar press response and
yielded a suppression ratio of 0 (no bar presses during the CS presentation). Rats that had not conditioned to the CS
pressed the bar at the same rate during the CS and when the CS was not present, yielding a suppression ratio of .5.
The intermediate groups (.4/.1, .4/.2) showed some conditioning, but the fact that the contingency was not perfect
disrupted learning considerably.
Rescorla even found that the .1/0 group showed a higher level of conditioning than the .4/.4 group, despite the fact that
the latter group had more contiguous pairings of the CS and UCS.
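Two quantities in this discussion lend themselves to simple computation: contingency, often summarized as the probability of shock given the tone minus the probability of shock given no tone, and the suppression ratio, computed as responses during the CS divided by responses during the CS plus responses during an equal pre-CS period. A sketch using the group labels from the text (the function names are our own):

```python
def delta_p(p_us_given_cs, p_us_given_no_cs):
    """Contingency: how much better the CS predicts the US than its absence does."""
    return p_us_given_cs - p_us_given_no_cs

def suppression_ratio(cs_responses, pre_cs_responses):
    """0 = complete suppression (strong conditioning); .5 = no suppression."""
    return cs_responses / (cs_responses + pre_cs_responses)

# Rescorla's (1968) groups: equal contiguity (P(shock | tone) = .4 for all),
# but contingency ranges from perfect (.4/0) to none (.4/.4).
groups = {".4/0": (.4, 0), ".4/.1": (.4, .1), ".4/.2": (.4, .2), ".4/.4": (.4, .4)}
for label, (p_cs, p_no_cs) in groups.items():
    print(label, delta_p(p_cs, p_no_cs))

# A strongly conditioned rat stops pressing entirely during the CS:
print(suppression_ratio(0, 20))   # 0.0
# An unconditioned rat presses at the same rate with or without the CS:
print(suppression_ratio(20, 20))  # 0.5
```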
The interpretation of Rescorla’s work has centered on expectancy, or predictability. If you are a rat in the .4/0
group, you are able to predict or expect the shock quite accurately. If the tone occurs, you expect that you might get
shocked. If the tone is not present, you predict no shock. On the other hand, if you are a rat in the .4/.4 group, you have
no way of predicting or expecting shock. Whether the tone is on or off makes no difference. For a review of his work,
see Rescorla (1988).
The other important influence on our thinking about classical conditioning was Kamin’s work on blocking. Previous
research showed that pairing two CSs simultaneously with a UCS resulted in conditioning to both CSs. However, Kamin
(1969) found that it was not possible to condition a response to a second CS if a relationship between the first CS and
UCS already existed. His paradigm is shown in TM 6-3. The outcomes are that Groups 1 and 3 will show a response in
Stage 3, as in traditional compound conditioning. However, Groups 2 and 4 will not respond in Stage 3, despite the fact
that they have experienced contiguous pairings of the second CS and the UCS in Stage 2.
The interpretation of Kamin’s work has centered on the information value of the CS. The first CS has acquired the
information value necessary to predict the occurrence of the UCS. The second CS adds no new information, so it is not
conditioned. However, if the UCS intensity is increased when the second CS is added, then conditioning does occur
(Lieberman, 2000). In this case, the combined CSs signal something different than the first CS alone does; they have
information value.
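Blocking is also the signature prediction of the Rescorla–Wagner model, a later formalization of these expectancy ideas not covered in the text. A minimal simulation (the learning-rate and asymptote values are arbitrary illustration choices) shows that a pretrained CS A leaves no prediction error for CS B to absorb:

```python
def rescorla_wagner(trials, V, alpha=0.3, lam=1.0):
    """Update associative strengths V across trials.

    Each trial is a (present_cues, us_occurs) pair. The prediction error
    is the US value (lam or 0) minus the summed strength of all cues
    present, and every present cue moves a fraction alpha of that error.
    """
    for cues, us in trials:
        error = (lam if us else 0.0) - sum(V[c] for c in cues)
        for c in cues:
            V[c] += alpha * error
    return V

V = {"A": 0.0, "B": 0.0}
# Stage 1: CS A alone is paired with the US for 30 trials.
rescorla_wagner([(["A"], True)] * 30, V)
# Stage 2: the compound AB is paired with the same US for 30 trials.
rescorla_wagner([(["A", "B"], True)] * 30, V)
# A has absorbed nearly all the available strength; B learned almost nothing.
print(round(V["A"], 2), round(V["B"], 2))  # 1.0 0.0
```

Because A already predicts the US perfectly after Stage 1, the prediction error in Stage 2 is near zero, so B's strength barely moves: that is blocking.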
These two researchers alone have provided ample evidence that conditioning depends not only on contiguity but
also on contingency. Contiguity may be a necessary factor, but it is not sufficient to account for classical conditioning.

Kamin, L. J. (1969). Predictability, surprise, attention, and conditioning. In B. A. Campbell & R. M. Church (Eds.), Punishment and
aversive behavior (pp. 279–296). New York: Appleton-Century-Crofts.
Lieberman, D. A. (2000). Learning: Behavior and cognition. Belmont, CA: Wadsworth.
Malone, J. C. (1990). Theories of learning: A historical approach. Belmont, CA: Wadsworth.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA: Wadsworth.
Rescorla, R. A. (1968). Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative and
Physiological Psychology, 66, 1–5.
Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43, 151–160.

DEMONSTRATION/ACTIVITY: CLASSICAL AND INSTRUMENTAL CONDITIONING OF PLANARIA
Katz (1978) suggested using planaria to give students the chance to apply classical and instrumental conditioning
techniques. This activity will require some preparation on your part and may be used outside of class.
A conditioning chamber must be constructed, but one chamber will serve for both classical and instrumental
conditioning. The chamber consists of a petri dish (two-thirds full of water) resting in a hole in a wooden base. “The
base supports a 25-watt light which is mounted 6 inches above the petri dish, as well as two electrodes which extend into
the petri dish” (p. 91). One switch controls the light and another the electrodes (minimized wattage through a
transformer); power is supplied through a wall plug (see Katz’s article for an electrical diagram).
Classical conditioning consists of pairing the light with shock. The shock is presented at a minimal level to evoke a
turning response. The goal is to develop a turning response to the light alone.
For instrumental conditioning, a piece of cardboard or paper must be placed under the petri dish. A circle (start box)
is drawn in the middle of the paper, and half the paper is coloured black and half white. The planaria is placed in the
start box and given 10 baseline trials to determine its preference for black or for white. The task is then to train the
planaria to avoid its preferred side by shocking it each time it goes there. Katz noted that avoidance typically occurs within
30 minutes.
Katz reported that students enjoy and learn from this activity. The potential negative reaction to using shock is
probably mitigated by the planaria: It is not cute and furry. Also, the level of shock is low. The planaria also has the
advantage of being low on the biological scale, which gives students a chance to see the broad range of applicability of
classical and instrumental concepts. For a variation on Katz’s method, see Abramson, Kirkpatrick, Bollinger, Odde, and
Lambert (1999).

Abramson, C. I., Kirkpatrick, D. E., Bollinger, N., Odde, R., & Lambert, S. (1999). Planarians in the classroom: Habituation and
instrumental conditioning. In L. T. Benjamin, Jr., B. F. Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for
the teaching of psychology: Vol. 4 (pp. 166–171). Washington, DC: American Psychological Association.
Katz, A. N. (1978). Inexpensive animal learning exercises for huge introductory laboratory classes. Teaching of Psychology, 5, 91–
93.

LECTURE/DISCUSSION TOPIC: INSTRUMENTAL VERSUS OPERANT CONDITIONING


Weiten & McCann make no distinction between instrumental and operant conditioning; in fact, the text states that “another
name for operant conditioning is instrumental learning.” However, some theorists distinguish between the two.
According to Klein (2002), instrumental learning involves a situation in which “the environment constrains the
opportunity for reward and a specific behaviour can obtain reward” (p. 475). He defined operant conditioning as a
situation where a “specific response produces reinforcement and the frequency of response determines the amount of
reinforcement obtained” (p. 477). There are subtle differences between these definitions. An instrumental response
obtains reward, whereas operant behaviour produces reinforcement. Klein made no distinction between the terms
reward and reinforcement, so that difference in terms is not the meaningful one. A second difference is that instrumental
responses are constrained by the environment, whereas operant responses control reinforcement. Purdy, Markham,
Schwartz, & Gordon (2001) pointed out that the “fundamental difference between instrumental and operant situations
involves the degree to which an organism is free to make a response at any given time” (p. 100).
Thus, instrumental and operant conditioning involve studying different types of behaviours in different types of
environments. In the instrumental paradigm, the experimenter controls the ability of the organism to obtain
reinforcement by using discrete trials. Instrumental environments include runways, mazes, Thorndike’s puzzle boxes,
and escape or avoidance chambers. To receive reinforcement, the organism has to be placed in the environment each
time. On the other hand, the classic example of an operant environment is a Skinner box. In a Skinner box, once the
organism is placed in the environment, it controls the rate and quantity of reinforcement through its responding.

Researchers study different elements of behaviour through instrumental and operant conditioning (Purdy, Markham,
Schwartz, & Gordon, 2001). In an instrumental situation, when the opportunity for a response is provided, the
probability, speed, or accuracy of the organism’s response is typically the dependent variable. In an operant situation,
however, the rate of responding within a given time is most often the dependent variable measured. Response rate would
not be a logical variable to study in an instrumental situation, because the experimenter controls the opportunity to
respond.
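The contrast in dependent variables can be made concrete with a few lines of arithmetic: per-trial latency for a discrete-trial instrumental task versus responses per minute for a free-operant task. All of the numbers below are invented for illustration:

```python
# Instrumental (discrete trials): the experimenter controls each opportunity
# to respond, so the natural measure is per-trial latency in seconds.
trial_latencies = [12.0, 8.5, 6.0, 4.2, 3.1]   # hypothetical runway times,
mean_latency = sum(trial_latencies) / len(trial_latencies)  # decreasing with learning

# Operant (free responding): the organism controls when it responds, so the
# natural measure is rate (responses per minute of session time).
bar_presses, session_minutes = 240, 30          # hypothetical Skinner-box data
response_rate = bar_presses / session_minutes

print(round(mean_latency, 2), response_rate)    # 6.76 8.0
```

Note that response rate would be meaningless for the runway data, since the experimenter, not the rat, decides when each trial starts.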
Instrumental and operant situations exist outside the laboratory also. Any situation in which there is the chance for
only one reinforcement per opportunity fits the definition of instrumental learning. For example, instrumental
conditioning would be involved if a child is reinforced on the basis of whether the bed is made or homework is
completed on a daily basis. Humans also engage in many operant situations such as gambling, fishing, and dating (Klein,
2002). To ensure that your students understand the difference between instrumental and operant conditioning, ask them
to give examples of each in class. As an example is given, make sure it is correct, and ask the volunteer to explain why
the behaviour is instrumental or operant.
Klein (2002) implied that there may be more real-world examples of operant conditioning than of instrumental
conditioning. Do you find this to be true in your class’s examples?

Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.

DEMONSTRATION/ACTIVITY: SHAPING IN THE CLASSROOM


Shaping is one of those concepts encountered in introductory psychology that seems rather simple on the surface but
often is difficult to apply. I learned this lesson many times as students in my Learning class struggled to use the concept
in training rats to press a bar.
Watson (1981) suggested using a variation on a childhood game to demonstrate shaping. A volunteer leaves the
room while the rest of the class selects a simple behaviour to shape; Watson suggested that it be touching the
chalkboard. On return, the subject must attempt to discover the behaviour to be learned through the instructor’s shaping
(saying good every time the subject makes a move in the correct direction).
After this group demonstration, Watson divided the class into pairs and had one person serve as the shaper and the
other as the learner. The shaper guides the learner to some desired behaviour through the shaping process. Watson
suggested continuing the exercise in a new arrangement by having the shaper use the word bad as a punishment
whenever the subject “gets colder” rather than saying good as the learner gets closer to the desired behaviour. The class
will typically discover that punishment is not as effective as reinforcement.
Randall Wight, a colleague of mine at Ouachita, used a variation of this technique but made it an active learning
exercise. The entire class participates in the original shaping by clapping hands once for a reinforcement. Then they can
all see individual differences in shaping contingencies. Also, he typically had the class shape the behaviour of flipping
the light switch in the classroom, which allows students to generalize easily to shaping a rat to press a lever.
Another way to demonstrate shaping and reinforcement is to have the class form pairs, give each pair a batch of
paper clips, and have one member of the pair leave the room. You can also use pennies, but paper clips demonstrate that
the reinforcer doesn’t have to have real value. The ‘experimenters’ then decide what to reinforce, and it doesn’t all have
to be the same. It may be that a paper clip is given for every personal pronoun used, or for touching the hair, or for
talking about psychology! The ‘subjects’ come back in, and are told to sit and talk to their partner. The one doing the
shaping says nothing, just pushes a paper clip toward the one talking for each approximation of the desired response,
until the response is clearly established. The discussion may bring out that some students weren’t quite aware of which
behaviour is being reinforced; they may have established some superstitious behaviours along the way.

Watson, D. (1981). Shaping by successive approximations. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook for
the teaching of psychology (pp. 60–61). Washington, DC: American Psychological Association.

DEMONSTRATION/ACTIVITY: SHAPING A GERBIL
Many teachers would like to give students the opportunity to interact with a rat in a Skinner box but face obstacles such
as cost, access to equipment, and student dislike of rats. Plant (1980) developed a low-cost alternative that overcomes
these obstacles. He suggested making and using gerbil jars rather than Skinner boxes. A gerbil jar is made from a
gallon-size glass jar. It is necessary to drill holes in the jar with a carbide drill bit to furnish ventilation, water access,
and a food tray/magazine (raw shelled sunflower seeds work well). Plant suggested hanging a bell from the lid of the jar
and having students train a gerbil to reach up and ring the bell.
Plant noted that motivated students have trained gerbils to do back flips and carry objects in the jar. Your students’
gerbil jars could be modified to allow shaping of different behaviours. Furthermore, other concepts of operant
conditioning (e.g., acquisition curves, shaping, extinction, generalization, discrimination, schedules of reinforcement)
could also be applied to the gerbil jar task.
Plant advocated using the gerbil jar as an out-of-class activity, but it could also be started in class or dealt with in a
laboratory session. Jars should be available from your college’s food service at no cost. For instructions on building a
low-cost Skinner box, see Keith (1999). If you require students to purchase their own gerbil (individually or in small
groups), or if you can maintain a gerbil colony in your department, the cost of this activity will be minimal. Whether
students work with gerbils at home or in the department, be certain to give them some basic information about care of
the animals and the ethical guidelines appropriate to this activity (see Chapter 2 of this manual for ethical guidelines in
working with animals). The hands-on experience involved in this activity will strongly reinforce the information
presented in class.

Keith, K. D. (1999). Operant conditioning in the classroom: An inexpensive home-built Skinner box. In L. T. Benjamin, Jr., B. F.
Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 172–175).
Washington, DC: American Psychological Association.
Plant, L. (1980). The gerbil jar: A basic home experience in operant conditioning. Teaching of Psychology, 7, 109.

LECTURE/DISCUSSION TOPIC: GENERALIZATION AND DISCRIMINATION


Your students will be familiar with the phenomena of generalization and discrimination without even knowing it. Ask
your class how many of them have had the embarrassing experience of sneaking up behind someone and grabbing them, only
to discover that the person is not who they thought but a total stranger. In this case, students are guilty of faulty generalization.
When asked why they made the mistake, students should be able to figure out that the cues from someone’s back are not
as distinctive (discriminable) as the cues from the face. Ask how many have seen someone at a distance and mistaken the
person for someone else until the person got closer, when the cues became more distinctive. This situation illustrates
generalization followed by discrimination.
Students may conclude that generalization and discrimination occur in opposition to each other. However, these
processes are actually complementary in many cases of learning. Consider the child learning about dogs. She learns that
the animal in her house is called a dog. When she goes outside and sees other dogs, they do not look the same as her
dog, so she discriminates and does not consider them to be dogs. She is corrected by her parents and then may begin to
generalize too broadly, including cats, squirrels, and cows in her concept of dog. Once again, discrimination must take
place as the child learns to differentiate dogs from other animals. In much of our learning, generalization and
discrimination occur in a give-and-take fashion. Thus, you learn to generalize about all cars to discriminate them from
trucks. However, within cars, you learn to discriminate different makes and models. Depending on your sophistication
regarding different cars, you may be able to discriminate between two similar models of the same car or only among red,
blue, silver, and other colours of cars.
Weiten & McCann provide little information about how researchers study generalization and discrimination in the
laboratory. To help students understand this process, you can introduce the notion of generalization gradients. In a
typical study, conditioning to a specific stimulus (S+) takes place. During testing, the organism is presented with the S+
and new stimuli that it has not previously experienced (S–). The question of interest is how much the organism responds
to the various S–. TM 6-4 illustrates an auditory task in which the S+ was a 1000-Hz tone. Line A represents a steep
generalization gradient, which implies less generalization and more discrimination. Note how the curve peaks around the
S+. Responses to S– that are similar to S+ are higher than responses to the more distant S–. Line B represents a shallow

180 ENRICHED INSTRUCTOR’S MANUAL


generalization gradient, implying more generalization and less discrimination. Notice that the response to S+ is only
slightly greater than the responses to the various S–. The subject represented in Line B has not learned to discriminate
well at all.
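The two gradients on TM 6-4 can be mimicked with a simple quantitative sketch. The Gaussian shape, the response scale, and all parameter values below are illustrative assumptions (empirical generalization gradients are only roughly bell-shaped), not data from the transparency:

```python
import math

def gradient(test_hz, s_plus_hz=1000, peak=100.0, sigma=200.0):
    """Hypothetical response strength to a test tone.

    sigma controls steepness: a small sigma yields a steep gradient
    (little generalization, good discrimination, like Line A); a large
    sigma yields a shallow gradient (like Line B).
    """
    return peak * math.exp(-((test_hz - s_plus_hz) ** 2) / (2 * sigma ** 2))

tones = [400, 700, 1000, 1300, 1600]               # S+ is the 1000-Hz tone
steep = [gradient(t, sigma=150) for t in tones]    # Line A
shallow = [gradient(t, sigma=600) for t in tones]  # Line B

for t, a, b in zip(tones, steep, shallow):
    print(f"{t:>4} Hz  steep: {a:6.1f}   shallow: {b:6.1f}")
```

Responding peaks at S+ in both cases; only the shallow gradient keeps responding high to tones far from 1000 Hz, which is exactly the pattern described for Line B.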
A wide variety of variables have been found to affect generalization and discrimination (Purdy, Markham,
Schwartz, & Gordon, 2001). Increased training seems to increase overall responding to S+ and to decrease
generalization. Increasing motivation level also reduces generalization in terms of the absolute number of responses
made. As the interval between training and testing increases, generalization also increases, probably due to forgetting.
Not surprisingly, discrimination training (contrasting S+ and S–) decreases generalization.
Generalization and discrimination are valuable processes of learning. Generalization allows us to approach similar
situations without learning an entirely new pattern of behaviour, as when we drive different cars, go to different classes,
and use different computers. Discrimination allows us to note important differences and respond accordingly. For
example, driving a car with a standard transmission differs in some important ways from driving a car with an automatic
transmission, as does a car with instrument controls in different locations than the car you typically drive.

Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.

DEMONSTRATION/ACTIVITY: REINFORCEMENT IN OPERANT CONDITIONING


Hergenhahn (1981) adapted the experimental procedure of Verplanck (1955) to demonstrate the role of reinforcement in
operant conditioning in the classroom. Because the exercise involves verbal conditioning, you can group the class in
experimenter-participant pairs so that all students can be involved simultaneously.
Tell the experimenters to reinforce any statement of opinion (for example, “I believe that . . .” or “I think . . .”) as
they listen to their participant talk. Have the experimenters tell their participant the following:

I will ask you to begin talking. Talk on any topic you wish. I will say nothing at all. Do not let my silence disturb
you. Your job is to work for points. You will receive a point each time I tap my pencil [pen]. As soon as you are
given a point, record it by making a tally mark on your sheet of paper. You are to keep track of your own points.
Do you have any questions? Please commence talking. (Hergenhahn, 1981, p. 62)

Hergenhahn suggested continuing this activity for 15 minutes, collecting data every 3 minutes on how many points
have been obtained. (You might make some signal at the end of each 3-minute period.) After 15 minutes, the
experimenters should engage in extinction (no pen or pencil taps) for 9 minutes. After the demonstration is complete,
have the experimenters ask the participants if they know what they were doing to obtain points.
You can plot the mean number of points (opinionated statements) in each 3-minute interval, including those during
extinction. You will likely see fairly typical acquisition and extinction curves (Figure 6.6 in the Weiten & McCann text
is an example). You can also discuss confounding variables that may have entered in, such as smiles and nods from the
experimenters. See Fernald and Fernald (1999) for a related exercise involving high- and low-probability responses.
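If you collect the tallies from the class, the interval means are easy to compute. The participant counts below are invented placeholders; only the averaging logic matters:

```python
# Points (opinionated statements reinforced) per 3-minute interval,
# one row per experimenter-participant pair. Numbers are invented
# for illustration only.
acquisition = [
    [2, 4, 6, 7, 9],   # participant 1
    [1, 3, 5, 6, 8],   # participant 2
    [3, 3, 7, 8, 8],   # participant 3
]

def interval_means(tallies):
    """Mean points per 3-minute interval, averaged across participants."""
    return [sum(col) / len(col) for col in zip(*tallies)]

print(interval_means(acquisition))  # a rising curve suggests acquisition
```

Plotting these means against interval number (and appending the extinction intervals) produces the acquisition and extinction curves described above.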

Fernald, P. S., & Fernald, L. D. (1999). Shaping behavior through operant conditioning. In L. T. Benjamin, Jr., B. F. Nodine, R. M.
Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 176–180). Washington, DC:
American Psychological Association.
Hergenhahn, B. R. (1981). Reinforcing statements of opinion. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook
for the teaching of psychology (pp. 62–63). Washington, DC: American Psychological Association.
Verplanck, W. S. (1955). The control of the content of conversation: Reinforcement of statements of opinion. Journal of
Abnormal and Social Psychology, 51, 668–676.

DEMONSTRATION/ACTIVITY: REHEARSING OPERANT CONCEPTS


Smith (1990) proposed a class activity that allows students to apply the concepts they have learned about operant
conditioning to a real-life situation. This can be a helpful exercise to determine whether students can transfer the
information they learned about rats in Skinner boxes to a different situation.

6: LEARNING 181
Although it is not necessary to have a prop, it will make the activity more concrete if you have a candy or gum
dispenser that has some sort of lever or button to manipulate in order to dispense the treat. A plastic gum ball machine
would work fine. You also need an empty plastic or glass container. Bring your treat dispenser (filled) and the empty
container to class and put them on your desk at the front of the room. Give a volunteer a chance to operate the dispenser.
Then give your class a quiz about the concepts of operant conditioning at work in this situation (see HM 6-3).
• What is the treat called? (positive reinforcement)
• What would happen to your behaviour if the dispenser was empty? (extinction)
• What would happen if the dispenser was refilled? (spontaneous recovery)
• Why was the student able to operate this dispenser despite never having operated it previously? (generalization)
• Why did the student choose to work the dispenser rather than open the empty container? (discrimination)
• What would happen if you operated the dispenser but did not receive the treat until an hour later? (poor
learning)
• What principle is at work in this situation? (delayed reinforcement)
• What schedule of reinforcement is at work here? (continuous reinforcement)
• What type of reinforcement does the treat represent? (primary)
• Why does the coin that you usually need to operate the dispenser have value to you? (conditioned
reinforcement)
You can make up other questions to fit the concepts you cover in class or use different terms than the answers given
here. Although this quiz requires recall, you could make it into a multiple-choice quiz if you wish. This exercise should
reduce the number of students who can identify operant concepts only in the Skinner box situation and only through rote
memory.

Smith, J. Y. (1990). Demonstration of learning techniques. In V. P. Makosky, C. C. Sileo, L. G. Whittemore, C. P. Landry, &
M. L. Skutley (Eds.), Activities handbook for the teaching of psychology: Vol. 3 (pp. 83–84). Washington, DC: American
Psychological Association.

LECTURE/DISCUSSION TOPIC: NEGATIVE REINFORCEMENT VERSUS PUNISHMENT


Students often confuse negative reinforcement and punishment or interpret them as synonymous. To assess your
students’ understanding of these important concepts, you can administer the Negative Reinforcement Quiz provided in
HM 6-4. Tauber (1988) reported the results of giving a similar quiz to 140 introductory psychology students. On
question 1, only 16% gave a correct answer (removal of a stimulus resulting in an increase in behaviour); 37% answered
punishment. On question 2, 73% responded that negative reinforcement weakens a behaviour; on question 3, 76%
responded that people do not look forward to negative reinforcement. On questions 4 and 5, 92% said they would use
positive reinforcement, but only 66% would use negative reinforcement. Clearly, misconceptions regarding negative
reinforcement are abundant. Tauber recommended the use of a consequence matrix such as the one in TM 6-5 to help
clarify the various behavioural treatments.
Tauber (1988) suggested that you elicit examples of these treatments from the class. You may still need to provide
examples to clarify the difference between punishment and negative reinforcement. He suggests the following:
• Negative reinforcement: “You will have to stay after school until you clean your desk” (p. 153).
• Punishment: “Because you talked back, you will have to stay after school” (p. 153).
The difference lies in the contingency: in the first statement, an aversive condition (staying after school) is removed as soon as the desired behaviour (cleaning the desk) occurs, which strengthens that behaviour; in the second, the aversive consequence simply follows an undesired behaviour (talking back) in order to weaken it.
Flora and Pavlik (1990) objected to the use of subjective terminology, such as desired, in Tauber’s matrix. They
presented a matrix that they think is free from subjectivity and that defines the terms with respect to the functions that
bring them about. Their matrix appears in TM 6-6.



This information should help to eliminate the problem of students confusing negative reinforcement and
punishment. As Tauber’s (1988) data (and probably your personal experience) demonstrate, this is a major point of
misunderstanding in the Introductory Psychology class.

Flora, S. R., & Pavlik, W. B. (1990). An objective and functional matrix for introducing concepts of reinforcement and punishment.
Teaching of Psychology, 17, 121–122.
Tauber, R. T. (1988). Overcoming misunderstanding about the concept of negative reinforcement. Teaching of Psychology, 15, 152–
153.

LECTURE/DISCUSSION TOPIC: WHY DOESN’T PUNISHMENT WORK?


Weiten & McCann cover punishment quite well, including the negative side effects of punishment and ways to make
punishment more effective. However, the text does not fully explain why punishment is sometimes ineffective.
Punishment may appear to be effective because of an illusion due to a statistical phenomenon: regression toward the
mean. We know that when we measure any characteristic, some people will obtain extreme scores. However, if we
measure the characteristic a second time, the new scores will likely fall closer to the population mean. For example, if a star basketball
player has a child, the child is likely to be taller than average but not as tall as the same-sex parent. If a jockey has a
child, that child will probably be shorter than average but not as short as the same-sex parent.
How does regression toward the mean relate to punishment? Suppose Joey hits and kicks his little sister and is
punished severely for his behaviour. The next day, on the average, Joey’s behaviour will be better simply because he
was so bad the day before that there is room for little else but improvement. His parents may conclude that he is acting
better because of the punishment; however, regression toward the mean is actually at work. The fallacy of the parents’
reasoning can be seen in their use of positive reinforcement. Because Joey is a perfect angel the next day, they praise and
reward him lavishly. However, on the next day he again is mean to his little sister. The parents despair and question why
their use of positive reinforcement is not working. After all, they learned about this wonderful principle in introductory
psychology several years ago! Unfortunately, they did not learn about regression toward the mean. Because Joey was so
good, the odds favour him being worse in the future. Of course, Joey could have learned to be nice to his sister, but he is
still likely to regress from time to time. Thus, because of regression toward the mean, punishment may appear to be effective
and reinforcement ineffective—essentially the reverse of typical events.
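The illusion is easy to demonstrate with a short simulation. By construction, the "misbehaviour" scores below are independent draws from the same distribution, so punishment can have no real effect; the apparent next-day improvement is pure regression toward the mean (all numbers and variable names are illustrative assumptions):

```python
import random

random.seed(42)  # reproducible illustration
days = [random.gauss(0, 1) for _ in range(100_000)]  # daily misbehaviour scores

# "Punish" after unusually bad days (score > 1.5) and look at the next day.
pairs = list(zip(days, days[1:]))
bad = [(today, nxt) for today, nxt in pairs if today > 1.5]

mean_bad_day = sum(t for t, _ in bad) / len(bad)
mean_next_day = sum(n for _, n in bad) / len(bad)

print(f"misbehaviour on punished days: {mean_bad_day:.2f}")   # well above average
print(f"misbehaviour the day after:    {mean_next_day:.2f}")  # back near the mean
```

Despite the dramatic drop from one day to the next, nothing in the simulation responds to the punishment; extreme days are simply followed, on average, by more typical ones.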
There is also an aspect of punishment itself that may render it ineffective. Most forms of punishment (other than
ignoring the offending party) force the punisher to pay some form of attention to the person being punished. It is quite
possible that this attention itself is a reinforcer under certain conditions. Take the child who is disruptive in class. When
the teacher stops teaching to correct the child, the child’s objective of gaining attention has just been met, and
reinforcement has occurred. On the other hand, children who are good in class are often ignored by the teacher, which
may be perceived as a punishing stimulus. Teachers are now being taught to ignore disruptive children and to pay
attention to children who are behaving well in class, in order to set up the correct reinforcement and punishment
contingencies. For example, O’Leary and colleagues (1970) found that loud reprimands serve as positive reinforcers for
schoolchildren exhibiting disruptive behaviour, whereas soft reprimands are perceived as punishment. To give another
example, a child who feels a lack of attention at home may throw a tantrum to draw the parents’ attention, even if that
attention is in the form of a spanking. Any attention may be reinforcing if it is the only attention the child receives.
Thus, punishment may be reinforcing and have the opposite effect of what was intended.

O’Leary, K. D., Kaufman, K. F., Kass, R. E., & Drabman, R. S. (1970). The effects of loud and soft reprimands on the behavior of
disruptive students. Exceptional Children, 37, 145–155.

LECTURE/DISCUSSION TOPIC: BEHAVIOUR MODIFICATION
The text discusses self-applied behaviour modification in the Chapter 6 Personal Application. Behaviour modification,
frequently known as contingency management, is most often used in therapy “to increase the frequency of appropriate
behaviours and to eliminate or reduce inappropriate responses” (Klein, 2002, p. 114). Klein outlined three steps to
implementing an effective contingency management program:
1. Assessment: The frequency of appropriate and inappropriate behaviours is ascertained, as well as the situations
in which each occurs; reinforcers for the behaviours are determined.
2. Contingency contracting: The relationship between responding and reinforcement is specified, as well as the
method of reinforcement for the appropriate behaviours.
3. Implementation: The treatment is implemented, and changes in behaviour are measured during and after
treatment.
The assessment phase is also referred to as a baseline measure. It is vital to collect information about the entire
situation and not just the behaviour involved. This information could be vital in deciding the nature of reinforcement to
be used during the treatment. Also, clues about the situations in which the behaviour occurs could become apparent.
During the contingency contracting phase, the information gathered from the assessment phase is used to design the
treatment program. Questions concerning the schedule of reinforcement, the need for shaping, and the identity of those
who will administer the reinforcement must be answered. With adults who are motivated to break a particular behaviour
pattern, self-reinforcement may be used. Klein (2002) cited studies in which self-reinforcement has been used
successfully with such problem behaviours as impulsive overspending, depression, inadequate study habits, and
overeating.
It is necessary to measure the behaviour both during and after treatment, to ensure that the treatment actually has the
desired effect and that the effect is lasting. This lasting effect is sometimes difficult to achieve. Many behaviour
modification programs employ a token economy. Appropriate behaviours result in secondary reinforcement through
tokens, which can be exchanged later for desired objects or privileges. A major problem with such systems is that the
desired behaviours become linked to the tokens. When the tokens disappear, so may the desired behaviours. Therefore,
the conditions that will maximize the transfer of the behaviour contingencies to the real world must be defined. Stahl and
Leitenberg (1976) gave several examples of attempts to maximize this transfer with mental hospital patients. For
example, praise has been conditioned with tokens so that praise alone will suffice later, characteristics of the hospital
have been designed to simulate conditions outside the hospital, and family members have been trained to observe
patients’ behaviour and administer rewards and punishments as appropriate.
This information can be used to supplement Weiten & McCann’s discussion of behaviour modification. Some of the
principles could be used if students decide to engage in self-modification programs. Such behaviour modification has
been used quite successfully in a wide variety of situations.

Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Stahl, J. R., & Leitenberg, H. (1976). Behavioral treatment of the chronic hospital patient. In H. Leitenberg (Ed.), Handbook of
behavior modification and behavior therapy (pp. 211–241). Englewood Cliffs, NJ: Prentice-Hall.

DEMONSTRATION/ACTIVITY: PLANNING AND EVALUATING BEHAVIOUR MODIFICATION STRATEGIES


Ulman (1980) provided a classroom exercise that brings behaviour modification to life. This activity involves students in
group planning, team competition, simulation, and evaluation—all of which engage students and promote learning. It is
necessary to cover the topic of behaviour modification in class before using this exercise.
In the first phase, students are divided into groups of four to six, each group selects a leader (Ulman used random
selection), and the group leader appoints a recording secretary. Each group is instructed to choose an applied setting
(such as a classroom) and to describe a problem situation. They must include in their written description:
(a) the setting (time, place, and activity);
(b) the relevant characteristics of the target person(s) exhibiting the problem; and
(c) the problem behaviour itself, which could be academic or social or both. (Ulman, 1980, p. 182)



The groups are told to be as descriptive and realistic as possible in devising their problem.
In the second phase, groups randomly exchange problem statements. Each group now must devise an intervention
plan for the problem statement it received. Tell the groups that their plan must include:
(a) a precise definition of the problem behaviour(s);
(b) a description of the behavioural measurement system, including procedures for assessing inter-observer reliability;
(c) an exact description of the procedures for modifying the behaviour(s), not just the naming of a behaviour
modification technique such as “time out”;
(d) specification of an appropriate behaviour analysis research design;
(e) a description of provisions for maintaining the desired behavioural change; and
(f) a statement of and ethical justification for the expected outcome. (Ulman, 1980, p. 182)
In the third phase, each group presents its problem statement and proposed solution to the class. The remaining
groups evaluate the plan on a 5-point scale (ranging from 5 points for “excellent plan—sure to work” to 1 point for
“poor plan—probably will not work”). Ulman had the groups evaluate each plan on technical adequacy, practicality, and
appropriateness to the problem stated and forced each group to come to a consensus on the rating for each plan. The
rating groups state their evaluation and their justification for the rating, after which the presenting group is allowed to
defend its plan. The rating group is then given the opportunity to revise its rating. After all groups have presented their
plans and have been rated, you can compute average ratings and select the winning team and plan.
This exercise requires students to assume the role of a behavioural psychologist and to engage in typical
interactions. This type of active learning should also facilitate a greater understanding of the principles of behaviour
modification.
According to Ulman, this exercise requires about 2 hours to complete with five groups. You could streamline the
process by assigning the task and having groups work together outside of class on both developing their plan and
evaluating the other plans. If you offer a lab with your introductory psychology class, this could be a good activity for
the lab period.

Ulman, J. D. (1980). Synthesizing the elements of behavior modification: A classroom simulation game. Teaching of Psychology,
7, 182–183.

DEMONSTRATION/ACTIVITY: APPLYING SELF-MODIFICATION STRATEGIES


In the Chapter 6 Personal Application, the text goes into great detail about applying behaviour modification principles to
oneself to achieve self-control. The vast majority of your students will simply read over this material in case it is covered
on the exam and will otherwise ignore it. A technique has been suggested for getting students to engage in a
self-modification project (Anonymous, 1981).
Explain the nature of the project to your students (you may have covered the Chapter 6 Personal Application
already, or you may simply leave it to the students to read on their own). Ask them to choose a behaviour that they
would like to increase or decrease. Some examples of behaviours to be decreased might be saying “OK” or “you know”
excessively, fingernail biting, or watching too much television. Possible behaviours to increase might be exercising,
giving compliments, or eating healthy foods. The target behaviour could be measured in terms of either frequency or duration.
Tell students that they should follow the steps shown in Weiten & McCann’s Figure 6.22 during their project. You
will probably want them to keep a chart similar to Figure 6.23 for the duration of the project. The chart could be turned
in along with a written report of the project. For more information on self-modification projects, see the relevant section
in Watson and Tharp (2002, pp. 12–23).
A word of caution is appropriate here. Worthington (1977) found that, although 62% of his students claimed a
successful behaviour modification project in their write-up, only 6% reported actual success when given an anonymous
course questionnaire. These data seem to imply that you should implement whatever steps you can to convince students
to report their progress truthfully.

This project might be somewhat ambitious for a simple demonstration. However, if you require a term paper, this
self-modification project could serve as an alternative requirement. This project would help meet the goal of the many
students who expect and desire to learn some self-help strategies in introductory psychology.

Anonymous. (1981). Recording and self-modification. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook for the
teaching of psychology (pp. 64–65). Washington, DC: American Psychological Association.
Watson, D. L., & Tharp, R. G. (2002). Self-directed behavior: Self-modification for personal adjustment (8th ed.). Belmont, CA:
Wadsworth.
Worthington, E. L., Jr. (1977). Honesty and success in self-modification projects for a college class. Teaching of Psychology, 4, 78–
82.

REFERENCES FOR ADDITIONAL DEMONSTRATIONS/ACTIVITIES


• From Teaching of Psychology:
Constraints on learning: A useful undergraduate experiment, by E. D. Kemble & K. M. Phillips (1980), 7, 246–247
Teaching the principles of operant conditioning through laboratory experience: The rat olympics, by P. R. Solomon & D. L. Morse
(1981), 8, 111–112
Demonstration experiments in learned taste aversions, by J. W. Kling (1981), 8, 166–169
On the nature of stimulus and response, by L. A. Olsen (1981), 8, 177–178
Making classical conditioning understandable through a demonstration technique, by G. D. Gibb (1983), 10, 112–113
Classical salivary conditioning: An easy demonstration, by D. Cogan & R. Cogan (1984), 11, 170–171
Conditioning the instructor’s behavior: A class project in psychology of learning, by J. C. Chrisler (1988), 15, 135–137
From acceptance to rejection: Food contamination in the classroom, by D. W. Rajecki (1989), 16, 16–18
Sidney slug: A computer simulation for teaching shaping without an animal colony, by L. E. Acker, B. C. Goldwater, & J. L. Agnew
(1990), 17, 130–132
A classical conditioning laboratory for the psychology of learning course, by G. B. Nallan & D. Mark Bentley (1990), 17, 249–251
Demonstrating differential reinforcement by shaping classroom participation, by G. K. Hodge & N. H. Nelson (1991), 18, 239–241
Preparing for an important event: Demonstrating the modern view of classical conditioning, by A. Kohn & J. W. Kalat (1992), 19,
100–102
An inexpensive habituation and sensitization learning laboratory exercise using planarians, by M. J. Owren & D. L. Scheuneman
(1993), 20, 226–228
Classical-conditioning demonstrations for elementary and advanced courses, by C. I. Abramson, T. Onstott, S. Edwards, & K. Bowe
(1996), 23, 26–30
A computer tutorial on consequences in operant learning, by R. B. Graham (1997), 24, 216–217
Pavlov in the classroom: An interview with Robert A. Rescorla, by James E. Freeman (1997), 24, 283–286
Teaching operant conditioning at the zoo, by K. E. Lukas, M. J. Marr, & T. L. Maple (1998), 25, 112–116
A classroom demonstration of taste-aversion learning, by M. R. Best & W. R. Batsell, Jr. (1998), 25, 116–118
Updating coverage of operant conditioning in introductory psychology, by W. Buskist, E. Miller, C. Ecott, & T. S. Critchfield
(1999), 26, 280–283
A method for illustrating the continuity of behavior during schedules of reinforcement, by F. J. Silva, R. Yuille, & L. K. Peters
(2000), 27, 145–148
Operant conditioning concepts in introductory psychology textbooks and their companion Web sites, by J. P. Sheldon (2002), 29,
281–285
Peak shift phenomenon: A teaching activity for basic learning theory, by K. D. Keith (2002), 29, 298–300

• From Activities Handbook for the Teaching of Psychology, by L. T. Benjamin, Jr., & K. D. Lowman (Eds.), 1981,
Washington, DC: American Psychological Association:
Operant conditioning: Role in human behavior, by E. Stork, p. 57
Operant conditioning demonstration, by P. Keith-Spiegel, pp. 58–59
Knowledge of results, by L. Snellgrove, p. 66
Learning curves, by D. Holmer, pp. 71–72

• From Activities Handbook for the Teaching of Psychology: Vol. 2, by V. P. Makosky, L. G. Whittemore, & A. M.
Rogers (Eds.), 1987, Washington, DC: American Psychological Association:
Backwards alphabet, by L. M. Sheldahl, pp. 63–64
Human operant conditioning, by J. K. Bare, pp. 67–68



• From Activities Handbook for the Teaching of Psychology: Vol. 3, by V. P. Makosky, C. C. Sileo, L. G. Whittemore,
C. P. Landry, & M. L. Skutley (Eds.), 1990, Washington, DC: American Psychological Association:
Teaching the distinction between negative reinforcement and punishment, by R. T. Tauber, pp. 99–102
A demonstration of context-dependent latent inhibition in operant conditioning, by M. A. Sletten & E. D. Kemble, pp. 103–105
The use of goldfish in operant conditioning, by J. R. Corey, pp. 106–108

• From Activities Handbook for the Teaching of Psychology: Vol. 4, by L. T. Benjamin, Jr., B. F. Nodine, R. M. Ernst,
& C. Blair-Broeker (Eds.), 1999, Washington, DC: American Psychological Association:
Shaping behavior through operant conditioning, by P. S. Fernald & L. D. Fernald, pp. 176–180
Using psychological perspectives to change habits, by R. McEntarffer, pp. 181–182
Applying the principles of learning and memory to students’ lives, by A. J. Weseley, pp. 183–185
Aggression on television, by M. A. Lloyd, pp. 346–349

SUGGESTED READINGS FOR CHAPTER 6


Atkinson, R. C., Herrnstein, R. J., Lindzey, G., & Luce, R. D. (Eds.). (1988). Stevens’ handbook of experimental psychology: Vol. 2.
Learning and cognition. New York: Wiley. Technical handbook that contains four very detailed chapters on learning research.
Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall. A wide-ranging book that
synthesizes decades of research by the architect of social learning theory.
Bower, G. H., & Hilgard, E. R. (1981). Theories of learning. Englewood Cliffs, NJ: Prentice-Hall. The definitive classic on theories of
learning.
Catania, A. C., & Harnad, S. (Eds.). (1988). The selection of behavior: The operant behaviorism of B. F. Skinner: Comments and
consequences. New York: Cambridge University Press. An adaptation of a special issue of Behavioral and Brain Sciences, which
contained several seminal papers by Skinner and comments on his ideas by a host of the leading theorists in the area.
Domjan, M. (2003). The principles of learning and behavior (5th ed.). Belmont, CA: Wadsworth. A superb undergraduate text on
learning, which is accurate and well written, with interesting discussions of applications.
Klein, S. B., & Mowrer, R. R. (Eds.). (1989). Contemporary learning theories (Vols. 1–2). Hillsdale, NJ: Erlbaum. Two volumes on
current developments in learning theory, with excellent updates on such theories as preparedness, learned helplessness, and the
two-factor theory of avoidance.
Pryor, K. (1999). Don’t shoot the dog! The new art of teaching and training (2nd ed.). New York: Bantam Books. A clever and engaging
presentation of behavior modification principles.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA: Wadsworth. A
concise and accessible overview of learning research that includes extensive coverage of verbal learning and memory.
Schwartz, B., & Robbins, S. (1995). Psychology of learning and behavior. New York: Norton. Another outstanding scholarly
undergraduate text on learning.
Watson, D. L., & Tharp, R. G. (2002). Self-directed behavior: Self-modification for personal adjustment (8th ed.). Belmont, CA:
Wadsworth. Clearly the best available undergraduate text on how to apply the principles of learning and conditioning to oneself.

COPYRIGHT © 2013 by Nelson Education Ltd.

HANDOUT MASTER 6–1:


DO THESE REPRESENT LEARNING?

____ 1. An infant stops sucking its thumb.
____ 2. A child acquires language.
____ 3. A computer program generates random opening moves for its first 100 chess games and tabulates the
outcomes of those games. Starting with the 101st game, the computer uses those tabulations to influence its
choice of opening moves.
____ 4. A worm is placed in a T maze. The left arm of the maze is brightly lit and dry; the right arm is dim and
moist. On the first 10 trials, the worm turns right 7 times. On the next 10 trials, the worm turns right all 10
times.
____ 5. Ethel stays up late the night before the October GRE administration and consumes large quantities of licit
and illicit pharmacological agents. Her combined (verbal plus quantitative) score is 410. The night before
the December GRE administration, she goes to bed early after a wholesome dinner and a glass of milk. Her
score increases to 1210. Is the change in scores due to learning? Is the change in pretest regimen due to
learning?
____ 6. A previously psychotic patient is given Dr. K’s patented phrenological surgery and no longer exhibits any
psychotic behaviours.
____ 7. A lanky zinnia plant is pinched back and begins to grow denser foliage and flowers.
____ 8. MYCIN is a computer program that does a rather good job of diagnosing human infections by consulting a
large database of rules it has been given. If we add another rule to the database, has MYCIN learned
something?
____ 9. After pondering a difficult puzzle for hours, Jane finally figures it out. From that point on, she can solve all
similar problems in the time it takes her to read them.
____ 10. After 30 years of smoking two packs a day, Zeb throws away his cigarettes and never smokes again.

Adapted from “Defining Learning: Two Classroom Activities,” by T. Rocklin, 1987, Teaching of Psychology, 14, p. 228. Copyright
1987 by Lawrence Erlbaum Associates, Inc. Adapted by permission.
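Instructors who want to make item 3 concrete can show what such a tally-based program might look like. The sketch below is hypothetical: the opening moves, win probabilities, and the stand-in play_game function are invented purely for illustration, not drawn from any real chess engine.

```python
import random

OPENINGS = ["e4", "d4", "c4", "Nf3"]

def play_game(opening):
    # Stand-in for an actual chess game; returns True on a win.
    # The per-opening win probabilities here are arbitrary.
    return random.random() < {"e4": 0.6, "d4": 0.5, "c4": 0.4, "Nf3": 0.45}[opening]

wins = {o: 1 for o in OPENINGS}    # start tallies at 1 to avoid zero weights
plays = {o: 1 for o in OPENINGS}

for game in range(1, 501):
    if game <= 100:
        # Games 1-100: choose an opening at random, as in item 3.
        choice = random.choice(OPENINGS)
    else:
        # Game 101 onward: let the tabulated win rates influence the choice.
        weights = [wins[o] / plays[o] for o in OPENINGS]
        choice = random.choices(OPENINGS, weights=weights)[0]
    plays[choice] += 1
    if play_game(choice):
        wins[choice] += 1
```

Whether this counts as learning is exactly the question the handout poses: the program's behaviour changes with experience, yet nothing resembling a nervous system is involved.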

188 ENRICHED INSTRUCTOR’S MANUAL



HANDOUT MASTER 6–2:


REHEARSING CLASSICAL CONDITIONING CONCEPTS

For each of the following scenarios, identify the unconditioned stimulus, conditioned stimulus, unconditioned response,
and conditioned response.

Suzy goes outside to play in her tree house. A swarm of bees has nested near her tree house, and she gets stung when she
climbs up to the tree house. This happens three times in a week. Suzy becomes afraid to go near the tree and cries
violently when her dad tries to get her to climb up to the tree house.

Jerry’s wife, Mary, gets a new nightgown and wears it whenever she is in the mood for sexual relations. After a month,
the sight of the nightgown alone is enough to excite Jerry.

A couple goes to a movie on their first date and they have a wonderful time, eventually getting married. Whenever they
see this movie on the late night show, they get a tender feeling and think about each other.

A student survives a plane crash that occurred because of a thunderstorm. Now, whenever the student hears thunder, he
gets anxious.


ANSWERS FOR HANDOUT MASTER 6–2:


REHEARSING CLASSICAL CONDITIONING CONCEPTS

For each of the following scenarios, identify the unconditioned stimulus, conditioned stimulus, unconditioned response,
and conditioned response.

Suzy goes outside to play in her tree house. A swarm of bees has nested near her tree house, and she gets stung when she
climbs up to the tree house. This happens three times in a week. Suzy becomes afraid to go near the tree and cries
violently when her dad tries to get her to climb up to the tree house.

US bee sting, which causes pain
CS tree house
UR pain and fear elicited by the sting
CR fear and avoidance of the tree house

Jerry’s wife, Mary, gets a new nightgown and wears it whenever she is in the mood for sexual relations. After a month,
the sight of the nightgown alone is enough to excite Jerry.

US wife receptive to sexual relations
CS nightgown
UR sexual response
CR sexual response at the sight of the nightgown

A couple goes to a movie on their first date and they have a wonderful time, eventually getting married. Whenever they
see this movie on the late night show, they get a tender feeling and think about each other.

US wonderful first date with the future spouse
CS movie
UR tender feelings of love
CR tender feeling, thinking of spouse

A student survives a plane crash that occurred because of a thunderstorm. Now, whenever the student hears thunder, he
gets anxious.

US plane crash (caused by storm)
CS thunder
UR fear
CR anxiety


HANDOUT MASTER 6–3:


REHEARSING OPERANT CONDITIONING CONCEPTS

Your instructor may use some props for this activity. If not, imagine that there is a full gum ball machine and a large
glass jar with a lid on the instructor’s desk. Answer the following questions about these props using your knowledge of
operant conditioning concepts.

What is the gum ball that you receive from the machine called?

What is the reason that you would not attempt to buy a gum ball if the dispenser was empty?

What would you call your behaviour if the dispenser was refilled and you bought a gum ball?

Why would you be able to operate this dispenser despite never having operated it previously?

Why would you choose to work the dispenser rather than open the empty container?

What would happen if you operated the dispenser but did not receive the gum ball until an hour later?

What principle would explain the result of the previous situation?

What schedule of reinforcement does a full gum ball machine use?

What type of reinforcement does the gum ball represent?

Why does the coin that you usually need to operate the dispenser have value to you?

Based on “Demonstration of Learning Techniques,” by J. Y. Smith. In V. P. Makosky, C. C. Sileo, L. G. Whittemore, C. P. Landry, & M. L. Skutley (Eds.), Activities handbook for the teaching of psychology: Vol. 3, pp. 83–84, American Psychological Association, 1990.


HANDOUT MASTER 6–4: NEGATIVE REINFORCEMENT QUIZ

1. Provide a word or term that means the same thing as negative reinforcement:

2. Negative reinforcement
a. increases behaviour.
b. decreases behaviour.
c. has no effect on behaviour.

3. If you were about to receive negative reinforcement, would you look forward to it?
a. Yes
b. No

4. Would you use positive reinforcement with a child?
a. Yes
b. No

5. Would you use negative reinforcement with a child?
a. Yes
b. No

6. With regard to question 4, why or why not?

7. With regard to question 5, why or why not?

Based on “Overcoming Misunderstanding About the Concept of Negative Reinforcement,” by R. T. Tauber, 1988, Teaching of
Psychology, 15, pp. 152–153.


TRANSPARENCY MASTER 6–1:


RATS’ RESPONSES TO AVERSIVE STIMULI ASSOCIATED WITH WATER

From “Relation of Cue to Consequence in Avoidance Learning,” by J. Garcia and R. A. Koelling, 1966, Psychonomic Science, 4, pp.
123–124. Reprinted by permission.


TRANSPARENCY MASTER 6–2:


CONDITIONING AS A FUNCTION OF CONTINGENCY

From “Pavlovian Conditioning: It's Not What You Think It Is,” by R. A. Rescorla, 1988, American Psychologist, 43, 151–160.
Copyright © 1988 by the American Psychological Association. Reprinted by permission.


TRANSPARENCY MASTER 6–3:


KAMIN'S BLOCKING PARADIGM

From Learning and Memory, 2nd ed. (p. 45), by J. E. Purdy, M. R. Markham, B. L. Schwartz, & W. C. Gordon, 2001, Belmont, CA:
Wadsworth.


TRANSPARENCY MASTER 6–4:


GENERALIZATION GRADIENTS IN CLASSICAL CONDITIONING

From Learning and Memory, 2nd ed. (p. 199), by J. E. Purdy, M. R. Markham, B. L. Schwartz, & W. C. Gordon, 2001, Belmont,
CA: Wadsworth.


TRANSPARENCY MASTER 6–5:


CONSEQUENCE MATRIX

                        CONSEQUENCE

                Applied                  Removed

Desired         Positive reinforcement   Time-out

Dreaded         Punishment               Negative reinforcement

Based on “Overcoming Misunderstanding About the Concept of Negative Reinforcement,” by R. T. Tauber, 1988, Teaching of
Psychology, 15, 152–153. Mahwah, NJ: Erlbaum.


TRANSPARENCY MASTER 6–6:


BASIC OPERANT CONCEPTS

                         Behaviour probability or rate

Action                   Increases                Decreases

Present stimulus         Positive reinforcement   Positive punishment

Remove stimulus          Negative reinforcement   Negative punishment
                                                  (omission, time-out)

Adapted from “An Objective and Functional Matrix for Introducing Concepts of Reinforcement and Punishment,” by S. R. Flora and
W. B. Pavlik, 1990, Teaching of Psychology, 17, p. 122. Mahwah, NJ: Erlbaum.
