Chapter Six:
Learning
Chapter Outline
6: LEARNING 165
Lecture/Discussion Topic: Behaviour Modification ......................................................................................183
Demonstration/Activity: Planning and Evaluating Behaviour Modification Strategies .................................183
Demonstration/Activity: Applying Self-Modification Strategies ...................................................................184
References for Additional Demonstrations/Activities ....................................................................................185
Suggested Readings for Chapter 6 .................................................................................................................186
Handout Masters (HM) ..................................................................................................................................187
Transparency Masters (TM) ...........................................................................................................................192
This chapter provides a good foundation for further study in various areas of psychology,
including developmental psychology, neuroscience, clinical psychology, and sports
psychology.
Students will learn the methods necessary to shape behaviour, information they can use
in many situations throughout their lives: they can learn to motivate themselves and
succeed at school, in their careers, and with their own health goals.
Learning theory can provide useful tools for understanding how and when to reward or
punish a child’s behaviour. Although a child and a pet are not equivalent, the same
methods can be used to shape a pet’s behaviour.
BARRIERS TO LEARNING: WHAT ARE COMMON STUDENT MISCONCEPTIONS AND STUMBLING
BLOCKS?
Most students’ prior experience with learning theories will relate to Pavlov and
classical conditioning. Students often struggle to understand the similarities and
differences between classical and operant conditioning.
Many students have difficulty accepting that animal research is important and
valuable. Students are interested in issues related to the ethical treatment of animals.
It is sometimes difficult for students to accept that we are similar to animals and that
the research we perform on animals can be generalized to humans.
Module 5e (Reinforcement and Punishment) deals with positive reinforcement, negative reinforcement, and
punishment. The module clearly differentiates negative reinforcement from punishment, so it may be helpful to your
students. You probably make the same points in lecture, but repeated practice on this difficult discrimination may help.
Module 5f (Avoidance and Escape Learning) presents escape learning, avoidance learning, and the two-process
theory of avoidance. The module shows how avoidance learning is built on previous escape learning and provides a nice
computer sequence of avoidance learning. Mowrer’s two-process theory of avoidance may not be a concept you cover,
so be prepared for questions from your students concerning it.
Simulation 04 (Shaping in Operant Conditioning) gives students a chance to shape Morphy the rat to press a bar to
criterion (15 presses). Morphy emits a variety of behaviours in a variety of locations in a Skinner box. Students must be
alert to dispense reinforcement quickly before Morphy moves or emits a different behaviour. Students may learn best
from this simulation if they complete it more than once. Students will probably enjoy trying to lower the time that it
takes them to train Morphy to criterion.
Hilgard, E. R., & Bower, G. H. (1975). Theories of learning (4th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Rocklin, T. (1987). Defining learning: Two classroom activities. Teaching of Psychology, 14, 228–229.
DEMONSTRATION/ACTIVITY:
Smith, R. A. (1987). Jaws: Demonstrating classical conditioning. In V. P. Makosky, L. G. Whittemore, & A. M. Rogers (Eds.),
Activities handbook for the teaching of psychology: Vol. 2 (pp. 65–66). Washington, DC: American Psychological Association.
Vernoy, M. W. (1987). Demonstrating classical conditioning in introductory psychology: Needles do not always make balloons pop!
Teaching of Psychology, 14, 176–177.
DEMONSTRATION/ACTIVITY:
Sparrow and Fernald (1989) developed a technique for classically conditioning students during a class session and for
demonstrating other related concepts such as generalization, discrimination, and spontaneous recovery. They built a
conditioner that consisted of a light with a dimmer switch, a siren, and a buzzer (details are included in their article).
However, a light with a dimmer switch and a compressed-air horn should suffice to provide much the same
demonstration.
Sparrow and Fernald took four steps to classically condition the class:
1. Illuminate the light at a middle range on the dimmer switch. The light serves as the originally neutral stimulus,
and students should show essentially no response to it.
2. Sound the siren or horn several times. This stimulus should be sufficiently loud to evoke a startle response in
the students. Thus the sound serves as the UCS to elicit the UCR (startle response).
3. Pair the light with the sound about 10 times (CS paired with UCS).
4. Demonstrate a CR by presenting the light alone several times. Students should show a small startle response if
conditioned.
Sparrow and Fernald advised diagramming these four steps for the class after the demonstration, using classical
conditioning terms and actual stimuli, to ensure that students make the connection between the demonstration and the
conditioning process.
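To give students a quantitative feel for how the conditioned response strengthens across the pairings in step 3, the trial-by-trial growth of associative strength can be simulated with a simple Rescorla–Wagner-style error-correction rule. This sketch is our addition, not part of Sparrow and Fernald’s procedure, and the learning rate and asymptote are arbitrary illustrative values:

```python
def conditioning_trials(n_trials, rate=0.3, asymptote=1.0):
    """Associative strength after each of n_trials CS-UCS pairings.

    Uses the error-correction update V = V + rate * (asymptote - V);
    `rate` folds the usual alpha and beta parameters into one value.
    """
    v = 0.0  # the light starts as a neutral stimulus: zero associative strength
    history = []
    for _ in range(n_trials):
        v += rate * (asymptote - v)  # each pairing closes part of the remaining gap
        history.append(round(v, 3))
    return history

print(conditioning_trials(10))
# Growth is negatively accelerated: large gains on early pairings, small gains late.
```

Plotting these values for the class produces the classic negatively accelerated acquisition curve.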
If students are simply asked whether a startle response occurred, demand characteristics may influence their reports.
Sparrow and Fernald suggested two ways to circumvent this potential problem:
• Have some students serve as confederates and observe the other students to note their responses as they are
being conditioned. Of course, it is still possible that the observers could be influenced by expectancies.
• The startle response of a student (or perhaps several) could be monitored with a galvanic skin response (GSR)
meter. Gibb (1983) advocated the use of a GSR meter with a clear back and front for classical conditioning
demonstrations in class. You can place the clear meter on an overhead projector so that the class can see the
GSR readings.
After conditioning has taken place, generalization can be demonstrated by varying the intensity of the light with
the dimmer switch. Students will likely continue to show a startle response to the altered stimulus. These
generalization trials will tend to weaken the CR somewhat. Discrimination can be demonstrated by presenting the
original CS and a brighter and dimmer light about 10 times each, pairing the sound only with the original CS.
Announce that there will be six more light presentations, and ask students to carefully monitor their reaction to each
one. Randomly present each of the three intensity lights twice. If students have learned to discriminate, a startle
response should occur only to the original CS. Extinction is shown by presenting the light alone 10 to 15 times. The
startle response should die out. After several minutes have passed, presentation of the light will result in some
startle, thus demonstrating spontaneous recovery. Sparrow and Fernald used the buzzer to demonstrate the notion of
higher-order conditioning, although they were not able to create higher-order conditioning in their students.
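If you run the six-trial discrimination test described above, it helps to script the random presentation order in advance rather than improvising it during class. A minimal sketch (the intensity labels are placeholders):

```python
import random

def discrimination_test_order(intensities=("dim", "original", "bright"),
                              reps=2, seed=None):
    """Randomized presentation order: each light intensity appears `reps` times."""
    rng = random.Random(seed)          # seed is optional, for reproducible prep
    trials = list(intensities) * reps  # two presentations of each intensity
    rng.shuffle(trials)
    return trials

print(discrimination_test_order(seed=42))  # one shuffled six-trial sequence
```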
Sparrow and Fernald recommended this procedure because it recreates Pavlov’s procedures fairly accurately and
avoids some problems inherent in other descriptions of classical conditioning exercises. For a similar activity using a
water gun, see Shenker (1999).
Gibb, G. D. (1983). Making classical conditioning understandable through a demonstration technique. Teaching of Psychology, 10,
112–113.
Shenker, J. I. (1999). Classical conditioning: An all-purpose demonstration using a toy watergun. In L. T. Benjamin, Jr., B. F.
Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 163–165).
Washington, DC: American Psychological Association.
Sparrow, J., & Fernald, P. (1989). Teaching and demonstrating classical conditioning. Teaching of Psychology, 16, 204–206.
DEMONSTRATION/ACTIVITY:
Another difficulty in conceptualizing inhibitory conditioning is that it is hard to distinguish inhibitory
conditioning from no learning at all. For example, if an organism does not respond to a given stimulus, it is not
clear whether inhibitory conditioning is being displayed or whether the organism has never learned anything about the
stimulus. Purdy, Markham, Schwartz, and Gordon (2001) noted that there are two methods for determining whether or
not inhibitory conditioning has taken place: the retardation and summation tests.
In the retardation test, a CS is presented in the absence of an expected UCS. Thus, the CS should come to produce
the withholding or suppression of a response. After the inhibitory conditioning, the situation is reversed. The CS is used
in an excitatory conditioning paradigm; it is used as a signal for the occurrence of the UCS. If previous inhibitory
conditioning has taken place, the CS will be difficult to condition in an excitatory fashion, more difficult than a neutral
stimulus. Thus, the inhibitory conditioning puts this CS at a disadvantage for excitatory conditioning rather than merely
making it the zero point. In the laboratory, an animal might learn that a bell signals the presentation of a shock and a
light signals the absence of shock. At some later point, the light is then paired with shock. The learning of this
association will be slower than pairing the shock with a new neutral stimulus. Imagine trying to learn that a police car
driving by is now a signal to speed up rather than a signal to slow down. After years of experience, slowing down at the
sight of police cars would be a difficult response to overcome.
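The logic of the retardation test can be made concrete with numbers. Under the same kind of error-correction learning rule used in Rescorla–Wagner-style models (an illustrative sketch of our own, with arbitrary parameter values), a CS that begins with negative, inhibitory associative strength needs more excitatory pairings to reach a response criterion than a neutral stimulus does:

```python
def trials_to_criterion(v_start, criterion=0.8, rate=0.3,
                        asymptote=1.0, max_trials=1000):
    """Number of CS-UCS pairings needed for associative strength to reach criterion."""
    v = v_start
    for trial in range(1, max_trials + 1):
        v += rate * (asymptote - v)  # same error-correction update on every pairing
        if v >= criterion:
            return trial
    return max_trials

neutral = trials_to_criterion(v_start=0.0)     # no prior learning about the CS
inhibitor = trials_to_criterion(v_start=-0.5)  # CS with an inhibitory history
print(neutral, inhibitor)  # prints 5 6: the former inhibitor is retarded in learning
```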
The summation test is somewhat simpler. In this case, an inhibitory CS and an excitatory CS are presented at the
same time. If the inhibitory CS actually produces inhibition, it should reduce or eliminate the response that normally
would occur to the excitatory CS. Remember the example in which a bell is used to signal shock and a light to signal the
absence of shock. Suppose you present both the bell and light at the same time. The animal will probably be confused
and show a weaker fear response to the bell than would normally occur. Imagine driving down the street and coming to
a traffic signal on which both the red and green lights are lit at the same time. What do you do? You would probably
hesitate, which would reduce or eliminate the normal response to the green light.
Although students typically have more difficulty learning about inhibitory conditioning than excitatory conditioning,
comprehending inhibitory conditioning is important to understanding how discrimination occurs. In discrimination, we
typically have to learn to withhold a response to a stimulus that is somehow different from the stimulus to which we
should respond. Thus, we must experience inhibitory conditioning to the “wrong” stimulus in order for discrimination to
occur.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.
LECTURE/DISCUSSION TOPIC: BIOLOGICAL CONSTRAINTS ON LEARNING
Two major lines of research have undermined the behaviouristic interpretation of conditioning: biological constraints
and cognitive interpretations. A major tenet of behaviourists was the notion that laws of conditioning exist. Thus,
principles of learning could be developed that would apply to all organisms for all behaviours. It did not matter to
behaviourists whether they studied animals or humans. They chose animals because they could exert greater control over
animals, house them more conveniently, turn over generations more rapidly, study simpler organisms with simpler
methods—just to name a few reasons. Regardless, behaviourists did not believe that they were studying only one
particular animal or one specific behaviour. Behaviourists believed that they were learning general principles that would
apply across the board. For example, Skinner (1938) wrote, “The general topography of operant behaviour is not
important, because most if not all specific operants are conditioned. I suggest that the dynamic properties of operant
behaviour may be studied with a single reflex” (pp. 45–46). Pavlov (1927) wrote that “it is obvious that the reflex
activity of any effector organ can be chosen for the purpose of investigation, since signaling stimuli can get linked up
with any of the inborn reflexes” (p. 17). Any information that weakens this concept of generality, therefore, strikes at
one of the key principles of behaviourism.
The Weiten & McCann text summarizes several biological limitations on learning. This information implies that an
organism is not a blank slate when it approaches a learning situation. The biology and heredity of an organism have
placed certain limitations on the organism’s potential for learning.
One limitation mentioned in the text is instinctive drift. The Brelands, who found that raccoons would not let go of
coins, found other examples of animal misbehaviour (Breland & Breland, 1961). For example, one of the Brelands’ most
famous trained animal acts was the dancing chicken: A chicken comes out of a cage, gets up on a platform, and “dances”
to music. In actuality, they had tried to train a chicken to simply stand on a platform for a short period of time. However,
they “found that over 50% developed a very strong and pronounced scratch pattern, which tended to increase in
persistence as the time interval was lengthened” (p. 682). Therefore, the Brelands developed a new act around the
instinctive behaviour of the chicken. In another instance, they were able to train a chicken to play baseball by pulling a
loop that made a bat hit a ball—as long as the baseball field was in a cage. When they removed the cage for
photographic purposes, they ran into a problem with the chickens. Even well-trained chickens “became wildly excited
when the ball started to move. They would jump up on the playing field, chase the ball all over the field, even knock it
off on the floor and chase it around, pecking it in every direction, although they had never had access to the ball before”
(p. 683). The Brelands also trained pigs to pick up large wooden coins and put them in a piggy bank. After several
weeks, the behaviour deteriorated as the pigs began to drop the coins and root them. In all of these cases, instinctive
behaviours began to interfere with learned behaviours—thus the term instinctive drift.
Taste aversions are another good example of biological influences on learning. The text provides good coverage of
this topic, particularly of some of the classic studies in this area. However, students often have difficulty understanding
these results. Garcia and Koelling (1966) paired “bright-noisy-tasty” water (a light, a click, and a flavour were presented
when rats drank) with either shock or lithium chloride (a nausea-producing drug). After the negative stimulus was
administered, the rats were given preference testing between bright-noisy water and tasty water. They obtained the
results shown in TM 6-1. Rats that experienced nausea tended to avoid the tasty water, whereas rats that had been
shocked avoided the bright-noisy water. These results illustrate the concept of belongingness: External consequences
tend to be associated with external stimuli, and internal consequences tend to be associated with internal stimuli. It
seems clear that belongingness is simply another way of describing preparedness. Organisms are clearly prepared to
associate nausea with tastes in order to form taste aversions (see “Lecture/Discussion Topic: Preparedness in Learning”).
If you are wandering in the woods without food, eat some purple berries, and get sick later, you need to learn not to eat
the purple berries in case they are poisonous. Notice, however, that organisms are not prepared to associate nausea with
locale. Taste aversions are learned, not location aversions. When you get sick from eating a McDonald’s hamburger,
aversion occurs to the burgers, not to the restaurant.
Breland, K., & Breland, M. (1961). The misbehavior of organisms. American Psychologist, 16, 681–684.
Garcia, J., & Koelling, R. A. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123–124.
Pavlov, I. (1927). Conditioned reflexes. Oxford, UK: Oxford University Press.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century-Crofts.
to 20 hours. Outside of this critical period, no imprinting seems to occur. This relationship is apparently not
unchangeable, as some evidence shows that imprinting can be extended beyond the critical period. However, that
evidence does not weaken the idea that there may be a period when learning is highly prepared, and other times when
learning is much more difficult. The idea of imprinting to the mother/caretaker for social attachment has been extended
to other species. Klein (2002) reported that the sensitive period for sheep and goats occurs at 2 to 3 hours after birth, at 3
to 6 months of age for primates, and at 6 to 12 months for humans.
Lenneberg (1967) applied the concept of sensitive periods to language acquisition, writing that language emerges
before three years of age due to “an interaction of maturation and self-programmed learning” (p. 158). He also felt that,
for children between the age of three and the early teens, “the possibility for primary language acquisition continues to
be good” (p. 158). However, after puberty, the ability to learn language is somewhat impaired. Lenneberg said that “the
brain behaves as if it had become set in its ways and primary, basic language skills not acquired by that time, except for
articulation, usually remain deficient for life” (p. 158).
Finally, Seligman (1970) used the concept of preparedness to explain some of the seemingly contradictory findings
concerning learning. He maintained that the typical laboratory study involves an organism learning an unprepared
behaviour. This type of learning requires repeated trials before learning is achieved and results in the standard negatively
accelerated learning curve. Other behaviours sometimes seem to violate these “laws” of learning generated from
unprepared behaviours. For example, taste aversions, phobias, and other such behaviours may be learned in one trial.
These do not violate the laws applied to the laboratory behaviours; they simply operate under a different set of laws.
Then there are the contraprepared behaviours. No matter how long or hard the experimenter tries, some behaviours
simply cannot be learned very well or at all. It is almost impossible to train a rat to press a bar in order to avoid shock,
for example. Thus, as mentioned above, looking for laws of learning could require three sets of laws—one each for
prepared, unprepared, and contraprepared behaviours.
Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Lenneberg, E. H. (1967). Biological foundations of language. New York: Wiley.
Seligman, M. E. P. (1970). On the generality of the laws of learning. Psychological Review, 77, 406–418.
LECTURE/DISCUSSION TOPIC:
Kamin, L. J. (1969). Predictability, surprise, attention, and conditioning. In B. A. Campbell & R. M. Church (Eds.), Punishment and
aversive behavior (pp. 279–296). New York: Appleton-Century-Crofts.
Lieberman, D. A. (2000). Learning: Behavior and cognition. Belmont, CA: Wadsworth.
Malone, J. C. (1990). Theories of learning: A historical approach. Belmont, CA: Wadsworth.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA: Wadsworth.
Rescorla, R. A. (1968). Probability of shock in the presence and absence of CS in fear conditioning. Journal of Comparative and
Physiological Psychology, 66, 1–5.
Rescorla, R. A. (1988). Pavlovian conditioning: It’s not what you think it is. American Psychologist, 43, 151–160.
DEMONSTRATION/ACTIVITY:
CLASSICAL AND INSTRUMENTAL CONDITIONING OF PLANARIA
Katz (1978) suggested using planaria to give students the chance to apply classical and instrumental conditioning
techniques. This activity will require some preparation on your part and may be used outside of class.
A conditioning chamber must be constructed, but one chamber will serve for both classical and instrumental
conditioning. The chamber consists of a petri dish (two-thirds full of water) resting in a hole in a wooden base. “The
base supports a 25-watt light which is mounted 6 inches above the petri dish, as well as two electrodes which extend into
the petri dish” (p. 91). One switch controls the light and another the electrodes (minimized wattage through a
transformer); power is supplied through a wall plug (see Katz’s article for an electrical diagram).
Classical conditioning consists of pairing the light with shock. The shock is presented at a minimal level to evoke a
turning response. The goal is to develop a turning response to the light alone.
For instrumental conditioning, a piece of cardboard or paper must be placed under the petri dish. A circle (start box)
is drawn in the middle of the paper, and half the paper is coloured black and half white. The planarian is placed in the
start box and given 10 baseline trials to determine its preference for black or for white. The task is then to train the
planarian to avoid its preferred side by shocking it each time it goes there. Katz noted that avoidance typically occurs
within 30 minutes.
Katz reported that students enjoy and learn from this activity. The potential negative reaction to using shock is
probably mitigated by the planarian: it is not cute and furry. Also, the level of shock is low. The planarian also has the
advantage of being low on the biological scale, which gives students a chance to see the broad range of applicability of
classical and instrumental concepts. For a variation on Katz’s method, see Abramson, Kirkpatrick, Bollinger, Odde, and
Lambert (1999).
Abramson, C. I., Kirkpatrick, D. E., Bollinger, N., Odde, R., & Lambert, S. (1999). Planarians in the classroom: Habituation and
instrumental conditioning. In L. T. Benjamin, Jr., B. F. Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for
the teaching of psychology: Vol. 4 (pp. 166–171). Washington, DC: American Psychological Association.
Katz, A. N. (1978). Inexpensive animal learning exercises for huge introductory laboratory classes. Teaching of Psychology, 5, 91–
93.
LECTURE/DISCUSSION TOPIC:
Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.
Watson, D. (1981). Shaping by successive approximations. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook for
the teaching of psychology (pp. 60–61). Washington, DC: American Psychological Association.
DEMONSTRATION/ACTIVITY: SHAPING A GERBIL
Many teachers would like to give students the opportunity to interact with a rat in a Skinner box but face obstacles such
as cost, access to equipment, and student dislike of rats. Plant (1980) developed a low-cost alternative that overcomes
these obstacles. He suggested making and using gerbil jars rather than Skinner boxes. A gerbil jar is made from a
gallon-size glass jar. It is necessary to drill holes in the jar with a carbide drill bit to furnish ventilation, water access,
and a food tray/magazine (raw shelled sunflower seeds work well). Plant suggested hanging a bell from the lid of the jar
and having students train a gerbil to reach up and ring the bell.
Plant noted that motivated students have trained gerbils to do back flips and carry objects in the jar. Your students’
gerbil jars could be modified to allow shaping of different behaviours. Furthermore, other concepts of operant
conditioning (e.g., acquisition curves, shaping, extinction, generalization, discrimination, schedules of reinforcement)
could also be applied to the gerbil jar task.
Plant advocated using the gerbil jar as an out-of-class activity, but it could also be started in class or dealt with in a
laboratory session. Jars should be available from your college’s food service at no cost. For instructions on building a
low-cost Skinner box, see Keith (1999). If you require students to purchase their own gerbil (individually or in small
groups), or if you can maintain a gerbil colony in your department, the cost of this activity will be minimal. Whether
students work with gerbils at home or in the department, be certain to give them some basic information about care of
the animals and the ethical guidelines appropriate to this activity (see Chapter 2 of this manual for ethical guidelines in
working with animals). The hands-on experience involved in this activity will strongly reinforce the information
presented in class.
Keith, K. D. (1999). Operant conditioning in the classroom: An inexpensive home-built Skinner box. In L. T. Benjamin, Jr., B. F.
Nodine, R. M. Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 172–175).
Washington, DC: American Psychological Association.
Plant, L. (1980). The gerbil jar: A basic home experience in operant conditioning. Teaching of Psychology, 7, 109.
Purdy, J. E., Markham, M. R., Schwartz, B. L., & Gordon, W. C. (2001). Learning and memory (2nd ed.). Belmont, CA:
Wadsworth.
I will ask you to begin talking. Talk on any topic you wish. I will say nothing at all. Do not let my silence disturb
you. Your job is to work for points. You will receive a point each time I tap my pencil [pen]. As soon as you are
given a point, record it by making a tally mark on your sheet of paper. You are to keep track of your own points.
Do you have any questions? Please commence talking. (Hergenhahn, 1981, p. 62)
Hergenhahn suggested continuing this activity for 15 minutes, collecting data every 3 minutes on how many points
have been obtained. (You might make some signal at the end of each 3-minute period.) After 15 minutes, the
experimenters should engage in extinction (no pen or pencil taps) for 9 minutes. After the demonstration is complete,
have the experimenters ask the participants if they know what they were doing to obtain points.
You can plot the mean number of points (opinionated statements) in each 3-minute interval, including those during
extinction. You will likely see fairly typical acquisition and extinction curves (Figure 6.6 in the Weiten & McCann text
is an example). You can also discuss confounding variables that may have entered in, such as smiles and nods from the
experimenters. See Fernald and Fernald (1999) for a related exercise involving high- and low-probability responses.
Fernald, P. S., & Fernald, L. D. (1999). Shaping behavior through operant conditioning. In L. T. Benjamin, Jr., B. F. Nodine, R. M.
Ernst, & C. Blair-Broeker (Eds.), Activities handbook for the teaching of psychology: Vol. 4 (pp. 176–180). Washington, DC:
American Psychological Association.
Hergenhahn, B. R. (1981). Reinforcing statements of opinion. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook
for the teaching of psychology (pp. 62–63). Washington, DC: American Psychological Association.
Verplanck, W. S. (1955). The control of the content of conversation: Reinforcement of statements of opinion. Journal of
Abnormal and Social Psychology, 51, 668–676.
Although it is not necessary to have a prop, it will make the activity more concrete if you have a candy or gum
dispenser that has some sort of lever or button to manipulate in order to dispense the treat. A plastic gum ball machine
would work fine. You also need an empty plastic or glass container. Bring your treat dispenser (filled) and the empty
container to class and put them on your desk at the front of the room. Give a volunteer a chance to operate the dispenser.
Then give your class a quiz about the concepts of operant conditioning at work in this situation (see HM 6-3).
• What is the treat called? (positive reinforcement)
• What would happen to your behaviour if the dispenser was empty? (extinction)
• What would happen if the dispenser was refilled? (spontaneous recovery)
• Why was the student able to operate this dispenser despite never having operated it previously? (generalization)
• Why did the student choose to work the dispenser rather than open the empty container? (discrimination)
• What would happen if you operated the dispenser but did not receive the treat until an hour later? (poor
learning)
• What principle is at work in this situation? (delayed reinforcement)
• What schedule of reinforcement is at work here? (continuous reinforcement)
• What type of reinforcement does the treat represent? (primary)
• Why does the coin that you usually need to operate the dispenser have value to you? (conditioned
reinforcement)
You can make up other questions to fit the concepts you cover in class or use different terms than the answers given
here. Although this quiz requires recall, you could make it into a multiple-choice quiz if you wish. This exercise should
reduce the number of students who can identify operant concepts only in the Skinner box situation and only through rote
memory.
Smith, J. Y. (1990). Demonstration of learning techniques. In V. P. Makosky, C. C. Sileo, L. G. Whittemore, C. P. Landry, &
M. L. Skutley (Eds.), Activities handbook for the teaching of psychology: Vol. 3 (pp. 83–84). Washington, DC: American
Psychological Association.
LECTURE/DISCUSSION TOPIC:
Flora, S. R., & Pavlik, W. B. (1990). An objective and functional matrix for introducing concepts of reinforcement and punishment.
Teaching of Psychology, 17, 121–122.
Tauber, R. T. (1988). Overcoming misunderstanding about the concept of negative reinforcement. Teaching of Psychology, 15, 152–
153.
O’Leary, K. D., Kaufman, K. F., Kass, R. E., & Drabman, R. S. (1970). The effects of loud and soft reprimands on the behavior of
disruptive students. Exceptional Children, 37, 145–155.
LECTURE/DISCUSSION TOPIC: BEHAVIOUR MODIFICATION
The text discusses self-applied behaviour modification in the Chapter 6 Personal Application. Behaviour modification,
frequently known as contingency management, is most often used in therapy “to increase the frequency of appropriate
behaviours and to eliminate or reduce inappropriate responses” (Klein, 2002, p. 114). Klein outlined three steps to
implementing an effective contingency management program:
1. Assessment: The frequency of appropriate and inappropriate behaviours is ascertained, as well as the situations
in which each occurs; reinforcers for the behaviours are determined.
2. Contingency contracting: The relationship between responding and reinforcement is specified, as well as the
method of reinforcement for the appropriate behaviours.
3. Implementation: The treatment is implemented, and changes in behaviour are measured during and after
treatment.
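The three steps above lend themselves to simple record-keeping. A minimal sketch, with hypothetical daily counts, comparing the baseline (assessment) frequency of a target behaviour with its frequency during implementation:

```python
def mean_daily_frequency(counts):
    """Average daily count of the target behaviour over one phase."""
    return sum(counts) / len(counts)

baseline = [12, 10, 14, 11, 13]  # step 1: assessment phase (hypothetical counts)
treatment = [9, 7, 6, 4, 3]      # step 3: counts measured during implementation

change = mean_daily_frequency(treatment) - mean_daily_frequency(baseline)
print(f"mean change per day: {change:+.1f}")  # negative = behaviour reduced
```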
The assessment phase is also referred to as a baseline measure. It is vital to collect information about the entire
situation and not just the behaviour involved. This information could prove important in deciding the nature of the
reinforcement to be used during treatment, and it may reveal clues about the situations in which the behaviour occurs.
During the contingency contracting phase, the information gathered from the assessment phase is used to design the
treatment program. Questions concerning the schedule of reinforcement, the need for shaping, and the identity of those
who will administer the reinforcement must be answered. With adults who are motivated to break a particular behaviour
pattern, self-reinforcement may be used. Klein (2002) cited studies in which self-reinforcement has been used
successfully with such problem behaviours as impulsive overspending, depression, inadequate study habits, and
overeating.
It is necessary to measure the behaviour both during and after treatment, to ensure that the treatment actually has the
desired effect and that the effect is lasting. This lasting effect is sometimes difficult to achieve. Many behaviour
modification programs employ a token economy. Appropriate behaviours result in secondary reinforcement through
tokens, which can be exchanged later for desired objects or privileges. A major problem with such systems is that the
desired behaviours become linked to the tokens. When the tokens disappear, so may the desired behaviours. Therefore,
the conditions that will maximize the transfer of the behaviour contingencies to the real world must be defined. Stahl and
Leitenberg (1976) gave several examples of attempts to maximize this transfer with mental hospital patients. For
example, praise has been conditioned with tokens so that praise alone will suffice later, characteristics of the hospital
have been designed to simulate conditions outside the hospital, and family members have been trained to observe
patients’ behaviour and administer rewards and punishments as appropriate.
This information can be used to supplement Weiten & McCann’s discussion of behaviour modification. Some of the
principles could be used if students decide to engage in self-modification programs. Such behaviour modification has
been used quite successfully in a wide variety of situations.
Klein, S. B. (2002). Learning: Principles and applications (4th ed.). New York: McGraw-Hill.
Stahl, J. R., & Leitenberg, H. (1976). Behavioral treatment of the chronic hospital patient. In H. Leitenberg (Ed.), Handbook of
behavior modification and behavior therapy (pp. 211–241). Englewood Cliffs, NJ: Prentice-Hall.
DEMONSTRATION/ACTIVITY:
Ulman, J. D. (1980). Synthesizing the elements of behavior modification: A classroom simulation game. Teaching of Psychology,
7, 182–183.
This project might be somewhat ambitious for a simple demonstration. However, if you require a term paper, this
self-modification project could serve as an alternative requirement. It would also help meet the expectations of the
many students who hope to learn some self-help strategies in introductory psychology.
Anonymous. (1981). Recording and self-modification. In L. T. Benjamin, Jr., & K. D. Lowman (Eds.), Activities handbook for the
teaching of psychology (pp. 64–65). Washington, DC: American Psychological Association.
Watson, D. L., & Tharp, R. G. (2002). Self-directed behavior: Self-modification for personal adjustment (8th ed.). Belmont, CA:
Wadsworth.
Worthington, E. L., Jr. (1977). Honesty and success in self-modification projects for a college class. Teaching of Psychology, 4, 78–
82.
• From Activities Handbook for the Teaching of Psychology, by L. T. Benjamin, Jr., & K. D. Lowman (Eds.), 1981,
Washington, DC: American Psychological Association:
Operant conditioning: Role in human behavior, by E. Stork, p. 57
Operant conditioning demonstration, by P. Keith-Spiegel, pp. 58–59
Knowledge of results, by L. Snellgrove, p. 66
Learning curves, by D. Holmer, pp. 71–72
• From Activities Handbook for the Teaching of Psychology: Vol. 2, by V. P. Makosky, L. G. Whittemore, & A. M.
Rogers (Eds.), 1987, Washington, DC: American Psychological Association:
Backwards alphabet, by L. M. Sheldahl, pp. 63–64
Human operant conditioning, by J. K. Bare, pp. 67–68
• From Activities Handbook for the Teaching of Psychology: Vol. 4, by L. T. Benjamin, Jr., B. F. Nodine, R. M. Ernst,
& C. Blair-Broeker (Eds.), 1999, Washington, DC: American Psychological Association:
Shaping behavior through operant conditioning, by P. S. Fernald & L. D. Fernald, pp. 176–180
Using psychological perspectives to change habits, by R. McEntarffer, pp. 181–182
Applying the principles of learning and memory to students’ lives, by A. J. Weseley, pp. 183–185
Aggression on television, by M. A. Lloyd, pp. 346–349
COPYRIGHT © 2013 by Nelson Education Ltd.
Adapted from “Defining Learning: Two Classroom Activities,” by T. Rocklin, 1987, Teaching of Psychology, 14, p. 228. Copyright
1987 by Lawrence Erlbaum Associates, Inc. Adapted by permission.
For each of the following scenarios, identify the unconditioned stimulus, conditioned stimulus, unconditioned response,
and conditioned response.
Suzy goes outside to play in her tree house. A swarm of bees has nested near her tree house, and she gets stung when she
climbs up to the tree house. This happens three times in a week. Suzy becomes afraid to go near the tree and cries
violently when her dad tries to get her to climb up to the tree house.
Jerry’s wife, Mary, gets a new nightgown and wears it whenever she is in the mood for sexual relations. After a month,
the sight of the nightgown alone is enough to excite Jerry.
A couple goes to a movie on their first date, has a wonderful time, and eventually gets married. Whenever they see
this movie on the late-night show, they get a tender feeling and think about each other.
A student survives a plane crash that occurred because of a thunderstorm. Now, whenever the student hears thunder, he
gets anxious.
Your instructor may use some props for this activity. If not, imagine that there is a full gum ball machine and a large
glass jar with a lid on the instructor’s desk. Answer the following questions about these props using your knowledge of
operant conditioning concepts.
What is the gum ball that you receive from the machine called?
What is the reason that you would not attempt to buy a gum ball if the dispenser was empty?
What would you call your behaviour if the dispenser was refilled and you bought a gum ball?
Why would you be able to operate this dispenser despite never having operated it previously?
Why would you choose to work the dispenser rather than open the empty container?
What would happen if you operated the dispenser but did not receive the gum ball until an hour later?
Why does the coin that you usually need to operate the dispenser have value to you?
1. Provide a word or term that means the same thing as negative reinforcement:
2. Negative reinforcement
a. increases behaviour.
b. decreases behaviour.
c. has no effect on behaviour.
3. If you were about to receive a negative reinforcement, would you look forward to it?
a. Yes
b. No
Based on “Overcoming Misunderstanding About the Concept of Negative Reinforcement,” by R. T. Tauber, 1988, Teaching of
Psychology, 15, 152–153. Mahwah, NJ: Erlbaum.
From “Relation of Cue to Consequence in Avoidance Learning,” by J. Garcia and R. A. Koelling, 1966, Psychonomic Science, 4, pp.
123–124. Reprinted by permission.
From “Pavlovian Conditioning: It's Not What You Think It Is,” by R. A. Rescorla, 1988, American Psychologist, 43, 151–160.
Copyright © 1988 by the American Psychological Association. Reprinted by permission.
From Learning and Memory, 2nd ed. (p. 45), by J. E. Purdy, M. R. Markham, B. L. Schwartz, & W. C. Gordon, 2001, Belmont, CA:
Wadsworth.
From Learning and Memory, 2nd ed. (p. 199), by J. E. Purdy, M. R. Markham, B. L. Schwartz, & W. C. Gordon, 2001, Belmont,
CA: Wadsworth.
                                        CONSEQUENCE
                            Applied                     Removed
Behaviour increases         Positive reinforcement      Negative reinforcement
Behaviour decreases         Positive punishment         Negative punishment
Based on “Overcoming Misunderstanding About the Concept of Negative Reinforcement,” by R. T. Tauber, 1988, Teaching of
Psychology, 15, 152–153. Mahwah, NJ: Erlbaum.
Adapted from “An Objective and Functional Matrix for Introducing Concepts of Reinforcement and Punishment,” by S. R. Flora and
W. B. Pavlik, 1990, Teaching of Psychology, 17, p. 122. Mahwah, NJ: Erlbaum.