Chapter 5
Learning
DO YOU KNOW ABOUT OUR DIGITAL OFFERINGS?
SMARTBOOK
• Make It Informed. Real-time reports quickly identify the concepts that require more attention from individual
students—or the entire class. SmartBook™ detects the content a student is most likely to forget and brings it
back to improve long-term knowledge retention. Students help inform the revision strategy.
• Make It Precise. Systematic and precise, a heat map tool collates data anonymously collected from
thousands of students who used Connect Psychology's LearnSmart.
• Make It Accessible. The data is graphically represented in a heat map as “hot spots” showing specific
concepts with which students had the most difficulty. Revising these concepts, then, can make them more
accessible for students.
CONNECT
© 2015 by McGraw-Hill Education. This is proprietary material solely for authorized instructor use. Not authorized for sale or distribution in any
manner. This document may not be copied, scanned, duplicated, forwarded, distributed, or posted on a website, in whole or part.
• Make It Intuitive. You receive instant, at-a-glance views of student performance matched with student
activity.
• Make It Dynamic. Connect Insight™ puts real-time analytics in your hands so you can take action early and
keep struggling students from falling behind.
• Make It Mobile. Connect Insight™ travels from office to classroom, available on demand wherever and
whenever it's needed.
OPENING THEMES
The topic of learning is a central one to psychology, incorporating the areas of classical and operant
conditioning. These lectures will give students a basic understanding of the behaviorist perspective.
Later in the course, students will learn how methods derived from behaviorist techniques are applied to
the treatment of psychological disorders.
LEARNING OBJECTIVES
Learning is a relatively permanent change in behavior that is brought about by experience. It is clear
that we are primed for learning from the beginning of life. Infants exhibit a simple type of learning called
habituation. Habituation is the decrease in response to a stimulus that occurs after repeated
presentations of the same stimulus. Most learning is considerably more complex than habituation, and
the study of learning has been at the core of the field of psychology.
Although philosophers since the time of Aristotle have speculated on the foundations of learning, the
first systematic research on learning was done at the beginning of the 20th century, when Ivan Pavlov
developed the framework for learning called classical conditioning. Classical conditioning is a type of
learning in which a neutral stimulus comes to elicit a response after being paired with a stimulus that
naturally brings about that response.
The basic processes of classical conditioning that underlie Pavlov’s discovery are straightforward,
although the terminology he chose is not simple. Keeping in mind Pavlov’s laboratory experiments with
dogs, the basics of classical conditioning can be explained as follows: First, before conditioning, there
are two unrelated stimuli: the ringing of a bell and meat. We know that normally the ringing of a bell
does not lead to salivation but to some irrelevant response, such as pricking up the ears or perhaps a
startle reaction. The bell is therefore called the neutral stimulus, because it is a stimulus that, before
conditioning, does not naturally bring about the response in which we are interested. We also have
meat, which naturally causes a dog to salivate—the response we are interested in conditioning. The
meat is considered an unconditioned stimulus (UCS) because food placed in a dog’s mouth
automatically causes salivation to occur. The response that the meat elicits (salivation) is called an
unconditioned response (UCR)—a natural, innate, reflexive response that is not associated with
previous learning. Unconditioned responses are always brought about by the presence of unconditioned
stimuli. After a number of pairings of the bell and meat, the bell alone causes the dog to salivate. When
conditioning is complete, the bell has evolved from a neutral stimulus to a conditioned stimulus (CS). At
this time, salivation that occurs as a response to the conditioned stimulus (bell) is considered a
conditioned response (CR).
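For instructors who want a concrete classroom illustration, the bell-and-meat sequence above can be sketched as a tiny simulation. This is a hypothetical model, not material from the text; the five-pairing threshold is an arbitrary assumption chosen only for illustration.

```python
# Minimal sketch of classical conditioning: a neutral stimulus (bell)
# becomes a conditioned stimulus (CS) after repeated pairings with an
# unconditioned stimulus (meat). The 5-pairing threshold is an
# arbitrary illustrative assumption, not a value from the text.

class Dog:
    PAIRINGS_NEEDED = 5  # assumed number of pairings to establish the CR

    def __init__(self):
        self.pairings = 0

    def present(self, bell=False, meat=False):
        """Return True if the dog salivates on this trial."""
        if bell and meat:
            self.pairings += 1          # the bell is paired with the UCS
        if meat:
            return True                 # UCR: meat always elicits salivation
        if bell:
            # CR: the bell alone elicits salivation only after conditioning
            return self.pairings >= self.PAIRINGS_NEEDED
        return False

dog = Dog()
print(dog.present(bell=True))           # before conditioning: False
for _ in range(5):
    dog.present(bell=True, meat=True)   # pairing trials
print(dog.present(bell=True))           # after conditioning: True
```

The sketch makes the terminology concrete: `meat` plays the UCS, salivation to meat is the UCR, and salivation to the bell alone is the CR that appears only after repeated pairings.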
Although the initial conditioning experiments were carried out with animals, classical conditioning
principles were soon found to explain many aspects of everyday human behavior. Emotional responses
are especially likely to be learned through classical conditioning processes. Learning by means of
classical conditioning occurs during childhood as well as in adulthood. In more extreme cases, classical
conditioning can lead to the development of phobias, which are intense, irrational fears. Posttraumatic
stress disorder (PTSD), suffered by some war veterans and others who have had traumatic experiences,
can also be produced by classical conditioning. On the other hand, classical conditioning also relates to
pleasant experiences. For instance, a person may develop a particular fondness for the smell of a certain
perfume or aftershave lotion because thoughts of an early love come rushing back whenever he or she
encounters it.
EXTINCTION
Extinction occurs when a previously conditioned response decreases in frequency and eventually
disappears. To produce extinction, one needs to end the association between conditioned stimuli and
unconditioned stimuli. However, an extinguished conditioned response does not necessarily vanish
forever. The subject may show spontaneous recovery, the reemergence of an extinguished conditioned
response after a period of time and with no further conditioning.
Stimulus generalization is a process in which, after a stimulus has been conditioned to produce a
particular response, stimuli that are similar to the original stimulus produce the same response. The
greater the similarity between two stimuli, the greater the likelihood of stimulus generalization. The
conditioned response elicited by the new stimulus is usually not as intense as the original conditioned
response, although the more similar the new stimulus is to the old one, the more similar the new
response will be. Stimulus discrimination, in contrast, occurs if two stimuli are sufficiently distinct from
each other that one evokes a conditioned response but the other does not. Stimulus discrimination
provides the ability to differentiate between stimuli.
Pavlov hypothesized that all learning is nothing more than long strings of conditioned responses; this
notion has not been supported by subsequent research. According to Pavlov, the process of linking
stimuli and responses occurs in a mechanistic, unthinking way. In contrast to this perspective, learning
theorists influenced by cognitive psychology have argued that learners actively develop an
understanding and expectancy about which particular unconditioned stimuli are matched with specific
conditioned stimuli. Traditional explanations of how classical conditioning operates have also been
challenged by John Garcia, a learning psychologist. He found that some organisms—including humans—
were biologically prepared to quickly learn to avoid foods that smelled or tasted like something that
made them sick.
STUDENT ASSIGNMENTS:
CONNECT ASSIGNMENTS
Both Concept Clips and Interactivity assignments can be assigned to students to demonstrate
Classical Conditioning.
Have students complete Handout 6–1, in which they analyze examples of classical conditioning.
Have students search online for examples of current research involving classical conditioning, including
research involving conditioning as a method for treating psychological disorders such as phobias or
mood disorders.
Ask students to describe and analyze a current television or magazine advertisement and answer
these questions:
What outcomes are suggested for those who use or do not use the product?
What does the ad tell us about how advertisers attempt to condition the behavior of consumers?
LECTURE IDEAS:
Present the following hints to help make the relationships between stimuli and responses easier to
understand and remember:
During conditioning, a previously neutral stimulus is transformed into the conditioned stimulus.
An unconditioned response and a conditioned response are the same (such as salivation in the example
described earlier). However, the unconditioned response occurs naturally, whereas the conditioned
response is learned.
Stimulus generalization—What takes place when a conditioned response follows a stimulus that is
similar to the original conditioned stimulus.
Stimulus discrimination—The ability to differentiate between stimuli so that responses occur only to
certain stimuli and not others.
Collect these props: a small whistle and a squeezable “puff” maker (as is sold in ear wax cleaner kits).
Ask for a student volunteer. It should be a female, about your height, and she should not be wearing
contact lenses.
Have the student stand squarely facing you, about one foot away. Set this up so that other students can
see her eyes.
Announce that you will now show how classical conditioning is done. You will show that you can
condition the volunteer to blink her eyes in response to the whistle.
Put the volunteer at ease. Ask her where she is from and then have the class applaud to that.
Now show that she will not blink when you blow the whistle.
Then start conditioning—pair the whistle with the air puff about five or six times. On the next trial, just
blow the whistle. Have the observers verify that she blinked, and then take your bows and applause!
This is a very uncontrolled situation, but what will help you have a successful result is to create the
expectation that you will get a successful result—you are counting, in part, on the suggestibility of your
subject. (While this is going on, you may want to have someone take a picture.)
After completing the demonstration, use this overhead to have the students review the relevant
concepts:
Ask students what or who they associate with a particular product—name a brand of athletic shoes, for
example, or a certain soft drink or car. If students name a personality, icon, or abstraction to describe
the product rather than product features, then they are seeing the theory of behaviorism in action.
In a classic scene from the NBC sitcom The Office, one office worker uses classical conditioning on another
office worker by pairing Altoids with the "ta da" chime of his computer. The episode is entitled "Phyllis'
Wedding."
This ad, which began airing in the fall of 2009, is an excellent example of pairing “happy” stimuli with a
credit card:
http://www.youtube.com/watch?v=TQk7Zh-dXCk
LEARNING OBJECTIVES
16–2 What are some practical methods for bringing about behavior change, both in ourselves and in
others?
After conducting the cat-in-the-cage experiment, Edward L. Thorndike observed that the cat had learned that
pressing the paddle was associated with the desirable consequence of getting food. Thorndike
summarized that relationship by formulating the law of effect: Responses that lead to satisfying
consequences are more likely to be repeated. According to Thorndike, it was not necessary for an
organism to understand that there was a link between a response and a reward. Instead, Thorndike
believed, over time and through experience the organism would make a direct connection between the
stimulus and the response without any awareness that the connection existed.
Thorndike’s early research served as the foundation for the work of one of the 20th century’s most
influential psychologists, B. F. Skinner (1904–1990). The Skinner box was a chamber with a highly
controlled environment that was used to study operant conditioning processes with laboratory animals.
Whereas Thorndike’s goal was to get his cats to learn to obtain food by leaving the box, animals in a
Skinner box learn to obtain food by operating on their environment within the box. Skinner became
interested in specifying how behavior varies as a result of alterations in the environment. Skinner,
whose work went far beyond perfecting Thorndike’s earlier apparatus, is considered the inspiration for a
whole generation of psychologists studying operant conditioning.
Reinforcement is the process by which a stimulus increases the probability that a preceding behavior
will be repeated. A reinforcer is any stimulus that increases the probability that a preceding behavior will
occur again. A primary reinforcer satisfies some biological need and works naturally, regardless of a
person’s previous experience. In contrast, a secondary reinforcer is a stimulus that becomes reinforcing
because of its association with a primary reinforcer.
A positive reinforcer is a stimulus added to the environment that brings about an increase in a
preceding response. If food, water, money, or praise is provided after a response, it is more likely that
that response will occur again in the future. In contrast, a negative reinforcer refers to an unpleasant
stimulus whose removal leads to an increase in the probability that a preceding response will be
repeated in the future. Negative reinforcement, then, teaches the individual that taking an action
removes a negative condition that exists in the environment. Like positive reinforcers, negative
reinforcers increase the likelihood that preceding behaviors will be repeated (Magoon & Critchfield,
2008).
Punishment refers to a stimulus that decreases the probability that a prior behavior will occur again.
Unlike negative reinforcement, which produces an increase in behavior, punishment reduces the
likelihood of a prior response. There are two types of punishment: positive punishment and negative
punishment, just as there are positive reinforcement and negative reinforcement. Positive punishment
weakens a response through the application of an unpleasant stimulus. In contrast, negative
punishment consists of the removal of something pleasant. Both positive and negative punishment
result in a decrease in the likelihood that a prior behavior will be repeated.
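The positive/negative and reinforcement/punishment distinctions above form a two-by-two grid: "positive" means a stimulus is added, "negative" means one is removed, reinforcement increases a behavior, and punishment decreases it. The grid can be sketched as a simple lookup; this is an illustrative aid, not material from the text.

```python
# The reinforcement/punishment grid: "positive" means a stimulus is
# added, "negative" means one is removed; reinforcement increases a
# preceding behavior, punishment decreases it.

GRID = {
    ("added",   "increases"): "positive reinforcement",
    ("removed", "increases"): "negative reinforcement",
    ("added",   "decreases"): "positive punishment",
    ("removed", "decreases"): "negative punishment",
}

def classify(stimulus, behavior):
    """Name the operant procedure from what happens to the stimulus
    and what happens to the behavior's future likelihood."""
    return GRID[(stimulus, behavior)]

print(classify("removed", "increases"))   # negative reinforcement
print(classify("added", "decreases"))     # positive punishment
```

Students often confuse negative reinforcement with punishment; the lookup makes plain that they sit in different rows of the grid because the behavior's likelihood moves in opposite directions.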
Punishment often presents the quickest route to changing behavior that, if allowed to continue, might
be dangerous to an individual. Moreover, the use of punishment to suppress behavior, even
temporarily, provides an opportunity to reinforce a person for subsequently behaving in a more
desirable way. Punishment has several disadvantages that make its routine use questionable. For one thing,
punishment is frequently ineffective, particularly if it is not delivered shortly after the undesired
behavior or if the individual is able to leave the setting in which the punishment is being given. Even
worse, physical punishment can convey to the recipient the idea that physical aggression is permissible
and perhaps even desirable. Ultimately, those who resort to physical punishment run the risk that they
will grow to be feared. Punishment can also reduce the self-esteem of recipients unless they can
understand the reasons for it. Finally, punishment does not convey any information about what an
alternative, more appropriate behavior might be. To be useful in bringing about more desirable behavior
in the future, punishment must be accompanied by specific information about the behavior that is being
punished, along with specific suggestions concerning a more desirable behavior. In short, reinforcing
desired behavior is a more appropriate technique for modifying behavior than using punishment.
Schedules of reinforcement refer to the different patterns of frequency and timing of reinforcement
following desired behavior. Behavior that is reinforced every time it occurs is said to be on a continuous
reinforcement schedule; if it is reinforced some but not all of the time, it is on a partial (or intermittent)
reinforcement schedule. Although learning occurs more rapidly under a continuous reinforcement
schedule, behavior lasts longer after reinforcement stops when it is learned under a partial
reinforcement schedule. Partial reinforcement schedules maintain performance longer than do
continuous reinforcement schedules before extinction—the disappearance of the conditioned
response—occurs. Partial reinforcement schedules can be put into two categories: schedules that
consider the number of responses made before reinforcement is given, called fixed-ratio and variable-
ratio schedules, and those that consider the amount of time that elapses before reinforcement is
provided, called fixed-interval and variable-interval schedules.
In a fixed-ratio schedule, reinforcement is given only after a specific number of responses. In a variable-
ratio schedule, reinforcement occurs after a varying number of responses rather than after a fixed
number. Although the specific number of responses necessary to receive reinforcement varies, the
number of responses usually hovers around a specific average.
In contrast to fixed-ratio and variable-ratio schedules, in which the crucial factor is the number of responses,
fixed-interval and variable-interval schedules focus on the amount of time that has elapsed since a
person or animal was rewarded. Because a fixed-interval schedule provides reinforcement for a
response only if a fixed time period has elapsed, overall rates of response are relatively low. One way to
decrease the delay in responding that occurs just after reinforcement, and to maintain the desired
behavior more consistently throughout an interval, is to use a variable-interval schedule. In a variable-
interval schedule, the time between reinforcements varies around some average rather than being
fixed.
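The four partial schedules described above behave like simple decision rules: the ratio schedules count responses, and the interval schedules watch the clock. They can be sketched as follows; the code is a hypothetical illustration, and the ratio of 10 and the 60-second interval are arbitrary assumptions, not values from the text.

```python
import random

# Sketch of the four partial reinforcement schedules. Each function
# decides whether a given response earns a reinforcer. The ratio (10)
# and interval (60 s) values are arbitrary illustrative assumptions.

def fixed_ratio(response_count, ratio=10):
    # Reinforce every `ratio`-th response (e.g., a free coffee per 10 cups).
    return response_count % ratio == 0

def variable_ratio(mean_ratio=10, rng=random):
    # Reinforce with probability 1/mean_ratio, so reinforcement arrives
    # after a varying number of responses that averages `mean_ratio`
    # (the slot-machine schedule).
    return rng.random() < 1 / mean_ratio

def fixed_interval(seconds_since_reward, interval=60):
    # Reinforce the first response after a fixed interval has elapsed.
    return seconds_since_reward >= interval

def variable_interval(seconds_since_reward, next_interval):
    # Same idea, but `next_interval` is redrawn around an average each time.
    return seconds_since_reward >= next_interval

print(fixed_ratio(10))        # 10th response is reinforced: True
print(fixed_ratio(11))        # 11th is not: False
print(fixed_interval(75))     # over a minute has elapsed: True
```

The contrast the text draws falls out of the rules: under `fixed_interval`, responding right after a reward can never pay off, which is why response rates dip after reinforcement, whereas `variable_ratio` can pay off on any response, sustaining steady responding.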
Just as in classical conditioning, operant learning involves the phenomena of discrimination and
generalization. The process by which people learn to discriminate stimuli is known as stimulus control
training. In stimulus control training, a behavior is reinforced in the presence of a specific stimulus, but
not in its absence. A discriminative stimulus signals the likelihood that reinforcement will follow a
response. Just as in classical conditioning, the phenomenon of stimulus generalization, in which an
organism learns a response to one stimulus and then exhibits the same response to slightly different
stimuli, occurs in operant conditioning.
Shaping is the process of teaching a complex behavior by rewarding closer and closer approximations of
the desired behavior. In shaping, you start by reinforcing any behavior that is at all similar to the
behavior you want the person to learn. Later, you reinforce only responses that are closer to the
behavior you ultimately want to teach. Finally, you reinforce only the desired response. Each step in
shaping, then, moves only slightly beyond the previously learned behavior, permitting the person to link
the new step to the behavior learned earlier.
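The successive-approximation procedure above can be sketched as a loop that gradually raises the criterion for reinforcement. This is a hypothetical illustration; the numeric scores and the step size are arbitrary assumptions, with 0 standing for an unrelated behavior and 100 for the target behavior.

```python
# Sketch of shaping: the criterion for reinforcement is raised in small
# steps until only the target behavior earns a reward. Behaviors are
# modeled as scores from 0 (unrelated) to 100 (the desired behavior);
# the numbers are arbitrary illustrative assumptions.

def reinforce(behavior_score, criterion):
    """Reinforce any behavior that meets the current criterion."""
    return behavior_score >= criterion

def shape(attempts, target=100, step=25):
    criterion = 0
    reinforced = []
    for score in attempts:
        if reinforce(score, criterion):
            reinforced.append(score)
            # Each reinforced step moves the criterion slightly closer to
            # the target, so later rewards require closer approximations.
            criterion = min(target, criterion + step)
    return reinforced

# Early rough attempts are rewarded; later, only near-target behavior is.
print(shape([10, 20, 40, 30, 60, 80, 100]))  # [10, 40, 60, 80, 100]
```

Note how attempts of 20 and 30 go unrewarded once the criterion has moved past them, which mirrors the text: each step reinforces only behavior slightly beyond what was learned before.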
Biological Constraints on Learning: You Can’t Teach an Old Dog Just Any Trick
Not all behaviors can be trained in all species equally well. Instead, there are biological constraints, built-
in limitations in the ability of animals to learn particular behaviors. The existence of biological
constraints is consistent with evolutionary explanations of behavior. Clearly, there are adaptive benefits
that promote survival for organisms that quickly learn—or avoid—certain behaviors. Additional support
for the evolutionary interpretation of biological constraints lies in the fact that the associations that
animals learn most readily involve stimuli that are most relevant to the specific environment in which
they live.
We have considered classical conditioning and operant conditioning as two completely different
processes. The key concept in classical conditioning is the association between stimuli, whereas in
operant conditioning it is reinforcement. Furthermore, classical conditioning involves an involuntary,
natural, innate behavior, but operant conditioning is based on voluntary responses made by an
organism.
Behavior modification is a formalized technique for promoting the frequency of desirable behaviors and
decreasing the incidence of unwanted ones. A behavior analyst is a psychologist who specializes in
behavior modification techniques. The techniques used by behavior analysts are as varied as the list of
processes that modify behavior. They include reinforcement scheduling, shaping, generalization training,
discrimination training, and extinction. Participants in a behavior-change program do, however, typically
follow a series of similar basic steps.
Behavior-change techniques based on these general principles have enjoyed wide success and have
proved to be one of the most powerful means of modifying behavior.
STUDENT ASSIGNMENTS:
CONNECT ASSIGNMENTS
Both Concept Clips and Interactivity assignments can be assigned to students to demonstrate Operant
Conditioning, Schedules of Reinforcement, and Shaping.
Have students complete the assignment on Handout 6–2 on shaping and successive approximation.
SCHEDULES OF REINFORCEMENT
LECTURE IDEAS:
DEMONSTRATION: SHAPING
You will need to arrange for a bicycle to be present in the classroom. Have it sitting unobtrusively off to
one side against a wall. If this is not possible, an umbrella will suffice as a prop.
Select a volunteer, preferably male. Ask for his name, where he is from, and have the class cheer him on
for what he is about to do. Ask him to step outside of the room. When he is out of earshot, tell the class
that they are going to use shaping to get him to ride the bicycle across the front of the classroom
(alternatively, to open the umbrella and dance in a circle while holding it over his head). They will do this
by clapping as he gets closer to each desired step in a sequence. First, he will have to look at the bicycle.
Then he will have to walk over to it, and so on, until he actually rides the bicycle across the stage. The
class will look at you and you will cue them when to clap. After the volunteer performs the desired act,
the clapping should stop and should not start again until the next higher level in the hierarchy is
reached. After the desired behavior is performed, lead the class in a big round of applause for him.
(While he is on the bicycle, you may want to have someone take his picture.)
Classical: Think of “Beethoven’s Fifth” (“classical” music)—you feel an emotional reaction when you hear
the first four notes.
Shaping: When you have your hair done, your stylist has been trained through a complex process.
Burrhus Frederic Skinner (1904–1990) is one of the most famous, influential, and controversial figures in
contemporary American psychology. He was born in the small railroad town of Susquehanna,
Pennsylvania, in March 1904. After graduating from Hamilton College in 1926 with a degree in English,
he tried writing, but eventually gave it up, because he felt he had nothing important to say. He became
interested in psychology and earned his PhD from Harvard University in 1931.
He taught for several years at the University of Minnesota and Indiana University. During this time he
wrote two of his most important books—The Behavior of Organisms (1938) and a novel, Walden Two
(1948), which is an account of a utopian society run in accordance with operant principles. Skinner
returned to Harvard in 1948, where he remained until his death in August 1990.
B. F. Skinner made numerous contributions to the science of behavior. He strongly influenced the area
of learning that he named operant conditioning. His Skinner box is now a standard apparatus for the
experimental study of animal behavior. Much of his work involved the study of how reinforcement
schedules influence learning and behavior. His Beyond Freedom and Dignity (1971) is a nonfiction
examination of his utopian society, in which he explains why we must understand how we control
behavior in everyday life. In his 1987 book, Upon Further Reflection, Skinner presents his views on issues
ranging from world peace and evolution to education and old age.
Edward Lee Thorndike was born in Williamsburg, Massachusetts, in 1874. His mother was a homemaker,
and his father was a minister. After graduating from high school in 1891, he attended Wesleyan
University, where he graduated in 1895. He then continued his education at Harvard University. In 1897,
he left Harvard and began graduate work at Columbia University. Thorndike studied learning in cats, and
earned a PhD in psychology in 1898.
His dissertation resulted in his publication in 1898 of “Animal Intelligence” in Psychological Review.
Thorndike observed trial and error learning in cats. He placed a cat in a small cage and observed it
manipulate the environment in order to escape. Thorndike called this type of learning instrumental
learning, because the organism's behavior is instrumental in producing the desired outcome.
After teaching for a year at the College for Women of Case Western Reserve in Cleveland, Ohio,
Thorndike went to Teachers College at Columbia University, where he remained the rest of his academic
career. He became more interested in human mental abilities, and in 1903 published a monograph,
“Heredity, Correlation and Sex Differences in School Abilities.”
Thorndike was a prolific writer, publishing more than 450 articles and books. Some of his important
publications include Educational Psychology (1903), The Elements of Psychology (1905), The
Fundamentals of Learning (1932), and The Psychology of Wants, Interests, and Attitudes (1935).
He also worked on solving industrial problems, such as employee exams and testing. He was a member
of the board of the Psychological Corporation. He served as president of the American Psychological
Association in 1912. Thorndike died in 1949.
Show the slides that include Figure 3 from Chapter 6 to demonstrate the differences between
reinforcement and punishment.
SCHEDULES OF REINFORCEMENT
Fixed ratio: rewards given after a fixed number of responses.
Examples: Getting a free coffee for every 10 cups that you buy at a local coffee house; a magazine
subscription offer (buy 11 issues and get the 12th one free).

Variable ratio: rewards given after a varying number of responses.
Example: Gambling (a slot machine) is always the best example.

Fixed interval: rewards given after a fixed period of time.
Examples: An exam given every Friday in class (studying occurs on Thursday night); a store offers a sale
or discount every Saturday.

Variable interval: rewards given after varying periods of time.
Example: A radio station offers free tickets at some point during the next hour, but you do not know
when the offer will occur.
Season 2 of the show Lost (on ABC) introduces a key plot element in which the characters must press a
button on a computer every 108 minutes or else something bad will happen (they do not know what it is
because they are afraid to find out). This is an example of both negative reinforcement and fixed-interval
conditioning.
LEARNING OBJECTIVES
Some psychologists view learning in terms of the thought processes, or cognitions, that underlie it—an
approach known as cognitive learning theory. Although psychologists working from the cognitive
learning perspective do not deny the importance of classical and operant conditioning, they have
developed approaches that focus on the unseen mental processes that occur during learning, rather
than concentrating solely on external stimuli, responses, and reinforcements. In its most basic
formulation, cognitive learning theory suggests that it is not enough to say that people make responses
because there is an assumed link between a stimulus and a response—a link that is the result of a past
history of reinforcement for a response. Instead, according to this point of view, people, and even lower
animals, develop an expectation that they will receive a reinforcer after making a response. Two types
of learning in which no obvious prior reinforcement is present are latent learning and observational
learning.
LATENT LEARNING
In latent learning, a new behavior is acquired but is not demonstrated until some incentive is provided
for displaying it. Both humans and animals develop cognitive maps, which are mental representations of
spatial locations and directions. For example, latent learning may permit a person to know the location
of a kitchenware store at a local mall that they have frequently visited, even though they have never
entered the store and do not even like to cook.
OBSERVATIONAL LEARNING
According to psychologist Albert Bandura and colleagues, a major part of human learning consists of
observational learning, which is learning by watching the behavior of another person, or model.
Because of its reliance on observation of others—a social phenomenon—the perspective taken by
Bandura is often referred to as a social cognitive approach to learning. Observational learning is
particularly important in acquiring skills in which the operant conditioning technique of shaping is
inappropriate. Not all behavior that we witness is learned or carried out. One crucial factor that
determines whether we later imitate a model is whether the model is rewarded for his/her behavior.
Models who are rewarded for behaving in a particular way are more apt to be mimicked than are
models who receive punishment. Observing the punishment of a model, however, does not necessarily
stop observers from learning the behavior.
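The distinction above—between acquiring a behavior and performing it—can be sketched as a toy model. This is an illustrative assumption-laden sketch, not Bandura's procedure; the class name, the outcome labels, and the imitation rates are all hypothetical.

```python
class Observer:
    """Toy model of vicarious reinforcement: behaviors are learned from any
    observation, but imitation depends on the model's outcome."""

    def __init__(self):
        self.known_behaviors = set()  # behaviors acquired through observation

    def watch(self, behavior, model_outcome):
        """Observe a model; return the (hypothetical) probability of imitation."""
        self.known_behaviors.add(behavior)  # learning occurs regardless of outcome
        imitation_rate = {"rewarded": 0.8, "neutral": 0.4, "punished": 0.1}
        return imitation_rate[model_outcome]
```

The key design choice mirrors the text: `known_behaviors` grows even when the model is punished (learning still occurs), while the returned imitation probability is highest for rewarded models.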