
Behaviorism

Ivan Pavlov 1849-1936


Ivan Pavlov, a Russian physiologist, is well known for his work in classical
conditioning or stimulus substitution. Pavlov's most renowned experiment involved
meat, a dog and a bell. Initially, Pavlov was measuring the dog's salivation in order to
study digestion. This is when he stumbled upon classical conditioning.
Pavlov's Experiment. Before conditioning, ringing the bell (neutral stimulus)
caused no response from the dog. Placing food (unconditioned stimulus) in front of the
dog initiated salivation (unconditioned response). During conditioning, the bell was rung
a few seconds before the dog was presented with food. After conditioning, the ringing
of the bell (conditioned stimulus) alone produced salivation (conditioned response).
This is classical conditioning.

Somehow you were conditioned to associate particular objects with your teacher. So at
present, when you encounter the objects, you are also reminded of your teacher. This
is an example of classical conditioning.
Pavlov also had the following findings:
• Stimulus Generalization. Once the dog has learned to salivate at the sound of the bell, it will salivate at other similar sounds.
• Extinction. If you stop pairing the bell with the food, salivation will eventually cease in response to the bell.
• Spontaneous Recovery. Extinguished responses can be "recovered" after an elapsed time, but will soon extinguish again if the dog is not presented with food.
• Discrimination. The dog could learn to discriminate between similar bells (stimuli) and discern which bell would result in the presentation of food and which would not.
• Higher-Order Conditioning. Once the dog has been conditioned to associate the bell with food, another neutral stimulus, such as a light, may be flashed at the same time that the bell is rung. Eventually, the dog will salivate at the flash of the light without the sound of the bell.
Edward L. Thorndike 1874-1949
Edward Thorndike's Connectionism theory gave us the original S-R framework of
behavioral psychology. More than a hundred years ago he wrote a textbook entitled Educational Psychology, and he was the first one to use this term. He explained that
learning is the result of associations forming between stimuli (S) and responses (R).
Such associations or "habits" become strengthened or weakened by the nature and
frequency of the S-R pairings. The model for S-R theory was trial and error learning in
which certain responses came to be repeated more than others because of rewards.
The main principle of connectionism (like all behavioral theory) was that learning could
be adequately explained without considering any unobservable internal states.
Thorndike's theory on connectionism states that learning has taken place when a
strong connection or bond between stimulus and response is formed. He came up with
THREE PRIMARY LAWS:
1. Law of Effect. The law of effect states that a connection between a stimulus and
response is strengthened when the consequence is positive (reward) and the
connection between the stimulus and the response is weakened when the
consequence is negative. Thorndike later revised this "law" when he found
that negative rewards (punishment) do not necessarily weaken bonds, and that
some seemingly pleasurable consequences do not necessarily motivate
performance.
2. Law of Exercise. This tells us that the more an S-R (stimulus-response) bond is practiced, the stronger it will become. "Practice makes perfect" seems to be associated with this. However, like the law of effect, the law of exercise also had to be revised when Thorndike found that practice without feedback does not necessarily enhance performance.
3. Law of Readiness. This states that the more readiness the learner has to
respond to the stimulus, the stronger will be the bond between them. When a
person is ready to respond to a stimulus and is not made to respond, it becomes
annoying to the person. For example, if the teacher says, "Okay, we will now watch the movie (stimulus) you've been waiting for," and suddenly the power goes off, the students will feel frustrated because they were ready to respond to the stimulus but were prevented from doing so. Likewise, if a person is not at all ready to respond to a stimulus and is asked to respond, that also becomes annoying. For instance, the teacher calls a student to stand up and recite, then asks the question and expects the student to respond right away when he is still not ready. This will be annoying to the student. That is why teachers should remember to ask the question first and wait for a few seconds before calling on anyone to answer.
Principles Derived from Thorndike's Connectionism:
1. Learning requires both practice and rewards (laws of effect/exercise)
2. A series of S-R connections can be chained together if they belong to the same
action sequence (law of readiness).
3. Transfer of learning occurs because of previously encountered situations.
4. Intelligence is a function of the number of connections learned.
John B. Watson 1878 - 1958
John B. Watson was the first American psychologist to work with Pavlov's ideas. He
too was initially involved in animal studies, then later became involved in human
behavior research.
He considered that humans are born with a few reflexes and the emotional reactions of fear, rage and love. All other behavior is learned through stimulus-response associations built up through conditioning. He believed in the power of conditioning so much that he claimed that if he were given a dozen healthy infants, he could make them into anything you want them to be, basically by making stimulus-response connections through conditioning.
Experiment on Albert. Watson applied classical conditioning in his experiment
concerning Albert, a young child and a white rat. In the beginning, Albert was not afraid
of the rat; but Watson made a sudden loud noise each time Albert touched the rat.
Because Albert was frightened by the loud noise, he soon became conditioned to fear
and avoid the rat. Later, the child's response was generalized to other small animals.
Now, he was also afraid of small animals. Watson then "extinguished" or made the
child "unlearn" fear by showing the rat without the loud noise.
Surely, Watson's research methods would be questioned today; nevertheless, his work
did clearly show the role of conditioning in the development of emotional responses to
certain stimuli. This may help us understand the fears, phobias and prejudices that
people develop.

Burrhus Frederick Skinner 1904 - 1990


Like Pavlov, Watson and Thorndike, Skinner believed in the stimulus-response
pattern of conditioned behavior. His theory zeroed in only on changes in observable
behavior, excluding the possibility of any processes taking place in the mind. Skinner's
1948 book, Walden Two, is about a utopian society based on operant conditioning. He also wrote Science and Human Behavior (1953), in which he pointed out how the
principles of operant conditioning function in social institutions such as government,
law, religion, economics and education.
Skinner's work differs from that of the three behaviorists before him in that he
studied operant behavior (voluntary behaviors used in operating on the environment).
Thus, his theory came to be known as Operant Conditioning.
Operant Conditioning is based upon the notion that learning is a result of change
in overt behavior. Changes in behavior are the result of an individual's response to
events (stimuli) that occur in the environment. A response produces a consequence
such as defining a word, hitting a ball, or solving a math problem. When a particular
Stimulus-Response (S-R) pattern is reinforced (rewarded), the individual is conditioned
to respond.
Reinforcement is the key element in Skinner's S-R theory. A reinforcer is anything
that strengthens the desired response. There is a positive reinforcer and a negative
reinforcer.
A positive reinforcer is any stimulus that is given or added to increase the
response. An example of positive reinforcement is when a teacher promises extra time
in the play area to children who behave well during the lesson. Another is a mother who
promises a new cell phone to her son if he gets good grades. Still other examples include verbal praise, star stamps and stickers.

A negative reinforcer is any stimulus that results in the increased frequency of a response when it is withdrawn or removed. A negative reinforcer is not a punishment; in fact, it is a reward. For instance, a teacher announces that a student who gets an average grade of 1.5 for the two grading periods will no longer take the final examination. The negative reinforcer is "removing" the final exam, which we realize is a form of reward for working hard and getting an average grade of 1.5.
A negative reinforcer is different from a punishment because a punishment is a
consequence intended to result in reduced responses. For example, a student who always comes late is not allowed to join group work that has already begun (punishment) and, therefore, loses points for that activity. The punishment was done to reduce the response of repeatedly coming to class late.
Skinner also looked into extinction or non-reinforcement: Responses that are not
reinforced are not likely to be repeated. For example, ignoring a student's misbehavior
may extinguish that behavior.
Shaping of Behavior. An animal in a cage may take a very long time to figure out
that pressing a lever will produce food. To accomplish such behavior, successive
approximations of the behavior are rewarded until the animal learns the association
between the lever and the food reward. To begin shaping, the animal may be rewarded
for simply turning in the direction of the lever, then for moving toward the lever, for
brushing against the lever, and finally for pressing the lever.
Behavioral chaining comes about when a series of steps needs to be
learned. The animal would master each step in sequence until the entire sequence is
learned. This can be applied to a child being taught to tie a shoelace. The child can be
given reinforcement (rewards) until the entire process of tying the shoelace is learned.
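To make shaping more concrete, here is a minimal Python sketch. It is an illustration only: the step names and the simple rule for advancing the criterion are assumptions made for the example, not Skinner's actual laboratory procedure. The idea is that only the current approximation of the target behavior is reinforced, and the criterion moves step by step toward the full behavior.

# Illustrative sketch of shaping: reinforce successive approximations of a
# target behavior (pressing a lever) until the full behavior occurs.
# The step names and the criterion-advancing rule are assumptions.

steps = [
    "turns toward the lever",
    "moves toward the lever",
    "brushes against the lever",
    "presses the lever",
]

def reinforce(step):
    # In a real setting this would be delivering food; here we just report it.
    print(f"Reinforce: animal {step}")

def shape(behavior_stream):
    """Reinforce each successive approximation in order until the target behavior is reached."""
    current = 0  # index of the approximation currently being reinforced
    for behavior in behavior_stream:
        if behavior == steps[current]:
            reinforce(behavior)
            if current == len(steps) - 1:
                print("Target behavior learned.")
                return
            current += 1  # move the criterion closer to the target behavior

# Hypothetical sequence of observed behaviors:
observed = [
    "sniffs the corner",
    "turns toward the lever",
    "moves toward the lever",
    "brushes against the lever",
    "presses the lever",
]
shape(observed)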
Reinforcement Schedules. Once the desired behavioral response is
accomplished, reinforcement does not have to be 100%; in fact, it can be maintained
more successfully through what Skinner referred to as partial reinforcement schedules.
Partial reinforcement schedules include interval schedules and ratio schedules.
Fixed Interval Schedules. The target response is reinforced after a fixed amount
of time has passed since the last reinforcement. For example, a bird in a cage is given
food (reinforcer) every 10 minutes, regardless of how many times it presses the bar.
Variable Interval Schedules. This is similar to fixed interval schedules but the
amount of time that must pass between reinforcements varies. For example, the bird may receive food (reinforcer) at different intervals, not every ten minutes.
Fixed Ratio Schedules. A fixed number of correct responses must occur before
reinforcement may occur. For example, the bird will be given food (reinforcer) every time it presses the bar 5 times.
Variable Ratio Schedules. The number of correct responses required for reinforcement varies. For example, the bird is given food (reinforcer) after it
presses the bar 3 times, then after 10 times, then after 4 times. So the bird will not be
able to predict how many times it needs to press the bar before it gets food again.
Variable interval and, especially, variable ratio schedules produce steadier and more persistent rates of response because the learners cannot predict when the reinforcement will come, although they know that they will eventually succeed. An example of this is why people continue to buy lotto tickets even when an almost negligible percentage of people actually win. While there is very rarely a big winner, once in a while somebody hits the jackpot (reinforcement). People cannot predict when the jackpot will come (variable interval), so they continue to buy tickets (repetition of the response).
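The four partial reinforcement schedules can be summarized as simple rules for when the next reinforcer becomes due. The short Python sketch below is only an illustration; the fixed numbers (10 minutes, 5 presses) echo the examples above, while the ranges used for the variable schedules are assumptions.

import random

# Illustrative sketch of the four partial reinforcement schedules.
# Fixed values (10 minutes, 5 presses) echo the examples in the text;
# the ranges for the variable schedules are assumed for illustration.

def next_requirement(schedule):
    """Return (unit, amount) that must be reached before the next reinforcement."""
    if schedule == "fixed interval":
        return ("minutes", 10)                         # same wait every time
    if schedule == "variable interval":
        return ("minutes", random.randint(5, 15))      # unpredictable wait
    if schedule == "fixed ratio":
        return ("bar presses", 5)                      # same count every time
    if schedule == "variable ratio":
        return ("bar presses", random.randint(3, 10))  # unpredictable count
    raise ValueError(f"unknown schedule: {schedule}")

for s in ["fixed interval", "variable interval", "fixed ratio", "variable ratio"]:
    unit, amount = next_requirement(s)
    print(f"{s}: reinforce after {amount} {unit}")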
Implications of Operant Conditioning.
These implications are given for programmed instruction (a short illustrative sketch follows the list).
1. Practice should take the form of question (stimulus) – answer (response)
frames which expose the student to the subject in gradual steps.
2. Require that the learner make a response for every frame and receive immediate feedback.
3. Try to arrange the difficulty of the questions so the response is always correct
and hence, a positive reinforcement.
4. Ensure that good performance in the lesson is paired with secondary
reinforcers such as verbal praise, prizes and good grades.
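As a concrete illustration of these implications, here is a minimal, hypothetical sketch of a programmed-instruction drill: small question (stimulus) - answer (response) frames presented in gradual steps, a response required for every frame, and immediate feedback after each response. The questions are made-up examples.

# Hypothetical programmed-instruction drill: question-answer frames in
# gradual steps, a response required for every frame, and immediate
# feedback after each response. The questions are made-up examples.

frames = [
    ("2 + 2 = ?", "4"),
    ("2 + 3 = ?", "5"),
    ("12 + 13 = ?", "25"),  # difficulty increases in gradual steps
]

def run_frames(frames):
    for question, correct_answer in frames:
        response = input(question + " ")  # the learner must respond to every frame
        if response.strip() == correct_answer:
            print("Correct! Well done.")  # immediate positive reinforcement
        else:
            print(f"Not quite. The answer is {correct_answer}.")  # immediate feedback

if __name__ == "__main__":
    run_frames(frames)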
Principles Derived from Skinner's Operant Conditioning:
1. Behavior that is positively reinforced will reoccur; intermittent reinforcement is
particularly effective.
2. Information should be presented in small amounts so responses can be
reinforced ("shaping").
3. Reinforcements will generalize across similar stimuli ("stimulus
generalization") producing secondary conditioning.
Looking back at the activity at the beginning, try to look into the rewards and
punishments that your former teacher used in class. Connect them with Skinner's
Operant Conditioning. Can you now see why your teacher used them?

Neobehaviorism: Tolman and Bandura


Usually, people who worked on the maze activity which you just did would say
they found the second maze easier. This is because they saw that the two mazes were
identical, except that the entrance and exit points were reversed. Their experience in
doing Maze A helped them answer Maze B a lot more easily. People create mental maps of things they perceive. These mental maps help them respond to other things or tasks later, especially if they see the similarity. You may begin to respond with trial and error (behavioristic), but later on your response becomes more internally driven (cognitive perspective). This is what neobehaviorism is about. It has aspects of behaviorism, but it also reaches out to the cognitive perspective.
There are two theories reflecting neobehaviorism that stand out:
1. Edward Tolman's Purposive Behaviorism
2. Albert Bandura's Social Learning Theory.
Both theories are influenced by behaviorism (which is focused on external elements in
learning), but their principles seem to also be reflective of the cognitive perspective
(focused on more internal elements).
Tolman's Purposive Behaviorism
Purposive behaviorism has also been referred to as Sign Learning Theory and
is often seen as the link between behaviorism and cognitive theory. Tolman's theory
was founded on two psychological views: those of the Gestalt psychologists and those
of John Watson, the behaviorist.
Tolman believed that learning is a cognitive process. Learning involves forming
beliefs and obtaining knowledge about the environment and then revealing that
knowledge through purposeful and goal-directed behavior.
Tolman stated in his sign theory that an organism learns by pursuing signs to a
goal, i.e., learning is acquired through meaningful behavior. He stressed the organized
aspect of learning: "The stimuli which are allowed in are not connected by just simple
one-to-one switches to the outgoing responses. Rather the incoming impulses are
usually worked over and elaborated in the central control room into a tentative
cognitive-like map of the environment. And it is this tentative map, indicating routes and
paths and environmental relationships, which finally determines what responses, if any,
the animal will finally make."
Tolman's form of behaviorism stressed the relationships between stimuli rather
than stimulus-response. Tolman said that a new stimulus (the sign) becomes
associated with an already meaningful stimulus (the significate) through a series of
pairings; there is no need for reinforcement in order to establish learning. In your maze
activity, the new stimulus or "sign" (Maze B) became associated with an already meaningful stimulus, the significate (Maze A). So you may have connected the two stimuli, Maze A and Maze B, and used your knowledge and experience in Maze A to learn to respond to Maze B.
Tolman's Key Concepts
Learning is always purposive and goal-directed. Tolman asserted that
learning is always purposive and goal-directed. He held the notion that an organism
acted or responded for some adaptive purpose. He believed individuals do more than
merely respond to stimuli; they act on beliefs, attitudes, changing conditions, and they
strive toward goals. Tolman saw behavior as holistic, purposive and cognitive.
Cognitive maps in rats. In his most famous experiment, one group of rats was
placed at random starting locations in a maze but the food was always in the same
location. Another group of rats had the food placed in different locations which always
required exactly the same pattern of turns from their starting location. The group that
had the food in the same location performed much better than the other group,
supposedly demonstrating that they had learned the location rather than a specific
sequence of turns. This tendency to "learn location" signified that the rats somehow formed cognitive maps that helped them perform well in the maze. He also found out
that organisms will select the shortest or easiest path to achieve a goal.
Applied in human learning, since a student passes by the same route going to
school every day, he acquires a cognitive map of the location of his school. So when
transportation re-routing is done, he can still figure out what turns to make to get to
school the shortest or easiest way.
Latent Learning. Latent learning is a kind of learning that remains or stays with
the individual until needed. It is learning that is not outwardly manifested at once.
According to Tolman it can exist even without reinforcement. He demonstrated this in
his rat experiments wherein rats apparently "learned the maze" by forming cognitive
maps of the maze, but manifested this knowledge of the maze only when they needed
to.
Applied in human learning, a two-year-old always sees her dad operate the TV remote control and observes how the TV is turned on, how the channel is changed, and how the volume is adjusted. After some time, the parents are surprised that the first time their daughter holds the remote control, she already knows which buttons to press for what function. Through latent learning, the child knew the skills beforehand, even though she had never performed them before.
The concept of intervening variable. Intervening variables are variables that are
not readily seen but serve as determinants of behavior. Tolman believed that learning is
mediated or is influenced by expectations, perceptions, representations, needs and
other internal or environmental variables. For example, in his experiments with rats he
found out that hunger was an intervening variable.
Reinforcement not essential for learning. Tolman concluded that reinforcement is not
essential for learning, although it provides an incentive for performance. In his studies,
he observed that a rat was able to acquire knowledge of the way through a maze, i.e.,
to develop a cognitive map, even in the absence of reinforcement.
Bandura's Social Learning Theory
10-Year-Old Boy in Texas Hangs Himself After Watching Saddam Execution
The Associated Press

HOUSTON Jan 4, 2007 (AP) Police and family members said a 10-year-old boy who
died by hanging himself from a bunk bed was apparently mimicking the execution of
former Iraqi leader Saddam Hussein.
Sergio Pelico was found dead Sunday in his apartment bedroom in the Houston-area
city of Webster, said Webster police Lt. Tom Claunch. Pelico's mother told police he
had previously watched a news report on Saddam's death.
"It appears to be accidental," Claunch said. "Our gut reaction is that he was
experimenting." An autopsy of the fifth-grader's body was pending.
Julio Gustavo, Sergio's uncle, said the boy was a happy and curious child.
He said Sergio had watched TV news with another uncle on Saturday and asked the
uncle about Saddam's death.
"His uncle told him it was because Saddam was real bad," Gustavo said. "He (Sergio)
said, 'OK.' And that was it."
Sergio's mother, Sara Pelico DeLeon, was at work Sunday while Sergio and other
children were under the care of an uncle, Gustavo said. One of the children found
Sergio's body in his bedroom.
Police said the boy had tied a slipknot around his neck while on a bunk bed. Police
investigators learned that Sergio had been upset about not getting a Christmas gift from
his father, but they don't believe the boy intentionally killed himself.
Clinical psychologist Edward Bischof of California said children of Sergio's age mimic
risky behaviors they see on TV such as wrestling or extreme sports without realizing
the dangers. He said TV appeared to be the stimulant in Sergio's case.
"I would think maybe this kid is trying something that he thinks fun to act out without
having the emotional and psychological maturity to think the thing through before he
acts on it," Bischof said.
Family members held a memorial for the boy Wednesday in the apartment complex
activity center. Gustavo said the family is trying to put together enough money to send
Sergio's body to Guatemala for burial.
"I don't think he thought it was real," Gustavo said of Saddam's hanging. "They showed
them putting the noose around his neck and everything. Why show that on TV?"

Albert Bandura's Social Learning Theory


Social learning theory focuses on the learning that occurs within a social context. It
considers that people learn from one another, including such concepts as observational
learning, imitation and modeling. The ten-year-old boy Sergio Pelico did watch
Saddam's execution on TV and then must have imitated it.
Among others, Albert Bandura is considered the leading proponent of this theory.
General principles of social learning theory
1. People can learn by observing the behavior of others and the outcomes of
those behaviors.
2. Learning can occur without a change in behavior. Behaviorists say that learning has to be represented by a permanent change in behavior; in contrast, social learning theorists say that because people can learn through observation alone, their learning may not necessarily be shown in their performance. Learning may or may not result in a behavior change.
3. Cognition plays a role in learning. Over the last 30 years, social learning
theory has become increasingly cognitive in its interpretation of human
learning. Awareness and expectations of future reinforcements or punishments
can have a major effect on the behaviors that people exhibit.
4. Social learning theory can be considered a bridge or a transition between
behaviorist learning theories and cognitive learning theories.

How the environment reinforces and punishes modeling


People are often reinforced for modeling the behavior of others. Bandura suggested
that the environment also reinforces modeling. This can happen in several possible ways:
1. The observer is reinforced by the model. For example, a student who changes
dress to fit in with a certain group of students has a strong likelihood of being
accepted and thus reinforced by that group.
2. The observer is reinforced by a third person. The observer might be
modeling the actions of someone else, for example, an outstanding class leader
or student. The teacher notices this and compliments and praises the observer
for modeling such behavior thus reinforcing that behavior.
3. The imitated behavior itself leads to reinforcing consequences. Many
behaviors that we learn from others produce satisfying or reinforcing results.
For example, a student in a multimedia class could observe that the extra work a classmate does is fun. This student, in turn, would do the same extra work and also experience enjoyment.
4. Consequences of the model's behavior affect the observer's behavior
vicariously. This is known as vicarious reinforcement. This is where the model
is reinforced for a response and then the observer shows an increase in that
same response. Bandura illustrated this by having students watch a film of a
model hitting an inflated clown doll. One group of children saw the model
being praised for such action. Without being reinforced themselves, the children in that group also began to hit the doll.

Contemporary social learning perspective of reinforcement and punishment


1. Contemporary theory proposes that both reinforcement and punishment have
indirect effects on learning. They are not the sole or main cause.
2. Reinforcement and punishment influence the extent to which an individual
exhibits a behavior that has been learned.
3. The expectation of reinforcement influences cognitive processes that promote learning. Therefore, attention plays a critical role in learning, and attention is influenced by the expectation of reinforcement. For example, when the teacher tells a group of students that what they will study next is not on the test, students will not pay attention, because they do not expect to need the information for a test.
Cognitive factors in social learning
Social learning theory has cognitive factors as well as behaviorist factors (actually
operant factors).
1. Learning without performance: Bandura makes a distinction between learning
through observation and the actual imitation of what has been learned. This is
similar to Tolman's latent learning.
2. Cognitive processing during learning: Social learning theorists contend that
attention is a critical factor in learning.
3. Expectations: As a result of being reinforced, people form expectations about
the consequences that future behaviors are likely to bring. They expect certain
behaviors to bring reinforcements and others to bring punishment. The learner
needs to be aware, however, of the response reinforcements and response
punishment. Reinforcement increases a response only when the learner is
aware of that connection.
4. Reciprocal causation: Bandura proposed that behavior can influence both the
environment and the person. In fact each of these three variables, the person,
the behavior, and the environment can have an influence on each other.
5. Modeling: There are different types of models. There is the live model, an actual person demonstrating the behavior. There can also be a symbolic model, which can be a person or action portrayed in some other medium, such as television, videotape, or computer programs.
Behaviors that can be learned through modeling
Many behaviors can be learned, at least partly, through modeling. For example, students can watch parents read, watch demonstrations of mathematics problems, or see someone act bravely in a fearful situation. Aggression can be learned through models. Research indicates that children become more aggressive when they observe aggressive or violent models. Moral
thinking and moral behavior are influenced by observation and modeling. This includes
moral judgments regarding right and wrong which can, in part, develop through
modeling.
Conditions necessary for effective modeling to occur
Bandura mentions four conditions that are necessary before an individual can
successfully model the behavior of someone else:
1. Attention - The person must first pay attention to the model.
2. Retention - The observer must be able to remember the behavior that has been observed. One way of increasing this is using the technique of rehearsal.
3. Motor reproduction - The third condition is the ability to replicate the behavior
that the model has just demonstrated. This means that the observer has to be
able to replicate the action, which could be a problem with a learner who is not
ready developmentally to replicate the action. For example, little children have
difficulty doing complex physical motion.
4. Motivation - The final necessary ingredient for modeling to occur is
motivation. Learners must want to demonstrate what they have learned.
Remember that since these four conditions vary among individuals, different
people will reproduce the same behavior differently.
Effects of modeling on behavior:
1. Modeling teaches new behaviors.
2. Modeling influences the frequency of previously learned behaviors.
3. Modeling may encourage previously forbidden behaviors.
4. Modeling increases the frequency of similar behaviors. For example a student
might see a friend excel in basketball and he tries to excel in football because he
is not tall enough for basketball.
Educational implications of social learning theory
Social learning theory has numerous implications for classroom use.
1. Students often learn a great deal simply by observing other people.
2. Describing the consequences of behavior can effectively increase the
appropriate behaviors and decrease inappropriate ones. This can involve discussing with learners the rewards and consequences of various behaviors.
3. Modeling provides an alternative to shaping for teaching new behaviors.
Instead of using shaping, which is operant conditioning, modeling can provide a faster, more efficient means of teaching new behavior. To promote effective modeling, a teacher must make sure that the four essential conditions exist: attention, retention, motor reproduction, and motivation.
4. Teachers and parents must model appropriate behaviors and take care that
they do not model inappropriate behaviors.
5. Teachers should expose students to a variety of other models. This technique
is especially important to break down traditional stereotypes.
