
Republic of the Philippines

Laguna State Polytechnic University


Province of Laguna

GRADUATE STUDIES AND APPLIED RESEARCH


Master of Arts in Education
Major in Guidance and Counseling
Foundations of Guidance and Counseling Joevy P. de Lima
3rd shift, 1st semester, SY 2019-2020

VISION: The Laguna State MISSION: LSPU provides quality education QUALITY POLICY: We, at LSPU are committed
Polytechnic University shall be through responsive instruction, distinctive with continual improvement to provide quality,
the Center for Sustainable research, and sustainable extension and efficient services to the university stakeholders’
Development transforming production services for improved quality of highest level of satisfaction through a dynamic
lives and communities. life towards nation building. and excellent management system imbued with
utmost integrity, professionalism and
innovation.

FOUNDATIONS OF GUIDANCE AND COUNSELING (EDUC 202)

Three Kinds of Learning

1. Classical Conditioning
Concept of Classical Conditioning:
Classical conditioning gets its name from the fact that it is the kind of learning situation that
existed in the early “Classical” experiments of Ivan Pavlov (1849- 1936). In the late 1890s,
the famous Russian physiologist began to establish many of the basic principles of this form
of conditioning.

Classical conditioning is also sometimes called respondent conditioning or Pavlovian
conditioning. Pavlov won the Nobel Prize in 1904 for his work on digestion, but he is
remembered today for his experiments on basic learning processes.

Pavlov had been studying the secretion of stomach acids and salivation in dogs in response
to the ingestion of varying amounts and kinds of food. While doing so, he observed a
curious phenomenon: sometimes stomach contractions, secretions, and salivation would
begin when no food had actually been eaten.

The mere sight of the food bowl, the individual who normally brought the food, or even the
sound of that individual’s footsteps was enough to produce a physiological response in the
dog. Pavlov’s genius was to recognize the implications of this rather basic discovery.

He saw that the dogs were responding not only on the basis of biological need but also as a
result of learning or, as it came to be called, classical conditioning. In classical conditioning,
an organism learns to respond to a neutral stimulus that normally does not bring about that
response.

To demonstrate and analyze classical conditioning, Pavlov conducted a series of
experiments. In one, he attached a tube to the salivary gland of a dog. He then sounded a
tuning fork and, just a few seconds later, presented the dog with meat powder.

This pairing was carefully planned so that exactly the same amount of time elapsed between
the sound and the presentation of the meat powder, and the pairing was repeated many times.

At first the dog would salivate only when the meat powder itself was presented, but soon it
began to salivate at the sound of the tuning fork. In fact, even when Pavlov stopped
presenting the meat powder, the dog still salivated after hearing the sound. The dog had
been classically conditioned to salivate to the tone.

Process of Classical Conditioning:


The process of classical conditioning is typically diagrammed in three panels:
(a) before conditioning, (b) during conditioning, and (c) after conditioning.

Figure (a):
Consider the first diagram. Prior to the pairing of the tuning fork and the meat powder, the
sound of the tuning fork leads not to salivation but only to some irrelevant response, such
as a pricking of the ears. The sound in this case is therefore called the neutral stimulus,
because it has no effect on the response of interest.

We also have the meat powder which, because of the biological makeup of the dog, naturally
leads to salivation, the response that we are interested in conditioning. The meat powder
is considered an unconditioned stimulus, or UCS, because food placed in a dog’s mouth
automatically causes salivation to occur.

The response that the meat powder elicits (salivation) is called an unconditioned response
or UCR, a response that is not associated with previous learning. Unconditioned responses
are natural because they are innate responses that involve no training. They are always
brought about by the presence of unconditioned stimuli.

Figure (b):
Illustrates what happens during conditioning. The tuning fork is repeatedly sounded just
before the presentation of the meat powder. The goal of conditioning is for the tuning fork
to become associated with the unconditioned stimulus (meat powder) and, therefore, to bring
about the same sort of response as the unconditioned stimulus. During this period,
salivation gradually increases each time the tuning fork is sounded, until the tuning fork
alone causes the dog to salivate.

Figure (c):
When conditioning is complete, the tuning fork has evolved from a neutral stimulus to what
is now called a conditioned stimulus, or CS. At this point, salivation that occurs as a response
to the conditioned stimulus (tuning fork) is considered a conditioned response, or CR. This
situation is seen in figure (c): after conditioning, the conditioned stimulus evokes the
conditioned response.

The sequence and timing of the presentation of the unconditioned stimulus and the
conditioned stimulus are particularly important. A neutral stimulus that is presented just
before the unconditioned stimulus is most apt to result in successful conditioning.

Research has shown that conditioning is most effective if the neutral stimulus (which will
become a conditioned stimulus) precedes the unconditioned stimulus by between a half
second and several seconds, depending on what kind of response is being conditioned.


Stage 1: Before Conditioning:


In this stage, the unconditioned stimulus (UCS) produces an unconditioned response
(UCR) in an organism.
In basic terms, this means that a stimulus in the environment has produced a behavior /
response which is unlearned (i.e., unconditioned) and therefore is a natural response which
has not been taught. In this respect, no new behavior has been learned yet.
For example, a stomach virus (UCS) would produce a response of nausea (UCR). In another
example, a perfume (UCS) could create a response of happiness or desire (UCR).
This stage also involves another stimulus which has no effect on a person and is called
the neutral stimulus (NS). The NS could be a person, object, place, etc.
The neutral stimulus in classical conditioning does not produce a response until it is paired
with the unconditioned stimulus.

Stage 2: During Conditioning:


During this stage a stimulus which produces no response (i.e., neutral) is associated with the
unconditioned stimulus at which point it now becomes known as the conditioned stimulus
(CS).
For example, a stomach virus (UCS) might be associated with eating a certain food such as
chocolate (CS). Also, perfume (UCS) might be associated with a specific person (CS).

For classical conditioning to be effective, the conditioned stimulus should occur before the
unconditioned stimulus, rather than after it or at the same time. Thus, the conditioned
stimulus acts as a type of signal or cue for the unconditioned stimulus.
Often during this stage, the UCS must be associated with the CS on a number of occasions,
or trials, for learning to take place. However, one-trial learning can happen on certain
occasions, when it is not necessary for an association to be strengthened over time (such as
being sick after food poisoning or drinking too much alcohol).

Stage 3: After Conditioning:


Now the conditioned stimulus (CS) has been associated with the unconditioned stimulus
(UCS) to create a new conditioned response (CR).
For example, a person (CS) who has been associated with nice perfume (UCS) is now found
attractive (CR). Also, chocolate (CS) which was eaten before a person was sick with a virus
(UCS) now produces a response of nausea (CR).
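The gradual strengthening described across these three stages can be sketched in code. The snippet below is an illustrative simulation only, using a simplified Rescorla-Wagner-style learning rule (a later formal model of conditioning, not something from Pavlov's own work); the function name, learning rate, and other numbers are arbitrary choices for illustration.

```python
# Illustrative sketch (not from this handout): a simplified Rescorla-Wagner-style
# model of how the association between a neutral stimulus and a UCS strengthens
# over repeated CS-UCS pairings.

def condition(trials, alpha=0.3, lam=1.0):
    """Return the CS-UCS association strength after each pairing.

    alpha: learning rate; lam: maximum association the UCS supports.
    """
    strength = 0.0
    history = []
    for _ in range(trials):
        # The change on each trial is proportional to how "surprising"
        # the UCS still is (the gap between current and maximum strength).
        strength += alpha * (lam - strength)
        history.append(round(strength, 3))
    return history

history = condition(10)
# Before conditioning: strength near 0 (neutral stimulus, no CR).
# After repeated pairings: strength approaches the maximum (CS alone elicits the CR).
print(history)
```

The curve rises quickly at first and then levels off, mirroring the description above: salivation to the tuning fork increases with each pairing until the tone alone produces the response.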


Operant Conditioning
Thorndike’s Law of Effect
Thorndike's law of effect states that behaviors are modified by their positive or negative
consequences.
LEARNING OBJECTIVE
Relate Thorndike's law of effect to the principles of operant conditioning

KEY POINTS
• The law of effect states that responses that produce a satisfying effect in a particular situation
become more likely to occur again, while responses that produce a discomforting effect are less
likely to be repeated.
• Edward L. Thorndike first studied the law of effect by placing hungry cats inside puzzle boxes and
observing their actions. He quickly realized that cats could learn the efficacy of certain behaviors
and would repeat those behaviors that allowed them to escape faster.
• The law of effect is at work in every human behavior as well. From a young age, we learn which
actions are beneficial and which are detrimental through a similar trial and error process.
• While the law of effect explains behavior from an external, observable point of view, it does not
account for internal, unobservable processes that also affect the behavior patterns of human beings.

TERMS
Law of Effect
A law developed by Edward L. Thorndike that states, "responses that produce a satisfying effect in a
particular situation become more likely to occur again in that situation, and responses that produce
a discomforting effect become less likely to occur again in that situation."
trial and error
The process of finding a solution to a problem by trying many possible solutions and learning from
mistakes until a way is found.
behavior modification
The act of altering actions and reactions to stimuli through positive and negative reinforcement or
punishment.
FULL TEXT
Operant conditioning is a theory of learning that focuses on changes in an individual's observable
behaviors. In operant conditioning, new or continued behaviors are impacted by new or continued
consequences. Research regarding this principle of learning first began in the late 19th century with
Edward L. Thorndike, who established the law of effect. 
Thorndike's Experiments
Thorndike's most famous work involved cats trying to navigate through various puzzle
boxes. In this experiment, he placed hungry cats into homemade boxes and recorded the
time it took for them to perform the necessary actions to escape and receive their food
reward. Thorndike discovered that with successive trials, cats would learn from previous
behavior, limit ineffective actions, and escape from the box more quickly. He observed that
the cats seemed to learn, from an intricate trial and error process, which actions should be
continued and which actions should be abandoned; a well-practiced cat could quickly
remember and reuse actions that were successful in escaping to the food reward. 


Thorndike's puzzle box

This image shows an example of Thorndike's puzzle box alongside a graph demonstrating the
learning of a cat within the box. As the number of trials increased, the cats were able to escape
more quickly by learning.

The Law of Effect


Thorndike realized not only that stimuli and responses were associated, but also that
behavior could be modified by consequences. He used these findings to publish his now
famous "law of effect" theory. According to the law of effect, behaviors that are followed by
consequences that are satisfying to the organism are more likely to be repeated, and
behaviors that are followed by unpleasant consequences are less likely to be repeated.
Essentially, if an organism does something that brings about a desired result, the organism is
more likely to do it again. If an organism does something that does not bring about a desired
result, the organism is less likely to do it again. 


Law of effect

Initially, cats displayed a variety of behaviors inside the box. Over successive trials, actions
that were helpful in escaping the box and receiving the food reward were replicated and
repeated at a higher rate.

Thorndike's law of effect now informs much of what we know about operant conditioning
and behaviorism. According to this law, behaviors are modified by their consequences, and
this basic stimulus-response relationship can be learned by the operant person or animal.
Once the association between behavior and consequences is established, the response is
reinforced, and the association holds the sole responsibility for the occurrence of that
behavior. Thorndike posited that learning was merely a change in behavior as a result of a
consequence, and that if an action brought a reward, it was stamped into the mind and
available for recall later. 
From a young age, we learn which actions are beneficial and which are detrimental through
a trial and error process. For example, a young child is playing with her friend on the
playground and playfully pushes her friend off the swingset. Her friend falls to the ground
and begins to cry, and then refuses to play with her for the rest of the day. The child's
actions (pushing her friend) are informed by their consequences (her friend refusing to play
with her), and she learns not to repeat that action if she wants to continue playing with her
friend.
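The trial-and-error process described above can be sketched as a toy simulation. Nothing here comes from Thorndike's actual experiments; the action names, weights, and update rule are invented purely to illustrate how a satisfying consequence makes a response more likely to recur and a discomforting one less likely.

```python
import random

# Toy sketch (not Thorndike's procedure): actions followed by a satisfying
# consequence gain selection weight; actions followed by discomfort lose it.

def law_of_effect(actions, consequence, trials=200, step=0.1, seed=0):
    """Simulate trial-and-error learning via weighted random action choice.

    consequence(action) -> +1 (satisfying) or -1 (discomforting).
    Returns the final selection weights.
    """
    rng = random.Random(seed)
    weights = {a: 1.0 for a in actions}
    for _ in range(trials):
        # Pick an action in proportion to its current weight.
        action = rng.choices(list(weights), weights=weights.values())[0]
        # Satisfying effects "stamp in" the action; discomforting ones weaken it,
        # floored so every action keeps a small chance of being tried again.
        weights[action] = max(0.05, weights[action] + step * consequence(action))
    return weights

# A "puzzle box" where only pressing the lever leads to escape and food.
weights = law_of_effect(
    ["scratch_bars", "meow", "press_lever"],
    lambda a: 1 if a == "press_lever" else -1,
)
# The effective action ends up with by far the largest weight.
print(max(weights, key=weights.get))
```

As with the cats, early trials are scattered across many behaviors; over successive trials the ineffective ones fade and the rewarded one dominates.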

Skinner’s Operant Conditioning


Operant conditioning is a method of learning that occurs through rewards and punishments
for behavior. Through operant conditioning, an individual makes an association between a
particular behavior and a consequence (Skinner, 1938).
By the 1920s, John B. Watson had left academic psychology, and other behaviorists were
becoming influential, proposing new forms of learning other than classical conditioning.
Perhaps the most important of these was Burrhus Frederic Skinner who, for obvious reasons,
is more commonly known as B.F. Skinner.
He believed that the best way to understand behavior is to look at the causes of an action
and its consequences. He called this approach operant conditioning.


Skinner is regarded as the father of Operant Conditioning, but his work was based
on Thorndike’s (1898) law of effect. According to this principle, behavior that is followed by
pleasant consequences is likely to be repeated, and behavior followed by unpleasant
consequences is less likely to be repeated.
Skinner introduced a new term into the Law of Effect: reinforcement. Behavior which is
reinforced tends to be repeated (i.e., strengthened); behavior which is not reinforced tends
to die out, or be extinguished (i.e., weakened).
Skinner (1948) studied operant conditioning by conducting experiments using animals which
he placed in a 'Skinner Box' which was similar to Thorndike’s puzzle box.

Principles and Procedures


Skinner identified three types of responses, or operants, that can follow behavior.
• Neutral operants: responses from the environment that neither increase nor decrease the
probability of a behavior being repeated.
• Reinforcers: Responses from the environment that increase the probability of a behavior
being repeated. Reinforcers can be either positive or negative.
• Punishers: Responses from the environment that decrease the likelihood of a behavior
being repeated. Punishment weakens behavior.

We can all think of examples of how our own behavior has been affected by reinforcers and
punishers. As a child you probably tried out a number of behaviors and learned from their
consequences. 
For example, if when you were younger you tried smoking at school, and the chief
consequence was that you got in with the crowd you always wanted to hang out with, you
would have been positively reinforced (i.e., rewarded) and would be likely to repeat the
behavior.
If, however, the main consequence was that you were caught, caned, suspended from
school and your parents became involved you would most certainly have been punished,
and you would consequently be much less likely to smoke now.

Positive Reinforcement
Skinner showed how positive reinforcement worked by placing a hungry rat in his Skinner
box. The box contained a lever on the side, and as the rat moved about the box, it would
accidentally knock the lever. Immediately it did so, a food pellet would drop into a container
next to the lever.
The rats quickly learned to go straight to the lever after a few times of being put in the box.
The consequence of receiving food if they pressed the lever ensured that they would repeat
the action again and again.


Positive reinforcement strengthens a behavior by providing a consequence an individual
finds rewarding. For example, if your teacher gives you £5 each time you complete your
homework (i.e., a reward), you will be more likely to repeat this behavior in the future, thus
strengthening the behavior of completing your homework.

Negative Reinforcement
The removal of an unpleasant reinforcer can also strengthen behavior. This is known as
negative reinforcement because it is the removal of an adverse stimulus which is
‘rewarding’ to the animal or person. Negative reinforcement strengthens behavior because
it stops or removes an unpleasant experience.
For example, if you do not complete your homework, you give your teacher £5. You will
complete your homework to avoid paying £5, thus strengthening the behavior of completing
your homework.

Skinner showed how negative reinforcement worked by placing a rat in his Skinner box and
then subjecting it to an unpleasant electric current which caused it some discomfort. As the
rat moved about the box, it would accidentally knock the lever. Immediately it did so, the
electric current would be switched off. The rats quickly learned to go straight to the lever
after a few times of being put in the box. The consequence of escaping the electric current
ensured that they would repeat the action again and again.
In fact Skinner even taught the rats to avoid the electric current by turning on a light just
before the electric current came on. The rats soon learned to press the lever when the light
came on because they knew that this would stop the electric current being switched on.

Punishment (weakens behavior)
Punishment is defined as the opposite of reinforcement since it is designed to weaken or
eliminate a response rather than increase it. It is an aversive event that decreases the
behavior that it follows.
Like reinforcement, punishment can work either by directly applying an unpleasant stimulus
like a shock after a response or by removing a potentially rewarding stimulus, for instance,
deducting someone’s pocket money to punish undesirable behavior.
Note: It is not always easy to distinguish between punishment and negative reinforcement.
There are many problems with using punishment, such as:
• Punished behavior is not forgotten, it's suppressed - behavior returns when
punishment is no longer present.
• Causes increased aggression - shows that aggression is a way to cope with problems.
• Creates fear that can generalize to undesirable behaviors, e.g., fear of school.
• Does not necessarily guide toward desired behavior - reinforcement tells you what
to do, punishment only tells you what not to do.
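The distinction flagged in the note above (punishment versus negative reinforcement) can be made mechanical. The helper below uses the standard operant-conditioning 2x2: whether a stimulus is added or removed, and whether it is pleasant or aversive. The "positive/negative punishment" labels are standard terminology in this literature, though the text above does not use them verbatim.

```python
# Illustrative helper (not from this handout): classify a consequence by whether
# a stimulus is added or removed and whether it is pleasant or aversive.

def classify(stimulus_is_pleasant: bool, stimulus_is_added: bool) -> str:
    """Return the standard operant-conditioning label for a consequence."""
    if stimulus_is_added:
        # Adding something pleasant rewards; adding something aversive punishes.
        return "positive reinforcement" if stimulus_is_pleasant else "positive punishment"
    # Removing something aversive rewards; removing something pleasant punishes.
    return "negative punishment" if stimulus_is_pleasant else "negative reinforcement"

# Examples drawn from the text:
print(classify(True, True))    # food pellet delivered -> positive reinforcement
print(classify(False, False))  # electric current switched off -> negative reinforcement
print(classify(True, False))   # pocket money deducted -> negative punishment
```

The key observation is that "reinforcement" and "punishment" name the effect on behavior (strengthen versus weaken), while "positive" and "negative" name only whether a stimulus is added or removed.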


Reinforcers
The most effective way to teach a person or animal a new behavior is with positive
reinforcement. In positive reinforcement, a desirable stimulus is added to increase a
behavior.

For example, you tell your five-year-old son, Jerome, that if he cleans his room, he will get a
toy. Jerome quickly cleans his room because he wants a new art set. Let’s pause for a
moment. Some people might say, “Why should I reward my child for doing what is
expected?” But in fact we are constantly and consistently rewarded in our lives. Our
paychecks are rewards, as are high grades and acceptance into our preferred school. Being
praised for doing a good job and for passing a driver’s test is also a reward.

Positive reinforcement as a learning tool is extremely effective. It has been found that one of the
most effective ways to increase achievement in school districts with below-average reading
scores was to pay the children to read. Specifically, second-grade students in Dallas were
paid $2 each time they read a book and passed a short quiz about the book. The result was a
significant increase in reading comprehension (Fryer, 2010).

What do you think about this program? If Skinner were alive today, he would probably think this was a great idea. He was
a strong proponent of using operant conditioning principles to influence students’ behavior
at school. In fact, in addition to the Skinner box, he also invented what he called a teaching
machine that was designed to reward small steps in learning (Skinner, 1961)—an early
forerunner of computer-assisted learning. His teaching machine tested students’ knowledge
as they worked through various school subjects. If students answered questions correctly,
they received immediate positive reinforcement and could continue; if they answered
incorrectly, they did not receive any reinforcement. The idea was that students would spend
additional time studying the material to increase their chance of being reinforced the next
time (Skinner, 1961).

PRIMARY AND SECONDARY REINFORCERS


Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go
back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were
rewarded with food each time they pressed the lever. For animals, food would be an obvious
reinforcer.
What would be a good reinforcer for humans? For your daughter Sydney, it was the promise
of a toy if she cleaned her room. How about Joaquin, the soccer player? If you gave Joaquin
a piece of candy every time he made a goal, you would be using a primary reinforcer.
Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of
reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are
primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive
for these things. For most people, jumping in a cool lake on a very hot day would be
reinforcing and the cool lake would be innately reinforcing—the water would cool the
person off (a physical need), as well as provide pleasure.
A secondary reinforcer has no inherent value and only has reinforcing qualities when linked
with a primary reinforcer. Praise, linked to affection, is one example of a secondary
reinforcer, as when you called out “Great shot!” every time Joaquin made a goal. Another
example, money, is only worth something when you can use it to buy other things—either
things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other
secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and
you had stacks of money, the money would not be useful if you could not spend it. What
about the stickers on the behavior chart? They also are secondary reinforcers.
