At its most general, we can define learning as a durable change in behavior or knowledge due to experience. This definition covers a lot of ground. A baby beginning to walk, a guy taking guitar lessons so he can play well enough to impress girls, a single mom taking classes to get her real estate license and switch careers, a kid figuring out how to advance to the next level of a video game without resorting to cheat codes, a grandfather regaining lost function in a hand after a stroke: all of these reflect types of learning. These types of learning aren't all the same; they don't all use the same psychological processes or require the same parts of the brain, but they are still learning according to our definition.

Classical Conditioning (also Pavlovian or Respondent Conditioning) is a form of associative learning that was first demonstrated by Ivan Pavlov[1]. The typical procedure for inducing classical conditioning involves presentations of a neutral stimulus along with a stimulus of some significance. The neutral stimulus can be any event that does not produce an overt behavioral response from the organism under investigation. Pavlov referred to this as the Conditioned Stimulus (CS). Conversely, presentation of the significant stimulus necessarily evokes an innate, often reflexive, response. Pavlov called these the Unconditioned Stimulus (US) and Unconditioned Response (UR), respectively. If the CS and the US are repeatedly paired, the two stimuli eventually become associated and the organism begins to produce a behavioral response to the CS alone. Pavlov called this the Conditioned Response (CR). Classical conditioning has been demonstrated in numerous species using a variety of methodologies. Popular forms of classical conditioning used to study the neural structures and functions that underlie learning and memory include fear conditioning, eyeblink conditioning, and classical conditioning of the Aplysia gill and siphon withdrawal reflex.

What do you do when you hear a bell ring? A teacher told this story on himself. When most teachers hear a bell, one of the first things they do is walk out into the hallway to be a monitor, right? Just to keep a watchful eye on the students. Well, this teacher had acquired such a strong habit that when he was at home and the doorbell rang, he would walk into a nearby hallway and "monitor" his family. The habit was simply so strong that he produced the right behavior (going into the hall to monitor) in the wrong place (his own home). In this chapter we will look at Classical Conditioning, perhaps the oldest model of change there is. It has several interesting applications to the classroom, ones you may not have thought about. Let's look at the components of this model.


The easiest place to start is with a little example. Consider a hungry dog who sees a bowl of food. What does the dog do? Right, the dog salivates. This is a natural sequence of events: the dog sees the food, then salivates. It is an unconscious, uncontrolled, and unlearned relationship.

• Food ---------------------> Salivation
• Unconditioned Stimulus ---> Unconditioned Response

"Unconditioned" simply means that the stimulus and the response are naturally connected, hard wired together like a horse and carriage and love and marriage, as the song goes. They just came that way. This connection was already present before we got there and started messing around with the dog.

Now, because we are humans who have an insatiable curiosity, we experiment. And because we are humans who like to play tricks on our pets, every time the dog sees the food we also ring a bell (ding-dong). When we present the food to the hungry dog (and before the dog salivates), we ring the bell, so the dog also hears the bell.

• Bell with Food ---> Salivation

We repeat this pairing (food and bell given simultaneously) at several meals. Then, after many trials, we do another experiment: we ring the bell (ding-dong), but we don't show any food. Something like this might happen:

• Bell ---> Salivation

The bell elicits the same response the sight of the food gets. Over repeated trials, the dog has learned to associate the bell with the food, and now the bell has the power to produce the same response as the food. You start with two things that are already connected with each other (food and salivation). Then you add a third thing (the bell) for several trials. Eventually, this third thing may become so strongly associated that it has the power to produce the old behavior. (And, of course, after you've tricked your dog into drooling and acting even more stupidly than usual, you must give it a special treat.)
This is the essence of Classical Conditioning. It really is that simple.
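For readers who like to see the idea mechanically, here is a minimal Python sketch. It is my own illustration (the update rule is an assumption in the spirit of later associative-learning models, not anything Pavlov wrote): each bell-plus-food pairing nudges the bell's power to elicit salivation toward a maximum.

```python
# Toy model of bell+food pairings (illustrative; the update rule is an
# assumption, not part of Pavlov's work). `v` is the bell's associative
# strength: 0.0 = neutral, 1.0 = elicits salivation as strongly as food.

def pair_trial(v, learning_rate=0.3, maximum=1.0):
    """One bell+food pairing: move v a fraction of the way toward maximum."""
    return v + learning_rate * (maximum - v)

v = 0.0  # before training, the bell means nothing to the dog
for trial in range(1, 11):
    v = pair_trial(v)
    print(f"pairing {trial:2d}: bell -> salivation strength {v:.2f}")
```

After ten pairings the strength is above 0.97: in this toy model, the bell alone is now nearly as effective as the food.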

So, where do we get the term "Conditioning" from all this? Let me draw up the diagrams with the official terminology.

•       Conditioning Stimulus
• Bell with Food -----------------------> Salivation
• Unconditioned Stimulus ------> Unconditioned Response

We already know that "Unconditioned" means unlearned: this connection was already present before we got there and started messing around with the dog or the child or the spouse. "Conditioning" just means the opposite. It means that we are trying to associate, connect, bond, link something new with the old relationship. And we want this new thing to elicit (rather than be elicited), so it will be a stimulus and not a response.

• Bell ---------------------> Salivation
• Conditioned Stimulus ---> Conditioned Response

Let's review these concepts. There are two key parts. "Stimulus" simply means the thing that starts it, while "response" means the thing that ends it. A stimulus elicits and a response is elicited. (This is circular reasoning, true, but hang in there.)

1. Unconditioned Relationship: an existing stimulus-response connection. "Unconditioned" means untaught, preexisting, already-present-before-we-got-there.
2. Unconditioned Stimulus: a thing that can already elicit a response.
3. Unconditioned Response: a thing that is already elicited by a stimulus.
4. Conditioning Stimulus: a new stimulus we deliver at the same time we give the old stimulus.
5. Conditioned Relationship: the new stimulus-response relationship we created by associating a new stimulus with an old response.

First, we start with an existing relationship: Unconditioned Stimulus ---> Unconditioned Response. Second, we pair a new thing (the Conditioning Stimulus) with the existing relationship. Finally, after many trials we hope the new thing has the power to elicit the old response.

A LITTLE HISTORY AND A COMPARISON

The example we used here is from the first studies on classical conditioning as described by Ivan Pavlov, the famous Russian physiologist. Pavlov discovered these important relationships around the turn of the century in his work with dogs (really). He created the first learning theory, which precedes the learning theory most teachers know quite well, reinforcement theory. We will look at reinforcement theory in a separate chapter, but for now I do want to make a point.

Some people confuse Classical Conditioning with Reinforcement Theory. To keep them separated, just look for the presence of rewards and punishments. Consider our basic example:

•       Conditioning Stimulus
• BELL with Food ---------------------> Salivation
• Unconditioned Stimulus ---> Unconditioned Response

The point is this: Classical conditioning says nothing about rewards and punishments, which are key terms in reinforcement theory. There is nothing in here about rewards or punishments, no terminology like that, not even an implication like that. Classical conditioning is built on creating relationships by association over trials.

Conditioning requires that there first be an unconditioned stimulus (UCS), which is already known and established, through inborn instinct or prior learning, to elicit an unconditioned response (UCR) from the organism. In everyday English, a UCS is a stimulus which already means something to the organism and already generates a response on its own. An example of a UCS from Pavlov's experiments is food, which instinctively elicits salivation, a UCR, from a dog. If a stimulus event always occurs just before the dog gets food, then that stimulus is a good predictor of the impending presentation of the food (UCS). Such a predictive stimulus, previously neutral and meaningless, is called a conditioned stimulus (CS), and as the organism acquires the predictive relationship the CS will elicit a conditioned response (CR) in preparation for the UCS, before the UCS is presented. In an example from Pavlov's work, a bell which was always rung before the food presentation was a CS. Prior to the pairings of the bell and the food, the dog might never have encountered the sound of a bell ringing in its life. But because the bell's ringing reliably predicted the food presentation, the sound of the bell eventually prompted the dog to salivate before the food was available. This salivation that occurred to the bell before the food arrived was the CR, a response in preparation for the predicted, expected opportunity to eat food. (After conditioning, the bell is known as the conditioned stimulus; the meat powder is the unconditioned stimulus. The response to the conditioned stimulus is the conditioned response; the response to the unconditioned stimulus is the unconditioned response.)

PHENOMENA OF LEARNING

1. Acquisition

Pavlovian conditioning starts with acquisition, the initial learning of the predictive relationship between stimuli.

2. Extinction

After acquisition, should the CS be presented repeatedly alone, without the UCS following, eventually the CR will cease to be elicited. This process is extinction. Using our Pavlov's dog example, if the bell continued to ring without food following, eventually the dog would stop salivating to the bell. Extinction is typically incomplete after one bout of CS-alone presentation; it may take several days of presenting the bell alone before all salivating to the bell completely ceases. This does not affect the UCS or the UCR: if food is presented by itself, the dog will still salivate to the food, just not to the bell.

However, extinction is not the same as forgetting or erasing the prior CS-UCS association. What happens during Pavlovian extinction is that a new, yet opposite, predictive relationship is learned: that the CS now predicts the absence of the UCS. This new association now competes with the prior association. As evidence that the prior association is not forgotten or erased, consider this: if the same extinguished CS is once again paired with the UCS, the association is re-learned even faster than it was learned initially. So, if the bell was again paired with the food, the bell would elicit salivation after fewer CS-UCS pairings than it did the first time, indicating that the prior learning provides an advantage on the second attempt at learning. The reason is that the two associations, acquisition and extinction, compete, and it may take several bouts of extinction training before the extinction learning can completely overwhelm the initial acquisition learning.

3. Spontaneous Recovery

Spontaneous recovery is further evidence that extinction is not erasure or simply forgetting. It is common that if the CS is presented again during another training session, even if after the first bout of extinction training the CR was apparently completely eliminated, the organism will perform the CR, although with less vigor and effort, and it will soon extinguish again. In our Pavlovian example, after the first day of extinction training, each successive day when the bell was presented alone there'd be some small amount of salivation to the bell at the beginning of the session. Sometimes several bouts of extinction (CS-alone) training are required before spontaneous recovery completely ceases and extinction is complete.

4. Stimulus Generalization/Discrimination

In stimulus generalization, stimuli with similar sensory qualities to the CS may elicit the CR. As an example, continuing with our Pavlov's dog example, let's say that a dog has learned that a bell predicts the presentation of food. But one day somebody walks by the dog with their keys jingling, and the metallic sound causes the dog to salivate, essentially by mistake. That would be an instance of generalization. But let's say that the person with the jingling keys starts to walk by often. The dog will learn to salivate only to the bell and not to the keys jingling. That would be stimulus discrimination. We can make the discrimination more difficult. Let's say the experimenter now introduces a second bell with a slightly different sound, maybe a little bit higher or lower in pitch than the original bell. The dog may have a more difficult time learning to tell the difference than it did with the keys jingling.
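The acquisition-then-extinction sequence can be sketched with the same simple error-correction model used earlier (my own assumption, not the chapter's). One caveat, flagged in the code: this toy model treats extinction as unlearning, whereas the text explains that real extinction is new, competing learning; the sketch only illustrates the observable rise and fall of the CR.

```python
# Acquisition then extinction in a toy error-correction model (illustrative).
# Caveat: real extinction is new, competing learning rather than erasure,
# and this simple model does not capture spontaneous recovery.

def update(v, supported, learning_rate=0.3):
    """Move associative strength v toward the level this trial supports."""
    return v + learning_rate * (supported - v)

v = 0.0
for _ in range(10):              # acquisition: bell + food
    v = update(v, supported=1.0)
print(f"after acquisition: CR strength {v:.2f}")

for _ in range(10):              # extinction: bell alone, no food
    v = update(v, supported=0.0)
print(f"after extinction:  CR strength {v:.2f}")
```

The strength climbs to about 0.97 during the ten pairings and falls to about 0.03 during the ten bell-alone trials.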

But with further training over time, the generalization to the second bell (as evidenced by salivation to the different bell) should decrease, and the dog should be able to discriminate between the two bells' similar sounds.

OPERANT CONDITIONING

In operant learning the consequences of a behavior come to guide and control the occurrence of that behavior. When a behavior has appetitive (pleasant, appealing, rewarding) consequences, the likelihood of that behavior being performed again increases. If the behavior has aversive (unpleasant, disturbing, punishing) consequences, the likelihood of that behavior occurring again goes down. Thus, the outcomes of behaviors come to control them. In contrast to classical conditioning, where the organism merely reacts to predictive cues about what is about to occur (the world "happens" to the organism) and tries to prepare a response, in operant conditioning the organism first generates a response (the organism "happens" to the world), and the outcome of that response determines whether the organism will perform that behavior again in the future.

An example of this is the classic Skinnerian lever-press task. A rat in a small chamber will seemingly randomly explore the chamber. It may poke and prod and move and jiggle and climb and manipulate any and everything in the chamber with no outcome. The organism is constantly performing behaviors; some of those behaviors won't have consequences and some of them will. But then, at one point, the rat pushes down on a lever and a food pellet rolls out of a little hole in the wall into a cup. The rat may eat the food pellet and go on poking and prodding and exploring, and after some time push down the same lever again, and again a food pellet rolls out and lands in the cup. Soon the rat is no longer randomly exploring, but continuously pressing the lever and eating until it is no longer hungry (or the experimenter runs out of food pellets). The emitted behavior (the lever-press) becomes controlled by its consequences (the presentation of a food pellet).

1. Reinforcement vs. Omission/Punishment

One thing that is often a pet peeve of purist animal learning researchers is the misuse of the term "reinforcement" in popular usage. Some of the more tightly wound researchers shudder when they hear an angry parent say to a misbehaving child, "If you don't stop, I'm gonna apply some negative reinforcement!" This is why: strictly speaking, reinforcement increases the likelihood of a response. The terms positive reinforcement and negative reinforcement differ in the nature of the stimulus used as a reinforcer (appetitive = positive, aversive = negative), but both increase the occurrence of a behavior. So if a parent wanted to increase the frequency of a child cleaning their bedroom, the parent might give the kid a few unexpected dollars whenever they saw the child cleaning their room. Just like the rat lever-pressing for food (also positive reinforcement), the consequences would be expected to increase the room-cleaning behavior.
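The lever-press story can be sketched as a toy simulation (entirely my own illustration; the behavior names, weights, and increment are invented for the example): the rat samples behaviors at random, and only the reinforced one grows more probable.

```python
import random

# Toy operant learning: a behavior followed by food becomes more frequent
# (illustrative sketch; the numbers are arbitrary).
random.seed(0)

weights = {"press_lever": 1.0, "groom": 1.0, "rear_up": 1.0}

def choose(weights):
    """Pick a behavior with probability proportional to its weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

for _ in range(200):
    behavior = choose(weights)
    if behavior == "press_lever":   # only the lever-press produces a pellet
        weights[behavior] += 0.5    # appetitive consequence strengthens it

share = weights["press_lever"] / sum(weights.values())
print(f"the rat now presses the lever on about {share:.0%} of opportunities")
```

The unreinforced behaviors keep their original weights, so over the 200 trials the lever-press comes to dominate the rat's repertoire, mirroring the shift from random exploration to steady pressing.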

In negative reinforcement, an aversive stimulus is applied, and that aversive stimulus is terminated when the desired behavior occurs, as in a parent threatening to unpleasantly discipline a misbehaving child to get them to stop their behavior. If the parent spanked the child and said, "You're gonna get another one if you don't start cleaning your room," that's negative reinforcement.

When a stimulus is applied to decrease the occurrence of a behavior, that is punishment. For example, if a parent spanked a child to get them to stop a bad behavior (using swear words, for instance), that is simple punishment. However, not all forms of punishment involve applying an aversive stimulus. One form of punishment is reward omission, where an expected appetitive stimulus is removed or withheld. An example of this would be a parent taking away a cherished toy as a consequence for a child using swear words. Whether simple punishment or reward omission is used, the aim is the same: to decrease the occurrence of an unwanted behavior.

2. Comparison of Processes: Classical vs. Operant

All of the processes seen in classical conditioning (acquisition, extinction, spontaneous recovery, stimulus generalization and discrimination, and higher order conditioning) are also found in operant conditioning. When outcomes occur after behaviors, there is acquisition, and when those outcomes cease to occur there is extinction. When two similar stimuli are present (similar looking levers to press, for instance), generalization and discrimination can occur. When successive outcomes can be strung together with successive behaviors, higher order conditioning occurs. There is some debate as to why these two different forms of learning exhibit such similarity. One view suggests that in operant learning the animal's own response serves as a predictor (or cue) of the outcome, much like the CS predicts the UCS. From this simple associative basis the rest of the similarities may eventually be built.

3. Schedules of Reinforcement

In regard to positive reinforcement, different patterns of reinforcement produce very different patterns of behavior. The rules or patterns governing reinforcement are called schedules of reinforcement. These schedules manipulate either the number of responses required (the ratio of responses) or the elapsed time since the last rewarded response (the interval of responses). In the simplest cases, a fixed ratio (FR) schedule dictates reinforcement after a fixed number of responses have been made. For example, an FR 10 schedule would deliver reinforcement for every 10 responses, while an FR 1 schedule would deliver reinforcement for each response (also called continuous reinforcement). It is similar to human "by the piece" payment schedules in some jobs: a company that pays a fixed price for every finished product produced by the worker, regardless of how long or short the time involved, would be paying the worker according to an FR schedule. An FR schedule generates rapid and consistent responding. In contrast, a fixed interval (FI) schedule dictates reinforcement of the first response after a fixed time interval has passed since the previous reinforcement.
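The two fixed schedules can be written down as tiny reinforcement rules. This is a sketch of my own (the function names and closure style are not from the text), just to make the definitions concrete:

```python
# Fixed-ratio and fixed-interval schedules as simple rules (illustrative).

def fixed_ratio(n):
    """FR n: reinforce every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        return count % n == 0
    return respond

def fixed_interval(seconds):
    """FI: reinforce the first response made `seconds` after the last reward."""
    last_reward = 0.0
    def respond(t):
        nonlocal last_reward
        if t - last_reward >= seconds:
            last_reward = t
            return True
        return False
    return respond

fr10 = fixed_ratio(10)
print(sum(fr10() for _ in range(100)))   # 100 responses on FR 10 -> 10 rewards

fi30 = fixed_interval(30)
print([fi30(t) for t in (10, 29, 31, 40, 62)])  # [False, False, True, False, True]
```

On the FI rule, only the first response after each 30-second wait pays off, no matter how many responses came before it, which is why responding clusters near the end of the interval.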

For example, in an FI 30 schedule, the first response made after 30 seconds has passed since the last rewarded response is reinforced, regardless of the total number of responses. This tends to create a lack of responses during most of the 30-second interval, with a flurry of responses near the end of the interval. Once the first response of the new interval is reinforced, the organism stops responding until near the end of the next interval, in anticipation of a new interval beginning.

There are also variable ratio (VR) and variable interval (VI) schedules. These are similar to their fixed counterparts, but the variable aspect means that the organism has to deal with some uncertainty with each response. In a VR 5 schedule, on average every fifth response would be reinforced, but the actual ratio constantly varies randomly after each reinforcement. Sometimes the next response might be reinforced, sometimes the third, sometimes the sixth, sometimes the tenth; but on average, every fifth response would be reinforced. Similarly, in a VI 20 schedule, on average the first response after 20 seconds had elapsed would be reinforced, but each time interval varies around that average. Each type of variable schedule does not generate as high a level of responding as its fixed counterpart, but the organisms do acquire a kind of persistence due to the constant uncertainty of the occurrence of their reinforcement.

If reinforcement is suddenly stopped with an FR or FI schedule, the organisms quickly cease responding. Often they display a form of distress called frustrative nonreward because of the failed reward expectancy. Imagine how you'd feel if you worked at a job all week and then the boss said you weren't going to get paid for your work. You'd be experiencing frustrative nonreward. However, rats trained on variable schedules are far more persistent and display little, if any, frustrative nonreward. Why? Because their training schedules accustom them to dealing with uncertainty of reinforcement on each response. Organisms trained on variable schedules can be quite persistent in the face of nonreinforcement. Humans at a casino playing the slot machines are perfect examples of a VR schedule, and you can see how persistent those gamblers can be in their responding.

Discrimination: organisms can also learn to discriminate between stimuli. This can be done by stimulus control training: in the presence of one stimulus a response is reinforced, whereas in the presence of the other no reinforcement is given, so the organism learns to differentiate between the two. Stimulus generalization also occurs in operant learning, as in classical conditioning: responding spreads to stimuli similar to the trained one.
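A variable-ratio schedule can be sketched the same way (again my own illustration; drawing the next requirement uniformly between 1 and 2n-1 is just one convenient way to get an average of n):

```python
import random

# Variable-ratio schedule: the number of responses required for the next
# reward varies randomly, averaging n (illustrative sketch).
random.seed(1)

def variable_ratio(n):
    requirement = random.randint(1, 2 * n - 1)  # uniform draw with mean n
    count = 0
    def respond():
        nonlocal count, requirement
        count += 1
        if count >= requirement:
            count = 0
            requirement = random.randint(1, 2 * n - 1)
            return True
        return False
    return respond

vr5 = variable_ratio(5)
rewards = sum(vr5() for _ in range(10_000))
print(f"VR 5: {rewards} rewards in 10,000 responses "
      f"(about one per {10_000 / rewards:.1f} responses)")
```

Because the requirement is unpredictable, every single response might be the rewarded one, which is the intuition behind the slot-machine persistence described in the text.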

Shaping: if we want an organism to learn a complex behaviour, we lead it step by step. Behaviours that are close to the desired behaviour are reinforced, then closer and closer approximations, until the full behaviour is performed. It is step-by-step learning.

COGNITIVE LEARNING

"Cognitive learning is the result of listening, watching, touching or experiencing." Cognitive learning is defined as the acquisition of knowledge and skill by mental or cognitive processes, the procedures we have for manipulating information "in our heads." Cognitive processes include creating mental representations of physical objects and events, and other forms of information processing. Conditioning can never explain what you are learning from reading our web-site; that learning illustrates the importance of cognitive learning, a powerful mechanism that provides the means of knowledge and goes well beyond simple imitation of others.

Latent learning: latent learning is a form of learning that is not immediately expressed in an overt response; it occurs without obvious reinforcement, to be applied later.[1] The organism learns something, but the knowledge is not immediately expressed. It remains dormant, and may not be available to consciousness, until specific events or experiences require this knowledge to be demonstrated. For instance, a child may observe a parent setting the table or tightening a screw but not act on this learning for a year; then he finds he knows how to do these things, even though he has never done them before.

Insight learning: insight learning is a type of learning that uses reason, especially to form conclusions, inferences, or judgments, to solve a problem. Unlike learning by trial-and-error, insight learning solves problems not through actual experience (like trial and error steps) but through trials occurring mentally. Often the solution arrives suddenly, such as when a person who has been stuck on a problem for a period of time abruptly sees the way to solve it. This was observed in the experiments of Wolfgang Kohler in the early 1900s involving chimpanzees. Kohler found that chimpanzees could use insight learning instead of trial-and-error to solve problems. In one example, a banana was placed high out of reach, and the chimpanzees found ways to reach it: they stacked boxes on top of each other to reach it and used sticks to knock the banana down.

Observational learning: observational learning is a powerful means of social learning. It principally occurs through the cognitive processing of information displayed by models. The information can be conveyed verbally, textually, and auditorially, and through actions, either by live or symbolic models such as television, movies, and the Internet. Regardless of the medium used to present the modeled activities, the same psychological processes underlie observational learning. These include attention and memory processes directed to establishing a conceptual representation of the modeled activity. This representation guides the enactment of observationally learned patterns of conduct. Unlike learning by doing, observational learning does not require enactment of the modeled activities during learning. The complexity of the learning, however, is restricted by the cognitive competence and enactment skills of the learner. Whether the learned patterns will be performed or not depends on incentive structures and observers' actual and perceived competence to enact the modeled performances.
