
Part Four: Applied Behavior Analysis

Applied Behavior Analysis
Behavioral Analysis
Baselining
Self-Monitoring as a Method of Behavior Change
Using Reinforcement
Using the Premack Principle
Natural Social Reinforcers
Secondary Reinforcement Using Tokens
More about Shaping
Prompting and Fading
Differential Reinforcement
DRO and DRL
Using Negative Reinforcement
Using Punishment
Treatment of Uncontrollable Sneezing
Punishment from the Environment
The Punishment Trap
Response Cost with a Rabbit
Applied Analysis of Antecedents
Summary: Applied Behavior Analysis

Applied Behavior Analysis
In preceding sections of this chapter you have been introduced to the main tools of the applied behavior analyst, a behavioral psychologist who specializes in operant conditioning. There are two main tools: (1) systematic arrangement of consequences (reinforcement and punishment) and (2) careful analysis and arrangement of antecedents (S+ and S-). Together, these skills can be called contingency management. A contingency is a dependency between events, such as the delivery of food when an animal performs a behavior. Contingency management is used whenever animals are motivated by incentives (such as getting paid for a job) or penalties (such as paying a fine for doing something wrong). Applied behavior analysis is the application of conditioning principles to tasks or problems outside the laboratory. We already discussed applications of classical conditioning in an earlier section of this chapter. In this section we will concentrate on applications of operant conditioning.

How did a professor start a class on applied behavior analysis, and what was the point?

One professor started a graduate class on applied behavior analysis by generating a huge list of problems. The professor told the students, "Think of any problem a person can have, and we will design a behavioral therapy for it. Let's get a big list on the board." At first the class responded slowly. Someone suggested "marriage problems," and the professor said, "Put it in terms of behaviors." "OK," said the student, "let's say the problem is not enough time spent together." The professor wrote that on the board. Another student suggested the problem of eliminating "writer's block," defined behaviorally as the problem of increasing the frequency of certain types of writing. Another student suggested the problem of eliminating unwanted involuntary movements (tics). Other students mentioned problems like nailbiting and quitting cigarettes.

How are problems defined in behavioral terms?

As the list grew, the students realized this process could take quite a while. The list of possible human problems is never-ending, and most can be defined in behavioral terms. In other words, most problems can be described in terms of some observable, measurable activity (behavior) that must be made either more frequent or less frequent. That is the essence of the behavioral approach: problems are defined as behaviors that can be made more frequent or less frequent, and the therapy consists of applying conditioning techniques to make the desired changes.

After the list filled the board, the professor gave the class its assignment. Each student had to select a problem and, by the end of the term, design a behavioral treatment for it. The professor was making a point, not just an assignment: behavioral approaches can be applied to virtually any conceivable problem.

Behavioral Analysis

The first step in applied behavior analysis is to analyze the problem. The analysis must be behavioral; that is, one must state the problem in terms of behaviors. Antecedent and consequent stimuli must be identified. After this analysis, one can make an educated guess about which intervention strategy or "treatment" might be best, drawing on the concepts we have covered in this chapter: reinforcers, punishers, controlling stimuli, modifiable behaviors, or observational learning.

Lindsley's Simplified Precision Model

What is Lindsley's Simplified Precision Model?

Green and Morrow (1974) offered a convenient, four-step guide to successful behavior change. Developed by Ogden Lindsley, it is called Lindsley's Simplified Precision Model:

1. Pinpoint the target behavior to be modified.
2. Record the rate of that behavior.
3. Change consequences of the behavior.
4. If the first try does not succeed, try and try again with revised procedures.

The first step in Lindsley's list was to pinpoint the behavior to be modified. This is often the most crucial step. If you cannot specify a behavior, how can you modify it?

What "heroic" efforts are exemplified by the list of speech problems?

Behavior modifiers and therapists sometimes go to heroic lengths to identify specific, modifiable behaviors. For example, a team of behavior therapists at a speech clinic came up with 49 different, specific verbal problems. The checklist included the following:

1. Overtalk. A person speaks considerably more than is appropriate.
2. Undertalk. A person doesn't speak enough.
3. Fast talk. A person talks too fast.
4. Slow talk. A person speaks too slowly.
5. Loud talk. A person speaks too loudly.
6. Quiet talk. A person speaks too softly.
7. Singsong speech. A person talks as if singing or chanting.
8. Monotone speech. A person speaks with unvarying tone.
9. Rapid latency. A person speaks too quickly after someone else.
10. Slow latency. A person responds only very slowly.
11. Affective talk. A person talks with great emotion: whining, crying, screaming, or speaking with a shaky voice.
12. Unaffective talk. A person speaks in a flat, unemotional voice even when emotion is appropriate.
13. Obtrusions. A person too often "butts in" to conversation.

...and the list goes on, up to #49, which is "illogical talk." Each category specifies a type of behavior: something that can be recognized and singled out for reinforcement, extinction, or punishment. If a person clearly alternates between logical and illogical talk, for example, it is possible to reinforce one and extinguish the other. (Adapted from Thomas, Walter & O'Flaherty, 1974, pp. 248-251.)

What happened when a client first entered the speech clinic? What happened once the problem was specified?

When a client first entered the speech clinic, the therapists checked off which behaviors defined the client's problem. Once the problem was specified in terms of something that could be measured or detected, a therapist could attempt to change the antecedents or consequences of the behavior, to alter the frequency of the behavior.

In what important respect was Lindsley's model incomplete?

One weakness of Lindsley's Simplified Precision Model was that it did not mention antecedents. It only mentioned changing consequences of a behavior. As we saw earlier, discriminative stimuli (both S+s and S-s) act as if they control behavior, turning it on or off. While taking baseline measurements of an operant rate (the frequency of some behavior), an applied behavior analyst should pay careful attention to antecedents, stimuli that come before the behavior. Often the relevance of antecedents will be obvious, once they are noticed. For example, if a child's misbehavior occurs every day when the child is dropped off at a nursery school, a good behavior analyst will target this period of the day and try to arrange for a reinforcing event to occur if the child remains calm following the departure of the parent.

Baselining

The next step, after specifying a behavior to be changed, is to measure its frequency of occurrence before attempting to change it. Baselining is keeping a careful record of how often a behavior occurs, without trying to alter it. The purpose of baselining is to establish a point of reference, so one knows later if the attempt to change behavior had any effect. During the baseline period, one should keep a record of potentially important antecedent stimuli, as well as the record of the frequency of the behavior itself.

What is baselining? What is its purpose? How long should baselining continue, as a rule?

As a general rule, baselining should continue until there is a definite pattern. If the behavior is produced at a steady rate, the baseline observation period can be short. If the frequency of the behavior varies a lot, baseline observations should continue for a long time. A genuine behavior change (as opposed to a random variation in the frequency of a behavior) should stand out sharply from the baseline rate of the behavior.
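For readers who like to see procedures spelled out, the baselining rule can be expressed as a short program: record the rate of the behavior each observation period and keep observing until the rate settles into a definite pattern. The Python sketch below is only an illustration; the data and the stability test (comparing day-to-day variation to the average rate) are invented for the example and are not part of Lindsley's model.

```python
# Minimal sketch of baselining: record how often the behavior occurs each
# observation period and keep observing until the rate shows a definite,
# steady pattern. The stability test below (small spread relative to the
# mean over the last several observations) is a hypothetical choice.

from statistics import mean, pstdev

def is_stable(counts, window=7, max_rel_spread=0.15):
    """True if the last `window` counts vary little relative to their mean."""
    if len(counts) < window:
        return False                      # not enough data yet; keep baselining
    recent = counts[-window:]
    m = mean(recent)
    if m == 0:
        return True                       # never occurring is also a clear pattern
    return pstdev(recent) / m <= max_rel_spread

# Hypothetical daily counts of a target behavior (e.g., "wets" per day).
baseline = [9, 11, 10, 10, 9, 11, 10]
print(is_stable(baseline))                # True: a genuine change should now stand out
```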

Self-Monitoring as a Method of Behavior Change

Sometimes behavior will change during a baseline observation period, due to the measurement itself. Measurement of one's own behavior is called self-monitoring, and it can be an effective tool for behavior change. When it works, the likely explanation is that a person becomes more conscious of the behavior and starts to alter it when it is measured. For example, many people wish to lose weight, but few people keep detailed records of their calorie intake. People who keep a record of every calorie consumed often find that they lose weight as a result. The act of recording data focuses attention on each bite of food and its consequences, and this (or the fear of having bad eating habits exposed) motivates a person to eat less.

What is self-monitoring? What sorts of problems respond well to self-monitoring?

Self-monitoring often works especially well with impulsive habits, like snacking, cigarette smoking, or TV-watching. These are all things a person may start to do "without thinking." Self-monitoring, as a behavior change procedure, lacks any specially arranged reinforcement or punishment, but it forces attention to natural reinforcements and punishments. It forces a person to think about every occurrence of a behavior. Although old-time behaviorists never would have used this language, it also draws attention to the consequences of behavior: "If I eat this, I am over my limit," or "If I start watching TV I won't get my homework done." Of course, self-monitoring must be done honestly, without cheating, to really work.

Using Reinforcement

In some cases, mere measurement and informative feedback is not enough. A direct form of intervention is required. Therefore the third part of Lindsley's method consists of arranging a contingency involving reinforcement or punishment. Green and Morrow (1974) offer the following example of Lindsley's Simplified Precision Model in action.

Jay, a twenty-year-old spastic, retarded man, urinated in his clothes. A physician had ruled out any organic basis for this behavior. Recently Jay was transferred from a state hospital to a nursing home. The nursing home director said that Jay must leave if he continued to be enuretic. He agreed, with reservation, to let [a student] try a program to eliminate the wetting behavior. Conveniently, the nurses had been routinely recording the number of times Jay wet his clothes each day, so Jay's daily rate of wets was plotted on a standardized graph form. Questionable punishment procedures, insisted upon by the nursing home director, were used in the first two modification phases. First, Jay was left wet for thirty minutes following each wet. Second, Jay was left in his room for the remainder of the day after he wet once. Throughout both punishment phases the median rate remained unchanged.

How does the story of "Jay" illustrate step #4 of Lindsley's procedure?

What to do? Lindsley's step #4 is "if at first you don't succeed, try and try again with revised procedures." That is a rather strange sounding rule, but it proves important. Behavior analysts are not automatically correct in their first assessment of a situation. Like other scientists, they must guess and test and try again. The first attempt at behavior change may not work. In the case of Jay, the behavior analysts decided to stop the "questionable punishment procedures" and try positive reinforcement instead.

In a third phase, following consultation by the student with the junior author, and with the nursing home director's reluctant consent, Jay was given verbal praise and a piece of candy each time he urinated in the toilet. No punishment was used. Candy and praise were chosen as consequences after discussion with the nursing home personnel disclosed what Jay seemed to "go for." The procedure essentially eliminated wetting. In a fourth, "after" phase (after reinforcements were discontinued), the rate remained at zero except for one lapse. (Green & Morrow, 1974, p.47)

Note the "after" phase. Proper intervention procedures include a follow-up to make sure the change is permanent. Presumably, approved toilet behavior and nonwetting were now maintained by natural consequences, such as social approval and the comfort of staying dry.

Using the Premack Principle

The search for effective reinforcers sometimes requires creative thinking. The Premack principle, discussed earlier, is the idea that preferred or high frequency behaviors can be used to reinforce less preferred, low frequency behaviors. Here is a case history in which the Premack principle was employed with good results.

How was the Premack principle used to help Burton, the strange 13-year-old?

Case #S-13. Burton had been forced out of school because of his bizarre mannerisms, gestures, and posturing. It was generally assumed that he was a severely schizophrenic child, albeit a highly intelligent 13-year-old. He acted belligerently toward his parents and was destructive of home property. He had been known to punish his parents by such behaviors as pouring buckets of water over his head in the middle of the living room. Neither he nor his parents rated any objects or people as reinforcing, but his high probability behaviors were to publicly assume a semifetal position and, alternately, to lock himself alone in his room for long hours. Reading the homework assigned by his visiting teacher was low probability behavior; therefore, the reinforcement of "retiring to his room" was used, contingent upon completing his homework assignment.

Later, he was returned to a special education classroom. Low probability behavior was classwork, and high probability behavior was escaping alone to a corner of the schoolyard. A contingency was established in which Burton was allowed to leave the class after completion of his assignment. Later, school attendance became a high probability behavior. At that point, he was allowed to attend school only contingent upon more acceptable behavior at home. (Tharp and Wetzel, 1969, p.47)

Natural Social Reinforcers

Probably the most commonly used reinforcer in human and animal affairs is natural social reinforcement. Natural social reinforcement includes all sorts of positive social contact, including (among humans) smiles, hugs, touching, compliments, or simple attention and company.

What are natural social reinforcers?

Common social reinforcers among non-human animals are attention, grooming, and cleaning. For example, small fish sometimes linger in the area of larger fish and clean them by eating parasites and debris from the larger fish. Walcott and Green (1974) showed that this cleaning symbiosis was reinforcing to the larger fish. In other words, the larger fish would perform a behavior more often when it was followed by contact with the smaller, cleaning fish.

Among humans, social reinforcers can be ruined if they are perceived to be fake or manipulative. A primary rule of social reinforcement is "be sincere." The Dale Carnegie course, which teaches "how to win friends and influence people," says flattery is not recommended as a technique for winning friends. Perhaps the word flattery is a bad choice. To some people, it implies deceit, not really meaning what you say, as if flattering someone means buttering them up. If flattery is honest and true, it is a powerful reinforcer. Flattery may be a poor word for it, but appreciation is very effective!

Natural social reinforcement can be useful in professionally planned behavior modification programs. The following example is from Tharp and Wetzel's book Behavior Modification in the Natural Environment (1969).

How was natural social reinforcement used with Rena?

Case #50. Rena, an elementary school student, was known throughout the school for her aggressiveness toward her peers, disruptive classroom behavior, and general defiance. Rena was referred by her parents, who were very concerned about her inappropriate behavior at school. After interviewing her parents, we discovered that Rena was exhibiting, on a somewhat lesser scale, the same behavior at home. Since reinforcers at home were so limited, we had to rely on the positive attention her father could give her when he got home. Rena's father had occasionally done this with her. They would play simple card games or play in the yard skipping rope. This interaction became very meaningful to her. An intervention plan was set up whereby Rena's teacher could inform the parents each day her behavior was satisfactory, and the father's attention was made contingent on a good report. When Rena's behavior was not satisfactory at school, this reinforcement did not occur. The plan took effect rather rapidly, and before long Rena was no problem at school. And, as hoped, her behavior at home also improved.

Secondary Reinforcement Using Tokens

Secondary reinforcers, you may recall, are learned or symbolic reinforcers. Secondary reinforcers get their reinforcing properties from their association with primary reinforcers. Money is a secondary reinforcer: you cannot eat or drink money or get any other primary reinforcement directly from it, but you can trade money for primary reinforcers such as food and drink. Grades are another example, as are symbolic rewards such as trophies or ribbons. They are worthless in themselves, but they can lead to primary reinforcers like pride, attention, and love. One student writes:

Everyone has a need and a want for food, sleep, and love. These are just a few examples of what is known as primary reinforcements. Sometimes reinforcements are earned to buy or attain primary reinforcements. This kind of reinforcement is known as secondary reinforcement. My sister is mentally retarded. She attends Gordon County Training Center. The teachers there have set up a system based on tokens. If the students at GCTC do their work well or get along with the other students, they receive a certain amount of tokens. At the end of each week, the students can go to the "store" in the school and spend their tokens on something that they want. My sister always comes home telling us how many tokens she earned and what she spent them on. She really enjoys getting tokens or any other kind of secondary reinforcement, especially at school, and I can tell when she has been positively reinforced for something she has done. The secondary reinforcements show her that she is doing something good and acceptable in the eyes of others, and reward her for her achievements. [Author's files]

How are secondary reinforcers used in token economies?

One well-known application of secondary reinforcement is in token economies. Token economies are like miniature economic systems, using plastic poker chips or similar tokens instead of money. Originally they were used in mental hospitals, but more recently they have been found useful in institutions serving learning-disabled individuals, where secondary reinforcement can support everything from classroom achievements to the fruits of employment after graduation.

Why are tokens useful in institutional settings?

Tokens are useful in group settings like a training center for several reasons: (1) the same reinforcer can be given to everybody, and (2) reinforcement can be given immediately after a behavior. In a treatment facility, candy might be an effective reinforcer with some people, but not everybody, and different people like different types of candy. With tokens, you do not need to worry about having the right type of reinforcer on hand. You can give reinforcements immediately (in tokens) and the patient can "spend" the tokens later at a store in the hospital.
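The bookkeeping behind a token economy is simple enough to sketch in a few lines of code. The Python example below is a toy illustration of the idea just described: tokens are delivered immediately after a desired behavior and exchanged later for backup reinforcers at a "store." The names, token amounts, and prices are hypothetical and are not taken from the Gordon County program.

```python
# Toy sketch of a token economy: tokens (secondary reinforcers) are earned
# immediately after desired behaviors and exchanged later for backup
# reinforcers at a "store." All names, amounts, and prices are hypothetical.

class TokenEconomy:
    def __init__(self, store_prices):
        self.balances = {}                # tokens held by each participant
        self.store_prices = store_prices

    def reinforce(self, person, tokens=1):
        """Deliver tokens right after a target behavior occurs."""
        self.balances[person] = self.balances.get(person, 0) + tokens

    def spend(self, person, item):
        """Trade tokens for a backup reinforcer, if the balance allows it."""
        price = self.store_prices[item]
        if self.balances.get(person, 0) >= price:
            self.balances[person] -= price
            return f"{person} buys {item}"
        return f"{person} needs more tokens for {item}"

economy = TokenEconomy({"snack": 3, "music time": 5})
economy.reinforce("Lee")                  # finished an assignment
economy.reinforce("Lee", tokens=2)        # cooperated with another student
print(economy.spend("Lee", "snack"))      # Lee buys snack
```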

More about Shaping

Before you can reinforce a behavior, the behavior must occur. What if the behavior is not occurring? Then you must use a technique called shaping, mentioned earlier in connection with teaching a rat to press a bar in a Skinner Box. Shaping works by starting with whatever the organism can already do and reinforcing closer and closer approximations to a goal.

What is the technical name for "shaping"?

Shaping is well described by its technical name: the method of successive approximations. To approximate something is to get close to it. To do successive approximations is to get closer by small steps.

What are five rules to observe, while using shaping?

Here are five simple rules for shaping:

1. Make sure the target behavior is realistic and biologically possible.
2. Specify the current (entering) behavior and desired (target) behavior.
3. Plan a small chain of behavioral changes leading from the entering behavior to the target behavior.
4. If a step proves too large, break it into smaller, more manageable steps.
5. Use reinforcers in small quantities, to avoid satiation (getting "full").

How are the five rules illustrated by teaching a dog to catch a Frisbee?

To illustrate the five rules, consider the task of teaching a dog to catch a Frisbee. If you have ever seen dogs catch a Frisbee, you know it is quite impressive. Suppose you want to teach your dog this trick. According to rule #1, first you have to decide whether your dog is physically capable of such an act. The national champion Frisbee-catching dogs are usually dogs like whippets with a lean, muscular build which permits them to leap high into the air to snatch Frisbees out of the breezes. Other breeds (bulldogs, Pekinese, and dachshunds) might be less able to learn this skill.

Suppose you have a dog that is physically capable of catching a Frisbee. How do you do it? Rule #2 says to specify the current (entering) behavior. This must be a behavior the dog can already perform. It should be a behavior that can be transformed, in small steps, into the target behavior. Frisbee catching requires that the dog take a Frisbee into its mouth. Most dogs are capable of doing this without any training; in fact, they will gladly puncture a Frisbee with their canine teeth, so use a cheap Frisbee you do not mind ruining. You might start by reinforcing the dog for the entering behavior of holding the Frisbee in its mouth. The dog enters the experimental situation with this behavior already in its repertoire. That is why it is called an entering behavior.

Rule #3 says to devise a series of small steps leading from the entering behavior (holding the Frisbee in its mouth) to the target behavior (snatching the Frisbee from the air). Finding such a sequence of steps is the trickiest part of shaping. One approach is to toss the Frisbee about a foot in the air toward the dog, hoping it will perform the skill so you can reinforce it. Unfortunately, this probably will not work. The dog does not know what to do when it sees the Frisbee coming, even if the dog has chewed on it in the past. It hits the dog on the nose and falls to the ground. This brings us to rule #4. If a step is too large (such as going directly from chewing the Frisbee to snatching it out of the air) you must break it into smaller steps.

In the Frisbee-catching project, a good way to start is to hold the Frisbee in the air. The dog will probably rise up on his hind legs to bite it. You let the dog grab it in his mouth. That is a first, simple step. Next, if you are holding the Frisbee above the dog, you release it a split second before the dog grabs it. Then you might drop it about an inch through the air, right into the dog's mouth. Now the most critical part of the shaping procedure takes place. You gradually allow the Frisbee to fall a greater and greater distance before the dog bites it. You might start one inch above the dog's mouth, work up to two inches, then three, and so on, until finally the dog can grab the Frisbee when it falls a whole foot from your hand to the dog's mouth. (For literate dogs outside the U.S. and Britain, use centimeters and meters.) Keep rule #4 in mind: if the dog cannot grab the Frisbee when it falls 8 inches, you go back to 6 inches for a while, then work back to 8, then 10, then a foot. Eventually, if the dog gets into the spirit of the game, you should be able to work up to longer distances. Once the dog is lunging for Frisbees that you flip toward it from a distance of a few feet, you are in business. From there to a full-fledged Frisbee retrieval is only a matter of degree.

Why is satiation unlikely to be a problem in this situation?

Rule #5 says to have reinforcers available in small quantities to avoid satiation. Satiation (pronounced SAY-see-AY-shun) is "getting full" of a reinforcer: getting so much of it that the animal (or person) no longer wants it. Suppose you are using Dog Yummies to reinforce your Frisbee-catching dog. If you use 50 yummies getting it up to the point where it is catching a Frisbee that falls eight inches, you will probably not get much further that day. The dog is likely to decide it has enough Dog Yummies and crawl off to digest the food. If satiation occurs, you lose your reinforcer, and your behavior modification project grinds to a halt.

Actually, satiation is unlikely to be a problem in this situation. Retrieval games are themselves reinforcing to many dogs. Also, dogs respond well to social reinforcement (praise and pats), and that never gets old to a loving dog. Still, dog trainers usually reserve their most powerful reinforcers for occasional use. When I took a dog obedience course, the trainer used retrieval of a tennis ball to reinforce her dog at the end of a training session. That was a fine example of the Premack Principle in action, because a preferred behavior (retrieving a tennis ball) was used to reinforce nonpreferred behaviors (demonstrating obedience techniques).
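The five rules can also be pictured as a loop: reinforce the current approximation, raise the criterion a little after success (rule #3), and drop back to a smaller step after failure (rule #4). The Python sketch below is a schematic illustration only; the chance-of-success function stands in for watching a real dog, and all of the numbers are invented.

```python
# Schematic sketch of shaping by successive approximations. The "criterion"
# is how far the Frisbee falls before the dog must catch it; succeeded() is
# a stand-in for watching a real animal, so its odds are simply invented.

import random

def succeeded(criterion_inches):
    # Hypothetical: success becomes less likely as the current step gets harder.
    return random.random() < max(0.2, 1.0 - criterion_inches / 20)

def shape(target_inches=12, step=2, max_trials=300):
    criterion = 1                         # entering behavior: a one-inch drop
    for _ in range(max_trials):
        if succeeded(criterion):
            # Reinforce and move the criterion a small step closer (rule #3).
            criterion = min(target_inches, criterion + step)
        else:
            # The step was too large, so break it into a smaller one (rule #4).
            criterion = max(1, criterion - step / 2)
        if criterion >= target_inches:
            return True                   # target behavior reached
    return False

print(shape())
```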

Prompting and Fading

What is prompting and fading?

Prompting is the act of helping a behavior to occur. A coach who helps a small child hold a baseball bat is using prompting. This is a useful way to start teaching a behavior. Fading is said to occur when the trainer gradually withdraws the prompt. To teach a proper swing, the baseball coach gradually allows the child to feel more and more of the bat's weight, until the coach is no longer holding it. Eventually the child swings the bat alone.

How is prompting and fading used in dog obedience training?

Prompting and fading is commonly used in dog obedience training. For example, to teach a dog to sit, one gives the command ("Sit") then forces the dog to comply with it by gently sweeping the arm into the dog's back knees from behind, so the dog's back legs buckle gently and its rump goes down to the ground. This forces the dog to sit. Meanwhile one holds the dog's collar so the head stays up. When the dog sits, the trainer praises it or offers it a morsel of food. The upward tug on the collar and the arm behind the back knees are called a prompt because they help or prompt the behavior to occur. The command is a stimulus that eventually functions as an S+: one says "sit" and the dog sits. The procedure of gradually removing the prompt is called fading. The prompt becomes weaker and weaker; it is "faded out." After about 20 repetitions there is no need to touch the back of the dog's legs. The prompt has been "faded away."

How did a city use prompting and fading?

Prompting and fading was used by one city when it switched from signs with the English words "No Parking" to signs with only an international symbol (a circle with a parked car in it and a diagonal line crossing it out). For the first three months, the new signs contained both the international symbol and the English words. Then the words were removed. People hardly noticed the transition to the new signs, because their behavior was transferred smoothly from one controlling stimulus to another.
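Fading amounts to a schedule in which the prompt gets weaker after each successful, reinforced trial. Here is a rough Python sketch of that schedule; the prompt levels and the scripted trial outcomes are made up for illustration and are not a dog-training protocol.

```python
# Rough sketch of prompting and fading: start with a full physical prompt and
# weaken the prompt one notch after each successful, reinforced trial. The
# prompt levels and the scripted trial outcomes are invented for illustration.

PROMPT_LEVELS = ["full physical prompt", "light touch", "gesture only",
                 "verbal command only"]

def fade(trial_results):
    level = 0                                  # begin with the strongest prompt
    for success in trial_results:
        print("Prompt used:", PROMPT_LEVELS[level])
        if success and level < len(PROMPT_LEVELS) - 1:
            level += 1                         # behavior occurred: fade the prompt
        elif not success and level > 0:
            level -= 1                         # behavior failed: restore some help
    return PROMPT_LEVELS[level]

# After enough reinforced repetitions, only the command (the S+) remains.
print("Final prompt:", fade([True, True, False, True, True, True]))
```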

Differential Reinforcement

Differential reinforcement is selective reinforcement of one behavior from among others. Unlike shaping, differential reinforcement is used when a behavior already occurs and has good form (does not need shaping) but tends to get lost among other behaviors. The solution is to single out the desired behavior and reinforce it.

What is differential reinforcement? How is it distinguished from shaping? What is a "response class"?

Differential reinforcement is commonly applied to a group of behaviors. Such a group is labeled a response class. A response class is a set of behaviors (a category of operants) singled out for reinforcement while other behaviors are ignored or (if necessary) punished. For example, if one was working in a day care center for children, one might reinforce any sort of cooperative play, while discouraging any fighting. In the case of preschoolers at a day care center, the concept of cooperative play could be explained to them in simple terms. The "cooperative play" behaviors would form a group singled out for reinforcement. Children observed to engage in cooperative play would then be reinforced in some way that worked, for example, given praise or a star on a chart or more time to do what they wanted. In the behavioral laboratory, the only limitation on the response class is that the organism being reinforced must be able to discriminate it.

How did Pryor reinforce creative behavior?

Karen Pryor is a porpoise trainer who became famous when she discovered that porpoises could discriminate the response class of novel behavior. Pryor reinforced two porpoises at the Sea Life Park in Hawaii any time the animals did something new. In other words, Pryor set up a contingency whereby the porpoise got fish only for performing novel (new) behaviors. The response class, in this case, was any behavior the animal had never before performed.

How did the porpoises' natural reaction to extinction help Pryor?

At first this amounted to an extinction period. The animals were getting no fish. They tried their old tricks but got no fish. As usual when an extinction period begins, the porpoises showed extinction-induced resurgence: the variety of behavior increased, and the porpoises showed a higher level of activity than normal. Then they tried variations of old tricks. These were reinforced if the porpoise had never done them before. The porpoises caught on to the fact that they were being encouraged to do new and different things. The animals also emitted four new behaviors (the corkscrew, back flip, tailwave, and inverted leap) never before performed spontaneously by porpoises. One porpoise "jumped from the water, skidded across 6 ft of wet pavement," and tapped the trainer on the ankle with its rostrum or snout, "a truly bizarre act for an entirely aquatic animal" (Pryor, Haag, & O'Reilly, 1969).

DRO and DRL

A special form of differential reinforcement is differential reinforcement of other behavior, abbreviated DRO. "Other behavior" basically means any behaviors except the one you want to get rid of. DRO is technically defined as "delivery of a reinforcer if a specified time elapses without a designated response." In other words, the animal can do whatever it wants, and as long as it does not do a particular behavior for a certain period of time, it receives a reinforcer.
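The technical definition of DRO maps directly onto a timer that resets whenever the designated response occurs and pays off whenever the interval runs out response-free. The Python sketch below illustrates that contingency; the response times, session length, and 15-second interval are invented for the example.

```python
# Sketch of a DRO contingency: a reinforcer is delivered each time `interval`
# seconds pass without the designated response; any response resets the clock.
# The response times, session length, and interval are hypothetical.

def dro_schedule(response_times, session_length, interval):
    """Return the times at which a reinforcer is earned under DRO."""
    reinforcers = []
    timer_start = 0.0
    responses = sorted(response_times)
    while timer_start + interval <= session_length:
        due = timer_start + interval
        reset = next((r for r in responses if timer_start < r <= due), None)
        if reset is None:
            reinforcers.append(due)       # interval passed response-free: reinforce
            timer_start = due
        else:
            timer_start = reset           # response occurred: restart the interval
    return reinforcers

# Designated response (e.g., a complaint) at 12 s and 31 s in a 60-second
# session, reinforcing every 15 response-free seconds.
print(dro_schedule([12, 31], session_length=60, interval=15))   # e.g. [27, 46]
```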

What is DRO? What are situations in which DRO might be useful?

DRO is used to eliminate a behavior without punishment. DRO involves extinction of the problem behavior: you cut off reinforcements to the behavior you want to get rid of (extinction) and you reinforce any other behavior (DRO). Suppose you have a roommate who complains constantly about poor health. You could say, "Stop talking about your health," but that would be rude. So how can you encourage your roommate to stop talking about health? One approach is to use DRO. If the roommate spends a minute without talking about health, you pay attention and act friendly. If the conversation turns to aches and pains, you stop talking. Eventually, if the procedure works, your roommate will stop talking about health problems.

How can DRO supplement or replace punishment?

Whenever punishment is used, DRO should be used as well. For example, if you feel you must discipline a child, you should not merely punish the wrong responses; you should positively reinforce correct responses. Most of the time, you can achieve what you want through positive reinforcement alone, without any punishment.

What is DRL?

Another variation of differential reinforcement is DRL, or differential reinforcement of a low rate of behavior. DRL occurs when you reinforce slow or infrequent responses. Psychologists were initially surprised that such a thing as DRL could exist. After all, reinforcement is defined as increasing the rate of behavior. However, many animals can learn a contingency in which responding slowly produces reinforcement. A student reports using DRL to deal with a roommate problem:

My experience with my roommate is an example of DRL. My roommate is a wonderful person, but she talks too much. She talks constantly. A simple yes or no question receives a "15 minute lecture" answer. After the psychology lecture on differential reinforcement for a low rate of behavior, I decided to try this method. When I asked her a simple question and received a lengthy answer, I simply ignored her or left the room. When she gave a simple reply, I tried to seem interested and even discussed her answer. Now my roommate talks less and I don't get as aggravated with her. [Author's files]
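DRL can be stated just as compactly: reinforce a response only if enough time has passed since the previous response, so that spaced, infrequent responding is what pays off. The sketch below illustrates the rule; the timestamps and the 60-second spacing requirement are invented for the example.

```python
# Sketch of a DRL contingency: a response is reinforced only if at least
# `min_gap` seconds have passed since the previous response, so slow or
# infrequent responding pays off. The timestamps below are hypothetical.

def drl_schedule(response_times, min_gap):
    reinforced = []
    last_response = None
    for t in sorted(response_times):
        if last_response is None or (t - last_response) >= min_gap:
            reinforced.append(t)          # spaced response: deliver reinforcer
        last_response = t                 # every response restarts the spacing clock
    return reinforced

# Responses at these times (in seconds); only responses spaced 60 s or more
# from the previous one earn reinforcement.
print(drl_schedule([10, 30, 100, 130, 200], min_gap=60))   # [10, 100, 200]
```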

Using Negative Reinforcement

So far we have considered variations on the theme of positive reinforcement. Negative reinforcement also has many uses. One is to increase productivity in industry. Most people will work extra hard if they can get some time off from work as a reward. This is negative reinforcement because the level of work is increased (the behavior becomes more frequent) as the result of having an aversive stimulus removed (time off from work). Sometimes this works too well! Here is a story told by a guest lecturer in a Behavior Modification class.

How does the story about the automatic transmission assembly line illustrate the potential power of negative reinforcement?

A major automobile manufacturer asked some behavioral psychologists to help solve a morale and production problem at an automobile assembly plant in Ypsilanti, Michigan. Workers on one of the lines were going too slow, holding up the entire plant. They were supposed to manufacture 60 transmissions per hour, but they seldom achieved this objective. When managers urged them to speed up, they complained about being exploited, because (as they saw it) they were being asked to produce more work for the same salary. Bad feelings existed all around.

The psychologists suggested a contingency. They conceived of this as a negative reinforcement contingency, and they arrived at the idea when they realized there was no positive reinforcer available, because the management would not allow any extra salary incentives. OK, said the psychologists, if the workers hate being "pushed" all the time, then we will use a negative reinforcer: we will let them take time off (and avoid being pushed) when they meet their quota. If the workers produced their quota of 60 transmissions before the end of the hour, they could take a break until the end of the hour.

The program worked like magic... until the managers refused to let it continue. Productivity leaped. At first, 60 transmissions were produced in 50 minutes, leaving 10 minutes for a break every hour. The line began to manufacture 60 transmissions in 45 minutes, then 60 transmissions in 40 minutes. The supervisors felt strange seeing the workers "goof off" every hour, but at least the line was finally meeting its quota. Then a funny thing happened. Workers on the other assembly lines started asking for a similar system. "Hey, how come those guys are getting a 20 minute break every hour?" The plant managers had not expected this to happen, and they had no plans for dealing with it. Soon workers on the other lines were grumbling. Nobody expected the "problem" that occurred next.

The managers boosted the production quota to 80 transmissions per hour. The workers grumbled. They felt abused, but given the alternative of going back to the old plan with no hourly break, they accepted the new quota. At first, producing 80 transmissions took almost the whole hour. But soon 80 transmissions took only 50 minutes, and then only 45 minutes, and the workers were back to taking a 15 minute break every hour. At this point, the plant managers decided the experiment had to end. It was unacceptable to have people taking a break every hour. The supervisors were disgusted. "We knew it wouldn't work," the managers said. The managers sent the behavior modifiers home and went back to the old system. [Author's files]

The assembly line story is an example of negative reinforcement. The reinforcer (time off from work) involved removal of an aversive stimulus, so it was "negative" reinforcement. Like positive reinforcement, it produced a higher frequency of behavior.

Eventually a simplified form of this incentive did find a home in the auto industry, and it even became part of union contracts. In an echo of the above story (which I heard as an undergraduate in the early 1970s), news articles in 1997 reported that workers in a stamping plant at Flint, Michigan were going home after only 4 hours instead of the usual 8. Apparently the union had negotiated a production quota based on the assumption that certain metal-stamping machines were capable of stamping out 5 parts per hour, but actually the machines were capable of 10 per hour. The workers speeded them up to 10 parts per hour and met the quota specified in their union contract within 4 hours. GM decided to eliminate the "go home early" policy, and this was one issue in a 1998 strike against General Motors. In the end, a modified version of the policy with a higher rate of production was re-affirmed in a new contract.

What is reinforced by an hourly wage? By piecework? What is the disadvantage of piecework?

If you think about it, an hourly wage reinforces slow behavior. The less energy a worker puts into the job, the more money the worker receives per unit of energy expended. By contrast, when people are paid according to how much they produce (so-called "piecework" systems) they work very quickly to maximize their gain. The obvious disadvantage is that workers who are rushing to complete their quota might produce a poor quality product, or endanger themselves, unless steps are taken to maintain quality control and safety on the job.

Babies as master behavior modifiers

How are babies "masters of behavior modification"?

Babies are masters of behavior modification who use negative reinforcement to increase the frequency of parenting behaviors in adults. Almost nothing is more aversive to parents than the cry of a baby, and adult humans will do almost anything to eliminate or prevent that stimulus. They will feed a baby, change its diapers, dance around with it, get up in the middle of a deep sleep... all to prevent crying. Non-human babies of all different types practice this form of operant conditioning instinctively. For example, juvenile birds fresh out of the nest will fly from branch to branch, following their parents, making unpleasant noises until fed.

What finding surprised behavior modifiers?

Fortunately for parents, crying (in babies under one year old) is not reinforced by the application of love. Studies showed that babies whose parents responded less to their crying (leaving the baby crying longer or more often) actually suffered more crying in the future, not quietude. This finding actually surprised behavior modifiers when it was first discovered, because it was so counterintuitive. Perhaps this is because crying cannot be completely extinguished. Parents must respond eventually, so parents who respond slowly are essentially using intermittent reinforcement and are teaching their babies persistence.

Using Punishment

What is punishment?

Punishment is the application of a stimulus after a behavior, with the consequence that the behavior becomes less frequent or less likely. Most people assume the stimulus has to be unpleasant (aversive), but that is not always the case. Any stimulus that has the effect of lowering the frequency of a behavior it follows is, by definition, a punisher, even if it does not seem like one.

Electric shock is often the most effective punishing stimulus. Perhaps because electricity is an unnatural stimulus, organisms never become accustomed to it, and they will do almost anything to avoid it, or perhaps it is because shock disrupts the activity of nerve cells. Whatever the reason, electric shock "penetrates" when other punishers fail to work.

Treatment of head-banging

Whaley and Mallott (1971) tell of a nine-year-old, mentally retarded boy who caused himself serious injury by head-banging. Left unrestrained in a padded room, the boy banged his head up to a thousand times in an hour. The boy had to be kept in a straitjacket or padded room to keep him from hurting himself. This prevented normal development; he acted more like a three-year-old than a nine-year-old. Something had to be done.

How did punishment help the child who banged his head?

The researchers decided to try a punishment procedure. They placed shock leads (electrodes) on the boy's leg, strapping them on so he could not remove them. Each time he banged his head, they delivered a mild shock to his leg. In his case, punishment was a very effective technique for eliminating undesirable behavior.

The first time he banged his head and was given a shock, Dickie stopped abruptly and looked about the room in a puzzled manner. He did not bang his head for a full three minutes, and then made three contacts with the floor in quick succession, receiving a mild shock after each one. He again stopped his headbanging activity for three minutes. At the end of that time he made one more contact, received a shock, and did not bang his head for the remainder of the one-hour session. On subsequent sessions, after a shock was given the first time Dickie banged his head, he abandoned this behavior. Soon the head banging had stopped completely and the mat was removed from the room. Later, a successful attempt was made to prevent Dickie from banging his head in other areas of the ward. Currently Dickie no longer needs to be restrained or confined and has not been observed to bang his head since the treatment was terminated.

The psychologist working with Dickie stressed that the shock used was mild and, compared to the harm and possible danger involved in Dickie's head banging, was certainly justified (Whaley & Mallott, 1971).

Twenty years after this technique was developed, it was still being debated. People who regarded any use of electric shock with humans as unacceptable attacked this technique as cruel. In cases like this, one could argue, punishment therapy is justifiable, even merciful, because it is so quick and effective, and it comes immediately after the problem behavior. It worked and spared the child further self-injury, plus it stopped a destructive habit that might have persisted for years if left unchecked.

Treatment of Uncontrollable Sneezing

If you try to identify the common element in problems that respond well to aversive treatment, they often involve a "stuck circuit": a biologically based behavior pattern that, for some reason, is triggered again and again with self-injurious consequences. Consider another case reported by Whaley and Mallott. It involves uncontrollable sneezing in a 17-year-old girl. She sneezed every few minutes for six months, making medical treatment and other interactions difficult. Her parents spent thousands of dollars consulting with medical specialists, but nobody could help. The problem was solved (again) with mild electric shocks.

How does the case history of the sneezing girl illustrate therapeutic use of electric shock?

The shock began as soon as the sneeze was emitted and lasted for half a second after its cessation. Within a few hours the sneezing became less frequent, and six hours later it had stopped completely. For the first time in six months, the girl spent a full night and day without a single sneeze. Two days later she was discharged from the hospital and, with occasional subsequent applications of the shock apparatus, her sneezing remained under control. The total time that the teenager actually received shocks during the entire treatment was less than three minutes. (Whaley & Mallott, 1971)

Punishment from the Environment

Punishment often has negative side effects. Punishment can cause avoidance and emotional disturbance. Animals lose their trust of humans who punish them, and when humans punish animals, the animals often fail to learn because they do not know which specific behavior is being punished. To be effective, a punishment must occur immediately after a behavior, and it need not be injurious. A mother wolf (or lion or tiger) shows effective punishment procedures with its babies. Misbehavior is followed by a quick and largely symbolic act of dominance, such as a swat or pretend bite. The punishment does not cause injury, but it conveys disapproval.

What are some negative side effects of punishment? What typically happens when a human tries to punish a cat?

Dunbar (1988) noted that if a cat owner sees a cat performing a forbidden act such as scratching the furniture, and punishes the cat, the cat merely learns to avoid the human (so the human becomes an S-). Typically the cat will continue to perform the same forbidden act when the human is not present. The cat does not associate punishment with the forbidden behavior. If the human discovers evidence of a cat's forbidden behavior upon coming home and punishes the cat, the cat learns to hide when the human comes home. This does not mean the cat feels "guilt." It means the cat has learned that the human does unpleasant things when first arriving home.

What is "punishment from the environment" and how can it be used to keep cats off the kitchen counter?

If punishment from a human does not work very well, a good alternative is punishment from the environment. It works with all animals, even cats. "A cat will only poke its nose into a candle flame once," Dunbar points out; "a well-designed booby trap usually results in one-trial learning." For example, a cat can be discouraged from jumping on a kitchen counter by arranging cardboard strips that stick out about 6 inches from the counter, weighted down on the counter with empty soda cans. When the cat jumps to the counter it lands on the cardboard. The cans go flying up in the air, and the whole kit and caboodle crashes to the floor. The cat quickly learns to stay off the counter. Meanwhile the cat does not blame this event on humans, so the cat does not avoid humans, just the kitchen counter.

What conditional response bedevils cat owners? How do automatic gadgets help?

Sometimes cats get into the nasty habit of defecating or urinating on a carpet. The behavior occurs when no human is present, and punishment by a human does not deter it, for reasons discussed above (punishment comes too late and the animal fails to connect the punishment with the behavior). For similar reasons, punishment is not usually effective once the problem starts, and it is likely to continue, because even enzyme treatments designed to eliminate the odor do not eliminate all traces of it, and the odor "sets off" the cat in the manner of a conditional response. What to do? The problem is urgent and motivates online buying, so entrepreneurs have responded. Gadgets designed to deter this behavior typically combine a motion sensor with a can of pressurized air or a high-frequency audio alarm. The blast of air (or alarm) is triggered by the presence of the cat in the forbidden area. According to reviews by troubled cat owners at places like amazon.com, these devices sometimes work when all else has failed. They are also a good example of punishment from the environment.

What are several reasons dog trainers recommend against harsh punishment?

Dog trainers also recommend not using harsh punishment. Some dogs will "take it," but some will respond with an active defense reflex that could involve biting, even if the dog is normally friendly. (Terrier breeds are particularly prone to this problem.) Moreover, a usually-friendly dog can surprise a child with a vicious response to being harassed.

Gentle methods are to be preferred with all animals. For the most part, punishment is unnecessary with dogs. Dogs have been bred to desire the approval of humans. They respond very well to positive reinforcement as simple as a word of praise. Trainers who handle wild horses no longer "break" them, the way they did a century ago. Modern horse trainers win horses over with gentle and consistent positive reinforcement. It works just as well and results in a horse that enjoys human company.

How should cat owners respond to unwanted morning awakenings?

Similarly, if cat owners have a kitty that likes to wake them up too early in the morning, the simplest and gentlest approach is negative punishment or response cost. Simply put the kitty out of the room. If that fails, a squirt gun works.

Is electric shock punishment ever justified?

Some people argue against all use of electric shock, as if shock is always inhumane. But electric shocks come in all sizes. Small shocks do not cause physical injury, and they are very effective punishers that discourage a repetition of harmful behavior. Sometimes this is necessary and desirable. When punishment is used with any pet or domesticated animal, it should be as mild as possible.

What is an example of "effective and humane" use of electric shock?

In the case of electric fences used by ranchers, shock is effective and humane. You can touch an electric fence yourself, and although you will get a jolt, you will not be harmed. Even large animals like horses will not test an electric fence more than a few times. Then they avoid it. Avoidance behaviors are self-reinforcing, so large animals will continue to avoid a thin electric fence wire, even if the electricity is turned off. They do not "test" it the way they test non-electric fences (often bending or breaking them in the process). Electric fences also allow farmers and ranchers to avoid using barbed wire, which can injure animals severely.

The Punishment Trap

Ironically, stimuli intended as punishment may sometimes function as reinforcers.

How can you tell when something intended as punishment is functioning as reinforcement?

Observe the frequency of the behavior. If the behavior becomes more frequent, the intended punisher is actually a reinforcer. If a child responds to punishment by doing more of the same bad behavior, most parents will step up the level of punishment. Sometimes this only makes the behavior worse. If so, the parents are caught in the punishment trap.

How can you tell when attempts at punishment are actually reinforcing a behavior? What is the "punishment trap"? What typically happens when children are well behaved?

How could such a pattern occur? Consider these facts. The average parent is very busy, due partly to having children. When children are well behaved, parents tend to ignore them. The parent enjoys peace and quiet when children are being good or playing peacefully. By contrast, when children misbehave, parents must give attention. Parents must break up fights, prevent damage to furniture or walls or pets, and respond to screams or crying. Most children are reinforced by attention. So there you have all the necessary ingredients for the punishment trap. Children learn to misbehave in order to get attention. But some children receive almost no attention unless they are "bad." In such cases, serious behavior problems may be established. One student remembers this from her childhood:

When I was a little girl, I always told lies, even if I did not do anything wrong. But one thing puzzles me. Why would I lie when I knew my dad was going to spank me with a belt? It really hurt. I think the only reason I lied was to get attention, which my parents never gave me. Any attention, even getting hit with a belt, is better than being totally ignored. [Author's files]

How can a stimulus intended as punishment actually function as a reinforcer, in this type of situation?

The answer to this student's question, probably, is that she wanted attention more than she feared pain. Similar dynamics can occur in a school classroom. One of my students told about a teacher in elementary school who wrote the names of "bad" children on the board. Some children deliberately misbehaved in order to see their names on the board. Another student noticed the misbehavior-for-attention pattern while visiting a friend:

I was at my friend's trailer one weekend visiting with her and her small daughter, Jessie. I played with Jessie for a while. Then Dee-Ann and I sat down to talk, leaving Jessie to play with her toys. She played quietly for a while, but all of a sudden she got up, stuck her hand in the potted plant, and threw dirt on the floor. Dee-Ann quickly scolded her and got her to play with her toys again. In a few minutes, Jessie was throwing dirt again. Again Dee-Ann scolded the little girl and got her to play with her toys. Dee-Ann could not understand why Jessie was acting like that. I then remembered the story about the parents hitting the kids for messing with things, but the kids wanting attention and doing it more often. So I thought maybe Jessie was being reinforced for throwing dirt, because each time she threw dirt, Dee-Ann's attention reverted to her. I explained this to Dee-Ann, and the next time Jessie messed with the plant, Dee-Ann simply ignored her. Then she got up, picked up the plant and sat it out of Jessie's reach, and then sat back down and continued with our conversation. That ended the dirt-throwing problems. [Author's files]

Why did little Jessie throw dirt?

Little Jessie probably got lots of loving attention when her mother was not engrossed in conversation with a friend. When that attention was withdrawn, even a scolding was better than being ignored, so the scolding functioned as a reinforcer.

How can a parent avoid the punishment trap?

The solution? It is contained in the title of a book (and video series) called Catch Them Being Good. Parents should go out of their way to give sincere social reinforcement (love, attention, and appreciation) when it is deserved. Don't wait for children to misbehave. When children are playing quietly or working on some worthy project, a parent should admire what they are doing. When they invent a game, a parent should let them demonstrate it. When they are creative, a parent should praise their products. If you are a parent with a child in a grocery store, and you observe other children misbehaving, point this out to your own children and tell them how grateful you are that they know how to behave in public. Point out how "mature" they are, compared to those kids in the next aisle who are yelling and screaming.

Sincere social reinforcement of desirable behavior is a very positive form of differential reinforcement. It encourages a set of behaviors that might be called sweetness, and it reduces the overall level of punishment considerably. A child who loves you and trusts you and looks forward to your support may be genuinely stricken by harsh words. With such a child, the occasional reprimand or angry word is genuinely punishing, which is usually adequate when a child cares about pleasing the parent. A loving parent realizes this and adopts a gentler approach.

What did Tanzer write about in the book titled Your Pet Isn't Sick?

Pets are also capable of something like the punishment trap. They, too, can learn to misbehave or pretend to be ill, in order to get attention from humans. Owners will run over to a pet and comfort it if it makes a funny noise like a cough, or starts limping. Soon the pet would be coughing all the time. One veterinarian saw so many malingering animals trying to get attention by acting ill that he wrote a book called Your Pet Isn't Sick [He Just Wants You to Think So] (Tanzer, 1977). It explained how owners who accidentally reinforced symptoms of illness caused pet problems. Tanzer found that if the animals were not reinforced for the symptoms (after a thorough check to rule out genuine problems) the symptom would go away.

What did one vet call the "single most common problem" he encountered?

Another vet specialized in house calls so he could see a pet misbehave in context. He said unwitting reinforcement of undesired behavior was the single most common problem he encountered. The solution was the same as with many child behavior problems: "catch them being good." Praise the pet and give it lots of love when it acts healthy; ignore it when it starts coughing or limping. Of course, first you have to rule out genuine medical problems. Usually the problem goes away.

Response Cost with a Rabbit

Recall that there are two forms of reinforcement (positive and negative) just as there are two forms of punishment (positive and negative). Negative punishment is more commonly called response cost and consists, quite simply, of removing a reinforcing stimulus. This has the effect of punishing a behavior, making the behavior less frequent.

quite simply. making the behavior less frequent. Recall the discussion of discriminative stimuli. because the rabbit's behavior was punished by removing a reinforcing stimulus. Good luck! This is a fine example of response cost. An S. In this case the reinforcing stimulus was being allowed in the house. he learnt that if he peed. We discovered that if we kept the bed covered with a tarp. I imagine that if Jordy is put outside of the bedroom and denied affection for the rest of the evening he'll learn pretty quickly. How was response cost used with a rabbit? . On the internet. Whenever he did this he would immediately be put back in the hutch outside. Behavior reliably emitted or suppressed in the presence of a particular stimulus is said to be under stimulus control. [A British list member responded:] We have two "outdoor" rabbits that come inside for about an hour a day. after about 10 repetitions. or that punishment may be coming. Eventually. with the exception of his bed habit. but we can't keep washing bedding every day. though not always. He seems to have adjusted quite well. This stimulus was removed. and the problem behavior was eliminated. The older (male) rabbit used to pee on the bed. my husband and I got married and Jordy (we call him Monster) and I moved in with my husband.. Antecedents are things that happen before an event." We haven't had a problem since then. An S+ is a stimulus indicating reinforcement is available. with only a few infrequent accidents. he learned the consequence of his behavior. animals learn to perform a behavior in the presence of an S+ and to suppress it in the presence of an S-. But antecedents of behavior are important. Naturally.22 consists of removing a reinforcing stimulus. several discussion groups cater to rabbit owners. Applied Analysis of Antecedents So far most of our examples of applied behavior analysis have involved changing the consequences of behavior. he wouldn't be able to play with "Mum" and "Dad. This has the effect of punishing a behavior.. After about 10 times of peeing on our bed.Please help us.. But then. and they may control behavior by signaling when a behavior will or will not be followed by a reinforcer.is a signal that reinforcement is not available. we thought we had him pretty well trained. [An American list member writes:] I have a 1 1/2 year old French lop and for his entire 1 1/2 years he has been obsessed with peeing on the bed. too. we want to keep Jordy as happy as possible. Here is an example from one of them in which the solution to a problem involved response cost. Up until about a month ago. .. it would usually deter him from the bed.

How can you manipulate antecedent stimuli to help study more?

Books about studying in college typically advise that students set aside a particular time and place for study. The familiar time and location triggers the studying behavior. That is important with studying because getting started is half the battle. Usually studying is not too painful once one gets started. Problems occur when a person never gets started or procrastinates until there is too much work for the remaining time.

How did B. F. Skinner apply this principle to increase his writing productivity?

B. F. Skinner, whose research on operant conditioning underlies virtually all of the second half of this chapter, used stimulus control to encourage his scholarly work. He followed a rigid daily schedule. At 4 a.m. he got up and ate breakfast, then he wrote for about five hours. Time and the environment of his home office served as discriminative stimuli to get him started on his writing. Around 10 a.m. Skinner took a walk down to campus (Harvard) to meet his morning classes. Walking probably became a stimulus for mulling over his lectures for the day. In the afternoon he attended meetings and scheduled appointments. With this routine he was always able to put in a few good hours of writing every day during his prime time, the early morning, while also scheduling adequate time for his other activities. The powers of daily habit can jumpstart important life activities.

Summary: Applied Behavior Analysis

Applied behavior analysis is the application of principles from operant conditioning to "real life" problems outside the conditioning laboratory. The first step in any behavioral intervention is to specify the behaviors targeted for change. Lindsley's Simplified Precision Model recommends first pinpointing the behavior to be modified, recording the rate of that behavior, changing the consequences of the behavior, and then (if one fails at the first attempt to change behavior) trying again with revised procedures. During baselining, baseline measurements should continue until a stable pattern of behavior is observed.

" Animals can also learn to misbehave or act ill. they are often important in behavior analysis. . When this is done deliberately (for example. B.F. The Premack principle suggests that a preferred behavior can be used to reinforce less likely behaviors. Punishment is effective in certain situations. Dieters are often advised to avoid eating in front of the TV. if it gets them attention. In human child-rearing. Analysis of antecedents can prove helpful in changing behavior. to help people stop smoking) it is called self-monitoring. Prompting and fading is a technique in which a behavior is helped to occur. Skinner used this technique when he set aside a certain time every morning for writing. Time of day can be used as a discriminative stimulus for desirable behaviors such as studying.24 antecedent stimuli should also be observed. parents must beware of the "punishment trap. Electric fences are arguably more humane than alternatives such as barbed wire for horses and other grazing animals. then help is gradually withdrawn or faded out until the organism is performing the desired behavior on its own. respond better to kindness than punishment. Negative reinforcement works wonders when employees are given "time off" as a reinforcer for good work. while ignoring other behaviors. Differential reinforcement is the technique of singling out some behaviors for reinforcement. The solution is to "catch them being good." which occurs when children are ignored until they misbehave. They. so television does not become an S+ for eating. Baseline measurement may itself produce behavior change. Shaping is a technique that employs positive reinforcement to encourage small changes toward a target behavior. too. Babies are master behavior modifiers who use negative reinforcement to encourage nurturing behavior in parents.
