
JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR 2010, 93, 129–139 NUMBER 1 (JANUARY)

DELAYED REINFORCEMENT OF OPERANT BEHAVIOR


KENNON A. LATTAL
WEST VIRGINIA UNIVERSITY

The experimental analysis of delay of reinforcement is considered from the perspective of three
questions that seem basic not only to understanding delay of reinforcement, but, also, by implication,
the contributions of temporal relations between events to operant behavior. The first question is
whether effects of the temporal relation between responses and reinforcers can be isolated from other
features of the environment that often accompany delays, such as stimuli or changes in the temporal
distribution or rate of reinforcement. The second question is that of the effects of delays on operant
behavior. Beyond the common denominator of a temporal separation between reinforcers and the
responses that produce them, delay of reinforcement procedures differ from one another along several
dimensions, making delay effects circumstance dependent. The final question is one of interpreting
delay of reinforcement effects. It centers on the role of the response–reinforcer temporal relation in the
context of other, concurrently operating behavioral processes.
Key words: delay, reinforcement, gradient, temporal contiguity, chained schedule, tandem schedule,
primary principle, derived principle

Thanks to Alicia Roca, Rogelio Escobar, Carlos Cançado, Andrés Garcia Penagos, David Jarmolowicz, Toshikazu Kuroda, and Allison Tetrault for thoughtful discussions and reviews of earlier versions of this article. Address correspondence to the author at Department of Psychology, West Virginia University, Morgantown, WV 26056-6040 (e-mail: klattal@wvu.edu). doi: 10.1901/jeab.2010.93-129

Along with rate, quality, and magnitude, delay has been considered a primary determinant of the effectiveness of a reinforcer (e.g., Catania, 1979; Kimble, 1961). The study of delay of reinforcement in the experimental analysis of behavior is a contemporary manifestation of the long-standing question in the history of ideas, from Aristotle to Hume and on to James, of how the temporal relations between events influence the actions of organisms. Early in the history of experimental psychology, Thorndike (1911) noted that response acquisition is negatively related to the interval between a response and its effect or consequence, and Watson (1917) studied delay of reinforcement experimentally by imposing a period of time between rats’ responses of digging through sawdust and subsequent access to food. Thereafter, a plethora of experiments have examined delay of reinforcement using a variety of methods and from an equal variety of theoretical perspectives. Earlier work was placed in perspective in previous reviews (Renner, 1964; Tarpy & Sawabini, 1974), though the focus of neither of these reviews was on free-operant behavior.

The experimental analysis of delay of reinforcement has resulted not only in an extensive empirical literature, but the results of these analyses have been incorporated into a number of integrative theories of reinforcement, such as delay-reduction theory (e.g., Fantino, 1969), the correlation-based law of effect (Baum, 1973; cf. Williams, 1983), and behavioral economics (e.g., Mazur, 1987). Practically, delay of reinforcement is part and parcel of many human endeavors, as well as the applications of behavioral research (e.g., Hayes & Hayes, 1993; Stromer, McComas, & Rehfeldt, 2000). Perhaps because of its long history of study, rather than in spite of it, delay of reinforcement continues as a fruitful area of research, application, and theory.

Delay of reinforcement is considered herein from the perspective of three questions. The first is one of separating the time period between a reinforcer and the response that produces it from other features of the environment that can, and often do, accompany the introduction of a delay, such as stimuli or changes in the temporal distribution and rate of reinforcement. The second question is that of the effects of delays on operant behavior. Beyond the common denominator of temporal separation, delay of reinforcement procedures differ from one another along several dimensions, making delay effects circumstance dependent. The final question is one
of interpreting delay of reinforcement effects. It revolves around the contribution of the response–reinforcer temporal relation in the context of other, concurrently operating behavioral processes. Before considering these three questions, however, answering a more basic question is in order.

WHAT IS DELAY OF REINFORCEMENT?

Reinforcement is delayed whenever there is a period of time between the response producing the reinforcer and its subsequent delivery. This time period has been arranged in different ways. Skinner (1938) programmed the delay in the presence of the same stimuli that were in effect during the nondelay period, as did Dews (1960) and Azzi, Fix, Keller, and Rocha e Silva (1965). Ferster (1953), and many subsequent researchers (e.g., Chung, 1965; Chung & Herrnstein, 1967; Pierce, Hanford, & Zimmerman, 1972; Richards, 1981), correlated the delay interval with a stimulus change. These two procedures define, respectively, unsignaled and signaled delays of reinforcement. With both, there are two components: one before the reinforced response, and the other between it and the reinforcer. Thus, in the conventional schedule nomenclature of Ferster and Skinner (1957), an unsignaled delay of reinforcement may be categorized as a type of tandem schedule and a signaled delay of reinforcement as a type of chained schedule of reinforcement.

Other features of the delay are important in its discussion. When delays have been accompanied by a distinct stimulus, the stimulus most often is a blackout, but changes in both visual stimuli (e.g., key color lights, lever retraction) and auditory stimuli also have been investigated (Ferster, 1953; Pierce et al., 1972; Lattal, 1987). Delays also can be nonresetting or resetting. In the former, once the delay interval is initiated by a response, further responses have no effect. A resetting delay is one in which each response during the delay returns the delay to its initial value. Resetting and nonresetting delays are most often combined with unsignaled delays, because the stimuli present during a signaled delay usually control the behavior during the delay (Lattal, 1987). The delay duration also can be either fixed, that is, the same each time it is effected, or variable, such that it changes before successive reinforcers. Resetting delays typically are fixed, though in principle they need not be. A variable delay can be arranged such that the values are selected and then set for the duration of each specific delay by using a variable-time (VT) schedule (e.g., Cicerone, 1976). With a nonresetting unsignaled delay, the actual delay preceding reinforcers also will vary as a function of when responding occurs during the delay. In the latter case, the nominal delay value defines the upper limit of the delay, with actual or obtained delays typically being less than the programmed delay value (cf. Dews, 1981; Sizemore & Lattal, 1977, 1978).

ISOLATING DELAYS FROM OTHER CONCURRENTLY OPERATING VARIABLES

The first question posed in the introduction is, in essence, whether there is a delay of reinforcement effect. Imposing either signaled or unsignaled delays of reinforcement can simultaneously alter other features of the environment, which in turn can contribute to the behavioral changes nominally attributable to imposing the delay. This is most obvious in signaled delays, where there is an immediate, response-produced stimulus change that can have conditioned reinforcing, eliciting, or overshadowing effects on operant responding, independently of the effects of the delay.

Breaking the chains of conditioned reinforcement, or these other stimulus functions, is easily accomplished by eliminating the signal. When unsignaled delay value is varied, a delay of reinforcement gradient is observed (Sizemore & Lattal, 1978) that qualitatively resembles the gradient obtained with signaled delays (Richards, 1981). The unsignaled delay gradient, based on obtained delay values, however, is characterized by lower response rates and a steeper slope than the gradient obtained with otherwise equivalent signaled delays (Richards; Sizemore, 1976).
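
To make the distinctions above concrete, the following is a minimal sketch (not from the original article) of how nominal and obtained delays might be computed for the unsignaled delay arrangements just described. The function, the response times, and the 20-s value are hypothetical illustrations, not any laboratory's actual control code.

```python
def obtained_delay(response_times, nominal_delay_s, resetting=False):
    """Food time and obtained delay for one unsignaled delay interval.

    response_times: times (s) of responses after the schedule has set up a
    reinforcer; the first response starts the delay. Nonresetting: later
    responses are ignored, so the obtained delay (food time minus the last
    pre-food response) can be shorter than the nominal delay. Resetting:
    each response restarts the interval, so obtained equals nominal.
    """
    food_time = response_times[0] + nominal_delay_s
    for t in response_times[1:]:
        if t >= food_time:
            break
        if resetting:
            food_time = t + nominal_delay_s  # response during the delay restarts it
    last_response = max(t for t in response_times if t < food_time)
    return food_time, food_time - last_response

# Hypothetical response times (s) within one interreinforcer interval.
responses = [0.0, 3.2, 7.5, 12.1, 19.0, 26.4]
print(obtained_delay(responses, 20.0, resetting=False))  # (20.0, 1.0): obtained < nominal
print(obtained_delay(responses, 20.0, resetting=True))   # (46.4, 20.0): obtained = nominal
```

The nonresetting case is why, as noted above, the nominal value only sets an upper limit on the delay, and why gradients for unsignaled delays are often described in terms of obtained rather than programmed values.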

Even with unsignaled delays, other potential problems remain. Consider a typical manipulation for imposing an unsignaled delay of reinforcement on responding maintained by a variable-interval (VI) 60-s schedule of immediate reinforcement. When a delay of, for example, 20 s is imposed, the schedule is converted to a tandem VI 60-s fixed-time (FT) 20-s schedule. One problem is that the structure of the schedule changes such that reinforcers that previously could occur within less than a second of one another now are always separated by 20 additional seconds. This lengthens postreinforcement pausing, which in turn can be reflected as lower overall response rates, the same effect that might be expected with a 20-s delay. Another problem is that the rate of reinforcement is reduced from an average of one per 60 s to one per 80 s, also potentially reducing response rate independently of the delay effects.

Different solutions to these two problems have been employed. Chung (1965; see also Lattal, 1984) arranged a concurrent VI 1-min VI 1-min schedule in which responding on one of two keys produced scheduled reinforcers after a delay signaled by a blackout. Pecks on the second response key yielded reinforcers immediately after they were programmed. At the same time, according to a VI 1-min schedule, pecks on this same key also produced blackouts of identical duration to those of the signaled delay on the other key. These blackouts were not followed by reinforcement. Thus, reinforcement rates during the immediate and delayed reinforcement conditions were the same. At delay values greater than 1 s, relative response rates were lower on the key correlated with the delay of reinforcement. The procedure, however, converts the schedule on the nondelay key to a multiple VI extinction schedule, with effects on responding on both that key and on the key associated with the delayed reinforcement. Lattal (1984) found that adding such a blackout to a VI schedule often increased response rates, instances of behavioral contrast (Catania, 1961; Reynolds, 1961). Sizemore and Lattal (1977, 1978) used as the immediate reinforcement baseline a tandem VT FI schedule in which the values of the VT and FI schedules were equivalent to the values of the subsequent unsignaled delay condition (i.e., a tandem VI FT schedule). Thus, the distribution and rate of reinforcement remain unchanged from the immediate reinforcement baseline condition. This procedure works well for assessing the effects of a single delay value; however, when constructing a delay gradient by comparing different delay values, reinforcement rate still varies as the delays are varied. As a result, variations in reinforcement rate seem an inevitable confound of introducing delays in this manner.

Different approaches to assessing the role of delay relative to changes in rate of reinforcement were used by Shull, Spear, and Bryson (1981, Experiment 2; cf. also Moore, 1979) and Weil (1984). Shull et al. arranged for pigeons’ key pecks during the initial link of a chained schedule to produce a terminal link composed of a fixed-duration reinforcement cycle, accompanied by a key color change. During this cycle three food presentations occurred. The first occurred at a fixed time after the choice response was made, as did the third, which was near the end of the reinforcement cycle. Because the time of the second reinforcer was varied across different conditions of the experiment, it was possible to examine the effects of different delays between second-reinforcer onset and the initial-link response while holding overall reinforcement rate constant. Responding varied as a function of these delays.

Weil (1984) employed a related technique to eliminate the potential confound between reinforcement rate changes and delay durations. He used a schedule derived from the work described by Schoenfeld and Cole (1972). Under this arrangement, a repeating time cycle, T, is composed of two time periods. During tD the first response results in reinforcement at the end of the period. During tΔ, responses have no consequence. By holding T constant and varying the placement (before or after tD) and duration of tΔ, Weil generated different obtained delays to reinforcement while holding reinforcement rate constant. For 3 of 4 pigeons, response rates were a decreasing monotonic function of increases in the obtained (as opposed to programmed, which was unrelated to response rate) delay value. This led Weil to conclude that "both obtained delay and reinforcer frequency appear to contribute to response rate, but obtained delay does so to a much greater degree" (p. 154).
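
As a worked illustration of the rate confound that opened this section and of the Sizemore and Lattal style of control for it, the arithmetic below is a sketch only; the function name is invented, and the specific values (VI 60 s, 20-s delay) simply follow the example in the text.

```python
def mean_interreinforcer_interval_s(scheduled_interval_s, appended_delay_s=0.0):
    # Average time between reinforcers when a fixed, response-produced delay
    # is appended to an interval schedule of the stated mean value.
    return scheduled_interval_s + appended_delay_s

# Imposing an unsignaled 20-s delay on a VI 60-s schedule (a tandem VI 60-s
# FT 20-s schedule) lengthens the average interreinforcer interval from
# 60 s to 80 s: one reinforcer per 60 s becomes one per 80 s.
vi_baseline = mean_interreinforcer_interval_s(60.0)        # 60.0
tandem_vi_ft = mean_interreinforcer_interval_s(60.0, 20.0)  # 80.0

# A control of the Sizemore and Lattal (1977, 1978) type, using the same
# hypothetical values: the immediate-reinforcement baseline is itself a
# tandem VT 60-s FI 20-s schedule, so baseline and delay conditions share
# the same 80-s average interval and only the response-reinforcer relation
# differs.
tandem_vt_fi_baseline = mean_interreinforcer_interval_s(60.0, 20.0)  # 80.0

print(vi_baseline, tandem_vi_ft, tandem_vt_fi_baseline)  # 60.0 80.0 80.0
```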

Another potential confounding variable when interpreting delay of reinforcement effects occurs when delays are unsignaled. The response is free to occur during unsignaled nonresetting delays, making the obtained delays less than the nominal delays. Dews (1981, p. 216) suggested that results from such a procedure were difficult to interpret because they involve variable delays of reinforcement and the obtained delays are likely to be of brief duration. By implication, this latter point would mean that they do not generate a sufficient range of delay values to allow a functional relation to be established between responding and delay value. Unsignaled delays, however, do yield orderly delay gradients, based on either nominal or averaged obtained delays (Sizemore & Lattal, 1978). Nonresetting delays are indeed variable, rather than fixed. The available evidence, which is not extensive, suggests that fixed and variable delays have different behavioral effects (e.g., Cicerone, 1976; Logan, 1960). When delays are variable, mean delay values also may affect responding differently as a function of the distribution of those delays (e.g., as in the different effects on VI responding of an arithmetic or a constant probability distribution of interreinforcer intervals, cf. Catania & Reynolds, 1968).

For assessing the behavioral effects of delays of reinforcement, Dews (1981) favored resetting delays over either signaled delays in which the opportunity to respond was removed (cf. Pierce et al., 1972), or nonresetting delays. Although the resetting delay keeps the obtained delay value constant, it creates other problems. One is that reinforcement rates are determined by whether responses occur during the delay. This becomes a problem particularly with longer delays, because some responding during the unsignaled delay typically occurs and each response reduces reinforcement rate, which in turn can reduce response rates independently of the effects of the delay.

The response that produces the reinforcer initiates the delay, at the end of which the reinforcer is provided independently of further responding. In the case of unsignaled delays, this procedure raises the question of whether simply providing the established reinforcers independently of responding would have similar effects on behavior: that is, does the dependency matter in formulating delays of reinforcement? The effects of unsignaled delayed reinforcement therefore have been compared to those of response-independent reinforcers occurring at the same rate and with the same temporal distribution as the delayed reinforcers. The effect depends critically on the duration of the delay. If the programmed delays are relatively brief (e.g., 3 s), delayed reinforcement maintains more responding than does a similar schedule of response-independent food delivery (e.g., Catania & Keller, 1981; Sizemore & Lattal, 1977; Williams, 1976). If the programmed delays are relatively long (30 s), then the differences between the two conditions are more equivocal (Gleeson & Lattal, 1987). This latter finding, however, simply underlines the fact that at some point delays become sufficiently long that their effects are indistinguishable from those resulting from an absence of a response–reinforcer dependency.
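
A minimal sketch (not taken from any of the studies cited) of the comparison just described: reinforcers produced by responding after an unsignaled nonresetting delay versus a yoked, response-independent arrangement that delivers food at the same times. All times, values, and function names are hypothetical.

```python
def food_times_with_delay(response_times, setup_times, delay_s):
    # Response-dependent condition: when the interval schedule sets up a
    # reinforcer (setup time), the next response starts an unsignaled
    # nonresetting delay and food follows delay_s later.
    food = []
    for setup in setup_times:
        trigger = next((t for t in response_times if t >= setup), None)
        if trigger is not None:
            food.append(trigger + delay_s)
    return food

# Hypothetical response times and interval-schedule setup times (s).
responses = [2.1, 15.4, 33.0, 61.2, 75.9, 118.7, 140.3, 166.0]
setups = [60.0, 120.0, 160.0]

dependent = food_times_with_delay(responses, setups, delay_s=3.0)
# [64.2, 143.3, 169.0]: each reinforcer follows a response by 3 s.

# Yoked response-independent comparison: food is delivered at the same
# clock times with no response requirement, so rate and temporal
# distribution are matched and only the dependency is removed.
independent = list(dependent)
print(dependent, independent)
```

Comparing behavior under the two conditions asks the question posed above: whether the response–reinforcer dependency itself, and not just the temporal distribution of food, contributes to responding.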

DELAY OF REINFORCEMENT EFFECTS

Given that an effect of delays unconfounded by procedural variables can be established, the second question is: What are the effects? As a function of circumstances, delays of reinforcement can decrease or increase behavior, or leave it unchanged relative to immediate reinforcement. Furthermore, the same delay value may have different effects as a function of other parameters of both the delay and at least some of the maintaining conditions of reinforcement (but cf. Shahan & Lattal, 2005).

A primary consideration concerning how responding will be affected by delays of reinforcement is the baseline on which the delay is imposed. The most common procedure in the experimental analysis of behavior is to impose a delay of reinforcement after obtaining steady-state responding on some schedule with immediate reinforcement. Under these conditions, delays typically reduce response rates, but, even here, reductions are not invariably the outcome. A less common procedure is to impose delays of reinforcement in the absence of a previously established operant response.

Response Differentiation with Delayed Reinforcement in the Absence of Response Training

Skinner (1938) was the first to differentiate operant responses in the absence of immediate reinforcement. He observed that lever-press responding of experimentally naive rats was established when reinforcement followed an unsignaled delay of up to 8 s. Lattal and Gleeson (1990) systematically investigated such acquisition. In their first experiment, they magazine trained experimentally naive, food-deprived rats and pigeons. Then, with some pigeons, tandem fixed-ratio (FR) 1 FT 30-s and, with other pigeons and rats, tandem FR 1 differential-reinforcement-of-other-behavior (DRO) 30-s schedules of reinforcement were implemented. Under the former schedule, used with pigeons, the first response initiated an unsignaled 30-s delay that terminated with food delivery, regardless of whether further responding occurred during the delay interval. The arrangement was similar in the second schedule, except that every response during the delay (30 s for pigeons; 10 s for rats) reset the delay timer, thereby imposing a 10- or 30-s period of no operant responses immediately before food delivery. Without response shaping or any other form of response training, the response (key pecking by pigeons; lever pressing by rats) developed in most of the animals in periods of time ranging from a few minutes to a few hours. Thereafter, the response was maintained. Subsequent research ruled out the control of responding by such variables as simple exposure to the apparatus (Lattal & Gleeson), evocation of the response by food delivery per se (Lattal & Gleeson; Wilkenfield, Nickel, Blakely, & Poling, 1993), and auditory feedback associated with the response (Critchfield & Lattal, 1993; Schlinger & Blakely, 1994). Similar response differentiation with delayed reinforcement also has been reported using different strains of rats (e.g., Anderson & Elcoro, 2007; Hand, Fox, & Reilly, 2006), Siamese fighting fish (Elcoro, da Silva, & Lattal, 2008; Lattal & Metzger, 1994), and humans (Okouchi, 2009).
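
The two acquisition arrangements just described can be summarized as a small event-driven sketch. This is a reconstruction for illustration only, assuming the contingencies exactly as stated above (tandem FR 1 FT: the first press starts a nonresetting unsignaled delay; tandem FR 1 DRO: presses during the delay restart it); the press times and session length are hypothetical.

```python
def food_deliveries(press_times, delay_s, resetting, session_end_s):
    # Times of food deliveries when a lever press starts an unsignaled delay
    # that ends in food. Nonresetting (tandem FR 1 FT): presses during the
    # delay are ignored. Resetting (tandem FR 1 DRO): each press during the
    # delay restarts it, so food follows a delay_s pause in pressing.
    food, food_due = [], None
    for t in sorted(press_times):
        if food_due is not None and t >= food_due:
            food.append(food_due)       # the delay elapsed before this press
            food_due = None
        if food_due is None:
            food_due = t + delay_s      # this press starts a new delay
        elif resetting:
            food_due = t + delay_s      # press during the delay restarts it
    if food_due is not None and food_due <= session_end_s:
        food.append(food_due)
    return food

presses = [5.0, 12.0, 20.0, 70.0, 130.0]   # hypothetical lever presses (s)
print(food_deliveries(presses, 30.0, resetting=False, session_end_s=300))  # [35.0, 100.0, 160.0]
print(food_deliveries(presses, 30.0, resetting=True, session_end_s=300))   # [50.0, 100.0, 160.0]
```

Under the resetting version, each delivery is necessarily preceded by a 30-s period without the operant response, which is the DRO-like feature of the procedure noted above.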

Response Shaping with Delayed Reinforcement

The shaping of responding by the differential reinforcement of successive approximations has been suggested to be most likely with immediate differential reinforcement of such approximations to the target or criterion response (Skinner, 1953). Given the preceding findings of response acquisition in the absence of explicit attempts to train the response, it was of interest to attempt to shape responding with delayed reinforcement. The procedure was identical to a conventional shaping procedure, except that the shaping switch was arranged so that when an appropriate approximation to the criterion response occurred, activating the shaping switch initiated a nonresetting unsignaled delay. When reinforcement was either immediate or delayed by 1 s from the to-be-reinforced approximation, shaping of naive pigeons’ key peck responses occurred quickly, completed within a single session of 1 hr or less. When, however, a 10-s delay was initiated by activating the shaping switch, responding was not shaped in 1 pigeon in the course of more than ten 1-hr sessions. Visual observation of the pigeon revealed that during the last training sessions, it continuously oriented to the chamber wall containing the response key and the grain magazine. The lateral distance of its movements appeared to be constrained by the 10-s delay in that when its beak was near the key, the shaping switch was activated. The pigeon then would continue to engage in other behavior during the delay interval, ensuring that there was considerable variability in the responses that occurred just prior to food delivery. The 10-s delay delineated the outer limits of the pigeon’s physical distance from the key. A similar pattern occurred with the 1-s delay, only the distance from the key allowed by this delay before reinforcement was considerably less. Thus, the delay contingency seemed to constrain behavior spatially, but the delay was sufficiently long in the 10-s case that key pecking failed to develop over a time period that was more than adequate to shape key pecking with immediate or even briefly delayed (1-s) reinforcement.

Imposing Delays on Operant Responding Previously Maintained by Immediate Reinforcement

Response differentiation with delayed reinforcement increases operant response rates relative to the baseline condition, which is either zero or near zero. The more common effect associated with delays of reinforcement is response rate reduction; however, under some conditions responding has been observed to be unchanged from immediate reinforcement. Under others, response rates have increased when delays are imposed.

Delays of reinforcement most often are introduced at full value following an immediate reinforcement baseline. Leaving aside for the moment the complexity of interpretation introduced by signaled delays of reinforcement, Ferster (1953) suggested that the effects of signaled delays might be attenuated if the delay is introduced gradually and titrated as a function of the organism’s behavior, a technique used with considerable success subsequently by Terrace (1963) to introduce negative discriminative stimuli during discrimination training. Using this titration technique, Ferster maintained responding of pigeons under VI 60-s schedules with blackout-signaled delays of up to 120 s. His data, however, were limited to a sample of cumulative records illustrating terminal performance, and details of the titrating procedure were not reported. Using a related procedure with baboons as subjects, Ferster and Hammer (1965) reported developing sustained responding by baboons with a 24-hr signaled delay between responding on one of two keys and delivery of reinforcement following a response on a second key when "the delay in reinforcement was increased slowly, paced with the monkeys’ performance" (p. 249). Although responding was sustained, it "differed significantly from [that] usually recorded with immediate reinforcement" (p. 252) and, in a follow-up experiment, introducing an 18-hr delay suddenly as opposed to gradually showed that "performance maintained with long delays to reinforcement does not depend on a gradual, paced increase in the length of the delay" (p. 253).

Imposing brief (around 0.5 s) unsignaled nonresetting delays between a response and the reinforcer often increases, and characteristically changes the structure of, responding relative to that observed with immediate reinforcement. This has been observed when key-peck responses of pigeons were initially maintained by immediate reinforcement on either VI or differential-reinforcement-of-low-rate (DRL) schedules (Arbuckle & Lattal, 1988; Hall, Channell, & Schachtman, 1987; Lattal & Ziegler, 1982; Richards, 1981; Sizemore & Lattal, 1978). Lattal and his colleagues have shown that, regardless of whether response rates increase, there is a consistent increase in the number of short (<0.5 s) interresponse times. Lattal and Ziegler (1982) suggested that, with brief delays to reinforcement, if the first peck of what is to be a series of pecks initiates a delay, this allows the remaining pecks in the burst to occur, thereby making the reinforcer contiguous with a burst of responses. This same effect occurs if the 0.5-s delays are resetting (Lattal & Ziegler). If, however, that same response starts a delay signaled by a blackout, potential bursts are more likely to be truncated and, as a result, neither do IRT distributions change nor do response rates increase as they do with brief unsignaled delays (Lattal & Ziegler; Richards, 1981).

By far, the most typically reported effect of delay of reinforcement imposed following steady-state responding on schedules of immediate reinforcement is a reduction in response rates relative to those observed when reinforcement is immediate. In some cases, however, conclusions about these effects have been compromised because, particularly in some of the early experiments in which different parameters of both the delay and the maintaining conditions of reinforcement were manipulated, appropriate control procedures of the sort discussed in the previous section were not employed. Where the appropriate controls have been incorporated, the effects of the delay depend on parameters of both the delay and the reinforcement schedule. The former parameters include delay duration, whether the delay is fixed or variable (Cicerone, 1976), type of signal (Lattal, 1987), and whether the delay is signaled, unsignaled, or partially signaled (e.g., Richards, 1981; Schaal & Branch, 1988). The latter include the type of schedule (e.g., Gonzales & Newlin, 1976; Kendall & Newby, 1978; Morgan, 1972; Richards, 1981) and parameters of reinforcement (e.g., Mazur, 1987). An exception is the finding of Shahan and Lattal (2005) that unsignaled 3-s delays reduced response rates maintained by different rates of reinforcement proportionally the same.
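
The interresponse-time (IRT) analysis described above for brief delays can be illustrated with a short sketch; the peck times below are invented, and the 0.5-s criterion is the one used in those analyses.

```python
def short_irt_count(peck_times, criterion_s=0.5):
    # IRTs are the intervals between successive pecks; count those shorter
    # than the criterion, along with the total number of IRTs.
    irts = [b - a for a, b in zip(peck_times, peck_times[1:])]
    return sum(irt < criterion_s for irt in irts), len(irts)

# Hypothetical records: an immediate-reinforcement baseline and a condition
# with brief (0.5-s) unsignaled delays, where pecks tend to occur in bursts.
baseline = [0.0, 1.1, 2.4, 3.9, 5.6, 7.1]
brief_delay = [0.0, 0.3, 0.6, 2.2, 2.5, 2.8, 4.7, 5.0]

print(short_irt_count(baseline))     # (0, 5)
print(short_irt_count(brief_delay))  # (6, 7)
```

An increase in the proportion of IRTs shorter than 0.5 s, with or without a change in overall response rate, is the burst-related restructuring of responding that Lattal and Ziegler (1982) reported.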

BEHAVIORAL PROCESSES IN DELAY OF REINFORCEMENT

Historically, theoretical accounts of delay of reinforcement have distinguished between a primary and a secondary or derived gradient of reinforcement delay (see Renner, 1964, for a review). Hull (1943), for example, asserted that the derived delay of reinforcement gradient was mediated by conditioned reinforcement and the primary one was not. Spence (1947) initially treated all delays of reinforcement as involving conditioned reinforcement, including their mediation by proprioceptive stimuli, but later modified his position to distinguish two types of delay of reinforcement, a chaining and nonchaining type, each involving different behavioral mechanisms (Spence, 1956).

These historical themes recur in the experimental analysis of delay of reinforcement. Response–reinforcer temporal contiguity was assigned a primary role in the effectiveness of reinforcement by Skinner (1948, 1953; Ferster & Skinner, 1957) and thereafter by others (e.g., Peele, Casey, & Silberberg, 1984; see also Schneider, 1990). A number of subsequent experiments employed unsignaled delays of reinforcement to examine more systematically the functional relations between rate of response and the disruption of response–reinforcer temporal contiguity, in the form of delay of reinforcement, unencumbered by exteroceptive stimuli accompanying the delay (Sizemore & Lattal, 1977, 1978; Williams, 1976).

Ferster (1953) appealed to mediating behavior during the delay to account for his observations of sustained operant responding under delays of reinforcement up to 120 s, where a blackout or other stimulus change accompanied each delay. Eliminating the stimulus accompanying the delay, and thus a source of immediate conditioned reinforcement for operant responses, still led to interpretations in terms of mediating behavior during the delay. Azzi et al. (1965) observed mediating behavior in the form of "some sort of dipper contact, with a regularity similar to that which typifies ratio responding" (p. 161) during their unsignaled delay periods. A comment by Dews (1960), also studying unsignaled resetting and nonresetting delays of reinforcement, underlines the combined influence of contiguity and mediating behavior: "[p]resumably, some behavior occurring during [the] delay will be fortuitously reinforced" (p. 229).

A primary delay of reinforcement gradient based simply on the temporal relation between the reinforcer and the last operant response that produced it, that is, a "pure" effect of delay, isolated from behavior, seems unachievable, if not implausible. Certainly, with delays there is a structural or procedural temporal separation between operant responses and reinforcers that follow, but this does not necessarily correspond to a functional separation between responding and reinforcement. Delays of reinforcement do not dam the behavior stream. They simply rechannel it. Thus, the delay is not a period of behavioral emptiness through which time passes. Rather, as is suggested by the observations quoted in the preceding paragraph, responding invariably occurs during the delay, accompanied by a stimulus change or not, raising the possibility that such behavior contributes to the delay of reinforcement effect. Baum (1973) more broadly questioned the importance of response–reinforcer temporal contiguity in the maintenance of operant behavior. He proposed that delay of reinforcement has its effects because disrupting response–reinforcer temporal contiguity weakens the correlation between responding and reinforcement (cf. also Williams, 1976).

Regardless of the conceptualization of delays as disrupting correlations or actual contiguity, introducing a delay of reinforcement, by definition, relaxes or loosens the response–reinforcer relation. The response–reinforcer dependency constrains and focuses responding (e.g., Staddon, 1979; Timberlake & Allison, 1974; Zeiler & Buchman, 1979) when reinforcement is immediate. Relaxing this constraint in a schedule of reinforcement allows other forms of behavior to intrude, in a manner similar to the intrusion of nonnative species into an otherwise stable ecosystem. Like nonnative species, these new behavioral forms can either compete with the operant behavior or they can complement or even augment it.

How relaxing but not eliminating the response–reinforcer dependency affects response variability is illustrated by the pigeon’s behavior during the attempt to shape behavior with delayed reinforcement described in the preceding section. A systematic analysis of such an effect was reported by Escobar and Bruner (2007), who studied response differentiation by experimentally naive rats in a chamber containing an array of seven levers. The center lever was associated with a tandem random-time FT x-s schedule, where x varied from 1 to 32 s for different groups. Responses on the other six levers were recorded, but had no other effect. At all delay values, responding was highest on the center lever. Responding on the other levers was more confined to the levers adjacent to the center lever at 0-s delay. Although the effect was variable, with 1–8 s delays responding on levers physically more distant from the operative lever tended to increase. With 16-s and 32-s delays, responding to all of the levers was low to zero.
Both of these experiments suggest that longer delays allow greater behavioral variation, that is, the opportunity for different response forms to intrude and thereby interact with the extant contingencies.

Schaal, Shahan, Kovera, and Reilly (1998) provided an example of how the intrusion of other behavior can compete with the operant response during a delay of reinforcement procedure. In one experiment, pigeons’ key pecking was maintained on a VI schedule. After the interfood interval expired, a peck to a second, food, key was required to activate the food hopper. In different conditions, the food key was or was not illuminated, and a feedback sound was or was not produced by responses on the food key during the interfood interval arranged by the VI schedule. Response rates on the VI key were lower when the food key was illuminated and when the feedback sound was present during the interfood interval. Schaal et al. suggested that their results are analogous to what happens during unsignaled nonresetting delays to reinforcement: Rather than other behavior occurring at the time of reinforcement being adventitiously reinforced, hopper-related observing behavior can be explicitly reinforced by hopper access. This intermittent reinforcement of observing in competition with the operant responses thereby reduces operant response rates. These results also support Azzi et al.’s (1965) informal observation of feeder-directed behavior during and before the delay and related observations during signaled delays by Iversen (1981). Nor are they unique to delay of reinforcement. Nevin (1971), for example, exposed pigeons to two-key concurrent fixed-interval (FI) VI schedules. Negatively accelerated responding developed on the VI key as a function of the elapsed time on the concurrently available FI schedule. That is, the concurrently reinforced FI responding competed with the VI responding, changing the rate and pattern of the latter. Related results were reported by Lattal and Abreu-Rodrigues (1997) when FT schedules operated concomitantly with VI schedules.

Examples of intruding behavior accompanying the introduction of delays of reinforcement complementing or augmenting operant behavior have been reported in several experiments as well. As already noted, Lattal and Ziegler (1982) showed that brief delays of reinforcement reorganized temporal patterns of responding and often even increased response rates relative to those occurring with immediate reinforcement. Furthermore, there is considerable evidence that events occurring in proximity to reinforcement in chained and tandem schedules, of which operant delays of reinforcement are an example, do affect responding earlier in those schedules (cf. Lattal & Crawford-Godbey, 1985; Starin, 1987).

Lejeune, Richelle, and Wearden (2006) observed that mediating behavior often occurs in experimental analyses of temporally controlled responding, but they questioned whether it played a necessary role in such circumstances. A similar conclusion seems appropriate in the case of delay of reinforcement in that not all experiments reporting delay of reinforcement effects report systematic patterns of mediating behavior. The experiments in the preceding paragraphs certainly show that relaxing the response–reinforcer relation affects response variability, which in turn affects response rate. Whether such an increase in response variability in all cases constitutes evidence of mediating behavior seems questionable.

Delays of reinforcement are both imposed by contingencies and impose contingencies. Zeiler (1977) distinguished between direct, programmed variables in reinforcement schedules and indirect ones. The latter result from the interaction of the direct variables with behavior. Delay of reinforcement effects are determined in part by such direct variables as the schedule of reinforcement and parameters of the delay itself. Such effects, however, seem equally determined by the dynamic generated when these direct variables interact with behavior. Given the difficulties of isolating temporal contiguity and the ambiguities of mediating behavior, it seems useful to consider delayed reinforcement of operant behavior in functional terms, that is, in terms of the behavioral dynamics that emerge from the interaction of responding with the formal properties of both the maintaining conditions of reinforcement and the characteristics of the delay itself.

CONCLUSION

Thorndike’s law of effect states that "[o]f several responses made to the same situation, those which are accompanied or closely
followed by satisfaction to the animal will, other things being equal, be more firmly connected to the situation, so that, when it recurs, they will be more likely to recur" (1911, p. 244). Left for future generations was the task of exploring the implications of "closely followed by" for the understanding of reinforcement. The research discussed in this Perspective describes some of the empirical and interpretive fruits of Thorndike’s bequeathal derived from the experimental analysis of behavior.

Separated from structural confounds associated with their imposition, delays of reinforcement have diverse behavioral effects, as a function of the circumstances of their construction and imposition. All delays involve a relaxing of the response–reinforcer temporal relation and hence, in a structural sense at least, disruption of temporal contiguity between the response and reinforcer that follows. Whether this structural change in temporal contiguity translates into functional change is ambiguous. Both correlational and mediational accounts of delay of reinforcement question, in different ways, the primacy of disruptions in temporal contiguity in determining delay of reinforcement effects. The ambiguity about the functional nature of response–reinforcer temporal contiguity, in combination with the varied effects that delays have as a function of the kinds of variables described previously, invites a broader view of delay of reinforcement. To wit, delay of reinforcement seems more usefully viewed as a dynamic behavioral process resulting from the actions of direct and indirect variables on behavior rather than as simply a static parameter of reinforcement.

REFERENCES

Anderson, K. A., & Elcoro, M. (2007). Response acquisition with delayed reinforcement in Lewis and Fischer 344 rats. Behavioural Processes, 74, 311–318.

Arbuckle, J. L., & Lattal, K. A. (1988). Changes in functional response units with briefly delayed reinforcement. Journal of the Experimental Analysis of Behavior, 49, 249–263.

Azzi, R., Fix, D. S. R., Keller, F. S., & Rocha e Silva, M. I. (1965). Exteroceptive control of response under delayed reinforcement. Journal of the Experimental Analysis of Behavior, 7, 159–162.

Baum, W. M. (1973). The correlation-based law of effect. Journal of the Experimental Analysis of Behavior, 20, 137–153.

Catania, A. C. (1961). Behavioral contrast in a multiple and concurrent schedule of reinforcement. Journal of the Experimental Analysis of Behavior, 6, 335–342.

Catania, A. C. (1979). Learning. Englewood Cliffs, NJ: Prentice Hall.

Catania, A. C., & Keller, K. J. (1981). Contingency, contiguity, correlation, and the concept of causation. In M. D. Zeiler & P. Harzem (Eds.), Predictability, correlation, and contiguity (pp. 125–167). Chichester, England: Wiley.

Catania, A. C., & Reynolds, G. S. (1968). A quantitative analysis of the responding maintained by interval schedules of reinforcement. Journal of the Experimental Analysis of Behavior, 11, 327–383.

Chung, S-H. (1965). Effects of delayed reinforcement in a concurrent situation. Journal of the Experimental Analysis of Behavior, 8, 439–444.

Chung, S-H., & Herrnstein, R. J. (1967). Choice and delay of reinforcement. Journal of the Experimental Analysis of Behavior, 10, 67–74.

Cicerone, R. A. (1976). Preference for mixed versus constant delay of reinforcement. Journal of the Experimental Analysis of Behavior, 25, 257–261.

Critchfield, T. S., & Lattal, K. A. (1993). Acquisition of a spatially defined operant with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 59, 373–384.

Dews, P. B. (1960). Free-operant behavior under conditions of delayed reinforcement: I. CRF-type schedules. Journal of the Experimental Analysis of Behavior, 3, 221–234.

Dews, P. B. (1981). Effects of delay of reinforcement on the rate of steady-state responding. In C. M. Bradshaw, E. Szabadi, & C. F. Lowe (Eds.), Quantification of steady-state operant behavior (pp. 215–229). Amsterdam: Elsevier.

Elcoro, M., da Silva, S. P., & Lattal, K. A. (2008). Visual reinforcement in the female Siamese fighting fish Betta splendens. Journal of the Experimental Analysis of Behavior, 90, 53–60.

Escobar, R., & Bruner, C. A. (2007). Response induction during the acquisition and maintenance of lever pressing with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 88, 29–49.

Fantino, E. (1969). Conditioned reinforcement, choice, and the psychological distance to reward. In D. P. Hendry (Ed.), Conditioned reinforcement (pp. 163–191). Homewood, IL: Dorsey.

Ferster, C. B. (1953). Sustained behavior under delayed reinforcement. Journal of Experimental Psychology, 45, 218–224.

Ferster, C. B., & Hammer, C. (1965). Variables determining the effects of delay of reinforcement. Journal of the Experimental Analysis of Behavior, 8, 243–254.

Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

Gleeson, S., & Lattal, K. A. (1987). Response-reinforcer relations and the maintenance of behavior. Journal of the Experimental Analysis of Behavior, 48, 383–393.

Gonzales, F., & Newlin, R. J. (1976). Effects of a delayed-reinforcement procedure on performance under IRT>t schedules. Journal of the Experimental Analysis of Behavior, 26, 221–235.

Hall, G., Channell, S., & Schachtman, T. R. (1987). The instrumental overshadowing effect in pigeons: The role of response bursts. Quarterly Journal of Experimental Psychology, 39B, 173–188.
Hand, D. J., Fox, A. T., & Reilly, M. P. (2006). Response acquisition with delayed reinforcement in a rodent model of attention-deficit/hyperactivity disorder. Behavioral Brain Research, 175, 337–342.

Hayes, S. C., & Hayes, L. J. (1993). Applied implications of current JEAB research on derived relations and delayed reinforcement. Journal of Applied Behavior Analysis, 26, 507–511.

Hull, C. L. (1943). Principles of behavior. New York: Appleton-Century-Crofts.

Iversen, I. H. (1981). Response interactions with signaled delay of reinforcement. Behaviour Analysis Letters, 1, 3–9.

Kendall, S. B., & Newby, W. (1978). Delayed reinforcement of fixed-ratio performance without mediating exteroceptive conditioned reinforcement. Journal of the Experimental Analysis of Behavior, 30, 231–237.

Kimble, G. A. (1961). Hilgard and Marquis’ Conditioning and Learning. New York: Appleton-Century-Crofts.

Lattal, K. A. (1984). Signal functions in delayed reinforcement. Journal of the Experimental Analysis of Behavior, 42, 239–253.

Lattal, K. A. (1987). Considerations in the experimental analysis of reinforcement delay. In M. L. Commons, J. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative studies of operant behavior: The effect of delay and of intervening events on reinforcement value (pp. 107–123). New York: Erlbaum.

Lattal, K. A., & Abreu-Rodrigues, J. (1997). Response-independent events in the behavior stream. Journal of the Experimental Analysis of Behavior, 68, 375–398.

Lattal, K. A., & Crawford-Godbey, C. (1985). Homogeneous chains, heterogeneous chains, and delay of reinforcement. Journal of the Experimental Analysis of Behavior, 44, 337–342.

Lattal, K. A., & Gleeson, S. (1990). Response acquisition with delayed reinforcement. Journal of Experimental Psychology: Animal Behavior Processes, 16, 27–39.

Lattal, K. A., & Metzger, B. A. (1994). Response acquisition by Siamese fighting fish (Betta splendens) with delayed visual reinforcement. Journal of the Experimental Analysis of Behavior, 61, 35–44.

Lattal, K. A., & Ziegler, D. R. (1982). Briefly delayed reinforcement: An interresponse time analysis. Journal of the Experimental Analysis of Behavior, 37, 407–416.

Lejeune, H., Richelle, M., & Wearden, J. H. (2006). About Skinner and time: Behavior-analytic contributions to research on animal timing. Journal of the Experimental Analysis of Behavior, 85, 125–142.

Logan, F. A. (1960). Incentive. New Haven: Yale University Press.

Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. In M. L. Commons, J. E. Mazur, J. A. Nevin, & H. Rachlin (Eds.), Quantitative analysis of behavior: Vol. 5. Effects of delay and of intervening events on reinforcement value. Hillsdale, NJ: Erlbaum.

Moore, J. (1979). Choice and number of reinforcers. Journal of the Experimental Analysis of Behavior, 32, 51–63.

Morgan, M. J. (1972). Fixed-ratio performance under conditions of delayed reinforcement. Journal of the Experimental Analysis of Behavior, 17, 95–98.

Nevin, J. A. (1971). Rates and patterns of responding with concurrent fixed-interval and variable-interval reinforcement. Journal of the Experimental Analysis of Behavior, 16, 241–247.

Okouchi, H. (2009). Response acquisition with humans with delayed reinforcement. Journal of the Experimental Analysis of Behavior, 91, 377–390.

Peele, D. B., Casey, J., & Silberberg, A. (1984). Primacy of interresponse time reinforcement in accounting for rate differences under variable-ratio and variable-interval schedules. Journal of Experimental Psychology: Animal Behavior Processes, 10, 149–167.

Pierce, C. H., Hanford, P. V., & Zimmerman, J. (1972). Effects of different delay of reinforcement procedures on variable-interval responding. Journal of the Experimental Analysis of Behavior, 18, 141–146.

Renner, K. E. (1964). Delay of reinforcement: A historical review. Psychological Review, 61, 341–361.

Reynolds, G. S. (1961). Behavioral contrast. Journal of the Experimental Analysis of Behavior, 4, 57–71.

Richards, R. W. (1981). A comparison of signaled and unsignaled delay of reinforcement. Journal of the Experimental Analysis of Behavior, 35, 145–152.

Schaal, D., & Branch, M. (1988). Responding of pigeons under variable-interval schedules of unsignaled, briefly signaled, and completely signaled delays to reinforcement. Journal of the Experimental Analysis of Behavior, 50, 33–54.

Schaal, D. W., Shahan, T. A., Kovera, C. A., & Reilly, M. P. (1998). Mechanisms underlying the effects of unsignaled delayed reinforcement on key pecking of pigeons under variable-interval schedules. Journal of the Experimental Analysis of Behavior, 69, 103–122.

Schlinger, H. D., & Blakely, E. (1994). The effects of delayed reinforcement and a response-produced auditory stimulus on the acquisition of operant behavior in rats. Psychological Record, 44, 391–409.

Schneider, S. M. (1990). The role of contiguity in free-operant unsignaled delay of positive reinforcement: A brief review. The Psychological Record, 40, 239–257.

Schoenfeld, W. N., & Cole, B. K. (1972). Stimulus schedules: The t-T systems. New York: Harper & Row.

Shahan, T., & Lattal, K. A. (2005). Unsignaled delay of reinforcement, relative time, and resistance to change. Journal of the Experimental Analysis of Behavior, 83, 201–219.

Shull, R. L., Spear, D. J., & Bryson, A. E. (1981). Delay or rate of food delivery as determiners of response rate. Journal of the Experimental Analysis of Behavior, 35, 129–143.

Sizemore, O. J. (1976). The relation of response-reinforcer dependency and contiguity to signalled and unsignalled delay of reinforcement. Unpublished doctoral dissertation, West Virginia University.

Sizemore, O. J., & Lattal, K. A. (1977). Dependency, temporal contiguity, and response-independent reinforcement. Journal of the Experimental Analysis of Behavior, 27, 119–125.

Sizemore, O. J., & Lattal, K. A. (1978). Unsignaled delay of reinforcement in variable-interval schedules. Journal of the Experimental Analysis of Behavior, 30, 169–175.

Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century-Crofts.

Skinner, B. F. (1948). "Superstition" in the pigeon. Journal of Experimental Psychology, 38, 168–172.

Skinner, B. F. (1953). Science and human behavior. New York: MacMillan.

Spence, K. W. (1947). The role of secondary reinforcement in delayed reward learning. Psychological Review, 54, 1–8.

Spence, K. W. (1956). Behavior theory and conditioning. New Haven, CT: Yale University Press.
Staddon, J. E. R. (1979). Operant behavior as adaptation to constraint. Journal of Experimental Psychology: General, 108, 48–67.

Starin, S. (1987). Responding under homogeneous versus heterogeneous chained schedules. Psychological Record, 36, 69–77.

Stromer, R., McComas, J. J., & Rehfeldt, R. A. (2000). Designing interventions that include delayed reinforcement: Implications of recent laboratory research. Journal of Applied Behavior Analysis, 33, 359–371.

Tarpy, R. M., & Sawabini, F. L. (1974). Reinforcement delay: A selective review of the past decade. Psychological Bulletin, 81, 984–987.

Terrace, H. S. (1963). Discrimination learning with and without errors. Journal of the Experimental Analysis of Behavior, 6, 1–27.

Thorndike, E. L. (1911). Animal intelligence. New York: MacMillan.

Timberlake, W., & Allison, J. (1974). Response deprivation: An empirical approach to instrumental performance. Psychological Review, 81, 146–164.

Watson, J. B. (1917). The effect of delayed feeding upon learning. Psychobiology, 1, 51–59.

Weil, J. L. (1984). The effects of delayed reinforcement on free-operant responding. Journal of the Experimental Analysis of Behavior, 41, 143–155.

Wilkenfield, J., Nickel, M., Blakely, E., & Poling, A. (1993). Acquisition of lever-press responding in rats with delayed reinforcement: A comparison of three procedures. Journal of the Experimental Analysis of Behavior, 58, 431–443.

Williams, B. A. (1976). The effects of unsignaled delayed reinforcement. Journal of the Experimental Analysis of Behavior, 26, 441–449.

Williams, B. A. (1983). Revising the principle of reinforcement. Behaviorism, 11, 63–88.

Zeiler, M. D. (1977). Schedules of reinforcement: The controlling variables. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 201–232). New York: Prentice Hall.

Zeiler, M. D., & Buchman, I. B. (1979). Response requirements as constraints on output. Journal of the Experimental Analysis of Behavior, 32, 29–49.

Received: June 27, 2009
Final Acceptance: October 19, 2009
