
The "Response" in Behavior Theory

W. N. SCHOENFELD

Queens College (City University of New York)


and Cornell University Medical College

Abstract: The term "response" is a basic one in behavior theory,
particularly reflex theory, but its definition is not clear. The origin of
the term in the common vocabulary has affected its later extensions in
the analysis of behavior. Some contemporary theorists accept the exist-
ence of two "types" of response, coordinating one with the Pavlovian con-
ditioning procedure, the other with the operant conditioning procedure
of the so-called "contingent" variety. Reservations are expressed here
about such distinctions between response classes and conditioning para-
digms, emphasizing the difficulties that arise from certain conventions
and inadequacies in current definitions and conceptions of "response."
The critical nature of the problem for behavior theory is illustrated once
again by the recent laboratory finding that a familiar and accepted
conditional reflex, that of the "conditioned cardiac CR," can be frac-
tionated into "parts" and is therefore perhaps no longer to be treated as
a single unitary "response."

1. BEHAVIOR SCIENTISTS plucked the word "response" out of the
common lexicon, and gave it several important roles in their theo-
ries, one of which was as a term in the "reflex." Skinner's (1931)
review of the development of the concept of the reflex, whether or
not it satisfies historians of science, seems correctly to stress the
point that "reflex," as opposed to "reflex arc," is a correlational
concept uniting the two terms "stimulus" (S) and "response" (R).
The second of these two terms is the subject of attention here, while
the first, which deserves equal scrutiny, may be reserved for
another time.
Even as part of a reflex, the R term is usually defined by deno-
tation. Textbook writers are given to defining a "response" in terms
of an "effector," and the latter in terms of muscle or gland, so that
a "response" is either a muscle contraction or a glandular secretion.
But rarely, if ever, do they honor such a definition. Instead, they
usually deal with a "response" as a movement of the organism (or
part of the organism, say, a limb), or as a consequence of such a
movement (say, the depression of a lever). As one can readily see,

The new experimental finding reported herein originated in the Behavior
Research Laboratory, F.D.R. V.A. Health Care Facility, Montrose, NY 10548.
Pav. J. Biol. Sci., July-Sept. 1976

however, even such characterizations of "response" are not honored
in our current literature. Rather, the "response" in any study is
taken to be whatever the researcher chooses to measure as his
dependent variable; it is that bit or segment of his organism's be-
havior stream which he elects to record as its behavioral output.
The indifference of his choice, added to the fact that the word "re-
sponse" never quite overcame its origin in the market place, has
resulted in a breadth of meaning and a non-precision that can com-
promise its value to behavior theory. Thus, "response" often breaks
out of the confines of the "reflex," and is a designation given to any
of the manifold behavioral and somatic consequences of experimental
procedures which are not limited to "stimulus" operations. When
this happens, the R term becomes equated merely to "effect" and
is no longer of much use to reflex theory in particular, or to be-
havior theory in general. Such an equation, such inconclusiveness
of meaning, and such reliance in the day-to-day usage by workers
in the field upon a wholly denotational identification of R, must
in the end neutralize the technical service that the term can render
behavior theory. The term becomes too elusive for theory to tol-
erate well, even as its empirical referents become too diverse to
promote stable communication among laboratory workers. Theory,
which aims to unify the facts and observations supplied by its
experimental researchers, will not be content to achieve that aim by
making its terms indefinite. This is all the more true when that in-
definiteness is accompanied by unexpressed conventions as to how
any given term is to be used, and by concealed assumptions re-
garding its properties and its functions. In the case of "response,"
convention made it acceptable to apply the term equally to saliva-
tion, lever-pressing, pupillary contraction, blood pressure and car-
diac rate changes, and any number of other events. Convention
also directed theory to regard R as a punctate event, so that two
or more Rs can not occur at the same time. And it was only con-
vention which endorsed the currently common assumption that all
the variegated events called "response" can be expected to obey
the same behavioral laws and be treatable by the same behavior
theory (Schoenfeld, 1966; 1972).
Pavlov's paradigm for behavior modification (nowadays called
"classical conditioning"), and his conceptualizations of that proce-
dure and its behavioral effects, were in the reflex tradition familiar to
him. It was that same tradition to which Skinner (1938) attempted
to relate his formulation of "operant conditioning," but in the con-
ceptualizations to which Skinner was led the reflex was soon down-
graded in favor of the R term alone. Among most of his followers,
the expression "operant conditioning" became synonymous with

"response conditioning," and they did not hesitate to say so. While
the casualness of Pavlovian researchers about choosing the R for
their theories would eventually have produced criticism like the
present paper, it has been more the developments in the field of
operant conditioning which have forced the need to review the
place of the R term in general behavior theory. These develop-
ments have brought to the fore old problems that now can be more
clearly perceived than they once were. They are getting compla-
cent behavior theorists to rub their eyes for a new look at their
term "response."
Even before these developments, however, the need to review
the status of R in our behavior theories had been indicated by the
variety of "measures" that laboratory workers had come to use as
their R term. Response latency, duration, amplitude, and other
such indices were treated as equally valid, and even interchange-
able, in behavior-theoretical formulations. Several decades ago, a
few theorists became concerned over whether the several presumed
measures of R actually co-varied. Their thought was that such co-
variation, if it could be substantiated, would make defensible the con-
cept of "reflex strength" which Skinner had voiced, since it would
justify the pooling of different "responses" and different response
"measures" into a generalized R term for conditioning theory to
use. But the actual laboratory findings did not reveal any such co-
variation to significant degree, and the concept of "reflex strength"
disappeared from the literature leaving behind its original prob-
lems still unresolved. Moreover, convention has added still other
novel usages for R which rest easily today in the minds of many
operant conditioners but which have only made the original prob-
lems all the worse. Thus, having begun with lever-pressing, oper-
ant theorists have extended R to a range of "responses" like an
author "writing a novel," a couple "getting married," a parent
"helping his child," a researcher "doing an experiment," a social
reformer "designing a society." Propositions once formulated about
lever-pressing by rats have been given parallels in the cases of all
activities, or behavior segments, which may be said to be "reinforce-
able," when that term means only, in some not wholly specified
way, that the organism can be encouraged to repeat the action,
to try again.
But the problems raised by operant conditioning, even from
its beginnings, have not all been so literary. On a more technical
and operational level, some questions of theoretical force are gen-
erated by two of operant conditioning's traditional hallmarks. The
first of these was the use of rate as a response measure; the second
was the paradigmatic practice of making the delivery of a reinforc-

ing stimulus contingent upon the occurrence of a response. These
two features were believed unique to operant conditioning, and
sufficient to distinguish it from Pavlovian conditioning even if other
alleged differences were laid aside. This was a fundamental error
to begin from. It has led to some of the irresolvable dilemmas that
contemporary theorists struggle with either when they persist in
treating Pavlovian and operant conditioning as different "types" of
learning, or when they try to unify all learning within a general
behavior theory. Among these dilemmas are several that are specifi-
cally related to the R term in their theories.
2. Although Skinner sought to derive his "operant conditioning"
from the same reflex considerations that he applied to Pavlov's
procedure, he also believed the two cases were radically different.
One of the differences involved the relation of the reinforcement to
R: in classical conditioning, as he saw it, the delivery of UCS was
not contingent upon whether UCR or CR had occurred, whereas
the essence of the operant procedure was that the "reinforcer" (S r)
was not provided unless the specified R had occurred. To his mind,
there was a correlation between one "type" of R (the respondent)
and its "type" of conditioning (Type S), and between a second
"type" of R (the operant) and its "type" of conditioning (Type R).
These "types" and correlations he saw as exhaustive for condition-
ing theory, but more important to his discussion was the feature
that there was one pair of "types" for which reinforcement-con-
tingency-upon-R was a main raison d'etre, whereas no such con-
tingency was needed to effect the correlation of the other pair
of "types." The alleged nature of the operant as an "emitted" R
required operant conditioning to feature "contingency" of reinforce-
ment. At least, so it seemed at the beginning, and so it was that
"contingency" entered behavior theory (Schoenfeld and Cole, 1972;
Schoenfeld, et al., 1973). It was the midwife at the birth of two
"types" of R, but in another sense it was the reason for the twin-
ning. That there were, indeed, two separate infants was certified
by Schlosberg (1937) and others who had observed that training
(read, conditioning) procedures differed in effectiveness depending
on the R chosen for the experiment.
Nevertheless, the comparison that was being made at the
time respecting contingency in Pavlovian and operant conditioning
was between a single trial in a Pavlovian training session, and a
single Sr delivery in an operant training session. In the former,
UCS or CS could be said to precede the response on that trial; in
the latter, a response singled out by some rule of identification
(Schoenfeld and Cole, 1975) had to occur before Sr was delivered.

But the temporal relations called "before" and "after" are not lim-
ited to single trials, or to any single event in a series of (postulated)
identical response events. In reality, given the whole course of a
conditioning session or experiment, covering many trials and span-
ning many responses, every occurrence of either UCS or Sr pre-
cedes some responses and follows others. A similar neglect to con-
sider the whole course of a training session or experiment affected
the theoretical treatment of "aversive" conditioning. In "avoidance"
responding, for example, whether the training is trial-by-trial or
"free operant," a "successful" response does not permanently avert
the "aversive" stimulus, but only delays it; that is to say, the avoid-
ance training begins with at least one "escape" response (i.e., an
"unsuccessful" avoidance trial), and as training continues "unsuc-
cessful" trials or responses occasionally recur with the result that
the aversive stimulus also recurs but as a delayed one.
It has been noted elsewhere (Schoenfeld and Cole, 1975) that
all "contingent" schedules of reinforcement are really rules for
response identification (that is, for identifying the response-to-be-
reinforced), and only "non-contingent" schedules can be said to be
really schedules of reinforcement. Leaving such considerations
aside, however, the contingency of Sr upon R, which operant con-
ditioning theory emphasizes, involves a temporal relation between
Sr and R which is imposed by the experimenter. In practical
terms, it means that the experimenter stipulates something about R
as a precondition for his delivery of Sr. The "something" can be
any property or feature of R (often, what the apparatus requires
to detect the occurrence of an R at all): a stipulated amplitude, or
duration, or locus, or whatever. Such stipulations, and the fact that
any stipulations of any kind are set by the experimenter, not only
affect the experimental findings, but also raise for operant theory
problems of a rational sort. Among the latter is the question of
what are the dependent and independent variables in such a pro-
cedure (Schoenfeld and Farmer, 1970; Schoenfeld, Cole, et al., 1972;
Schoenfeld, et al., 1973; Schoenfeld and Cole, 1975). When rein-
forcement is made contingent or "response-dependent," the response
feature which is the stipulated requirement for Sr delivery may
be reported as the dependent variable, but it also partakes of the
character of an independent variable; and conversely, the delivery
of Sr may be described as the independent variable, but it also
partakes of the character of a dependent variable. The matter
stands the same in the so-called R "shaping" procedures that are
popular today: these procedures merely broaden the problem to
a succession of different "responses" before that R occurs which the
experimenter had decided to measure explicitly. In any operant

study of the contingent variety, the only "pure" dependent variables


are those R measures that are recorded "incidentally" to the one
stipulated as a requirement for Sr. These incidental R changes, or
changes in Rs other than the experimentally specified one, have of
late been named "schedule effects" and have been treated as re-
sponse by-products of Sr, as a response side-show that is whimsical,
albeit interesting. It should be evident, of course, that all response
changes under a given reinforcement schedule are products of that
schedule. The allegedly incidental ones are simply the unrequired
ones, and for them the reinforcement schedule is of the "non-
contingent" sort. In every operant conditioning experiment, built
upon a response-dependent reinforcement formula, the reinforce-
ments delivered are both contingent and non-contingent at the same
time with respect to different aspects of R, or with respect to differ-
ent Rs. This fact perhaps should have been seen earlier by operant
theorists as foreshadowing the prominent place that non-contingent
reinforcement schedules have today come to assume in laboratory
research, and in the current thinking of behavior theorists.
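The earlier point that "contingent" schedules are at bottom rules for identifying the response-to-be-reinforced (Schoenfeld and Cole, 1975) can be put in modern terms with a small sketch. Everything below is a hypothetical reconstruction for illustration only: the event stream, the response names, and the FR-2 rule are assumptions, not data or procedures from the paper.

```python
# Hypothetical sketch: a "contingent" schedule treated as a rule that
# identifies the response-to-be-reinforced. Names and values are
# illustrative, not from the original experiments.

def fixed_ratio_rule(n):
    """Return a predicate marking every n-th target response as the
    response-to-be-reinforced (an FR-n "contingency")."""
    count = 0
    def identify(event):
        nonlocal count
        count += 1
        return count % n == 0
    return identify

# A toy behavior stream: (time in seconds, response name).
stream = [(0.5, "lever"), (1.2, "lever"), (1.9, "groom"),
          (2.4, "lever"), (3.0, "lever")]

rule = fixed_ratio_rule(2)
reinforced = [(t, r) for (t, r) in stream if r == "lever" and rule((t, r))]
# Sr follows the 2nd and 4th lever presses; with respect to "groom"
# (an unmeasured R), those same deliveries are non-contingent.
```

Note that the rule never schedules reinforcement in time; it only picks out which recorded events qualify, which is the sense in which the text denies that "contingent" schedules are really schedules of reinforcement at all.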
By his stipulations and non-stipulations, the operant condi-
tioner weakens the traditional synonymity between the "response"
and his experimental "dependent variable." Because he feels (for
behavioral reasons that govern his own behavior) that his stipula-
tions make him a partner in his experiment in a way that Pavlov
was not in his own, the operant conditioner also tends to drift away
in his theoretical thinking from the "reflex" with which he began.
Eventually, as said earlier, he may come to conceive of his "oper-
ant" paradigm as being "response conditioning," and not so much
a procedure for training a "conditional reflex."
3. From the start of his effort to formalize a theory of "operant"
behavior and of operant conditioning, Skinner faced the problem
of response measurement. He took his goal to be the extension of
reflex theory to cover behavior which in the higher animals was
lodged in the central nervous system and in striate skeletal muscle.
In the case of man, at least, such behavior was of the sort tradi-
tionally called "voluntary." The struggle of science, as Skinner
saw it, was to subsume such behavior within a philosophy of be-
havioral determinism, and specifically within the behavior-analytic
category of the reflex. En route to this goal, several stratagems
were resorted to, including a distinction between two classes of
response: the respondent, which was described as having been the
main target of Pavlov's work, and the operant, which had been the
main object of Thorndike's work. The operant was seen as an
"emitted" response, not in the sense that it was not reflex, but only

that its antecedent stimulus causes were unknown to the experi-


menter at its first occurrences and not manipulable by him, al-
though eventually he would assume control over its emissions.
From its very first appearance, the operant occurs full-blown, and
the outcome of "conditioning" it (as well as the experimenter's
goal in conditioning it) is to increase its rate of appearance above
its pre-conditioning rate or "operant level" (Schoenfeld, et al.,
1950; Keller and Schoenfeld, 1950). The experimenter can put the
operant he has chosen to observe under known stimulus control by
arranging a correlation between an antecedent stimulus (the so-
called "discriminative stimulus," or SD) and the post-response "re-
inforcing" stimulus. Whether under stimulus control or not, how-
ever, the mark of conditioning for an operant is control over its
rate of appearance. Of course, control over the rate could also be
obtained by a trial-by-trial procedure which forces the response
rate to conform with the trial rate (say, by withdrawing the ma-
nipulandum between trials, and allowing only a fixed number of
responses, perhaps only one, on each trial). But the procedure
regarded as the purest, or the optimal, one for realizing the full
potential of operant behavior is that of the "free operant," in which
R is "free" to occur at its "own" rate under the selected stimulus
conditions and reinforcement schedule. These considerations,
among others, provided a starting point for Skinner's formulation
of the field of operant conditioning.
From the outset, however, the notion of a "free operant," and
the reliance upon a rate measure, created several problems, both
practical and conceptual, for behavior theory. Among these were:
(a) Since the reinforcement for an operant is set experimentally
to follow the response, then once the response has occurred and is
past, what can it mean to say that "its" rate is changed by the re-
inforcement? And when the rate of R is reported by the recording
apparatus as having been raised, where had those responses been
residing when they were still the future ones which the Sr was
going to draw from to make up a new rate? Questions like these
(which had led years before to the charge that Thorndike's formu-
lation of law-of-effect learning was teleological and outside the
domain of respectable science) do not raise any necessary practical
problem, since it is obvious that the rate of something called "R"
in the experiment has indeed been raised, and behavior theory
would certainly wish to deal with that fact. Nevertheless, such
questions, while they pose no block to operant investigations, do
affect the way behavior theory will conceive of, and define, the
"response" event, and the way the practical researcher undertakes

to measure that event. To accept "response" rate as a measure is


to presume that all the responses are equable to one another, since
if every response were unique it would not have a rate. Skinner
(1935) rationalized that presumption early in his work by invoking
the concept of the generic R, with its implication that S r affects
not the response it follows but the whole class, R, of which that
response is a member, and from which the responses are drawn
which yield a rate.* But since those responses, at the moment Sr
is delivered, lie in the future, the generic concept of R offers no
answer to the question of where responses reside before they occur.
What is needed in addition, as Skinner saw, was a generic concept
of the stimulus, and the presumption that a member of the R class
occurs only conjoined with a member of the S class. Future Rs
do not reside anywhere, but occur only following their appropriate
and equatable Ss. Reflex theory offers some possibility of dealing
with such a question, but when operant theory cuts loose from
reflex theory and reduces its scope to only "response conditioning,"
its explanatory power is also reduced. Yet, by the same token that
the generic concepts of R and S salvage the reflex as a basis for
behavior theory, they actually make superfluous the concept of the
"free operant" and the measure of response rate. The rate of R is
again seen to be dependent upon the rate at which S is provided,
and it is the S rate which has to be explained as the source of the
"free" operant's rate. It was this conclusion which led E. R. Guth-
rie to the principle of postremity; that principle has often been

* The several "responses" which are categorized together in the generic
class R are said to belong together because they show lawful co-variation
with the independent variables manipulated during the experimental analysis
of behavior. While a criterion of "lawfulness" for this categorization is accept-
able, it is routine in operant conditioning laboratories that the range of re-
sponses being classed together is likely to encompass responses of different
"topography." Indeed, this conglomeration is sometimes cited as a sign of the
power of the conditioning paradigm, and as a justification for a "lawfulness"
criterion to define a generic class. "Topographic" differences, however, mean
differences in effectors, or in muscle groups. As earlier theorists (e.g., E. R.
Guthrie) pointed out, an R class of this sort is really defined by the outcome
of responding, not by the responding member or effector class. It is the "act,"
and not the effector response, which is being observed. The practice goes
back to Thorndike's "string-pulling" cat, Small's maze with its "correct" turns
and "erroneous" cul entrances, and Skinner's "lever-pressing" rat or "key-
pecking" pigeon: in each case, the effector is ignored in favor of the response's
outcome. But when different effectors are involved, what becomes of the
"reflex"? Can the "operant" responses that operant conditioners and operant
theorists discourse upon be validly described as "reflexes"? There seems to be
a contradiction between the historical reflex and the behavior that an operant
experiment actually measures. Despite Skinner's original wish and intent, the
descent of his operant conditioning from a reflex ancestor is genealogically
questionable.

spoken of as his theory of learning, but it was only his application


of the age-old principle of cause-and-effect to the relation between
Ss and Rs. It is, indeed, the self-same principle that every reflex
theorist adopts, and the one that every natural science rests upon.
Skinner arrived at this point, too, although he never put it in those
terms; the nearest he came to saying it explicitly was in his descrip-
tion of the closed, stereotyped chain of reflexes the rat went through
when pressing the lever and ingesting its pellet. It is a circle of
reflexes,** spanning the delivery of an Sr, that the well-conditioned
subject runs through repeatedly, and in which the delivery of a
"reinforcement" is only one in a sequence of operational stimuli.
It is the rate of turning this circle of reflexes which is measured by
the rate of appearance of any one of the Rs composing it (such as
the lever press itself). From this viewpoint, the "response rate"
treated by operant theory is really "reflex rate," or the rate at which
a single reflex in the circle comes round. Such is one of the implica-
tions of Skinner's analysis of the lever-pressing situation, and it
restores the desired kinship between the reflex and operant condi-
tioning. It would seem also to restore the not altogether unimpor-
tant possibility that Pavlovian and operant conditioning may yet
be unified within one behavior theory which will not treat them as
different "types" of learning.
(b) In actual practice, an operant conditioner waits upon the
"emission" full-blown (as reported by his recording machine) of
the response he wishes to increase in frequency, and keys his de-
liveries of reinforcement to such emissions or appearances. Why,
then, does he speak of his operations as "conditioning?" It is not
a new "response" he is operating on, but an old one and an already
complete one. If he believed that he is conditioning a new reflex,
then his description of his experiment, and his theory, would have
to go on to tell us (again, as Guthrie tried to do) how the reinforce-
ment he is using acts to bring about once again the stimulus condi-
tions that cause R to reappear. When he chooses instead to speak
of conditioning the response, he implies nothing more than the
fact that he will be measuring response rate. An operant condi-

** Kantor might prefer to speak not of a chain of distinguishable reflexes,
but rather of continuous "interbehavioral" functions of stimulus (or better,
energy) inputs and response (or better, energy) outputs. He holds that this
would accord better with the fact that the flow or stream of behavior is obvi-
ously continuous, while a sequence of discontinuous and saltatory behavioral
"units" like the "reflex" is only supposed. A description of behavior by field
equations would implement such a view, but this has yet to be accomplished
(Schoenfeld, 1972; Schoenfeld & Farmer, 1970). Whatever the case, however,
it is the timing of the repetitive circle of behavior from which a rate measure
for any component "response" in it is derived.

tioner may opt to pin his work to a rate measure, but behavior
theory cannot be content with that. Nor can behavior theory long
continue to use the term "conditioning" for both Pavlovian and
operant procedures when the researchers themselves offer no reason
for believing that the same behavioral principles will be found to
hold throughout. Theory will rest content with a common language
for the two only when the cases are shown to be the same on actual
analysis. Workers with the "response" of salivation, and the "re-
sponse" of lever-pressing, are entitled to call both studies of "con-
ditioning" only if these responses are equivalent by explicit criteria
which are relevant to the term "conditioning." Behavior theory, in
the strongest and most rational form it has yet been able to assume,
tells us that reflex criteria are the only ones which confer upon
those workers the right to use "conditioning" for their two labo-
ratory exercises.
(c) Although it is often loosely said that an operant R has its
rate raised by conditioning, operant procedures are designable
which can lower the rate of the R under observation (see Skinner,
1938; Wilson and Keller, 1953). The so-called DRL and DRO
procedures, among others, are of this sort. Accordingly, it is more
accurate to say in general that operant conditioning aims at control
over the R, that is, control over whatever measure of R is chosen.
While this measure is usually that of rate, workers in the field have
successfully controlled other measures of R, such as its latency (in
trial-by-trial procedures), amplitude, site, "topography," and so on.
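As a minimal sketch of a rate-lowering contingency of the DRL sort, the rule can be written over inter-response times. The response times and the 5-second criterion below are hypothetical values chosen for illustration, not data from any cited study.

```python
# Hypothetical DRL (differential reinforcement of low rate) sketch:
# reinforce a response only if enough time has elapsed since the
# previous response, which works to lower the response rate.

def drl_reinforcements(response_times, irt_min):
    """Return the times of reinforced responses: those whose
    inter-response time (IRT) is at least irt_min seconds."""
    reinforced = []
    last = None
    for t in response_times:
        if last is not None and (t - last) >= irt_min:
            reinforced.append(t)
        last = t
    return reinforced

times = [1.0, 3.5, 4.0, 9.0, 9.5, 16.0]  # illustrative response times (s)
reinforced = drl_reinforcements(times, irt_min=5.0)
# Only the responses at 9.0 s and 16.0 s meet the 5 s IRT requirement.
```

The same skeleton covers DRO-like rules by changing the predicate, which is why the text groups them as procedures for "control over whatever measure of R is chosen."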
(d) The procedure of response "shaping," which so often is an
early stage in an operant conditioning experiment, may seem to
contradict the traditional picture of the experimenter patiently
waiting for his selected R to be "emitted" so that he can reinforce it.
On inspection, however, the term "shaping," like so many other
everyday usages of laboratory workers, does not mean what it says.
It seems to tell us that the R is being created piecemeal, is being
put together somehow, and once put together is having its ampli-
tude raised to the threshold of the recording machine so that in-
stances of its occurrence could be recognized. But that is not what
the "shaping" consists of. Rather, when the desired R does appear
in the process of shaping, it appears full-blown. Perhaps it is the
organism's response repertory which should be thought of as being
shaped; or perhaps we ought to imagine that the operant level of
the desired response has been raised. Perhaps it is easy for us to
metaphorize the process as "shaping" because it is on occasion a
slow one, and because labeling the shaping procedures as "selective
reinforcement" and "successive approximations" makes us imagine

that a single object like a painting or piece of sculpture is being


formed (Keller and Schoenfeld, 1950). The metaphor is dramatic,
but can be misleading. In the case of responses which come but
once in the shaping process and are gone, where eventually we
will be dealing (as it is argued) with members of a generic class
R, no individual "response" is being selectively reinforced or suc-
cessively approximated. What is being shaped, if anything, is the
repertory of responses exhibited by the subject: the desired re-
sponse, when it emerges, still does so full-blown so far as the
experimenter's recorder is concerned. Shaping should perhaps be
described as sneaking up upon, or inflating, the operant level of
the desired response. The procedure increases the pre-experimental
rate of the response so that it can be exposed to the planned rein-
forcement schedule or other experimental operations. If R had a
sufficient operant level to begin with, shaping would not be needed.
It is perhaps worth noting, as an aside, that while "shaping" is
generally thought of in connection with operant conditioning only,
a parallel procedure deserving the same metaphorical title can be
designed for the Pavlovian conditioning of a so-called "respond-
ent" R.
(e) In contemporary operant laboratories and theories, the cus-
tomary, if not universal, view of the "response" is that of a punctate
event. This view is not forced upon theory, but rather has come
about because of practical requirements when responses are to be
measured. Simply, it has been easier for behavioral researchers to
regard responses as point events in space and time, and to do what
A. N. Whitehead (1937) noted was characteristic of an earlier age
in other sciences, namely, "to count things, or to note their existence,
instead of measuring them (as a few of the giants, like Galileo, did
on occasion)." In this same sense, the index of response rate is a
response count, and is perhaps not a "measure" at all. But even if
we put such fretful doubts aside, the acceptance of a punctate char-
acter for the response has created a number of problems for both
laboratory workers and theorists. Even when researchers turned
their attention to response variability as a datum, they used the
variability of punctate R measures, such as variability of R latencies
following a cue, or the variability of loci of R occurrences (say,
along an extended manipulandum). And yet, the possibility has
always existed of using an R which has extension spatially and
temporally. That possibility has not as yet been sufficiently ex-
ploited. It was implied early in Skinner's work by his expanded
description of lever-press behavior as a circle of reflexes into which
the well-conditioned organism's behavior becomes stereotyped, and
in which the lever press is only one of the recurring responses.
Pav. J. Biol. Sci.
140 SCHOENFELD July-Sept. 1976
If the turning circle were not seen as composed of "reflex" units, but
if instead observations were made of extended segments of the be-
havior, the recurrence rates of those segments might still be taken
as experimental data but certain restrictive assumptions hitherto
concealed could be exposed, and discarded if a theorist so desired.
Among the latter would be such assumptions as that no more than
one response can occur at the same time, or that every response
must be given the same "weight" when computing the response
rate, or still others (Schoenfeld and Cole, 1975).
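The alternative can be made concrete. The following sketch is purely illustrative (the stream, labels, and numbers are invented, not drawn from any experiment): a behavior stream is represented as labeled, timestamped segments, and the same stream is measured both by the conventional punctate count and by a duration-weighted recurrence rate, with no assumption that segments cannot overlap in time or that every occurrence must carry equal weight.

```python
# Hypothetical sketch: measuring a behavior stream two ways.
# Each episode is (label, start_time, end_time); overlap is permitted.
stream = [
    ("approach", 0.0, 1.2),
    ("press",    1.0, 1.3),   # overlaps "approach" in time
    ("turn",     1.3, 2.8),
    ("press",    3.0, 3.2),
    ("turn",     3.2, 4.6),
    ("press",    4.8, 5.1),
]

def punctate_rate(stream, label, duration):
    """Conventional measure: count episodes as point events per unit time."""
    return sum(1 for (lab, _, _) in stream if lab == label) / duration

def weighted_segment_rate(stream, label, duration):
    """Alternative: weight each recurrence by its temporal extent."""
    total = sum(end - start for (lab, start, end) in stream if lab == label)
    return total / duration  # proportion of the session occupied by the segment

session = 6.0  # seconds of observation
print(punctate_rate(stream, "press", session))          # presses per second
print(weighted_segment_rate(stream, "turn", session))   # time share of "turn"
```

The point of the sketch is only that nothing in the data structure forces the punctate convention; the same stream supports either measure.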
From the beginning of the field, however, the thinking of oper-
ant conditioners, and the character of operant theory, led to a focus
on response rate as the behavioral measure of priority. Rate was
never seriously challenged, and no alternative was ever widely
proposed for general use. The emphasis upon rate helped the "free
operant" lose its tics with the r~,llex, a~d bec~me more and more
an "emitted" rcsp~nse that could be dealt with adequately without
thought to its causative or controlling stimulus. Divested of its
identity as a reflex, the "operant" could be defined as a tally on
the experimenter's response recorder. Since those tallies were
recorded on an all-or-none basis when the recorder's detection
threshold was reached, the rate of occurrence came to hand as a
natural and "given" measure. The rate of other response events
below the recorder's threshold were seldom attended to (Skimaer,
1938; Notterman and Mintz, 1965), and no effort was made to
segment the behavior stream otherwise than into punctate events,
or to deal with segments by other than point measures (Schoen-
feld, 1972; Sdaoenfeld and Farmer, 1970). Though operant con-
ditioners in the laboratory, and operant theorists at their desks,
were not limited perforce to rate as their measure, nor to R as
a punctate event, they usually believed they were, and in practice
acted as if they were.
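The all-or-none character of such recording can likewise be sketched. In this invented illustration (the force values and the threshold are assumptions, not data), a continuous effort record is reduced to a tally each time the detection threshold is crossed upward; everything below the threshold simply disappears from the count, while a summed-effort measure retains it.

```python
# Hypothetical sketch: an all-or-none recorder reduces a continuous
# force record to punctate tallies at threshold crossings.
force = [0.0, 0.2, 0.6, 0.9, 0.4, 0.1, 0.7, 1.1, 0.3, 0.0, 0.8, 0.2]
THRESHOLD = 0.5  # invented detection threshold

def tally(force, threshold):
    """Count upward crossings of the threshold: the 'given' measure."""
    count = 0
    below = True
    for f in force:
        if below and f >= threshold:
            count += 1
            below = False
        elif f < threshold:
            below = True
    return count

def total_effort(force):
    """A sub-threshold-sensitive alternative: summed force, not a count."""
    return sum(force)

print(tally(force, THRESHOLD))   # three "responses" are recorded
print(total_effort(force))       # effort below threshold also contributes
```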
4. Contemporary behavior theorists generally accept the exist-
ence of two "types" of conditioning, the Pavlovian and the operant.
By this acceptance, they are led further to accept, or at least to con-
done, the existence of two types of "response." A dual classification
of response was explicit in Skinner's earliest work, and has remained
explicit in the thinking of most of his followers. The duality is
less evident in the thinking of Pavlovian workers. Some theorists
are given to making what they believe to be an operational dis-
tinction between the Pavlovian and operant conditioning proce-
dures, but that distinction is also used to conceal a distinction
between two types of response.
Volume 11
Number 3 "RESPONSE" IN BEHAVIOR THEORY 141
The alternative would be to dis-
tinguish two types of reflex that are conditionable by the two
separate procedures, but that alternative might require a distinction
to be made as well between the S terms of the reflexes, and most
theorists would be reluctant to consider stimuli as being of two
"types." Howbeit, the assertion of an operational difference between
two conditioning paradigms-aside from the question, which even-
tually will need also to be resolved, of whether the operational
distinction as described is correct-raises the query of just why the
two paradigms were historically needed in the first place. The
often-unexpressed reason resides in a pair of beliefs: first, that each
paradigm, as described, is successful by itself as a conditioning
procedure; and, second, that the distinction between the two condi-
tioned products is to be found in the response effects of the
conditioning procedures, and not in the Ss.
Whichever way a theorist turns today, the definition of "re-
sponse," and the plausibility of separate categories of R, are in-
escapable and critical problems for him. The problems become
particularly acute when he considers the possibility of reducing
the two "types" of conditioning to a single one. In these days, a
theorist is likely to judge the feasibility of that reduction in the
light of several considerations, among which are the following.
(a) Is the division between classes of reflexes, or between any
other "units" of behavior being learned, assignable to differences
in S? As pointed out earlier, reflex theorists are reluctant to dis-
tinguish Ss even if they believe in two types of learning, or in two
types of reflex. Either of the latter two beliefs is really a belief in
two types of "response." Needless to say, these problems do not
affect so strongly, if at all, those learning theorists who do not
lean upon the reflex to begin with, but who rather conceptualize
in terms such as perceptual learning, or meaning and information,
or insight learning, or some emergent properties of stimuli not
physically describable, and so on. But for reflex theorists, at least,
while the S term may for its own reasons call for examination, the
judgment of reducibility of "types" of learning does not depend
upon a new treatment of S. For them, it is the "response" which
must take center stage.
(b) At the beginning of operant theory (Skinner, 1938), it was
argued that the "respondent" and the "operant" were different
classes of R, and not merely that they figured in different reflexes.
Thus, it was said, the one R class was characterized by smooth
muscle and glandular tissue, the other by striate skeletal muscle;
the one by autonomic innervation, the other by central nervous
system innervation; the one could be demonstrated and measured
in single elicitations according to "static" laws, the other obeyed
"dynamic" laws; the one functioned to "prepare" the organism
systemically for some event about to befall, the other operated
upon the environment to produce some consequence therein; the
one was sufficient for the organism to cope with constant and im-
mutable features of the environment and so could, in an efficient
evolution of the species, be structurally "wired into" the organism
as an elicitable and invariant reaction, while the other had to cope
with variable features of the environment, and so had to be learned
on the basis of success in coping. Such exemplary differences be-
tween the alleged classes of R are no longer as persuasive for
theorists as they once were. After all these years, it has become
more evident, and less easily overlooked, that "respondent" prop-
erties can be illustrated with striate muscle, and "operant" proper-
ties with smooth; greater attention is given to the plain old fact that
every part of the nervous system is connected, however indirectly,
with every other part, and that all parts work interdependently; it
is widely surmised that the relative variability and multiplicity
of stimulus controls as between the two alleged classes of R will
determine which, if not both, will exhibit "static" reflex properties
on single elicitations; it is recognized that the.organism which has
been "prepared" by a respondent often does not do anything
further in "operant" fashion after the preparation, while the en-
vironmental consequence produced "operant"-wise is often not
used by the organism; it is accepted that immutability and fixity
of environment are matters of degree; it is known that "success" is
a conventional term not necessarily honored by the learner himself
who will often learn to behave in ways that are by convention
regarded as unsuccessful or maladaptive (notwithstanding Spencer's
argument that "hedonistic" behavior is generally beneceptive, and
thus mediates the survival of individuals and the evolution of
species).
(c) Perhaps the last-ditch defense of the doctrine of two classes
of reflex, and by implication two classes of R, is the alleged opera-
tional-paradigmatic distinction between the two conditioning pro-
cedures, the Pavlovian and the operant. It is said that, in the
former, stimulus pairing antecedent to R is the operation; con-
trasted with that is the presentation in operant conditioning of a
stimulus ("reinforcement") after the R. It is urged that the opera-
tional distinction is plain, and that the existence of two types of
learning must be conceded whatever be the terms in which the
theorist might choose to couch his descriptions of those types. When
Skinner originally coordinated the Pavlovian procedure (called by
him "Type S conditioning") with the respondent, and the Thorn-
Volume II
Number 3
"RESPONSE" IN BEHAVIOR THEORY 143

dikian procedure ("Type R conditioning") with his operant, he


also anticipated the possibility of Type R conditioning of respon-
dents. He was inclined to reject the possibility (though he did
not hesitate to test it, in an experiment which failed), but it has
since become a popular demonstration in the field under various
names like "operant conditioning of autonomic responses," "bio-
feedback conditioning," and so forth. Skinner's argument at the
time was, and seems to have remained, that Pavlovian conditioning
is exhibited by changes in the experimental reflex's "static" prop-
erties, whereas operant conditioning changes the "dynamic"
properties of the operant being conditioned. But the prior essential
question is whether the two conditioning procedures actually differ
operationally as alleged.
Some reasons have already been cited for doubting that the
two procedures are validly distinguishable by the temporal rela-
tions they establish between the stimuli and responses under ob-
servation, or by the supposedly "emitted" and "elicited" character
of an operant and a respondent. The two "types" of procedure are
both abstractions from experimental manipulations performed upon
the behavior stream, and from changes in that stream when we
intrude into it those stimuli which we label "CS" and "UCS" and
"reinforcer." In both cases, the R involved is a segment of the
behavior stream, and to define R requires that one define "seg-
ment" and "behavior" (Schocnfeld and Farmer, 1970). Segment,
of course, iueans whatever stretch of the behavior stream the
experimenter decides to measure; behavior may be defined by
various criteria, according to the worker's choice, and indeed vari-
ous criteria are applied even by operant conditioners and theorists
when they extrapolate their principles from the Skinner box to com-
plex learned behaviors, to social situations, and to whole societies.
The definition of any particular operant R in any given experiment
will therefore depend on such factors as the social conformities
in the definer's thinking, and on what he judges, in an economic
sense, to be the benefits accruing to the organism from the environ-
mental changes produced by its response. These factors obviously
go beyond the properties of the operant under observation, or the
supposed differences between operants and respondents.
Because the definition of R does depend upon such extrinsic
factors, we may properly inquire into the assumptions and conven-
tions, both overt and hidden, which governed the thinking of the
earlier theorists whose work we have inherited and live by today.
Their assumptions and conventions regarding R determined in part
the starting points for their thinking, and informed the foundational
propositions of their theories. For example, while Skinner actually
defined an operant response as a "movement" (just as Thorndike
and Guthrie did), he proceeded to measure it by the rate of ap-
pearance of one component of the movement rather than by the
organism's position in space and time, as described by field equa-
tions, or perhaps as by some energy output functions in the inter-
reinforcement times (Farmer and Schoenfeld, 1964). Pavlov pro-
vides another example in that he might have used a cumulative
response recorder for the drops of saliva which flowed on each
conditioning trial and between trials, instead of adding up all the
drops to give a presumed magnitude measure of R on each trial
as a separate salivary episode. If, faced with such examples, it be
argued that the rate of the salivary R (when that R is taken, as
Pavlov did, to be the total salivation on a given trial) is not a
satisfactory measure of R because it is fixed by the rate at which
trials are given, then it might be noted that the same is true of
operant conditioning. That is to say, if after every operant R the
environmental stimulus situation were changed sufficiently so that
R would not occur in that strange situation, then the rate of R
would depend upon the rate at which the experimenter returned
the environment to the state in which R would occur (which is
reminiscent, of course, of Guthrie's postremity principle, and of
the objective definition which realists like Guthrie and Skinner
offered of "reinforcement," or any other stimulus, as an environ-
mental change). From such considerations it would appear that,
if a "fi'ee operant" is regarded as validly measurable by its rate
when "reinforcement" is not provided (or by the rate at which the
environment is returned to its pre-reinforcement state, that state
which may have been disturbed by the reinforcing stimulus itself),
then a Pavlovian CS may be similarly interpreted as acting to return
the environment to the state it was in when the R was last made.
In either case, the R will be made again only under its original con-
ditions, on the general principle that it takes similar causes (in this
case, the stimulus situation) to have similar effects (in this case,
the R). In short, this line of reasoning leads to the conclusion that
conditioning theory is not forced to postulate two different classes
of R-the respondent and the operant-any more than it does two
different classes of reflex. If that conclusion is correct, a condition-
ing theorist who wishes to work within reflex theory need neither
complicate nor compromise his task by multiplying the categories
of reflex and of response. He may as well start from the premises
that a reflex is a reflex, and a response is a response.
Once it is recognized, or the original belief is reaffirmed, that
operant theory is part of general reflex theory, and once distinctions
are no longer drawn between classes of responses or classes of re-
flexes, then the unification of Pavlovian and operant conditioning
is not a matter of mere option or hope. They must, and will be,
unified; whether they be unified within the historical framework
and vocabulary of the one or the other-and theorists differ in their
preferences-may be a moot point of discussion for some time to
come. Eventually one or the other may be successful, or perhaps
a successful reduction will come through a theory different in some
degree from either. At this time, the point of prior importance is
to restore the connection between operant conditioning and reflex
theory. Nevertheless, it should be recognized that eventually the
unification of the two may require theorists to move beyond reflex
theory as it has come down to us historically. Many of the experi-
mental observations coming from the laboratories of our imagina-
tive contemporary researchers resist interpretation within either
classical Pavlovian or operant terms, and even general reflex theory
finds those observations recalcitrant. Theorists may yet have to
stake out new conceptual guidelines to cope with them. Such a
new structuring of behavior theory may find it easier to subsume
learning theories that have never before felt any kinship with reflex
theory. For the moment, however, the task for general reflex theory
and for any single conditioning theory is to deal as satisfactorily as
possible with the new experimental data. If, as it seems, they
stumble over the definition and measurement even of such of their
primitive terms as "response," t h e y face an immediate crisis. An
experimental finding which ]lighlights all of these problems has
recently been reported in the area of cardiac conditioning. That
finding has exposed a difficulty with the definition of the term
"cardiac conditioned response," and by implication the broader
difficulty of defining "response" in general behavior theory.
5. Cardiac conditioning has lately grown into a field of wide-
spread research. Workers in it are now also extending their interest
to other aspects of the cardiovascular system (particularly among
mammals), but the heart still gets the major attention. Experi-
mental designs have for the most part relied upon the Pavlovian
procedure, using an auditory or visual CS paired with an aversive
UCS (often electric shock), and a CS-UCS interval of some con-
siderable length during which the cardiac CR is observed. This
CR has been measured most, often as a change in cardiac rate
(inter-systolic times, as a rule); indeed, a common way of referring
to the effect of the conditioning procedure is as a "cardiac rate
response," or "cardiac rate CR," or "conditioned cardiac rate
change." The CR in this case is not a punctate one, like a bar press,
but the distribution of "responses" in a segment of behavior over a
significant span of time. Recent experimental demonstrations of
operant conditioning of "autonomically mediated" systems, when
they have involved cardiac function, have used the same index of
conditioning. The "conditioned cardiac response" exhibited as a
change in rate has been accepted by all parties.
The use of the heart as an effector organ, of heart rate change
as a "response," and of the amount of change as a measure of that
"response," all raise questions for a theorist. The questions are
all the more interesting because they have long dwelt just below
the awareness threshold of most workers. They have not been faced
explicitly; instead, the conventional practices of researchers have
been the substitutes for explicit answers. But the questions deserve
to be brought out into the open, and they must be resolved properly
if the theoretical impasses which now impend are to be avoided.
Of those questions, there are some which should have received
attention long before this: for example, if cardiac rate change is
acceptable as the measure of cardiac conditioning, ought not the
pre-conditioning cardiac rate be equated to "operant level," like
that of pre-conditioning lever-pressing rate, and, if so, may not
the cardiac rate changes produced by Pavlovian conditioning be
equated to the rate changes produced in an operant by law-of-effect
conditioning? And if Pavlov had taken each drop of saliva as his
"response" instead of the total flow per trial, might he not then have
used dropping rate as his measure of conditioning; and, conversely,
if Skinner had measured total energy output of his lever-pressing
rats between reinforcements, instead of cumulatively recording
single presses, might not his behavioral functions have looked like
Pavlov's? If Pavlov had used a cumulative recorder for saliva
drops, might not his observed functions in the CS-UCS interval, or
in the inter-trial interval, have been marked by features like the
"FI scallops" so notorious in operant conditioning? When an on-
going and self-paced system like the cardiac one is used (is not
lever-pressing also "c:n-going" at its operant level, and is it not
paced by its determining stimuli albeit not in quite so servo-fashion
as the endogenously stimulated heart?) in a conditioning proce-
dure, is the delivery of a UCS (may that be equatable to an operant
conditioner's "reinforcement?") a Type S or a Type R opel~ltion,
since it must be conceded (even more obviously than in the case
of lever-pressing) that the UCS delivery must precede some heart
beat(s) and follow other(s)? When cardiac rate does change as
a result of conditioning, can we speak of that (as some operant
conditioners do when describing their conditioning operation) as
"conditioning an IRT" (with each beat considered a "response,"
and every intersystolic time an "IRT"), with rate rises, and falls
being the elimination of long and short IRTs, respectively?
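The last of these questions can be given concrete form. In the invented sketch below (the beat timestamps are assumptions, not data), each heart beat is treated as a punctate "response" and each inter-systolic interval as an "IRT"; a conditioned rate rise then appears as nothing more than a shortening of the IRT distribution.

```python
# Hypothetical sketch: heart beats as "responses," inter-systolic
# times as "IRTs." Beat timestamps (in seconds) are invented.
pre_cs    = [0.0, 0.50, 1.00, 1.50, 2.00, 2.50]  # baseline beats
during_cs = [0.0, 0.40, 0.78, 1.14, 1.50, 1.90]  # conditioned acceleration

def irts(beats):
    """Inter-response (inter-systolic) times between successive beats."""
    return [b - a for a, b in zip(beats, beats[1:])]

def rate(beats):
    """Beats per second over the span observed: a response count."""
    return (len(beats) - 1) / (beats[-1] - beats[0])

print(irts(pre_cs))     # uniform 0.5-s IRTs at baseline
print(rate(during_cs))  # the rate rise is the elimination of long IRTs
```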
Some theorists may prefer to argue that questions like these
can be deferred, perhaps indefinitely, since they raise issues of
no practical importance: they say that the issues can be dealt with
empirically, without requiring either intervention by theory, or a
pause in the research work until theory sorts out and reforms its
definitions and premises. But this know-nothing attitude cannot
be so easily assumed when it is discovered that the conventionally
accepted "cardiac rate response" can be broken down into com-
ponent parts, that is to say, that an accepted "response" is a syn-
thesis of elements which are subjectable to separate experimental
controls under which they yield different functions. In the face
of such a discovery, the theorist is by main force led back to the
primitive question of how his "response" term is to be defined, and
how any occurrence of a "response" is to he identified. That, in
turn, leads hir,! back to the practical question of what is to be
measured; and that, in return, to how he (whether he be experi-
menter or theorist) ought to assemble, inter-relate, and talk about
his empirical findings.
Such a discovery has in fact been made in the area of cardiac
conditioning (Schoenfeld, et al., in preparation). The experiment,
in brief, was as follows. In the first experimental stage, a cardiac
CR was established in several rhesus monkeys with the routine
procedure of pairing a visual CS with an electric shock (UCS);
CS came on 10 sec before UCS, and terminated with UCS. After
sufficient pairings of CS and UCS, the commonly observed condi-
tioned cardiac response was of a bi-phasic sort during CS, consist-
ing of an initial acceleration in cardiac rate, a peaking out of rate,
and a deceleration back toward the pretrial rate. In the next experi-
mental stage, two drugs were used, propranolol and atropine. When
the subjects were put under propranolol alone, the CS produced a
rather truncated CR in which the initial acceleratory wing of the
usual bi-phasic function was diminished, while the later decelera-
tory wing was spared; under atropine alone, the acceleratory wing
was spared, while the deceleratory wing was reduced; when the
two drugs were given together, the entire bi-phasic function was
abolished, and no cardiac rate change was seen across the entire
duration of CS. Aside from what these observations may have to
tell us about nervous system control over the cardiac CR, and the
possibilities they suggest for exploring the parameters of the ex-
periment across their full range, the experimental finding clearly
demands that the conventional definition of the "cardiac CR" be
re-examined, and the measurement of that CR as a rate change
be re-thought. The whole hi-phasic rate change function is what
researchers had been calling "the cardiac CR." That CR had never
been considered as having "parts" which could be separately con-
trolled, yet the two drugs in our experiment acted selectively to
eliminate the CR by parts. To speak now of the separate "parts"
of a Pavlovian conditioned "response" would be unusual for be-
havior theory, to say the least-at least as unusual as it would be
in the operant case to speak of the cumulative response curve
drawn by many lever-presses as "one response" having as many
"parts" as there are individual bar presses, or as there are local
variations in slope.†
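A toy model may make the fractionation vivid. All numbers below are invented for illustration and are not the experiment's data: the bi-phasic CR is composed as the sum of an acceleratory and a deceleratory component, and the drug conditions are mimicked simply by deleting one component or both.

```python
# Hypothetical sketch of the bi-phasic cardiac CR as two separable
# "parts." All numbers are invented for illustration only.
seconds = range(10)  # the 10-s CS-UCS interval

def accel(t):
    """Early acceleratory component (diminished under propranolol)."""
    return max(0.0, min(3.0 * t, 6.0 - 1.5 * (t - 2)))  # rises, then fades

def decel(t):
    """Late deceleratory component (reduced under atropine)."""
    return min(0.0, -0.8 * (t - 4))  # pulls rate back below baseline

def cr(t, block_accel=False, block_decel=False):
    """Net conditioned rate change; 'blocking' deletes a component."""
    a = 0.0 if block_accel else accel(t)
    d = 0.0 if block_decel else decel(t)
    return a + d

intact      = [cr(t) for t in seconds]                     # bi-phasic CR
propranolol = [cr(t, block_accel=True) for t in seconds]   # deceleration only
atropine    = [cr(t, block_decel=True) for t in seconds]   # acceleration only
both        = [cr(t, block_accel=True, block_decel=True) for t in seconds]
print(intact)  # rises, peaks, then falls below baseline
print(both)    # flat: no cardiac rate change at all
```

In the model, as in the report, the "response" vanishes by parts: each blocking condition leaves a truncated function, and blocking both leaves no rate change across the CS.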
What can it mean in any case of conditioning to say that a
response has "parts?" Are the parts of an R, if experimentally
separable, different responses? What are the implications of such
an experimental separation for behavior theory, for the definition
of the particular "CR" involved, for the definition of "response"
generally, for the use of rate as a "response" measure, and for a
distinction between Pavlovian and operant "types" of conditioning
which rests partly on the different response measures used by the
two "types?" Is the difference in measures used by the "types" a
matter of convention and history, or a matter of logical and opera-
tional necessity? That a single experimental observation can raise
such critical questions for theory is not unknown in any science, nor
is it in behavior science. This time the crisis centers on the "re-
sponse," but other fundamental terms and. practices of behavior
theory may well be the next targets.
6. It is evident that every science has a stake in the definition
of its terms, and this is as true of behavior science as of any other.
In its continuous efforts to keep its theory abreast of its experi-
mental findings, every science may find that it must return again
to its beginnings, to the terms and definitions with which it started.
That, too, is as true of behavior science as of any other, and the
concept of "response" is such an instance. Once again, behavior
theorists will need to look at that term, and clarify its place and
meaning in their technical vocabulary. The effort will no doubt
cause some pain. It may ramify in as yet unpredictable ways
through the rest of our vocabulary, and through the whole of be-
havior theory as we now have it. But unless the effort is made, and
soon, behavior theory will become more and more distant from
our growing body of empirical behavioral data, and less and less
useful to anyone. That there is already some estrangement is
known to every worker in the field, but the experiment reported
here may help bring the problem more into the open.

† It may be noted that the suggestion of some operant researchers that a
whole cumulative response curve be regarded as a "multi-operant" does not
meet the problems discussed here.

References
Farmer, J. and Schoenfeld, W. N.: Inter-reinforcement times for the bar-pressing response of white rats on two DRL schedules. J. Exp. Anal. Behav. 7:119-122, 1964.
Keller, F. S. and Schoenfeld, W. N.: Principles of Psychology. Appleton-Century-Crofts, New York, 1950.
Notterman, J. M. and Mintz, D. M.: Dynamics of Response. Wiley, New York, 1965.
Schlosberg, H.: The relationship between success and the laws of conditioning. Psychol. Rev. 44:379-394, 1937.
Schoenfeld, W. N.: Some old work for modern behavior theory. Cond. Reflex 1:219-223, 1966.
Schoenfeld, W. N.: Conditioning the whole organism. Cond. Reflex 6:125-128, 1971.
Schoenfeld, W. N.: Problems of modern behavior theory. Cond. Reflex 7:33-65, 1972.
Schoenfeld, W. N., Antonitis, J. J., and Bersh, P. J.: Unconditioned response rate of the white rat in a bar-pressing apparatus. J. Comp. Physiol. Psychol. 43:41-48, 1950.
Schoenfeld, W. N. and Cole, B. K.: Behavioral control by intermittent stimulation. In Reinforcement: Behavioral Analyses. R. M. Gilbert and J. R. Millenson, eds., Academic Press, New York, 1972.
Schoenfeld, W. N., Cole, B. K., et al.: Stimulus Schedules: The t-τ Systems. Harper and Row, New York, 1972.
Schoenfeld, W. N., Cole, B. K., Lang, J., and Mankoff, R.: "Contingency" in behavior theory. In Contemporary Approaches to Conditioning and Learning. F. J. McGuigan and D. B. Lumsden, eds., V. H. Winston, Washington, D. C., 1973.
Schoenfeld, W. N. and Cole, B. K.: What is a "schedule of reinforcement"? Pav. J. Biol. Sci. 10:52-61, 1975.
Schoenfeld, W. N. and Farmer, J.: Reinforcement schedules and the "behavior stream." In The Theory of Reinforcement Schedules. W. N. Schoenfeld, ed., Appleton-Century-Crofts, New York, 1970.
Schoenfeld, W. N., Kadden, R. M., and McMillan, J. C.: Sympathetic and parasympathetic control of the cardiac conditional response in Macaca mulatta. In preparation.
Skinner, B. F.: The generic nature of the concepts of stimulus and response. J. Gen. Psychol. 12:40-65, 1935.
Skinner, B. F.: Behavior of Organisms. Appleton-Century-Crofts, New York, 1938.
Skinner, B. F.: The concept of the reflex in the description of behavior. J. Gen. Psychol. 5:427-458, 1931. See also the added foreword to this paper in B. F. Skinner, Cumulative Record. Appleton-Century-Crofts, New York, 1959, pp. 319-320.
Whitehead, A. N.: Science and the Modern World. Macmillan, New York, 1937.
Wilson, M. P. and Keller, F. S.: On the selective reinforcement of spaced responses. J. Comp. Physiol. Psychol. 46:190-193, 1953.
