
forthcoming in Erkenntnis as part of a special issue on historical epistemology (guest editors: Thomas Sturm & Uljana Feest)

Remembering (Short-Term) Memory. Oscillations of an Epistemic Thing

May 30, 2011

Running Head: Remembering Memory

Uljana Feest
TECHNISCHE UNIVERSITÄT BERLIN
Institut für Philosophie, Literatur-, Wissenschafts- und Technikgeschichte
Straße des 17. Juni 135, Sekr. H72
10623 Berlin, Germany
Phone: +49(0)30-314-79408
feest@mail.tu-berlin.de

Abstract

This paper provides an interpretation of Hans-Jörg Rheinberger's notions of epistemic things and

historical epistemology. I argue that Rheinberger's approach articulates a unique contribution to

current debates about integrated HPS, and I propose some modifications and extensions of this

contribution. Drawing on examples from memory research, I show that Rheinberger is right to

highlight a particular feature of many objects of empirical research ("epistemic things") –

especially in the contexts of exploratory experimentation – namely our lack of knowledge about

them. I argue that this analysis needs to be supplemented with an account of what scientists do

know, and in particular, how they are able to attribute rudimentary empirical contours to objects

of research. These contours are closely connected to paradigmatic research designs, which in

turn are tied to basic methodological rules for the exploration of the purported phenomena. I

suggest that we engage with such rules in order to develop our own normative (epistemological)

categories, and I tie this proposal to the idea of a methodological naturalism in philosophy of

science.


Remembering (Short-Term) Memory. Oscillations of an Epistemic Thing

1. Introduction

In chapter X of his Structure of Scientific Revolutions, Thomas Kuhn suggests that after a

paradigm shift, "familiar objects are seen in a different light and are joined by unfamiliar ones as

well." (Kuhn, 1962, p. 111) This statement is commonly taken to mean that paradigms provide

the conceptual structures that allow us to parse the world in particular ways, and that

paradigm shifts involve changes of perspective, during which formerly familiar objects become

unfamiliar. The implication appears to be that during normal science, the objects of research

remain fairly familiar. Contrary to this picture I want to pursue the thesis that the line between

the familiar and the unfamiliar is much more fragile and dynamic than that: phenomena become

objects of research precisely because there is an unsettling sense of unfamiliarity associated with

them. At the same time they can become objects of research only insofar as some things about

them are taken for granted. To study the small-scale process whereby a phenomenon is

investigated empirically, therefore, is to study the productive interplay between what is

unfamiliar and what is taken for granted.

There are, by now, a few philosophical and historiographical accounts that focus on the

development of the conception of a specific object of research.1 While some characterize their

approach as 'biographical' (e.g., Daston, 1999; Arabatzis, 2006), thereby (implicitly or

explicitly) assuming the existence of criteria that guarantee the identity of such 'objects' through

their life span, others are more interested in describing a point in the investigative process where

the very identity conditions of the relevant objects or phenomena are not yet clear (e.g., Steinle,

1997), such that the question is how conceptualizations of the objects of research take shape.


Recognizing the epistemic situation in which scientists find themselves at such a moment, Hans-

Jörg Rheinberger has coined the expression "epistemic thing" as a general term to describe such

'objects.'2 Of these things he says that "paradoxically [they] embody what one does not yet

know" (Rheinberger, 1997, p. 28). This formulation is similar to my own assertion above that

there is a sense of 'unfamiliarity' about research objects. Both formulations remain enigmatic,

however, and call for further unpacking.

This paper will make an attempt at such an unpacking by way of a case study that looks

at short-term memory as an object of empirical research. Section 2 will begin by sketching my

framework of analyzing the dynamics of memory research, and section 3 will argue that short-

term memory is a good example of an object of research that is both familiar and ill-understood

(or unfamiliar). Section 4 will provide an analytical explication of Rheinberger's notion of an

epistemic thing to elucidate what it could mean for an object of research to be unfamiliar, while

section 5 will show that this construal dovetails with some of my own previous work about the

importance of operational definitions as temporarily fixing the identity conditions for particular

objects of research, thereby encoding what is treated as 'familiar.'

The type of analysis at stake in this paper – how to account for the dynamic processes by

which objects of research are conceptualized and investigated – is one that Rheinberger has

referred to as "historical epistemology." We will therefore (in section 6) turn to the question of

what warrants the label "epistemology" to describe this kind of analysis. It will be emphasized

that Rheinberger's is not a general theory of knowledge, but rather a theory of the generation of

scientific knowledge within a specific domain. I will suggest, therefore, that what is at stake here

is not whether all epistemology should be historical, but rather how an analysis of scientific

knowledge generation can claim to be an epistemological analysis as well, given that historical


studies are usually assumed to aim for descriptive accuracy, whereas epistemological studies are

usually assumed to aim for normative evaluations. Situating this question within the context of

recent debates about the possibility of an integrated history and philosophy of science, I will (in

section 7) argue that the seeming chasm can be bridged if we practice a kind of philosophy of

science that derives its normative criteria by way of a critical engagement with its historical

material. In the case at hand, this means that an analysis of memory research has to explicate and

critically engage with the methodological norms that are operative in the empirical individuation

and exploration of the phenomena in question.

2. Memory as an Object of Research

Psychologists commonly distinguish among several types of memory. Probably the oldest

distinction (within modern theories of memory) is one between short- and long-term memory,

where the former is thought of as the ability to keep material in mind for the duration of a few

seconds, whereas the latter is conceived of in terms of the ability to store and retrieve items over

a longer period of time. Something like this distinction is sometimes traced to William James's

"primary memory," and found prominent expression in the work of Atkinson & Shiffrin (1968). This

taxonomy was subsequently further broken down into different types of both short- and long-term

memories. For example, within the category of short-term memory, Baddeley and Hitch (1974)

introduced a conception of short-term memory (which they called "working memory") as a

system consisting of several modality-specific sub-units (such as the so-called "phonological

loop" and the "visuo-spatial sketch pad") on the one hand and an executive control unit with a

modality-unspecific storage unit on the other. Similarly, the past 30 years have seen a

proliferation of types of long-term memory: starting with a distinction between declarative and


procedural memory (the former storing facts, the latter storing abilities) (e.g., Cohen & Squire,

1980), continuing with distinctions between several kinds of declarative memory (for example

semantic memory on the one hand and "episodic" or autobiographical memory on the other; e.g., Tulving,

1983), and most recently positing a distinction between explicit and implicit memory, where one

is a conscious form of retrieval, whereas the other is an unconscious form of retrieval (e.g.,

Schacter, 1990).

The case of memory research is intriguing, because on the one hand it seems obvious

what we mean by the word "memory," while on the other hand there is still a lot of scientific

disagreement over what kind of 'thing' memory is, and what kinds of memory there are. As

such, it lends itself to an exploration of the questions stated at the outset of this paper. We will

do so by taking a close look at some of the factors that contribute to the ways in which short-

term (or working) memory has taken shape as an object of research over the last several decades.

Such an analysis will have to explain what makes memory so familiar and elusive at the same

time. There are any number of factors that could be taken into account as part of a thick

historical description of the processes alluded to here. Providing such a comprehensive account

is not my aim in the current paper, however. Instead I will focus my analysis on one factor: the

function of specific experimental designs in empirically individuating the object in question.

While I do not claim that such a focus can provide us with an exhaustive account of knowledge

generation in memory research, I argue that it provides a vital part of the story.

To a first approximation, let us consider the basic rationale that underlies most

experimental memory research. During the so-called study phase, human subjects are exposed to

items, usually under an instruction to engage with the material in some way (the subjects may or

may not know that they are participants in a study of memory). Following an intermediary


phase, there is a test phase in which the learned material is elicited by means of some kind of

memory test (again: the subject may or may not be aware of the purpose of the test) (Lockhart,

2000). This way of proceeding, I would argue, is based on a systematized and tidied-up version

of a very broad notion of memory as being connected to the ability to display some kind of

behavior that indicates the recollection of something that was previously experienced (Roediger

& Goff, 1999).
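
To make the structure of this rationale explicit, the following sketch (in Python) lays out the generic study-phase/test-phase design in schematic form. It is purely illustrative: the 'subject' is replaced by a toy recall probability, and all names and parameters are hypothetical rather than drawn from any actual study.

```python
import random

def simulate_memory_trial(word_pool, list_length=10, recall_probability=0.6, seed=None):
    """Toy simulation of the generic study-test design: items are 'studied',
    a retention interval passes, and recollection is scored at test."""
    rng = random.Random(seed)

    # Study phase: the to-be-remembered items presented to the subject.
    study_list = rng.sample(word_pool, list_length)

    # Test phase: here the 'subject' is a stand-in model that recollects each
    # studied item with a fixed probability (real behavior is, of course, richer).
    recalled = [item for item in study_list if rng.random() < recall_probability]

    # Score: proportion of studied items for which recollection was displayed.
    return len(recalled) / list_length

if __name__ == "__main__":
    pool = ["desk", "river", "cloud", "spoon", "ladder", "candle",
            "violin", "meadow", "anchor", "pebble", "lantern", "walnut"]
    print(simulate_memory_trial(pool, seed=1))
```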

It may be objected that this characterization of memory is too broad, and that a basic

requirement of memory should be conscious recollection.3 In response, it must be pointed out

that if we accepted this definitional constraint, then the above mentioned category of implicit

memory (memory in the absence of a conscious recollection or recognition) would be an

oxymoron. The objection points to an intriguing fact about memory research, however (and, I

would argue, about empirical research in general), namely that the ways in which scientists

choose to classify a given empirical effect (in this case, empirical evidence of recollection in the

absence of consciousness) as instantiating – or not instantiating – a particular kind of

phenomenon (in this case, a memory phenomenon) importantly relies on particular

presuppositions about the phenomenon.

What is at stake here are the very contours of the phenomenon, in that the question is not

"how can instances of memory phenomena be scientifically explained?", but "on what grounds

are we to identify and classify empirical data, or patterns of data, as instances of memory

phenomena to begin with?" One thesis of this paper is that while this question cannot be decided

a priori, the empirical research that may ultimately provide an answer to it has to start out by

taking a stance, typically informed by some prior conceptual assumptions about the subject, such

as the one mentioned above. To put this differently: memory researchers assume that they can


gain epistemic access to this object of research by virtue of the fact that memory (regardless of

how it is otherwise characterized) is closely tied to the disposition to retain information and to

exhibit behavior that provides evidence of that retention. This understanding is constitutive of

memory as an object of research insofar as it enables researchers to ask empirical questions

about memory. Notice that the claim is not that memory can be reductively defined as a

disposition. What I have in mind here, rather, is something akin to Reichenbach's notion of a

constitutive a priori (see Reichenbach, 1965 [1920]), that is, a structure which – in some sense –

provides the conditions of the possibility of collecting data about the (presumed) phenomenon.

As will be explained below (section 5.1), such constitutive assumptions become operative in

research insofar as they enter particular experimental set-ups thought to generate data that exhibit

the phenomenon in a particularly clear form.

3. The Case of Short-Term Memory

With this background in mind, I turn to the case of short-term (or working) memory.4 This

presumed phenomenon was investigated by Atkinson & Shiffrin (1968; 1971) and popularized

especially by Allan Baddeley and his colleagues (e.g., Baddeley & Hitch, 1974). There are two

characteristic features that are thought to distinguish short-term memory from long-term

memory: duration and capacity. The first characteristic is supposed to answer the question of what

the time span of short-term memory is, and the answer is usually taken to be something like 20

seconds. The second characteristic is supposed to answer the question of how many items can

be kept in short-term memory. It is commonly assumed that the answer to this question is 7

plus/minus 2 (Miller, 1956). Both of these answers, however, are fraught with methodological

problems and theoretical disagreements. For example, with respect to duration, psychologists


frequently assume that this has something to do with decay. But aside from the theoretical

question of how to characterize decay, there are also significant methodological problems in

figuring out how to get a "pure" measure of decay, that is, one that is not contaminated by other

factors, such as rehearsal effects or retrieval from long-term memory (Cowan, 2008, p. 326 ff.).

With respect to the capacity of short-term (or working) memory, again, there is some

disagreement. It is by now widely held that Miller's original 7 +/- 2 figure overestimated the true

capacity. In an influential article in Behavioral and Brain Sciences, Nelson Cowan argued that 3

+/- 1 is more likely to be accurate (Cowan, 2000). However, there is also a competing account

that holds the capacity of short-term memory to be 1 (McElree, 2001). As in the case of duration,

there are several different aspects to be considered in this debate. On the theoretical side, both of

the above-mentioned authors subscribe to a so-called "unitary store" model, according to which

short-term memory is not a separate storage space, but is rather characterized by the fact that

items in long-term memory become the focus of attention. The disagreement, then, is over the

question of how many pieces can be the focus of attention at once. As Jonides et al (2008, p.

201) point out, even if one were to agree that only one piece can be the focus of attention at any

one time, one would need an account of what constitutes one piece, which is also not a

straightforward empirical question and is closely related to questions about the nature of

"chunking," a strategy long thought to improve short-term memory.
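
A toy illustration of why 'what constitutes one piece' is not a straightforward question: the same digit string can be held as twelve single digits or, if grouped into familiar dates, as three chunks. The grouping rule below is an arbitrary assumption for the sake of illustration, not a model of how chunking actually works.

```python
def chunk_digits(digits, chunk_size=4):
    """Group a digit string into fixed-size chunks, reducing the number of
    units that would have to be held in short-term memory."""
    return [digits[i:i + chunk_size] for i in range(0, len(digits), chunk_size)]

sequence = "149217761989"
print(list(sequence))          # 12 single digits -> 12 'pieces'
print(chunk_digits(sequence))  # ['1492', '1776', '1989'] -> 3 'pieces' (familiar dates)
```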

As implied by some of the above, the question of how to characterize short-term memory

empirically is bound up with theoretical and methodological, even terminological issues. They

concern the relationship between short- and long-term memory, the relationship between short-

term memory and working memory, and the relationship between behavioral and

neurophysiological evidence. With respect to the divide between long- and short-term memory,


this distinction was initially made plausible by neurophysiological data, which suggested that

certain kinds of brain damage can lead to differential deficits, in that patients with medial

temporal lobe damage fared poorly on tests for long-term declarative memory formation and

retrieval, while being less affected on short-term memory tasks (e.g., Baddeley & Warrington, 1970).5

This picture was soon elaborated by way of the idea of a working memory system, comprised of

multiple component systems. Given the neuropsychological evidence, it was long assumed that

the distinction between long- and short-term memory refers to two separate architectures in the

brain. However, this assumption has come to be challenged by recent psychological theories of

memory (Cowan 2000), and by re-evaluations of the neuropsychological evidence (Jonides et al.,

2008). According to these accounts, short-term memory is not a special storage space in the

brain, but rather a specific kind of mental state that occurs when items in long-term memory

become activated by virtue of being at the focus of attention. In the light of this, one might

wonder whether short-term memory has effectively been eliminated as an explanatory category

or otherwise useful theoretical concept. In considering this question, several issues need to be

kept apart. First, even if short-term memory is not a brain system, there might still be specific

types of (explanatory) storage-, maintenance-, and retrieval mechanisms responsible for the

empirical effects attributed to short-term memory (Jonides et al., 2008). Second, even if we were,

for the time being, to remain neutral with respect to the question of what ‗underlies‘ the

empirical effects that gave rise to ideas about short-term memory, the concept might still

continue to be of theoretical value if it turned out that the effects it describes play a role in other

research contexts as well. As we will see shortly, this is indeed the case.

In the psychological literature, the terms "short-term memory" and "working memory"

are sometimes used interchangeably, and sometimes (though not consistently) used to mark


theoretical and empirical differences. One thing that is clear is that today's interest in this

phenomenon is primarily driven by the recognition that working/short-term memory is crucial to

a variety of other cognitive functions (e.g., Ricker et al., 2010). For example, we could not so

much as understand the end of a spoken sentence if we were not able to remember its beginning.

Likewise, we would not be able to solve the simplest math problem if we were unable to keep

the individual digits in consciousness. By viewing our most basic cognitive functioning as

closely linked to short-term memory, contemporary researchers highlight an aspect that was

already present in Baddeley's original multi-component proposal, namely the importance of

an executive control function in managing the information that is temporarily represented in short-

term or working memory. One intriguing feature of this research is that the question is no longer

simply how many items can be kept active for how long (capacity and duration), but rather, how

many items can be kept active for how long, given the demands of another task that is carried

out at the same time. It is by now known that there is an empirical dissociation between the two

abilities, in that the latter is highly correlated with measures of intelligence, whereas the former

is not (Conway et al., 2005). This in turn has prompted many researchers to treat them as two

distinct phenomena, referring to the former as "short-term memory" and the latter as "working

memory" (Cowan, 2008). Much of the current research is about the latter phenomenon, which

possibly accounts for the fact that the term "working memory" has come to be much more

widely used.6 However, some continue to use the term "short-term memory" to refer to this

phenomenon (e.g., Jonides et al. 2008), some treat working memory as including short-term

memory (Cowan, 2008), and yet others refer only to attention-related aspects (as opposed to

modality-specific storage) of short-term memory as "working memory" (Conway et al., 2005).


Despite these confusing terminological and theoretical issues, there is some agreement

about the existence of two types of empirical effects: those that are, and those that are not,

correlated with other cognitive abilities like intelligence. For our purposes, the important thing to

keep in mind is that the research just alluded to pretty much follows the experimental design

outlined above (learning phase – testing phase), and that the distinction between two types of

phenomena is essentially tied to a distinction between two types of tests used in such designs.

We will return to this in section 5 below.

4. On the Idea of an Epistemic Thing as 'Blurry'

The case study just outlined gives us some sense of the equivocality and ambiguity of taxonomic

categories in memory research along several dimensions: The term "short-term memory" is used

by some to refer to a specific type of empirical regularity, by some to refer to encoding or

retrieval mechanisms, and by some to refer to a storage system in the brain. Moreover, the term

is by some used synonymously with "working memory," and by some to mark important

empirical and theoretical differences, having to do with simple vs. complex uses to which the

memory-type in question can be put. Even if we try to characterize the empirical regularities in

purely descriptive terms, it turns out that there is room for uncertainty and disagreement about

even the most basic characteristics, such as duration and capacity. Another feature of this

research is that while capacity is still a central area of research, there has been a shift in focus, in

that the research is often motivated by the desire to understand the role of capacity and duration

in relation to other cognitive abilities and traits, such as reasoning and intelligence. Even though

researchers in this field continue to refer to the phenomenon of interest as a memory

phenomenon, there is a sense in which working memory capacity, in this literature, is treated


more as a feature of the attention span. This is consistent with the currently dominant model,

according to which there is only one storage location, which is activated by attention.

Given these discrepancies in terminology, extension, and theoretical assumptions, one

may wonder about the status of short-term memory as an object of research. When philosophers

discuss such cases, they typically refer to them as cases of conceptual change, and they typically

ask whether such shifts can be reconstructed as rational. By contrast, my interest here is not so

much in such diachronic shifts in the classification of phenomena, but rather in the synchronic

variations in the ways in which research programs, theoretical assumptions, and classificatory

practices interpret what appears to be a common object of research. In other words, it is not only

the case that scientific concepts are fluid in the (well-known) sense that their intensions and

intended extensions can change over time. They are also fluid in the (less often discussed) sense

that at any given time different groups of scientists who take themselves to be investigating the

same thing can have different conceptions of what that thing is and what kinds of empirical data

instantiate it.7 The overall impression one gets from reading this literature is that while

psychologists working in this field work with rigorous experimental methods, and while at least

some of them are at pains to define their terminology, they do not seem to be able to get their

object of research into clear focus. This notwithstanding, there are interesting theoretical and

empirical developments in this area of research.

The metaphor of not being able to get the object of research into clear focus resonates

with Hans-Jörg Rheinberger's concept of an epistemic thing as an object or phenomenon that

attracts our scientific curiosity. As Rheinberger suggests, such objects present themselves "in an

irreducible vagueness" (Rheinberger 1997, p. 28). According to Rheinberger, this is not a sign of

the deficient nature of epistemic things, but rather contributes productively to the research


process. For reasons that will become apparent below, I prefer the term "Verschwommenheit"

(blurriness), which Rheinberger uses in the German version of his book (Rheinberger, 2001, p.

27), over the term "vagueness." The very fact that he seems to regard the two words as

interchangeable points to a feature of his thinking that is confusing to many readers, namely his

reluctance to draw a clear distinction between concepts and objects. (We tend to attribute

vagueness to concepts, not to objects). In the following analysis, I will argue that (contrary to

Rheinberger) the distinction between objects and concepts remains essential since it allows us to

distinguish between (a) questions about the limits of our knowledge about material objects or

phenomena and (b) questions pertaining to the (semantic or methodological) functions played by

the ways in which we conceptualize such objects. We will begin to elucidate this distinction by

turning to an analysis of the metaphor of blurriness.

Discussing Rheinberger's work, Marcel Weber (2006) has addressed the question of

whether we are to interpret the "blurriness" of an epistemic thing as a feature of an object or of a

concept. Weber quickly rules out the former reading, since most objects are simply not blurry.

This leaves him with the second reading, of which he distinguishes two possible interpretations:

According to one, a concept can be blurry in the sense of being vague. According to the other, a

concept can be blurry in the sense of being referentially indeterminate. First, a concept is vague

if it refers to a property that exists on a continuum, such that there are clear cases and a gray zone

in between, with no definite answer to the question of where to draw the line. Since short and

long are on a continuum, one might suspect that the distinction between short- and long-term

memory is vague in this sense. However, while psychologists do indeed discuss the question of

how to measure the duration of long-term memory, such debates typically do not turn on the

question of how to draw a line between long and short. Rather, they are concerned with issues


like how to identify clear-cut criteria for measuring the duration of short-term memory (i.e., how

to identify and control variables that might contaminate such a measure).

If the concept of short-term memory is not blurry in the sense of being vague (or at least

not in a way that has much significance for scientific practice), then perhaps it is blurry in the

sense of being referentially indeterminate? Within linguistics, the expression "referential

indeterminacy" means that different subjects vary in the ways in which they name objects. This

appears to come closer to characterizing the situation of our case study. Within philosophy, the

expression "referential indeterminacy" typically means something stronger, namely that there is

no fact of the matter as to what a term refers to (what is the class of objects in the extension of

the corresponding concept). There are two typical arguments in support of the idea of referential

indeterminacy. One derives from a verificationist theory of meaning, according to which the

semantic properties of a term are exclusively determined by the linguistic behavior of speakers,

and according to which such linguistic behavior is compatible with infinitely many

interpretations (see Nimtz, 2005). The other draws on the history of science to show that both

meaning and reference of certain scientific terms changed over time, and that there is no theory-

neutral way of settling which one in fact picked out the true referent. This latter version of

referential indeterminacy is commonly associated with the incommensurability thesis (Kuhn,

1962).

Both arguments for indeterminacy give rise to worries about anti-realism and relativism.

To counter these threats, advocates of causal theories of reference have challenged a central

premise of the above-mentioned approaches to meaning, namely the premise that reference is

fixed solely by facts about speakers (facts about their linguistic behavior or their beliefs). In this

vein, they argue that reference is fixed – at least in part – by the objects that are in the extension


of the terms in question. This implies that while reference is not indeterminate, knowledge about

the nature of the referent of a term can be limited and incomplete. This, then, would get us to a

second reading of 'blurriness,' according to which this metaphor describes not a feature

of a concept, but our lack of knowledge about its referent.

It seems to me that we can be skeptical of the kind of essentialism implied by causal

theories of meaning, while still appreciating the difference between the semantic issue of

whether reference is determinate and the epistemic issue of whether we have knowledge about

the referent.8 In this vein, I suggest that we read talk of the 'blurriness' of epistemic things as

referring to a purely epistemic predicament; a predicament that arises out of (a) assuming that

there is a material 'thing' (entity, process, phenomenon) out there, to be studied by empirical

means, while (b) having only limited understanding of the nature of this object of research.

By construing the notion of the blurriness of an epistemic thing neither as describing an

actual object or phenomenon with fuzzy contours, nor as describing a vague or indeterminate concept of an

object or phenomenon, but rather in terms of lack of knowledge about the class of objects that a

given scientific term refers to (if it refers to anything at all), we can remain neutral not only with

respect to semantic, but also to scientific realism. The question asked in this paper is not whether

facts about the meanings of scientific concepts are settled by the way the world is. Nor is it

whether scientific concepts succeed in correctly describing the world. Instead, the question

addressed here is simply what are the dynamics by which scientific ideas about objects of

research are formed and developed. The whole point of the notion of an epistemic thing-term is

that the issue of what exactly it refers to has not yet been settled. However, this does not mean

that anything goes or that there are no material circumstances constraining the kinds of things

one can say about the purported objects of research (Rheinberger, 2005).


My analytical reconstruction of the notion of an epistemic thing departs significantly

from Rheinberger's own exposition (which is cast in the tradition of French thinkers like

Bachelard and Derrida). However, I maintain that it captures at least some of the essential issues

and insights he emphasizes, i.e., to analyze the microprocesses of scientific developments by

paying close attention to the very specific combinations of factors that have an impact on the

ways in which objects of scientific research are conceptualized and investigated, and to do so in

a way that radically adopts the scientists' perspective, particularly acknowledging the extent to

which scientists don't 'know' their objects of research.9

5. Experimental Paradigms, Operational Definitions, and Norms of Research

If we take the notion of the blurriness of an object of research to refer to the fact that it is ill-

understood, it still seems that at least some minimal assumptions must be made about it.

Otherwise scientists would not be able to identify instances of 'it.' The question is (a) what the

origins of the minimal assumptions in question are, and (b) how they affect the process of

experimental knowledge generation. A different way of phrasing the question is to ask what

enables the relatively stable use of basic concepts as picking out, and investigating, the purported

(if ill-understood) objects of research. One prominent attempt in the literature to answer these

two questions was provided by Thomas Kuhn, who argued that the tools for both the

individuation and investigation of research objects are provided by paradigms.10 Such paradigms

provide shared exemplars, which enable scientists to identify their objects of research, while also

providing some of the methods used to study them. The question, then, is how paradigms do this.

5.1 Putting the Paradigm Back into “Paradigm”


We don't have to look very far to find examples of shared exemplars in the context of

experimental research. They are even called "paradigms." For example, in psychology and

cognitive neuroscience, scientists refer to standard ways of producing effects of a given kind as

"experimental paradigms" (Sullivan, 2009, p. 513). The key phrase here is "of a given kind,"

since it suggests that the experimental paradigm already shapes our interpretation of the resulting

data as instantiating a particular kind of effect. Hence, it appears that they have rudimentary

concepts built into them. To better understand this, we need to take a closer look at the notion of

an experimental paradigm. Sullivan (op cit, p. 514) suggests an analysis that distinguishes

between production procedures on the one hand, and measurement- and detection procedures on

the other. On her account, production procedures are the experimental interventions,

measurement procedures specify the response variable to be measured, and detection procedures

specify under what conditions a response can be treated as instantiating the effect in question.

Applied to our example, if I wanted to study the effects of a particular intervention on short-term

memory, I would have to specify not only the kind of intervention, but also the procedure by

which I was planning to empirically detect its effects. In memory research, such procedures are

typically memory tests.
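
One way to make this three-part analysis concrete is to render an experimental paradigm as a simple data structure with a slot for each component. The sketch below is my own illustrative rendering, not Sullivan's formalism; the example entries (a word-list intervention, a recall threshold) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ExperimentalParadigm:
    """Illustrative decomposition of an experimental paradigm into the three
    kinds of procedures distinguished by Sullivan (2009)."""
    production_procedure: str                      # the experimental intervention
    measurement_procedure: str                     # the response variable to be measured
    detection_procedure: Callable[[float], bool]   # when a response counts as the effect

# Hypothetical short-term memory paradigm: recall above a chosen threshold is
# treated as instantiating the effect in question.
stm_paradigm = ExperimentalParadigm(
    production_procedure="present a word list, then impose a 20-second retention interval",
    measurement_procedure="proportion of list items recalled at test",
    detection_procedure=lambda recall_rate: recall_rate > 0.5,
)

print(stm_paradigm.detection_procedure(0.7))  # True: this response would count as the effect
```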

In experimental psychology, when scientists talk about their "experimental paradigm,"

they sometimes use this expression to include the way in which an experimental intervention is

conducted. However, they more often are simply referring to the measurement and detection

procedures they use, i.e., the tests (e.g., Owen et al., 2005).11 For the case at hand, this means

that if we want to understand the role of paradigms in the empirical individuation and

exploration of short-term and/or working memory, we need to take a close look at the

measurement and detection procedures (memory tests) at play. We can roughly distinguish


between two classes of tests, simple span tests and complex span tests, where the former is

assumed to tap short-term recollection in the absence of any interfering tasks, and the latter

is assumed to tap the way short-term recollection works while one is attending to a different

problem (as already mentioned, this is often referred to as working memory). In recent times, the

so-called N-back tests have come to replace complex span tests as a measure of working memory

(see Kane et al., 2007). In this vein, researchers, when describing their choice of methods, often

mention that they used this or that version of a span paradigm or the N-back paradigm.

Simple span tests – of which Ebbinghaus's nonsense syllable method is probably the

most well-known example – present subjects with lists of items and then take some measure of

their recollection. Complex span tests are basically simple span tests with some additional task.

The aim of such tests is to determine working memory capacity (WMC). As outlined by Conway

et al. (2005), three complex-span tests are especially widespread: the reading span

test, the counting span test, and the operation span test. In the simplest reading span test,

subjects are asked to memorize words, but are in addition given some other task, related to the

words. For example, in the original version, subjects had to read sentences with the instruction to

remember the last words, while also judging the logical accuracy of the sentence (Daneman &

Carpenter, 1980). As Conway et al (2005) lay out, however, there are by now any number of

variations of the reading span test. Moreover, while the items and the test in the reading span test

are related, other tests were soon developed, in which they were not. For example, the so-called

operation span test requires subjects to judge the correctness or incorrectness of a simple

equation while trying to remember a word (Turner & Engle, 1989). Lastly, counting span tests

require subjects to count particular shapes while ignoring other shapes (e.g., Engle, Tuholski, et al.,

1999). In contrast to the complex span tests just outlined, N-back tests present


subjects with a series of stimuli, with the instruction to report whether any given stimulus

matches the one that appeared N items ago. Here, too, it is assumed that this taps working

memory, since subjects have to focus on new stimuli, while keeping past stimuli active in

memory.
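
As a rough illustration of how such a test generates data, here is a minimal N-back trial generator and scorer. It is a sketch under simplifying assumptions (fixed match rate, letter stimuli, accuracy-only scoring); actual implementations differ in stimulus timing, lure control, and scoring conventions.

```python
import random

def generate_nback_trials(n=2, length=20, alphabet="ABCD", match_rate=0.3, seed=None):
    """Generate a stimulus sequence plus the correct yes/no answers for an
    n-back task: a stimulus is a 'match' if it equals the one shown n items ago."""
    rng = random.Random(seed)
    stimuli = []
    for i in range(length):
        if i >= n and rng.random() < match_rate:
            stimuli.append(stimuli[i - n])      # deliberately insert a match
        else:
            stimuli.append(rng.choice(alphabet))
    targets = [i >= n and stimuli[i] == stimuli[i - n] for i in range(length)]
    return stimuli, targets

def score_nback(responses, targets):
    """Proportion of trials on which the response agrees with the correct answer."""
    return sum(r == t for r, t in zip(responses, targets)) / len(targets)

stimuli, targets = generate_nback_trials(seed=1)
print(score_nback(targets, targets))  # a perfect responder scores 1.0
```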

This brief sketch of experimental paradigms will at best provide a rough idea of short-

term (or working) memory research. What matters for our purposes is that the number of

experimental paradigms is fairly limited and that they function as points of reference for

researchers across the board. By emphasizing the importance of experimental paradigms, I do

not mean to downplay the role of theoretical considerations. To the contrary, the design of

experiments that make use of specific experimental paradigms is often guided by theoretical

hypotheses. In turn the empirical findings they produce can contribute to theoretical speculations,

concerning, for example, the function of executive control as underlying not only performance

on working memory tasks, but also other complex cognitive tasks. Lastly, the classifications that

are enabled by experimental paradigms are productive of further research. For example, once

complex span tasks were designated as a paradigm for producing working memory effects, it

became possible to empirically investigate questions like whether the capacity of working

memory changes with age (Wingfield et al., 1988), how working memory capacity relates to the

accuracy of recollection (Peters et al., 2007), what the neural correlates of access to working

memory are (Nee & Jonides, 2008), how working memory relates to ADD and schizophrenia, and

how pain experience relates to working memory (Sanchez, 2011), among others.

5.2 Operational Definitions and Norms of Research


The account just hinted at gives some idea of how research objects about which we lack

knowledge are empirically identified such that scientists can proceed to investigate them. To

avoid misunderstanding, I would like to emphasize again that even though there is a sense in

which paradigms fix the conditions of application for specific epistemic-thing-terms in an

investigative context, this is not to be understood as reference-fixing in any semantically more

substantial sense. The ability of the experimental paradigm to indicate or instantiate a stable

phenomenon of scientific interest is as defeasible as other, more 'theoretical,' claims associated

with the concept (see Feest, 2009). To relate this to our case study: while complex span tasks can

be used to investigate the presumed phenomenon of working memory, it is quite conceivable that

in the end the experimental effects produced by this task will be treated as effects of attention or

executive control rather than short-term memory. It is also conceivable that we will still refer to

instances of this empirical effect as memory-effects, but will no longer use it to empirically

investigate some presumed underlying memory system or processes. Therefore, the experimental

effects produced by particular experimental paradigms cannot be appealed to when trying to fix

the reference of a particular theoretical term. In this respect I part ways with approaches in the

literature that argue that the continued use of particular experimental techniques suggests

continuity of reference through theoretical change (Arabatzis, 2006; Arabatzis, this issue; Chang,

this issue).

By contrast to a semantic reading of the ways in which an experimental paradigm can fix

the reference of a term, I endorse a methodological reading. According to it, scientists use

experimental paradigms as tools to temporarily fix the reference of a given epistemic object

term. I call this reading "methodological" for two reasons. First, it makes reference to a scientific

method, the method of exploring phenomena by means of specific experimental paradigms.


Second, this method is suggested by a methodological maxim in psychology, namely the maxim

of operationism (Feest, 2011). Operationism urges scientists to "operationally define" their basic

concepts in terms of paradigmatic experimental conditions of application. As we just saw, such

paradigmatic conditions of application are typically provided by experimental paradigms, which

in turn have as a central component a measurement procedure or test. Operational definitions,

thus, are specifications of how to empirically measure the (presumed) object of interest,12 and

operationism emphasizes the importance of using operational definitions as tools for the

empirical exploration of purported or ill-understood objects of research (or, in the terminology

adopted in this paper, of epistemic things), thus highlighting the pragmatic and temporary

character of such 'definitions' (Feest, 2010). Moreover, by urging scientists to explicate their

operational definitions, the maxim of operationism emphasizes the importance of making

transparent some of the implicit conceptual presuppositions engrained in an experimental design.

The account just laid out is only a rough sketch. For the purposes of this article, the

crucial point is that methodological rules and maxims, such as those demanded by operationism,

formulate norms of adequate scientific practice. The norms in question are operative on two

levels. First, operational definitions formulate norms for the application of central scientific

concepts. Second, the methodological maxim of operationism stands for a more general norm,

which calls for an explication of the conceptual assumptions implicit in a given research setting,

thereby making them accessible to critical assessment and making it possible to spot possible

sources of error. As I argued in section 2 above, norms about the use of the term "memory"

derive from a basic understanding of memory as being linked to an organism's ability to display

behavioral evidence of previously learned material and are quite directly reflected in the

rationale for the experimental study of memory. When it comes to the classification and


experimental exploration of specific types or subdivisions of memory, such as short-term

memory, the relevant concepts are tied to the ways in which 'behavioral evidence of previously

learned material' is detected, i.e., to specific experimental paradigms and testing procedures.

6. The History of Epistemic Things as Historical Epistemology

In the previous sections, I provided an analysis of the type of research process where relatively

ill-understood or unfamiliar purported objects or phenomena are empirically investigated.

According to my analysis, Hans-Jörg Rheinberger's concept of an epistemic thing captures a part

of the intuitions we may have about such ill-understood objects of research. However, I argued

that the idea of an epistemic thing needs to be supplemented by an analysis of how such objects

of research are empirically individuated, such that they can be explored by experimental means,

and I put forth such an analysis, which emphasizes that empirical investigations of epistemic

things rely on shared norms for their empirical individuation. Such norms, I argued, are provided

by operational definitions, which in turn rely heavily on paradigmatic experiments.

As is well known, Rheinberger has referred to his theoretical framework for analyzing the

small-scale dynamics of experimental research as "historical epistemology." In this section, we

will subject this expression to close scrutiny. More specifically, the question is what warrants the

term "epistemology" for this type of historiographical project. This section will argue that (a) the

research project in question is best understood as a specific attempt to integrate historical and

philosophical analyses of scientific knowledge generation, and (b) one of the problems that

notoriously haunt such an integration can be overcome by supplementing Rheinberger-style

historical epistemology with a focus on the methodological considerations that shape the

research in question.


6.1 Towards an Integrated History and Philosophy of Science

There are several research programs that go by the name of historical epistemology.13 In this

paper we have focused on Hans-Jörg Rheinberger's understanding since the question he pursues

– how to account for the dynamics of knowledge generation by experimental means – resonates

with the one pursued here. Rheinberger's unit of analysis is what he calls an "experimental

system." By this he means a hybrid constellation of material, technical, and cognitive factors that

give rise to new questions, generate surprising insights, and contribute to the formation of an

epistemic thing (Rheinberger 2001, p. 25). My focus on operational definitions is compatible

with some version of this notion of an experimental system, but emphasizes the conceptual work

that goes into designing and implementing such systems. This gives a specific spin to the idea of

an epistemic thing, distinguishing between (a) the guiding (if fallible) assumption that there is an

object or phenomenon out there about which we currently lack knowledge, and (b) the dynamics

at work in conceptualizing and investigating such a purported object.

Clearly, Rheinberger's version of historical epistemology is not a universal theory of

what constitutes knowledge or how our knowledge claims can be justified,14 but rather is a

specific theory of the development of scientific knowledge within the domain of 20th-century

molecular biology. However, even if it is 'merely' a theory of the development of a specific type

of scientific knowledge, the question remains by what standards the results of those

developments can be characterized as "knowledge." A different way of putting this is to ask what

the relationship is between Rheinberger's historical epistemology and the epistemological

question raised by his historical cases: Is historical epistemology merely a specific way of

practicing the history of science, to be supplemented with a genuinely (universal)


epistemological analysis; or is the epistemological analysis meant to result ('locally,' as it were)

from the historical investigation itself? Rheinberger himself aims for the latter. But how can we

explicate this?

The question just raised concerns the relationship between the history of science and the

philosophy of science, insofar as the latter, but not the former, is primarily interested in the

epistemic status of scientific results. The nature of this relationship has repeatedly received

attention in the literature over the past 50 or so years (see Schickore, forthcoming, for a

comprehensive overview). As Schickore lays out, one answer that has been prominent within

philosophy of science since the 1970s is what she calls a "confrontational" model, according to

which the history of science provides case studies, which philosophers draw on to illustrate (or

provide evidence for) their own epistemological points. The underlying vision, then, is one of a

division of labor, whereby historians of science provide descriptively accurate accounts of

specific historical episodes, whereas philosophers subject these episodes to epistemological

scrutiny. Assuming that this kind of division of labor was possible, the envisioned project is one

in which the historical account in and of itself makes no philosophical contribution. It is only in

the light of the philosophers‘ normative categories that the historical material becomes

philosophically illuminating.

I suggest that we interpret Rheinberger-style historical epistemology as a particular

proposal of how to integrate history and philosophy of science. His historical narratives of the

dynamics of research are 'philosophical' insofar as they challenge philosophers to pay attention to

the kinds of questions that arise if we take seriously the perspective of the scientists with their

epistemic limitations. The notion of an epistemic thing highlights this perspective, raising the

question of what an analysis of the exploration of objects of research might look like that is not


skewed by knowledge of 'how things turned out.' On the account just presented, Rheinberger's

historical epistemology exemplifies a way in which historical work can give rise to a

philosophical question. However, what is still missing is an account of how the historical

analysis itself can contribute to the development of the analytical tools used to answer, or at least

address, this question. This is another way of asking how the descriptive results of an historical

analysis can contribute to a normative philosophical project. In response to this question, I

suggest that we center our historical narrative on the methodological categories that scientists

themselves employ, and that this can provide a starting point for our own normative analysis. In

this vein, this article is concerned with experimental designs, paradigms and operational

definitions typically used to explore particular (purported) objects of research. I argue that

scientists choose such experimental designs with the aim of operationally defining their central

concepts, and that this is part of a methodological strategy, which aims at temporarily fixing the

identity conditions of a given object of research, and making conceptual presuppositions

available to critical discussion.

In supplementing Rheinberger's focus on experimental systems with one on

methodological norms and norms of concept use, my approach takes an insight from

Rheinberger's historical epistemology, while departing from it in one significant respect: As

Rheinberger has emphasized repeatedly, he aims at (a) practicing epistemology by practicing the

history of science, and (b) doing so in a way that does away with traditional epistemological

categories, such as the concept of justification and the dichotomy between objects and concepts

or world and mind. With respect to the first aim, I interpret Rheinberger as proposing some

version of an integrated history and philosophy of science, in that the philosophical analysis is

supposed to inform his historical work and vice versa. It is this part of his approach that I am


sympathetic to. With respect to the second aspect, however, I maintain that for analytical

purposes the distinction between objects and concepts remains crucial, and that ultimately any

epistemological analysis is going to have to employ some normative analytical vocabulary. As

already indicated above, my proposal is that the normative analytical vocabulary we use as

philosophers of science is to be developed out of (rather than superimposed on) the scientific

cases we study historically.

Several questions immediately arise, however. First, the fact that the scientists who figure

in our historical or contemporary case studies employ particular norms of concept use and

investigative practice does not mean that we as philosophers have to accept them as adequate.

Second, if the aim is to derive epistemological questions and analyses locally from specific

historical cases, this raises concerns about the scope of the analyses. We now turn to these

questions, elucidating them with our case study, and making a few suggestions about further

philosophical work on these issues.

6.2 Methodological Naturalism and the Case of Short-Term Memory

The fact that scientists sometimes express a commitment to specific methodological norms does

not mean (a) that they in fact follow those norms, or (b) that the norms are adequate for the

purposes scientists put them to. In this latter vein, the methodological maxim of operationism has

a history of being quite severely attacked by philosophers, who criticized it on semantic and

epistemological grounds. As I have shown elsewhere, however, with respect to psychology, these

criticisms rely on historical and conceptual misunderstandings about what the methodological

maxim of operationism in fact claimed, what function it was supposed to play in the research

process, and how its advocates responded to some obvious problems with it (Feest, 2005).15 This


case is thus a good example of the advantages of historical scholarship in correcting

philosophical misconceptions about the nature and aims of scientific methods.

Nonetheless, of course I do not want to claim that methodological norms and the kind of

research they enable are above reproach. Our question, then, concerns the vantage point from

which critical philosophical discussions of scientific methodologies and scientific results can be

conducted. The answer I suggest is informed by a commitment to a particular vision of

philosophical method: methodological naturalism. According to it, there is no in-principle

distinction between the methods employed by philosophers and scientists. This is so because

there is no area of knowledge that is the exclusive domain of philosophy (Papineau, 2009),

though philosophers are likely to focus on different questions than scientists.16 It follows that a

philosophical discussion of the epistemic status of scientific methods or results is on a continuum

with scientific discussions about those questions. Consequently, we can evaluate scientific

processes in two ways: (1) by holding scientists accountable to their own norms (i.e., by

discussing their practice relative to those norms), and (2) by discussing the adequacy of those very

norms themselves.

Let me illustrate each of these points by means of my case study on short-term memory.

The norms in question concern the conditions under which central concepts are to be applied and

the explication of these conditions in terms of so-called operational definitions. As I argued

earlier, experimental memory research is intimately connected to a particular understanding of

memory as tied to the ability to display behavior that indicates the recollection of something that

was previously experienced. Moreover, specific types of memory (such as short-term memory)

are closely tied to specific experimental paradigms, which contain operational definitions. Now,

in engaging with this type of research, a philosopher of science might inquire whether a given

experimental protocol really implements a given operational definition (Feest, 2011). Or she

might question whether a given experimental result, even if reliable, really warrants certain

extrapolations (Sullivan, 2009). This type of critical discussion can be formulated by engaging

with the scientific methods operative in the research, as reconstructed by our historical work. In a

similar vein, though at a more fundamental level, one might also cast doubt on the operational

definition at play, for example by arguing that a given experimental paradigm does a poor job at

indicating or exemplifying the research object under investigation. Other scholars, such as Kurt

Danziger (2008), even question the very assumption that a phenomenon like memory can be

adequately captured by the types of operational definitions that are encouraged by the demands of

experimentation.

Given the kind of empirically grounded approach to epistemological analysis suggested

here, the question remains whether such bottom-up analyses will result in a fragmentation of

‗local‘ philosophical accounts at the expense of more unified analyses of science. It need hardly

be pointed out that the history, philosophy, and social studies of science of the past 15 or so years

have seen a proliferation of debates about disunity and pluralism in science. In the present context,

however, I choose to stay neutral with respect to these debates. Suffice it to point out that even if

it is the case that the different subject matters and research questions of various fields of study

call for different methodologies, this does not imply that the methodologies in question have

nothing in common that might still be described from a more abstract philosophical point of

view. With respect to our case study, for example, we may ask whether the analysis of

operationism and operational definitions – if valid – is useful only for this particular case, or also

for other cases of memory research, other cases of psychological research, of research in the

special sciences, etc. For the time being, I choose to restrict the analysis of operationism to cases

where the term is actually used by scientists, though I am inclined to argue that it is in general

applicable to domains that are characterized by exploratory research. It is probably no

coincidence that the maxim of operationism has in the 20th century been especially popular

within novel and complex domains of research, such as psychology, neurobiology, and the social

sciences, where researchers are often quite acutely aware of the intangible character of their

objects.

7. Conclusion

This paper started out with the question of how to characterize the dynamics of empirical

inquiries into specific objects of research, where our knowledge of such objects is limited.

Using an example from memory research, I explicated the notion of an ‗ill-understood‘ object of

research in terms of Rheinberger‘s concept of an epistemic thing, supplementing it with an

analysis of the ways in which such ‗things‘ are empirically individuated by means of temporary

(operational) definitions, which in turn are engrained in experimental paradigms. I emphasized,

however, that what is taken for granted can itself be revised or discarded.

One possible response to the project of analyzing the dynamics of research in this way is

that this is a historical project, which has little bearing on questions concerning the epistemic

status of the research results. I have attempted to answer this objection by placing my analysis

within the context of discussions about the nature of the relationship between history and

philosophy of science. I argued in favor of an integrated model whereby the historical narrative

does not illustrate some prior philosophical thesis, but rather informs the very philosophical

questions we ask. With this construal in mind, I argued that Rheinberger‘s approach points our

philosophical curiosity in the direction of accounting for the processes of knowledge generation

in situations of epistemic uncertainty, especially in the realm of exploratory research.

Acknowledging that an epistemological analysis requires some kind of normative perspective on

the research processes under investigation, I suggested that such a normative stance be developed

in close contiguity with the methodological norms operative in the sciences. In the case study at

hand, the norms in question concern particular (if preliminary) rules for the

application of concepts thought to individuate the object under investigation, and the more

general maxim to explicate those rules. I argued that by virtue of being closely tied to

experimental paradigms, the rules in question literally shape the exploration and

conceptualization of the objects of research. Analyzing them therefore not only allows us to

construct a particular type of historical narrative of the investigative process, but also puts us in a

position to develop our own evaluative categories by engaging critically with the normative

underpinnings of the research.

In conclusion, I would like to emphasize that even though the approach of this paper was

developed by way of an analysis of Hans-Jörg Rheinberger‘s conception of historical

epistemology, I am not committed to using this label for my own analysis. For one thing, I do not

claim to have done justice to the philosophical tradition that informs Rheinberger‘s approach. 17

My aim here has been to highlight some features of this approach, suggesting that it presents us

with a thought-provoking framework not only for the investigation of experimentation, but also

for practicing an integrated history and philosophy of science.

Acknowledgements

The author would like to thank the participants of the conference "What (Good) Is Historical

Epistemology?" for helpful questions and suggestions. In particular, I thank Chrysostomos

Mantzavinos for his insightful comments at the conference, as well as Thomas Sturm, Carl

Craver and an anonymous referee for this journal, whose valuable criticisms prompted me to

make some significant changes to the paper.

REFERENCES

Abel, G. (2010). Epistemische Objekte als Zeichen- und Interpretationskonstrukte. (In: S.

Tolksdorf / H. Tetens (Eds.): In Sprachspiele verstrickt. Oder: Wie man der Fliege den

Ausweg zeigt, (127-156). Berlin: De Gruyter.)

Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities.

(Chicago: University of Chicago Press)

Arabatzis, T. (this issue). On the Historicity of Scientific Objects.

Atkinson, R. & Shiffrin, R. (1968). Human Memory: A proposed system and its control

processes. (In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and

motivation (Vol. 2, pp. 89-195). New York: Academic Press.)

Atkinson, R. & Shiffrin, R. (1971). The Control of Short-Term Memory. Scientific American,

224, 82-90

Baddeley, A. & Hitch, G. (1974). Working Memory. (In G. Bower (Ed.), Recent Advances in

Learning and Motivation, Vol. 8, (pp. 49-90). New York: Academic Press.)

Baddeley, A., & Warrington, E. (1970). Amnesia and the Distinction Between Long- and Short-

Term Memory, Journal of Verbal Learning and Verbal Behavior, 9, 176-189

Chang, H. (2004). Inventing Temperature. Measurement and Scientific Progress (Oxford:

Oxford University Press)

Chang, H. (this issue). The Persistence of Epistemic Objects through Scientific Change.

Cohen, N. & Squire, L. (1980). Preserved learning and retention of pattern analyzing skill in

amnesia: Dissociation of knowing how and knowing that. Science, 210, 207–209

Conway, A., Kane, M., Bunting, M., Hambrick, D. Z., Wilhelm, O. & Engle, R. (2005). Working

memory span tasks: a methodological review and user's guide. Psychonomic

Bulletin, 12(5), 769-786.

Cowan, N. (2000). The magical number 4 in short-term memory: A reconsideration of mental

storage capacity. Behavioral and Brain Sciences, 24, 87–185

Cowan, N. (2008). What are the differences between long-term, short-term, and working

memory? Progress in Brain Research, 169, 323-338

Cowan, N. (2010). Multiple Concurrent Thoughts: The Meaning and Developmental

Neuropsychology of Working Memory. Developmental Neuropsychology, 35(5), 447-474

Daneman, M. & Carpenter, P.A. (1980). Individual Differences in Working Memory and

Reading. Journal of Verbal Learning and Verbal Behavior, 19, 450-466

Daston, L. (1999). Biographies of Scientific Objects. (Chicago: University of Chicago Press)

Danziger, K. (2008). Marking the Mind. A History of Memory. (Cambridge: Cambridge

University Press)

Engle, R. W., Tuholsky, S. W., Laughlin, J.E. & Conway, A.R. (1999). Working Memory, Short-

Term Memory and General Fluid Intelligence: A Latent Variable Approach. Journal of

Experimental Psychology: General, 128, 309-331

Feest, U. (2005). Operationism in Psychology - What the Debate is About, What the Debate

Should Be About. Journal for the History of the Behavioral Sciences, XLI(2), 131-150

Feest, U. (2009). What Exactly is Stabilized When Phenomena are Stabilized? Synthese (online

first, DOI: 10.1007/s11229-009-9616-7)

Feest, U. (2010). Concepts as Tools in the Experimental Generation of Knowledge in Cognitive

Neuropsychology. Spontaneous Generations: A Journal for the History and Philosophy of

Science, 4(1), 173-190

Feest, U. (2011). Revisiting the Experimenter‘s Regress. Topical Skepticism and the

Epistemology of Discovery (unpublished manuscript)

Jonides, J., Lewis, R. L., Nee, D. E., Lustig, C. A., Berman, M. & Sledge Moore, K. (2008). The

Mind and Brain of Short-Term Memory. Annual Review of Psychology, 59, 193-224

Kane, M., Conway, A., Miura, T. & Colflesh, J. H. (2007). Working Memory, Attention Control,

and the N-Back Task: A question of Construct Validity. Journal of Experimental

Psychology: Learning, Memory, and Cognition, 33(3), 615-622

Kitcher, P. (this issue). Epistemology without History is Blind.

Kuhn, T. (1962). The Structure of Scientific Revolutions. (Chicago, IL: University of Chicago

Press)

Lockhart, R. S. (2000). Methods of Memory Research. (In Tulving, E. & Craik, F. (Eds.), The

Oxford Handbook of Memory (pp. 45-57). Oxford: Oxford University Press.)

McElree, B. (2001). Working Memory and Focal Attention. Journal of Experimental

Psychology: Learning, Memory, and Cognition, 27, 817-835

Méthot, P.-O. From Concepts to Experimental Systems. Trends in Historical Epistemology. (In:

Schmidgen, H.; Schöttler, P. & Braunstein, J.-F. (Eds.). History and Epistemology. From

Bachelard and Canguilhem to Today’s History of Science. Berlin: Max Planck Institute for

the History of Science (in preparation).)

Miller, G. (1956). The magical number seven, plus or minus two: some limits on our capacity for

processing information. Psychological Review, 63, 81-97

Miller, G., Galanter, E. & Pribram, K. (1969). Plans and the Structure of Behavior. (New York:

Holt, Rinehart and Winston)

Nee, D & Jonides, J (2008). Neural Correlates of Access to Short-Term Memory. PNAS,

105(37), 14228-14233.

Nimtz, C. (2005). Reassessing Referential Indeterminacy. Erkenntnis, 62(1), 1-28

Owen, A. M., McMillan, K. M., Laird, A. R. & Bullmore, E. (2005). N-back working memory

paradigm: a meta-analysis of normative functional neuroimaging studies. Human Brain

Mapping, 25(1), 46-59

Papineau, D. (2009). Naturalism. The Stanford Encyclopedia of Philosophy (Spring 2009

Edition), http://plato.stanford.edu/archives/spr2009/entries/naturalism/)

Peters, M., Jelicic, M., Verbeek, H. & Merckelbach, H. (2007). Poor working memory predicts false

memories. Journal of Cognitive Psychology, 19(2), 213-232

Reichenbach, H. (1965 [1920]). The Theory of Relativity and A Priori Knowledge. (Berkeley &

L.A.: University of California Press)

Rheinberger, H.-J. (1997). Towards a History of Epistemic Things. Synthesizing Proteins in the

Test Tube. (Stanford: Stanford University Press)

Rheinberger, H.-J. (2001). Experimentalsysteme und epistemische Dinge. (Göttingen: Wallstein)

Rheinberger, H.-J. (2005). A Reply to Bloor: ‗Toward a Sociology of Epistemic Things.‘

Perspectives on Science, 13, 406-410

Rheinberger, H.-J. (2006). Epistemologie des Konkreten. Studien zur Geschichte der modernen

Biologie (Frankfurt: Suhrkamp)

Ricker, T., AuBuchon, A. & Cowan, N. (2010). Working Memory. WIREs Cognitive Science,

1, p?

Roediger, H. L. & Goff, L. M. (1999). Chapter 17: Memory. (In Bechtel, W. & Graham, G.

(Eds.), A Companion to Cognitive Science (pp. 250-264). Malden, MA: Blackwell.)

Rouse, J. (2002). How Scientific Practices Matter. Reclaiming Philosophical Naturalism.

(Chicago: University of Chicago Press)

Sanchez, C. A. (2011). Working Through Pain: Working Memory Capacity and Differences in

Processing and Storage Under Pain. Memory, 19(2), 226-232

Schacter, D. (1990). Introduction to ‗Implicit Memory: Multiple Perspectives.‘ Bulletin of the

Psychonomic Society, 28(4), 338-40

Schickore, J. (this issue). The significance of re-doing experiments: A contribution to historically

informed methodology.

Schickore, J. (forthcoming). More Thoughts on HPS. Another 20 years later. Perspectives on

Science.

Steinle, F. (1997). Entering New Fields: Exploratory Uses of Experimentation. Philosophy of

Science, 64, S65-S74.

Stotz, K., Griffiths, P. E. & Knight, R. (2004). How Scientists Conceptualise Genes: An

Empirical Study. Studies in History & Philosophy of Biological and Biomedical Sciences,

35(4), 647-73

Sullivan, J. (2009). The Multiplicity of Experimental Protocols: A Challenge to Reductionist and

Non-Reductionist Models of the Unity of Neuroscience. Synthese, 167, 511–539

Tulving, E. (1983). Elements of Episodic Memory. (Oxford: Clarendon Press)

Turner, M. & Engle, R. (1989). Is working memory capacity task dependent? Journal of Memory

and Language, 28, 127-154

Van De Linden, M., Collete, F., Salmon, E., Delfiore, E., Degueldre, G. & Luxen, A. (1999).

The Neural Correlates of Updating Information in Verbal Working Memory. Memory,

7(5/6), 549-560

Weber, M. (2006). Die Geschichte wissenschaftlicher Dinge als Epistemologie. Nach

Feierabend. Zürcher Jahrbuch für Wissensgeschichte, 2, 181-190

Wingfield, A., Stine, E., Lahar, C. & Aberdeen, J. (1988). Does the Capacity of Working

Memory Change with Age? Experimental Aging Research, 14(2), 103-107

1. The term "object" is in scare quotes here to indicate that what is meant are objects of research, which can include

phenomena, processes, mechanisms, or whatever scientists choose to investigate.


2. The expressions "epistemic objects" or "epistemic thing" are mostly used by historians of science, but there are

some recent attempts to broaden their scope to objects of non-scientific knowledge (e.g., Abel, 2010).
3. I owe this objection to one of the referees of this article.
4. I am using the terms synonymously here, but will shortly explain different usages that exist in the literature.
5. More information about standard short-term and working-memory tasks will be provided in section 5.1 below.
6. According to Cowan (2010), the term "working memory" first showed up in Miller, Galanter & Pribram‘s seminal

work about the planning of behavior (Miller et al., 1969).


7. To my knowledge, this type of situation has not received much attention in the philosophical literature, but see

Stotz et al. (2004) with respect to the gene concept.


8. I would like to thank Thomas Sturm for drawing my attention to the fact that I glossed over this distinction in a

previous version of this paper.


9. According to Joseph Rouse, it is a misunderstanding to construe the blurriness/vagueness of epistemic objects as

"'merely' epistemic" (Rouse, 2002, p. 338). His argument is part of an interesting and ambitious project to study the

relationship between normativity, naturalism, and scientific practices. Space does not permit me to discuss his

approach in more detail at this point.


10. See Kuhn‘s clarification of the concept in the postscript to his Structure of Scientific Revolutions.

11. This is compatible with the above definition since the application of psychological tests typically involves an

intervention (for example, an instruction to perform a task), to be distinguished from the fact that tests are often run

in experiments in order to determine the effects of another intervention (the independent variable of the test).
12. See Feest (2005) for an analysis of the historical origins of, and common misconceptions about, operationism.
13. See the editors‘ introduction to this volume.
14. He explicitly rejects the very idea of such a theory (Rheinberger, 2006).
15. Chang (2005) makes a similar case in defense of Bridgman‘s operationism.
16. While there are some parallels between this idea and Chang‘s (2004) notion of "complementary science" and

Kitcher‘s "pragmatic naturalism" (this issue), they cannot be followed up here.


17. For an informed and original account of this tradition, readers are referred to a recent article by Pierre-Olivier

Méthot (forthcoming), which also focuses on the operational character of concepts in an experimental context, but

does so by way of a comparative analysis of Rheinberger and Canguilhem.
