
Rethinking Relevance (1984)




Rethinking Relevancy
by Peter Tillers
Copyright 1984, 2002
The material below is a transcription of recently-rediscovered rough notes that I prepared for a
lecture that I delivered at University College London in May, 1984. Even though the notes below
are hopelessly wordy, I have reproduced them almost word-for-word and almost entirely
unedited. Furthermore, there are many things in these rough notes that I would say differently
today or not at all. Nonetheless, it is possible that my original notes will be of interest to
someone. But bear in mind that the notes below were extraordinarily rough -- even by the rough
standards I used in 1984. Thank you.
What is relevancy? What is the principle of relevancy? I had to face these questions because some years
ago I began to revise the first volume of [John Henry] Wigmore's treatise on evidence. I decided that
somewhere in the notes and in the text I should both tell the reader about the literature on relevancy in
the last 40 years or so and give the reader some of my own views on the subject of relevancy. I did that.
[See, e.g., 1A Wigmore, EVIDENCE IN TRIALS AT COMMON LAW Sections 30 & 37 and extensive
note material in, e.g., Sections 24, 26, 28-34 & 41 (Tillers rev., 1983). See also note material in 1
Wigmore, EVIDENCE IN TRIALS AT COMMON LAW Sections 1-2 and text & notes in Section 14.1
(Tillers rev., 1983).] I do not want to restate or summarize the product of my efforts. You can look at my
books to see what I said and the conclusions I reached. Instead I would like to give you a general
impression of the kinds of issues and problems that I saw as I examined the literature and case law
regarding relevancy. This is useful, I think, since the right answers can be given only if the right
questions are asked. Furthermore, of course, I may not have given the right answers to the questions I
did ask. It is useful, I think, to step back from the details of any problem now and then and to reflect
generally on the type of problem that faces us. So this is what I would like to do. With your indulgence,
however, I would like to keep my remarks brief and informal.
For more than 80 years the usual starting point of any discussion of relevancy has been [James Bradley]
Thayer's dictum that a rational system of evidence presupposes the principle of relevancy. Since 1898
this dictum has been repeated innumerable times by writers such as Wigmore, [Edmund] Morgan, [Jack
B.] Weinstein, and practically everyone else who has written about relevancy. This dictum, however,
puzzled me, for two reasons. First, I was puzzled because I was not sure how or why this principle of
relevancy leads to the conclusion that the trial judge should exclude irrelevant evidence. I was able to
agree, at least initially, that a rational trier cannot be influenced in his decision by irrelevant evidence.
http://tillers.net/relevancerethink.html (1 of 10)3/25/2005 5:31:43 AM
This proposition, however, seemed to be a mere truism, following from the very definition of "relevant."
What was unclear to me was why the trial judge should decide that the jury should not see irrelevant
evidence merely because of the principle that a rational trier will not have his decision affected by
irrelevant evidence.
Second, I was puzzled by Thayer's dictum because I had a hard time seeing how the principle of
relevancy is, as it was said to be, the fundamental basis of the entire law of evidence. In examining
various casebooks, I was struck by the relatively meager amount of attention that was devoted to such a
putatively important topic. Thus, for example, in the sixth edition of the leading American
casebook on evidence -- at that time edited by [John] Maguire, [James H.] Chadbourn, Weinstein, and
[John] Mansfield -- there were, I believe, only about 3 or 4 pages expressly devoted to the subject of
relevancy. While the Maguire casebook was an extreme example, the disproportion was still severe in
most other American casebooks. The overwhelming portion of those books was devoted to discussion of
specific exclusionary rules rather than to the principle of relevancy. To be sure, sometimes it was said
that various exclusionary rules -- the rule excluding evidence of subsequent repairs, the character
evidence rule, the rules concerning habit evidence, and even the hearsay rule -- somehow incorporated
the principle of relevancy. But this claim also puzzled me and once again I was struck by how little
discussion was devoted to this subordinate claim.
In my work on the revision [of the first volume of Wigmore's multi-volume treatise on the law of
evidence] my main concern was with the general principle of relevancy rather than with its relationship
to specific exclusionary rules[,] so my attention was focused on my first question, viz., how and why
does the principle of relevancy authorize the trial judge to exclude evidence that he deems irrelevant? As
I approached this question, however, the question of the relationship of the relevancy principle to the
exclusionary rules was in the back of my mind. When I turned to the law reviews to examine discussions
of relevancy, I became more puzzled. I looked, first, at Morgan's discussion of the principle of
relevancy. [Morgan, BASIC PROBLEMS OF EVIDENCE 183-188 (1961)]
While much of what Morgan said seemed to me to be quite sensible, I had the nagging suspicion that he
was not talking about relevancy but about something quite different. Morgan, and the many other
persons who followed his analysis, set forth, as far as I could determine, a method for evaluating the
probative force of evidence rather than its relevance. By now all persons agreed that evidence is relevant
if its existence makes some material ... fact more or less probable than it would be without the evidence.
Morgan's analysis, as far as I could tell, instructed the judge how to assess the probative force, the
strength, of the evidence, but not how to assess its relevance. If this is what relevance means, why is it
that in a discussion of relevance the trial judge is being given a method for assessing the strength or
probative force of the evidence? I was tempted of course to conclude that Morgan was not talking about
relevancy at all but, still, he said he was talking about relevancy. If he was not talking about relevancy,
why did he think he was?
It seemed apparent to me, of course, that a method for assessing the probative force of evidence would
be useful to a trial judge in many contexts since in many contexts the decision whether to admit or
exclude requires, at least in part, an assessment of the probative force of evidence. The most obvious
example of this, of course, is the rule allowing the trial judge to exclude unduly prejudicial evidence.
But, whatever the service Morgan's method performs for the analysis of problems other than relevancy,
what did his method have to do with relevancy? I began then to wonder whether evidence is ever
excluded on the ground of its irrelevancy alone. Was Morgan, without knowing it, implicitly rejecting
Thayer's dictum that relevancy is the foundation of the law of evidence?
As I thought about these questions, I recognized the possibility that the explanation for the prominence
of the label "relevancy" might be primarily attributable to historical factors involving the need to
legitimate judicial administration of the exclusionary rules. Perhaps the principle of relevancy became
important because of the need during Thayer's time to justify the conclusion that the rule of relevancy
does not illegitimately intrude into the prerogative of the jury to be the trier of fact. Perhaps the
relevancy principle became important because that principle made possible the distinction between the
relevancy of evidence and its weight and enabled scholars and judges to say that the jury, rather than the
judge, weighs the probative value of the evidence.
Having done no significant historical research to substantiate this thesis, in my revision I offered this
thesis only as a speculative hypothesis, but this idea did lead me to think further about some other things
of interest. My speculation about historical considerations now induced me to hold to the working
hypothesis, for purposes of research, that the principle of relevancy, as articulated by Morgan and his
successors, in fact has nothing to say about the legitimation of the exclusionary rule that goes under the
name of "relevancy." I came to the conclusion -- to which I still adhere -- that the bare principle of
relevancy in no way explains why the trial judge, rather than the jury, should determine whether
evidence is relevant. Something may explain and justify the rule of relevancy but it is not the principle
of relevancy that does this.
With my skeptical historical hypothesis in mind, I began to wonder whether I had not also inadvertently
[stumbled] onto an explanation for the relatively meager attention given to the subject of relevancy in the casebooks
and, I might mention, in the cases: Is it possible that as the importance of legitimating judicial
administration of the factfinding process by the jury became a less burning issue, scholars and judges
were more willing to diminish the importance attributed to the principle of relevancy?
Wigmore had an expansive notion of relevancy -- enabling him to explain or depict many decisions to
exclude as applications of the principle of relevancy -- but, ever since George James wrote a so-called
seminal article in 1941, the modern scholarly consensus favored an extremely narrow notion of
relevancy known as "logical relevancy." The orthodox view in America was that evidence is relevant if
it has the slightest probative value. This theory, ironically enough, undermined the significance of the
idea of relevancy as a device for explaining the existence of the exclusionary rules since it became
increasingly hard to see how evidence such as evidence of subsequent repairs, character evidence, and
the like could be regarded as irrelevant. The irony is that this theory of logical relevancy was trumpeted
in articles proclaiming the importance of the principle of relevancy. However, from an historical
perspective, the evisceration of the force of the exclusionary rule of relevancy made eminent sense; few
people today worry about whether or not the exclusionary rules unduly abridge the factfinding
prerogatives of the jury. Morgan's emphasis on a technique for assessing the probative force of the
evidence, rather than its relevancy, made eminent sense from this historical point of view. Morgan
intuitively realized, if he did not say so explicitly, that the principle of relevancy had become practically
of no importance to the proper administration of the exclusionary rules and that the practical issue in
almost every case concerns the weight of the evidence to be admitted or excluded.
Feeling secure in my conviction that Morgan and other[s] like him were in fact tendering a theory for the
assessment of the probative force of evidence, I turned my attention to the details of the recipe offered
by persons such as Morgan, Trautman, [George] James, Weinstein, and, yes, Wigmore. Still having the
imprint of my earlier musings in mind, I first focused on some puzzles involved in the notion of
relevancy.
Having read [Jerome] Michael and [Mortimer] Adler, [see, e.g., Michael & Adler, THE NATURE OF
JUDICIAL PROOF: AN INQUIRY INTO THE LOGICAL, LEGAL, AND EMPIRICAL ASPECTS OF
THE LAW OF EVIDENCE (1931)], who in many respects propounded a theory of inference similar to
that of James, Morgan, and the others, I briefly entertained the thesis that no evidence can be shown to
be irrelevant. I eventually dismissed this thesis, recognizing that I was drawing this potential implication
on the basis of the theory of Michael and Adler and that the conclusion that no evidence is irrelevant is
nothing more than a reductio ad absurdum of the theory propounded by Michael and Adler, or at least of
one part of it.
In their view, evidence is relevant if an evidential hypothesis connects a factum probans and a
factum probandum. The difficulty is that Michael and Adler offered no criteria for the legitimacy
of an evidential hypothesis and thus openly asserted that any fact can be made relevant to any
other by the manufacture of an appropriate evidential hypothesis. They fell into this error, if that
is what it was, as a result of their effort to view problems of relevancy and proof purely as
problems of the logical relations between propositions. Being of this view, and being influenced
by positivist thought, they were not able to imagine that logic alone could constrain the
manufacture of evidential hypotheses -- those hypotheses being in the nature of the universal
premise in a syllogism. Hence, they were willing to assert that any fact might be relevant to the
existence of any other fact.
In any event, I was convinced that a rational person can regard some evidence as irrelevant. But although
any rational person is entitled to view at least some evidence as ... irrelevant to
the question of the existence of some fact, I was led to re-examine the view that a rational trier of fact
must disregard irrelevant evidence in making his estimate of the probability of the existence of some
fact. It occurred to me that it is altogether necessary to make a distinction between a trier's decision on
the basis of the evidence before him, on the one hand, and his investigation of the evidence, his
acquisition of evidence, in order to make a decision, on the other [hand]. To be sure, a [rational] trier
must disregard evidence that he regards as irrelevant but it does not follow that he must not examine
evidence that proves to be irrelevant; [o]ne must look at the evidence to see whether or not it is
irrelevant since one cannot always assume that one knows, before looking at the evide[n]ce in detail,
what the precise character of that evidence is a[n]d how it may bear on the ... facts in issue. (This
process of investigation is familiar enough when there is a bench trial and the trial judge examines an
offer of proof -- or, as I prefer to say, an offer of evidence -- in order to determine whether it is relevant
and determine whether to "exclude" the evidence.)
These ruminations buttressed my conviction that the rule of relevancy cannot be legitimated solely on
the ground that the judge only excludes irrelevant evidence and leaves it to the jury to assess its
weight. I also acquired the conviction, which I try to explain in my revision, that determinations of
relevancy are not, as is usually claimed, essentially or significantly different from determinations of
weight. In each case, the trier employs a complex inferential process to determine the significance of the
evidence in question[,] and the fact that the outcome of that process does not alter the trier's prior
assessment of probability does not change the nature of the evaluation employed. Eventually I used
something called Bayesian theory to explain this conclusion. I also now think that it makes ... eminent
sense to say, sometimes, that irrelevant evidence has provided valuable information, viz., to say that
evidence that has not altered my prior assessment of the probability of a fact in issue has nonetheless
provided me with important information that gives me more confidence in my assessment of that fact in
issue.
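The last observation -- that evidence which leaves an assessment of probability unchanged can still add to one's confidence in it -- can be given a concrete illustration in later Bayesian terms. The Beta-distribution apparatus and the numbers below are standard textbook machinery assumed for the sketch; none of it appears in the 1984 notes.

```python
# An unknown proportion with a Beta(a, b) prior; observing k successes
# in n trials updates it to Beta(a + k, b + n - k). Evidence that
# matches the prior's expectation leaves the point estimate where it
# was but tightens the distribution around it.

def beta_mean_and_variance(a, b):
    mean = a / (a + b)
    variance = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, variance

prior_mean, prior_var = beta_mean_and_variance(2, 2)

# Observe 10 successes in 20 trials -- exactly the 50-50 split the
# prior already expected:
post_mean, post_var = beta_mean_and_variance(2 + 10, 2 + 10)

print(prior_mean, post_mean)   # 0.5 0.5 -- the assessment is unchanged
print(prior_var > post_var)    # True -- confidence in it has grown
```

Under the thin definition of relevance -- evidence is relevant only if it shifts the probability of a fact in issue -- this evidence counts as irrelevant, yet it plainly carries information.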
These ruminations ... about the accuracy of the standard formulation and description of the principle of
relevancy, however, did not take center stage in my thinking about Morgan's and James' descriptions of
the inferential process. As I considered their views, I was struck by the prominent place accorded
generalizations in their scheme. I was puzzled, in some measure, by their attacks on Wigmore since it
seemed to me that Wigmore, on the one hand, and Morgan, James, and Trautman (to name a few), on
the other hand, were not that far apart in their views of the nature of the inferential process. In thinking
about this disagreement among members of the same fraternity, I was struck by the emphasis of
Wigmore's critics on the importance of explicit formulation of all generalizations that support a
particular inference.
While I was in broad accord with the thesis that explicit rational evaluation of inference was likely to be
helpful in improving the quality of our inferences, I thought it was naive to claim, as one writer did, that
no inference could be considered justifiable or rational if all generalizations that are required to support
it are not stated. This claim struck me as astonishing and still does. Having a vague recollection of what
logicians and philosophers had said about the complexity of the background assumptions that form the
basis of our interpretations of our world, I thought that the aspiration to explicit rational [analysis] of inference had
led the proponents of logical relevancy to an untenable extreme; if one were to state all the premises
upon which a particular inference rests, one would have to be able to describe, in logical and systematic
form, one's entire view of man and the cosmos.
I expressed such sentiments in my revision and, for support, I drew on literature in the philosophy of
science that questions the ability of human beings to "process" all information "explicitly." Thus, a
major issue involved in my general approach, an issue that I have not yet resolved, is how much and
what we can expect fr[o]m rational analysis of evidence and inference given that there is much in our
intellectual and emotional and biological framework that we do not understand well. What reason do we
have for believing that imperfect rational analysis of inference does not distort our inferences as much as
it improves them?
My suspicion is that thinking about our inferences and careful attention to details improve our
inferences. But there is reason to worry about a method of analysis that purports to be a kind of decision-
procedure that the trier should follow. We must first of all be sure that the decision-procedure described
is rational and correct but we must also be sure that in using this decision procedure we do not obliterate
factors in inference that are difficult to formulate and describe. Can we devise a general decision
procedure that is universally applicable but that yet avoids the danger of distortion?
The person who has most carefully investigated this question is sitting in this room, nam[e]ly, William
Twining. I think his efforts to develop some sort of a decision procedure -- or, at least, a reflection and
self-reflection procedure -- and to apply it to concrete cases are of the greatest importance and, besides,
may I s[ay], exhibit the great glory of the English empiricist tradition, a tradition that asserts that
theories must eventually be put to the test. In this sense, I think, William [Twining's] work is more
advanced than anything else that has been done by legal theorists, including myself, in the last 80 years.
But to return to my theme. In examining the details of the theory of logical relevancy, I had the nagging
suspicion that there was something wrong with [Morgan's account of] the way in which [a person such
as a trial judge or a trier of fact should] discount the force of evidence on the basis of the
gener[a]lizations thought to be required to support [an] inference. That method of discounting was admirably
simple.
In brief, [Morgan's recipe] told the trier to estimate the probability of each supporting gener[a]lization
and, to reach a final estimate about the probability of the fact in issue, to multiply all of the probabilities
of all of the gener[a]lizations involved in the chain of inferences that happen to be involved in the
inference about the final fact in issue. From one point of view, this seemed to make eminent sense. If,
for example, evidence that defendant frequently carries a knife is offered to prove that defendant killed a
person with a knife on a particular occasion, it might be said that 3 gener[a]lizations are required to
support the proposed inference: (1) people who carry knives often intend to use them aga[i]nst other
persons (under certain circumstances) and (2) people who intend to use knives against other persons
often do use knives against other persons (under certain circumstances) and (3) people who use knives against other
persons (often) kill other persons. If this sequence of gener[a]lizations accurately describes the chain of
inferences involved in the assessment of the probative force of the evidence of the knife on the question
of killing (and it surely does not), Morgan instructs us that, to compute the force of the evidence, the
trier must discount the probative force of the evidence by the uncertainty present in each gener[a]lization
in each link of the chain of inferences. Thus, for example, if only 30% of persons carrying knives
contemplate or intend using them against other persons, and if only 20% of the persons having this
intent do use them against other persons, and if 40% of the persons using those knives in this way kill
the other person, then the probability of a killing, on the basis of the evidence of the knife, is only
30% x 20% x 40%, viz., [.024 or 2.4%].
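The arithmetic of Morgan's multiplication recipe, as just described, can be sketched in a few lines of code. The percentages are the illustrative figures from the knife example above; the sketch records what the recipe computes, not whether the recipe is sound.

```python
# A minimal sketch of Morgan-style chain discounting. The three
# percentages are the illustrative figures from the knife example;
# nothing here vouches for the recipe itself.

def chain_product(link_probabilities):
    """Multiply the probability of each generalization in the chain,
    as the recipe instructs."""
    result = 1.0
    for p in link_probabilities:
        result *= p
    return result

links = [0.30,  # (1) carries knife -> intends to use it
         0.20,  # (2) intends to use it -> does use it
         0.40]  # (3) uses it -> kills
print(round(chain_product(links), 4))  # 0.024, i.e. 2.4%
```

Note that each added link can only shrink the product, which is one reason the number and character of the links have such a decisive effect on the outcome.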
It was apparent to me and it must be apparent to you that Morgan's picture of a chain of inferences
presents many puzzles. Thus, for example, how does one know for sure how many links there are in the
chain? (The number of links and their character clearly can have a decisive impact on the outcome of the
trier's computations.) Also, for example, why did Morgan not speak about ... converging chains of
evidence and their evaluation? (Perhaps because Morgan thought he was talking only about relevancy
or, at least, only about the admissibility of separate pieces of evidence.) However, questions of this sort,
while very important, were overshadowed by my nagging suspicion that Morgan's account [of]
inferential chains was caught in some sort of a paradox. Using [Morgan's] scheme, it seemed to me that
the same piece of evidence might simultaneously be considered as evidence favorable to a proposed
inference (e.g., killing) and as unfavorable to the proposed inference. Thus, by a reasoning process I
cannot fully re[c]ount [here, in London, today], I became convinced that the question is not, for
example, how often people who frequently carry knives intend to attack other people but how much
more likely [it is that] such people [people who carry knives]... attack other people than do people who
do not habitually carry knives. To use a comfortable example, something is amiss with Morgan's
analysis if the evidence is escape from jail and the issue is the guilt of the [escaping] defendant. If one
believes the gener[a]lization that 10% of those who escape from jail are guilty and if one believes that
no other links of inference are involved, then the evidence of defendant's escape establishes that the
probability of defendant's guilt is 10% when the evidence of escape is considered by itself. But this is all
wrong. After all, if it turns out that we believe that the greater number of people who escape from
jail are innocent, the evidence not only does not favorably affect the [probability] of guilt but in fact
decreases it. But if one uses Morgan's analysis, the evidence of escape is relevant and is admissible to
show guilt. The paradox ... is that the same evidence is also favorably relevant to the [hypothesis] of non-
guilt. Something is quite wrong here.
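The point can be restated, in later Bayesian vocabulary, with invented numbers: what moves the probability of guilt is not the fraction of escapees who are guilty but the likelihood ratio -- how much more (or less) often the guilty escape than the innocent do. The prior and the conditional probabilities below are pure assumptions for illustration; they come from me, not from Morgan or the 1984 notes.

```python
# Bayes' rule applied to the jail-escape example, with assumed numbers.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, given P(H) and the two likelihoods."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # assumed prior probability of guilt

# If the guilty escape more often than the innocent (0.20 vs 0.05),
# the evidence of escape raises the probability of guilt:
print(round(posterior(prior, 0.20, 0.05), 2))  # 0.8

# If the innocent escape as often or more (0.05 vs 0.20), the very
# same evidence lowers it:
print(round(posterior(prior, 0.05, 0.20), 2))  # 0.2
```

On this reading, escape is favorable evidence of guilt only when the likelihood ratio exceeds one -- and that is exactly the comparison that a bare chain of generalizations never asks for.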
To make a long story very short, as a result of ruminations such as these, I became convinced that
Morgan and ... similar writers had a vastly oversimplified theory of probability that could lead to very
strange conclusions indeed. I sensed that Morgan was employing the product rule and that there was
something wrong with the use of the product rule here. Since I did not understand the theory of
probability well enough to say exactly why Morgan's use of the product rule in this context was wrong, I
decided I had better learn something about formal probability theory. [year 2002 note: Morgan's work
was not the only reason why Tillers decided to study probability theory.] It was plain to me I would not
get much farther in my thinking until I did so. (As a result of such efforts at self-education, I came to
believe that Morgan's theory of inference has affinities with what is called the relative frequency theory
of probability and that my criticism of Morgan's theory had affinities with the critique of relative
frequency theories of probability by Bayesians.)
I do not [now] want to recount [in detail] what I think I learned from [my] study [of probability theory].
[Instead] I want to make the point that it is remarkable how little scholars of evidence (including myself)
know about formal theories of probability. This ignorance is remarkable since the law of evidence
purportedly addresses itself to decisionmaking under uncertainty -- to the problem of producing reliable
estimates of probabilities. Probability theory, of course, deals, at least in part, with the same subject.
This ignorance [of probability theory], I am now convinced, is also dangerous. It can, first of all, lead to
the kinds of errors that Morgan committed. It may also make us vulnerable to claims by those
acquainted with formal probability theory that the legal process ought to be restructured to make it more
rational, viz., so that the methods used to assess probabilities in trials and other legal processes are more
in conformity with those methods of assessment that are rational from the standpoint of some kind of
formal probability theory.
But the danger is not simply that some sort of flawed vision of the factfinding process will be foisted on
us by mathematicians and probability theorists because we -- law teachers, lawyers, and other law-
trained persons -- are helpless to make any sensible objections to such proposals. The [bigger] danger, in
my mind, is that we are ... using notions of probability that are at root wrong or, at least, oversimplified.
We have something to learn from formal theories of probability.
[If my talk about the importance of formal probability frightens you,] let me reassure you that I do not
think that the importance of studying probability [grows out of any supposed importance of] the project
of comprehending the varieties of probability theory, deciding which theory makes most sense, and then
applying that particular theory to construct an ideal model of the factfinding process. I think that the
importance of studying formal probability theory lies in part in understanding the limits of formal
probability theory.
However, I do not want to reassure you too much. To say that formal probability theory has limits is not
to say that it has no value. It does have value, and if your only defense to the use of formal probability
theory in the courtroom is that it has limits and therefore should not be used in the courtroom, you are
likely to be dismissed by the scientifically educated public -- a rapidly growing segment of the
population -- as [a] Philistine and a know-nothing. The object is to decide precisely what those
limits are. If you leave yourself defenseless against the advocates of formal probability theory of one
kind or another, ... the result will be [I believe] that ... formal probability will increasingly come [into the
courtroom] by the back door as a result of increasing delegations of factfinding responsibilities to expert
witnesses and, more generally, to expert factfinding [entities that] use probabilistic analyses to resolve
issues of fact. This is already happening on a wide scale, at least in the United States.
A final word now [about] what the Germans might call this genetic [account] of ... rethinking relevancy.
As I pondered one particular interpretation and application of the standard calculus of probability --
namely, Bayesian theory -- I became preoccupied with the contrast between the Bayesian approach and
the approach [to probability and uncertainty] usually found in the legal literature.
The legal literature on inference [generally] put[s] great emphasis on ... generalizations while, strikingly,
Bayesian theory of the personalist stripe [year 2002 qualifiers: "seemingly and generally"] regards
generalizations as wholly immaterial to a rational analysis of the inferential process. [Today -- in the
year 2002 -- I would say and I would have to say much more about this point than I did in 1984.] On the
whole, I was much in sympathy with the general view that theories of the factfinder about man and
his world do and must have a crucial part to play in the inferential process[,] and, quite justly, I was
enjoined by William Twining to pay more attention to this issue since, quite appropriately, he told me
that I had to consider the important new theory of probability developed by L. Jonathan Cohen, who,
like the legal scholars, places great emphasis on ... generalizations in inference.
However, while I remain convinced of the importance of theory in inference, I am not yet altogether
convinced that speaking about gener[a]lizations is the appropriate way to speak about the [role] of
theory in at least one important class of cases -- specifically, those cases involving an assessment of the
probable behavior of human beings. For while I readily confess that generalizations about human
behavior a[r]e sometimes all that one can meaningfully talk about ...., I am not yet convinced that talk
about generalizations is ... sufficient in many contexts. My ruminations on Bayesian theory -- and my
prior background in philosophy (including my interest in Kant, as William rightly pointed out in [a]
forthcoming essay) make me believe that much more attention must be paid to the various ways in
which human beings structure the evidence that is presented to them.
I submit that it is often impossible for a trier of fact to reliably predict the behavior of persons in
unobserved situations unless the trier is able to understand the world of thought, reason, and emotion
that the human being in question lives in and it is my view that th[i]s sort of necessary understanding by
the trier cannot be adequately captured by talk about gener[a]lizations. Whether or not I am right, this is
an issue of great importance and I think it must occupy an important place in future discussions of the
nature of inference. The question of the role of generalizations [in inference], unfortunately, raises
fundamental epistemological questions of the most intractable sort, so quick progress on [the] question
[of inference about human behavior] cannot be expected. [year 2002 note: I revisited this general
question, first in 1986 in "Mapping Inferential Domains," 66 B.U. L. Rev. 883, and, second, in 1998 in
What Is Wrong with Character Evidence?]
[I have deleted the concluding note material. It is too murky to interest anyone. That material dealt in
part -- murkily! -- with German Idealism's emphasis on the constructive role of the observer in making
judgments about the world.]
