Technical Report
Number 356

Computer Laboratory

UCAM-CL-TR-356
ISSN 1476-2986

Simple, proven approaches to text retrieval

S.E. Robertson, K. Spärck Jones

December 1994

15 JJ Thomson Avenue
Cambridge CB3 0FD
United Kingdom
phone +44 1223 763500

http://www.cl.cam.ac.uk/
© 1994 S.E. Robertson, K. Spärck Jones

This version of the report incorporates minor changes to the December 1994 original, which were released May 1996, May 1997 and February 2006.

Technical reports published by the University of Cambridge Computer Laboratory are freely available via the Internet:

http://www.cl.cam.ac.uk/TechReports/

ISSN 1476-2986
 
Simple, proven approaches to text retrieval
S.E. Robertson and K. Spärck Jones
Microsoft Research Ltd, Cambridge, and Computer Laboratory, University of Cambridge

December 1994 version with updates May 1996, May 1997, February 2006
Abstract
This technical note describes straightforward techniques for document indexing and retrieval that have been solidly established through extensive testing and are easy to apply. They are useful for many different types of text material, are viable for very large files, and have the advantage that they do not require special skills or training for searching, but are easy for end users.
The document and text retrieval methods described here have a sound theoretical basis, are well established by extensive testing, and the ideas involved are now implemented in some commercial retrieval systems. Testing in the last few years has, in particular, shown that the methods presented here work very well with full texts, not only titles and abstracts, and with large files of texts containing three quarters of a million documents. These tests, the TREC tests (see Harman 1993–1997; IP&M 1995), have been rigorous comparative evaluations involving many different approaches to information retrieval.

These techniques depend on the use of simple terms for indexing both request and document texts; on term weighting exploiting statistical information about term occurrences; on scoring for request-document matching, using these weights, to obtain a ranked search output; and on relevance feedback to modify request weights or term sets in iterative searching.

The normal implementation is via an inverted file organisation using a term list with linked document identifiers, plus counting data, and pointers to the actual texts.

The user's request can be a word list, phrases, sentences or extended text.
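As a rough illustration of this organisation only (not the authors' implementation), the sketch below builds a toy inverted file in Python: a mapping from each term to the identifiers of documents containing it, together with within-document occurrence counts. The function name build_inverted_index and the sample documents are invented for the example.

    # Illustrative sketch: a toy inverted file mapping each term to
    # the documents containing it, with within-document counts.
    from collections import defaultdict

    def build_inverted_index(docs):
        """docs: dict of doc_id -> already-tokenised list of terms."""
        index = defaultdict(dict)          # term -> {doc_id: count}
        for doc_id, terms in docs.items():
            for term in terms:
                index[term][doc_id] = index[term].get(doc_id, 0) + 1
        return index

    docs = {
        "d1": ["text", "retrieval", "system"],
        "d2": ["text", "indexing", "text", "weighting"],
    }
    index = build_inverted_index(docs)
    # index["text"] -> {"d1": 1, "d2": 2}

In a real system each posting list would also carry pointers to the stored texts, as described above.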
1 Terms and matching
Index terms are normally content words (but see section 6). In request processing, stop words (e.g. prepositions and conjunctions) are eliminated via a stop word list, and they are usually removed, for economy reasons, in inverted file construction.

Terms are also generally stems (or roots) rather than full words, since this means that matches are not missed through trivial word variation, as with singular/plural forms. Stemming can be achieved most simply by the user truncating his request words, to match any inverted index words that include them; but it is a better strategy to truncate using a standard stemming algorithm and suffix list (see Porter 1980), which is nicer for the user and reduces the inverted term list.

The request is taken as an unstructured list of terms. If the terms are unweighted, output could be ranked by the number of matching terms – i.e. for a request with 5 terms first by documents with all 5, then by documents with any 4, etc. However, performance may be improved considerably by giving a weight to each term (or each term-document combination). In this case, output is ranked by sum of weights (see below).
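A minimal sketch of this unweighted (coordination-level) matching, assuming a tiny hand-made stop list and crude suffix stripping in place of the full Porter algorithm:

    # Sketch: rank documents by how many stemmed request terms they
    # contain. The stop list and suffix stripping are deliberately
    # crude stand-ins for a real stop word list and the Porter stemmer.
    STOP_WORDS = {"a", "an", "and", "the", "of", "to", "in", "for"}

    def stem(word):
        # Crude truncation; a real system would use Porter (1980).
        for suffix in ("ing", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def index_terms(text):
        return {stem(w) for w in text.lower().split() if w not in STOP_WORDS}

    def rank(request, docs):
        # docs: dict of doc_id -> raw text; returns doc ids ordered by
        # the number of matching request terms.
        req = index_terms(request)
        scores = {d: len(req & index_terms(t)) for d, t in docs.items()}
        return sorted(scores, key=scores.get, reverse=True)

    docs = {"d1": "weighting of index terms", "d2": "stop word lists"}
    print(rank("term weighting", docs))   # d1 first: two matching terms

The weighted schemes described in the following sections replace the simple match count with a sum of term weights.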
