
International Conference on Information and Communication Technology (ICICT 2007), 7-9 March 2007, Dhaka, Bangladesh

Intelligent Agent System for Bio-medical Literature Mining

Md. Tawhidul Islam1, Durgaprasad Bollina2, Abhaya Nayak3, Shoba Ranganathan1,4

1Department of Chemistry and Biomolecular Sciences, 2Biotechnology Research Institute and 3Department of Computing, Macquarie University, Sydney, Australia; 4Department of Biochemistry, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.

Email: mislam@micros.com, dbollina@bio.mq.edu.au, abhaya@ics.mq.edu.au, shoba@els.mq.edu.au

Abstract
With the advance of World Wide Web technology and of research in the bioinformatics and systems biology domains, there is an increasing need for automatic Information Extraction (IE) systems that can extract information from scientific literature databases. Extraction of scientific information from biomedical articles is a central task in supporting biomarker discovery efforts. In this paper, we propose an algorithm that extracts scientific information on biomarkers (genes, genomes, diseases, alleles, cells, etc.) from text by finding the focal topic of a document and extracting the most relevant properties of that topic. The topic and its properties are represented as semantic networks and then stored in a database. The IE algorithm extracts the most important biological terms and relations using statistical and pattern-matching NLP techniques. The tool is expected to help researchers obtain the latest information on biomarker discovery and other biomedical research advances. We present preliminary results demonstrating that the method has strong potential for biomarker discovery.

1. Introduction
A biomarker is associated with a biological function or disease process and is used as a biochemical indicator to measure the progress of a disease or the effects of treatment. There are, however, obstacles to developing clinically useful biomarker tests, including technical challenges associated with validating potential markers, and challenges associated with developing, evaluating, and incorporating the screening and diagnostic tests that make use of those markers into clinical practice.

Over the past few decades there has been remarkable growth in the amount of biomedical data. In particular, the sequencing of the human genome and of quite a few other organisms has generated complete genomic sequences of unprecedented number and size. This development is accompanied by much data of various kinds, including protein sequences, results from large-scale genomic and proteomic experiments, and a large body of published literature [10]. This literature is a potential source of knowledge discovery and can help scientists gather recent research outcomes on biomedical concepts such as genes, proteins, diseases, drug discovery and many other topics [11].

The number of articles published each year in the biomedical domain is increasing rapidly, which makes it no longer possible for a researcher to read all the relevant articles manually. Figure 1 shows the growth of literature in PubMed for "biomarker".

[Figure 1: New papers published in PubMed on "biomarker" by year; the visible data points include 93,304 and 121,892 articles.]
This rapid growth of text urges the need for automatic tools that can extract the information of interest to an individual researcher from these large volumes of text. Text mining is a technology that makes it possible to discover patterns and trends semi-automatically from huge collections of biomedical literature using natural language processing (NLP), information retrieval, information extraction, and data mining [9,12,13,14].
One of the most promising approaches to mining text is natural language processing (NLP). The goal of NLP is to analyze, understand and generate the languages that humans use naturally; it encompasses a variety of computational methods ranging from simple keyword extraction to semantic analysis and document parsing. Advanced NLP systems use statistical and machine learning methods to recognize not only relevant keywords, but also their distribution within a document. The spectrum of machine learning technologies applicable to text mining in bioinformatics includes genetic algorithms, artificial neural networks, bio-statistical methods, Bayesian methods, support vector machines, decision trees, etc. Support vector machines (SVMs), a supervised machine learning technique, have been shown to perform well in multiple areas of biological analysis, including evaluating DNA microarray expression data [15] and detecting remote protein homologies [16]. In this paper we discuss an agent-based system architecture that uses advanced NLP techniques to retrieve information from biomedical documents.
In the first section we give an overview of the system architecture, and then we explain the algorithm with the necessary examples. Next we describe some of the experimental results. Finally, in the conclusion, we discuss the evaluation, limitations and extensions of our algorithm.

2. Proposed Architecture
The objective of the system is to generate summaries of literature in the biomedical domain based on user queries. The system does not require extensive linguistic analysis or machine learning, only shallow language processing. In the context of the Web, we rely on conventional search engines and a web encyclopedia, and exploit the structure of web pages to identify candidate phrases for information retrieval. To create the contents, a web encyclopedia (e.g. Wikipedia) and multiple but unique web pages returned by search engines are considered. The features added or improved compared to the previous system [3, 19] are improved web search, filtering, extraction, ranking, summarization and output.

The system consists of multiple agents performing specific tasks. Figure 2 shows the architecture of the system in terms of agent interaction. Briefly, the Query Processor agent performs key-phrase extraction for topic identification from the input query, together with disambiguation processing for the topic; it uses Wikipedia [7], the Stanford Biomedical Abbreviation Server [4] and AcroMed [5] to disambiguate senses, abbreviations and acronyms. The Web Crawler and Literature Extractor agents interact with each other to fetch web pages based on search keys, employing search engines (e.g. PubMed). The Literature Extractor agent also reads each web page and produces simple text files by extracting data from it. The Literature Ranker ranks the literature produced by the Literature Extractor, and the Text/Sentence Ranker agent ranks the sentences within the extracted literature. The Text Summarizer agent creates summarized text chunks from the extracted text files and presents them to the user.

[Figure 2: Top-level architecture of the system. The User's query flows through the Query Processor to the Web Crawler, which searches the Web (e.g. PubMed); the Literature Extractor, Literature Ranker, Text/Sentence Ranker and Text Summarizer agents then perform the text mining and return the summary.]
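To make the agent interaction concrete, the following minimal Python sketch wires the pipeline of Figure 2 together as a chain of single-responsibility stages. All function names and the placeholder return values are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List

# A minimal sketch of the agent pipeline in Figure 2; each stage consumes
# the previous stage's output. All names are illustrative assumptions.
def query_processor(query: str) -> List[str]:
    # Key-phrase extraction and sense disambiguation (Section 3.1).
    return [query]

def web_crawler(search_topics: List[str]) -> List[str]:
    # Fetch result links from engines such as PubMed (Section 3.2).
    return ["http://example.org/placeholder" for _ in search_topics]

def literature_extractor(links: List[str]) -> List[str]:
    # Download pages and reduce them to plain-text chunks (Section 3.3).
    return ["extracted text" for _ in links]

def run_pipeline(query: str, stages: List[Callable]) -> object:
    data: object = query
    for stage in stages:          # the ranker and summarizer stages would
        data = stage(data)        # follow the same convention (3.4-3.5)
    return data

summary = run_pipeline("biomarkers for leukemia",
                       [query_processor, web_crawler, literature_extractor])
```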


3. Algorithm and Implementation
In this section we discuss the proposed algorithms used to implement the system.

3.1 Query Processor
A user types a query in the form of a question. The query analyzer first validates linguistically that a proper question has been asked. For a valid question Q, it extracts the topic T from Q using a shallow language parser:

T = W1 + W2 + ... + WC

where Wi indicates the words of T, C = the number of words in T (at present C < 10), and Wi is not an adverb, pronoun, preposition, determiner or auxiliary verb.

For example, if someone types "What are the biomarkers for Leukemia?", the query analyzer finds it to be a valid question and outputs the topic T = "biomarkers for Leukemia". Sometimes, however, T may have an ambiguous meaning or context. For example, if someone asks "What are the biomarkers for Prostate Cancer?" or "What is Cdc28?", the topics "cancer" and "Cdc28" have different contextual senses. To deal with this issue, the query analyzer first searches for T in Wikipedia, the online encyclopedia, and mines the web page returned by Wikipedia for the key "For other uses", which gives the web link for deciphering the possible contextual senses of topic T. If this mining is successful, T is ambiguous, and the query processor further mines the web page returned by that disambiguation link of Wikipedia for the possible senses. "Cdc28" is also known as "Cyclin-dependent kinase 1", or just "Cdk1"; for abbreviations or acronyms like "Cdc28", the query processor searches for other senses in the Stanford Biomedical Abbreviation Server [4] and AcroMed [5] and expands the search criteria with all the retrieved terms.

Thus we get a set of senses which we name the Topic-Sense set, TS = {TS1, TS2, ..., TSN}, where N = the number of senses found for the topic, N < 10. For example, "Cancer" returns 6 such senses, and we considered all 6. In the case of multiple senses we consider at most 10; this maximum may need to be adjusted during the implementation phase to obtain the best results. The query analyzer then makes a set of search topics for the web crawler agent: ST = {ST1, ST2, ..., STN}, where each Search-Topic STi = T + TSi, 1 <= i <= N.
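As a rough illustration of the topic-extraction step, the sketch below strips function words from a validated question and caps the topic at ten words. The function-word list and the question-mark validity test are simplifying assumptions standing in for the shallow parser; note that the paper's own example keeps the connecting preposition "for" in the topic, so prepositions are left in the stop-list's blind spot here.

```python
import re

# Illustrative topic extraction for Section 3.1: a stop-list stands in for
# the shallow language parser. Connecting prepositions such as "for" are
# kept, mirroring the example topic "biomarkers for Leukemia" above.
FUNCTION_WORDS = {
    "what", "which", "who", "when", "where", "how",    # question words
    "is", "are", "was", "were", "do", "does", "can",   # auxiliary verbs
    "the", "a", "an", "this", "that",                  # determiners
    "it", "they", "he", "she", "we", "you",            # pronouns
}

def extract_topic(question: str, max_words: int = 10):
    """Return topic T = W1 + W2 + ... + WC (C < 10) from question Q."""
    if not question.strip().endswith("?"):
        return None                                    # not a valid question
    words = re.findall(r"[A-Za-z0-9'-]+", question)
    content = [w for w in words if w.lower() not in FUNCTION_WORDS]
    return " ".join(content[:max_words]) or None

print(extract_topic("What are the biomarkers for Leukemia?"))
# -> "biomarkers for Leukemia"
```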

3.2 Web Crawler
The Web Crawler agent employs web search engines like PubMed and the Journal search engine to fetch relevant web pages based on the search topic. For each search topic STi from the set ST, the agent does the following. Consider a search key K = STi. The agent first prepares a search string for K using the online encyclopedia Wikipedia; the resulting search-link set is Wi = {WiL1}. The Web Crawler then invokes the PubMed search engine, fetches the search-result page for K, and receives a set of links from the Web-page Extractor agent. Hence the set of links for K is P1 = {P1L1, P1L2, ..., P1Ln}, where we limit n < 1000. We set the limit to 1000 because if a ListIdentifiers request results in more than 1000 hits, PMC-OAI [8] will still return only the first 1000 hits.

In the same way, the agent invokes PubMed again to receive the sets of links for the search key K with different prefixes added:

P2 = {P2L1, P2L2, ..., P2Ln}, n < 1000
P3 = {P3L1, P3L2, ..., P3Ln}, n < 1000
P4 = {P4L1, P4L2, ..., P4Ln}, n < 1000

The above sets of links are obtained from PubMed for K = "Who What When" + STi, K = "History of" + STi and K = "About" + STi, respectively. The search string used to invoke the PubMed search engine as an HTTP request is:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=K&reldate=60&datetype=pdat&retmax=1000&usehistory=y

The search string used to invoke the Journal search engine for K is:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=journals&term=K

The Journal search engine gives the following sets of links:

J1 = {J1L1, J1L2, ..., J1Ln}, n < 1000
J2 = {J2L1, J2L2, ..., J2Ln}, n < 1000

These sets of links are obtained from the Journal search results for K = STi and K = "About" + STi, respectively.
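A minimal sketch of issuing the esearch request above with Python's standard library follows; the parameters mirror the search string in the text (today the E-utilities endpoint is served over HTTPS). Parsing of the returned PMID list is left out.

```python
import urllib.parse
import urllib.request

# Sketch of the PubMed E-utilities request above. The parameters mirror the
# search string in the text; the modern endpoint is served over HTTPS.
def esearch(term, db="pubmed", retmax=1000, reldate=60):
    params = urllib.parse.urlencode({
        "db": db,                 # "pubmed" or "journals"
        "term": term,             # the search key K
        "reldate": reldate,       # restrict to the last 60 days ...
        "datetype": "pdat",       # ... of publication dates
        "retmax": retmax,         # cap of 1000 hits, as discussed above
        "usehistory": "y",
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()      # XML containing the matching PMIDs

xml = esearch('"Who What When" biomarkers for leukemia')
```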

The agent then takes the sets of links given by the PubMed and Journal search engines and finally makes a set of unique links, Ui, which serves as the source of knowledge for the search topic STi:

Pi = P1 ∪ P2 ∪ P3 ∪ P4
Ji = J1 ∪ J2
URL-set, Ui = Pi ∪ Ji

The web crawler agent also finds a definition of the topic using Google's search string for finding definitions. For example, http://www.google.com/search?num=10&hl=en&q=define:"+K returns the definition of K. If the agent fails to find any definition for a given topic STi, it forms sub-topics by taking a portion of the search topic and tries to retrieve a definition for those. The algorithm to find a definition using Google is given below:

Begin
  Search-Topic, ST = STi
  Wi = the list of words in ST
  Search-Key, SK = NULL
  Set j = C              // C is the number of words in ST
  Set Definition, d = NULL
  While (d = NULL)
    SKj = W1 + W2 + ... + Wj,  1 <= j <= C
    d = getDefinitionFromGoogleFor(SKj)
    If (d = NULL) then j = j - 1
    Else exit the loop
  Loop While
End
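The same loop in runnable Python is sketched below; get_definition is a placeholder for the paper's getDefinitionFromGoogleFor(), which would issue the "define:" query shown earlier and scrape the result page.

```python
# Runnable rendering of the pseudocode above. get_definition stands in for
# getDefinitionFromGoogleFor(): it would issue the "define:" query shown
# earlier and scrape the result page, returning None on failure.
def find_definition(search_topic, get_definition):
    words = search_topic.split()
    for j in range(len(words), 0, -1):     # j = C, C-1, ..., 1
        key = " ".join(words[:j])          # SKj = W1 + W2 + ... + Wj
        definition = get_definition(key)
        if definition is not None:
            return key, definition         # exit the loop on success
    return None, None                      # no sub-topic had a definition
```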

3.3 Literature Extractor
The web-page extractor agent opens a URL link as an HTTP request and saves the content as a file for further processing (Sections 3.4 and 3.5). After receiving the URL-set Ui from the web crawler agent, it retrieves a set of web pages:

WPi = {P1, P2, ..., PM}, M = the number of unique links for STi in Ui

The agent then reads the content of each page Pi and retrieves the text between the <body> and </body> tags. While reading the content from the web pages, the following heuristics are applied:
a. Emphasizing tags such as <h1>, <h2>, <h3>, <h4>, <b>, <strong>, <big>, <i>, <em> and <u> are considered to mark heading or caption text.
b. Heading text longer than 125 characters is ignored.
c. Text inside <script> and <style> tags is omitted.
d. Text chunks that appear between other types of tags not mentioned above are collected.
e. Text that contains a URL or an email address is ignored.
f. Text chunks that are too long (e.g. more than 600 words) are ignored.
g. We assume that a heading represents the title of the text chunk(s) found immediately after it. Several headings may be retrieved in a row, in which case the headings need to be summarized.
h. Some unwanted markup may be present in the retrieved text, so all text between '<' and '>' markup characters is stripped out.

Hence the output from each extracted page (EPi) is a list of tuples of potential headings and text chunks (i.e. l headings and p texts):

Heading-Text tuple, HTk = [[h1, h2, ..., hl], [T1, T2, ..., Tp]]
Extracted Page, EPi = {HT1, HT2, ..., HTR}

Thus a page may have R (0 < R < 50) HT tuples. Finally, the agent produces a list of extracted pages (EPL), which is further analyzed to prepare automatic content. The output of the agent can be represented as follows (M extracted pages, each having R HT tuples):

EPL = {EP1, EP2, ..., EPM}
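A condensed sketch of these heuristics using Python's standard html.parser is given below. The heading-length, chunk-length and script/style rules are shown; the URL/email filter and residual markup stripping are omitted for brevity, and the class name is an illustrative assumption.

```python
from html.parser import HTMLParser

HEADING_TAGS = {"h1", "h2", "h3", "h4", "b", "strong", "big", "i", "em", "u"}
SKIP_TAGS = {"script", "style"}

# Condensed sketch of the heading/text-chunk heuristics of Section 3.3.
class PageExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.tuples, self.headings, self.texts = [], [], [], []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if not text or (self.stack and self.stack[-1] in SKIP_TAGS):
            return                          # rule c: omit <script>/<style>
        if self.stack and self.stack[-1] in HEADING_TAGS:
            if len(text) <= 125:            # rule b: skip overlong headings
                if self.texts:              # a heading closes the open tuple
                    self.tuples.append((self.headings, self.texts))
                    self.headings, self.texts = [], []
                self.headings.append(text)
        elif len(text.split()) <= 600:      # rule f: skip overlong chunks
            self.texts.append(text)

    def close(self):
        super().close()
        if self.headings or self.texts:     # flush the final HT tuple
            self.tuples.append((self.headings, self.texts))

extractor = PageExtractor()
extractor.feed("<body><h2>Results</h2><p>PSA levels rose ...</p></body>")
extractor.close()                           # extractor.tuples holds the HTk
```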

3.4 Literature Ranking
A total of N <= n pages (n = the number of pages extracted by the Literature Extractor for each query) are retrieved, but not all of them are equally relevant to the query. The selection procedure used here is based on the average TF-IDF (Avg-TF-IDF) [6][7] of each page:

TF-IDF(w,d) = TF(w,d) * (1 + log(|D| / DF(w)))   (1)
Avg-TF-IDF(d) = Σw TF-IDF(w,d) / W(d)   (2)

where TF(w,d) = the frequency of keyword w in page d, |D| = the total number of pages, DF(w) = the number of pages in which keyword w was found, and W(d) = the number of keywords in page d.

In the system, insertion sort was used to sort the pages according to their score. The algorithm is as follows:
1. For each page:
   a. Calculate the TF-IDF of each keyword.
   b. Take the average of the TF-IDF values of those keywords; this is the Avg-TF-IDF, or score, of the page.
2. Sort the pages by score using insertion sort.
3. Take the top R pages.
4. Use the summarization method to create the summary.
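A compact sketch of equations (1) and (2) follows: each page is scored by the average TF-IDF of its keywords and the top R pages are kept. Python's built-in sort is used here in place of the paper's insertion sort, which yields the same ordering.

```python
import math
from collections import Counter

# Sketch of equations (1)-(2): score each page by the average TF-IDF of
# its keywords, then keep the top R pages.
def rank_pages(pages, top_r):
    """pages: list of keyword lists, one list per extracted page."""
    n_pages = len(pages)                               # |D|
    df = Counter()                                     # DF(w)
    for page in pages:
        df.update(set(page))

    def avg_tf_idf(page):
        tf = Counter(page)                             # TF(w, d)
        scores = [tf[w] * (1 + math.log(n_pages / df[w])) for w in tf]
        return sum(scores) / len(tf) if tf else 0.0    # divide by W(d)

    # Built-in sort instead of insertion sort: the ordering is identical.
    return sorted(pages, key=avg_tf_idf, reverse=True)[:top_r]
```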
3.5 Text Summarization
The top R pages now have to be summarized. For summarization, a formula very similar to Avg-TF-IDF is used to measure the relevance score of the sentences of each page. The Avg-TF-ISF [17][18] is calculated for each sentence and multiplied by a specified percentage. Then a portion of the total word count of that sentence is calculated by multiplying the word count by a percentage value. These two values are added to obtain the relevance score of the sentence. Finally, those sentences whose score is above a specified percentage of the maximum relevance score are selected for the summary of that page. Thus both the Avg-TF-ISF and the word count of a sentence contribute to the measurement of sentence relevance. The rules to calculate TF-ISF and Avg-TF-ISF are:

TF-ISF(w,s) = TF(w,s) * (1 + log(|S| / SF(w)))   (4)
Avg-TF-ISF(s) = Σw TF-ISF(w,s) / W(s)   (5)

where TF(w,s) = the frequency of word w in sentence s, |S| = the total number of sentences, SF(w) = the number of sentences in which word w was found, W(s) = the number of words in the sentence, and Avg-TF-ISF(s) = the score of the sentence derived from its words' TF-ISF values. This is an adaptation of the conventional TF-IDF formula. The relevance score of each sentence is then:

Relevance-score(s) = Avg-TF-ISF(s) * TFISFPercentage + W(s) * WordPercentage   (6)

To select sentences: IF Relevance-score(s) > MaxRelevanceScore * SummaryThreshold THEN sentence s is selected for the summary. The algorithm is as follows:
1. For each sentence:
   i. Calculate the TF-ISF of each word.
   ii. Take the average of the TF-ISF values of those words; this is the Avg-TF-ISF of the sentence.
   iii. Calculate the relevance score using equation (6).
2. Find the maximum relevance score.
3. Select those sentences whose score is above MaxRelevanceScore * SummaryThreshold.
4. Insert the selected sentences into the summary along with the URL of the page on which they were found.
For each selected page, the summary is created and saved to a file along with its URL. In addition to this statistical calculation, we also propose some hand-crafted rules for sentence ranking, which are discussed in Section 3.5.1.
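Before turning to those rules, the sketch below renders equations (4)-(6) as a sentence selector. The TFISFPercentage, WordPercentage and SummaryThreshold values are illustrative defaults; the paper leaves them as tunable parameters.

```python
import math
import re
from collections import Counter

# Sketch of equations (4)-(6): Avg-TF-ISF plus a word-count term, keeping
# sentences above a fraction of the maximum score. The percentage and
# threshold values are illustrative, not the paper's settings.
def summarize(sentences, tfisf_pct=0.8, word_pct=0.2, threshold=0.5):
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    if not tokenized:
        return []
    n_sents = len(tokenized)                            # |S|
    sf = Counter()                                      # SF(w)
    for words in tokenized:
        sf.update(set(words))

    def relevance(words):
        tf = Counter(words)                             # TF(w, s)
        tf_isf = [tf[w] * (1 + math.log(n_sents / sf[w])) for w in tf]
        avg = sum(tf_isf) / len(tf) if tf else 0.0      # Avg-TF-ISF(s)
        return avg * tfisf_pct + len(words) * word_pct  # equation (6)

    scores = [relevance(w) for w in tokenized]
    cutoff = max(scores) * threshold                    # selection rule
    return [s for s, sc in zip(sentences, scores) if sc > cutoff]
```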

3.5.1 Sentence Ranking
In this part we discuss how sentences are given extra ranking by hand-crafted rules. The system also tags all the words within the top-ranked sentences. We used regular expressions and cue words (CW) such as gene, genome, chromosome, disease, etc. We also sub-classified the lists of cue words and regular expressions into targeted words (TW). For example:

1. All disease names are TW.
2. Any word in all capitals is a TW.
3. All gene names are TW.
4. All prepositions are W (plain words).
5. If a word exists in the title, it is a CW.
6. If a numeric value occurs before or after a TW, then the numeric value is also a TW.

We have considered a 'five word window' to do this analysis. Regular expressions and cue words for this purpose were listed during an earlier analysis of the literature on biomarker discovery and other biomedical documents. Figure 3 shows an example of a system-generated tagged sentence. More analysis is required to fine-tune this list.

The algorithm for this part is:
1. For each sentence:
   i. Check if the sentence contains a CW.
   ii. IF yes:
      - Rank the sentence with a higher score.
      - Tag the words of the sentence.
      - Store the tagged text in the relevant table.
   iii. ELSE go to the next sentence.

The algorithm for tagging the words is:
1. For each word:
   i. IF the word exists in the TW list, tag it as <tw>word</tw>.
   ii. IF it is a CW but not a TW, tag it as <cw>word</cw>.
   iii. ELSE tag it as a normal word, i.e. <w>word</w>.
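A simplified sketch of the tagging algorithm is shown below. The TARGETED and CUE sets are stand-ins for the regular expressions and cue-word lists compiled from the biomarker literature, and rule 6 (numerics adjacent to a TW) is collapsed here to tagging numerics directly.

```python
import re

# Simplified word tagger for Section 3.5.1. TARGETED and CUE stand in for
# the real regular-expression and cue-word lists; numerics are tagged as
# TW unconditionally rather than only when adjacent to a TW (rule 6).
TARGETED = {"gene", "promoter", "methylation", "prostate", "cancer"}
CUE = {"biomarker", "diagnostic", "analysis", "profiles", "disease"}

def tag_sentence(sentence):
    out = []
    for token in sentence.split():
        raw = token.strip(".,;()")
        word = raw.lower()
        if word in TARGETED or (raw.isupper() and len(raw) > 1) \
                or re.fullmatch(r"\d+(\.\d+)?", raw):
            out.append(f"<tw>{token}</tw>")    # targeted word
        elif word in CUE:
            out.append(f"<cw>{token}</cw>")    # cue word
        else:
            out.append(f"<w>{token}</w>")      # normal word
    return "<S>" + "".join(out) + "</S>"

print(tag_sentence("multigene methylation analysis could be a diagnostic biomarker"))
```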


These targeted words (TW) are then used to determine the markers for associated diseases and to populate the associated table.


Figure 3: Extracted tagged sentences (examples from document PM108):

PM108 <SID-1><S><w>Aberrant</w><tw>gene</tw><tw>promoter</tw><tw>methylation</tw><cw>profiles</cw><w>have</w><w>been</w><cw>well-studied</cw><w>in</w><w>human</w><tw>prostate</tw><cw>cancer</cw></S>

PM108 <SID-2><S><cw>Therefore</cw><w>we</w><cw>rationalize</cw><w>that</w><tw>multigene</tw><tw>methylation</tw><cw>analysis</cw><w>could</w><w>be</w><cw>useful</cw><w>as</w><w>a</w><cw>diagnostic</cw><cw>biomarker</cw></S>

PM108 <SID-3><S><w>We</w><cw>hypothesize</cw><w>that</w><w>a</w><w>new</w><w>method</w><w>of</w><tw>multigene</tw><tw>methylation</tw><cw>analysis</cw><w>could</w><w>be</w><w>a</w><w>good</w><cw>diagnostic</cw><w>and</w><w>staging</w><cw>biomarker</cw><w>for</w><tw>prostate</tw><tw>cancer</tw></S>

PM108 <SID-4><S><w>The</w><tw>ROC</tw><w>curve</w><cw>analysis</cw><cw>showed</cw><w>a</w><cw>significant</cw><cw>difference</cw><w>between</w><tw>M</tw><cw>score</cw><w>and</w><tw>PSA</tw><tw>P=0.010</tw></S>


4. Discussions
Our system was tested with different search queries. We only considered the first 1000 literature documents returned by PubMed. Figure 4 shows the actual number of documents returned by PubMed for each query string and the number of potential documents processed by mExtract from that first 1000. The figure indicates that our algorithm was able to filter out about 70% of the documents during the processing phase, which is a considerable achievement; however, the number of remaining documents is still too high for manual analysis by a researcher. Hence we have extended our algorithm with a novel approach to extract diseases and disease-specific information, gene names, sequences (nucleotide/protein), and so on.

We have used a shallow processing approach to perform the text mining, focusing on the targeted tagged words (i.e. <tw>word</tw>) to extract information. For example, consider the extract below:

<SID-7><w>These</w><cw>findings</cw><cw>suggest</cw><w>that</w><tw>CpG</tw><tw>hypermethylation</tw><w>of</w><tw>MDR1</tw><tw>promoter</tw><w>is</w><w>a</w><cw>frequent</cw><cw>event</cw><w>in</w><tw>prostate</tw><tw>cancer</tw><w>and</w><w>is</w><cw>related</cw><w>to</w><cw>disease</cw><cw>progression</cw><w>via</w><cw>increased</cw><tw>cell</tw><tw>proliferation</tw><w>in</w><tw>prostate</tw><tw>cancer</tw><tw>cells</tw></S>

We have considered a 'seven word window': for any targeted word that is a disease, we compare the seven words before and after that word. Within this window, if a targeted word has cue words before it and a gene name or sequence after it, this indicates a strong relation, and we map the disease to the associated gene, sequence or marker. To compare diseases and genes we have used the GeneCards [20] database. This part of the system requires more research on the domain for optimal output.
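The sketch below applies the seven-word-window rule to a tokenized sentence. The disease, gene and cue lists are illustrative assumptions; the paper validates gene names against GeneCards [20].

```python
# Sketch of the 'seven word window': for each disease TW, a cue word in the
# seven words before it plus a gene name (or sequence) in the seven words
# after it signals a strong disease-gene relation. Word lists are
# illustrative; gene names would be checked against GeneCards [20].
DISEASES = {"cancer", "leukemia"}
GENES = {"MDR1", "PSA", "CDC28"}
CUES = {"hypermethylation", "biomarker", "progression"}

def map_relations(words, window=7):
    relations = []
    for i, w in enumerate(words):
        if w.lower() not in DISEASES:
            continue
        before = words[max(0, i - window):i]
        after = words[i + 1:i + 1 + window]
        if any(b.lower() in CUES for b in before):
            for a in after:
                if a in GENES:
                    relations.append((w, a))   # disease -> associated gene
    return relations

tokens = ("increased biomarker expression in leukemia patients "
          "with MDR1 overexpression").split()
print(map_relations(tokens))                   # -> [('leukemia', 'MDR1')]
```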

Figure 4. Literature returned by PubMed vs. the number of potential literature documents processed by mExtract:

Query string                     PubMed    mExtract
biomarker                        294608    349
biomarker for ovarian cancer     5074      309
biomarker for ovarian cancer     10154     241
biomarker for brain cancer       2697      265
biomarker for breast cancer      11120     289

5. Conclusion
A text summarization algorithm for biomedical scientific literature has been proposed in this paper. It identifies the central topic of biomarker discovery and important information (such as specificity) within the literature. We have confined our work to biomedical literature mining. In future, the extracted summaries can be used to extract more sophisticated information towards protein structure and image data mining, and a machine learning approach can be used for better results.


The intelligent agent is task-oriented, semi-autonomous, active, trustworthy, adaptive and collaborative. Additional work is necessary to optimize the system so that it can support high loads with fast response and multi-database support. It is highly expected that this system can evolve to make its place in the decision-making field, as well as in next-generation human-computer interaction through the World Wide Web.
References

[1] D. Palmer (2000). Tokenisation and Sentence Segmentation. Chapter 2 in R. Dale, H. Moisl and H. Somers (eds.), Handbook of Natural Language Processing. Marcel Dekker.

[2] D. Jurafsky and J. H. Martin (2000). Word Sense Disambiguation and Information Retrieval. Chapter 17 in Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice-Hall.

[3] Mostafa A. M. Shaikh, Mitsuru Ishizuka and Tawhidul Islam. Auto-Presentation: A Multi-Agent System for Building an Automatic Multi-Modal Presentation of a Topic from World Wide Web Information. Proc. IEEE/WIC/ACM Int'l Conf. on Intelligent Agent Technology, pp. 246-249, Compiegne, France, September 2005.

[4] Chang J. T., Schütze H. and Altman R. B. (2002). Creating an Online Dictionary of Abbreviations from MEDLINE. The Journal of the American Medical Informatics Association, 9(6): 612-620.

[5] J. Pustejovsky, J. Castaño, B. Cochran, M. Kotecki, M. Morrell and A. Rumshisky. Linguistic Knowledge Extraction from Medline: Automatic Construction of an Acronym Database. An updated version of the paper presented at Medinfo 2001.

[6] J. Castaño, J. Zhang and J. Pustejovsky (2002). Anaphora Resolution in Biomedical Literature. International Symposium on Reference Resolution, Alicante, Spain.

[7] http://en.wikipedia.org/wiki/

[8] http://www.pubmedcentral.nih.gov/about/oai.html

[9] M. Hearst, "Untangling Text Data Mining," Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99), College Park, MD, ACL, East Stroudsburg, PA (1999), pp. 3-10.

[10] Bryan Bergeron (2002). "Applied Bioinformatics Computing: Data Mining," webpage.

[11] Uramoto N., Matsuzawa H., Nagano T., Murakami A., Takeuchi H. and Takeda K. A Text-Mining System for Knowledge Discovery from Biomedical Documents. IBM Systems Journal, 43(3), 2004.

[12] D. Swanson, "Medical Literature as a Potential Source of New Knowledge," Bulletin of the Medical Library Association 78, No. 1, 29-37 (1990).

[13] V. Brusic and J. Zeleznikow, "Knowledge Discovery and Data Mining in Biological Databases," Knowledge Engineering Review 14, No. 3, 257-277 (1999).

[14] D. Swanson and N. Smalheiser, "An Interactive System for Finding Complementary Literatures: A Stimulus to Scientific Discovery," Artificial Intelligence 91, No. 2, 183-203 (1997).

[15] Brown M., Grundy W., Lin D., Cristianini N., Sugnet C., Furey T., Ares M. Jr. and Haussler D. (2000). Knowledge-based Analysis of Microarray Gene Expression Data by Using Support Vector Machines. Proc. Natl. Acad. Sci. USA, 97, 262-267.

[16] Jaakkola T., Diekhans M. and Haussler D. (1999). Using the Fisher Kernel Method to Detect Remote Protein Homologies. In Proceedings of the 7th International Conference on Intelligent Systems for Molecular Biology, AAAI Press, Menlo Park, CA.

[17] Liren Chen and Katia Sycara, "WebMate: A Personal Agent for Browsing and Searching," Carnegie Mellon University, September 30, 1997. http://www.cs.cmu.edu/~softagents/webmate

[18] Yihong Gong and Xin Liu, "Generic Text Summarization Using Relevance Measure and Latent Semantic Analysis," NEC USA, C & C Research Laboratories. SIGIR'01, September 9-12, 2001, New Orleans, Louisiana, USA.

[19] Shaikh Mostafa Al Masum and Md. Tawhidul Islam. Designing and Implementing an Interactive Education Counselor by Agent Technology. Proc. IAWTIC 2004 Int'l Conf. on Intelligent Agents, Web Technology and Internet Commerce (M. Mohammadian, Ed.), ISBN 1740881893, Gold Coast, Australia, pp. 473-482, 2004.

[20] http://www.genecards.org/index.shtml
