
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 10, No. 9, September 2012

**Hybrid Model of Rough Sets & Neural Network for Classifying Renal Stones Patients**

SHAHINDA MOHAMED AL KHOLY¹*, AHMED ABO AL FETOH SALEH, AZIZA ASEM, KHALED Z. SHEIR, M.D.²*

Department of IS, Faculty of Computers and Information Science, and Urology and Nephrology Center²*, Mansoura University, Mansoura, Egypt

**Abstract**ـــ Despite advances in diagnosis and therapy, renal stone disease remains a significant health problem. A considerable amount of time and effort has been expended on researching and developing systems capable of accurately classifying renal stone patients. This paper proposes a new approach to this classification using a hybrid model of Rough Sets and ANN to predict optimum renal stone fragmentation in patients managed by extracorporeal shock wave lithotripsy (ESWL). Rough Sets are used to reduce the input factors and detect the most important ones, which are then used as input vectors for the ANN. The result (of treatment and of the model) is Free (success in removing the stone completely) or Not Free.

**Keywords**ـــ Renal Stones, Rough Sets, Artificial Neural Network (ANN), Fragmentation, extracorporeal shock wave lithotripsy (ESWL).

**Patients & Methods**ـــ We reviewed the records of patients who underwent ESWL as monotherapy for renal stones at the Center for Kidney and Urology in Mansoura from 1998 to 2010. Data included clinical characteristics, stone-free rate and its relationship to stone size and location, lithotripter and complications.

I. INTRODUCTION

Two decades ago open surgery was the main treatment for symptomatic renal stones, but it is increasingly being replaced by less invasive therapies. At present ESWL, percutaneous nephrolithotomy and other endoscopic methods of stone retrieval are used to treat >90% of patients with renal stones. Open surgical procedures are now infrequent for treating renal stones, and thus the incidence of morbidity associated with stone surgery has also markedly decreased. Since its introduction in 1980 [1], ESWL has considerably changed the management of renal stones and has become the therapeutic procedure of choice in most cases.

The management of urolithiasis is a clinical challenge worldwide which may result in difficulty in diagnosis, treatment and prevention of recurrence, especially with regard to choice of procedure. Interventional options include extracorporeal shock wave lithotripsy (ESWL), ureteroscopic lithotripsy and percutaneous nephrolithotomy (PCN). Clinical challenges include the decision to treat and the choice of procedure; outcome is dependent on multiple pre-determined variables. At present [2], decision-making is based on clinical expertise, and statistical models such as matched-pair and multivariate regression are often confounded by the non-linearity and high variability of medical data, termed the inherent 'noise'.

Rough sets, developed by Zdzislaw Pawlak in the early 1980s, deal with the classificatory analysis of data tables and focus on structural relationships in data sets. Rough set theory constitutes a framework for inducing minimal decision rules; these rules in turn can be used to perform a classification task. The main goal of rough set analysis is to search large databases for meaningful decision rules and finally acquire new knowledge [3]. In this paper rough sets are used to synthesize approximations of concepts from acquired data. The starting point of rough set theory is the observation that objects having the same description are indiscernible (similar) with respect to the available information (the defined attribute values).

Previous Work

In recent years various approaches to predict the useful results have been proposed. One of the first reports of ANNs in urolithiasis was by Michaels et al. [4], who compared standard computational methods (linear and quadratic discriminant analysis) and ANNs in the prediction of stone growth after ESWL. A three-layer feed-forward ANN with EBP was used to analyze a data set of 98 patients: a training set of 65 and a test set of 33 patients. An ANN was used to predict increased stone volume using variables including pre-existing metabolic abnormalities, infection and stone size. The ANN showed that no single variable per se was predictive of continued stone formation. Comparing linear and quadratic discriminant function analysis with ANN revealed a sensitivity of 100 and 91% and a specificity of 0 and 91%. They concluded that ANNs can accurately predict future stone activity after ESWL.

Poulakis et al. [5] further used a feed-forward ANN with EBP to evaluate variables shown to affect lower pole calculi clearance after ESWL using a data set of 680 patients: 101 kidneys for training and 600 for testing. The overall stone clearance rate was reported at 68%, with 26.1% of cases requiring further intervention in the form of further ESWL, ureteroscopy or PCN. The most influential prognostic variable for clearance was pathological urinary transport [19]; the ANN was shown to have a 92% predictive accuracy of lower pole stone clearance.

Gomha et al. [6] compared an ANN with a logistic regression model to predict stone-free status after ESWL. The logistic regression model was constructed using a backward likelihood ratio selection and the ANN using a three-layer feed-forward model with EBP. Both models were trained on 688 cases and tested on 296 cases. Comparing logistic regression with ANN revealed a sensitivity of 100 and 77.9%, a specificity of 0 and


http://sites.google.com/site/ijcsis/ ISSN 1947-5500


75%, a positive predictive value of 93.2 and 97.2%, and an overall accuracy of 93.2 and 77.7%. They concluded that ANNs were better at identifying cases that were unlikely to respond to ESWL and in which further treatment would be necessary.

Hamid et al. [1] assessed the ability of a feed-forward ANN with EBP to predict optimum renal stone fragmentation after ESWL using data from 82 patients: 60 cases for training and 22 for testing. The ANN identified stone size as the most influential variable, followed by the total number of shocks given and 24-h urinary volume. The ANN accurately predicted optimal fragmentation in 17 of 22 patients, and identified five patients in whom fragmentation did not occur irrespective of the number of shocks given. A 75% correlation was found between the number of shocks given and the number predicted by the ANN.

Cummings et al. [7] developed an ANN for calculating the probability of spontaneous ureteric stone passage; the model was trained on 125 patients and was accurate in 69% of the participants. Sonke et al. [8] used several noninvasive variables including the IPSS and produced an MLP network to predict the outcome of urinary pressure flow studies. Their model was trained and tested on 1903 patients and yielded 69% specificity at 71% sensitivity.

Bertrand et al. [9] compared discriminant analysis, logistic regression analysis and ANN for assessing the risk of urinary calcium stone among men. Both models were trained and tested on 215 cases. Comparing discriminant analysis and logistic regression with ANN revealed a sensitivity of 66.4% and 62.2%, a specificity of 87.5% and 89.8%, and a positive predictive value of 75.8 and 74.4%.

Neeraj K. et al. [10] compared the accuracy of ANN analysis and multivariate regression analysis (MVRA) for renal stone fragmentation by ESWL. Data from 196 patients were used for training the BP network and the MVRA model. The predictability of the trained ANN and MVRA was tested on 80 subsequent patients; the sensitivity (prediction of number of shocks) of MVRA was 57.26% and of the ANN 93.29%.

Studies have thus examined the role of ANNs in prediction of stone presence and composition, spontaneous passage, clearance and regrowth after treatment. All of these studies aim at good classification using neural networks, and some already achieve good tools, but they take all patient attributes as input. ANNs can be useful for assessing several urological diseases and provide an 'intelligent' means of predicting useful outcomes with greater accuracy and efficiency; combining them with Rough Sets offers further clinical advantages, e.g. no need for prolonged anti-inflammatory treatment and no need for pyelography (because diagnosis is also based on KUB, echography and helical CT only when necessary), anaesthesia or systematic hospitalization. This paper suggests that ANNs can identify important predictive variables and accurately predict treatment outcome when used with RS.

The paper is organized as follows: Section 2 gives an overview of the preliminaries of Artificial Neural Networks (ANN) and Rough Sets, the two concepts used in this paper.

Section 3 presents the proposed hybrid model that predicts optimum renal stone fragmentation based on the preceding preliminaries. Section 4 shows experimental results of the proposed model, and Section 5 concludes the paper.

2. Preliminaries

2.1 Rough Set Theory

Rough set theory, proposed by Pawlak, is an effective approach to imprecision, vagueness, and uncertainty. Rough set theory overlaps with many other theories such as fuzzy sets, evidence theory, and statistics. From a practical point of view, it is a good tool for data analysis. The main goal of rough set analysis is to synthesize approximations of concepts from acquired data. The starting point of rough set theory is the observation that objects having the same description are indiscernible (similar) with respect to the available information.

The starting point of rough-set-based data analysis is a data set called an information system (IS). An IS is a data table whose columns are labeled by attributes, whose rows are labeled by objects or cases, and whose entries are the attribute values. Formally, IS = (U, AT), where U and AT are nonempty finite sets called "the universe" and "the set of attributes," respectively. Every attribute a ∈ AT has a set Va of its values called the "domain of a". Any information table defines a function ρ that maps the direct product U × AT into the set of all values assigned to each attribute. The indiscernibility relation is an essential concept of rough set theory, used to distinguish objects described by a set of attributes in complete information systems.

Each subset A of AT defines an indiscernibility relation as follows:

IND(A) = {(x, y) ∈ U × U : ρ(x, a) = ρ(y, a), ∀a ∈ A}, A ⊆ AT    (1)

Obviously, IND(A) is an equivalence relation; the family of all equivalence classes of IND(A) forms a partition determined by A, denoted U/IND(A) or U/A [11]. Moreover:

IND(A) = ∩ { IND({a}) : a ∈ A }    (2)

A fundamental problem discussed in rough sets is whether the whole knowledge extracted from data sets is always necessary to classify objects in the universe; this problem arises in many practical applications and is referred to as knowledge reduction. The two fundamental concepts used in knowledge reduction are the core and the reduct. Intuitively, a reduct of knowledge is its essential part, which suffices to define all basic classifications occurring in the considered knowledge, whereas the core is, in a certain sense, its most important part [3]. A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure.


The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. The set of attributes which is common to all reducts is called the core: the core is the set of attributes possessed by every legitimate reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. [11]

2.2 Artificial Neural Networks

Artificial neural networks are relatively crude electronic networks of "neurons" based on the neural structure of the brain. They process records one at a time, and "learn" by comparing their classification of the record (which, at the outset, is largely arbitrary) with the known actual classification of the record. The errors from the initial classification of the first record are fed back into the network and used to modify the network's algorithm the second time around, and so on for many iterations.

ANNs have been shown to make accurate predictions in many aspects of medical practice by pattern recognition and learning. In medicine, artificial neural networks (ANNs) are the most widely described form of artificial intelligence (AI), a branch of computer science concerned with the emulation of complex human thought processes such as adaptive learning, optimization, reasoning and decision-making. ANNs are inspired by, and loosely modeled upon, the structure and function of biological nervous systems, being composed of a series of interconnected parallel nonlinear processing elements (nodes) with a limited number of inputs and outputs. Medical practice requires human acquisition, analysis and application of a vast amount of information in a variety of complex clinical scenarios.

At present, there is no adequate substitute for the expertise of an experienced clinician. However, computers can process and analyze large quantities of data rapidly and efficiently, so in theory AI could facilitate clinical decision-making processes: diagnosis, treatment and prediction of outcome. [2]

3. Hybrid Model of Rough Sets and ANN

The hybrid model discussed in this paper is a combination system that contains two intelligent methodologies: Rough Sets and ANN.

3.1 Attributes Reduction Using Rough Sets

The hybrid combinational model is divided into two main parts; the first part consists of RS steps to reduce the input attributes and detect the most influential ones. It receives N objects with multi-valued attributes as input, as in the sample information system of Figure 2.

| Object | P1 | P2 | P3 | P4 | P5 |
|--------|----|----|----|----|----|
| O1 | 1 | 2 | 0 | 1 | 1 |
| O2 | 1 | 2 | 0 | 1 | 1 |
| O3 | 2 | 0 | 0 | 1 | 0 |
| O4 | 0 | 0 | 1 | 2 | 1 |
| O5 | 2 | 1 | 0 | 2 | 1 |
| O6 | 0 | 0 | 1 | 2 | 2 |
| O7 | 2 | 0 | 0 | 1 | 0 |
| O8 | 0 | 1 | 2 | 2 | 1 |
| O9 | 2 | 1 | 0 | 2 | 2 |
| O10 | 2 | 0 | 0 | 1 | 0 |

Figure 2: Sample Information System

When the full set of attributes P = {P1, P2, P3, P4, P5} is considered, we see that we have the following seven equivalence classes:

{O1, O2}, {O3, O7, O10}, {O4}, {O5}, {O6}, {O8}, {O9}
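These classes can be checked mechanically. The sketch below (an illustration added here, not code from the paper) groups objects of the Figure 2 table by their values on a chosen attribute subset, i.e., it computes U/IND(A):

```python
from collections import defaultdict

# Sample information system from Figure 2 (values as recovered from the text)
TABLE = {
    "O1":  (1, 2, 0, 1, 1), "O2":  (1, 2, 0, 1, 1), "O3":  (2, 0, 0, 1, 0),
    "O4":  (0, 0, 1, 2, 1), "O5":  (2, 1, 0, 2, 1), "O6":  (0, 0, 1, 2, 2),
    "O7":  (2, 0, 0, 1, 0), "O8":  (0, 1, 2, 2, 1), "O9":  (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}
ATTRS = ("P1", "P2", "P3", "P4", "P5")

def partition(attrs):
    """U/IND(A): group objects whose rows agree on every attribute in attrs."""
    idx = [ATTRS.index(a) for a in attrs]
    classes = defaultdict(list)
    for obj, row in TABLE.items():
        classes[tuple(row[i] for i in idx)].append(obj)
    return sorted(classes.values())

print(partition(ATTRS))      # the seven classes listed above
print(partition(("P1",)))    # a much coarser partition from P1 alone
```

Running `partition` with smaller attribute subsets shows directly how the indiscernibility classes coarsen as attributes are dropped.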

Thus, the two objects within the first equivalence class, {O1, O2}, cannot be distinguished from one another based on the available attributes, and the three objects within the second equivalence class, {O3, O7, O10}, cannot be distinguished from one another based on the available attributes. The remaining five objects are each discernible from all other objects. The equivalence classes of the P-indiscernibility relation are denoted [x]P. It is apparent that different attribute subset selections will in general lead to different indiscernibility classes. For example, if attribute subset P = {P1} alone is selected, we obtain the following, much coarser, equivalence-class structure:

{O1, O2}, {O4, O6, O8}, {O3, O5, O7, O9, O10}

Let X ⊆ U be a target set that we wish to represent using attribute subset P; that is, we are told that an arbitrary set of objects X comprises a single class, and we wish to express this class (i.e., this subset) using the equivalence classes induced by attribute subset P. In general, X cannot be expressed exactly, because the


set may include and exclude objects which are indistinguishable on the basis of the attributes in P. [11] For example, consider the target set X = {O1, O2, O3, O4}, and let attribute subset P = {P1, P2, P3, P4, P5} be the full available set of features. The set X cannot be expressed exactly, because in [x]P the objects {O3, O7, O10} are indiscernible. Thus, there is no way to represent any set X which includes O3 but excludes objects O7 and O10. A reduct can be thought of as a sufficient set of features – sufficient, that is, to represent the category structure. In the example table above, attribute set {P3, P4, P5} is a reduct – the information system projected on just these attributes possesses the same equivalence-class structure as that expressed by the full attribute set:

{O1, O2}, {O3, O7, O10}, {O4}, {O5}, {O6}, {O8}, {O9}
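The rough-set way to handle such an inexpressible X is to bracket it between its lower and upper approximations (the standard definitions, which the text alludes to but does not spell out; the sketch below is an added illustration):

```python
# Lower/upper approximation of the target set X = {O1..O4} from Figure 2.
rows = {  # object -> (P1, P2, P3, P4, P5), values as recovered from the text
    "O1": (1, 2, 0, 1, 1), "O2": (1, 2, 0, 1, 1), "O3": (2, 0, 0, 1, 0),
    "O4": (0, 0, 1, 2, 1), "O5": (2, 1, 0, 2, 1), "O6": (0, 0, 1, 2, 2),
    "O7": (2, 0, 0, 1, 0), "O8": (0, 1, 2, 2, 1), "O9": (2, 1, 0, 2, 2),
    "O10": (2, 0, 0, 1, 0),
}

def approximations(X):
    # indiscernibility classes over the full attribute set
    classes = {}
    for obj, vals in rows.items():
        classes.setdefault(vals, set()).add(obj)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= X:   # class wholly inside X: certainly in X
            lower |= c
        if c & X:    # class touching X: possibly in X
            upper |= c
    return lower, upper

low, up = approximations({"O1", "O2", "O3", "O4"})
print(sorted(low), sorted(up))
```

The lower approximation omits O3 while the upper approximation pulls in O7 and O10, which is exactly why this X cannot be expressed exactly: it is a rough set.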

The Johnson Reducer algorithm has a natural bias towards finding a single prime implicant of minimal length. The reduct B is found by executing the algorithm outlined below, where S denotes the set of sets corresponding to the discernibility function, and w(S) denotes a weight for each set S in S that is automatically computed from the data.

1. Let B = Φ.
2. Let a denote the attribute that maximizes Σ w(S), where the sum is taken over all sets S in S that contain a. Ties are resolved arbitrarily.
3. Add a to B.
4. Remove all sets S from S that contain a.
5. If S = Φ, return B. Otherwise, go to step 2.

Algorithm 1: Basic Johnson Reducer Algorithm

Support for computing approximate solutions is provided by aborting the loop when "enough" sets have been removed from S, instead of requiring that S be fully emptied. The support count associated with the computed reduct equals the reduct's hitting fraction multiplied by 100, i.e., the percentage of sets in S that B has a non-empty intersection with. [12]

The application receives an Excel file containing all patient and stone characteristics (the feature extraction record) as listed in Table 1; the variable definitions are listed in Table 2.

TABLE 1: ESWL treatment parameters

| serialNo | age | sex | side | number | length | opacity | nature | morpholo | ager | free | anatomy2 | j stent | morph3 | comp2 | postpr | site2 | lengthm2 | sessn | Class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 61 | 2 | 2 | 1 | 10 | 1 | 1N | 1 | 2 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 |
| 2 | 50 | 1 | 1 | 1 | 15 | 1 | 1N | 1 | 2 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 |
| 3 | 45 | 1 | 1 | 1 | 25 | 1 | 1N | 3 | 2 | 1 | 0 | 0 | 3 | 0 | 0 | 1 | 2 | 1 | 1 |
| 4 | 46 | 2 | 1 | 1 | 15 | 1 | 2N | 3 | 2 | 1 | 0 | 0 | 3 | 0 | 0 | 1 | 1 | 2 | 1 |
| 5 | 39 | 2 | 1 | 1 | 10 | 1 | 1N | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 2 | 1 |
| 6 | 56 | 1 | 2 | 2 | 19 | 1 | 1N | 3 | 2 | 1 | 0 | 0 | 3 | 0 | 0 | 1 | 2 | 2 | 1 |
| 7 | 42 | 1 | 2 | 1 | 18 | 1 | 1N | 1 | 2 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 2 | 2 | 1 |
| 8 | 29 | 1 | 1 | 2 | 18 | 1 | 2N | 3 | 1 | 1 | 0 | 0 | 3 | 0 | 0 | 1 | 2 | 1 | 1 |

Attribute set {P3, P4, P5} is a legitimate reduct because eliminating any of these attributes causes a collapse of the equivalence-class structure, with the result that the projected system no longer produces the same partition as [x]P. The reduct of an information system is not unique: there may be many subsets of attributes which preserve the equivalence-class structure (i.e., the knowledge) expressed in the information system. In the example information system above, another reduct is {P1, P2, P5}, producing the same equivalence-class structure as [x]P. The set of attributes which is common to all reducts is called the core: the core is the set of attributes possessed by every legitimate reduct, and therefore consists of attributes which cannot be removed from the information system without causing collapse of the equivalence-class structure. The core may be thought of as the set of necessary attributes – necessary, that is, for the category structure to be represented. In the example, the only such attribute is {P5}; any one of the other attributes can be removed singly without damaging the equivalence-class structure, and hence these are all dispensable. However, removing {P5} by itself does change the equivalence-class structure, and thus {P5} is the indispensable attribute of this information system, and hence the core. [11]

In order to reduce the input attributes and detect the core in the manner described earlier, we use the ROSETTA application v1.4.41, which supports the reduction algorithms; we selected the Johnson Reducer algorithm.
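A minimal sketch of the greedy Johnson strategy of Algorithm 1, assuming unit weights w(S) = 1 for every set of the discernibility function (the attribute names in the toy input are illustrative, not the paper's clinical data):

```python
# Greedy Johnson reducer sketch with unit weights w(S) = 1.
def johnson_reduct(sets):
    """sets: iterable of attribute-name sets from the discernibility function."""
    remaining = [set(s) for s in sets if s]
    B = set()                                     # step 1
    while remaining:
        counts = {}
        for s in remaining:
            for a in s:
                counts[a] = counts.get(a, 0) + 1
        # step 2: attribute hitting the largest total weight; ties -> alphabetical
        a = max(sorted(counts), key=lambda k: counts[k])
        B.add(a)                                  # step 3
        remaining = [s for s in remaining if a not in s]  # step 4
    return B                                      # step 5: S exhausted

print(johnson_reduct([{"age", "sex"}, {"age", "side"}, {"length"}]))
```

On the toy input, "age" hits two of the three discernibility sets and is picked first, leaving only {"length"}; the computed reduct is {age, length}.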

**Table 2: Variables Definition**

| Variable | Value | Label |
|----------|-------|-------|
| Sex | 1 | MALE |
| | 2 | FEMALE |
| Side | 1 | RIGHT |
| | 2 | LEFT |
| Number | 1 | SINGLE |
| | 2 | MULTIPLE |
| Opacity | 1 | OPAQUE |
| | 2 | LUCENT |
| Nature | 1 | DENOVO |
| | 2 | RECURRENT |
| Morpholo | 1 | PERFECT |
| | 2 | MILD HYDRONEPHROSIS |
| | 3 | MODERATE HYDRONEPHROSIS |
| | 4 | SEVERE HYDRONEPHROSIS |
| | 5 | MILD PYELONEPHRITIS |
| | 6 | MODERATE PYELONEPHRITIS |
| | 7 | SEVERE PYELONEPHRITIS |
| | 8 | OTHERS |
| Length | 1 | <= 15 mm |
| | 2 | > 15 mm |
| Sessn | 1 | ONE SESSION |
| | 2 | 2 SESSIONS |
| | 3 | >2 SESSIONS |
| Free (o/p) | 1 | Free |
| | 0 | Not Free |

It applies the algorithm described above and generates the reduced attribute set, which has 8 variables {age, sex, side, number, length, opacity, morphology, site} instead of 20 variables.

3.2 ANN for Predicting Optimum Renal Stone Fragmentation

The second part of our model consists of an ANN to classify renal stone patients into Free (success in removing the stone completely) or Not Free. This is achieved by constructing a predictive model that takes into account all the reduced variables affecting stone-free status resulting from the Rough Set model. We trained several 3-layer, feed-forward neural networks with the back-propagation of error algorithm to predict stone-free status.

Feed-Forward, Back-Propagation

The feed-forward, back-propagation architecture was developed in the early 1970s by several independent sources (Werbos; Parker; Rumelhart, Hinton and Williams). This independent co-development was the result of a proliferation of articles and talks at various conferences which stimulated the entire industry. Currently, this synergistically developed back-propagation architecture is the most popular, effective, and easy-to-learn model for complex, multi-layered networks. Its greatest strength is in non-linear solutions to ill-defined problems.

The typical back-propagation network has an input layer, an output layer, and at least one hidden layer. There is no theoretical limit on the number of hidden layers, but typically there are just one or two. Some work has been done which indicates that a maximum of five layers (one input layer, three hidden layers and an output layer) is required to solve problems of any complexity. Each layer is fully connected to the succeeding layer.

As noted above, the training process normally uses some variant of the Delta Rule, which starts with the calculated difference between the actual outputs and the desired outputs. Using this error, connection weights are increased in proportion to the error times a scaling factor for global accuracy. Doing this for an individual node means that the inputs, the output, and the desired output all have to be present at the same processing element. Training inputs are applied to the input layer of the network, and desired outputs are compared at the output layer. During the learning process, a forward sweep is made through the network, and the output of each element is computed layer by layer. The difference between the output of the final layer and the desired output is back-propagated to the previous layer(s), usually modified by the derivative of the transfer function, and the connection weights are normally adjusted using the Delta Rule. This process proceeds for the previous layer(s) until the input layer is reached. [12]

**Table 2 (continued): Variables Definition**

| Variable | Value | Label |
|----------|-------|-------|
| Ager | 1 | <= 40 |
| | 2 | > 40 |
| Anatomy | 0 | NORMAL |
| | 1 | ABNORMAL |
| JJstent | 0 | NOT STENTED |
| | 1 | STENTED |
| Morph3 | 1 | PERFECT |
| | 2 | PYELONEPHRITIC |
| | 3 | OBSTRUCTED |
| Comp | 0 | NON COMPLICATED |
| | 1 | COMPLICATED |
| Postpr | 0 | None |
| | 1 | Yes |
| Site | 1 | UPPER CALYX |
| | 2 | MIDDLE CALYX |
| | 4 | RENAL PELVIS |
| | 9 | MULTIPLE SITES |
| | 10 | LOWER CALYX |

Architecture of the Feed-Forward Back-Propagation Classifier:

Figure 1: Feed-Forward Back-Propagation architecture

Basic Back-Propagation Learning Algorithm

The actual algorithm for a 3-layer network (only one hidden layer):

Step 1: Initialize the weights in the network (often randomly).
Step 2: For each example e in the training set, compute O = neural-net-output(network, e) (forward pass), let T = the teacher output for e, and calculate the error (T − O) at the output units.
Step 3: Compute delta_wh for all weights from the hidden layer to the output layer (backward pass).
Step 4: Compute delta_wi for all weights from the input layer to the hidden layer (backward pass continued), then update the weights in the network.
Repeat until all examples are classified correctly or a stopping criterion is satisfied, then return the network.

Algorithm 2: Basic Back-Propagation Learning Algorithm [13]

In the constructed ANN the input layer had 25 neurons (Table 3). An input neuron was assigned to each categorical value of a categorical variable, with a value of 1 when the category was present and 0 otherwise. The output layer consisted of 2 neurons, giving class value 1 for stone-free status and class value 0 for not-free status. The network output was actually between 0 and 1, and was then converted according to a decision threshold to class 0 (when the output was equal to or below the decision threshold) or class 1 (when the output was greater than the decision threshold). The number of hidden nodes (23) was chosen by the best performance on the separate test set through a cascade learning paradigm.

**Table 3: ANN Input Data**

| Variables | No. of Neurons | Categories |
|-----------|----------------|------------|
| Age | 2 | 40 or Younger, Older than 40 |
| Sex | 2 | Male, Female |
| Side | 2 | Right, Left |
| Stone No. | 2 | Single, Multiple |
| Stone Length | 2 | 15 mm or less, Greater than 15 mm |
| Stone Opacity | 2 | Opaque, Lucent |
| Morpholo | 3 | Perfect, Hydronephrotic, Pyelonephritic |
| Site | 5 | Upper Calyx, Middle Calyx, Renal Pelvis, Multiple Calyx, Lower Calyx |
| Output Neurons | 2 | 1 Stone Free, 0 Not Free |

Using these values a random set of weights is initially assigned to the connections between the layers. The first output of the network is determined using these weights. The output thus obtained is compared with the actual output of the pattern pair and the mean square error calculated. An error optimization algorithm then minimizes this error. A feed-forward back-propagation ANN system has the property of self-optimization of the error during training. Thus, the final weight of a particular variable is decided by the system itself, determined precisely by the relative impact of the variable in the dataset in relation to the actual output variable.

The network was trained using the XL Miner software system; the working code for the ANN was constructed so that it was compatible with the analysis and processing of the input data. The ANN was trained using a single-hidden-layer feed-forward back-propagation network. To assess the training status of the ANN, 480 randomly selected records from the training set were used for validation (validation set), and once validation was satisfactory further training was stopped. After the ANN was considered to be reasonably trained, input variables (similar to those used for training) from subsequent patients were serially fed into the trained ANN and the optimum fragmentation of each patient, as predicted by the ANN via the output data, was recorded. Using the usual protocol these patients subsequently underwent ESWL and the results (observed values) were recorded. The predicted and the observed values were then compared.
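The whole training loop of Algorithm 2 can be sketched in a few lines of NumPy. The layer sizes (25 inputs, 23 hidden, 2 outputs) are those quoted in the text; the one-hot input record, learning rate and weight initialization are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 25, 23, 2           # sizes quoted in the text
W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # input -> hidden weights
W2 = rng.normal(0, 0.1, (n_hidden, n_out))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, lr=0.5):
    """One forward sweep and one delta-rule weight update for a single example."""
    global W1, W2
    h = sigmoid(x @ W1)                      # forward pass, layer by layer
    o = sigmoid(h @ W2)
    delta_o = (t - o) * o * (1 - o)          # error at the output units
    delta_h = (delta_o @ W2.T) * h * (1 - h) # back-propagated through W2
    W2 += lr * np.outer(h, delta_o)          # updates proportional to the error
    W1 += lr * np.outer(x, delta_h)
    return float(np.mean((t - o) ** 2))      # mean square error before update

x = np.zeros(n_in); x[[0, 3, 5]] = 1.0      # a dummy one-hot patient record
t = np.array([1.0, 0.0])                    # target class: "stone free"
errs = [train_step(x, t) for _ in range(200)]
print(errs[0], errs[-1])                    # error shrinks over the iterations
```

Repeating `train_step` over the whole training set, with a held-out validation set to decide when to stop, mirrors the training/validation procedure described above.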


RESULTS

In this study we show the ability of the ANN-with-RS model to predict stone-free status in patients with ureteral stones treated with ESWL. Of the patients, 449 (93.5%) were free of stones, while the remaining 31 (6.4%) required other treatment modalities due to inadequate stone disintegration. Evaluating the performance of the model on the test set revealed a sensitivity of 93.754% with 23 hidden neurons, and the processing time was 4 seconds, whereas using the whole set of input variables the ANN alone correctly classified only 78% of the participants.
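The reported percentages are consistent with a cohort of 480 patients (the validation-set size mentioned earlier); this quick arithmetic check is an added illustration, not part of the original results:

```python
# Consistency check of the reported cohort numbers.
free, not_free = 449, 31
total = free + not_free
free_rate = 100 * free / total
print(total, f"{free_rate:.1f}%")  # 480 patients, ~93.5% stone-free
```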

Conclusion

While the ANN tool provides accurate results for predicting renal stone disease, using Rough Sets makes the ANN prediction better, as RS reduces the time consumed in univariate analysis to detect which variables are the most effective. This model provides a new way to study stone disease by combining the Rough Set technique with an ANN.

REFERENCES

[1] Hamid A, Dwivedi US, Singh TN, "Artificial neural networks in predicting optimum renal stone fragmentation by extracorporeal shock wave lithotripsy: a preliminary study," BJU Int, vol. 91, pp. 821–824, 2003.
[2] Prabhakar Rajan and David A. Tolley, "Artificial neural networks in urolithiasis," Current Opinion in Urology, vol. 14, pp. 133–137, 2005.
[3] Zdzislaw Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data. Institute of Computer Science, Warsaw University of Technology; Kluwer Academic Publishers, 1991.
[4] Michaels EK, Niederberger CS, Golden RM, "Use of a neural network to predict stone growth after shock wave lithotripsy," Urology, vol. 51, pp. 335–338, 1998.
[5] Poulakis V, Dahm P, Witzsch U, "Prediction of lower pole stone clearance after shock wave lithotripsy using an artificial neural network," J Urol, vol. 169, pp. 1250–1256, 2003.
[6] Gomha MA, Sheir KZ, Showky S, "Can we improve the prediction of stone-free status after extracorporeal shock wave lithotripsy for ureteral stones? A neural network or a statistical model?," J Urol, vol. 172, pp. 175–179, 2004.
[7] Cummings JM, Boullier JA, Izenberg SD, Kitchens DM, Kothandapani RV, "Prediction of spontaneous ureteral calculous passage by an artificial neural network," J Urol, vol. 164, pp. 326–328, 2000.
[8] Sonke GS, Heskes T, Verbeek AL, de la Rosette JJ, Kiemeney LA, "Prediction of bladder outlet obstruction in men with lower urinary tract symptoms using artificial neural networks," J Urol, vol. 163, pp. 300–305, 2000.
[9] Bertrand Dussol, Jean-Michel Verdier, "Artificial neural networks for assessing the risk of urinary calcium stone among men," Urol Res, vol. 34, pp. 17–25, 2006.
[10] Neeraj K. Goyal, Abhay Kumar, et al., "A Comparative Study of Artificial Neural Network and Multivariate Regression Analysis to Analyze Optimum Renal Stone Fragmentation by Extracorporeal Shock Wave Lithotripsy," Department of Urology, Institute of Medical Sciences, Hindu University, India, vol. 1, 2010.
[11] http://en.wikipedia.org/wiki/Rough-set (accessed 5-4-2011).
[12] A. Øhrn, ROSETTA Technical Reference Manual. Department of Computer and Information Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway, p. 28, 2000.
[13] http://en.wikipedia.org/wiki/Backpropagation (accessed 23-7-2011).

AUTHORS PROFILE

¹* SHAHINDA MOHAMED MOSTAFA AL KHOLY, Teaching Assistant, IS Dept., Faculty of Computers and Information Science, Mansoura University, Egypt. Phone: 00201069493073, Email: shahdalkholy@yahoo.com.
