
An Optimized Clustering Algorithm Using Genetic Algorithm and Rough Set Theory Based on the Kohonen Self-Organizing Map

Asgarali Bouyer (1), Abdolreza Hatamlou (2), Abdul Hanan Abdullah (3)

(1) Department of Computer Science, Islamic Azad University – Miyandoab Branch, Miyandoab, Iran
(2) Universiti Kebangsaan Malaysia, Selangor, Malaysia
(3) Department of Computer and Information Systems, Faculty of Computer Science and Information Systems, Universiti Teknologi Malaysia, 81310 Skudai, Johor Bahru, Malaysia

basgarali2@live.utm.my, hatamlou@iaukhoy.ac.ir, hanan@utm.my

Abstract—The Kohonen self-organizing map (SOM) is an efficient tool in the exploratory phase of data mining and pattern recognition. The SOM maps a high-dimensional space onto a small number of dimensions by placing similar elements close together, forming clusters. Recently, researchers have found that crisp boundaries are not necessary in some clustering operations and that the uncertainty involved in cluster analysis should be taken into account. In this paper, an optimized two-level clustering algorithm based on the SOM, employing rough set theory and a genetic algorithm, is proposed to overcome this uncertainty problem. Evaluation of the proposed algorithm on our gathered poultry disease data and on the Iris data set shows that it is more accurate than crisp clustering methods and reduces clustering errors.

Index Terms—SOM, Clustering, Rough set theory, Genetic Algorithm.

Clustering algorithms attempt to organize unlabeled input vectors into clusters such that points within a cluster are more similar to each other than to vectors belonging to different clusters [7]. Clustering methods are of five types: hierarchical, partitioning, density-based, grid-based and model-based clustering [8]. Rough set theory employs upper and lower thresholds in the clustering process, which results in rough clusters. The technique can also be defined in an incremental manner, i.e. the number of clusters is not predefined by the user. Our goal is an optimized clustering algorithm for use in poultry disease prediction. Clustering will assist further analysis of the poultry symptom data by detecting outliers. Analyzing outliers can reveal surprising facts hidden inside the data, such as ambiguous patterns that are still assumed to belong to one of the predefined or undefined classes. Clustering is important in outlier detection to avoid the high cost of misclassification. To cater for the complex nature of the data in our problem domain, clustering techniques based on machine learning approaches, such as the self-organizing map (SOM), kernel machines and fuzzy methods, for clustering poultry symptoms (based on observation data: body, feathers, skin, head, muscle, lung, heart, intestines, ovary, etc.) promise to be useful tools. In this paper, a new two-level clustering algorithm is proposed. The idea is that the first level trains the data with the SOM neural network, and the clustering at the second level is a rough set based incremental clustering approach [9], applied to the output of the SOM, which requires only a single scan of the neurons. The optimal number of clusters can be found by rough set theory, which groups the given neurons into a set of overlapping clusters (and thereby clusters the mapped data). Then the overlapped neurons are assigned to the true clusters they belong to by applying a

I. INTRODUCTION

The self-organizing map (SOM), proposed by Kohonen [1], has been widely used in industrial applications such as pattern recognition, biological modeling, data compression, signal processing and data mining [2]-[5]. It is an unsupervised and nonparametric neural network approach. The success of the SOM algorithm lies in its simplicity, which makes it easy to understand, simulate and use in many applications. The basic SOM consists of neurons usually arranged in a two-dimensional structure such that there are neighborhood relations among the neurons. After completion of training, each neuron is attached to a feature vector of the same dimension as the input space. By assigning each input vector to the neuron with the nearest feature vector, the SOM is able to divide the input space into regions (clusters) with common nearest feature vectors. This process can be considered as performing vector quantization (VQ) [6]. Also, because of the neighborhood relations contributed by the interconnections among neurons, the SOM exhibits another important property: topology preservation.


http://sites.google.com/site/ijcsis/ ISSN 1947-5500

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 4, July 2010

genetic algorithm. A genetic algorithm is adopted to minimize the uncertainty that arises from some clustering operations. In our previous work [3], a hybrid of the SOM and rough sets was applied to capture the ambiguity of clusters; the experimental results here show that the proposed algorithm (Genetic Rough SOM) outperforms that earlier one. The next important step is to collect poultry data on the common and important diseases that can affect the respiratory and non-respiratory systems of poultry. The first phase is to identify the format and values of the input parameters from the available information. The second phase is to investigate and develop data conversion and reduction algorithms for the input parameters. This paper is organized as follows: in Section II the basics of the SOM algorithm are outlined. The rough set incremental clustering approach is described in Section III. In Section IV the essence of the genetic algorithm is described. The proposed algorithm is presented in Section V. Section VI is dedicated to experimental results, and Section VII provides a brief conclusion and future work.

kernel function around the winning neuron c at time t. The neighborhood kernel function is a non-increasing function of time and of the distance of neuron i from the winning neuron c. The kernel can be taken as a Gaussian function:

h_ic(t) = exp( −‖Pos_i − Pos_c‖² / (2σ(t)²) )    (2)

where Pos_i is the coordinate of neuron i on the output grid and σ(t) is the kernel width. The weight update rule in the sequential SOM algorithm can be written as:

w_i(t+1) = w_i(t) + ε(t) h_ic(t) ( x(t) − w_i(t) )  for all i ∈ N_c, and w_i(t+1) = w_i(t) otherwise.    (3)
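A single sequential SOM training step — winner selection, the Gaussian neighborhood kernel of equation (2), and the weight update of equation (3) — can be sketched as follows. The map size and input dimension mirror the paper's experiments (a 10 × 10 grid, 4-dimensional Iris inputs), but the decay schedules and the choice to apply the kernel to all neurons (rather than restricting to N_c, which a narrow Gaussian approximates) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 10x10 map and 4-dimensional inputs.
grid = 10
m, d = grid * grid, 4
W = rng.random((m, d))                                    # feature vectors w_i
pos = np.array([(i // grid, i % grid) for i in range(m)], dtype=float)  # Pos_i

def som_step(x, t, T, eps0=0.5, sigma0=3.0):
    """One sequential SOM update following equations (1)-(3)."""
    c = int(np.argmin(np.linalg.norm(x - W, axis=1)))     # winning neuron c
    eps = eps0 * (1.0 - t / T)                            # decaying learning rate
    sigma = sigma0 * (1.0 - t / T) + 1e-2                 # decaying kernel width
    h = np.exp(-np.sum((pos - pos[c]) ** 2, axis=1)
               / (2.0 * sigma ** 2))                      # (2): Gaussian kernel h_ic(t)
    W[:] += eps * h[:, None] * (x - W)                    # (3): pull neurons toward x
    return c

T = 100
for t in range(T):
    som_step(rng.random(d), t, T)
```

After training, neighboring neurons end up with similar feature vectors, which is the topology-preservation property described above.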

II. SELF ORGANIZING MAP

Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input categories, i.e. sets of samples in a specific domain of the input space. A division of neural nodes emerges in the network to represent different patterns of the inputs after training. The division is enforced by competition among the neurons: when an input x arrives, the neuron that is best able to represent it wins the competition and is allowed to learn it even better. If there exists an ordering between the neurons, i.e. the neurons are located on a discrete lattice, the competitive learning algorithm can be generalized: not only the winning neuron but also its neighboring neurons on the lattice are allowed to learn, with the overall effect that the final map becomes an ordered map in the input space. This is the essence of the SOM algorithm. The SOM consists of m neurons located on a regular low-dimensional grid, usually one- or two-dimensional. The lattice of the grid is either hexagonal or rectangular. The basic SOM algorithm is iterative. Each neuron i has a d-dimensional feature vector w_i = [w_i1, ..., w_id]. At each training step t, a sample data vector x(t) is randomly chosen from the training set. The distances between x(t) and all feature vectors are computed. The winning neuron, denoted by c, is the neuron whose feature vector is closest to x(t):

c = arg min_i ‖x(t) − w_i‖,   i ∈ {1, ..., m}    (1)

Both the learning rate ε(t) and the neighborhood width σ(t) decrease monotonically with time. During training, the SOM behaves like a flexible net that folds onto the cloud formed by the training data. Because of the neighborhood relations, neighboring neurons are pulled in the same direction, and thus the feature vectors of neighboring neurons resemble each other. There are many variants of the SOM [10, 11]; they are not considered in this paper because the proposed algorithm is based on the standard SOM rather than a new variant. The 2D map can be easily visualized and thus gives useful information about the input data. The usual way to display the cluster structure of the data is to use a distance matrix, such as the U-matrix [12]. The U-matrix method displays the SOM grid according to the distances between neighboring neurons: clusters appear as regions of low inter-neuron distance and borders as regions of high inter-neuron distance. Another method of visualizing cluster structure is to assign the input data to their nearest neurons; neurons with no input data assigned to them can then be used as cluster borders [13].

III. ROUGH SET INCREMENTAL CLUSTERING

A set of neighboring nodes of the winning node is denoted as N c . We define hic (t ) as the neighborhood

This algorithm is a soft clustering method employing rough set theory [14]. It groups the given data set into a set of overlapping clusters. Each cluster C ⊆ U is represented by a lower approximation A(C) and an upper approximation Ā(C), where U is the set of all objects under exploration. The lower and upper approximations of C_i ⊆ U are required to follow some basic rough set properties, such as:

(1) ∅ ⊆ A(C_i) ⊆ Ā(C_i) ⊆ U
(2) A(C_i) ∩ A(C_j) = ∅, i ≠ j
(3) A(C_i) ∩ Ā(C_j) = ∅, i ≠ j
(4) If an object u_k ∈ U is not part of any lower approximation, then it must belong to two or more upper approximations.


Note that (1)-(4) are not independent; however, enumerating them is helpful in understanding the basics of rough set theory. The lower approximation A(C) contains all the patterns that definitely belong to the cluster C, and the upper approximation Ā(C) permits overlap. Since the upper approximation permits overlaps, each set of data points shared by a group of clusters defines an indiscernible set. Thus, the ambiguity in assigning a pattern to a cluster is captured using the upper approximation. Employing rough set theory, the proposed clustering scheme generates soft clusters (clusters with permitted overlap in the upper approximation). For a rough set clustering scheme and two given objects u_h, u_k ∈ U, we have three distinct possibilities:

- Both u_k and u_h are in the same lower approximation A(C).
- Object u_k is in a lower approximation A(C) and u_h is in the corresponding upper approximation Ā(C), and case 1 is not applicable.
- Both u_k and u_h are in the same upper approximation Ā(C), and cases 1 and 2 are not applicable.
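The three possibilities above can be sketched as a small assignment rule: an object joins the lower approximation of the cluster whose leader is clearly nearest, or the upper approximations of all clusters whose leaders are nearly equidistant. The ratio threshold used to decide "nearly equidistant" is an illustrative parameter, not a value from the paper.

```python
import numpy as np

def rough_assign(x, leaders, threshold=1.3):
    """Assign object x to lower/upper approximations given cluster leaders.
    `threshold` is a hypothetical ratio parameter for this sketch."""
    dists = np.linalg.norm(leaders - x, axis=1)
    nearest = int(np.argmin(dists))
    # Clusters whose leader is almost as close as the nearest one.
    close = [j for j in range(len(leaders))
             if dists[j] <= threshold * dists[nearest]]
    if len(close) == 1:
        # Unambiguous: object goes into the lower approximation (case 1).
        return {"lower": nearest, "upper": [nearest]}
    # Ambiguous: object goes into two or more upper approximations (property 4).
    return {"lower": None, "upper": close}

leaders = np.array([[0.0, 0.0], [1.0, 0.0]])
print(rough_assign(np.array([0.1, 0.0]), leaders))  # clearly cluster 0
print(rough_assign(np.array([0.5, 0.0]), leaders))  # overlapped: both uppers
```

This respects properties (1)-(4): lower approximations never overlap, and any object outside every lower approximation lands in at least two upper approximations.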

A genetic algorithm (GA) applies the principle of survival of the fittest to optimization and machine learning. A GA provides a very efficient search method working on a population, and has been applied to many optimization and classification problems [16], [17]. The general GA process is as follows: (1) Initialize the population of genes. (2) Calculate the fitness of each individual in the population. (3) Reproduce selected individuals to form a new population according to each individual's fitness. (4) Perform crossover and mutation on the population. (5) Repeat steps (2) through (4) until some condition is satisfied. The crossover operation swaps part of the genetic bit string between parents; it emulates the crossover of genes in the real world, in which descendants inherit characteristics from both parents. The mutation operation inverts some bits of the whole bit string at a very low rate; in the real world, mutants appear only rarely. Fig. 1 shows how crossover and mutation are applied in a genetic algorithm. Each individual in the population evolves toward higher fitness generation by generation.
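The five-step GA process above can be sketched as a minimal loop over bit strings. The toy fitness function (count of 1-bits) and population/generation sizes are illustrative; the crossover and mutation rates match those later listed in Table I.

```python
import random
random.seed(1)

L = 12            # bit-string length (matches the 12-bit strings of Fig. 1)
POP, GENS = 20, 40
fitness = lambda bits: sum(bits)   # toy objective: maximize the number of 1s

def select(pop):
    # Step (3): fitness-proportionate (roulette-wheel) reproduction.
    weights = [fitness(ind) + 1e-9 for ind in pop]
    return random.choices(pop, weights=weights, k=len(pop))

def crossover(a, b, rate=0.25):
    # Step (4): swap the tails of two parents past a random cut point.
    if random.random() < rate:
        cut = random.randrange(1, L)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(ind, rate=0.001):
    # Step (4): flip bits at a very low rate.
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]   # step (1)
for _ in range(GENS):                                                  # step (5)
    pop = select(pop)                                                  # steps (2)-(3)
    nxt = []
    for a, b in zip(pop[::2], pop[1::2]):
        c1, c2 = crossover(a, b)
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt
best = max(pop, key=fitness)
```

Generation by generation, selection pressure concentrates the population on higher-fitness bit strings, exactly the behavior described above.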

The quality of a conventional clustering scheme is determined using the within-group error [15], Δ, given by:

Δ = ∑_{i=1}^{m} ∑_{u_h, u_k ∈ C_i} distance(u_h, u_k)    (4)

Figure 1. Crossover and Mutation. Crossover of the 12-bit parents 010000001001 and 111001011101 yields 010000011101 and 111001001001; mutation flips one bit of 011100011001 to give 011101011001.

where u_h, u_k are objects in the same cluster C_i.

For the above rough set possibilities, three variants of equation (4) can be defined as follows:

Δ1 = ∑_{i=1}^{m} ∑_{u_h, u_k ∈ A(X_i)} distance(u_h, u_k)

Δ2 = ∑_{i=1}^{m} ∑_{u_h ∈ A(X_i), u_k ∈ Ā(X_i)} distance(u_h, u_k)

Δ3 = ∑_{i=1}^{m} ∑_{u_h, u_k ∈ Ā(X_i)} distance(u_h, u_k)    (5)

The total error of rough set clustering is then a weighted sum of these errors:

Δ_total = w1 × Δ1 + w2 × Δ2 + w3 × Δ3, where w1 > w2 > w3.    (6)

Since Δ1 corresponds to situations where both objects definitely belong to the same cluster, the weight w1 should have the highest value.

IV. GENETIC ALGORITHM

The genetic algorithm was proposed by John Holland in the early 1970s; it applies natural evolution mechanisms, such as crossover, mutation, and survival of the fittest, to optimization and machine learning.

V. GENETIC ROUGH SET CLUSTERING OF THE SELF ORGANIZING MAP

In this paper a rectangular grid is used for the SOM. Before the training process begins, the input data are normalized, which prevents any one attribute from dominating the clustering criterion. The normalization of a pattern X_i = {x_i1, ..., x_id}, for i = 1, 2, ..., N, is:

X_i = X_i / ‖X_i‖    (7)

Once the training phase of the SOM neural network is completed, the output grid of neurons, which is now stable under network iteration, is clustered by applying the rough set algorithm described in the previous section. The similarity measure used for rough set clustering of neurons is the Euclidean distance (the same measure used for training the SOM). In this proposed method


(see Fig. 2), neurons to which no data were ever mapped are excluded from processing by the rough set algorithm.

Therefore, a precision measure is needed to evaluate the quality of the proposed approach. A possible precision measure can be defined as the following equation [14]:

certainty = (number of objects in lower approximations) / (total number of objects)    (9)

VI. EXPERIMENTAL RESULTS

Figure 2. Clustering of the Self Organizing Map. The overlapped neurons are highlighted for two clusters, with their lower and upper approximations marked.

From the rough set algorithm it can be observed that if two neurons are indiscernible (i.e. they lie in the upper approximation of two or more clusters), they have a certain level of similarity with respect to the clusters they belong to, and that similarity relation has to be symmetric; the similarity measure must therefore be symmetric. According to the rough set clustering of the SOM, the overlapped neurons, and correspondingly the overlapped data (the data in the upper approximations), are detected. In the experiments, to calculate errors and uncertainty, the previous equations are applied to the results of the SOM (clustered and overlapped data). Then, for each overlapped neuron, a gene is generated that represents the alternative distances from each cluster leader. Fig. 3 shows an example of the genes generated for m overlapped neurons on n existing cluster leaders: each gene i is the vector (d_1, d_2, ..., d_n) of distances from overlapped neuron i to the n cluster leaders.
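The gene encoding of Fig. 3 can be sketched as follows: one distance vector per overlapped neuron. The leader positions and neuron vectors below are illustrative values, and the function name is hypothetical.

```python
import numpy as np

def build_genes(overlapped_neurons, leaders):
    """One gene per overlapped neuron: the vector (d_1, ..., d_n) of
    Euclidean distances from the neuron's feature vector to each of the
    n cluster leaders, as in Fig. 3."""
    return np.array([[np.linalg.norm(w - ld) for ld in leaders]
                     for w in overlapped_neurons])

leaders = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])      # n = 3 leaders
overlapped = np.array([[2.0, 0.0], [0.0, 1.5]])               # m = 2 neurons
genes = build_genes(overlapped, leaders)                      # shape (m, n)
```

Each row of `genes` is one gene of Fig. 3; the GA then searches these rows for the distance entry that best resolves the neuron's cluster membership.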

To demonstrate the effectiveness of the proposed clustering algorithm GR-SOM (Genetic Rough set incremental clustering of the SOM), two phases of experiments were carried out on the well-known Iris data set [18] and on our gathered data. The Iris data set, which has been widely used in pattern classification, consists of 150 data points of four dimensions; our collected data set has 48 data points. The Iris data are divided into three classes with 50 points each. The first class of Iris plant is linearly separable from the other two; the other two classes overlap to some extent. The first phase of experiments presents the uncertainty that comes from the data set, and in the second phase the errors are generated. The results of GR-SOM and RI-SOM [3] (Rough set Incremental SOM) are compared with I-SOM [4] (Incremental clustering of the SOM). The input data are normalized such that the value of each datum in each dimension lies in [0, 1]. For training, a 10 × 10 SOM with 100 epochs on the input data is used. The general parameters of the genetic algorithm are configured as in Table I. Fig. 4 shows the certainty generated from epoch 100 to 500 by equation (9) on the mentioned data sets. From the obtained certainty it is clear that GR-SOM can efficiently detect the overlapped data mapped by overlapped neurons (Table II). In the second phase, the same initialization of the SOM is used. The errors arising from the data sets according to equations (5) and (6) are generated by the proposed algorithms (Table III). The weighted sum (6) is configured as in (10).

TABLE I. GENERAL PARAMETERS OF THE GENETIC ALGORITHM

Population Size: 50
Number of Evaluations: 10
Crossover Rate: 0.25
Mutation Rate: 0.001
Number of Generations: 100

Figure 3. Generated genes: m is the number of overlapped neurons and n the number of existing clusters. The highlighted d_i is the optimal one, minimizing the fitness function.

After the genes have been generated, the genetic algorithm is employed to minimize the following fitness function, which represents the total sum of the distances d_j of each related gene:

F = ∑_{i=1}^{m} ∑_{j=1}^{n} g_i(d_j)    (8)

TABLE II. THE CERTAINTY LEVEL OF GR-SOM, RI-SOM AND I-SOM ON THE IRIS DATA SET FROM EPOCH 100 TO 500

Epoch   I-SOM   RI-SOM   GR-SOM
100     33.33   67.07    69.45
200     65.23   73.02    74.34
300     76.01   81.98    83.67
400     89.47   91.23    94.49
500     92.01   97.33    98.01
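One plausible reading of equation (8) is that the GA evaluates, for each gene, the distance entry selected by a candidate cluster assignment, and searches for the assignment minimizing the total. This is a hedged sketch of that reading; the `choice` encoding and values are illustrative, not from the paper.

```python
import numpy as np

def fitness(genes, choice):
    """Sketch of equation (8): total of the distances d_j selected for each
    gene. `choice[i]` is the cluster leader tentatively chosen for overlapped
    neuron i; the GA searches for the choice that minimizes F."""
    return sum(genes[i][j] for i, j in enumerate(choice))

genes = np.array([[2.0, 2.0, 3.6],    # gene 1: distances d_1..d_n (illustrative)
                  [1.5, 4.3, 1.5]])   # gene 2
print(fitness(genes, [0, 2]))         # 2.0 + 1.5 = 3.5
```

A lower F means each overlapped neuron has been attached to a nearby cluster leader, which is how the GA resolves the overlaps detected by the rough set step.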

The aim of the proposed approach is to make the genetic rough set clustering of the SOM as precise as possible.



The artificial data set consists of 569 data points in 30 dimensions and is trained twice, once with I-SOM and once with RI-SOM. The errors in the generated results are calculated as the difference between the result of equation (9) and 1 (see Fig. 5). From Fig. 5 it can be observed that the proposed RI-SOM algorithm produces fewer errors in cluster prediction than I-SOM.

VII. CONCLUSION AND FUTURE WORK


Figure 4. Comparison of the certainty-level of GR-SOM, RI-SOM and I-SOM on the Iris data set.

∑_{i=1}^{3} w_i = 1, and for each w_i we have: w_i = (1/6) × (4 − i).    (10)
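Equation (10) fixes the weights of equation (6) as w_i = (4 − i)/6, i.e. 1/2, 1/3 and 1/6, which sum to 1 and satisfy w1 > w2 > w3. A quick sketch:

```python
# Equation (10): weights for the total rough-set error of equation (6).
w = [(4 - i) / 6 for i in (1, 2, 3)]     # [0.5, 1/3, 1/6]

def total_error(d1, d2, d3):
    """Equation (6): weighted sum of the three rough-set errors."""
    return w[0] * d1 + w[1] * d2 + w[2] * d3
```

Because the weights sum to 1, Δ_total is a convex combination of Δ1, Δ2 and Δ3, with the heaviest weight on the unambiguous (lower-approximation) error, as argued above.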

TABLE III. COMPARATIVE GENERATED ERRORS OF GR-SOM AND I-SOM ON THE IRIS DATA SET ACCORDING TO EQUATIONS (5) AND (6)

Method   Δ1     Δ2     Δ3     Δ_total
GR-SOM   1.05   0.85   0.04   1.4
I-SOM    —      —      —      2.8

In this paper a two-level clustering approach (GR-SOM) has been proposed to predict clusters of high-dimensional data and to detect the uncertainty that comes from overlapping data. The approach is based on rough set theory, which provides a soft clustering that detects overlapped data in the data set and makes the clustering as precise as possible; a GA is then applied to find the true cluster of each overlapped datum. The results of both phases indicate that GR-SOM is more accurate and generates fewer errors than crisp clustering (I-SOM), and that the proposed algorithm accurately detects overlapping clusters. As future work, the overlapped data could also be assigned to the true clusters they belong to by assigning fuzzy membership values to the indiscernible set of data. A weight could also be assigned to each data dimension to improve the overall accuracy.

REFERENCES

[1] T. Kohonen, "Self-organized formation of topologically correct feature maps", Biol. Cybern., vol. 43, 1982, pp. 59–69.
[2] T. Kohonen, Self-Organizing Maps, Springer, Berlin, Germany, 1997.
[3] M.N.M. Sap and E. Mohebi, "Hybrid Self Organizing Map for Overlapping Clusters", Springer-Verlag Proceedings of the CCIS, Hainan Island, China, 2008.
[4] M.N.M. Sap and E. Mohebi, "Rough set Based Clustering of the Self Organizing Map", IEEE Computer Society Proceedings of the 1st Asian Conference on Intelligent Information and Database Systems, Dong Hoi, Vietnam, 2009.
[5] M.N.M. Sap and E. Mohebi, "A Novel Clustering of the SOM using Rough set", IEEE Proceedings of the 6th Student Conference on Research and Development, Johor, Malaysia, 2008.
[6] R.M. Gray, "Vector quantization", IEEE Acoust. Speech Signal Process. Mag., vol. 1, no. 2, 1984, pp. 4–29.
[7] N.R. Pal, J.C. Bezdek, and E.C.K. Tsao, "Generalized clustering networks and Kohonen's self-organizing scheme", IEEE Trans. Neural Networks, vol. 4, 1993, pp. 549–557.
[8] J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, San Francisco, 2000.
[9] S. Asharaf, M. Narasimha Murty, and S.K. Shevade, "Rough set based incremental clustering of interval data", Pattern Recognition Letters, vol. 27, 2006, pp. 515–519.

Furthermore, to demonstrate the effectiveness of the proposed clustering algorithm RI-SOM, two data sets, one artificial and one real-world, were used in our experiments. The results are compared with I-SOM (Incremental clustering of the SOM). The input data are normalized such that the value of each datum in each dimension lies in [0, 1]. For training, a 10 × 10 SOM with 100 epochs on the input data is used.

Figure 5. Comparison of the error between the proposed RI-SOM algorithm and I-SOM on the artificial data set.


[10] Yan and Yaoguang, "Research and application of SOM neural network based on kernel function", Proceedings of ICNN&B'05, vol. 1, 2005, pp. 509–511.
[11] M.N.M. Sap and E. Mohebi, "Outlier Detection Methodologies: A Review", Journal of Information Technology, UTM, vol. 20, no. 1, 2008, pp. 87–105.
[12] A. Ultsch and H.P. Siemon, "Kohonen's self organizing feature maps for exploratory data analysis", Proceedings of the International Neural Network Conference, Dordrecht, Netherlands, 1990, pp. 305–308.
[13] X. Zhang and Y. Li, "Self-organizing map as a new method for clustering and data analysis", Proceedings of the International Joint Conference on Neural Networks, Nagoya, Japan, 1993, pp. 2448–2451.

[14] Z. Pawlak, "Rough sets", International Journal of Computer and Information Sciences, vol. 11, 1982, pp. 341–356.
[15] S.C. Sharma and A. Werner, "Improved method of grouping provincewide permanent traffic counters", Transportation Research Record 815, Washington, D.C., 1981, pp. 13–18.
[16] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[17] R. Eberhart, P. Simpson, and R. Dobbins, Computational Intelligence PC Tools, Waite Group Press, 1996.
[18] UCI Machine Learning Repository, www.ics.uci.edu/mlearn/MLRepository.html.

