
2018 21st International Conference of Computer and Information Technology (ICCIT), 21-23 December, 2018

A Comprehensive Analysis on Risk Prediction of Acute Coronary Syndrome using Machine Learning Approaches

M. Raihan1, Muhammad Muinul Islam2, Promila Ghosh3, Shakil Ahmed Shaj4, Mubtasim Rafid Chowdhury5, Saikat Mondal6 and Arun More7
Khulna University of Engineering & Technology, Khulna-9203, Bangladesh1,2,5
North Western University, Khulna-9100, Bangladesh1,3,4
Khulna University, Khulna-9208, Bangladesh6
Department of Cardiology, Ter Institute of Rural Health, Murud-413510, Latur, India7
Emails: raihanbme@gmail.com1, mmi.kuet@gmail.com2, promilaghoshmonty@gmail.com3, shakilahmedshaj@gmail.com4, mubtasimrafid@yahoo.com5, saikatcsebd@gmail.com6, arunmoregsa@gmail.com7

Abstract—Acute Coronary Syndrome (ACS) is a major cause of sudden death. Its principal risk factors include drug addiction, hypertension, diabetes and dyslipidemia. A dataset of ACS patients has been collected from healthcare units. After preprocessing the data, the risk of heart attack is analyzed using machine learning (ML) approaches, whose predictive proficiency exceeds that of traditional systems. The central aim of this analysis is to anticipate the most significant risk of heart attack. Neural Network, SVM, AdaBoost, Bagging, K-NN and Random Forest approaches are used to anticipate the onset of ACS. The best accuracies, obtained with AdaBoost and Bagging, are 75.49% and 76.28%; the precision and recall are 0.741 and 0.755 for AdaBoost, and 0.750 and 0.763 for Bagging, respectively.

Keywords—ACS, Heart Disease, Neural Network, SVM, AdaBoost, Bagging, K-NN, Random Forest.

I. INTRODUCTION
Acute Coronary Syndrome (ACS) arises when blood flow in the coronary arteries suddenly decreases. It denotes a sudden, unstable condition of the coronary arteries, one manifestation of which is heart attack. It is a paramount illness at present and occurs frequently: surveys show that every year many people die from heart attacks [1]. A common form of the disease is Coronary Artery Disease (CAD) [1], which arises from blockage of the vessels that supply blood to the heart [1]. According to a recent World Health Organization (WHO) report published in 2017, Coronary Heart Disease (CHD) accounts for 14.31% of total deaths in Bangladesh. The age-based death rate is 108.99 per 100,000 of the population, which ranks Bangladesh 104th in the world [2].

In our analysis, the dataset was collected from AFC Fortis Escorts Heart Institute, Khulna, Bangladesh and Rural Health Progress Trust, India [3][4]. It can be presumed that the conventional system for assessing heart attack risk is not adequate: existing diagnostic systems cannot reliably anticipate a heart attack. Computer intelligence plays an incredible role in the diagnosis field. If computer intelligence is employed as an intelligent diagnosis tool, healthcare industries benefit. Different computational intelligence techniques such as Naive Bayes (NB), Random Forest and Artificial Neural Network (ANN) have been implemented to spot heart disease [5]. In this respect, a properly chosen ML model yields the highest accuracy. ACS is a burning issue at present: like other countries, many people in Bangladesh are affected by it. By predicting the most significant risks, the rate of ACS-affected patients can be decreased. From this perspective, this analysis topic was chosen.

The remainder of the paper is arranged as follows: sections II and III elaborate the related works and the methodology, with particular attention to the soundness of the classifier algorithms, respectively. Section IV clarifies the experimental results of this analysis, with the aim of justifying the novelty of this work. Finally, the paper concludes with section V.

II. RELATED WORKS

Various studies of CHD prediction show that ML approaches have been adopted to forecast coronary events meticulously. J. Thomas et al. explained how data mining formulations such as ANN and K-NN can characterize congenital angina, obtaining an accuracy of 80.6%; Naive Bayes, Neural Network and K-NN approaches were used [6]. R. Ani et al. described an IoT-based ensemble-classifier system for diagnostic prediction and patient monitoring that produced its best output, 93%, using random forest; Random Forest, Bagging, Naive Bayes and K-Nearest Neighbor were used [7]. Multilayer Perceptron (MLP) and Sequential Minimal Optimization (SMO) classifier algorithms have been used to anticipate heart disease, and Bayes Net and SMO classifiers produced the best results of 87% and 89% on the collected data [8]. S. Nikan et al. planned a system to single out the risk of heart attack [9]. C. Suvarna et al. used the Particle Swarm Optimization technique for an effective heart disease prediction system [10]. B. Gnaneswar et al. explained prognosis and interpretation by accurately adopting data mining formulations [11].


T. Mahboob et al. adopted an ensemble approach to detect CHD [12]. K. Pahwa et al. used an SVM formulation for feature selection to obtain the best results; Naive Bayes and Random Forest were used, achieving 84.1584% with the 10 highest-ranked features and 84.1604% with 12 features [13]. J. Singh et al. explained heart disease prediction using classification [14]. S. Pouriyeh et al. presented a comparative analysis of different ML algorithms for detecting the risk of angina, namely Decision Tree (DT), Naive Bayes (NB), K-Nearest Neighbor (K-NN), Single Conjunctive Rule Learner (SCRL), Radial Basis Function (RBF), Multilayer Perceptron (MLP) and SVM [15].

The purpose of this research study is to identify the performance of some popular ML algorithms in predicting ACS, more specifically in predicting heart attack.
III. METHODOLOGY

The collected dataset contains 506 instances and 70 features [3][4]. We selected 30 features, essentially the symptoms and some diagnostic outputs, using InfoGainAttributeEval with the Ranker algorithm. The integrated working procedure is shown in Fig. 1.

We can segregate our strategy into three parts:
• Selection of the Attributes
• The Dataset Training
• Applications of the Algorithms

TABLE I: Feature Lists

Features | Subcategory | Data Distributions
Age | Lowest: 15, Highest: 100 | 57.21 ± 13.25 (Mean ± SD)
Sex | Male / Female | 24.7% / 75.3%
Profession | Business / Govt / Household / Other Service / Private / Unemployed and Retired | 68.2% / 2.2% / 10.5% / 6.9% / 5.5% / 6.7%
Height | Lowest: 136 cm, Highest: 190 cm | 162.75 ± 8.99
Weight | Lowest: 38 kg, Highest: 115 kg | 66.20 ± 10.33
BMI | Lowest: 15.24, Highest: 37.95 | 24.96 ± 3.61
Family History | Yes / No | 30% / 70%
Smoking | Yes / Ex / No | 34.2% / 12.5% / 53.4%
Tobacco | Yes / No | 21.3% / 78.7%
HTN | Yes / No | 69.6% / 30.4%
DLP | Yes / No | 13.4% / 86.6%
DM | Yes / No | 43.9% / 56.1%
Physical Inactivity | Yes / No | 4.7% / 95.3%
Psychological Stress | Yes / No | 20.8% / 79.2%
Stroke | Yes / No | 9.9% / 90.1%
CAG | Yes / No | 14.2% / 85.8%
CABG | Yes / No | 3.2% / 96.8%
Previous Heart Attack | Yes / No | 19.4% / 80.4%
Drug History | Yes / No | 77.3% / 22.7%
Chest Pain | Yes / No | 76.9% / 23.1%
Dyspnea | Yes / No | 40.5% / 59.5%
Palpitation | Yes / No | 9.7% / 90.3%
Syncope | Yes / No | 4.7% / 95.3%
Sweating | Yes / No | 35.2% / 64.8%
Nausea | Yes / No | 10.1% / 89.9%
Vomiting | Yes / No | 20.8% / 79.2%
Radiation | Yes / No | 7.5% / 92.5%
Exertional Chest Pain | Yes / No | 10.3% / 89.7%
Creatinine | Abnormal / Normal | 62.3% / 37.7%
ACS (Class) | Yes / No | 69.8% / 30.2%
*Ex = Ex Smoker; *SD = Standard Deviation
1) Selection of the Attributes: InfoGainAttributeEval is an attribute evaluator that must be paired with the Ranker algorithm, a search method suited only to generating a ranked list for attribute evaluators. InfoGainAttributeEval evaluates each attribute in the context of the target variable, and the Ranker method then orders the attributes by their scores.
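As a concrete illustration of this step, the sketch below approximates WEKA's InfoGainAttributeEval with Ranker using scikit-learn's mutual-information scorer; the clinical dataset is not public, so a synthetic stand-in of the same shape (506 instances, 70 features) is generated, and the scorer and parameters are our assumptions rather than the authors' exact configuration.

```python
# Minimal sketch: rank attributes by an information-gain-like score and
# keep the 30 best, mimicking InfoGainAttributeEval + Ranker in WEKA.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for the (non-public) clinical dataset.
X, y = make_classification(n_samples=506, n_features=70, random_state=1)

gain = mutual_info_classif(X, y, random_state=1)  # score each attribute
ranking = np.argsort(gain)[::-1]                  # Ranker: descending order
top30 = ranking[:30]                              # keep the top 30 attributes
X_selected = X[:, top30]
print("Top-ranked feature indices:", top30[:10])
```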
2) The Dataset Training: Cross-validation is a procedure commonly adopted for comparing trained predictive models. In statistical analysis, cross-validation is a technique for estimating how a model generalizes to an independent dataset; it is likewise used to assess the fit of the predictive model during training. It partitions the data for analytical evaluation into complementary subsets, so that there is enough data to form both a training set and a test set.
3) Applications of the Algorithms: In cross-validation, a given data sample is split into k groups, and the value of k must be chosen; the procedure is then named after that value of k. For training the dataset, 12-fold cross-validation is adopted, as sketched below.
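A minimal sketch of this training protocol, assuming scikit-learn in place of WEKA and the synthetic stand-in dataset from above; the fold count is the paper's 12, while the classifier is only a placeholder:

```python
# Minimal sketch: 12-fold cross-validation of a placeholder classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
folds = StratifiedKFold(n_splits=12, shuffle=True, random_state=1)  # k = 12
scores = cross_val_score(BaggingClassifier(random_state=1), X, y, cv=folds)
print(f"Mean accuracy over 12 folds: {scores.mean():.4f}")
```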
AdaBoost (AB) is an approach that forms a linear combination of learners. To improve performance it can be used in combination with many other machine learning approaches: the outputs of the other learners are mingled into a weighted sum that represents the final output of the boosted classifier. AdaBoost is adaptive but sensitive to outliers.

AdaBoost is a popular boosting approach. Let G be the given dataset of d class-labeled tuples, (X1, y1), (X2, y2), ..., (Xd, yd), where yi is the class label of tuple Xi. Initially, AdaBoost assigns each training tuple a weight of 1/d. Generating the k classifiers of the ensemble requires k rounds. In round i, tuples from G are sampled to form a training set Gi of size d; the chance of each tuple being selected depends on its weight. A classifier model Qi is derived from Gi, and its error is then calculated using Gi. If a tuple is correctly classified its weight is decreased, and if it is misclassified its weight is increased, so a tuple gains influence the more often it has been misclassified. The basic idea is that, when a classifier is built, some classifiers will be better at classifying certain difficult tuples than others. To compute the error rate of model Qi, we sum the weights of the tuples in Gi that Qi misclassified:

error(Q_i) = \sum_{j=1}^{d} w_j \times err(X_j)

where err(Xj) is 1 if the tuple was misclassified and 0 otherwise. The error rate of Qi affects how the weights of the tuples are updated: if a tuple in round i was correctly classified, its weight is multiplied by error(Qi) / (1 − error(Qi)). The lower a classifier's error rate, the more accurate it is, and the weight of classifier Qi's vote is

w_i = \log\left(\frac{1 - error(Q_i)}{error(Q_i)}\right)
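A minimal sketch of the boosting scheme just described, using scikit-learn's AdaBoostClassifier (which performs the per-tuple weight updates and weighted votes internally) on the synthetic stand-in dataset; the number of rounds k is an assumption, not the paper's WEKA setting:

```python
# Minimal sketch: AdaBoost with k = 100 boosting rounds; the weight
# updates and classifier votes follow the equations given above.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
ab = AdaBoostClassifier(n_estimators=100, random_state=1)
print(cross_val_score(ab, X, y, cv=12).mean())  # mean CV accuracy
```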
Fig. 1: Work-flow of the analysis. [Flowchart: imported data of 506 patients → 30 attributes selected by the Ranker algorithm → 12-fold cross-validation → application of AdaBoost, Bagging, K-NN, Random Forest, SVM and Neural Network → determination of statistical metrics → comparison of performance.]

Bagging (BAG) incorporates regularization and improves the accuracy and stability of ML approaches. Bagging is likewise an ensemble meta-approach. It uses decision tree methods: a decision tree constructs a hierarchical partitioning of the data, and each part of the tree relates to a different partition. For instance, if q1, ..., qm are the fractions of the records belonging to the m different classes in a node N, then the impurity F(N) of the node N is

F(N) = 1 - \sum_{i=1}^{m} q_i^2

Bagging is a general procedure that can be adapted to reduce the variance of approaches that have high variance. It works as follows. Given a set D of d tuples, for each iteration i (i = 1, 2, ..., k) a training set Di of d tuples is sampled with replacement from the original set D; because of the replacement, some of the original tuples of D may not be included in Di. A classifier model Mi is learned from each training set Di. To classify a tuple X, each Mi returns its class prediction, which counts as one vote; the bagged classifier M* counts the votes and assigns the class with the most votes to X. Bagging can also be applied to the prediction of continuous values by taking the average value.

In other words, the Bagging algorithm builds an ensemble of classifiers, taking as input a set of training tuples D and the number of models k in the ensemble. For each model from 1 to k, a bootstrap sample is created and a model is derived from it; for the classification of X, the ensemble of models is then used [16].
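A minimal sketch of the bagging procedure just described, again with scikit-learn and the synthetic stand-in; by default BaggingClassifier draws bootstrap samples with replacement and takes a majority vote of decision-tree base learners:

```python
# Minimal sketch: k = 10 models, each fit on a bootstrap sample Di drawn
# with replacement from D; prediction is the majority vote of M*.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
bag = BaggingClassifier(n_estimators=10, bootstrap=True, random_state=1)
bag.fit(X, y)
print(bag.predict(X[:5]))  # class with the most votes for each tuple
```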

Random Forest (RF) follows the decision tree system of rules for classification analysis. It is trained by building a multitude of decision trees, and its output for classification is the mode of the classes predicted by the individual trees. Random Forest is a very handy and easy-to-use approach. To build a decision tree classifier Qi, H attributes are randomly selected at each node as candidates for the split at that node, where H is smaller than the total number of attributes; Forest-RI denotes random forests with this random input selection. Another form of random forest, called Forest-RC, uses random linear combinations of the input attributes: instead of randomly selecting a subset of the attributes, it creates new attributes that are linear combinations of the existing ones. An attribute is generated by specifying L; then L attributes are randomly selected and added together with coefficients that are uniform random numbers on [-1, 1], and H such linear combinations are generated. This form of random forest is useful when only a few attributes are available.
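The sketch below shows a Forest-RI-style configuration in scikit-learn: H attributes are drawn at random at each split (here the common square-root heuristic, our assumption) and node impurity is the Gini measure F(N) given above:

```python
# Minimal sketch: random forest with random input selection (Forest-RI).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
rf = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",  # H randomly chosen attributes per split
    criterion="gini",     # impurity F(N) = 1 - sum_i q_i^2
    random_state=1,
)
rf.fit(X, y)
print(rf.predict(X[:5]))  # output = mode of the trees' class votes
```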
Fig. 2: Bar-chart between ACS and Serum Creatinine.

Neural Network (NN) is inspired by the living nervous system [17]. A conventional program begins at the first line of code, executes it and moves on to the next, following linear instructions; a genuine neural network does not take such a linear path. Data is processed collectively, in parallel, all through a network of nodes, which makes it a highly expressive approach. A unit in a neural network consists of a set of input values (xi) with associated weights (wi) and a function (g) that sums the weighted inputs and maps the result to an output (y). Neurons are organized into three kinds of layers: input, hidden and output. The values of the input layer are fed to the next layer of neurons, the hidden layer; several hidden layers can exist in one neural network. The final layer is the output layer.
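A minimal sketch of the unit just described, with a sigmoid assumed as the summarizing function g; the input values and weights are illustrative only:

```python
# Minimal sketch of one neural-network unit: y = g(sum_i w_i * x_i + b).
import numpy as np

def unit(x, w, b):
    # weighted sum of inputs passed through a sigmoid squashing function
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([0.5, 1.0, 0.2])   # input values x_i
w = np.array([0.4, -0.6, 0.9])  # associated weights w_i
print(unit(x, w, b=0.1))        # output y of the unit
```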
K-Nearest Neighbor (K-NN) is a simple algorithm. In K-NN classification the output is a class membership: an object is classified by a majority vote of its neighbors and assigned to the class most common among its k closest neighbors. When k = 1, the object simply receives the class of the training tuple that is closest to it in pattern space. Nearest-neighbor classifiers are based on learning by analogy: K-NN compares a given test tuple with the training tuples, where the training tuples are described by attributes and each tuple represents a point in an n-dimensional space.
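A minimal sketch of this classifier, matching the Euclidean distance and K = 2 reported in section IV, with the dataset again a synthetic stand-in:

```python
# Minimal sketch: K-NN with Euclidean distance and a majority vote
# among the K = 2 closest training tuples.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
knn = KNeighborsClassifier(n_neighbors=2, metric="euclidean")
knn.fit(X, y)
print(knn.predict(X[:5]))
```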
Support Vector Machine (SVM) is a supervised learning formulation [18]. It maps its inputs implicitly into a high-dimensional feature space and is a method for the classification of both linear and nonlinear data. An SVM uses a nonlinear mapping to transform the original training data and then searches for the linear optimal separating hyperplane that separates the tuples of one class from another; it finds this hyperplane using support vectors and margins.
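A minimal sketch of a nonlinear SVM; the RBF kernel is our assumption (the paper's WEKA SMO configuration may differ), while the tolerance value mirrors the 0.001 reported in section IV:

```python
# Minimal sketch: SVM with a nonlinear kernel mapping inputs to a
# high-dimensional space, then finding the separating hyperplane.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
svm = SVC(kernel="rbf", tol=0.001)
svm.fit(X, y)
print(svm.predict(X[:5]))
```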
The features we have classified are stated in TABLE I. WEKA tools have been adopted for our analysis: the dataset has been preprocessed with WEKA and missing data have been handled.

4) Tools and Techniques:
• Waikato Environment for Knowledge Analysis (WEKA), Version 3.8.2
• IBM SPSS Statistics 25
IV. OUTCOMES

The results are analyzed based on 9 performance parameters:

A. Seed
A seed is a random number used to initialize a random-number generator.

B. Correctly Classified Instances
The labels on the test set are assumed to be the correct classification.

Accuracy = \frac{T_p + T_n}{T_p + T_n + F_p + F_n}

C. FP Rate
The fraction of examples predicted positive that are actually negative.

D. Precision
Precision is the fraction of the predicted positives that are true positives:

Precision = \frac{T_p}{T_p + F_p}

E. Recall
Recall is the TP rate: the fraction of the actually positive instances that were predicted positive.

Recall = \frac{T_p}{T_p + F_n}

F. F-measure
The harmonic mean of precision and recall is called the F-measure:

F = 2 \times \frac{Precision \times Recall}{Precision + Recall}

G. MCC
The MCC (Matthews correlation coefficient) connects precision and recall, relating the predicted and actual classes.

H. ROC Area
The ROC area is obtained by ranking the test instances according to the classifiers' class probabilities.

I. PRC Area
The PRC area is calculated separately for each class by treating the instances of that class as positive.
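The parameters defined above can be computed as in the following sketch, given true labels, predicted labels and class probabilities; the toy vectors are illustrative only:

```python
# Minimal sketch: computing the performance parameters with scikit-learn.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, roc_auc_score,
                             average_precision_score)

y_true  = [1, 0, 1, 1, 0, 1, 0, 1]                   # actual classes
y_pred  = [1, 0, 1, 0, 0, 1, 1, 1]                   # predicted classes
y_score = [0.9, 0.2, 0.8, 0.4, 0.3, 0.7, 0.6, 0.95]  # class probabilities

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("MCC      :", matthews_corrcoef(y_true, y_pred))
print("ROC area :", roc_auc_score(y_true, y_score))
print("PRC area :", average_precision_score(y_true, y_score))
```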
In our analysis, age (p=0.004), sex (p=0.001), smoking (p=0.001), previous heart attack (p=0.001), chest pain (p=0.001), palpitation (p=0.007), syncope (p=0.009), sweating (p=0.001), vomiting (p=0.002) and radiation (p=0.006) are the most significant features at the 10% significance level (p=0.010), where p is the probability value. About 70.5% of people with abnormal serum creatinine, 81.5% of smokers and 71.4% of ex-smokers had a heart attack, as shown in Fig. 2 and Fig. 3 respectively. In our dataset, about 69.76% of the instances have ACS, as shown in Fig. 4.

For AdaBoost (AB) the batch-size parameter is 100. The batch size is also 100 for the Bagging (BAG) technique, with seed 1. In the K-NN approach the value of K is 2, with the Euclidean distance function. In the SVM system, the batch size is 100, the epsilon value is 1.0E-12 and the tolerance parameter is 0.001. For the Neural Network (NN) we applied seed 0, a training time of 100 and a batch size of 100. For Random Forest (RF) the bag-size percentage is 100 and the seed is 1. These parameters are available in the WEKA tools.

Applying the algorithms, we evaluated the classified instances for seed 1, seed 2 and seed 3. For the AdaBoost approach we obtained 73.517%, 73.517% and 75.494% correctly classified instances; the precision values are 0.718, 0.720 and 0.786; the recall values are 0.735, 0.737 and 0.755; and the ROC areas are 0.764, 0.762 and 0.764. For the Bagging (BAG) approach we obtained 75.49% for seed 1, 76.2846% for seed 2 and 76.087% for seed 3 as correctly classified instances.
Fig. 3: Bar-chart between ACS and Smoking.

Fig. 4: Bar-chart of ACS patients.

The precision for BAG is 0.741 for seed 1, 0.750 for seed 2 and 0.748 for seed 3, and the ROC area of BAG is 0.752. Applying the K-NN approach we obtained 72.332% for seed 1, 72.330% for seed 2 and 72.1344% for seed 3 as correctly classified instances; the precision values are 0.699, 0.700 and 0.696; the recall for K-NN is 0.723 for seed 1, 0.723 for seed 2 and 0.721 for seed 3; and 0.688, 0.690 and 0.697 are the ROC area values for the K-NN technique. For the Neural Network (NN), the accuracies are 70.3957%, 69.9605% and 70.1581% for seeds 1, 2 and 3 respectively; the precision values are 0.694, 0.692 and 0.691; the recall values for the NN approach are 0.704, 0.700 and 0.702; and the ROC areas are 0.698, 0.730 and 0.700. For RF we obtained accuracies of 74.7036% for seed 1, 75.2964% for seed 2 and 74.5059% for seed 3, with 0.731, 0.738 and 0.728 as the precision values and 0.745, 0.753 and 0.745 as the recall values. For SVM we achieved 72.13%, 72.73% and 72.53% correctly classified instances, with 0.697, 0.708 and 0.704 for precision and 0.725, 0.727 and 0.725 for recall; the ROC area of SVM is 0.580 for seed 1, 0.584 for seed 2 and 0.585 for seed 3.

Finally, we have compared the performance parameters of our analysis in Table II. Comparing these results, we conclude that the Bagging technique is the most suitable for predicting the risk of ACS; we have thus accomplished a significant prediction of ACS. Fig. 5 shows the ROC curves of the algorithms; the TP and FP rates for each of the curves range from 0 to 1.

Fig. 5: ROC curves of (a) AdaBoost, (b) Bagging, (c) K-NN, (d) Neural Network, (e) Random Forest, (f) SVM.
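The seed-wise comparison summarized in Table II can be reproduced in outline as below, looping over seeds 1 to 3 for each classifier on the synthetic stand-in dataset; only two of the six classifiers are shown:

```python
# Minimal sketch: evaluating classifiers under seeds 1, 2 and 3.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=506, n_features=30, random_state=1)
for name, Cls in [("AdaBoost", AdaBoostClassifier),
                  ("Bagging", BaggingClassifier)]:
    for seed in (1, 2, 3):
        acc = cross_val_score(Cls(random_state=seed), X, y, cv=12).mean()
        print(f"{name}, seed {seed}: accuracy = {acc:.4f}")
```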
TABLE II: Comparison on Performance Parameters for Seed 1, Seed 2 and Seed 3 (Weighted Avg.)

Performance Parameters | AB | BAG | K-NN | NN | RF | SVM
Seed | 3 | 2 | 1 | 1 | 2 | 2
Correctly Classified Instances (%) | 75.49 | 76.28 | 72.33 | 70.40 | 75.30 | 72.72
TP Rate | 0.755 | 0.763 | 0.723 | 0.704 | 0.753 | 0.727
FP Rate | 0.425 | 0.418 | 0.516 | 0.440 | 0.753 | 0.559
Precision | 0.741 | 0.750 | 0.699 | 0.694 | 0.738 | 0.708
F-Measure | 0.740 | 0.748 | 0.692 | 0.698 | 0.730 | 0.678
MCC | 0.374 | 0.394 | 0.262 | 0.274 | 0.356 | 0.253
ROC Area | 0.764 | 0.756 | 0.688 | 0.698 | 0.746 | 0.584
PRC Area | 0.779 | 0.773 | 0.710 | 0.732 | 0.771 | 0.627

V. CONCLUSION

The selection of the best machine learning technique depends on the choice of both the classified instances and the performance parameters. Intelligent algorithmic tools can contribute productively to improving the accuracy of disease treatment, and an intelligent algorithm can act as a preventive aid against heart attack [19]. No single algorithm outperforms the rest on each and every criterion. Hence, an in-depth analysis including as many metrics and instances as possible offers more flexibility in trading off among the algorithms and provides a constructive basis for selecting the most suitable approach.
Considering all the facts, we have determined Bagging to be the best algorithm for predicting ACS in view of the evaluation metrics and performance parameters, as rationalized by the receiver operating characteristic (ROC) curves. This Bagging technique can be used in any system to obtain accurate anticipation of heart attack risk.

REFERENCES

[1] J. Roland, "What You Should Know About Acute Coronary Syndrome (ACS)", Healthline, 2016. [Online]. Available: https://www.healthline.com/health/heart-disease/acute-coronary-syndrome. [Accessed: 28-Jun-2018].
[2] "Coronary Heart Disease in Bangladesh", World Life Expectancy, 2017. [Online]. Available: http://www.worldlifeexpectancy.com/bangladesh-coronary-heart-disease. [Accessed: 03-Jul-2018].
[3] M. Raihan, S. Mondal, A. More, M. Sagor, G. Sikder, M. Arab Majumder, M. Al Manjur and K. Ghosh, "Smartphone based ischemic heart disease (heart attack) risk prediction using clinical data and data mining approaches, a prototype design", in 2016 19th International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 2016, pp. 299-303.
[4] M. Raihan, S. Mondal, A. More, P. Boni and M. Sagor, "Smartphone Based Heart Attack Risk Prediction System with Statistical Analysis and Data Mining Approaches", Advances in Science, Technology and Engineering Systems Journal, vol. 2, no. 3, pp. 1815-1822, 2017.
[5] M. Jabbar, B. Deekshatulu and P. Chandra, "Computational Intelligence Technique for early Diagnosis of Heart Disease", in 2015 IEEE International Conference on Engineering and Technology (ICETECH), Coimbatore, India, 2015, pp. 1-6.
[6] J. Thomas and T. Princy, "Human heart disease prediction system using data mining techniques", in 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Nagercoil, India, 2016, pp. 1-5.
[7] R. Ani, S. Krishna, N. Anju, M. Aslam and O. Deepa, "IoT based patient monitoring and diagnostic prediction tool using ensemble classifier", in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 2017, pp. 1588-1593.
[8] M. Sultana, A. Haider and M. Shorif Uddin, "Analysis of data mining techniques for heart disease prediction", in 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), Dhaka, Bangladesh, 2016, pp. 1-5.
[9] S. Nikan, F. Gwadry-Sridhar and M. Bauer, "Machine Learning Application to Predict the Risk of Coronary Artery Atherosclerosis", in 2016 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 2016, pp. 34-39.
[10] C. Suvarna, A. Sali and S. Salmani, "Efficient heart disease prediction system using optimization technique", in 2017 International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 2017, pp. 374-379.
[11] B. Gnaneswar and M. Ebenezar Jebarani, "A review on prediction and diagnosis of heart failure", in 2017 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 2017, pp. 1-3.
[12] T. Mahboob, R. Irfan and B. Ghaffar, "Evaluating ensemble prediction of coronary heart disease using receiver operating characteristics", in 2017 Internet Technologies and Applications (ITA), Wrexham, UK, 2017, pp. 110-115.
[13] K. Pahwa and R. Kumar, "Prediction of heart disease using hybrid technique for selecting features", in 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON), Mathura, India, 2017, pp. 500-504.
[14] J. Singh, A. Kamra and H. Singh, "Prediction of heart diseases using associative classification", in 2016 5th International Conference on Wireless Networks and Embedded Systems (WECON), Rajpura, India, 2016, pp. 1-7.
[15] S. Pouriyeh, S. Vahid, G. Sannino, G. De Pietro, H. Arabnia and J. Gutierrez, "A comprehensive investigation and comparison of Machine Learning Techniques in the domain of heart disease", in 2017 IEEE Symposium on Computers and Communications (ISCC), Heraklion, Greece, 2017, pp. 204-207.
[16] J. Han, M. Kamber and J. Pei, Data Mining: Concepts and Techniques, 3rd ed. Amsterdam: Morgan Kaufmann, 2012, pp. 330-423.
[17] T. Karayilan and Ö. Kılıç, "Prediction of heart disease using neural network", in 2017 International Conference on Computer Science and Engineering (UBMK), Antalya, Turkey, 2017, pp. 719-723.
[18] N. Salma Banu and S. Swamy, "Prediction of heart disease at early stage using data mining and big data analytics: A survey", in 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), Mysuru, India, 2016, pp. 256-261.
[19] E. AbuKhousa and P. Campbell, "Predictive data mining to support clinical decisions: An overview of heart disease prediction systems", in 2012 International Conference on Innovations in Information Technology (IIT), Abu Dhabi, United Arab Emirates, 2012, pp. 267-272.
