Proceedings of the International Conference on Augmented Intelligence and Sustainable Systems (ICAISS-2022)

IEEE Xplore Part Number: CFP22CB2-ART; ISBN: 978-1-6654-8962-1

Prediction of Rock and Mineral from Sound Navigation and Ranging Waves using Artificial Intelligence Techniques

DOI: 10.1109/ICAISS55157.2022.10011104

Akshat Khare
School of Computing, SRM Institute of Science and Technology, Chennai, India
ak9219@srmist.edu.in

Kanchana Mani
School of Computing, SRM Institute of Science and Technology, Chennai, India
kanchanm@srmist.edu.in

Abstract—Since sound waves penetrate the sea more deeply than radar and light waves, SONAR (Sound Navigation and Ranging) is used to explore and map the ocean. In the mining industry, engineers may find SONAR an invaluable tool for visualizing the location of rocks and minerals by charting frequency signals. If an object lies within the range of a sound pulse, the pulse reflects off the target and returns an echo toward the sonar transmitter. The transceiver receives the returning signal and measures its strength, and it records the interval between the emission of the pulse and the reception of the corresponding echo, from which the distance and location of the object are estimated. Engineers then identify the object from the acoustic return. With the assistance of AI, this manual process of evaluating, organizing, and identifying the object can be bypassed, so that the object is identified directly from the recorded frequency bands. In this proposed method, PCA and t-SNE are employed to extract features. Using classification approaches such as Logistic Regression and Random Forest, accuracies of 72% and 91%, respectively, were attained. Similarly, CNN and LSTM models were also employed, yielding accuracies of about 80.77% and 99%, respectively.

Keywords—CNN, LSTM, Random Forest, Logistic Regression

I. INTRODUCTION

Much research has been done on SONAR waves and their classification, but it has not been very fruitful to the scientific community in terms of the accuracy of the AI models. This paper brings forth efficient, outperforming models by deeply analyzing the dataset and extracting insightful features and attributes to train the models. The transducer receives signals and determines their amplitude. The distance and placement of the material are determined by measuring the interval between the creation of the pulse and the reception of its matching echo. The main aim is to bypass this procedure of determining, orienting, and classifying by using Artificial Intelligence to classify the object directly from the timed frequency signals.

II. LITERATURE REVIEW

1. The authors of [1] examined the multi-dimensional Echolocation SONAR dataset and achieved an accuracy of 83.17 percent and an AUC of 0.92. The proposed design framework was compared with traditional machine learning classifiers such as SVM and Random Forest through various assessment metrics such as accuracy and sensitivity, and interesting results were discovered. A lack of precision in the deep learning techniques was one of the obstacles they had to overcome.

2. Using a chirp sonar scanner, an acoustic analyzer used to create high-resolution reflection profiles of the ocean bottom, the authors of [2] performed sediment classification and estimated the propagation characteristics of sediments. Using a D/A converter with a wide dynamic range and a transceiver with large lateral components, the energy, amplitude, and phase characteristics of the FM sound pulse can be precisely controlled. Sediment identification required high reproducibility and strong signal definition, and this estimation was provided. Further processing of the chirp sonar data contributed frequency response and dissipation patterns for identifying marine deposits and predicting their material characteristics.

3. The authors of [3] distinguished metal cylinders (mines) from rocks using sonar images and transmissions. To reduce the dimensionality of the data, the authors used a Neural Network classifier, a Functional Neuro-Fuzzy Inference Engine, and k-Nearest Neighbor, coupled with one of the most frequently used feature selection methods, Sequential Forward Selection (SFS). The resulting accuracies were 76.92%, 86.5%, 89.42%, 81.73%, and 83.65%. Low precision and model refinement posed complications for the authors.

978-1-6654-8962-1/22/$31.00 ©2022 IEEE 140



4. The authors of [4] investigated a densely spatial SONAR database, attaining an accuracy of 83.17 percent and an AUC of 0.92. With arbitrary frequency assessment, the outcomes were further streamlined, offering the possibility of reaching 90% accuracy. The performance of the proposed strategy was contrasted with that of conventional algorithms such as SVM and Random Forest using evaluation metrics such as precision and influence, and dependable outcomes were found. The authors implemented deep learning models with improved precision.

5. The authors of [5] employed Random Forest to estimate the rock type at the Erdenet Copper Mine, Mongolia. A database generated from exploratory drill holes was used to create Random Forest models and estimate the type of material based on location. Standards for data exploration and edition assessment were established to preserve usability in practical systems. Random Forest operated effectively, with a total success rate of 89% in scenarios where the rock type was predicted adjacent to regions where data was readily available (such as blocks that had been blasted). In the instance where the rock type was predicted for two different slabs (i.e., 30 metres below the regarded positions), the total success rate reached 86%. When a software prototype for investigation was tested, performance was unfavorable, with a total success rate of 59%.

6. Using sonar frequencies, the authors of [6] devised a technique for forecasting undersea mines and rocks. Acoustic returns were collected to capture the varied vibrations of underwater materials from sixty distinct points of view. The researchers ranked three boolean classification models by their precision, and forecasting algorithms were then designed to estimate the mine and rock categories. The estimation outcomes were produced with Python and supervised machine learning classification techniques.

7. The authors of [7] implemented two methods, HSHFSCC and High Strength Self Compacting Concrete (HSSCC), using different ML models to draw information and segregate it into training, test, and validation datasets. They then implemented 19 different ML regression models along with an ANN to predict the flexural strength of HSHFSCC and HSSCC. For scoring they used 5-fold cross validation and root mean square error via MATLAB's ML and DL toolkits.

8. The experimenters of [8] introduced a novel process called "t-SNE" for plotting high-dimensional data by assigning each data point a position on a 2D or 3D map. The approach is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is easier to optimize, and it delivers significantly improved visualizations by reducing the tendency to crowd points in the center of the map. t-SNE is advantageous over existing methodologies: it produces a single map that reveals structure at a broad range of scales. This is particularly useful for high-dimensional data that lie on several distinct but related low-dimensional manifolds, such as sets of photographs.

9. PCA [9] is a multivariate statistical technique that analyzes a dataset in which the observations are described by a collection of inter-correlated quantitative variables. The key aim is to extract the essential and pertinent information from the statistical evidence so that it can be represented as a collection of distinct orthogonal factors.

10. The authors of [10] introduced an efficient and effective gradient-based method called long short-term memory (LSTM). LSTM can learn to bridge time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time, with computational complexity of O(1) per time step and weight. LSTM was evaluated on artificial data involving local, distributed, real-valued, and noisy pattern representations.

III. DATASET

This dataset was employed by Gorman and Sejnowski in their neural network studies of sonar signal classification. The original purpose was to train a network to discriminate sonar pulses bounced off a cylindrical metallic object from those bounced off rock. The dataset, generated together with R. Paul Gorman, is a significant contribution to Terry Sejnowski's collection of benchmarks. The mine data comprise 111 patterns created by bouncing sonar transmissions off a metal cylinder at a wide range of angles and under a broad range of conditions. The rock data consist of ninety-seven patterns obtained from rocks under similar conditions. The transmitted sonar signal is a frequency-modulated chirp, rising in frequency. The collection contains signals acquired from an assortment of aspect angles, spanning 90° for the cylinder and 180° for the rock. Every pattern is comprised of sixty numbers ranging from 0.0 to 1.0, each reflecting the energy within a particular frequency band, integrated over a particular period of time. Each record's label is the letter "R" or "M" according to whether the material is a rock or a mine, respectively. Table 1 provides attribute details for the dataset.

Table 1 Data Set Attribute Details
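The dataset just described can be loaded and label-encoded as a short sketch. The UCI distribution of this dataset is a headerless CSV with 60 numeric columns and an "R"/"M" label; the two inline rows below (truncated to 3 of the 60 bands) stand in for the real 208 instances, and the values shown are invented for illustration.

```python
# Minimal sketch of loading the Gorman & Sejnowski sonar data: parse each CSV
# row into 60 float features (energy per frequency band, 0.0-1.0) and encode
# the trailing label ("M" = mine/metal cylinder, "R" = rock) as 1/0.
import csv
import io

raw = io.StringIO(
    "0.02,0.037,0.043,M\n"   # a mine instance (truncated for illustration)
    "0.045,0.028,0.017,R\n"  # a rock instance (truncated likewise)
)

features, labels = [], []
for row in csv.reader(raw):
    features.append([float(v) for v in row[:-1]])  # numeric frequency bands
    labels.append(1 if row[-1] == "M" else 0)      # encode label as 1 (mine) / 0 (rock)

print(labels)  # [1, 0]
```

With a real copy of the file, the `io.StringIO` object would simply be replaced by an open file handle, and the resulting arrays fed to the feature-extraction step described next.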


IV. PROPOSED TECHNIQUES

We propose both machine learning and deep learning techniques. To explore deep insights of the dataset, we employed methods from both fields, from simple machine learning models such as Logistic Regression to complex deep learning models such as LSTM and CNN. To probe the dataset further and arrive at efficient classification models, we applied several model-building methods such as regularization, transfer learning, K-fold cross validation, randomized grid search for optimal hyperparameters, and data augmentation. Figure 1 shows the process workflow we followed during the technical and research-and-development phases of this paper.

Fig 1. Technical Workflow for the paper

A. Feature Extraction Techniques

The SONAR dataset has 60 features in total, which is quite a large number for machine learning to compute. We wanted to extract the important features from this dataset for our machine learning models to predict on. To extract only the important and effective features, we used dimensionality reduction algorithms such as PCA and t-SNE, which work on the principle of creating new uncorrelated variables that successively maximize variance and minimize information loss.

1) PCA

PCA [9] is one of several dimensionality reduction techniques. It takes the real data and input feature configurations as its source and optimally captures the dispersion of the actual data in order to reduce its inherent dimensionality, as shown in Figure 2. In PCA, the data are mapped onto a set of orthogonal axes, after which each axis is evaluated. For this dataset, the 60 original dimensions are reduced to 2-dimensional and 3-dimensional datasets, in an attempt to extract distinguishing traits in human-readable 2D and 3D spaces. As depicted in Fig. 3, despite lowering the dimensionality of the dataset from 60 to 2D, no distinguishing traits are discovered: the dataset is too disorganised and dispersed to separate into distinct groupings (red for Rock and blue for Mineral). To investigate further, we increased the dimension of the PCA dataset from 2D to 3D by adding one more component. Using Python's matplotlib utility, we created the 3D graph of Fig. 4 to better grasp the distinguishing characteristics. Here too, the instances were excessively disorganised and dispersed, making it challenging to separate them into distinct clusters.

Fig 2. PCA Dimensionality Reduction [14]

Fig 3. PCA 2D Visualization

Fig 4. 3D PCA

2) t-SNE

Similar to PCA, t-SNE [8] is also a method of reducing dimensionality, primarily employed for visualization of


multi-dimensional datasets, described by Equations 1 and 2. Equation 1 gives the conditional similarity between high-dimensional points x_i and x_j, and Equation 2 the corresponding similarity between their low-dimensional map points y_i and y_j (reconstructed here from [8], as the equations were lost in extraction):

p(j|i) = exp(−||x_i − x_j||² / 2σ_i²) / Σ_{k≠i} exp(−||x_i − x_k||² / 2σ_i²) ----- (1)

q(ij) = (1 + ||y_i − y_j||²)⁻¹ / Σ_{k≠l} (1 + ||y_k − y_l||²)⁻¹ ----- (2)

We examined t-SNE with various 2D and 3D feature reduction options. In the PCA-reduced data there was no detectable clustering of instances; but in the 2D t-SNE-reduced dataset shown in Figure 5, instances with identical labels are clustered relatively closely, suggesting many distinct clusters. To help understand and study why t-SNE generated more distinguishable clusters than PCA, we made the t-SNE dataset three-dimensional (Figure 6). Similar to the 3D PCA dataset, it produced mixtures of instances of both classes, making it difficult to draw a plane across the dataset separating the clusters of the two classes.

Fig 5 2D t-SNE

Fig 6 3D t-SNE

B. Machine Learning Techniques

After extracting features by reducing dimensions with the PCA and t-SNE algorithms, we proceed to implement machine learning models and feed them these extracted features.

1) Logistic Regression

Logistic Regression, the simplest and one of the most popular machine learning algorithms, is derived from statistics. The model's input values are x, and its output f(x) is either 0 or 1. To construct the model, each data point's independent variables are combined as x·w (i.e., the sum x1·w1 + x2·w2 + ...), which the sigmoid function (Figure 8) maps to a value between 0 and 1. With 0.50 as the decision threshold, a result greater than 0.5 is classified as 1; otherwise, it is classified as 0.

Fig 8 Sigmoid Function for LR

2) Random Forest Classifier

The random forest is used to tackle both classification and regression problems. The Random Forest [11] produces numerous trees, as in Figure 9, by extracting 'K' data points from the dataset at random. For each unique set of 'K' data points many predictions are possible, and the aggregate of all the trees' forecasts is taken into account for subsequent processing.

Fig 9 Random Forest Decision Trees

C. Deep Learning Techniques

1) Convolutional Neural Networks (CNN)

We used CNN [12] models with a transfer learning technique in this proposed method. The primary objective of implementing a CNN model was to extract feature insights, such as patterns in the amplitude of the SONAR wave for rock and mineral, which can be better visualized using graphs. We used pre-trained models in our work to get better results. To convert the dataset into an image dataset that can be fed to the CNN models, we converted each instance's 60 attributes (frequencies) into a graph image (Figure 10), with both axes removed to provide a clean image. As the dataset contains only 208 instances, which is too few for any deep learning model to perform well and may lead to over-fitting, we performed Data Augmentation on the image dataset to mitigate the chances of over-fitting. The images were converted into 224 x 224 shape.
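The signal-to-image conversion described above can be sketched as follows. This is a minimal reconstruction, not the authors' exact code: each 60-value instance is plotted as a bare line graph (both axes removed) and rasterized to a 224 x 224 image, as would be fed to a pre-trained CNN such as DenseNet-169. The random signal stands in for a real dataset row, and the plot styling is an assumption.

```python
# Render a 1-D SONAR instance as an axis-free 224 x 224 image array.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen rendering, no display needed
import matplotlib.pyplot as plt

def signal_to_image(signal, size=224, dpi=100):
    """Plot `signal` with no axes and return a size x size x 4 RGBA array."""
    fig = plt.figure(figsize=(size / dpi, size / dpi), dpi=dpi)
    ax = fig.add_axes([0, 0, 1, 1])  # axes region fills the whole canvas
    ax.plot(signal, color="black")
    ax.axis("off")                   # remove ticks, labels, and frame
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba()).copy()
    plt.close(fig)
    return img

rng = np.random.default_rng(0)
img = signal_to_image(rng.random(60))  # one synthetic 60-band instance
print(img.shape)  # (224, 224, 4)
```

Such arrays (after dropping the alpha channel and normalizing) would then be passed through standard image augmentation before training.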

Fig 10 Graph representation of SONAR frequencies

2) LSTM Model

We implemented an LSTM [10] model with a different dataset transformation approach, taking inspiration from the dataset transformation used for CNN classification. As shown in Figure 11, we converted each instance from 60 attributes with one label into instances of 30 attributes plus one label, with the 31st attribute serving as an additional target. A window of 30 attributes, carrying the same label together with the corresponding 31st attribute, was slid from the 1st to the 30th attribute of the original instance, creating 30 more instances per original instance and hence increasing the dataset size thirty-fold. The aim of this approach was to make the LSTM model learn the trends of the graphs of the particular labels.

Fig 11 Window slide representation for LSTM Input

As the architecture block (Figure 12) describes, the LSTM model takes input of size (batch size, 30), i.e., the 30 attributes (trend) of a particular instance. Because each original instance holds the frequency of the SONAR signal for one minute, i.e., 60 seconds, it has 60 attributes, one for each second. For the model to learn the trend of both classes, we implemented the sliding window of 30 attributes (the trend), labelled with the class and the corresponding 31st attribute. The model begins with a stack of 4 LSTM layers, the core layers of this model, which learn the trends of the classes. A Batch Normalization layer is then added for regularization, and the model splits into 2 groups of layers, each implemented as a number of (Dense, Batch Normalization) pairs. A Batch Normalization layer is added immediately before every activation layer; that is, if the inputs to an activation function are zero-centered with a variance of one, the activations themselves achieve the same property. The first group focuses on classification of the instance, while the other group focuses on correctly predicting the instance's 31st attribute. The two groups output their own predictions, representing the class of the instance and the 31st attribute respectively. During backpropagation the LSTM layers are influenced by both groups, so the model learns both classification and trend regression.

Fig 12 LSTM Model Flowchart

V. RESULTS AND DISCUSSION

A. K-Fold Cross Validation

From the experimental results, we found the deep learning techniques to be more efficient and optimized than the ML techniques. To explore the dataset better while training the models, we applied the K-fold cross validation [13] method, which splits the dataset into different groups based on the value of k. The proposed techniques were further tested using the Stratified K-fold scoring method, which distributes the dataset so that every label has an even proportion of instances; we split the dataset into 5 folds for this method of training.

Scoring Parameters

Furthermore, the models were evaluated using the confusion matrix as a scoring parameter. Classification accuracy is calculated as the number of correct predictions out of the total number of predictions, as shown in equation 3.

Accuracy = (TP + TN) / (TP + FP + TN + FN) ----- (3)

Recall computes how many of the total positives were predicted correctly, as shown in equation 4.

Recall = TP / (TP + FN) ----- (4)

Precision counts the percentage of everything predicted as positive that is actually correct, as shown in equation 5.

Precision = TP / (TP + FP) ----- (5)

The F1 score is calculated from the precision and recall values, as shown in equation 6.

F1-Score = 2 * Precision * Recall / (Precision + Recall) ----- (6)
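Equations (3)-(6) can be written out directly from the four confusion-matrix counts. The counts in the usage line are invented for illustration, not taken from the paper's experiments.

```python
# The scoring formulas of equations (3)-(6), from confusion-matrix counts
# (TP = true positives, FP = false positives, TN = true negatives,
#  FN = false negatives).
def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)   # equation (3)

def recall(tp, fn):
    return tp / (tp + fn)                    # equation (4)

def precision(tp, fp):
    return tp / (tp + fp)                    # equation (5)

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)               # equation (6)

# e.g. 18 TP, 2 FP, 19 TN, 3 FN out of 42 test instances
print(round(accuracy(18, 2, 19, 3), 3))  # 0.881
```

In practice these per-class values would be computed on each held-out fold of the stratified 5-fold split and averaged.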


Using these metrics, the machine learning models were scored. The best performer among them was the Random Forest Classifier, with an accuracy of 81% and F1 scores of 83% and 79% for the Mine and Rock labels respectively. Logistic Regression scored an accuracy of 72%, with F1 scores of 78% and 62% for Mine and Rock respectively.

B. Feature Importance

After scoring the models, functions were implemented to extract the important features on which the best model, i.e., the Random Forest Classifier, based its predictions; only the top 10 features were presented, ranked according to their importance, as shown in Figure 13. How the features were ranked remains a black box to us, but these attributes were the deciding features for the model's predictions. Furthermore, the important features provided by the model were correlated with the Class/Label (Figure 14) to find the correlations of these important features with the label class.

Fig 13 Feature Importance with respect to Models

Fig 14 Feature Importance with respect to Class

C. Model comparison

Following the successful implementation of the CNN and LSTM models, we trained them for 500 and 200 epochs respectively, in an efficient and optimized way to bring out the best accuracy in the models. We split the dataset in the same ratio as for the machine learning models, i.e., 80:20. After training, we found the LSTM model (Figures 15.a and 15.b) to be the better performer, if not the best, with an accuracy of 99.0%, whereas the CNN (pre-trained with DenseNet-169) resulted in an accuracy of 80.77% (Figures 16.a and 16.b). The deep learning models performed best during training because of the neural networks' back propagation methods, and they are best suited for classification. Overall, the LSTM model performed better than all the other literature models, with an accuracy of 99.0%; an overall comparison of all the models' performance is given in Figure 17. After analysing and comparing our best models with other literature and works, we found that our custom LSTM model performed better than any literature we compared with, making it the best model among them, as shown in Figure 17 and Table 3. As the accuracy and loss vs. epochs graphs show, validation accuracy starts lower but settles to a score similar to training accuracy as the epochs increase; similar behaviour can be seen for validation loss and training loss for all the models.

Fig 15.a LSTM Loss vs Epochs

Fig 15.b LSTM Accuracy vs Epochs

Fig 16.a Densenet169_CNN Loss vs Epochs


Fig 16.b DenseNet169_CNN Accuracy vs Epochs

Table 2. Performance Analysis of different proposed models

Models | Accuracy | Recall | Precision | F1 Score
Logistic Regression | 72% | 71% | 72% | 76%
Random Forest | 81% | 81% | 81% | 81%
CNN-DenseNet169 | 80% | 80% | 80% | 80%
LSTM | 99% | 99% | 99% | 99%

Fig 17. Performance analysis of different models

Table 2 shows the performance metric values for the different proposed algorithms. From Table 2 it is clear that the LSTM model yields better accuracy than the other models. Table 3 shows a performance analysis of the proposed work against other existing models.

VI. CONCLUSION

This proposed work predicts rocks and minerals from SONAR waves using Artificial Intelligence techniques. In this approach, PCA and t-SNE are employed to extract features. Several classification approaches, such as Logistic Regression, Random Forest, LSTM, and CNN, were used in this work. Among them, LSTM yielded the best accuracy of 99.0%. From Table 3 it is clear that our proposed work results in better accuracy when compared with other literature works.

Table 3. Model Performance in comparison with other literature models

Author & Reference No | Model Used | Accuracy Obtained
Singh [1] | Random Forest | 82%
Baida'a [3] | k-Nearest Neighbor | 89.42%
Udaya [4] | SVM | 83.17%
Sarantsatsral [5] | Random Forest | 89%
Yaramasu [6] | SVM | 80%
Proposed Method | LSTM | 99.0%

REFERENCES
[1] Singh, Harvinder & Hooda, Nishtha. (2019). Prediction of Underwater Surface Target through SONAR: A Case Study of Machine Learning.
[2] Schock, S.G., LeBlanc, L.A., and Satchidanarda Parda. "Sediment Classification Using the Chirp Sonar." Paper presented at the Offshore Technology Conference, Houston, Texas, May 1992. doi: https://doi.org/10.4043/6851-MS.
[3] Baida'a Abdul-Qader. "Techniques for classification sonar rocks vs mines." IJSER Journal Publication, 2016.
[4] M. Udaya Kiran, Vaibhav Tiwari, D Venkatesh, and G Gowtham Tej. "Machine Learning Algorithms to Prediction of Underwater Surface Target through SONAR." Science, Technology and Development, Volume IX, Issue X, October 2020.
[5] Sarantsatsral, N.; Ganguli, R.; Pothina, R.; Tumen-Ayush, B. A Case Study of Rock Type Prediction Using Random Forests: Erdenet Copper Mine, Mongolia. Minerals 2021, 11, 1059. https://doi.org/10.3390/min11101059
[6] Sri Ramya Yaramasu, Uppada Sai Gayatri, Vadlamani Manjusha, Vaishna C Bhanu, and Koda Indu. "Underwater Mine & Rock Prediction by Evaluation of Machine Learning Algorithms." International Research Journal of Modernization in Engineering Technology and Science, Volume 04, Issue 04, April 2022.
[7] Kumar, Boppana Narendra, and Pariyada Pradeep Kumar. "Prediction on Flexural strength of High Strength Hybrid Fiber Self Compacting Concrete by using Artificial Intelligence." Journal of Artificial Intelligence 4, no. 1 (2022): 1-16.


[8] Laurens van der Maaten and Geoffrey Hinton (2008). Visualizing Data using t-SNE. Journal of Machine Learning Research, 9(86), 2579-2605.
[9] Mishra, S., Sarkar, U., Taraphder, S., Datta, S., Swain, D., Saikhom, R., Panda, S., & Laishram, M. (2017). Principal Component Analysis. International Journal of Livestock Research, 1.
[10] Hochreiter, S., & Schmidhuber, J. (1997). Long Short-term Memory. Neural Computation, 9, 1735-1780.
[11] Ho, Tin Kam. (1995). Random decision forests. Proceedings of the International Conference on Document Analysis and Recognition, vol. 1, pp. 278-282. doi: 10.1109/ICDAR.1995.598994.
[12] K. O'Shea and R. Nash, An Introduction to Convolutional Neural Networks. arXiv, 2015. doi: 10.48550/ARXIV.1511.08458.
[13] Stone, M. "Cross-Validatory Choice and Assessment of Statistical Predictions." Journal of the Royal Statistical Society, Series B (Methodological), vol. 36, no. 2, 1974, pp. 111-147. JSTOR, http://www.jstor.org/stable/2984809. Accessed 10 Nov. 2022.
[14] https://blog.bioturing.com/wp-content/uploads/2018/11/Blog_pca_6b.png
