
Cluster Comput

https://doi.org/10.1007/s10586-017-1323-4

Intelligent face recognition and navigation system using neural learning for smart security in Internet of Things

Priyan Malarvizhi Kumar1 · Ushadevi Gandhi1 · R. Varatharajan2 · Gunasekaran Manogaran1 · Jidhesh R.1 · Thanjai Vadivel3

Received: 4 July 2017 / Revised: 15 September 2017 / Accepted: 1 November 2017


© Springer Science+Business Media, LLC, part of Springer Nature 2017

Abstract  Most advancements are now carried out by interconnecting physical devices with computers; this is what is known as the Internet of Things (IoT). The major problems faced by blind people fall in the category of navigating through indoor and outdoor environments consisting of various obstacles, and of recognizing the person in front of them. Identification of objects or persons with only perceptive and audio information is difficult. An intelligent, portable, inexpensive, self-contained navigation and face recognition system is in high demand for blind people. Such a system helps blind people navigate with the help of a Smartphone, a global positioning system (GPS) and a device equipped with ultrasonic sensors. Face recognition can be done using neural learning techniques with feature extraction and training modules. The images of friends and relatives are stored in the database on the user's Smartphone. Whenever a person comes in front of the blind user, the application, with the help of a neural network, gives voice aid to the user. Thus this system can replace the conventional, imprecise use of guide dogs as well as white sticks to support the navigation and face recognition process for people with impaired vision. In this paper, we propose a novel image recognition and navigation system which provides precise and quick messages in the form of audio to visually challenged people so that they can navigate easily. The performance of the proposed method is comparatively analyzed with the help of ROC analysis.

Keywords  Global positioning system · Internet of Things · Face recognition · Feature extraction · Training module · Navigation · Neural learning · Neural network · Smartphone · Ultrasonic sensor

Corresponding author: R. Varatharajan (varathu21@yahoo.com)

1 VIT University, Vellore, India
2 Sri Ramanujar Engineering College, Chennai, India
3 VelTech University, Chennai, India

1 Introduction

According to surveys, India currently has the world's largest number of visually impaired individuals [1]; hence, in India visual impairment is a most pressing issue. Because of the huge population in India there is a considerable amount of traffic on the streets, and these days nobody has time to converse with others, particularly in cities. Due to this heavy traffic, blind people face many difficulties while moving around without external help, so the assistance of white canes or of controlled guides is required for obstacle avoidance. However, guide dogs can be of only limited help in finding the route to a remote area. An intelligent, portable, inexpensive, self-contained navigation system is therefore required. A few electronic gadgets are already available for giving directions to a remote location; however, these tend to be costly or make use of a Braille interface. The decreasing cost of GPS, combined with recent developments in the availability of audio recognition services, offers a chance to build a less expensive application for obstacle identification. The major requirement of this framework is to satisfy the user's route needs while guaranteeing low cost and portability with a Smartphone.

The other issue for blind people is that they receive only perceptive and audio information. They cannot identify people from these two senses alone. Hence, there is a need for a third sense in the form of an application which recognizes the people in front of them. The advances in machine learning and artificial intelligence [2] can produce an application which can even supplant the human eye.


Nowadays, neural networks provide solutions for many human-related problems; a neural network learning algorithm is among the best approaches to machine learning, particularly with images [3]. Big data algorithms, IoT technologies and machine learning methods are now used in various healthcare applications [4–12]. Sections 2 and 3 describe the related works and the proposed system. Section 4 presents the results and discussion. Finally, Sect. 5 concludes the work.

2 Related work

2.1 Obstacle detection systems

The intelligent navigation, detection and avoidance of objects need to be precise and efficient. Several systems used by blind people for obstacle avoidance are made up of combined hardware and software devices such as RFID tags, GPS, sonar technologies, etc.

2.1.1 White cane and guide dogs

The white cane/stick is the most popular [13,14] and least complex instrument for identifying obstructions because of its low cost and portability. It enables blind people to adequately scan the zone in front of them and detect obstacles such as openings, steps, walls, uneven surfaces, and so forth. However, it can only be used to detect obstacles up to knee level, and its detection range is limited to one to two feet. Certain obstacles (e.g. projecting window panes, raised platforms, a moving vehicle, and level bars) cannot be identified until they are dangerously near the individual. Even guide dogs, although exceptionally skilled at guiding these people, cannot recognize possible obstacles at head level. A guide dog's service span is generally six years, and it requires regular up-keeping expenses and lifestyle changes. The advantages of this approach are low cost and portability, whereas the disadvantages are that it only recognizes obstructions at knee level and that certain obstacles cannot be recognized effectively.

2.1.2 Indoor navigation framework with sonar technology

Roshni et al. [15] determine the user's position in a building and provide routes by means of sound messages triggered by pressing keys on a portable unit. The system uses sonar technology to recognize the position of the user by mounting ultrasonic units on the ceiling at regular intervals. This framework is compact, simple to operate and is not influenced by environmental changes. However, it is restricted to indoor routes only, since it needs a detailed map of the building. The advantages of the system are that it is portable and simple to operate, whereas the disadvantage is that it is applicable only to indoor routes.

2.1.3 A talking location finding system with RFID and GPS

Nandhini et al. proposed a talking-assistance type of location finding framework [16] for both indoor and outdoor environments. The framework comprises a walking stick along with GSM to transfer a message to an authorized person at the time of a mishap, sonar sensors, and an RF transmitter and receiver. For indoor localization RFID [17] is used, and for outdoor localization a GPS framework is utilized [18]. In this way, the GPS framework used as part of the walking stick reduces the cost of installing numerous RFID tags outdoors to identify the place [19–21]. The advantages of the system are that it can be used for both outdoor and indoor roaming, it is simple to operate, and it decreases the usage and cost of RFID tags for outdoor environments. The disadvantage of this system is the usage of costlier RFID for indoor environments [22–24].

2.1.4 Audio assistance for blind using visual markers for indoor route

Simoes et al. proposed an indoor-route wearable framework [25] based on visual marker recognition and ultrasonic obstacle perception, used as an audio aid for visually impaired individuals. In this model, visual markers identify points in the environment; furthermore, this location information is enhanced with data obtained progressively from different sensors. A map records these points and shows the distance and bearing between nearby points, building a virtual path. The visually impaired users also wear glasses fitted with sensors such as ultrasonic, gyroscope, RGB camera and accelerometer, thus improving the amount and quality of the available data [26]. The user navigates freely in the environment by identifying the location markers. Based on the starting-point or current location data and on the value of the gyro sensor, the way to the next marker (target) is computed. A few ultrasonic sensors are used to enlarge the view of the environment and also to avoid obstacles. The advantage of the system is that the virtual path is given as audio assistance. The disadvantages of this system are that it is constrained to indoor routes only and requires the extra use of an RGB camera.

2.2 Face recognition systems

Identifying a person or object in front is a highly demanded capability for blind people. Although there are certain applications which satisfy this need, an application which is user friendly and gives precise results is required [27–29].

2.2.1 Productive face recognition strategy using genetic algorithm and RBF kernel

Verma et al. [30] proposed a face recognition framework which handles a higher amount and dimensionality of image information. Whenever such a framework is considered at a global level, a variety of issues frequently arises. This system uses a Radial Basis Function (RBF) kernel for the management of small training sets of high-dimensional pictures, with a genetic-algorithm-based weight-optimization strategy. The framework is viable for huge datasets because it utilizes the genetic algorithm, which gives quick learning and trains the RBF neural network adequately; it diminishes the searching effort and essentially decreases the recognition time. This framework is very effective at public places with crowds and gives quick and precise recognition. The advantages of the system are less searching effort and reduced recognition time. The disadvantage of this system is the complex RBF based function.

2.2.2 Highly authenticated navigation for the blind using the SIFT algorithm

Kumar et al. [31] describe the utilization of the SIFT algorithm with the goal of offering support to the blind. It is a great challenge for blind people to work alone and to fulfill their necessities. The primary things that a blind person needs are object identification and face recognition. This system fulfills these needs by utilizing the SIFT algorithm [32], which is based on image and object retrieval techniques, and uses the image processing tools in MATLAB to apply the SIFT computation to the pictures. It enhances the quality of life for blind people by re-establishing their capacity to self-navigate. The procedure of the SIFT algorithm includes building a scale space, difference-of-Gaussian estimation, discovering key points, discarding bad key points, assigning an orientation to the key points, and generating SIFT features. It also enhances mobility without external help by means of self-navigation and person identification. The advantage of the system is that it gives highly authenticated data to blind people. The disadvantages of this system are the complex algorithm and the use of VGA.

3 Proposed methodology

This section describes how obstacles are detected using ultrasonic sensors and how the application finds the person in front of the user using neural learning. The main aim is to propose a cheap, user-friendly application which provides precise and quick messages in the form of audio to visually challenged people so that they can navigate easily. The system architecture is depicted in Fig. 1. The application of the proposed architecture consists of two main modules: the Intelligent Navigation Module and the Face Recognition Module.

3.1 Intelligent navigation module

Blind Guide is an innovation in route frameworks for visually disabled individuals. This software is implemented on an Android based smartphone combined with ultrasonic sensors. The ultrasonic sensors are attached externally using microcontroller based hardware and communicate with the Smartphone using Bluetooth. Blind Guide gives the information and directions needed to move safely from one place to another using the Android based smartphone. Text to speech (TTS) [33,34] is utilized as part of the route framework to give directions through voice to visually impaired individuals. The Intelligent Navigation System is subdivided into two modules: the Smartphone module and the Ultrasonic Accessory module [35–37].

Additionally, the application utilizes the Google Maps API to provide map information. The Android Smartphone comprises the following sensors: accelerometers, touch screen, global positioning system (GPS), compass and cameras. Here the visually impaired user speaks the destination, and the application then helps the user to walk to the closest transport stop or junction. For the purpose of discovering the current position of the user and the closest intersection, the application utilizes GPS and GIS [38]. For navigation, the application utilizes the compass in the Smartphone and the ultrasonic sensors. The blind user can walk freely to the destination using voice commands provided by the smartphone.

3.1.1 Smartphone module

The Smartphone carries the application installed on the Android system. This plug-and-play method for blind users has the ability to improve user mobility as well as object discovery and avoidance. The application uses the following resources of the smartphone hardware: GPS sensor, compass, microphone, speaker or headphone, Bluetooth, touch screen and other interfaces.

3.1.2 Ultrasonic accessory module

Ultrasonic sensors emit ultrasonic waves into the air and recognize the waves reflected from objects. These ultrasonic sensors are connected to an Arduino microcontroller, and the microcontroller is connected to a Bluetooth module [39]. The data from the ultrasonic sensors are processed and converted to speech information (TTS) in the microcontroller [40]. This speech information is then transmitted to the smartphone application by means of Bluetooth. The application is implemented using Android Studio and the Arduino SDK.
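As a concrete illustration of the accessory-side processing just described, the sketch below converts a trigger-to-echo duration into a distance and flags an obstacle when the distance drops below a threshold; this corresponds to Algorithm 1 below. It is a minimal plain-Java sketch under stated assumptions: the class name, the 0.0343 cm/µs speed of sound and the 100 cm threshold are illustrative choices, not values taken from the published implementation (which runs on the Arduino side).

```java
/** Minimal sketch of the ultrasonic accessory's distance computation (illustrative only). */
public class UltrasonicReading {

    // Speed of sound in air at roughly 20 °C, in centimetres per microsecond (assumed value).
    private static final double SOUND_CM_PER_US = 0.0343;

    /**
     * Converts a trigger-to-echo duration (microseconds) into an obstacle distance in cm.
     * The pulse travels to the obstacle and back, so the one-way distance is half the round trip.
     */
    public static double toDistanceCm(long echoDurationUs) {
        return (echoDurationUs * SOUND_CM_PER_US) / 2.0;
    }

    /** Returns true when the measured distance falls below the chosen warning threshold. */
    public static boolean isObstacle(double distanceCm, double thresholdCm) {
        return distanceCm < thresholdCm;
    }

    public static void main(String[] args) {
        long echoUs = 2900;                        // example echo duration from one sensor
        double d = toDistanceCm(echoUs);           // about 49.7 cm
        System.out.printf("distance = %.1f cm, obstacle = %b%n",
                d, isObstacle(d, 100.0));          // 100 cm threshold is an assumption
    }
}
```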


Fig. 1 System architecture of the proposed system

Figure 2 represents the sequence diagram for the intelligent navigation system. The algorithms used for the intelligent navigation module are described below.

Algorithm 1: Ultrasonic Obstacle Detection

Step 1: Find the ultrasonic sensor duration between trigger and echo.
Step 2: Find the distance using the relation between the speed of sound and the duration between trigger and echo.
Step 3: Detect an obstacle if the distance is less than the threshold distance.

Algorithm 2: Ultrasonic Obstacle Avoidance

Step 1: Find both distances dynamically.
Step 2: If the LEFT ultrasonic sensor detects an obstacle and the RIGHT ultrasonic sensor does not, then speak "TURN RIGHT SIDE" via Bluetooth.
Step 3: If the RIGHT ultrasonic sensor detects an obstacle and the LEFT ultrasonic sensor does not, then speak "TURN LEFT SIDE" via Bluetooth.

Algorithm 3: Direction using compass and bearing

Step 1: Find the angle between north and the "source to destination" direction using Android code, convert it to degrees between 0 and 360, and call this angle BEAR.
Step 2: Find the angle between north and the "device heading" using the Android compass, convert it to degrees between 0 and 360, and call this angle HEADING.
Step 3: If HEADING is less than BEAR, then the destination is in the device's right direction.
Step 4: If HEADING is greater than BEAR, then the destination is in the device's left direction.
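The following sketch combines the decision rules of Algorithms 2 and 3 in plain Java. The sensor distances and the BEAR/HEADING angles are assumed to be supplied by the caller (in the real application they come from the ultrasonic accessory, GPS and the Android compass), and the returned strings merely mirror the step descriptions above; it is an illustrative approximation, not the paper's code.

```java
/** Illustrative sketch of the avoidance and direction rules (Algorithms 2 and 3). */
public class NavigationRules {

    /** Algorithm 2: decide which way to turn from the two sensor distances (cm). */
    public static String avoidance(double leftCm, double rightCm, double thresholdCm) {
        boolean leftBlocked = leftCm < thresholdCm;
        boolean rightBlocked = rightCm < thresholdCm;
        if (leftBlocked && !rightBlocked) return "TURN RIGHT SIDE";
        if (rightBlocked && !leftBlocked) return "TURN LEFT SIDE";
        return "";  // both clear (or both blocked): no single-turn instruction in the algorithm
    }

    /** Algorithm 3: compare the bearing to the destination with the device heading (degrees, 0-360). */
    public static String direction(double bear, double heading) {
        return (heading < bear) ? "DESTINATION ON RIGHT" : "DESTINATION ON LEFT";
    }

    public static void main(String[] args) {
        System.out.println(avoidance(60.0, 180.0, 100.0)); // left blocked -> TURN RIGHT SIDE
        System.out.println(direction(75.0, 30.0));         // heading < bearing -> right
    }
}
```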


Fig. 2 Sequence diagram of intelligent navigation system

Fig. 3 Person's face as input to the system

3.2 Face recognition module

Blind Perceptron is face recognition software which works with the help of an Android based phone. The application utilizes an artificial neural network [41] in order to recognize the image of a person based on the past understanding of the learning mechanism [42]. The application is intended to provide dynamic communication with the camera on the phone to detect the face of the individual, extract the features of the face [43] and store the information in the database. This database is utilized for making a training set for each individual, which can in turn be utilized for training. The input of the system is the face of the person who is in front of the blind user; this face is analyzed by the trained neural network. Figure 3 represents an individual face as input to the system, and Fig. 4 represents the sequence diagram of the face recognition system. The functionalities of the face recognition system are as follows:

• It can help a blind person to identify persons in front of them using the Smartphone camera
• It provides a voice enabled interface to the blind person with a voice synthesizer and a voice recognition system
• It detects the face of the person in the captured image
• It extracts the facial features (for example, the distance between the centres of the two eyes) from the face using the OpenCV library
• During the training process a training set is created dynamically for each person
• In the recognition activity it recognizes the person based on the trained images
• If the image is recognized, proper instructions are given through voice commands


Fig. 4 Sequence diagram of face recognition system

The face recognition system is subdivided into three modules.

3.2.1 Feature extraction module

The person's face is captured from the real-time environment (Fig. 5). Then, features of that face are extracted by measuring the facial component values using the OpenCV library [44]. In the image-processing stage, the Euclidean distance equation is used to calculate the distance between two facial points of interest. In mathematics, the Euclidean distance is the straight-line distance between two points in Euclidean space. In the Euclidean plane, if p = (p1, p2) and q = (q1, q2), then the distance is calculated by Eq. (1) [45]:

d(p, q) = \sqrt{(q_1 - p_1)^2 + (q_2 - p_2)^2}    (1)
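As a worked example of Eq. (1), two hypothetical eye-centre landmarks p = (112, 85) and q = (172, 88) give d(p, q) = √(60² + 3²) ≈ 60.07 pixels. The Java sketch below applies the same formula to a small list of landmark points to build a toy feature vector; the landmark coordinates and the choice of consecutive-pair distances are assumptions made for illustration (in the application the points would come from the OpenCV-based detector), so this is a sketch rather than the paper's implementation.

```java
import java.util.Arrays;

/** Sketch of building a feature vector from pairwise Euclidean distances between facial landmarks. */
public class FaceFeatures {

    /** Eq. (1): straight-line distance between two landmark points (x, y). */
    public static double euclidean(double[] p, double[] q) {
        double dx = q[0] - p[0];
        double dy = q[1] - p[1];
        return Math.sqrt(dx * dx + dy * dy);
    }

    /** Distances between consecutive landmark pairs, used here as a toy feature vector. */
    public static double[] featureVector(double[][] landmarks) {
        double[] features = new double[landmarks.length - 1];
        for (int i = 0; i + 1 < landmarks.length; i++) {
            features[i] = euclidean(landmarks[i], landmarks[i + 1]);
        }
        return features;
    }

    public static void main(String[] args) {
        // Hypothetical landmark coordinates in pixels: left eye, right eye, nose tip.
        double[][] landmarks = { {112, 85}, {172, 88}, {142, 130} };
        System.out.println(Arrays.toString(featureVector(landmarks)));
    }
}
```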
3.2.2 Training neural network module

The training of the neural network is done in an ongoing way. The training stage consists of the ongoing capture of pictures of:

• different emotional faces of the same person [46]
• and one normal face of the same person

These pictures are then converted to standardized/normalized values, which are then fed into the neural network.

3.2.3 Comparison and recognition module

The features predicted by the trained neural network are compared with the features stored in the user's database. If an individual with matching features exists in the user's database, then the user is informed about the individual through voice information. Otherwise, the user is asked whether he wants to add that individual to the database.
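A hedged sketch of the two steps described above, normalization of the extracted values (Sect. 3.2.2) and comparison against the stored users (Sect. 3.2.3), is given below. The min–max normalization, the Euclidean comparison and the rejection threshold of 0.25 are illustrative assumptions; the paper does not specify these details.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of feature normalization (Sect. 3.2.2) and comparison against stored users (Sect. 3.2.3). */
public class FeatureMatcher {

    /** Min-max normalization of a feature vector into the range [0, 1]. */
    public static double[] normalize(double[] v) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double x : v) { min = Math.min(min, x); max = Math.max(max, x); }
        double range = (max - min) == 0 ? 1 : (max - min);
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++) out[i] = (v[i] - min) / range;
        return out;
    }

    /** Returns the stored person whose features are closest, or null if nothing is close enough. */
    public static String closestMatch(double[] probe, Map<String, double[]> database, double threshold) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : database.entrySet()) {
            double sum = 0;
            for (int i = 0; i < probe.length; i++) {
                double d = probe[i] - e.getValue()[i];
                sum += d * d;
            }
            double dist = Math.sqrt(sum);
            if (dist < bestDist) { bestDist = dist; best = e.getKey(); }
        }
        return bestDist <= threshold ? best : null;   // null -> ask whether to enrol the new person
    }

    public static void main(String[] args) {
        Map<String, double[]> db = new LinkedHashMap<>();
        db.put("relative_1", normalize(new double[] {60.1, 48.9, 52.4}));
        db.put("friend_2",   normalize(new double[] {71.3, 40.2, 66.0}));
        double[] probe = normalize(new double[] {61.0, 48.0, 53.1});
        System.out.println(closestMatch(probe, db, 0.25));  // expected: relative_1
    }
}
```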


Fig. 5 a Facial expressions (training phase). b Expected output (training phase)

4 Results and discussion

The proposed application shows how the navigation and face recognition system can assist blind users. This section describes the components used in the system and provides the results and feedback from the users of the proposed application. The major components used in the application are the Ultrasonic Accessory module, the Android Smartphone module, the Feature Extraction module, the Neural Network training module, and the Comparison and Recognition module. The application also interacts with some other components of the operating system, including Google TalkBack, the Internet connection, and the camera feed. Google TalkBack is a service provided by Google for delivering audio information to blind users, which helps them operate the application. A proper internet connection is required, as GPS and other online services are needed. The person's face is captured and fed into the feature extraction phase to identify the person.


Table 1 Results and feedback from the users

User          | No. of faces captured | No. of correctly identified faces | Accuracy (%) | Found helpful? | Feedback
Gopalan N     | 15 | 14 | 93  | Yes | Liked the sound interface
John Jacob    | 29 | 23 | 80  | No  | Did not like to hold the smartphone all the time
Kalyani M     | 20 | 17 | 85  | Yes | Finds it easy to navigate through indoors
Umesh Kumar   | 15 | 14 | 93  | No  | Finds it difficult to use the smartphone device
Rajeev Gupta  | 6  | 6  | 100 | Yes | Appreciated the face recognition system
Praveen Kumar | 15 | 12 | 80  | Yes | Liked the user interface of the obstacle detection system
Joseph P      | 9  | 8  | 89  | Yes | No comments
Radhika L     | 4  | 4  | 100 | Yes | Appreciated

Table 2 Comparison of the proposed application with existing systems

Approach | Navigation | Face recognition | Indoor/outdoor | Obstacle detection accuracy | Face recognition accuracy
White cane and guide dogs | Yes | No | Both | 50% | –
Indoor navigation framework with sonar technology | Yes | No | Indoor | 90% | –
A talking location finding system with RFID and GPS | Yes | No | Both | 92% | –
Audio assistance for blind using visual markers for indoor route | Yes | No | Indoor | 93% | –
Productive face recognition strategy using genetic algorithm and RBF kernel | No | Yes | – | – | 90%
Highly authenticated navigation for blinds using SIFT algorithm | No | Yes | – | – | 85%
Intelligent navigation and face recognition system using neural learning for blind people (proposed) | Yes | Yes | Both | 95% | 90%

The application is analyzed in real time with several people facing visual impairment, and the feedback from them is noted in Table 1.

The application is tested with eight users and the results are analyzed. From the table it is noted that the application shows a good accuracy rate of 90%. As some of the users are not familiar with smartphones, they find some difficulties in using the application. Among the users, 75% find the system to be useful. For persons 5 and 8, the results show that the accuracy rate is high; this is because the user is looking directly at the camera without any deflection in the expressions. For persons 2 and 6, the accuracy rate is relatively low, because of the movement of the face with respect to the camera's capturing area. The proposed application is compared and analyzed against some of the existing systems using certain parameters, as shown in Table 2. It is noted that the proposed application provides 95% obstacle detection and 90% face recognition accuracy rates with navigation support for both outdoor and indoor environments. Thus, the proposed application provides a higher accuracy rate for both obstacle detection and face recognition without compromising the portable, inexpensive, self-contained characteristics of the system. Figures 6, 7, 8 and 9 represent the ROC curves for the various image recognition methods.
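For readers unfamiliar with how the curves in Figs. 6–9 are obtained, the sketch below shows one standard way to compute ROC points: sweep a decision threshold over recognition scores and record the false-positive and true-positive rates at each threshold. The scores, labels and thresholds are made-up illustrative values, not data from this study.

```java
import java.util.Arrays;

/** Sketch of computing ROC points (FPR, TPR) from recognition scores and ground-truth labels. */
public class RocSketch {

    /** For each threshold, a prediction is counted as positive when score >= threshold. */
    public static double[][] rocPoints(double[] scores, boolean[] positives, double[] thresholds) {
        int totalPos = 0, totalNeg = 0;
        for (boolean p : positives) { if (p) totalPos++; else totalNeg++; }
        double[][] points = new double[thresholds.length][2];
        for (int t = 0; t < thresholds.length; t++) {
            int tp = 0, fp = 0;
            for (int i = 0; i < scores.length; i++) {
                if (scores[i] >= thresholds[t]) {
                    if (positives[i]) tp++; else fp++;
                }
            }
            points[t][0] = (double) fp / totalNeg;   // false-positive rate
            points[t][1] = (double) tp / totalPos;   // true-positive rate
        }
        return points;
    }

    public static void main(String[] args) {
        double[] scores     = {0.95, 0.90, 0.70, 0.60, 0.40, 0.20};
        boolean[] positive  = {true, true, false, true, false, false};
        double[] thresholds = {0.1, 0.5, 0.8};
        System.out.println(Arrays.deepToString(rocPoints(scores, positive, thresholds)));
    }
}
```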


Fig. 6 ROC curve for genetic algorithm with RBF kernel

Fig. 7 ROC curve for SIFT algorithm

5 Conclusion

This paper proposed a system for blind people to navigate freely in the environment by avoiding obstacles, and it also provides a method to identify the person in front of them. The highlight of the proposed system is that an intelligent, portable, inexpensive, self-contained navigation and face recognition system is implemented. The system is applicable to both outdoor and indoor environments. The analysis of the performance of the implemented system shows that 75% of the blind participants found the system helpful, and that it provided an accuracy of 90% for face recognition and 95% for obstacle detection. However, studies can be extended to detect animals, vehicles and all abiotic and biotic objects in the environment with further advancements in neural learning techniques. Future work will use IoT devices for recognizing facial reactions. The limitation of this work is that it focuses only on static face recognition.


Fig. 8 ROC curve for proposed neural learning method

Fig. 9 Comparison of ROC analysis

References

1. WHO: Visual impairment and blindness. http://www.webcitation.org/6YfcCRh9L (August 2014)
2. Muharram, A.A., Noaman, K.M., Alqubati, I.A.: Neural networks and machine learning for pattern recognition. Int. J. Comput. Appl. 122(12), 0975 (2015)
3. Xinhua, L., Qian, Y.: Face recognition based on deep neural network. Int. J. Signal Process. Imag. Process. Pattern Recognit. 8(10), 29–38 (2015)
4. Satonkar, S.S., Pathak, V.M., Khanale, P.B.: Face recognition using principal component analysis and artificial neural network of facial images datasets in soft computing. Int. J. Emerg. Trends Technol. Comput. Sci. (IJETTCS) 4(4) (July–August 2015)
5. Oravec, M.: Feature extraction and classification by machine learning methods for biometric recognition of face and iris. In: 56th International Symposium ELMAR, pp. 10–12 (September 2014)
6. Chen, Z., Lowry, S., Jacobson, A., Ge, Z., Milford, M.: Distance metric learning for feature-agnostic place recognition. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept 28–Oct 2, pp. 2556–2563 (2015)
7. Selvarasu, N., Nachiappan, A., Nandhitha, N.M.: Euclidean distance based color image segmentation of abnormality detection from pseudo color thermographs. Int. J. Comput. Theory Eng. 2(4) (August 2010)
8. Thota, C., Sundarasekar, R., Manogaran, G., Varatharajan, R., Priyan, M.K.: Centralized fog computing security platform for IoT and Cloud in healthcare system. In: Exploring the Convergence of Big Data and the Internet of Things, pp. 141–154. IGI Global (2018)


9. Varatharajan, R., Vasanth, K., Gunasekaran, M., Priyan, M., Gao, X.Z.: An adaptive decision based kriging interpolation algorithm for the removal of high density salt and pepper noise in images. Comput. Electr. Eng. (2017). https://doi.org/10.1016/j.compeleceng.2017.05.035
10. Manogaran, G., Thota, C., Lopez, D., Vijayakumar, V., Abbas, K.M., Sundarsekar, R.: Big data knowledge system in healthcare. In: Internet of Things and Big Data Technologies for Next Generation Healthcare, pp. 133–157. Springer, Berlin (2017)
11. Lopez, D., Manogaran, G.: Modelling the H1N1 influenza using mathematical and neural network approaches. Biomed. Res. 28(8), 3711–3715 (2017)
12. Manogaran, G., Lopez, D.: A survey of big data architectures and machine learning algorithms in healthcare. Int. J. Biomed. Eng. Technol. 15, 23–34 (2017)
13. Blasch, B.B., Wiener, W.R., Welsh, R.L.: Foundations of Orientation and Mobility, 2nd edn. AFB Press, New York (1997)
14. Kumar, K., Champaty, B., Uvanesh, K., Chachan, R., Pal, K., Anis, A.: Development of an ultrasonic cane as a navigation aid for the blind people. In: International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), pp. 475–479 (July 2014)
15. Jain, R.D., Balakrishnan, R.P.V.: Indoor navigation system for visually impaired. In: Association for Computing Machinery (May 2013)
16. Nandhini, N., Vinothchakkaravarthy, G., Deepa Priya, G.: Talking assistance about location finding both indoor and outdoor for blind people. Int. J. Innov. Res. Sci. Eng. Technol. 3, 9644–9651 (February 2014)
17. Dharani, P., Lipson, B., Thomas, D.: RFID navigation system for the visually impaired. Worcester Polytechnic Institute (2012)
18. Koley, S., Mishra, R.: Voice operated outdoor navigation system for visually impaired persons. Int. J. Eng. Trends Technol. 3(2) (2012)
19. Manogaran, G., Lopez, D.: Spatial cumulative sum algorithm with big data analytics for climate change detection. Comput. Electr. Eng. (2017). https://doi.org/10.1016/j.compeleceng.2017.04.006
20. Manogaran, G., Thota, C., Lopez, D.: Human-computer interaction with big data analytics. In: HCI Challenges and Privacy Preservation in Big Data Security, pp. 1–22. IGI Global (2018)
21. Thota, C., Manogaran, G., Lopez, D., Vijayakumar, V.: Big data security framework for distributed cloud data centers. In: Cybersecurity Breaches and Issues Surrounding Online Threat Protection, pp. 288–310. IGI Global (2017)
22. Priyan, M.K., Devi, G.U.: Energy efficient node selection algorithm based on node performance index and random waypoint mobility model in internet of vehicles. Clust. Comput. (2017). https://doi.org/10.1007/s10586-017-0998-x
23. Kumar, P.M., Gandhi, U.D.: A novel three-tier Internet of Things architecture with machine learning algorithm for early detection of heart diseases. Comput. Electr. Eng. (2017). https://doi.org/10.1016/j.compeleceng.2017.09.001
24. Kumar, P.M., Gandhi, U.D.: Enhanced DTLS with CoAP-based authentication scheme for the internet of things in healthcare application. J. Supercomput. (2017). https://doi.org/10.1007/s11227-017-2169-5
25. Simoes, W.C., de Lucena, V.F.: Blind user wearable audio assistance for indoor navigation based on visual markers and ultrasonic obstacle detection. In: IEEE International Conference on Consumer Electronics (ICCE) (2016)
26. Lakde, C.K., Prasad, P.S.: Navigation system for visually impaired people. In: International Conference on Computation of Power, Energy, Information and Communication (2015)
27. Lopez, D., Manogaran, G.: Modelling the H1N1 influenza using mathematical and neural network approaches. Biomed. Res. 28(8), 3711–3715 (2017)
28. Manogaran, G., Thota, C., Lopez, D., Sundarasekar, R.: Big data security intelligence for healthcare industry 4.0. In: Cybersecurity for Industry 4.0: Analysis for Design and Manufacturing, vol. 3, p. 103 (2017)
29. Manogaran, G., Lopez, D., Thota, C., Abbas, K.M., Pyne, S., Sundarasekar, R.: Big data analytics in healthcare Internet of Things. In: Innovative Healthcare Systems for the 21st Century, pp. 263–284. Springer, New York (2017)
30. Verma, R.N., Jain, K., Rizvi, M.A.: Efficient face recognition method using RBF kernel and genetic algorithm. In: IEEE International Conference on Computer, Communication and Control (IC4-2015), pp. 1–5 (2015)
31. Kumar, A.L., Ganesan, R.: Improved navigation for visually challenged with high authentication using a modified sift algorithm. Int. J. Adv. Res. Comput. Sci. Technol. 2, 1–5 (2014)
32. Ueki, K., Kobayashi, T.: Multi-layer feature extractions for image classification—knowledge from deep CNNs. In: 2015 International Conference on Systems, Signals, Image Processing (November 2015)
33. Kaladharan, N.: An English text to speech conversion system. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 5(10) (October 2015)
34. Mache, S.R., Baheti, M.R., Mahender, C.N.: Review on text-to-speech synthesizer. Int. J. Adv. Res. Comput. Commun. Eng. 4(8), 540 (August 2015)
35. Manogaran, G., Thota, C., Kumar, M.V.: MetaCloudDataStorage architecture for big data security in cloud computing. Procedia Comput. Sci. 31(87), 128–133 (2016)
36. Varatharajan, R., Manogaran, G., Priyan, M.K., Sundarasekar, R.: Wearable sensor devices for early detection of Alzheimer disease using dynamic time warping algorithm. Clust. Comput. (2017). https://doi.org/10.1007/s10586-017-0977-2
37. Varatharajan, R., Manogaran, G., Priyan, M.K., Balaş, V.E., Barna, C.: Visual analysis of geospatial habitat suitability model based on inverse distance weighting with paired comparison analysis. Multimed. Tools Appl. (2017). https://doi.org/10.1007/s11042-017-4768-9
38. Damani, A., Shah, H., Shah, K., Vala, M.: Global positioning system for object tracking. Int. J. Comput. Appl. 109, 40–45 (2015)
39. Lopez, D., Gunasekaran, M.: Assessment of vaccination strategies using fuzzy multi-criteria decision making. In: Proceedings of the Fifth International Conference on Fuzzy and Neuro Computing (FANCCO-2015), pp. 195–208. Springer, New York (2015)
40. Lopez, D., Gunasekaran, M., Murugan, B.S., Kaur, H., Abbas, K.M.: Spatial big data analytics of influenza epidemic in Vellore, India. In: IEEE International Conference on Big Data (Big Data), pp. 19–24 (2014)
41. Lopez, D., Sekaran, G.: Climate change and disease dynamics—a big data perspective. Int. J. Infect. Dis. 45, 23–24 (2016)
42. Tudor, D., Dobrescu, L., Dobrescu, D.: Ultrasonic electronic system for blind people navigation. In: The 5th IEEE International Conference on E-Health and Bioengineering—EHB, November 19–21 (2015)
43. Sutar Shekhar, S., Pophali, S.S., Kamad, N.S., Deokatelaxman, J.: Intelligent voice assistant using android platform. Int. J. Adv. Res. Comput. Sci. Manag. Stud. 3(3) (March 2015)
44. Manogaran, G., Lopez, D.: Disease surveillance system for big climate data processing and dengue transmission. Int. J. Ambient Comput. Intell. 8(2), 88–105 (2017)


45. Lopez, D., Manogaran, G.: Big Data Architecture for Climate Change and Disease Dynamics. CRC Press, Boca Raton (2016)
46. Yong, S.P., Chen, Y.Y., Wan, C.E.: Seismic image recognition tool via artificial neural network. In: International Symposium on Computational Intelligence and Informatics, pp. 19–21 (November 2013)

Priyan Malarvizhi Kumar is currently pursuing a Ph.D. in the Vellore Institute of Technology University. He received his Bachelor of Engineering and Master of Engineering degrees from Anna University and Vellore Institute of Technology University, respectively. His current research interests include Big Data Analytics, Internet of Things, Internet of Everything, and Internet of Vehicles in Healthcare. He is the author/co-author of papers in international journals and conferences.

Ushadevi Gandhi is working as an Associate Professor in the School of Information Technology and Engineering, Vellore Institute of Technology University. She received her Bachelor of Engineering and Master of Engineering degrees from Anna University. Her current research interests include big data analytics and wireless networks. She has published a number of papers in international journals and conferences. She is a member of CSI and IEEE.

R. Varatharajan received his B.E., M.E. and Ph.D. degrees, all in Electronics and Communication Engineering, from Anna University and Bharath University, India. His main areas of research activity are medical image processing, wireless networks and VLSI physical design. He has served as a reviewer for Springer, Inderscience and Elsevier journals. He has published many research articles in refereed journals. He is a member of IEEE, IACSIT, IAENG, SCIEI and the ISTE wireless research group. He has been serving as Organizing Chair and Program Chair of several international conferences and in the program committees of several international conferences. Currently he is working as an Associate Professor in the Department of Electronics and Communication Engineering at Sri Ramanujar Engineering College, Chennai, India.

Gunasekaran Manogaran is currently pursuing a Ph.D. in the Vellore Institute of Technology University. He received his Bachelor of Engineering and Master of Technology from Anna University and Vellore Institute of Technology University, respectively. He has worked as a Research Assistant for a project on spatial data mining funded by the Indian Council of Medical Research, Government of India. His current research interests include data mining, big data analytics and soft computing. He is the author/co-author of papers in conferences, book chapters and journals. He received a young investigator award for India and Southeast Asia from the Bill and Melinda Gates Foundation. He is a member of the International Society for Infectious Diseases and Machine Intelligence Research Labs.

Jidhesh R. received his Master of Engineering degree from Vellore Institute of Technology University, Vellore, India. His current research interests include Big Data Analytics, Image Processing, Internet of Things, Internet of Everything, and Internet of Vehicles in Healthcare. He is the author/co-author of papers in international journals and conferences.

Thanjai Vadivel received his B.E. degree in Computer Science and Engineering from Anna University, Chennai in 2008. He completed his Master's degree in IT (Networking) from VIT University, Vellore in 2012. He is currently pursuing a Ph.D. at Veltech Dr. RR & Dr. SR University, Chennai, India, and is currently working as an Assistant Professor in the Department of Computer Science and Engineering at Veltech Dr. RR & Dr. SR University, Chennai, India. His areas of interest are network security, IoT, wireless sensor networks, cloud security and data mining.

