
Speech Based Biomedical Devices Monitoring Using LabVIEW
Dr. Dilip R
Associate Professor, Department of Electrical and Electronics Engineering
Global Academy of Technology, Bengaluru, India
dilip.raju@yahoo.com

Dr. Yogini Dilip Borole
Assistant Professor, Department of Electronics and Telecommunications Engineering
G H Raisoni Institute of Engineering and Technology, Pune, Maharashtra, India
yoginiborole@gmail.com

Mrs. Sumalatha S
Assistant Professor, Department of Electronics and Communications Engineering
Acharya Institute of Technology, Bengaluru, India
sumalathas@acharya.ac.in

Mrs. Nethravathi HM
Assistant Professor, Department of Computer Science Engineering
BGS Institute of Technology, ACU, Nagamangala, Karnataka, India
nethravathihm@bgsit.ac.in

Abstract—The primary objective of this work is to develop an interactive real-time system that uses speech recognition to monitor medical data and to manage sensors connected to the human body. A simple system is developed that transfers a patient's vital parameters, such as temperature, heart rate, and electrocardiography (ECG), from a remote location to the doctor over wireless technology. The sensors attached to the human body are interfaced with a MyRIO embedded controller and are monitored and controlled through a speech recognition system: a sensor is activated by the doctor's voice command, drawn from a pre-defined vocabulary, and its data are transferred to the doctor through the controller. Real-time data processing is performed using the LabVIEW tool.

Keywords—Temperature, ECG, Heart Rate, MyRIO, LabVIEW

I. INTRODUCTION

Speech is a natural mode of communication, and recent developments have made it usable in security systems. Once a speaker has been enrolled, the task is to use a sample utterance to pick the speaker's identity out of a population of speakers; speaker verification then uses a speech sample to check whether a claimed identity is genuine. This makes it possible to use the speaker's voice to verify identity and to control access to services such as voice dialling, mobile banking, information and data services, voice mail, guided preventive maintenance, and remote access to computers. In earlier days, people read analog meters, noted the values in books, and had to pass any required data to colleagues, who would respond in turn. This was time consuming and tedious, and manual errors persisted. With the invention of computers all of this changed, and automation has become central in the real world.

Research in speech recognition has been going on for many years, yet the methods still leave room for improvement. Speech recognition can be described as the process by which an acoustic signal, captured by a microphone or telephone, is converted into a sequence of words. The first attempts converted the speech signal directly into a phoneme sequence and were not successful; the first promising results came in the 1990s, when general pattern-matching techniques were adopted. Those systems could recognize only a few words, whereas today thousands of words can be recognized. Speech recognition technology allows computers to obey commands given by the human voice and to understand human language. One objective of this work is therefore to develop a simple device that wirelessly transmits a patient's vital signs from a remote location to the doctor with the help of speech technology. Speaker recognition, which detects a speaker or checks a speaker's identity from the spoken words, is treated as a typical pattern-classification problem. Virtual instrumentation offers an alternative design style in the analysis field: by designing the instrument in software, virtual instrument technology can also be applied to speech recognition, and the analysis effort is reduced.

II. MATERIALS AND TECHNOLOGY

A. LabVIEW

LabVIEW is a system-design platform and development environment for the visual programming language from National Instruments. It runs on several operating systems, including Microsoft Windows, Linux, and macOS, and is typically used for data acquisition, instrument control, and industrial automation [1]. LabVIEW is part of an integrated engineering framework designed specifically for engineers and scientists who build measurement and control systems. Execution is defined by a graphical block diagram (the LabVIEW source code) in which different function nodes are connected by drawing wires; the wires propagate variables, and a node executes as soon as all of its input data are available. LabVIEW integrates user-interface creation (termed front panels) into the development cycle. LabVIEW programs and subroutines are called virtual instruments (VIs). Each VI has three components: the block diagram, the front panel, and the icon/connector pane [2,3].
Figure 1 shows the two cooperating parts of a VI: the block diagram and the front panel. The front panel holds the controls and indicators used at run time. Controls are the inputs: they allow a user to supply data to the VI. Indicators are the outputs: they display the results produced from the VI inputs. The interactive source code is contained in the back panel, which is the block diagram. Every front-panel object appears in the block diagram as a terminal, and the block diagram also holds the structures and functions, taken from the Functions palette, that operate on the data and pass information to the indicators.

Fig 1. Structure Diagram of LabVIEW

B. Speech Technology in LabVIEW

Either the computer microphone input or the MyRIO microphone input is used to record the voice. A list of sentences or phrases is initialized, and the code responds only when certain words or phrases are used. A grammar for recognizing these phrases is generated using Microsoft's speech SDK tools [5,6]. A grammar must be attached to every recognition control so that recognition results are returned from the audio input; the grammar is the description of all the phrases a user may say, together with a set of semantic choices. Here we use grammar constructors and choice classes, initialize the front panel and the voice recognizer, and load the speaker-recognition grammar into the command recognizer before recognition starts. When a vowel or a consonant is articulated, the vocal cords vibrate periodically to create the glottal airflow of speech; the movement of the vocal organs consists of these vibrations. The pitch period is the duration of one glottal pulse, and its reciprocal is called the fundamental frequency. The vocal tract acts as a time-varying filter on this excitation, and its properties include frequency responses that depend on the position of the organs, such as the oral cavity and the lungs. The resonances of the vocal tract shape the signal amplitude across the frequency response. Most researchers consider pitch and frequency-dependent amplitude to be the key characteristics of the voice signal [7,9].

III. RESEARCH GAP

The performance of speech recognition systems can be improved through visualization of data in patient monitoring, storage of data in the cloud using Internet services, and creation of a communication channel between patients, doctors, and caretakers. Even when recognition efficiency is increased, a voice-controlled application faces many challenges, including unwanted noise, environmental disturbance, and variation in frequency, all of which degrade voice-controlled applications. To overcome these limitations, MFCC features, HMMs, and the Hamming-window technique can be introduced to improve performance in the patient-monitoring application [10,13].

Further work covers various approaches and technological tools for storing, monitoring, and analysing patient data even from remote locations when the doctor is absent from the hospital. The existing approaches use platforms that are limited in range and hardware and require additional tools to perform the task, which reduces performance. To overcome this challenge, an embedded controller (MyRIO) and the LabVIEW platform are used, which can increase efficiency to a greater extent.

IV. OBJECTIVE AND METHODOLOGY

A. Objective of the work

This research first gives an overview of the technologies currently available for remote patient monitoring and of the concerns surrounding their use. It then implements a remote patient monitoring system based on wireless sensor networks, to ensure improved outcomes, prevent hospitalizations, and avoid catastrophic incidents, thereby helping to reduce worldwide healthcare costs. The goals of the work are as follows:

• Design and develop a user-interactive system that lets the doctor monitor the patient's data using speech processing from a remote environment.

• Develop the algorithm for interpreting doctor commands and build the PC (Personal Computer) interface for it in a real-time environment using the controller, monitoring, and wireless communication technology.

B. Methodology

An algorithm for the speech recognition technique is developed in LabVIEW, and a wireless link for transmitting and receiving data is created using the embedded controller. Using a speech recognition algorithm based on a statistical approach model, the biomedical devices connected to the patient at the remote location are switched ON/OFF to acquire data, from which the doctor can analyse the patient's condition. Figure 2 shows the proposed block diagram of the doctor end.

Fig 2. Proposed Block Diagram of Doctor End.

Figure 3 shows the patient-end terminal, in which all the medical devices are plugged into the embedded controller.

Fig 3. Proposed Block Diagram of Patient End
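The research gap and methodology above point to MFCC features computed over Hamming-windowed frames as the front end for the statistical (HMM) recognizer. The short Python sketch below illustrates that feature-extraction step only; the librosa library, the parameter values, and the file name "command.wav" are assumptions made for illustration, since the paper realizes this stage graphically in LabVIEW.

import numpy as np
import librosa

# Minimal MFCC front-end sketch (assumed parameters, not the paper's exact settings).
# Frames are 25 ms long with a 10 ms hop and a Hamming window, a common choice
# for HMM-based command recognition.
signal, sr = librosa.load("command.wav", sr=16000)   # hypothetical recorded command
mfcc = librosa.feature.mfcc(
    y=signal,
    sr=sr,
    n_mfcc=13,                   # 13 cepstral coefficients per frame
    n_fft=int(0.025 * sr),       # 25 ms analysis window
    hop_length=int(0.010 * sr),  # 10 ms frame shift
    window="hamming",            # Hamming window named in the research gap
)
print(mfcc.shape)                # (13, number_of_frames) feature matrix for the recognizer

A feature matrix of this form is what a phrase-level statistical model would be trained on and scored against for each command in the vocabulary.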

V. ARCHITECTURE FOR PATIENT MONITORING USING SPEECH RECOGNITION

Fig 4. Block Diagram of Transmitter Section (Doctor End).

Figure 4 shows the working of the transmitter section of the system, in which the speech command captured from the speaker is sent from the personal computer to the embedded controller. Once the speech command is received at the controller, it is analysed against the word and phrase database stored in the system, for which the speech recognition algorithm was developed using LabVIEW programming. Once the command matches the database, the signal is transferred to the receiver section of the system.

Fig 5. Block Diagram of Receiver section (Patient End)

Figure 5 shows the receiver section of the system, in which the corresponding signal is received by the receiver module and the data are transferred to the embedded controller. These data are passed to the BTA connector, through which the connected sensor responds to the command given by the user.

Figure 6 presents the virtual instrumentation system's interactive block diagram for patient monitoring, in which the physical parameters of the patient are acquired, analysed, and displayed.

Fig 6. Block diagram of Patient Monitoring using Graphical Programming

VI. DEPLOYMENT OF FRAMEWORK ON PATIENT MONITORING

The building block of any effective hardware implementation is good design; an important part of this is repeated review and testing to detect and correct errors before implementing an interactive computer system that tracks the patient's physical parameters. The design targets an interactive, low-cost, low-power, scalable platform for monitoring the patient's physical parameters.

• Keep the first configuration as basic as possible and choose simpler hardware over device complexity. As a first step in building the hardware, a simpler design is a good starting point and prevents problems caused by inefficient hardware.

• The specification should be such that state-of-the-art speaker recognition systems are commercially available.

Figure 7 shows the choices and considerations taken into account; it describes the choice of hardware and the architecture selected to implement the platform.

Fig 7. Flow Chart of Choice Selection
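The transmitter and receiver sections in Figs. 4 and 5 move a matched command from the doctor end to the patient end, and the acquired readings travel back the same way. The paper does not specify the wireless protocol, so the Python sketch below simply assumes a TCP socket between the two ends to make the exchange concrete; the host address, port, and JSON message format are illustrative choices, not details from the paper.

import json
import socket

PATIENT_END = ("192.168.0.20", 5005)   # assumed address of the patient-end controller

def send_command(command: str) -> dict:
    """Doctor end: send a matched command and wait for the sensor reading."""
    with socket.create_connection(PATIENT_END, timeout=5.0) as conn:
        conn.sendall((command + "\n").encode("utf-8"))
        reply = conn.makefile("r", encoding="utf-8").readline()
    return json.loads(reply)            # e.g. {"sensor": "temperature", "value": 98.6}

def serve_patient_end(read_sensor) -> None:
    """Patient end: accept commands and answer each with the requested reading."""
    with socket.create_server(PATIENT_END) as server:
        while True:
            conn, _ = server.accept()
            with conn:
                command = conn.makefile("r", encoding="utf-8").readline().strip()
                if command.upper() == "STOP":
                    break               # doctor ended the session
                value = read_sensor(command)   # hypothetical acquisition from MyRIO
                message = json.dumps({"sensor": command, "value": value})
                conn.sendall(message.encode("utf-8") + b"\n")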

VII. WORKING OPERATION

A. Voice Command operation

Voice command analysis is based on feature extraction, and the feature recognition system implements a speech algorithm developed in LabVIEW, in which the phrase block requires a list of speech commands that is already stored in the database and created using string functions. Speech recognition is used for monitoring and control of the biomedical devices. In this system an HMM speech model with the MFCC algorithm is used, and heart-rate, ECG, and temperature sensors are considered for patient monitoring and control. The voice command given by the doctor is drawn from the list of phrases created in the database, as shown in Table I.

TABLE I. COMMAND OPERATION IN REAL TIME ANALYSIS

DATABASE INFORMATION   REAL TIME ANALYSIS
TEMPERATURE            BT
HEART-RATE             HEART RATE
ECG                    ECG
STOP                   STOP

The given speech command is compared with the pre-set commands, so the doctor, sitting in his room, can see the patient's physical parameters such as heart rate, temperature, and ECG. The doctor uses a real-time phrase that has been defined and created in the database and gives the input as a speech command. The output of the speech signal is provided to the embedded controller (MyRIO), which is interconnected with the LabVIEW platform, and is compared with the database by the speech algorithm developed in LabVIEW. If the condition is true, the specific function operates; otherwise the system waits for a proper command. Finally, with the help of wireless technology the information is transferred from the doctor end to the patient end. At the patient end, the information is received by the transceiver connected to the embedded controller, which turns the medical devices ON or OFF based on the voice command.

Case 1: If the doctor wants to know the status of temperature, the doctor gives the input BT [Body Temperature]; the temperature sensor is then turned on and sends its value to the main function for visualisation.

Case 2: If the doctor wants to know the status of heart rate, the doctor gives the input Heart rate; the heart-rate sensor is then turned on and sends its value to the main function for visualisation.

Case 3: If the doctor wants to know the status of ECG, the doctor gives the input ECG; the ECG electrode sensor is then turned on and sends its value to the main function for visualisation.

Finally, when the doctor wants to stop the monitoring, he simply says Stop, and the entire system goes to the off (steady) state.

VIII. RESULTS AND DISCUSSION

Continuing from the working operation of the speech recognition system involved in patient monitoring, this section elaborates the results and discussion with real-time analysis. Figure 8 shows the graphical user interface developed in LabVIEW, which represents the blank template before the code is executed.

Fig 8. Design of Speech recognition system

Figure 9 shows the flow chart of the operations involved in the speech recognition system, continuing from the working operation of the speech recognition algorithm discussed above. The speech command given by the doctor is analysed with the help of the algorithm developed on the LabVIEW platform. In this system the doctor uses four main phrases, BT, Heart-Rate, ECG, and Stop, which are used to turn on the bio-sensors connected to the human body. The results are discussed stage by stage in detail.

Stage 1: When the code is executed, the system waits for a voice command from the doctor.

Stage 2: Once the doctor provides the voice command, the command is compared with the database; if it matches the database, the system goes forward to turn on the biomedical device.

Stage 3: If the command does not match the database, the system returns to the first stage and keeps repeating until the speech signal matches the database.
Stage 4: If the database is matched, then based on the phrases defined by the user, the biomedical devices turn on and start acquiring the data.
Stage 5: All the acquired data is transferred from the
patient end to the doctor end with the help of LabVIEW, for
data visualization and real-time monitoring.
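The five stages amount to a simple loop: listen, match the phrase against the command vocabulary of Table I, activate the matching sensor, and transfer the reading. The Python sketch below illustrates that control flow only; the functions recognize_phrase(), read_sensor(), and send_to_doctor() are hypothetical placeholders for the recognizer, the MyRIO sensor reads, and the wireless link, none of which are given in code form in the paper.

# Minimal sketch of the Stage 1-5 command loop (assumed helpers, not the paper's LabVIEW code).
COMMANDS = {"BT": "temperature", "HEART RATE": "heart_rate", "ECG": "ecg"}

def recognize_phrase() -> str:
    """Hypothetical: return one recognized phrase from the microphone."""
    raise NotImplementedError

def read_sensor(name: str) -> float:
    """Hypothetical: acquire one value from the named sensor via MyRIO."""
    raise NotImplementedError

def send_to_doctor(name: str, value: float) -> None:
    """Hypothetical: push the acquired value over the wireless link."""
    raise NotImplementedError

def monitoring_loop() -> None:
    while True:                              # Stage 1: wait for a voice command
        phrase = recognize_phrase().strip().upper()
        if phrase == "STOP":                 # doctor ends the session
            break
        sensor = COMMANDS.get(phrase)        # Stage 2/3: compare with the database
        if sensor is None:
            continue                         # no match: listen again
        value = read_sensor(sensor)          # Stage 4: turn on the device and acquire
        send_to_doctor(sensor, value)        # Stage 5: transfer for visualization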

Fig 9. Flow Chart of Speech Recognition Model

Figure 10 shows the graphical representation of the output status for body temperature. When the doctor's command is BT (body temperature), the system recognizes the input using the speech recognition algorithm described above, and the temperature sensor is turned on.

Fig 10. Command Operation of Body Temperature

Figure 11 shows the resulting body-temperature status when the temperature sensor is turned on in response to the command received from the doctor; the temperature data are reported in degrees Fahrenheit.

Fig 11. Output of Temperature sensor

Figure 12 shows the graphical representation of the output status for heart rate. When the doctor's command is Heart rate, the system recognizes the input using the speech recognition algorithm described above, and the heart-rate sensor is turned on.

Fig 12. Command Operation of Heart-Rate

Figure 13 shows the resulting heart-rate status when the hand-grip heart-rate sensor is turned on in response to the command received from the doctor; the heart-rate data are reported in BPM.

Fig 13. Output of Heart-Rate sensor

Figure 14 shows the graphical representation of the output status for ECG. When the doctor's command is ECG, the system recognizes the input using the speech recognition algorithm described above, and the ECG sensor is turned on.
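Two of the quantities reported above are simple derived values: the temperature channel of Fig. 11 is displayed in degrees Fahrenheit, and the heart-rate channel of Fig. 13 is displayed in BPM, which follows from the interval between successive pulse peaks. The Python sketch below shows one plausible way to compute both; a Celsius reading from the sensor and a uniformly sampled pulse waveform are assumptions, not details given in the paper.

import numpy as np
from scipy.signal import find_peaks

def celsius_to_fahrenheit(temp_c: float) -> float:
    """Convert a body-temperature reading to the Fahrenheit scale used in Fig. 11."""
    return temp_c * 9.0 / 5.0 + 32.0

def heart_rate_bpm(pulse: np.ndarray, fs: float) -> float:
    """Estimate BPM from a uniformly sampled pulse waveform (assumed input format).

    Peaks are required to be at least 0.4 s apart, and the mean inter-beat
    interval is converted to beats per minute, as displayed in Fig. 13.
    """
    peaks, _ = find_peaks(pulse, distance=int(0.4 * fs), prominence=np.std(pulse))
    if len(peaks) < 2:
        return 0.0
    ibi = np.diff(peaks) / fs           # inter-beat intervals in seconds
    return 60.0 / float(np.mean(ibi))   # beats per minute

# Example: a normal body temperature of 37.0 degC corresponds to 98.6 degF.
print(celsius_to_fahrenheit(37.0))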

Fig 14. Command Operation of ECG

Figure 15 shows the resulting ECG status when the ECG electrode sensor is turned on in response to the command received from the doctor; the ECG data are represented as a signal in time and amplitude.

Fig 15. Output of ECG sensor

IX. CONCLUSION

In the present worldwide pandemic, the need for technology is rising, with rapid growth in advanced technology for upgrading the current healthcare infrastructure used for patient monitoring. A voice-command-based health monitoring system has great potential in the near future to become an important interactive link between human and computer. The speech recognition algorithm developed here can serve multiple speakers with the help of the MyRIO embedded controller and LabVIEW. The speech recognition system was successfully trained to recognize voice inputs captured with a microphone and compared against the pre-created database.

REFERENCES

[1] S. Shirali-Shahreza, H. Sameti and M. Shirali-Shahreza, "Parental control based on speaker class verification," IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp. 1244-1251, August 2008, doi: 10.1109/TCE.2008.4637613.
[2] L. Tang, P. Zhou and X. Wei, "A Speaker Verification System Based on EMD," 2009 Third International Conference on Genetic and Evolutionary Computing, Guilin, 2009, pp. 553-556, doi: 10.1109/WGEC.2009.101.
[3] R. Gupta, M. Mitra and J. Bera, "Development of a State-of-the-Art ECG DAS for Storing, Processing and Analysis Using MATLAB-Based GUI and Microprocessor," 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, Trivandrum, Kerala, 2009, pp. 570-572, doi: 10.1109/ACT.2009.144.
[4] A. Cinar et al., "Automated patient monitoring and diagnosis assistance by integrating statistical and artificial intelligence tools," Proceedings of the First Joint BMES/EMBS Conference, Atlanta, GA, USA, 1999, p. 700 vol. 2, doi: 10.1109/IEMBS.1999.803855.
[5] T. Obaid, H. Rashed, A. Abu El Nour, M. Rehan, M. M. Saleh and M. Tarique, "ZigBee Based Voice Controlled Wireless Smart Home System," International Journal of Wireless & Mobile Networks (IJWMN), vol. 6, no. 1, pp. 47-58, February 2014, doi: 10.5121/ijwmn.2014.6104.
[6] A. Devi and V. Suganya, "An Analysis on Types of Speech Recognition and Algorithms," International Journal of Computer Science Trends and Technology (IJCST), vol. 4, no. 2, pp. 350-355, Mar-Apr 2016.
[7] S. Pleshkova, Z. Zahariev and A. Bekiarski, "Development of Speech Recognition Algorithm and LabView Model for Voice Command Control of Mobile Robot Motion," 2018 International Conference on High Technology for Sustainable Development (HiTech), Sofia, 2018, pp. 1-4, doi: 10.1109/HiTech.2018.8566257.
[8] A. A. Abed, "Design of Voice Controlled Smart Wheelchair," International Journal of Computer Applications, vol. 131, no. 1, pp. 32-38, December 2015.
[9] Kamelia, Alfin Noorhassan S.R., Mada Sanjaya W.S. and Edi Mulyana, "Door-Automation System Using Bluetooth-Based Android for Mobile Phone," ARPN Journal of Engineering and Applied Sciences, vol. 9, no. 10, pp. 1759-1762, October 2014, ISSN 1819-6608.
[10] A. Caranica, H. Cucu, C. Burileanu, F. Portet and M. Vacher, "Speech recognition results for voice-controlled assistive applications," 2017 International Conference on Speech Technology and Human-Computer Dialogue (SpeD), Bucharest, 2017, pp. 1-8, doi: 10.1109/SPED.2017.7990438.
[11] A. K. Gnanasekar, P. Jayavelu and V. Nagarajan, "Speech recognition based wireless automation of home loads with fault identification for physically challenged," 2012 International Conference on Communication and Signal Processing, Chennai, 2012, pp. 128-132, doi: 10.1109/ICCSP.2012.6208408.
[12] Amrutha S, Aravind S, Ansu Mathew, Swathy Sugathan, Rajasree R and Priyalakshmi S, "Speech Recognition Based Wireless Automation of Home Loads - E Home," International Journal of Engineering Science and Innovative Technology (IJESIT), vol. 4, no. 1, pp. 179-184, January 2015.
[13] R. Dilip and V. Bhagirathi, "Image Processing Techniques for Coin Classification Using LabVIEW," Open Journal of Artificial Intelligence, vol. 1, no. 1, pp. 13-17, 2013, doi: 10.12966/ojai.08.03.2013.
[14] N. Mukati, N. Namdev, R. Dilip, N. Hemalatha, V. Dhiman and B. Sahu, "Healthcare Assistance to COVID-19 Patient using Internet of Things (IoT) Enabled Technologies," Materials Today: Proceedings, 2021, doi: 10.1016/j.matpr.2021.07.379.
[15] R. Dilip and K. B. Ramesh, "Development of Graphical System for Patient Monitoring using Cloud Computing," International Journal of Advanced Science and Technology, vol. 29, no. 12s, pp. 2353-2368, 2020.
[16] R. Dilip and K. B. Ramesh, "Design and Development of Silent Speech Recognition System for Monitoring of Devices," International Journal for Research in Applied Science and Engineering Technology (IJRASET), vol. 7, issue VI, ISSN 2321-9653.
[17] A. Shrivastava and S. K. Sharma, "Various Arbitration Algorithm for on-Chip (AMBA) Shared Bus Multi-Processor SoC," IEEE Students' Conference on Electrical, Electronics and Computer Science (SCEECS-2016), MNIT Bhopal, pp. 1-7, 2016.
[18] A. Shrivastava and S. K. Sharma, "AMBA AXI Bus Verification Technique," International Journal of Applied Engineering Research, vol. 10, no. 24, pp. 44178-44182, 2015.
[19] A. Shrivastava and S. K. Sharma, "A Reliable Routing Architecture and Algorithm for Network-on-Chip," Journal of Electronic Design Technology, vol. 6, no. 3, pp. 40-48, 2015.
[20] A. Shrivastava and A. Pandit, "Design and Performance Evaluation of a NoC-Based Router Architecture for MPSoC," IEEE Fourth International Conference on Computational Intelligence and Communication Networks, GLA Mathura, pp. 468-472, 2012.
[21] A. Shrivastava, A. Singh and G. S. Tomar, "Performance Comparison of AMBA Bus-Based System-On-Chip Communication Protocol," IEEE International Conference on Communication Systems and Network Technologies, SMVDU Katra, pp. 449-454, 2011.
[22] A. Shrivastava, A. Singh and G. S. Tomar, "Design and Implementation of High Performance AHB Reconfigurable Arbiter for On-Chip Bus Architecture," IEEE International Conference on Communication Systems and Network Technologies, SMVDU Katra, pp. 455-458, 2011.
[23] A. Shrivastava, G. S. Tomar and K. K. Kalra, "Efficient Design and Performance Analysis for AMBA Bus Architecture Based System-on-Chip," IEEE International Conference on Computational Intelligence and Communication Systems, R.G.P.V. Bhopal, pp. 656-660, 2010.
[24] A. Shrivastava and S. K. Sharma, "Efficient Bus Based Router for NoC Architecture," World Journal of Engineering, vol. 13, no. 4, 2016.

