1 INTRODUCTION
2 RELATED WORK: SECURITY AND PRIVACY ON IOT TECHNOLOGY
5 DISCUSSION
IoT Security
1. IoT (Internet of Things) has been widely implemented in various sectors
• healthcare, manufacturing, transportation, smart cities, and others
2. IoT implementation aims to improve society's welfare through various technological facilities; IIoT has a positive impact on industry development
3. There is a trade-off between the benefits provided by IoT technology and its security services
4. The two main concerns in IoT implementation are security and privacy
• financial loss, production loss due to device malfunction, loss of intellectual property, and even accidents or loss of life
5. Smart city
✚ smart lighting, pollution, transport, traffic congestion, waste management, and resource scarcity
− traffic incidents, financial losses, water pollution, information breaches, and even loss of life if an attacker manipulates a building control system
6. Healthcare
- interruptions at end and network devices, modification and fabrication of patient data, or even the use of malicious code to modify programs on devices used to treat patients
SECURITY AND PRIVACY ON IOT TECHNOLOGY
• Various types of security threats and vulnerability issues have emerged on IoT technology; they are classified into three layers, namely perception, network, and application.
• Application-layer threats include DoS and data manipulation in smart environments; malicious attacks and physical security on smart grids; insider misuse; cyber-attacks; privacy attacks on maintenance systems; and insecure transmission control, such as configuration and spectrum sharing in smart transportation. Regarding privacy issues, the Breach Level Index reports more than two million breached data records [6], with almost 100 records stolen or lost every hour. Figure 1 shows the number of privacy issues that emerged during 2017.
[Figure: three-layer IoT model (Application Layer, Network Layer, Perception Layer) within the IoT environment, framed by Security and Privacy]
Integrated Model of Technical and Non-Technical Perspectives on Managing IoT Security
Regulators’ Role in IoT Security
“Who is speaking?”
Industries’ Role in IoT Security
End Users’ Role in IoT Security
Speaker Recognition System
The Common Methods
Biometric (Speaker Recognition)
§ Feature Extraction: MFCC [2],[3],[4],[6],[8],[9],[10],[11]; LPC [2]
§ Feature Matching: VQ [3],[4]; HMM [5],[6],[7]; GMM [8],[9],[10]; NN [11],[12]
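In VQ-based feature matching, each enrolled speaker is represented by a codebook of feature vectors, and an utterance is assigned to the speaker whose codebook yields the lowest average distortion. A minimal sketch, assuming Euclidean distance; the function names and the dict-of-codebooks layout are illustrative, not the implementation used here:

```python
import numpy as np

def vq_distortion(features, codebook):
    # Mean distance from each feature vector (N, D) to its nearest
    # codeword in the codebook (K, D): the VQ distortion measure.
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def identify(features, codebooks):
    # codebooks: dict mapping speaker name -> (K, D) codebook array.
    # The speaker with the lowest distortion is the decision.
    return min(codebooks, key=lambda s: vq_distortion(features, codebooks[s]))
```

In practice each codebook would be trained from enrollment features with the LBG algorithm [23]; here any (K, D) array stands in for it.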
§ Mel Frequency Cepstral Coefficients (MFCC) are coefficients that represent an audio signal based on the perception of the human hearing system.
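As a rough illustration, MFCC extraction windows a frame of speech, takes its power spectrum, applies a triangular mel-scale filterbank, and decorrelates the log filterbank energies with a DCT. A minimal single-frame sketch in NumPy; the parameters (16 kHz rate, 512-point FFT, 26 filters, 13 coefficients) are common defaults assumed for illustration, not the exact configuration of this work:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    # Window one frame and take the power spectrum.
    frame = signal[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale.
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    logmel = np.log(fb @ power + 1e-10)  # log filterbank energies
    # DCT-II decorrelates the log energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return dct @ logmel
```

A full front end would also apply pre-emphasis and slide this over overlapping frames; one frame is enough to show the pipeline.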
Input Signal
Database Noise
Input signal
Simulation Block Diagram
[Block diagram: continuous speech → DWT (HPF → cD detail coefficients; LPF → cA approximation coefficients) → MFCC (Mel cepstrum) → VQ → result]
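Each DWT level in the diagram above splits the signal through a high-pass/low-pass filter pair followed by downsampling, producing detail (cD) and approximation (cA) coefficients. A one-level sketch using the Haar wavelet; the wavelet actually used in the simulation is not stated, so Haar is an illustrative assumption:

```python
import numpy as np

def haar_dwt(x):
    # One-level Haar DWT: sums act as the low-pass branch (cA),
    # differences as the high-pass branch (cD), each downsampled by 2.
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = np.append(x, 0.0)  # zero-pad to even length
    cA = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients (LPF)
    cD = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients (HPF)
    return cA, cD
```

Deeper decomposition levels (level 2, level 3) simply reapply the transform to the cA output of the previous level.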
Simulation Result
§ The graph represents the accuracy (%) of the speaker recognition system at different SNR values (dB) for the MFCC, DWT-MFCC level 1, DWT-MFCC level 2, and DWT-MFCC level 3 methods.
§ SNR values from 15 dB to 40 dB were used to represent environments that contain noise in general.
§ The goal is to evaluate the performance of MFCC and DWT-MFCC, as well as the effect of the decomposition level.
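Testing at a given SNR means scaling the noise so that the signal-to-noise power ratio matches the requested value in dB. A sketch of this step; white Gaussian noise is an assumption, as the simulation's exact noise model is not stated here:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    # Scale white Gaussian noise so the mixture has the requested SNR (dB):
    # SNR_dB = 10 * log10(P_signal / P_noise).
    rng = np.random.default_rng(0) if rng is None else rng
    signal = np.asarray(signal, dtype=float)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise
```

Sweeping `snr_db` from 15 to 40 and measuring recognition accuracy at each point reproduces the kind of curve described above.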
Simulation Result
✓ All of the methods show the same trend: the greater the SNR value, the higher the accuracy of the system.
✓ The accuracy of the MFCC method is the lowest.
✓ The accuracy of the DWT-MFCC level 3 method is the highest at 25 to 35 dB SNR, but its accuracy starts to decrease at SNR 22.5 dB; at 15 dB SNR its accuracy is even lower than MFCC.
✓ At SNR 15 to 22.5 dB and 37.5 to 40 dB, the DWT-MFCC level 2 method has the highest accuracy compared to the others. At 25 to 35 dB SNR, this method is not the most superior, but its accuracy is equally good and not much different from the DWT-MFCC level 3 method.
Discussion
Conclusion
Future Work
References
[1] Arun A. Ross, Karthik Nandakumar, Anil K. Jain. “Handbook of Multibiometrics”, Springer, 2006.
[2] I. Daly, Z. Hajaiej, and A. Gharsallah, “Speech analysis in search of speakers with MFCC, PLP, Jitter and Shimmer,”
Proc. Int. Conf. Adv. Syst. Electr. Technol. IC_ASET 2017, pp. 291–294, 2017.
[3] Y. Zhao and L. Zhu, “Speaker-dependent isolated-word speech recognition system based on vector quantization,” Proc. 2017 Int. Conf. Comput. Network, Electron. Autom. ICCNEA 2017, vol. 2017-January, pp. 113–137, 2017.
[4] J. Martinez, H. Perez, E. Escamilla, and M. M. Suzuki, “Speaker recognition using Mel Frequency Cepstral Coefficients
(MFCC) and Vector quantization (VQ) techniques,” CONIELECOMP 2012 - 22nd Int. Conf. Electron. Commun. Comput.,
pp. 248–251, 2012.
[5] P. Dymarski and S. Wydra, “Large Margin Hidden Markov Models in command recognition and speaker verification
problems,” Proc. IWSSIP 2008 - 15th Int. Conf. Syst. Signals Image Process., no. section 3, pp. 220–224, 2008.
[6] I. Shahin, “Speaker identification in shouted talking environments based on novel Third-Order Hidden Markov Models,”
ICALIP 2014 - 2014 Int. Conf. Audio, Lang. Image Process. Proc., pp. 352–357, 2015.
[7] N. S. Dey, R. Mohanty, and K. L. Chugh, “Speech and speaker recognition system using Artificial Neural Networks and
Hidden Markov Model,” Proc. - Int. Conf. Commun. Syst. Netw. Technol. CSNT 2012, pp. 311–315, 2012.
[8] X. Fan and J. H. L. Hansen, “Speaker identification within whispered speech audio streams,” vol. 19, no. 5, pp. 1408–1421, 2011.
[9] S. Nakagawa, L. Wang, and S. Ohtsuka, “Speaker identification and verification by combining MFCC and phase
information,” IEEE Trans. Audio, Speech Lang. Process., vol. 20, no. 4, pp. 1085–1095, 2012.
[10] Z. Wu and Z. Cao, “Improved MFCC-based feature for robust speaker identification,” Tsinghua Sci. Technol., vol. 10, no. 2, pp. 158–161, 2005.
References
[11] N. Chauhan and M. Chandra, “Speaker recognition and verification using Artificial Neural Network,” Proc. 2017 Int. Conf. Wirel. Commun. Signal Process. Networking, WiSPNET 2017, vol. 2018-January, pp. 1147–1149, 2018.
[12] W. Wang, Q. Yuan, R. Zhou, and Y. Yan, “Characterization vector extraction using Neural Network for speaker
recognition,” Proc. - 2016 8th Int. Conf. Intell. Human-Machine Syst. Cybern. IHMSC 2016, vol. 1, pp. 355–358, 2016.
[13] Noor Almaadeed, Amar Aggoun, Abbes Amira, “Speaker identification using multimodal Neural Networks and Wavelet
Analysis”, IET Biom., Vol. 4, Iss. 1, pp. 18–28, 2015
[14] Steven B. Davis, Paul Mermelstein, “Comparison of parametric representations for monosyllabic word recognition in
continuously spoken sentences”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-28, No. 4,
August 1980.
[15] S. Umesh and Rohit Sinha, “A study of filter bank smoothing in MFCC features for recognition of children’s speech”, IEEE
Transactions on Audio, Speech, and Language Processing, Vol. 15, NO. 8, November 2007.
[16] Kritagya Bhattarai, P.W.C. Prasad, Abeer Alsadoon, L. Pham, A. Elchouemi, “Experiments on the MFCC application in
speaker recognition using Matlab”, Seventh International Conference on Information Science and Technology, 2017.
[17] Subhasmita Sahoo, Aurobinda Routray, “MFCC feature with optimized frequency range: an essential step for emotion
recognition”, IEEE Conference, 2016.
[18] Shailaja S Yadav, D.G. Bhalke, “Speaker identification system using Wavelet Transform and VQ modeling technique”,
International Journal of Computer Applications (0975 – 8887), Volume 112 – No. 9, February 2015.
[19] Alfred J. Menezes, Paul C. Van Oorschot, Scott A. Vanstone, Handbook of Applied Cryptography. Boca Raton: CRC Press, Inc., 1997.
[20] John D. Woodward, Jr., Nicholas M. Orlans, Peter T. Higgins, Biometrics, McGraw-Hill, 2003.
References
[21] Stephane G. Mallat, “A theory for multiresolution signal decomposition: the Wavelet representation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 11, No. 7, July 1989.
[22] "Wavelet Toolbox", Users Guide, Matlab.2009.
[23] Yoseph Linde, Andres Buzo, and Robert M. Gray, “An algorithm for vector quantizer design”, IEEE Transactions on Communications, Vol. COM-28, No. 1, January 1980.
[24] Lawrence Rabiner, “Fundamentals of Speech Recognition”, Prentice-Hall International, Inc., 1993.
[25] Vassil Panayotov, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur, “Librispeech: An ASR corpus based on public
domain audio books”, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015.
[26] Shailaja S Yadav, D.G. Bhalke, “Speaker Identification System using Wavelet Transform and VQ modeling Technique”,
International Journal of Computer Applications (0975 – 8887), Volume 112 – No. 9, February 2015.
Acknowledgment
Thank you...