
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 11, No. 3, September 2022, pp. 1019~1025


ISSN: 2252-8938, DOI: 10.11591/ijai.v11.i3.pp1019-1025

A real-time quantum-conscious multimodal option mining framework using deep learning

Jamuna S. Murthy1, Siddesh Gaddadevara Matt2, Sri Krishna H. Venkatesh3, Kedarnath R. Gubbi3

1 Department of Computer Science and Engineering, Ramaiah Institute of Technology, Bengaluru, India
2 Department of Information Science and Engineering, Ramaiah Institute of Technology, Bengaluru, India
3 Department of Information Science and Engineering, BNM Institute of Technology, Bengaluru, India

Article Info

Article history:
Received Sep 22, 2021
Revised Jun 13, 2022
Accepted Jun 20, 2022

Keywords:
Convolutional neural network
Interactive emotional dyadic motion capture
Long short-term memory
Multimodal emotional lines dataset
Quantum-conscious multimodal option mining framework
Sentiment analysis

ABSTRACT

Option mining is an emerging yet challenging artificial intelligence task. It aims at identifying the emotional states and sentiment changes of the speakers taking part in a conversation on the basis of their opinions, which are conveyed through multiple modalities of data. However, there exists an abundance of intra- and inter-utterance interaction information that influences the speakers' emotions in a complex and dynamic manner, and how to model these complicated interactions accurately and completely is the key issue of the field. To fill this gap, a novel and comprehensive system for multimodal option mining, called the "quantum-conscious multimodal option mining framework (QMF)", is introduced. It uses the mathematical formalism of quantum theory and a long short-term memory network. The QMF consists of a multimodal decision fusion method inspired by quantum interference theory to capture the interactions within each utterance, and a strong-weak influence model inspired by quantum measurement (QM) theory to model the interactions between neighboring utterances. Extensive experiments are conducted on two widely used conversational sentiment datasets: the multimodal emotional lines dataset (MELD) and the interactive emotional dyadic motion capture (IEMOCAP) dataset. The experimental results show that our approach significantly outperforms a broad range of baselines and state-of-the-art models.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Jamuna S Murthy
Department of Computer Science and Engineering, Ramaiah Institute of Technology
Bengaluru, Karnataka, India
Email: jamunamurthy.s@gmail.com

1. INTRODUCTION
Research in multimodal option mining has become a central topic in artificial intelligence related areas such as affective computing, information fusion and multimodal interaction. Beyond conventional text-based sentiment analysis, multimodal sentiment analysis requires both multimodal representation techniques and information fusion strategies, which are essentially feature-level, decision-level and hybrid fusion methods. Most existing multimodal sentiment analysis approaches focus on identifying the polarity of people's opinions posted in online forums, and the multimodal documents used in these studies are mostly individual monologues, without interactions among speakers or authors [1]–[3]. Nowadays, social media touches many areas of social activity. People share information both to better understand and to inform others about the things they care about. A worldwide platform such as Twitter allows individuals to express their feelings in a relatively short media message. Understanding how people feel about products or services is important both for the decision makers who manage those products and services and for their consumers. Aggregated knowledge for decision makers can be provided in the form of indicators, which are values that help decision makers ground their choices [4]–[8].
Multimodal option mining in conversation aims to recognize the emotional states of the different speakers and to track the sentiment change of each speaker over the course of the interaction. Whereas earlier multimodal sentiment analysis approaches describe the relations between modalities, the interaction dynamics in conversations are more intricate and include both intra- and inter-utterance interactions. Intra-utterance interaction refers to the relationship between the modalities within one utterance, such as their mutual influence, joint representation and decision fusion. Inter-utterance interaction involves repeated exchanges among speakers, leading to the trading of ideas and mutual influence [9]–[12].
A substantial body of research has been carried out in the field of sentiment analysis and option mining. The authors in [13]–[16] proposed a novel multimodal system for rating prediction of consumer products by fusing different information sources, namely physiological signals and global reviews obtained independently for the product and its brand. The reviews posted by viewers worldwide are retrieved and processed using natural language processing (NLP) techniques to compute compound scores treated as global ratings. In addition, electroencephalogram (EEG) signals of the participants were recorded while they viewed the products on a computer screen, and self-reported valence scores towards each viewed product were collected to obtain local ratings; a higher valence score corresponds to the inherent appeal of the product to the participant. Random forest based regression techniques are used to model the EEG data and build a rating prediction system regarded as the local rating. Furthermore, an artificial bee colony based optimization algorithm is used to boost the overall performance of the framework by combining the global and local ratings.
Affective computing is an emerging interdisciplinary research field bringing together researchers and practitioners from areas ranging from artificial intelligence (AI) and NLP to psychology and the social sciences [17]–[20]. With the growth of videos posted online for product reviews, movie reviews and political opinions, affective computing research has progressively advanced from traditional unimodal analysis to more complex forms of multimodal analysis. This is the primary motivation behind the first-of-its-kind, comprehensive literature survey of the diverse field of affective computing reported in that work. Moreover, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which that review aims to address. The authors focus mainly on the use of audio, visual and textual information for multimodal affect analysis; following an overview of different techniques for unimodal affect analysis, they outline existing methods for fusing information from different modalities.
In [21], Liu et al. discuss how, with the development of intelligent decision making, one type of decision mode involves a large number of decision makers (DMs), which is called large-scale group decision making (LSGDM). In LSGDM, overconfidence is a common behavior owing to the participation of many DMs and the bounded rationality of human decisions. Overconfidence usually has a negative effect on LSGDM and can even lead to failure of the final decision(s), so reaching consensus is vital. Hence, the purpose of that paper is to propose a consensus model that takes overconfidence behaviors into account, focusing mainly on LSGDM based on fuzzy preference relations with self-confidence (FPRs-SC). A DM clustering method, which combines the similarity of fuzzy preference values and the similarity of self-confidence, is used to group DMs with similar opinions into subgroups, and a group consensus index that considers both the fuzzy preference values and the self-confidence is introduced to measure the consensus level among DMs.
In [22], Qian et al. consider recommender systems that suggest items users may like according to their explicit and implicit feedback data, such as ratings, reviews, and clicks. However, most recommender systems focus primarily on the relationships among items and the user's final purchasing behavior while ignoring the user's emotional changes, which play an essential role in consumption activity. To address the challenge of improving the quality of recommender services, the paper proposes an emotion-aware recommender system based on hybrid information fusion in which three representative types of information are fused to comprehensively analyze the user's features: user rating data as explicit information, user social network data as implicit information and sentiment from user reviews as emotional information. The experimental results confirm that the proposed method gives better rating prediction and significantly improves recommendation accuracy; in addition, a prediction model based on feedback data is studied experimentally.
In [23], Zadeh et al. formulated the problem of multimodal sentiment analysis as modeling intra-modality and inter-modality dynamics. They presented a novel model which learns both types of dynamics end-to-end. The approach is tailored to the volatile nature of spoken language in online videos as well as the accompanying gestures and voice. In their experiments, the model outperforms state-of-the-art approaches for both multimodal and unimodal sentiment analysis.

2. METHOD
Using the mathematical formalism of quantum theory and the long short-term memory (LSTM) architecture, we put forward a novel and comprehensive quantum-like multimodal network structure, which jointly models the intra- and inter-utterance interaction dynamics by capturing the correlations between modalities and inferring dynamic influences among speakers. First, the quantum-conscious multimodal option mining framework (QMF) extracts and represents multimodal features for all utterances in a single video using a density matrix based convolutional neural network (CNN) subnetwork and accepts them as inputs. Second, inspired by quantum measurement theory, the QMF introduces a strong-weak influence model to quantify the influences among speakers across utterances and feeds the resulting influence matrices into the QMF by fusing them into the output gate of each LSTM unit. Third, with textual and visual features as inputs, the QMF uses two LSTM networks to obtain their hidden states, which are fed to softmax functions to obtain the local sentiment analysis results. Finally, a multimodal decision fusion approach inspired by quantum interference is designed to derive the final decision from the local results.
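To make the fusion step more concrete, the following is a minimal NumPy sketch of an interference-style decision fusion, assuming the two local softmax outputs are treated as squared amplitudes and combined as a classical mixture plus a cross (interference) term; the function name, the fixed modality weights and the phase parameter are illustrative assumptions, since the exact equations of the QMF are not reproduced here.

```python
import numpy as np

def interference_fusion(p_text, p_visual, w_text=0.6, phase=0.0):
    """Hypothetical quantum-interference-style decision fusion.

    p_text, p_visual: per-class probability vectors from the two local
    (unimodal) softmax outputs. They are treated as squared amplitudes;
    the fused distribution is the classical mixture plus an interference
    (cross) term controlled by `phase`. Weights and names are assumptions.
    """
    p_text, p_visual = np.asarray(p_text), np.asarray(p_visual)
    w_visual = 1.0 - w_text
    # Classical (incoherent) mixture of the two local decisions.
    classical = w_text * p_text + w_visual * p_visual
    # Interference term: 2*sqrt(w_t p_t * w_v p_v) * cos(phase).
    interference = 2.0 * np.sqrt(w_text * p_text * w_visual * p_visual) * np.cos(phase)
    fused = np.clip(classical + interference, 1e-12, None)
    return fused / fused.sum()  # renormalize to a valid distribution

# Example: three sentiment classes (negative, neutral, positive).
print(interference_fusion([0.2, 0.3, 0.5], [0.1, 0.6, 0.3], phase=np.pi / 3))
```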
We have designed and carried out extensive experiments on two widely used conversational sentiment datasets to demonstrate the effectiveness of the proposed QMF framework in comparison with a wide range of baselines, a feature-level fusion approach, a decision-level fusion approach, and five state-of-the-art multimodal sentiment analysis models. The results show that the QMF significantly outperforms all of these comparative models. The main contributions of this work are summarized as follows:
- A quantum-like multimodal network structure, which uses quantum probability theory within the LSTM architecture, to model both intra- and inter-utterance interaction dynamics for multimodal sentiment analysis in conversations.
- A quantum interference inspired multimodal decision fusion strategy to model the decision relationships between different modalities.
- A quantum measurement inspired strong-weak influence model to draw better inferences about social influence among speakers than previous techniques.
The proposed architecture of the framework is shown in Figure 1, which illustrates how the system is organized and gives a brief overview of its operation. The emotional datasets used are the multiparty conversational sentiment analysis datasets interactive emotional dyadic motion capture (IEMOCAP) and multimodal emotional lines dataset (MELD), together with an audio speech emotion dataset; this collection is among the largest released so far and is split into training, testing, and validation data. Deep learning classification algorithms, namely CNN and LSTM combined with a recurrent neural network (RNN), are applied to classify the emotional outcome. The training parameters are then tuned to obtain accurate per-class emotion performance, and the final output, the conversational sentiment result, recognizes the different emotions in real time with a suitable combination of deep learning (DL) techniques, reporting an accuracy score and other performance measures. Our work is organized into several modules, described below.

Figure 1. Architecture of quantum-conscious multimodal option mining framework



2.1. Dataset collection and data pre-processing
2.1.1. Dataset collection
Multimodal sentiment analysis of conversations is a new area, so benchmark datasets are somewhat limited. We investigate the MELD and IEMOCAP datasets. MELD contains 13,708 utterances from 1,433 dialogues of a TV series; the utterances in each dialogue are annotated with one of three sentiments and one of seven emotions. The utterances in MELD are multimodal, including audio and visual as well as textual information; in this work we use only the textual and visual data. IEMOCAP is a multimodal dataset of 10 speakers engaged in two-way dyadic conversations. Each utterance is annotated with one of the following emotion classes: anger, happiness, sadness, neutral, excitement, frustration, fear, surprise, or others. We consider the first four categories and assign the remaining emotions to a fifth class in order to compare our QMF with other state-of-the-art baselines in a fair way. An audio speech emotion dataset is also used.
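As a small illustration of the regrouping described above, the following Python sketch collapses raw IEMOCAP emotion annotations into the five classes used for comparison; the exact label strings are assumptions about how the annotations are spelled.

```python
# Keep anger, happiness, sadness, neutral; map everything else to "others".
KEEP = {"anger", "happiness", "sadness", "neutral"}

def map_iemocap_label(raw_label: str) -> str:
    """Map a raw IEMOCAP emotion annotation to one of five classes."""
    label = raw_label.strip().lower()
    return label if label in KEEP else "others"

print([map_iemocap_label(l) for l in
       ["Anger", "excitement", "frustration", "neutral", "surprise"]])
# -> ['anger', 'others', 'others', 'neutral', 'others']
```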

2.1.2. Data pre-processing


For the image data, overly large images are re-scaled to 360×640. For the textual data, we clean every text by checking for unintelligible characters and automatically correcting spelling mistakes. Stop words are removed using the standard stop-word list from Python's natural language toolkit (NLTK) package. We do not filter out punctuation marks, since certain marks, such as question marks and exclamation points, tend to carry subjective information. We run the experiments using five-fold cross-validation on each of the comparative models.
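The following is a minimal Python sketch of this pre-processing pipeline, assuming PIL for image re-scaling, NLTK for stop-word removal and scikit-learn for the five-fold split; file handling, the frame dimensions and the automatic spelling correction step are simplified or assumed.

```python
import re
from nltk.corpus import stopwords          # requires nltk.download("stopwords")
from nltk.tokenize import word_tokenize    # requires nltk.download("punkt")
from PIL import Image
from sklearn.model_selection import KFold

STOP_WORDS = set(stopwords.words("english"))

def clean_text(text: str) -> str:
    """Lower-case, drop non-printable characters and stop words.
    Punctuation such as '?' and '!' is not in the stop-word list,
    so it is kept, as it tends to carry subjective information."""
    text = re.sub(r"[^\x20-\x7E]", " ", text).lower()
    tokens = word_tokenize(text)
    return " ".join(t for t in tokens if t not in STOP_WORDS)

def rescale_frame(path: str) -> Image.Image:
    """Re-scale an overly large frame to 360x640 (width x height assumed)."""
    return Image.open(path).resize((360, 640))

# Five-fold cross-validation over utterance indices.
folds = KFold(n_splits=5, shuffle=True, random_state=42)
```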

2.2. Feature selection and reduction


From the attributes of the dataset, the features of the video frames and audio files are extracted using image processing techniques as mentioned above, and several DL techniques, namely CNN, LSTM and RNN, are used to extract features from all input files. The experiment was repeated with each of the DL techniques using all attributes.
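As one possible realization of the CNN-based frame feature extraction, the sketch below uses a pretrained ResNet-18 from torchvision as the backbone; the choice of backbone and the 512-dimensional output are assumptions, since the text does not name a specific network.

```python
import torch
from torchvision import models, transforms

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the classifier, keep 512-d features
backbone.eval()

to_tensor = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

@torch.no_grad()
def frame_features(pil_frame):
    """Return a 512-d feature vector for one video frame (PIL image)."""
    return backbone(to_tensor(pil_frame).unsqueeze(0)).squeeze(0)
```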

3. RESULTS AND DISCUSSION


Several quality measures, such as accuracy, validity and classification error, are examined to assess the effectiveness of this model. Accuracy here means the proportion of instances predicted correctly out of all possible cases; it is defined as the degree of correctness. Since our approach and the baselines are supervised sentiment analysis methods, we adopt precision, recall, F1 score, and accuracy as the evaluation metrics to assess the classification performance of each method.
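These metrics can be computed with scikit-learn as in the sketch below; the y_true and y_pred arrays are placeholder examples, not the actual experimental outputs.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0, 2]   # ground-truth sentiment classes (example only)
y_pred = [0, 1, 2, 1, 1, 0, 2]   # model predictions (example only)

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```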
Convolutional neural networks were used to achieve the required performance and to tackle the outstanding challenges. Convolutional layers work by convolving a signal or an image with kernels to obtain feature maps, so that a unit in a feature map is connected to the previous layer through the weights of the kernels. The kernel weights are adapted during the training stage by backpropagation in order to enhance certain characteristics of the input. Since the kernels are shared among all units of the same feature map, convolutional layers have fewer weights to train than dense fully connected (FC) layers, making CNNs easier to train and less prone to overfitting. Moreover, since the same kernel is convolved over the whole image, the same feature is detected independently of its location, which yields translation invariance. By using kernels, information about the neighborhood is taken into account, which is a useful source of context. Usually, a non-linear activation function is applied to the output of each neural unit. If several convolutional layers are stacked, the extracted features become more abstract with increasing depth [24], [25].
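A tiny stacked-CNN sketch in PyTorch illustrating these points (shared kernels, non-linear activations after each convolution, more abstract features with depth) is given below; the layer sizes and the five-class output are illustrative assumptions rather than the exact configuration used in the experiments.

```python
import torch.nn as nn

cnn_stack = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # shared 3x3 kernels -> 16 feature maps
    nn.ReLU(),                                    # non-linear activation
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer -> more abstract features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 5),                             # 5 sentiment/emotion classes assumed
)
```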
The training graph for 7,560 training samples and an epoch value of 50 is shown in Figure 2; the run reaches a training accuracy of 99.98% and a testing accuracy of 98.81%. The graph plots the training and testing loss, with the blue line denoting the training loss and the orange line the testing loss.
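A sketch of how such a loss curve can be plotted is shown below; the per-epoch loss values are placeholders standing in for the values recorded during the actual 50-epoch run.

```python
import matplotlib.pyplot as plt

epochs = range(1, 51)
train_loss = [1.0 / e for e in epochs]        # placeholder values only
test_loss = [1.2 / e + 0.02 for e in epochs]  # placeholder values only

plt.plot(epochs, train_loss, color="tab:blue", label="training loss")
plt.plot(epochs, test_loss, color="tab:orange", label="testing loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("training_testing_loss.png")
```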
Figure 3 shows the test results; it is built by taking 10 different test samples into consideration and printing the predicted and actual values. The confusion matrix, sometimes known as the error matrix, is shown in Figure 4; it summarizes the algorithm's overall performance in terms of specificity, accuracy, and precision of actual versus predicted values, where darker blue indicates a stronger correspondence and lighter blue a weaker one. The final output, shown in Figure 5, reports a 98% overall accuracy along with the per-class precision, recall, F1-score, and support.
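A confusion-matrix heatmap in the style of Figure 4 can be produced as sketched below; the class names and the short label lists are placeholders, not the actual test outputs.

```python
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

labels = ["anger", "happiness", "sadness", "neutral", "others"]            # assumed class set
y_true = ["anger", "neutral", "sadness", "others", "happiness", "neutral"]  # placeholders
y_pred = ["anger", "neutral", "sadness", "neutral", "happiness", "others"]  # placeholders

cm = confusion_matrix(y_true, y_pred, labels=labels)
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues",
            xticklabels=labels, yticklabels=labels)
plt.xlabel("predicted")
plt.ylabel("actual")
plt.savefig("confusion_matrix.png")
```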


Figure 2. Training and testing graph of dataset

Figure 3. Testing performance of dataset

Figure 4. Confusion matrix

Figure 5. Overall classification accuracy

4. CONCLUSION
Conversational sentiment analysis is a significant and challenging task. We proposed a "Quantum-Conscious Multimodal Option Mining Framework", which uses the mathematical formalism of quantum theory and a long short-term memory network to model both intra- and inter-utterance interaction dynamics and to recognize speakers' emotions. The main idea is to use a density matrix based CNN, a quantum measurement inspired strong-weak influence model and a quantum interference inspired multimodal decision fusion approach. The experimental results on the MELD and IEMOCAP datasets show that our proposed QMF consistently outperforms a broad range of baselines and state-of-the-art multimodal sentiment analysis algorithms, thereby confirming the effectiveness of using quantum theory formalisms to model inter-utterance interaction, the fusion of multimodal content and the fusion of local decisions. Since the QMF model is built on the density matrix representation, in future work we can explore how to go even further in accurately capturing speaker interactions and naturally incorporating them into an end-to-end architecture.

ACKNOWLEDGEMENTS
This research was supported by Ramaiah Institute of Technology (MSRIT), Bangalore-560054 and
Visvesvaraya Technological University, Jnana Sangama, Belagavi-590018.

REFERENCES
[1] E. Cambria, “Affective computing and sentiment analysis,” IEEE Intell. Syst., vol. 31, no. 2, pp. 102–107, Mar. 2016, doi:
10.1109/MIS.2016.31.
[2] M. Dragoni, S. Poria, and E. Cambria, “OntoSenticNet: A commonsense ontology for sentiment analysis,” IEEE Intell. Syst., vol.
33, no. 3, pp. 77–85, May 2018, doi: 10.1109/MIS.2018.033001419.
[3] M. Soleymani, D. Garcia, B. Jou, B. Schuller, S.-F. Chang, and M. Pantic, “A survey of multimodal sentiment analysis,” Image
Vis. Comput., vol. 65, pp. 3–14, Sep. 2017, doi: 10.1016/j.imavis.2017.08.003.
[4] J. A. Balazs and J. D. Velásquez, “Opinion mining and information fusion: A survey,” Inf. Fusion, vol. 27, pp. 95–110, Jan. 2016,
doi: 10.1016/j.inffus.2015.06.002.
[5] I. Chaturvedi, E. Cambria, R. E. Welsch, and F. Herrera, “Distinguishing between facts and opinions for sentiment analysis:
Survey and challenges,” Inf. Fusion, vol. 44, pp. 65–77, Nov. 2018, doi: 10.1016/j.inffus.2017.12.006.
[6] S. Poria, E. Cambria, N. Howard, G. Bin Huang, and A. Hussain, “Fusing audio, visual and textual clues for sentiment analysis
from multimodal content,” Neurocomputing, vol. 174, pp. 50–59, Jan. 2016, doi: 10.1016/j.neucom.2015.01.095.
[7] S. Poria, N. Majumder, D. Hazarika, E. Cambria, A. Gelbukh, and A. Hussain, “Multimodal sentiment analysis: Addressing key
issues and setting up the baselines,” IEEE Intell. Syst., vol. 33, no. 6, pp. 17–25, Nov. 2018, doi: 10.1109/MIS.2018.2882362.
[8] P. Zhang, Z. Su, L. Zhang, B. Wang, and D. Song, “A quantum many-body wave function inspired language modeling approach,”
in International Conference on Information and Knowledge Management, Proceedings, Oct. 2018, pp. 1303–1312, doi:
10.1145/3269206.3271723.
[9] Y. Wang, C. von der Weth, Y. Zhang, K. H. Low, V. K. Singh, and M. Kankanhalli, “Concept based hybrid fusion of multimodal
event signals,” in 2016 IEEE International Symposium on Multimedia (ISM), Dec. 2016, pp. 14–19, doi: 10.1109/ISM.2016.0013.
[10] V. Perez Rosas, R. Mihalcea, and L.-P. Morency, “Multimodal sentiment analysis of spanish online videos,” IEEE Intell. Syst.,
vol. 28, no. 3, pp. 38–45, May 2013, doi: 10.1109/MIS.2013.9.
[11] Q. Yang, Y. Rao, H. Xie, J. Wang, F. L. Wang, and W. H. Chan, “Segment-level joint topic-sentiment model for online review
analysis,” IEEE Intell. Syst., vol. 34, no. 1, pp. 43–50, 2019, doi: 10.1109/MIS.2019.2899142.
[12] E. Cambria, S. Poria, A. Gelbukh, and M. Thelwall, “Sentiment analysis is a big suitcase,” IEEE Intell. Syst., vol. 32, no. 6, pp.
74–80, Nov. 2017, doi: 10.1109/MIS.2017.4531228.
[13] S. Kumar, M. Yadava, and P. P. Roy, “Fusion of EEG response and sentiment analysis of products review to predict customer
satisfaction,” Inf. Fusion, vol. 52, pp. 41–52, Dec. 2019, doi: 10.1016/j.inffus.2018.11.001.
[14] C. Welch, V. Perez-Rosas, J. K. Kummerfeld, and R. Mihalcea, “Learning from personal longitudinal dialog data,” IEEE Intell.
Syst., vol. 34, no. 4, pp. 16–23, Jul. 2019, doi: 10.1109/MIS.2019.2916965.
[15] S. Poria, N. Mazumder, E. Cambria, D. Hazarika, L. P. Morency, and A. Zadeh, “Context-dependent sentiment analysis in user-
generated videos,” in ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the
Conference (Long Papers), 2017, vol. 1, pp. 873–883, doi: 10.18653/v1/P17-1081.
[16] D. Hazarika, S. Poria, A. Zadeh, E. Cambria, L. P. Morency, and R. Zimmermann, “Conversational memory network for emotion
recognition in dyadic dialogue videos,” in NAACL HLT 2018 - 2018 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, 2018, vol. 1, pp.
2122–2132, doi: 10.18653/v1/n18-1193.
[17] S. Poria, E. Cambria, R. Bajpai, and A. Hussain, “A review of affective computing: From unimodal analysis to multimodal
fusion,” Inf. Fusion, vol. 37, pp. 98–125, Sep. 2017, doi: 10.1016/j.inffus.2017.02.003.
[18] S. Poria, D. Hazarika, N. Majumder, G. Naik, E. Cambria, and R. Mihalcea, “MELD: A multimodal multi-party dataset for
emotion recognition in conversations,” in Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics, 2019, pp. 527–536, doi: 10.18653/v1/P19-1050.
[19] B. Ojamaa, P. K. Jokinen, and K. Muischnek, “Sentiment analysis on conversational texts,” in Proceedings of the 20th Nordic
Conference of Computational Linguistics, NODALIDA 2015, 2015, pp. 233–237.
[20] J. Bhaskar, K. Sruthi, and P. Nedungadi, “Hybrid approach for emotion classification of audio conversation based on text and
speech mining,” Procedia Comput. Sci., vol. 46, pp. 635–643, 2015, doi: 10.1016/j.procs.2015.02.112.
[21] X. Liu, Y. Xu, and F. Herrera, “Consensus model for large-scale group decision making based on fuzzy preference relation with
self-confidence: Detecting and managing overconfidence behaviors,” Inf. Fusion, vol. 52, pp. 245–256, Dec. 2019, doi:
10.1016/j.inffus.2019.03.001.
[22] Y. Qian, Y. Zhang, X. Ma, H. Yu, and L. Peng, “EARS: Emotion-aware recommender system based on hybrid information
fusion,” Inf. Fusion, vol. 46, pp. 141–146, Mar. 2019, doi: 10.1016/j.inffus.2018.06.004.
[23] A. Zadeh, M. Chen, S. Poria, E. Cambria, and L.-P. Morency, “Tensor fusion network for multimodal sentiment analysis,” in
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 1103–1114, doi:
10.18653/v1/D17-1115.
[24] R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision, Dec. 2015, vol. 2015
Inter, pp. 1440–1448, doi: 10.1109/ICCV.2015.169.
[25] K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the IEEE International Conference on
Computer Vision, Oct. 2017, vol. 2017-Octob, pp. 2980–2988, doi: 10.1109/ICCV.2017.322.


BIOGRAPHIES OF AUTHORS

Jamuna S Murthy holds a Master's degree from M S Ramaiah Institute of Technology, India, obtained in 2017. She is currently a Research Scholar at M S Ramaiah Institute of Technology and is working as an Assistant Professor in the Department of ISE at B. N. M. Institute of Technology, Bengaluru. Her research interests include Machine Learning, Natural Language Processing, Bigdata and Cloud Computing. She can be contacted at email: jamunamurthy.s@gmail.com.

Dr. Siddesh G M is currently working as an Associate Professor in the Department of Information Science and Engineering, M S Ramaiah Institute of Technology, Bangalore. He is the recipient of the Seed Money to Young Scientist for Research (SMYSR) award for FY 2014-15 from the Government of Karnataka, Vision Group on Science and Technology (VGST). He has published a good number of research papers in reputed International Conferences and Journals. His research interests include Internet of Things, Distributed Computing and Data Analytics. He can be contacted at email: siddeshgm@gmail.com.

Srikrishna V is a 2021 Information Science and Engineering graduate from B.N.M. Institute of Technology, Bengaluru. His areas of interest include Sentiment Analysis, Natural Language Processing, and Computer Vision under Machine Learning. Currently, Srikrishna is working as a server-side developer at Altimetrik India Private Limited. He can be contacted at email: srikihv@gmail.com.

Kedarnath R Gubbi graduated from B.N.M. Institute of Technology in the field of Information Science and Engineering. His areas of interest include Sentiment Analysis and Natural Language Processing, and he is currently working as a front-end web developer at NEC Technologies India Pvt. Ltd. He can be contacted at email: kedarnathrg.1999@gmail.com.
