
Research Proposal Format

1.1 Introduction

Emotion detection is an important part of this project, and human faces make it readily observable. Research has shown that a person's facial expressions can reveal both what is being said and how it is understood. The ability to perceive emotion is essential for successful communication; in ordinary conversation, it has been estimated that about 93 percent of communication depends on the emotion being expressed.

Chatbots help business teams support and measure their relationships with customers. A chatbot can reside on any major chat platform, such as Facebook Messenger, Slack, Telegram, or text messages. At the same time, chatbots offer companies new opportunities to improve customer engagement and efficiency by reducing the overall cost of customer service. This project focuses on building a custom chatbot, a key first step on the learning curve toward creating professional chatbots of your own. But what if you are tired of the chatbots out there designed purely for business purposes? In this project we will build a conversational chatbot service that you can simply talk to; the conversation need not be a business transaction. On top of that, the chatbot will also recommend songs to the user based on the user's tone. The song recommendation uses the Last.fm API, which is very similar to the popular Spotify API, and the tone/emotion analysis of the conversation uses the IBM Tone Analyzer API. Learning to work with such APIs is important, because today's popular conversational agents do more than hold a data-driven conversation; they add extra features aimed at users. Python was chosen to build the chatbot because it has a wide variety of open libraries relevant to chatbots, including scikit-learn and TensorFlow; it is well suited to small data sets and quick analysis, and its libraries are very effective.

1.2 Statement of the Problem

Building a song recommendation system that uses the tone of a chatbot conversation to suggest music to the user.


1.3 Aim of the Study

In this project, we will combine multiple services and open-source tools to create a chatbot that recommends songs based on the tone of the user's conversation with it.

1.4 Objectives of the Study

• Implement an open-source project in this area and address the shortcomings it faces.
• Combine multiple services to build a new service on top of them.
• Build a real-world chatbot that the user can chat with as naturally as with a real person, while enjoying the music it recommends.

1.5 Research Questions

1.6 Significance of the Study/Justification of the Study


2.0 Literature review

The use of chatbots has been growing steadily since the arrival of messaging platforms such as WhatsApp and Messenger. These platforms have led organizations to adopt chatbots to make communication easier and more effective. A chatbot is a conversational AI that communicates with people through natural language processing. The chatbot module in this project does not have a 3D avatar as used in [5], but it does contain the voice-based interaction features shown in Figure 3. A 3D avatar and facial expressions could be added to our model over time to make the interaction even more effective than it already is. Research on chatbots [8] has given us an in-depth understanding of the various techniques that can be used to improve them. Chatbots can be divided into two types: scripted bots and AI bots. Scripted bots respond with pre-defined texts drawn from a local library. Conversational AI still faces a number of barriers, such as inconsistency and a lack of engagement. Chatbots that follow a defined personality [10] are more engaging and consistent than others, and this approach can be used to make our model more attractive and coherent. Typically, chatbots are trained on chat data sets collected from a variety of sources. In our project we use a mixed training approach [11], in which the model is first trained on a chat corpus and is then tuned to our needs by asking the user directly what response they expect. This helps us refine the model even further. Our model is user-centric [12] and content-driven: we allow the user to change the flow of the conversation while providing the help they need based on their emotional state. Because the model is content-driven, it can hold long conversations with the user on a variety of topics [13].

Recommendation in the music domain poses additional challenges, because a person's appreciation of music depends on many parameters. Studies have shown that music preferences vary greatly by age, location, and language, and these categories can be divided into smaller groups: countries, provinces, regional languages, and more. It has also been reported that artists who sound similar do not necessarily make the same kind of music, and audience tastes may still differ. Music can be described along many dimensions that matter to any system that presents it, for example genre, tempo, beats, and intros and outros, which make it possible to measure the similarity between two tracks by different artists. Studies have found that the majority of music listeners are between the ages of 16 and 45.

2.1 Gaps in literature


3.0 Methodology
3.1 Product Architecture

[Figure: product architecture. The user chats through the frontend with the backend server, which (a) sends the conversation to the CakeChat chatbot server for a reply, (b) obtains the emotion of the chat from the IBM Emotional API, and (c) queries the Last.fm API to get song names corresponding to that emotion and songs similar to a given song ID.]

3.2 Method

• The user starts a conversation.
• Emotion analysis of the conversation is performed using the IBM Emotional API.
• A reply to the conversation is obtained from the CakeChat chatbot.
• Based on the detected emotion, top songs are fetched using the Last.fm API.
• If the user listens to a particular song for a period of time, songs similar to it can be recommended to the user using the Last.fm API.

3.3 Procedure

Task 1

Setting up the CakeChat chatbot server locally

Since the chatbot is a key part of the project, we will set it up first. This section alone will give you a stronger understanding of the project. In this milestone, you need to install the CakeChat chatbot from its GitHub repository.

Requirements

• Create a Python 3 virtual environment.
• Fork and clone the GitHub repository to your local machine.
• Install the dependencies as specified here.
• Download the pre-trained model by following the steps outlined here.
• Launch and verify the CakeChat server as specified here.

Install the boto3 Python dependency using the following command: pip install boto3

If you see the error AttributeError: 'str' object has no attribute 'decode', see here.

Expected Outcome

Finally, you will have a CakeChat chatbot server running on your system (using the TensorFlow backend), and you will be able to get responses based on the conversation as shown below. When tested using Postman, the result looks like this:
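Once the server is up, it can be queried over plain HTTP. The sketch below is a minimal example: the endpoint path and payload fields follow the public CakeChat README, but the host and port are assumed defaults, so adjust them to your setup.

```python
import json
import urllib.request

# Assumed default host/port of the locally running CakeChat server; the
# endpoint path follows the public CakeChat README.
CAKECHAT_URL = "http://127.0.0.1:8080/cakechat_api/v1/actions/get_response"

def build_cakechat_payload(context, emotion="neutral"):
    """Build the JSON body CakeChat expects: the recent utterances of the
    conversation plus the emotion the reply should carry."""
    return {"context": list(context), "emotion": emotion}

def get_reply(context, emotion="neutral"):
    """POST the conversation context to the server and return the reply text."""
    data = json.dumps(build_cakechat_payload(context, emotion)).encode("utf-8")
    req = urllib.request.Request(
        CAKECHAT_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With the server from this task running, calling get_reply(["Hello!", "How is the day?"], emotion="joy") should return a single reply string, mirroring the Postman test described above.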

Task 2

Setting up the IBM Tone Analyzer API

In this milestone, we will set up the IBM Tone Analyzer API so that we can analyze the tone (emotions) of the chat. We use this API because we do not have sufficient data, integration capacity, or time to build a model of our own. This milestone will show you why it is often preferable to use existing APIs rather than build your own models each time.

Requirements

• Check out this website to get an idea of what the IBM Tone Analyzer API can do.
• Create a (free) account on IBM Cloud.
• Enable the Tone Analyzer service for your account from here.
• Try the Python sample code for tone analysis from here, and do not forget to replace {apikey} and {url} with the apikey and url you received when enabling the Tone Analyzer service for your account.

Expected Outcome

After running the code against the Tone Analyzer service, you will get a tone analysis on your system that looks like this: the dominant tone of the whole document at the top, and below it the tone of each sentence.

So, for any given sentence or document, you can now analyze its tone. We will reuse this code later to analyze the tone of the conversation.
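The JSON the service returns contains a document_tone object holding a list of scored tones, per the Tone Analyzer documentation. As a small sketch of how we could pick the dominant tone out of such a response (the sample response below is illustrative, not real API output):

```python
def dominant_tone(response):
    """Return the tone_id with the highest score from a Tone Analyzer
    response dict, or None if no tone was detected."""
    tones = response.get("document_tone", {}).get("tones", [])
    if not tones:
        return None
    return max(tones, key=lambda t: t["score"])["tone_id"]

# Illustrative response in the documented shape (the scores are made up).
sample = {
    "document_tone": {
        "tones": [
            {"score": 0.62, "tone_id": "sadness", "tone_name": "Sadness"},
            {"score": 0.83, "tone_id": "joy", "tone_name": "Joy"},
        ]
    }
}
```

Here dominant_tone(sample) picks "joy", the highest-scoring entry; an empty tone list yields None, which the caller should handle with a fallback.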

Task 3

Setting up the Last.fm songs API and trying it out

In this milestone, we will set up the Last.fm songs API so that we can recommend songs to the user based on the user's tone/feeling. We use this API because we do not have enough data, nor the capacity to gather and index songs from the web by specific tones, to build such a service ourselves.

Requirements

o Create an API account on Last.fm from here and get an API_KEY.
o Try out the following API features with the help of your API_KEY:
o Finding the top 5 songs for a specific tag.
o Finding songs similar to a given song.

Note that these methods are used through the JSON form of the API only.

Expected Outcome

The top 5 songs of a particular tag, and the songs similar to a given song.
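The two features above map to the public Last.fm REST methods tag.gettoptracks and track.getsimilar. As a sketch of how the request URLs can be assembled (the parameter names follow the public Last.fm API documentation; the API key is a placeholder you must replace with your own):

```python
from urllib.parse import urlencode

API_ROOT = "http://ws.audioscrobbler.com/2.0/"
API_KEY = "YOUR_API_KEY"  # placeholder: use the key from your Last.fm API account

def top_tracks_url(tag, limit=5):
    """URL for tag.gettoptracks: the top `limit` tracks carrying a given tag."""
    params = {"method": "tag.gettoptracks", "tag": tag,
              "api_key": API_KEY, "format": "json", "limit": limit}
    return API_ROOT + "?" + urlencode(params)

def similar_tracks_url(artist, track, limit=5):
    """URL for track.getsimilar: tracks similar to the given artist/track."""
    params = {"method": "track.getsimilar", "artist": artist, "track": track,
              "api_key": API_KEY, "format": "json", "limit": limit}
    return API_ROOT + "?" + urlencode(params)
```

Fetching either URL (for example with urllib.request.urlopen) returns a JSON document listing the tracks, which the backend can then present to the user.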

Task 4

Understanding the whole process involved in the chatbot's operation

Once we complete this milestone, we will have a better understanding of the complete project structure, and you will be able to extend the project further afterwards. Completing this milestone is important because many parts need to be put together to complete the chatbot, so a clear idea of the product design is required here.

o Look at the High-Level Path we mentioned earlier.

The High-Level Path

o The user starts the conversation.
o Emotion analysis of the chat is done using the IBM Emotional API.
o The chat response is obtained from the CakeChat chatbot.
o Based on the emotion received, top songs are obtained using the Last.fm songs API.
o When a user listens to a particular song, songs similar to it can be recommended to the user using the Last.fm API.

We will walk through this path manually first, to get started quickly before building the chatbot in the next milestone.

Requirements

• Let's start the conversation on a happy note:

Hello !! What's going on? How is the day?

• Now let's send this text for emotion analysis, and we get the following answer:
References
1. Z. Lian, Y. Li, J. Tao, and J. Huang. 2018. Speech emotion recognition via Contrastive Loss
under Siamese Networks. In The Joint Workshop of the 4th Workshop on Affective Social
Multimedia Computing and first Multi-Modal Affective Computing of Large-Scale Multimedia
Data Workshop (ASMMCMMAC’18), October 26, 2018, Seoul, Republic of Korea. ACM, New
York, NY, USA, 6 pages. https://doi.org/10.1145/3267935.3267946

2. M Anandan, M Manikandan, T Karthick, Advanced Indoor and Outdoor Navigation System for Blind People Using Raspberry-Pi, Journal of Internet Technology, 2020, vol. 21, pp. 183-195.

3. Davletcharova, Assel & Sugathan, Sherin & Abraham, Bibia & James, Alex. (2015). Detection
and Analysis of Emotion From Speech Signals. Procedia Computer Science.
10.1016/j.procs.2015.08.032.

4. Mishra, Pawan & rawat, Arti. (2015). Emotion Recognition through Speech Using Neural
Network. International Journal of Advanced Research in Computer Science and Software
Engineering (IJARCSSE).

5. R. A. Khalil, E. Jones, M. I. Babar, T. Jan, M. H. Zafar and T. Alhussain, "Speech Emotion Recognition Using Deep Learning Techniques: A Review," in IEEE Access, vol. 7, pp. 117327-117345, 2019.

6. M. S. Likitha, S. R. R. Gupta, K. Hasitha and A. U. Raju, "Speech based human emotion recognition using MFCC," 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, 2017, pp. 2257-2260.

7. Yenigalla, Promod & Kumar, Abhay & Tripathi, Suraj & Singh, Chirag & Kar, Sibsambhu &
Vepa, Jithendra. (2018). Speech Emotion Recognition Using Spectrogram and Phoneme
Embedding. 3688-3692. 10.21437/Interspeech.2018-1811.

8. Petrushin, Valery. (2000). Emotion recognition in speech signal: Experimental study, development, and application. ICSLP. 222-225.

9. A., Sameera & John, Dr. (2015). Survey on Chatbot Design Techniques in Speech Conversation Systems. International Journal of Advanced Computer Science and Applications. 10.14569/IJACSA.2015.060712.

10. P. A. Angga, W. E. Fachri, A. Elevanita, Suryadi and R. D. Agushinta, "Design of chatbot with 3D avatar, voice interface, and facial expression," 2015 International Conference on Science in Information Technology (ICSITech), Yogyakarta, 2015, pp. 326-330.

11. Zhang, Saizheng & Dinan, Emily & Urbanek, Jack & Szlam, Arthur & Kiela, Douwe & Weston,
Jason. (2018). Personalizing Dialogue Agents: I have a dog, do you have pets too?. 2204-
2213. 10.18653/v1/P18-1205.

12. Liu, Bing & Tür, Gokhan & Hakkani-Tur, Dilek & Shah, Pararth & Heck, Larry. (2018). Dialogue
Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented
Dialogue Systems. 2060-2069. 10.18653/v1/N18-1187.
13. Fang, Hao & Cheng, Hao & Sap, Maarten & Clark, Elizabeth & Holtzman, Ari & Choi, Yejin &
Smith, Noah & Ostendorf, Mari. (2018). Sounding Board: A User-Centric and Content-Driven
Social Chatbot. 96-100. 10.18653/v1/N18-5020.
