
CHAPTER -1

INTRODUCTION

Music plays a very important role in enhancing an individual's life as it is an important medium of
entertainment for music lovers and listeners and sometimes even imparts a therapeutic approach.
In today's world, with ever increasing advancements in the field of multimedia and technology,
various music players have been developed with features like fast forward, reverse, variable
playback speed (seek and time compression), local playback, streaming playback with multicast
streams, volume modulation, genre classification, etc. Although these features satisfy the user's
basic requirements, the user still has to manually browse through the playlist and select songs
based on his current mood and behavior. In other words, a user sporadically feels the need and
desire to browse through his playlist according to his mood and emotions. Using traditional music
players, a user had to manually browse through his playlist and select songs that would soothe his
mood and emotional experience. This task was labor intensive, and an individual often faced the
dilemma of landing at an appropriate list of songs.

Emotions are synonymous with the aftermath of interplay between an individual's cognitive
gauging of an event and the corresponding physical response towards it. Among the various ways
of expressing emotions, including human speech and gesture, a facial expression is the most
natural way of relaying them. The ability to understand human emotions is desirable for human-
computer interaction. In the past decade, considerable amounts of research have been done on
emotion recognition from voice, visual behavior, and physiological signals respectively or jointly.
Very good progress has been achieved in this field, and several commercial products have been
developed, such as smile detection in cameras. Emotion is a subjective response to an outer
stimulus. A facial expression is a discernible manifestation of the emotive state, cognitive activity,
motive, and psychopathology of a person. Homo sapiens have been blessed with an ability to
interpret and analyze an individual's emotional state.


1.1 Objective

Facial expression recognition is used to identify the basic human emotions. Facial expressions are
crucial to determining emotions. Computer systems based on affective interaction could play an
important role in the next generation of computer vision systems. Face emotion can be used in areas of
security, entertainment and human machine interface (HMI). Humans express their emotions mainly
through the lips and eyes. Generally, users have a huge collection of songs in their database or playlists.
It is very tedious to select a song from a large playlist, so the user selects a random song which may not
suit his mood. As a result, the songs played do not match the user's current emotion. Moreover, there is
no widely used application where the user can listen to songs based on his mood. Music is an important
part of life and a source of entertainment; it can change a person's life and help him come out of
depression.


1.2 Company Profile

Technofly Solutions is a leading electronics product design, development and services company. It is
run by professionals with industrial experience in embedded technology, real-time software, process
control and industrial electronics.
The company is a pioneer in the design and development of Single Board Computers and compilers for
micro-controllers within India. Talented professionals in the field of embedded hardware and software
design and development work to maintain this excellence.

Technofly Solutions & Consulting was founded in 2017 by a team with 14+ years of experience in the
embedded systems domain. Technofly Solutions focuses globally on automotive embedded
technologies, VLSI design, corporate training and consulting. So far it has delivered more than 15
corporate trainings for companies working in embedded automotive technologies in India. It is also
involved in the development of an OBD2 (On-Board Diagnostics) product for passenger cars for
clients in India.
Technical Expertise
Expertise in Embedded software development:

1. Microcontroller Drivers
2. Boot loader and System software
3. CAN, LIN and other serial communication software
4. On Board Diagnostics services [ISO-14229 and ISO-15765]
5. Model based software development: Modeling, Simulation, Auto coding and Reverse
Engineering
6. Application software development compliant with MISRA-C


7. AUTOSAR Configuration and generation.

Automotive domain expertise, Process quality:

1. Body Control Module


2. Power Electronics, DCDC Convertors
3. HVAC Systems
4. Cluster and Head-Up Display systems
5. Driver Information systems
6. Seat Modules

Expertise in ASIC VLSI:

1. Verilog courses
2. SystemVerilog for design and Verification
3. UVM Methodology for Verification
4. Functional Verification

Process Quality:

1. Experience in SPICE Level 3 development.


2. Functional Safety ISO 26262 - ASIL B products
3. Adaptable to Customer procedures and guidelines

Technologies:

1. Microcontrollers: 8-, 16- and 32-bit


2. Embedded C, Python, IoT (PHP front end & MySQL back end), Wireless – Bluetooth, GPS,
GPRS, Wi-Fi
3. Communication protocols – SPI, I2C, CAN, LIN
4. MATLAB Simulink, Xilinx, ModelSim, LabVIEW

Management:
The management team is a mixture of technical and business development expertise with 14+ years of
experience in the information technology field.
Current status of Technofly Solutions:


At present the company is involved in developing a GPS training system for two-wheelers with its
associated partners, is focusing more on corporate trainings on automotive embedded systems, and
provides ASIC solutions involving design and verification IPs and functional verification of designs.

Company Profile:

TechnoFly was formed by professionals with formal qualifications and industrial experience in the
fields of embedded systems, real-time software, process control and industrial electronics. The
company is professionally managed and supported by qualified, experienced specialists and consultants
with experience in embedded systems, including both hardware and software.
Initially, the company developed system software tools; these include C compilers for micro-
controllers and other supporting tools such as an assembler, linker, simulator and Integrated Development
Environment. Later, Single Board Computers (SBCs) were developed and are still manufactured. These
hardware boards support a broad range of processors, including 8-, 16- and 32-bit processors.
Since 2015, the company has also offered design and development services. This covers the complete
spectrum of activities in the product development life cycle, from idea generation and requirement gathering
to prototype making, testing and manufacturing. The company has so far provided product design services
for various sectors, including industrial automation, instrumentation, automotive, consumer electronics
and defense.
Services of Technofly:

Embedded Software engineering Services:

When you don’t have enough time, or the right skills on hand, you can supplement your team with
expert embedded engineers from Technofly, who can tackle your projects with confidence, take out the

risk, and hit your milestones. We’ll take as much ownership as you want us to, and make sure your
project is done right, on time and on budget. Go ahead, check our reputation for on-time, on-budget
delivery. We've earned it, time and again.

We can help you cut risk on embedded systems R&D, and accelerate time to market. Technofly is
your best choice for designing and developing embedded products from concept to delivery. Our
team is well-versed in product life cycles. We build complex software systems for real-time


environments and have unique expertise and core competencies in the following domains:
Wireless, Access and IOT/Cloud.
Technofly solution also offer services which includes

1. Developing client / server applications to run on Windows / Linux


2. Develop / Test Internet based applications
3. Test suite development for applications and network protocols
4. Developing Networking tools for the enterprises
5. Verification & Validation of Enterprise applications
6. Software maintenance of enterprise applications

WORKING DEPARTMENT IN THE COMPANY


The team is associated with the R&D in Wireless Communication Technologies department of the
company. The team is currently working on 4G/5G technologies associated with cognitive devices
such as WLAN, Bluetooth, Zigbee and other mobile networks, for better achievable network
efficiencies. The work involves examining various methodologies currently available and under
development, and implementing them for further analysis and an in-depth understanding of the
effects of these methods on network capacities.

The department is currently developing and examining optimal solutions for network data rate
maximization in both cooperative and non-cooperative network user scenarios involving
cognitive (SU) and non-cognitive (PU) devices. The work is mainly concentrated on:

1. Resource management (Spectrum management as well as power management),


2. Power Spectral analysis,


3. Detection test statistics computation methodology analysis,
4. Low power VLSI design
5. Efficiency analysis

The department is actively involved in acquiring projects related to the latest technologies in low-power
VLSI and the wireless domain; these projects are well thought out and detailed implementations are
carried out. Projects are mainly done on Verilog and the MATLAB platform (from MathWorks) and may
also depend on NS2, NetSim and Xilinx platforms as per the requirements of the project in progress.
The current internship involves the study, implementation and analysis of a high-speed and energy-efficient
Carry Skip Adder (CSKA) with a hybrid model for achieving high speed and reducing power
consumption.

1. Study requirements: Low power VLSI design and fundamentals of digital circuits
2. Implementation requirements: Verilog code / ModelSim tool
3. Detection test statistic: Simulation results
4. Platform: Verilog, simulated with ModelSim 6.4c and synthesized with the Xilinx tool.

Engineering Departments and services:


Technofly solution offers services in the areas of Real-Time Embedded Systems, Low power VLSI
design, Verification and Software Engineering Services. Its strong team of around 30 engineers is
equipped with the right tools and right processes to deliver the best. Technofly solution also offers
customization of its products.
Real Time Embedded System and Low power VLSI design Department:
Technofly Solutions provides embedded software, hardware, system development, system integration,
verification and product realization services to customers in the automotive electronics and consumer
electronics segments worldwide. Technofly Solutions has more than 14 years of experience in embedded
systems on a variety of platforms such as microprocessors, Programmable Logic Devices (PLDs) and ASICs.
The company develops applications based on the various commercially available real-time and embedded
operating systems.
Technofly solution provides services in the following areas:

1. Design Services
2. Product Realization


Design Services:
Technofly solution offer services in the areas of:

1. Hardware design and development


2. Software design and development

Hardware Design and Development:


Hardware design and development services are related to:

1. High-speed digital design


2. Mixed signal design
3. Analog and RF design
4. PLD (FPGA/EPLD/CPLD) based design
5. Processor (Micro-controllers, DSP) based design
6. Mechanical enclosure design

The hardware design and development follow stringent life cycle guidelines laid out at Technofly
Solutions while accomplishing the following –
Design Assurance

1. Signal Integrity
2. Cross-talk
3. Matching and Impedance control
4. Power supply design with due emphasis on low-power, battery-operated applications
5. Thermal analysis
6. Clock distribution
7. Timing analysis
8. PCB layer stacking

Design optimization
Selection of components keeping in mind

1. Cost , Size
2. Operating and storage temperature


3. MIL/Industrial/Commercial grades based on application


4. Environmental specifications like vibration, humidity, and radiation

PCB design

1. Optimum number of layers for a given application


2. Material used for PCB
3. Rigid, Flexi and Rigid-Flexi designs based on applications

Pilot production

1. Component sourcing, inward inspection and inventory management


2. PCB assembly
3. Assembled PCB testing

Software Development
Software design and development services are related to

1. Real-time Embedded Application Development


2. Device Driver Development
3. BSP Development
4. Processor/OS Porting Services
5. RTOS based development
6. Board bring-up
7. Digital Signal Processing Algorithms
8. Porting across platforms

ASIC

1. Design IP’s
2. Verification IP’s (VIP’s)
3. Complete verification Solution

Skill Set

1. Language: C, C++, Assembly languages, Verilog and SystemVerilog


2. Hardware Platforms: ADI DSPs, TI DSPs, ARM, PowerPC, Xscale architecture


3. RTOS: Integrity, VDK, DSP OS, Micro C OS and OASYS
4. FPGA: Xilinx (Spartan and Virtex), Actel, Altera

Tools

1. Development Tools: In-circuit emulators of various processor environments


2. Compilers: Compilers/IDEs of various processor environments

FPGA Tools

1. Front End Design: XST, Synplify, SynplifyPro, Precision Synthesis


2. Back End Design: Xilinx ISE 9.1.03i ,Actel’s Libero 6.0 , Altera’s MAXPlusII

Simulation:

1. Xilinx ModelSim SE
2. Actel’s Libero 6.0
3. Altera’s MAXPlusII

Coverage Analysis:
TransEDA VN-Cover
Debugging:
ChipScope

Hardware Tools:

1. Spectrum Analyzer
2. Signal Generators
3. Logic Analyzer
4. Digital Storage Oscilloscopes
5. Multifunction Counters
6. Development Tools and In-circuit Emulators for all ADI DSP’s, TI DSP’s,
7. ARM Processor, PowerPC


8. ORCAD, Allegro, Pspice


9. Temperature and Humidity Chamber

Product Realization
Product Realization services are provided in the areas of:

1. Consumer Electronics
2. Automotive
3. Space
4. Defense
5. Simulation/Emulation
6. Temperature and Humidity Chamber

Software Engineering Department


Technofly solution has a dedicated group specializing in providing productivity tools for work group
collaboration, which also handles software projects for small and medium scale enterprises.
Our work group productivity software suite Smart Works consists of software applications which can
help you plan and track your projects, manage meetings and track various issues to their closure. Smart
Works is affordably priced and uses TCP/IP based client-server architecture at its core. The Smart Works
server runs on all the Windows platforms (Windows 95/98/NT/2000/ME). Efforts are on to make
Smart Works available on other platforms as well.
Technofly solution also offer services which includes

1. Developing client / server applications to run on Windows / Linux


2. Develop / Test Internet based applications
3. Test suite development for applications and network protocols
4. Developing Networking tools for the enterprises
5. Verification & Validation of Enterprise applications
6. Software maintenance of enterprise applications

Following are the skill sets Technofly solution has garnered in the area of software:

1. Programming Languages: C, C++, VC++, Java, C#, ASP.Net, PHP, Lex &Yacc, Perl,
Python, Assembly Language and Ada


2. Operating Environments: Real-time operating systems such as GreenHills Integrity,
Micro C-OS, DSP OS, VDK and OASYS, as well as MS-WinCE, MS-Windows, Unix/Linux and
MPE/iX, are the operating environments for which the company provides services.

CHAPTER -2

LITERATURE SURVEY

In 2009, Barbara Raskauskas published an article stating that music is a widely accepted culture and
language which can be appreciated by any type of person. She mentioned that "music
does fill the silence and can hide the noise. Music can convey cultural upbringing. Music is pleasurable
and speaks to us, whether or not the song has words. I've never met a person who didn't like some form
of music. Even a deaf friend of mine said she liked music; she could feel the vibration caused by music.
Finding enjoyment in music is universal."

Emily Sohn (2011) stated that “People love music for much the same reason they're drawn to sex,
drugs, gambling and delicious food, according to new research”. Judging by the actions and activities
carried out by the people around us, this statement is widely accepted by the public. Studies have
proved that the human brain releases dopamine, a chemical involved in addiction and motivation, when
people listen to a harmony or melody that touches them.

Comparison with similar expressions can be done in order to detect the facial expression of an
individual. In 2005, Mary Duenwald published an article summarizing several studies and pieces of
research by scientists which showed that facial expressions across the globe fall roughly into seven
categories:

i. Sadness: The eyelids droop while the inner corners of the brows rise. In extreme sadness, the
brows draw closer together. As for the lips, both corners pull down and the lower lip may
push up in a pout.

ii. Surprise: Both the upper eyelids and brows rise, and the jaw drops open.

Dept. of MCA, SSIT, TUMAKURU Page 12


EMOTION BASED MUSIC SYSTEM-18MCA63 2021- 2022

iii. Anger: Both the lower and upper eyelids squeeze in as the brows move down and draw
together. The jaw pushes forward, and the upper and lower lips press against each other as the lower lip
pushes up a bit.

iv. Contempt: The expression appears on one side of a face: One half of the upper lip tightens
upward.

v. Disgust: The individual’s nose wrinkles and the upper lip rises while the lower lip protrudes.

vi. Fear: The eyes widen and the upper lids rise. The brows draw together while the lips extend
horizontally.

vii. Happiness: The corners of the lips lift into a smile, the eyelids tighten, the cheeks rise
and the outer corners of the brows pull down.

2.1 Existing And Proposed System

 Technology has advanced to the point where almost any type of app can be developed. At present,
Spotify is among the best music players; it streams music online and also lets users download songs to
their device.

 In Spotify we get to see many features like recently played songs, shows you might like, your
2020 Wrapped, Free Kicks, popular playlists, best of artist, and popular and trending.

 Here we have proposed an Emotion Based Music Player where the user can play a song according to
his mood and emotion. It aims to provide the user's preferred music with respect to his mood.

 The Emotion Based Music Player is intended as an effortless mobile application where the user can
play songs according to his present emotion or mood. After the user selects a specific mood, it is
recognized by the underlying code and the corresponding playlist is displayed.

2.2 Tools and Technologies Used

Introduction to Artificial Intelligence

Artificial Intelligence (AI) has gained popularity in both technological media and academic circles.
Computers will take over roles that used to be assigned to human engineers and developers. Machine
Learning (ML), in particular, is a branch of AI built around one of the basic assumptions of AI: the
capacity to learn from experience rather than simple instructions. Supervised and Unsupervised
Learning are two forms of ML that employ different methodologies to train the system; Classification
and Regression are subcategories of Supervised Learning, while Clustering falls under Unsupervised
Learning. It is imperative to recognize the value of data and data transmission in the age of computers.
Hence, the Internet of Things (IoT) provides a proper technological system with different software and
platforms for data transmission between different devices and computer systems. Big data can be
defined as very large volumes of complex data generated by a variety of software programs,
applications, and devices that provide a constant stream for identification and analysis. In this section,
we analyze the results and outcomes of combining AI, ML, Blockchain, and IoT technologies.

AI is a term that describes computational intelligence in a nuanced way. The integration of these
concepts has an immediate impact on the quality and production of a wide variety of products and
services, as well as on employment, productivity, and competition. Some of the topics covered include
reasoning, programming, artificial life, belief revision, data mining, distributed AI, expert systems,
genetic algorithms, systems, knowledge representation, ML, natural language understanding,
neural networks, theorem proving, constraint satisfaction, and theory of computation. AI has been
applied across the board, including technology, science, academics, health care, commerce,
administration, finance, marketing, economics, the stock market, and law. A variety of viewpoints have
been taken about AI's relevance, and AI is often viewed as an essential tool for improving the world.
Essentially, ML is a vital part of AI. By providing computers with the ability to learn in general and
formulate their own programs in particular, ML aims to make them more human-like in their judgments
and actions.

The goal of ML is to provide computers with the capacity to learn and to design their own programs,
allowing them to behave and make judgments more accurately. This is done with as little human
interaction as possible, i.e., no explicit programming. The learning process is largely automated and is
enhanced based on the experiences of the machines throughout the process. The computers are supplied
high-quality data, and various methods are used to develop ML models that train the machines on this
data. The method used is determined by the kind of data at hand and the sort of action that must be
automated.

An IoT network is a global collection of connected devices that exchange data or information over the
Internet. These devices range from everyday technology gadgets like smartphones, headphones and
wearable devices, to household objects like smart lights, washing machines, smart thermostats and
coffee makers, and on to sophisticated industrial tools, machine components, cars, airplanes, and
anything else you can think of. IoT is a massive network of interconnected objects that seamlessly
merges the digital and physical worlds, connecting devices and exchanging data between them.
Blockchain is widely regarded as the next major technological revolution, and it has opened up a new
sector of development known as blockchain development. It has the potential to transform how we
manage data and, more broadly, how we do business. Blockchain was originally designed to support
Bitcoin, but it has proven to be so versatile and secure that firms in a variety of industries are now
using it.

ARTIFICIAL INTELLIGENCE

Computing intelligence is referred to as artificial intelligence. AI has gained a lot of traction in recent
years. AI is the reproduction of human cognition in computers that are designed to learn and mimic the
behavior of humans. Computers can learn from their errors and complete tasks like humans. AI
technology has become integral to our lives today, regardless of whether we are consumers or
professionals. AI has gained popularity in both technological media and academic circles. The
prognoses and futuristic methods associated with AI are well known and numerous. Computers will
take over roles that used to be assigned to human engineers and developers. Most AI applications
present today hinge heavily on deep learning and natural language processing. These techniques can
train computers to accomplish specific tasks based on large amounts of data and the patterns in them.
In such an environment, old procedures associated with traditional software engineering become
outdated, since AI systems can replace these functions.

The software would then no longer need external support from an engineer and could develop
autonomously. This methodology can let a system generate code and solve problems on its own. These
advancements in the field of AI have far-reaching ramifications for the economic structure as a whole.
In addition to having a significant impact on employment, productivity, and competition, these ideas
have direct effects on the quality and manufacturing of a large range of products and services. John
McCarthy, the pioneer of AI, classified AI in the 1990s as creating machines and computer programs
that are extremely intelligent and sophisticated through “science and engineering”.
The term "AI" is employed when computers can replicate the functions that a human brain can perform,
including learning and problem solving. Based on the domains of intelligence, AI may be divided into 16
sections. Programming, reasoning, artificial life, data mining, belief revision, expert systems,
distributed AI, genetic algorithms, ML, systems, knowledge representation, natural language
understanding, constraint satisfaction, theorem proving, neural networks, and theory of computation are
among some of the themes explored. AI has found a place in every sector including technology,
science, academia, healthcare, commerce, administration, banking, marketing, economics, the stock
market, and law.

Importance of AI

Many operations around us have become smoother thanks to AI and its effective operation. AI and its
ensuing components have existed for a long time now. People have emphasized the value of AI in
different ways, and it can be viewed as a tool making the world a better place. AI devices surround us
everywhere; one does not need to travel very far to find such high-tech devices. AI is becoming
increasingly important in daily life and, as a consequence of its importance, it makes life easier for us.
Human effort is minimized as much as possible by these technologies, since they have been designed to
make life easier and more efficient. Automated methods are usually available to them, so using the
parts that are connected to this technology should rarely require manual intervention. They provide a
high degree of precision and accuracy while improving the speed and efficiency of operations, which is
why they are so valuable and crucial. The technologies and apps available to us today not only make
our lives easier and safer, but they are also relevant in our everyday lives in various ways.

AI is a powerful technology with a fine line between elevation and destruction. AI helps in our lives at
different levels. AI uses data to automate repetitive learning and discovery. AI conducts regular,
high-volume automated activities rather than automating manual ones, and it does so consistently and
without tiring. Of course, people are still required to configure the system and ask the appropriate
questions. As long as
the system is configured correctly and the appropriate questions are asked, it performs steadily and
without fatigue. Progressive learning algorithms let the data do the programming: a learning algorithm
searches the data for structure and regularities to help it learn. In this way, video game algorithms can
teach themselves how to play, and recommendation algorithms can learn which products to suggest
next on the internet. As new data is added, the models are updated.

Features of Python programming language

Figure.2.1: Features of the Python programming language

1. Readable: Python is a very readable language.


2. Easy to Learn: Learning Python is easy as this is an expressive and high-level programming
language, which means it is easy to understand and thus easy to learn.
3. Cross platform: Python is available and can run on various operating systems such as Mac,
Windows, Linux, Unix etc. This makes it a cross-platform and portable language.
4. Open Source: Python is an open source programming language.
5. Large standard library: Python comes with a large standard library that has some handy code and
functions which we can use while writing code in Python.
6. Free: Python is free to download and use. This means you can download it for free and use it in your
application. See: Open Source Python License. Python is an example of FLOSS (Free/Libre Open
Source Software), which means you can freely distribute copies of this software, read its source code
and modify it.


7. Supports exception handling: If you are new, you may wonder what an exception is. An exception is
an event that can occur during program execution and can disrupt the normal flow of the program. Python
supports exception handling, which means we can write less error-prone code and can test various
scenarios that could cause an exception later on (a minimal example is shown after this list).

8. Advanced features: Supports generators and list comprehensions. We will cover these features later.

9. Automatic memory management: Python supports automatic memory management which means
the memory is cleared and freed automatically. You do not have to bother clearing the memory.
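As a brief illustration of point 7, the following is a minimal sketch of Python exception handling; the
file name used here is hypothetical and only serves the example.

# Minimal sketch of Python exception handling (hypothetical file name).
try:
    with open("songs_list.txt") as f:      # may raise FileNotFoundError
        songs = f.read().splitlines()
except FileNotFoundError:
    songs = []                             # fall back to an empty playlist
    print("Playlist file not found, starting with an empty playlist.")
finally:
    print("Loaded", len(songs), "songs.")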

Applications of Python

1. Web development – Web frameworks like Django and Flask are based on Python. They help you
write server-side code which helps you manage databases, write backend programming logic, map
URLs etc.
2. Machine learning – There are many machine learning applications written in Python. Machine
learning is a way to write logic so that a machine can learn and solve a particular problem on its own.
For example, product recommendation in websites like Amazon, Flipkart, eBay etc. is a machine
learning algorithm that recognises the user's interest. Face recognition and voice recognition in your
phone are other examples of machine learning.
3. Data Analysis – Data analysis and data visualisation in the form of charts can also be developed
using Python.
4. Scripting – Scripting is writing small programs to automate simple tasks such as sending
automated response emails etc. Such applications can also be written in the Python programming
language.
5. Game development – You can develop games using Python.
6. You can develop Embedded applications in Python.
7. Desktop applications – You can develop desktop applications in Python using libraries like Tkinter
or Qt.
Python is increasingly being used as a scientific language. Matrix and vector manipulations are
extremely important for scientific computations. Both NumPy and Pandas have emerged to be essential
libraries for any scientific computation, including machine learning, in python due to their intuitive
syntax and high-performance matrix computation capabilities.


In this section, we provide an overview of the common functionalities of NumPy and Pandas, and note
the similarity of these libraries to existing toolboxes in R and MATLAB. This similarity and added
flexibility have resulted in wide acceptance of Python in the scientific community lately. The topics
covered are:
 Overview of NumPy
 Overview of Pandas
 Using Matplotlib

NumPy: NumPy stands for ‘Numerical Python’ or ‘Numeric Python’. It is an open source module of
Python which provides fast mathematical computation on arrays and matrices. Slicing examples
(indexing in NumPy arrays starts from 0):
A[2:5] will print items 2 to 4
A[2::2] will print items 2 to the end, skipping 2 items
A[::-1] will print the array in reverse order
A[1:] will print from row 1 to the end
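The slicing rules listed above can be checked with a short, self-contained snippet; the array contents
below are arbitrary and only illustrate the indexing.

import numpy as np

A = np.arange(10)          # array([0, 1, ..., 9]); indexing starts at 0
print(A[2:5])              # items 2 to 4            -> [2 3 4]
print(A[2::2])             # items 2 to end, step 2  -> [2 4 6 8]
print(A[::-1])             # reversed array          -> [9 8 ... 0]

B = np.arange(12).reshape(4, 3)
print(B[1:])               # rows 1 to the end of the 2-D array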
Pandas:
Similar to NumPy, Pandas is one of the most widely used Python libraries in data science. It provides
high-performance, easy-to-use data structures and data analysis tools.
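A minimal, hedged example of the Pandas structures mentioned above; the column names and values
are made up purely for illustration.

import pandas as pd

# A small DataFrame of songs tagged with an emotion label (illustrative data).
songs = pd.DataFrame({
    "title":   ["Song A", "Song B", "Song C"],
    "emotion": ["happy", "sad", "happy"],
})
print(songs[songs["emotion"] == "happy"])   # filter rows by emotion
print(songs["emotion"].value_counts())      # count songs per emotion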
Matplotlib:
Matplotlib is a 2D plotting library which produces publication-quality figures in a variety of hardcopy
formats and interactive environments. Matplotlib can be used in Python scripts, the Python and IPython
shells, Jupyter Notebook, web application servers and GUI toolkits.

Example 1: Plotting a line graph

>>> import matplotlib.pyplot as plt
>>> plt.plot([1, 2, 3, 4])
>>> plt.ylabel('some numbers')
>>> plt.show()

Example 2: Plotting a histogram

>>> import matplotlib.pyplot as plt
>>> x = [21, 22, 23, 4, 5, 6, 77, 8, 9, 10, 31, 32, 33, 34, 35, 36, 37, 18, 49, 50, 100]
>>> num_bins = 5
>>> plt.hist(x, num_bins, facecolor='blue')
>>> plt.show()

2.3 Hardware and Software Requirements

Hardware Requirements

Processor        : Pentium 4 and above
System memory    : 128 MB and more
RAM              : 512 MB and more

Software Requirements

Operating System : Windows 7, 10
Front End        : Python
Tools            : Python IDLE
Browser          : Google Chrome, Internet Explorer, Mozilla Firefox


2.4 BACKGROUND STUDY

Emotions are the bodily feelings associated with mood, temperament, personality or character. In 1972,
Paul Ekman developed a classification of basic emotions comprising anger, disgust, fear, happiness,
sadness and surprise.

A facial expression is expressed through one or more motions, movements or positions of the muscles
of the face. These movements convey the emotional status of an individual. A facial expression can be
a voluntary action, as an individual can control his facial expression and show a facial expression
according to his will. For example, a person can bring his eyebrows closer together and frown to show
through his facial expression that he is angry. On the other hand, an individual may try to relax the
face’s muscles to indicate that he is not influenced by the current situation. However, since facial
expression is closely associated with emotion, it is mostly an involuntary action. It is nearly impossible
for an individual to stop himself from expressing his emotions. An individual may have a strong desire
or will not to express his current feelings through facial expressions, but it is hard to do so. An
individual may show his expression for the first few micro-seconds before resuming a neutral
expression.

Since the work of Darwin in 1872, behavioral scientists have been actively involved in the research and
analysis of facial expression detection. In 1978, Suwa et al. presented an early attempt at automatic
facial expression analysis by tracking the motion of twenty identified spots on an image sequence.
After Suwa’s attempt, there has been a lot of progress in developing computer systems to help humans
recognize and read an individual’s facial expression, which is a useful and natural medium of
communication.

Facial expression analysis includes both the detection and interpretation of facial motion and the
recognition of expression. The three steps which enable automatic facial expression
analysis (AFEA) are i) face acquisition, ii) facial data extraction and representation, and iii) facial
expression recognition.

Figure.2.2: Basic structure of facial expression analysis systems

The “Emotion Based Music Player” is a system developed to detect the emotion of an individual and
play lists of music accordingly. First, the individual reflects his emotion through his facial expression.
The system then detects the state of the facial expression, analyzes it and interprets the emotion. After
determining the emotion of the individual, the music player plays songs which suit the individual's
current emotion. The system focuses on the analysis of the facial expression only and does not consider
head or face movement.

2.4.1 PROBLEM STATEMENT

The significance of music for an individual's emotions has been generally acknowledged. After the
day’s toil and hard work, both primitive and modern man are able to relax and ease themselves with the
melody of music. Studies have shown that the rhythm itself is a great tranquilizer.

However, most people face difficulty in song selection, especially in finding songs that match their
current emotions. Looking at a long list of unsorted music, individuals feel demotivated to look for the
songs they want to listen to. Most users will just randomly pick songs available in the song folder and
play them with a music player. Most of the time, the songs played do not match the user’s current
emotion. For example, when a person is sad, he might like to listen to some heavy rock music to release
his sadness. It is impractical for the individual to search his long playlist for all the heavy rock music.
The individual would rather choose songs randomly or just “play all” the songs he has.

Besides, people get bored with this traditional way of searching and selecting songs, a method that has
remained unchanged for years.

2.4.2 PROJECT OBJECTIVE

2.4.2.1 General Objective

The main objective of this project is to develop the “Emotion Based Music Player” for all kinds of
music lovers, which aims to serve as a platform that assists individuals in playing and listening to songs
according to their emotions. It is aimed at providing better enjoyment of entertainment to music
lovers.

2.4.2.2 Specific Objective

The Specific Objective for this project is specified as below:

i. To propose a facial expression detection model to detect and analyze the emotion of an
individual.
ii. To accurately detect the four basic emotions, namely normal, happy, sad and
surprise.
iii. To integrate the music player into the proposed model to play the music based on the
emotions detected.

2.4.2.3 SCOPE OF STUDY

Currently there is no commonly used application or system which is able to detect the emotion of an
individual and play music according to the emotion detected. This system proposes a new lifestyle
to all music lovers which will ease their search for playlists. The target users are music lovers. English
is the main medium of language used in the proposed model, which is specifically aimed at detecting
basic emotions such as normal, happy, sad or surprise. The evaluation of this system is based on the
accuracy of detecting the correct facial expression as well as playing the right category of songs.

The scope of study is as follows:

I. Study the different methods of expression detection. With the improvement of technology in image
processing, more and more experts have done research or introduced different techniques for
processing a specific or small area of an image. All these techniques can be applied to facial expression
processing. Research has to be done in order to understand each technique, which will then be useful in
the project development.
II. Gather information on the tools appropriate for facial expression detection in order to build the
proposed model for this project. Different tools (software and hardware) are studied for their
feasibility and functionality as well as user-friendliness in order to figure out the most suitable
and applicable tools for development.

2.4.3 FEASIBILITY STUDY

2.4.3.1 Within the scope frame

As stated above, the focus of this project is entirely on the detection of facial expressions and their
integration into the music player. As a prototype, the proposed model detects only basic emotions such
as happy, sad and normal.

To understand the scope of the project in depth, extensive research needs to be done in order to figure
out the current technology in facial expression detection and its uses. The studies also include the
technical aspect, especially the programming languages used for image processing and facial
expression detection. A few analyses need to be done in order to find the most user-friendly
programming language and application with which to develop this system, so that the development of
the system stays within the timeline given.

2.4.3.2 Within the time frame


The Final Year Project (FYP) course is divided into two parts, FYP1 and FYP2. As per the syllabus,
FYP1 focuses more on brainstorming for the FYP title, proposal writing, data gathering and research,
as well as report writing. On the other hand, the development of the prototype and the implementation
and testing of the proposed model are done in FYP2.

As the phases have been divided evenly between the two semesters, which is equivalent to eight
months, the project will be able to finish on time with proper time management.

CHAPTER -3

SOFTWARE REQUIREMENT SPECIFICATIONS

Requirement Analysis
1. Python: Python is the basis of the program that we wrote. It utilizes many Python libraries.

2. Libraries:

• NumPy: Pre-requisite for Dlib.


• SciPy: Used for calculating the Euclidean distance between facial landmarks (e.g. the eyelids).
• Playsound: Used for playing audio.
• Dlib: This library is used to find the frontal human face and estimate its pose using 68 facial
landmarks.
• Imutils: Convenience functions written for OpenCV.
• OpenCV: Used to get the video stream from the webcam, etc.
3. OS: The program is tested on Windows 10 build 1903 and Pop!_OS 19.04.

4. Laptop: Used to run our code.

5. Webcam: Used to get the video feed.
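A minimal sketch of how these libraries are typically wired together is shown below. It assumes a
webcam at index 0 and the standard shape_predictor_68_face_landmarks.dat model file downloaded
separately; it is illustrative rather than the project's actual code.

import cv2
import dlib
from imutils import face_utils
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

cap = cv2.VideoCapture(0)                      # webcam video stream (OpenCV)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):             # frontal face detection (dlib)
        shape = face_utils.shape_to_np(predictor(gray, rect))   # 68 (x, y) landmarks (imutils)
        eye_width = distance.euclidean(shape[36], shape[39])    # SciPy: distance between two eye corners
        print("Left eye width in pixels:", eye_width)
cap.release()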

SOFTWARE REQUIREMENT SPECIFICATION


A software requirement specification (SRS) is a comprehensive description of the intended purpose
and environment of software that is under development. The SRS describes fully what the software
system will do and how it is expected to perform. An SRS is a full description of the activities of the
system to be developed. It includes the functional and non-functional requirements for the software
being developed. The functional requirements state what the software should do, while constraints on
the design or implementation of the system are captured as non-functional requirements.

• Operating system : Windows 10

• Software Packages : Pandas, matplotlib, numpy.

• Coding Language : Python.

• IDE : Python 3.7

HARDWARE REQUIREMENTS SPECIFICATION


1. Laptop with basic hardware.

2. Webcam

SPECIAL REQUIREMENTS

• Track User Emotion

• Recommend by sorting playlist based on user's current emotion

• Sort songs by 2 factors

-Relevancy to User Preference

-Effect on User Emotion
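A minimal sketch of the two-factor sort described above, assuming each song already carries numeric
scores for relevancy and emotional effect; the field names and weights are illustrative assumptions.

# Hypothetical song records with pre-computed scores in [0, 1].
playlist = [
    {"title": "Song A", "relevancy": 0.9, "effect": 0.4},
    {"title": "Song B", "relevancy": 0.6, "effect": 0.8},
    {"title": "Song C", "relevancy": 0.7, "effect": 0.7},
]

def score(song, w_relevancy=0.5, w_effect=0.5):
    # Weighted combination of the two sorting factors.
    return w_relevancy * song["relevancy"] + w_effect * song["effect"]

recommended = sorted(playlist, key=score, reverse=True)
print([s["title"] for s in recommended])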

REQUIREMENTS

The requirements, which are commonly considered, are classified into four categories, namely,
functional requirements, non-functional requirements, hardware requirements and software
requirements.

FUNCTIONAL REQUIREMENTS

Functional requirements describe the product's internal workings: that is, the technical details, data
manipulation and processing, and other specific functionality demonstrating how the use cases are to
be satisfied. They are supported by non-functional requirements that impose constraints on the design
or implementation.
• The system should capture the user's face image from the webcam.
• The system should process the image and detect the user's emotion.
• The system should play songs according to the detected emotion.

NON-FUNCTIONAL REQUIREMENTS

Non-functional requirements are requirements that specify criteria that can be used to assess a system's
operation, rather than specific behaviours. They should be distinguished from functional requirements,
which specify explicit behaviour or capabilities. Reliability, flexibility and cost are common
non-functional requirements.
Non-functional requirements are often referred to as the qualities of a system. Other terms for
non-functional requirements are "constraints", "quality attributes" and "quality of service requirements".
If any exceptions occur during execution, they should be caught to keep the system from crashing. The
architecture should be created in a way that allows new modules and functionalities to be incorporated,
thereby promoting application development. The cost should be small, as the software packages used
are freely available.
• Usability: the system should be user friendly.
• Reliability: the system should be reliable.
• Performance: the system should not take excess time in detecting the emotion.


CHAPTER -4

METHODOLOGY

Install OpenCV

OpenCV is the open source computer vision library, and it's super powerful. Here are a few random
things that you can do with it:

 video input and output


 3D reconstruction
 video analysis
 object detection
 image stitching to make panoramas
 ...

You could start with the OpenCV tutorial, and also have a look at the very nice blog from Adrian
Rosebrock. That's actually where I first got in touch with OpenCV!

So let's install the tool. Today we'll install it through Anaconda. I assume that you have already
installed Anaconda for Python 3.x. If not, you can follow these instructions.

Add the following packages to anaconda: opencv numpy matplotlib

If you know how to use the command line, you can install them by typing:

conda install opencv numpy matplotlib


Otherwise, just use the anaconda navigator.
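Once the packages are installed, a quick sanity check is to import them and print their versions; this is
only a sketch, and the exact version numbers will differ on your machine.

import cv2
import numpy as np
import matplotlib

print("OpenCV:", cv2.__version__)
print("NumPy:", np.__version__)
print("Matplotlib:", matplotlib.__version__)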

OPEN CV
OpenCV is an open source C++ library for image processing and computer vision, originally
developed by Intel and now supported by Willow Garage. It is free for both commercial and
non-commercial use, so it is not mandatory for your own OpenCV-based application to be open or free.
It is a library of many inbuilt functions mainly aimed at real-time image processing. It now has several
hundred image processing and computer vision algorithms which make developing advanced computer
vision applications easy and efficient. If you are having any trouble installing OpenCV or configuring
your Visual Studio IDE for OpenCV, please refer to Installing and Configuring with Visual Studio.


Key Features

 Optimized for real time image processing & computer vision applications
 Primary interface of Open CV is in C++
 There are also C, Python and JAVA full interfaces
 Open CV applications run on Windows, Android, Linux, Mac and iOS
 Optimized for Intel processors

OPENCV MODULES
Open CV has a modular structure. The main modules of Open CV are listed below. I have provided
some links which are pointing to some example lessons under each module.

 Core
This is the basic module of Open CV. It includes basic data structures (e.g.- Mat data structure) and
basic image processing functions. This module is also extensively used by other modules like highgui,
etc.
 Highgui
This module provides simple user interface capabilities, several image and video codecs, image and
video capturing capabilities, manipulation of image windows, handling of track bars and mouse events,
etc. For more advanced UI capabilities, a dedicated GUI framework is needed.
 Imgproc
This module includes basic image processing algorithms including image filtering, image
transformations, color space conversions, etc.

 Video

This is a video analysis module which includes object tracking algorithms, background subtraction
algorithms, etc.
 Objdetect
This includes object detection and recognition algorithms for standard objects. OpenCV is now
extensively used for developing advanced image processing and computer vision applications. It has
been a tool for students, engineers and researchers in every nook and corner of the world.
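As a small illustration of the core, imgproc and highgui modules working together, the following sketch
loads an image, converts its color space, filters it and displays it; the file name is hypothetical.

import cv2

img = cv2.imread("face.jpg")                      # core: image loaded as a Mat / ndarray
if img is not None:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # imgproc: color space conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # imgproc: basic image filtering
    cv2.imshow("Grayscale", blurred)              # highgui: simple image window
    cv2.waitKey(0)
    cv2.destroyAllWindows()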


HISTORY
Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to
advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D
display walls. The main contributors to the project included a number of optimization experts in Intel
Russia, as well as Intel’s Performance Library Team. In the early days of OpenCV, the goals of the
project were described as:
 Advance vision research by providing not only open but also optimized code for basic vision
infrastructure. No more reinventing the wheel.
 Disseminate vision knowledge by providing a common infrastructure that developers could build on, so
that code would be more readily readable and transferable.
 Advance vision-based commercial applications by making portable, performance-optimized code
available for free, with a license that did not require applications to be open or free themselves.

The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer
Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first
1.0 version was released in 2006. In mid-2008, OpenCV obtained corporate support from Willow
Garage, and is now again under active development. A version 1.1 "pre-release" was released in
October 2008. The second major release of OpenCV was in October 2009. OpenCV 2 includes major
changes to the C++ interface, aiming at easier, more type-safe patterns, new functions, and better
implementations of existing ones in terms of performance (especially on multi-core systems). Official
releases now occur every six months and development is now done by an independent Russian team
supported by commercial corporations. In August 2012, support for OpenCV was taken over by a
non-profit foundation, OpenCV.org, which maintains a developer and user site.
APPLICATIONS OF OPENCV

 2D and 3D feature toolkits


 Egomotion estimation
 Facial recognition system
 Gesture recognition
 Human–computer interaction (HCI)
 Mobile robotics
 Motion understanding
 Object identification
 Segmentation and recognition
 Stereopsis (stereo vision): depth perception from 2 cameras


 Structure from motion (SFM)


 Motion tracking
 Augmented reality
To support some of the above areas, OpenCV includes a statistical machine learning library that
contains:
 Boosting
 Decision tree learning
 Gradient boosting trees
 Expectation-maximization algorithm
 k-nearest-neighbor-algorithm
 Naive Bayes classifier
 Artificial neural networks
 Random forest
 Support vector machine (SVM)

Programming language
OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive
though extensive older C interface. There are bindings in Python, Java and MATLAB/Octave. The API
for these interfaces can be found in the online documentation. Wrappers in other languages such as C#,
Perl, Ch, and Ruby have been developed to encourage adoption by a wider audience.
All of the new developments and algorithms in OpenCV are now developed in the C++ interface.
Hardware Acceleration
If the library finds Intel's Integrated Performance Primitives on the system, it will use these proprietary
optimized routines to accelerate itself. A CUDA-based GPU interface has been in progress since
September 2010, and an OpenCL-based GPU interface has been in progress since October 2012.
Documentation for version 2.4.9.0 can be found at docs.opencv.org.
OS support
OpenCV runs on a variety of platforms. Desktop: Windows, Linux, OS X, FreeBSD, NetBSD, OpenBSD;
Mobile: Android, iOS, Maemo, BlackBerry 10. The user can get official releases from SourceForge or
take the latest sources from GitHub. OpenCV uses CMake.


Architecture:

Figure.4.1: Architecture
The working of the system consists of two parts: (a) search by song name and (b) capture the mood. In
search by song name, the user inputs the song name and the system searches for the song using a linear
search algorithm. In capture mood, the user's face is detected using the OpenCV library. The image is
captured and, using the Haar cascade algorithm, the mood is detected and the corresponding songs are
played.
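A minimal sketch of the two modes described above; the playlist structure and mood labels are
illustrative assumptions, and the mood value is assumed to come from the face-detection code shown
later in this chapter.

# Hypothetical playlist: mood label -> list of song files.
playlist = {
    "happy": ["happy_1.mp3", "happy_2.mp3"],
    "sad":   ["sad_1.mp3"],
}

def search_by_name(name, songs):
    # Linear search over all songs, as described above.
    for song in songs:
        if name.lower() in song.lower():
            return song
    return None

def songs_for_mood(mood):
    # Capture-the-mood path: 'mood' would come from the face-detection step.
    return playlist.get(mood, [])

all_songs = [s for songs in playlist.values() for s in songs]
print(search_by_name("happy_2", all_songs))
print(songs_for_mood("sad"))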

Face Detection with OpenCV

OpenCV uses machine learning algorithms to detect and recognize faces, identify objects, classify
human actions in videos from a camera, and find similar images in an image database. For face
detection, OpenCV uses the Haar cascade classifier. The Haar cascade classifier is a machine learning
concept where a cascade function is trained from both positive and negative images. Based on the
training, it is then used to detect objects in other images. The algorithm breaks the task of identifying
the face into thousands of smaller, bite-sized tasks, each of which is easy to solve. These tasks are also
called classifiers.

Steps for Open CV Algorithm


• Detect the face using the Haar cascade classifier.

• Load the image and convert it into grayscale.

• Once the image is converted from RGB to grayscale, the system locates the facial features using the
detectMultiScale function.

• The detectMultiScale function returns 4 values for each detection – the X-coordinate, Y-coordinate,
width (w) and height (h) of the detected face region. Based on these 4 values the system draws a
rectangle around the face.
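A minimal sketch of the steps above using OpenCV's bundled frontal face Haar cascade; the webcam
index, the use of cv2.data.haarcascades (available in recent OpenCV builds) and the window handling
are assumptions, not the project's exact code.

import cv2

# Load the pre-trained frontal face Haar cascade shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)                 # webcam (assumed at index 0)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # BGR -> grayscale
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:            # x, y, width and height of each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("Detected faces", frame)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
cap.release()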


CHAPTER -5

SYSTEM DESIGN

5.1 MODULES

The facial expression recognition system is trained using a supervised learning approach in which it
takes images of different facial expressions. The system includes training and testing phases followed
by image acquisition, face detection, image preprocessing, feature extraction and classification. Face
detection and feature extraction are carried out on face images, which are then classified into six
classes corresponding to the six basic expressions. The modules are outlined below:

5.1.1 Image Acquisition


Images used for facial expression recognition are static images or image sequences. Images of the face
can be captured using a camera.

5.1.2 Face detection


Face detection is used to locate the facial region in an image. Face detection is carried out on the
training dataset using a Haar classifier, the Viola-Jones face detector, implemented through OpenCV.
Haar-like features encode the difference in average intensity between different parts of the image and
consist of connected black and white rectangles, in which the value of a feature is the difference between
the sums of pixel values in the black and white regions [6].
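To illustrate how such a feature value is computed, the sketch below evaluates a simple two-rectangle Haar-like feature using an integral image; the input file name and the rectangle coordinates are arbitrary assumptions for illustration only.

```python
# Sketch: value of a two-rectangle Haar-like feature via an integral image.
import cv2

gray = cv2.imread("user_face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
ii = cv2.integral(gray)        # integral image, shape (h + 1, w + 1)

def rect_sum(x, y, w, h):
    """Sum of pixel values inside the rectangle (x, y, w, h)."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

# Two vertically stacked rectangles: white on top, black below (coordinates assumed).
white = rect_sum(30, 40, 24, 12)
black = rect_sum(30, 52, 24, 12)
feature_value = white - black   # difference of the sums, as described above
print(feature_value)
```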

5.1.3 Image Pre-processing


Image pre-processing includes the removal of noise and normalization against variations in pixel position
or brightness. The main steps are:
a) Color Normalization
b) Histogram Normalization
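A minimal sketch of such pre-processing is shown below, assuming a grayscale face crop; the 48×48 target size matches the dataset used later, while the blur and equalization choices are illustrative assumptions rather than the project's exact pipeline.

```python
# Sketch of basic pre-processing: denoise, resize and normalize a face crop.
import cv2

face = cv2.imread("captured_face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

face = cv2.GaussianBlur(face, (3, 3), 0)     # mild noise removal
face = cv2.resize(face, (48, 48))            # normalize against position/scale variation
face = cv2.equalizeHist(face)                # histogram normalization (brightness)
face = face / 255.0                          # scale pixel values to [0, 1]
```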

5.1.4 Feature Extraction


Selection of the feature vector is the most important part of a pattern classification problem. The face
image after pre-processing is used for extracting the important features. The inherent problems related to
image classification include scale, pose, translation and variations in illumination level [6]. The
important features are extracted using the LBP algorithm, which is described below:

Local Binary Pattern


LBP is a feature extraction technique. The original LBP operator labels the pixels of an image with decimal
numbers, called LBPs or LBP codes, which encode the local structure around each pixel. Each pixel is
compared with its eight neighbors in a 3 × 3 neighborhood by subtracting the center pixel value; negative
results are encoded with 0 and the others with 1. For each pixel, a binary number is obtained by
concatenating these binary values in a clockwise direction, starting from its top-left neighbor. The
corresponding decimal value of the generated binary number is then used for labeling the given pixel. The
derived binary numbers are referred to as LBPs or LBP codes [6].

Thresholding the 3 × 3 neighborhood against its center value (4):

5 9 1        1 1 0
4 4 6   ->   1   1
7 2 3        1 0 0

Binary (clockwise from top-left): 11010011    Decimal: 211

Figure.5.1: The Basic LBP Operator
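As a concrete check of the operator, the short sketch below computes the LBP code of the 3 × 3 example above with NumPy; the helper is written from the description in this section rather than taken from the project code.

```python
# Sketch of the basic LBP operator on a single 3x3 neighborhood.
import numpy as np

def lbp_code(patch):
    """Return the basic LBP code of a 3x3 patch (clockwise from the top-left neighbor)."""
    center = patch[1, 1]
    # neighbor coordinates, clockwise starting at the top-left pixel
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= center else 0 for r, c in coords]
    return int("".join(map(str, bits)), 2)

patch = np.array([[5, 9, 1],
                  [4, 4, 6],
                  [7, 2, 3]])
print(lbp_code(patch))   # 211, matching the worked example in Figure 5.1
```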

Figure.5.2: Two examples of the extended LBP operator, with (P = 8, R = 1.0) and (P = 12, R = 1.5)

The limitation of the basic LBP operator is that its small 3×3 neighborhood cannot capture dominant
features with large-scale structures. As a result, to deal with texture at different scales, the operator
was later extended to use neighborhoods of different sizes [7]. Using circular neighborhoods and bilinearly
interpolating the pixel values allows any radius and any number of sampling points in the neighborhood.
Examples of the extended LBP are shown above (Figure 5.2), where (P, R) denotes P sampling points on a
circle of radius R.

A further extension of LBP is to use uniform patterns. An LBP is called uniform if it contains at most two
bitwise transitions from 0 to 1 or vice versa when the binary string is considered circular; for example,
00000000, 00111000 and 11100001 are uniform patterns. A histogram of a labelled image f_l(x, y) can be
defined as

H_i = \sum_{x,y} I\{ f_l(x, y) = i \}, \qquad i = 0, \ldots, n - 1,

where n is the number of different labels produced by the LBP operator and

I\{A\} = \begin{cases} 1, & A \text{ is true} \\ 0, & A \text{ is false.} \end{cases}

This histogram contains information about the distribution of the local micro-patterns, such as edges,
spots and flat areas, over the whole image. For an efficient face representation, the extracted features
should also retain spatial information. Hence, the face image is divided into m small regions
R_0, R_1, \ldots, R_{m-1} and a spatially enhanced histogram is defined as

H_{i,j} = \sum_{x,y} I\{ f_l(x, y) = i \}\, I\{ (x, y) \in R_j \}, \qquad i = 0, \ldots, n - 1, \; j = 0, \ldots, m - 1.
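A sketch of computing such a spatially enhanced LBP histogram is given below, using scikit-image's local_binary_pattern; the 6×7 grid of regions and the uniform-pattern mapping are common choices in the LBP literature and are assumptions here, not values taken from this project.

```python
# Sketch: spatially enhanced LBP histogram (uniform patterns, per-region histograms).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, grid=(6, 7), P=8, R=1.0):
    codes = local_binary_pattern(gray_face, P, R, method="uniform")  # P + 2 labels
    n_labels = P + 2
    features = []
    for row in np.array_split(codes, grid[0], axis=0):        # split into region rows
        for region in np.array_split(row, grid[1], axis=1):   # then into region columns R_j
            hist, _ = np.histogram(region.ravel(),
                                   bins=np.arange(n_labels + 1),
                                   density=True)
            features.append(hist)        # local histogram H_{i,j} for region R_j
    return np.concatenate(features)      # concatenated spatially enhanced histogram
```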

Classification
The dimensionality of the data obtained from the feature extraction stage is very high, so it must be
reduced and mapped to a small set of classes. Features should take sufficiently different values for
objects belonging to different classes, and the classification itself is performed using a CNN.

CNN

Facial expression recognition is a topic of great interest in many fields, from artificial intelligence and
gaming to marketing and healthcare. The goal of this work is to classify images of human faces into one of
seven basic emotions. A number of different models were experimented with, including decision trees and
neural networks, before arriving at a final Convolutional Neural Network (CNN) model. CNNs work better for
image recognition tasks since they are able to capture spatial features of the inputs through their large
number of learned filters. The proposed model consists of six convolutional layers, two max pooling layers
and two fully connected layers. After tuning the various hyperparameters, this model achieved a final
accuracy of 0.60.
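The description above translates into a network along the following lines. This is a plausible Keras sketch rather than the exact trained model; the filter counts, kernel sizes, dense width and dropout rate are assumptions.

```python
# Plausible sketch of the described CNN: six convolutional layers,
# two max-pooling layers and two fully connected layers, seven output classes.
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", padding="same",
                      input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),          # first fully connected layer
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # second fully connected layer
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```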

CHAPTER -6

DETAILED DESIGN

6.1 USE CASE DIAGRAM

6.2 DATA FLOW DIAGRAM

The data flow diagram describes the following flow: the user launches the Emotion Based Music system and
opens the music player. If the user wishes to customize the songs in the folders, the songs are arranged
into the different category (emotion) folders and the changes are saved. The user then loads or captures an
image and waits a couple of minutes for the system to detect the facial features, after which the songs
played by the music player can be listened to. If the user wishes to change the emotion, another image is
captured; otherwise the system is closed.
6.3 FLOW CHART

CHAPTER -6

SYSTEM IMPLEMENTATION

Feelings recognition, also referred to as emotion recognition, is the process in which a machine recognizes
human emotion from facial expressions. This process involves two steps:

● Face detection: extracting the face region from a given image.

● Emotion recognition: the step this work focuses on.

While face detection is an old problem with many existing solutions and pre-trained models available, the
emotion recognition problem is somewhat harder, which motivated us to train our own convolutional neural
network (CNN) model.

First of all, we need a clean dataset to train the model. After a short search, we found a dataset
available on Kaggle called "FER2013", prepared by Pierre-Luc Carrier and Aaron Courville as part of an
ongoing research project, with the following specifications:

The data consists of 48×48 pixel grayscale images of faces. The faces have been automatically registered so
that the face is more or less centered and occupies about the same amount of space in each image. The task
is to categorize each face, based on the emotion shown in the facial expression, into one of seven
categories as follows:

label   Expression
0       Angry
1       Disgust
2       Fear
3       Happy
4       Sad
5       Surprise
6       Neutral

Table.6.1: List of used Emotion
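A minimal sketch of loading this dataset is shown below, assuming the commonly distributed fer2013.csv file with 'emotion', 'pixels' (2304 space-separated values) and 'Usage' columns; the train/test split follows the 'Usage' field.

```python
# Sketch of loading FER2013 from the commonly distributed fer2013.csv file.
import numpy as np
import pandas as pd

def load_fer2013(csv_path="fer2013.csv"):
    data = pd.read_csv(csv_path)
    pixels = data["pixels"].apply(lambda s: np.array(s.split(), dtype=np.uint8))
    images = np.stack(pixels.to_list()).reshape(-1, 48, 48, 1)  # 48x48 grayscale faces
    labels = data["emotion"].to_numpy()          # 0 = Angry ... 6 = Neutral (Table 6.1)
    train = (data["Usage"] == "Training").to_numpy()
    return (images[train], labels[train]), (images[~train], labels[~train])

(train_x, train_y), (test_x, test_y) = load_fer2013()
```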

Snapshots

Figure.6.1 to Figure.6.10: Screenshots of the Emotion Based Music System application.
CHAPTER -7
SYSTEM TESTING

System testing of software or hardware is testing conducted on a complete, integrated system to evaluate
the system's compliance with its specified requirements. System testing falls within the scope of black-box
testing and, as such, requires no knowledge of the inner design of the code or logic.

As every person has unique facial features, it is difficult to detect the exact human emotion or mood, but
with proper facial expressions it can be detected to a reasonable extent. The camera of the device should
have a sufficiently high resolution. The application should run successfully and produce the expected
outcome as precisely as possible.

For example: for "angry", "fear", "disgust" and "surprise" moods, devotional, motivational and patriotic
songs are suggested to the user. Hence, the user is also provided with mood improvement.
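A minimal sketch of such an emotion-to-playlist mapping is shown below; the specific folder names and the one-to-one assignment of moods to folders are hypothetical examples of how the songs could be grouped, not the project's exact configuration.

```python
# Hypothetical mapping from a detected emotion to a song-folder category.
MOOD_TO_FOLDER = {
    "angry":    "motivational",
    "fear":     "devotional",
    "disgust":  "devotional",
    "surprise": "patriotic",
    "happy":    "happy",
    "sad":      "soothing",
    "neutral":  "favorites",
}

def playlist_for(emotion):
    """Return the folder of songs suggested for the detected emotion."""
    return MOOD_TO_FOLDER.get(emotion.lower(), "favorites")

print(playlist_for("Angry"))   # -> "motivational"
```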

Instructions explained to the user: in this scenario, the users were given instructions on what to do to
perform the prediction of the expressed emotion, which produced the following results. In cases where the
inner emotion is sad but the facial expression is happy, the prediction is recorded as a failed case.

Thus, the accuracy may vary as follows:

USER    EMOTION    EXPRESSION    ACCURACY
1       Happy      Happy         100%
2       Sad        Happy         0%
3       Sad        Sad           100%

Table.7.1: Test Case

CHAPTER -8
CONCLUSION

Music players have changed in many ways since they were first introduced. Nowadays people expect more from
applications, so the design of applications and the thought process behind them have changed: users prefer
more interactive and sophisticated, yet simple to use, applications. The proposed system (Facial Expression
based Music Player) presents a music player capable of playing songs based on the detected emotion, thereby
providing the user with an easy way to play music. We have imported the datasets, libraries and data needed
for the final implementation of the system. The Emotion-Based Music Player automates song selection and
gives a better music player experience to the end-user. The application meets the basic needs of music
listeners without troubling them as existing applications do, and it increases the interaction of the
system with the user in many ways. It eases the work of the end-user by capturing an image using the
camera, determining the user's emotion, and suggesting a customized playlist through a more advanced and
interactive system. The user will also be notified of songs that are never played, to help free up storage
space. Our main aim is to save users' time and to satisfy them.

CHAPTER -9
REFERENCE
[1] Anagha S. Dhavalikar and Dr. R. K. Kulkarni, "Face Detection and Facial Expression Recognition System", 2014 International Conference on Electronics and Communication Systems (ICECS 2014).
[2] Yong-Hwan Lee, Woori Han and Youngseop Kim, "Emotional Recognition from Facial Expression Analysis using Bezier Curve Fitting", 2013 16th International Conference on Network-Based Information Systems.
R. E. Thayer, "The Biopsychology of Mood and Arousal", Oxford University Press, 1989.
[3] Arto Lehtiniemi and Jukka Holm, "Using Animated Mood Pictures in Music Recommendation", 2012 16th International Conference on Information Visualisation.
[4] F. Abdat, C. Maaoui and A. Pruski, "Human-computer interaction using emotion recognition from facial expression", 2011 UKSim 5th European Symposium on Computer Modelling and Simulation.
[5] https://iq.opengenus.org/face-recognition-usingfisherfaces/
