Bacolor, Pampanga
A Project Proposal
Prepared by:
Lumbang, Elidan S.
Mallari, Amabee Leicezter Zosivic
Pardo, Allaine Louise Y.
Ocampo, Arovica P.
Submitted to:
Engr. Enmar Tuazon
Signals, Spectra, Signal Processing (Laboratory) Teacher
INTRODUCTION
At the core of this initiative lies the application of advanced machine learning algorithms,
strategically employed to analyze key facets of human expression, encompassing facial features,
voice tonality, and physiological signals. Through the judicious utilization of these indicators,
our system aspires to decode a spectrum of emotions, ranging from joy and tranquility to sorrow
and excitement. Upon identifying the user's emotional state, the system will autonomously
generate personalized music playlists tailored to enhance and resonate with the prevailing mood.
This project represents a convergence of technical acumen in machine learning and a nuanced
understanding of the intersection between technology and emotion. Beyond mere technical
prowess, it promises a heightened and personalized user experience, potentially reshaping the
dynamics of human interaction with technology, fostering a more empathetic and responsive
paradigm.
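The detect-then-recommend flow described above can be sketched as follows. This is a minimal illustration, assuming hypothetical per-signal classifiers (face, voice, physiological) that each return a probability distribution over mood labels; the mood list, function names, and the simple averaging fusion rule are placeholders, not the project's final design.

```python
# Minimal sketch of the proposed pipeline. The per-signal classifiers are
# assumed to exist elsewhere; here we only show how their outputs could be
# fused into a single detected mood.

MOODS = ["joy", "tranquility", "sorrow", "excitement"]

def fuse_predictions(*distributions):
    """Average the per-signal probability distributions (simple late fusion)."""
    n = len(distributions)
    return [sum(d[i] for d in distributions) / n for i in range(len(MOODS))]

def detect_mood(face_probs, voice_probs, physio_probs):
    """Pick the mood with the highest fused probability."""
    fused = fuse_predictions(face_probs, voice_probs, physio_probs)
    return MOODS[max(range(len(MOODS)), key=fused.__getitem__)]

# Example: the face and voice signals both lean toward "joy".
mood = detect_mood(
    face_probs=[0.7, 0.1, 0.1, 0.1],
    voice_probs=[0.5, 0.2, 0.2, 0.1],
    physio_probs=[0.3, 0.3, 0.2, 0.2],
)
print(mood)  # joy
```

A production system would replace the averaging rule with a learned fusion model, but the separation into per-signal inference followed by fusion is a common structure for multi-modal emotion recognition.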
PROJECT INSPIRATION
Inspired by the intricate dance between emotions and music, our project seeks to
revolutionize the way individuals experience and interact with their favorite tunes. Motivated by
a profound commitment to enhancing user experiences, we aim to create a cutting-edge music
system that dynamically adapts to users' emotional states through facial recognition technology.
Beyond the realm of traditional AI applications, we are driven by the curiosity to explore the
intersection of artificial intelligence and human emotions. By fusing technology and the arts, we
aspire to offer a more immersive and personalized music listening experience, one that not only
entertains but also resonates on a deeper, emotional level. This endeavor is not just a
technological feat; it's a venture into the realm of affective computing, where machines can
understand and respond to human emotions. We envision our project contributing not only to
innovative human-machine interactions but also potentially offering therapeutic benefits by
harnessing the power of music to positively impact mental well-being. Ultimately, our project
represents a fusion of technology, creativity, and emotional intelligence, paving the way for a
new era in personalized, emotionally resonant music experiences.
At School: The project offers an engaging and innovative platform for educational institutions to
incorporate technology into the learning environment. It can be utilized to demonstrate practical
applications of machine learning concepts, fostering a deeper understanding of how advanced
algorithms can be employed to enhance user experiences. By incorporating mood detection and
personalized music recommendation, the system can contribute to creating a positive emotional
environment within the school setting. This could potentially impact students' well-being,
concentration, and overall enjoyment of the learning process.
At Home: At home, the system can extend its influence into personal productivity and learning
environments. Tailoring music to match the user's mood can create an optimal atmosphere for
studying, working, or relaxing, thereby potentially boosting productivity and overall well-being.
At Community: This project holds significance in promoting mental health and well-being
within the community. By providing a tool for individuals to explore and express their emotions
through music, it contributes to a more emotionally aware and supportive community
environment.
MACHINE LEARNING SUBJECT
The experimental focus for this project centers on individuals aged 18-25, specifically
targeting classmates within this demographic. This age group was strategically chosen for several
reasons. First, the homogeneity within the 18-25 age range in terms of technological familiarity,
shared academic contexts, and similar lifestyles provides a conducive environment for initial data
collection and model training. As early technology adopters, classmates in this age group are
more likely to engage with and provide valuable feedback on innovative applications, such as
those incorporating facial recognition technology. The familiarity and openness to technology
contribute to a more seamless integration of the proposed system into their daily lives.
Furthermore, the selection aligns with the project's aim to create a music recommendation system
tailored to the preferences and emotional responses of the target demographic. The use of
Convolutional Neural Networks (CNNs) in the project is particularly fitting for facial expression
recognition within this age group. CNNs, known for their effectiveness in image-related tasks,
are well-suited for capturing the nuanced facial features that convey emotions. Leveraging the
inherent capabilities of CNNs ensures the accurate interpretation of facial expressions, laying the
foundation for personalized music recommendations that resonate with the emotional states of
the selected age group.
The secondary model used in this project is the Convolutional Neural Network (CNN), owing to its architecture being well suited to image-related tasks, particularly facial expression recognition. The complex and varied nature of facial expressions demands a model capable of capturing intricate spatial patterns and nuanced features. CNNs, designed to automatically learn hierarchical representations from images, provide an effective solution by virtue of their translation invariance and local receptive fields. Their inherent ability to learn discriminative features during training aligns well with the task at hand, where the specific features contributing to different emotions may be intricate and not readily defined. Weight sharing further optimizes the model's performance by reducing the number of parameters and improving generalization across different individuals and facial variations. Leveraging pre-trained CNN models not only capitalizes on prior learning but also speeds the development process. In essence, the adaptability, feature-extraction capabilities, and effectiveness in handling spatial complexity make CNNs the ideal machine learning model for accurately recognizing facial expressions and, consequently, for enabling emotion-based music recommendation in this project.
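The weight sharing and local receptive fields described above can be made concrete with a single convolution pass. The sketch below is an illustration in plain NumPy, not the project's trained model: one small kernel is slid over an image, so every output value reuses the same weights, and a feature (here a vertical edge) produces the same response wherever it appears.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image (no padding, stride 1).
    Every output value reuses the same weights: this is the weight
    sharing and local receptive field of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-written vertical-edge kernel; in a real CNN such filters are
# learned from data rather than fixed in advance.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

image = np.zeros((4, 5))
image[:, 2] = 1.0  # a vertical line at column 2
response = conv2d_valid(image, edge_kernel)
print(response.shape)  # (3, 4)
```

Because the same kernel is applied at every position, the edge is detected identically in every row of the output, which is the translation-invariance property that makes CNNs effective for faces photographed at slightly different positions and scales.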
PROJECT OBJECTIVES
Emotion Detection System Development: Design and implement a robust emotion detection
system using advanced machine learning algorithms, analyzing facial features, voice tonality,
and physiological signals.
Multi-faceted Emotion Recognition: Extend the emotion detection system to recognize and
interpret a spectrum of emotions, including happy, sad, angry, neutral, fear, surprise, and disgust.
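Once the seven emotion classes above are detected, the recommendation step reduces to a mapping from class to playlist. The sketch below is a hypothetical illustration: the playlist names are placeholders, not the project's curated lists, and a real system would draw on the user's listening history rather than a fixed table.

```python
# Hypothetical emotion-to-playlist table covering the seven target classes.
PLAYLISTS = {
    "happy":    "Upbeat Pop Mix",
    "sad":      "Comfort Acoustics",
    "angry":    "Calm-Down Instrumentals",
    "neutral":  "Everyday Focus",
    "fear":     "Soothing Ambient",
    "surprise": "Discovery Shuffle",
    "disgust":  "Reset Chill",
}

def recommend(emotion: str) -> str:
    """Return a playlist for a detected emotion, falling back to neutral
    when the detector reports an unrecognized label."""
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])

print(recommend("happy"))    # Upbeat Pop Mix
print(recommend("unknown"))  # Everyday Focus
```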