
A Mini Project Review

On
EMOJIFY-CREATE YOUR OWN EMOJI
Submitted to Jawaharlal Nehru Technological University for the partial fulfillment of the
requirement for the Award of the Degree in
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE & ENGINEERING
Submitted by
MD.SULTAN (17271A0576)
B.SRISAHITHI (18271A05A5)
G.SUCHARITHA (18271A05A7)
A.SUSHMITHA (18271A05B0)
Under the Esteemed guidance of
G.SRILATHA
Professor (CSE Dept.)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


JYOTHISHMATHI INSTITUTE OF TECHNOLOGY & SCIENCE
(Approved by AICTE, New Delhi, Accredited by NAAC with ‘A’ Grade Affiliated to JNTUH, Hyderabad)

NUSTULAPUR, KARIMNAGAR- 505481, TELANGANA, INDIA


(2018-2022)
ABSTRACT

• One way to convey nonverbal cues is by sending emojis. These cues have
become an essential part of online chatting, brand emotion, and many more.

• Emojify detects facial expressions such as angry, fear, and happy, and turns
each expression into an emoji.

• We use a user’s facial emotional expression to filter out the relevant set of emoji
by emotion category.

• To recognize facial emotional expressions, we use a Convolutional Neural
Network (CNN).

• Deep learning methods help achieve good accuracy in building the model.
INTRODUCTION

• Communication is an important part of everyday life. Verbal or non-verbal
communication allows one to engage in conversations.
• Today is the era of communication technologies. The internet and other
communication devices have made fast, dynamic, and affective communication
possible.
• Emojis are used as visual depictions of human emotions.
• In this research, emotions are detected from facial expressions and the
corresponding emoji is generated in real time.
• The identification of facial expressions plays a key role in pattern
recognition and image processing; the recognized expression is then used to
generate the emoji.
EXISTING SYSTEM

• The existing system provides only a static way of generating emojis. Using
this system, emojis can only be created statically; there is no dynamic emoji
creation, and the generated emojis are pre-defined, collective outcomes rather
than personalized ones.
• This limitation is addressed by our proposed system.
Fig-1: Existing System Architecture
PROPOSED SYSTEM

 The proposed system dynamically generates emojis using deep learning: it
processes the input captured by the camera and generates the matching emoji
dynamically.

 The proposed system is used to express human facial expressions through
real-time emojis.

 Emojis serve as pictographic forms of facial expressions, objects, and symbols.
Fig-2: Proposed System Architecture
LITERATURE SURVEY
Today, the most widely used method of communication among individuals is virtual
platforms, whether over the web or on phones. The current generation uses online
applications and platforms to communicate and exchange conversations. However,
conveying emotions through written language alone is difficult. Thus, small and
simple pictures, also called emoji characters, are used to enhance the emotional
content of written language. Research on emojis has become an interesting topic
in the academic field. Emoji characters are becoming more and more popular, and
hence the variety of these characters has increased. However, the existing emoji
characters are restricted to predetermined characters. To personalize emoji
characters, this work explored techniques for users to "emojify" their photos.
In an Instagram emoji study, faces accounted for 6 of the top 10 emojis used,
giving further evidence that people frequently use emojis to communicate
emotions. Finally, in a qualitative study by Lee on emoji sticker usage, the
authors found that these stickers were used primarily for expressing emotions.
Methodology:

Creating your own emoji involves four steps:

1. Explore the dataset
2. Build a CNN model
3. Train and validate the model
4. Test the model with the test dataset
Step 1: Emoji Dataset

 The dataset contains seven different facial expressions.

 Facial expressions used:
0: angry
1: disgust
2: fear
3: happy
4: sad
5: surprise
6: neutral

Fig: Sample dataset of emotion "Happy"

 All seven folders consist of 48x48 pixel grayscale images of faces, as in the
loading sketch below.
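
A minimal loading sketch, assuming the seven emotion folders sit under
hypothetical data/train and data/test directories; the paths and batch size are
assumptions, not part of the original report:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1]
train_gen = ImageDataGenerator(rescale=1.0 / 255)
test_gen = ImageDataGenerator(rescale=1.0 / 255)

# Each sub-folder name (angry, disgust, fear, ...) becomes a class label
train_data = train_gen.flow_from_directory(
    "data/train",              # assumed path to the seven training folders
    target_size=(48, 48),      # the dataset images are 48x48
    color_mode="grayscale",    # single-channel face images
    class_mode="categorical",  # one-hot labels for the seven emotions
    batch_size=64,
)
test_data = test_gen.flow_from_directory(
    "data/test",               # assumed path to the test folders
    target_size=(48, 48),
    color_mode="grayscale",
    class_mode="categorical",
    batch_size=64,
)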
Step 2: CNN Classifier

 A convolutional neural network (CNN) is a type of artificial neural network
used in image recognition and processing that is specifically designed to
process pixel data. CNNs use image recognition and classification to detect
objects, recognize faces, etc.
 They are made up of neurons with learnable weights and biases.
 CNNs are primarily used to classify images, cluster them by similarity, and
then perform object recognition. To classify the images into their respective
categories, we will build a CNN (Convolutional Neural Network) model.
A Convolutional Neural Network has the following layers (a minimal Keras sketch
follows this list):

1. Convolution layer
2. ReLU layer
3. Pooling layer
4. Fully connected layer
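
A minimal Keras sketch of such a model, showing the four layer types in order;
the filter counts and dropout rates here are illustrative assumptions, not the
project's exact architecture:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    # Convolution + ReLU layers extract local features from 48x48 grayscale faces
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    # Pooling layer downsamples the feature maps
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    # Fully connected layers map the extracted features to the seven classes
    Flatten(),
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),
])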
Step 3: Train and Validate the Model

 After building the model architecture, we train the model using model.fit().
After 10 epochs, the accuracy was good (see the training sketch below).
 Accuracy = percentage of images correctly recognized
 Loss = softmax cross-entropy between the ground truth and the predictions
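
A hedged sketch of this training step, reusing the model and the
train_data/test_data generators from the earlier sketches; the optimizer
settings are illustrative assumptions:

from tensorflow.keras.optimizers import Adam

# Categorical cross-entropy over the softmax output matches the loss described above
model.compile(
    loss="categorical_crossentropy",
    optimizer=Adam(learning_rate=0.0001),
    metrics=["accuracy"],
)

# Train for 10 epochs, validating on the held-out test generator
history = model.fit(
    train_data,
    epochs=10,
    validation_data=test_data,
)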
Step 4: Test the Model with the Test Dataset

 Our dataset contains a test folder with the details of each image path. We
extract the image paths and labels using pandas. To run predictions, we resize
our images to 48x48 pixels and build a numpy array containing all the image
data.
 After that, we calculate the accuracy of our trained model. We can also plot
the overall accuracy, loss, validation accuracy, and validation loss. Increasing
the number of epochs increases the accuracy and decreases the loss.
 We then save our model using the save_weights() function.
 Finally, we open the webcam using OpenCV, detect the face, and classify its
expression with the CNN classifier, as in the sketch below.
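
A minimal OpenCV sketch of this final step, assuming the weights were saved to a
hypothetical model.h5 file and that OpenCV's bundled Haar cascade is used to
locate the face before the CNN classifies its expression (the slides only say
the face is detected, so the cascade choice is an assumption); mapping the
predicted label to an emoji image is indicated by a comment:

import cv2
import numpy as np

# Label order follows the dataset folders: 0..6
emotions = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model.load_weights("model.h5")  # hypothetical path to the saved weights
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, resize to 48x48, and rescale as during training
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        pred = model.predict(face.reshape(1, 48, 48, 1))
        label = emotions[int(np.argmax(pred))]
        # Here the label would select the matching emoji/avatar image to display
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("Emojify", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()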
SYSTEM REQUIREMENTS

 SOFTWARE REQUIREMENTS
Operating System: Windows 7/8/8.1/10, Linux
Tools and Frameworks: OpenCV, TensorFlow, Keras
Language Requirement: Python

 HARDWARE REQUIREMENTS
Laptop (32-bit or 64-bit architecture, 2+ GHz CPU, 4 GB RAM)
Camera (8 MP and above)
UML Diagrams
Fig: Flow Chart. The flow is: start -> input visual data from the camera ->
filter the image to remove noise -> check whether a face is detected (if not,
prompt "No face detected" and return to the input step) -> analyse the emotion
based on facial image parameters such as the eyes and nose -> convert the input
image to a cartoonified image similar to an emoji -> search for related
emoticons similar to the cartoonified image -> output the cartoonified image
and suggestions -> end.
Fig: Data Flow Diagram. Input data (laptop camera module or external camera)
flows into the processing unit (Python module, dataset, and visual database),
which produces the output data: a customised emoji image and suggestions of
emojis related to the input.


Fig: Use Case Diagram
APPLICATIONS

 Emojis or avatars are ways to indicate nonverbal cues.

 These cues have become an essential part of online chatting, product reviews,
brand emotion, and many more.


CONCLUSION

 In this deep learning project, we built a convolutional neural network to
recognize facial emotions. We trained our model on the dataset and then mapped
the recognized emotions to the corresponding emojis or avatars. We learned how
to detect facial expressions with deep learning, how to build a convolutional
neural network using Keras, and how to evaluate the accuracy of our model.
THANK YOU
