
ABSTRACT
Emotion is one of the few terms that lacks a definition that is both precise and intelligible; it is elusive. Here, "emotion" describes a subjective state expressed through social signals. This differs from, for instance, recognizing the emotional content of a multimedia clip, which concerns the feelings the clip may elicit in its audience. Even so, practically every decision we make is influenced by emotion. Marketing studies suggest that correctly forecasting feelings can be a significant source of growth for businesses, and emotion recognition can also serve safety applications such as drowsiness detection for car drivers. In the computer vision domain, automatically recognizing facial expressions from a facial image is a difficult task with a wide range of applications, including driving safety, human-computer interaction, health care, behavioural science, teleconferencing, and cognitive science. This work falls under cognitive systems within data science and machine learning.

the emotions contained in the data using training data. The FER2013 data consists
of pixel images of emotions of faces of seven types of emotions which are Angry,
Disgust, Fear, Happy, Sad, Surprise, and Neutral. The faces have been
automatically registered so that the face is more or less centred and occupies about
the same amount of space in each image. The goal is to recognize the emotions in
a real-time environment using a webcam. Deep convolutional neural networks have
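As a minimal sketch of how this data is typically prepared (assuming the standard Kaggle fer2013.csv layout of 48×48 grayscale images encoded as space-separated pixel strings; the file name and layout are assumptions, not details stated in this abstract):

```python
import numpy as np
import pandas as pd

# The seven FER2013 classes, indexed by the integer label stored in the CSV.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def load_fer2013(csv_path="fer2013.csv"):
    """Parse the FER2013 CSV into image and label arrays.

    Assumes the common Kaggle layout: an `emotion` column with integer
    labels 0-6 and a `pixels` column holding 48*48 space-separated
    grayscale values per row.
    """
    df = pd.read_csv(csv_path)
    images = np.stack(
        [np.array(p.split(), dtype="float32") for p in df["pixels"]]
    ).reshape(-1, 48, 48, 1) / 255.0   # normalise pixel values to [0, 1]
    labels = df["emotion"].to_numpy()
    return images, labels

if __name__ == "__main__":
    X, y = load_fer2013()
    print(X.shape, [EMOTIONS[i] for i in y[:5]])
```

Each integer label maps back to a human-readable class name through the EMOTIONS list.
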
Deep convolutional neural networks (CNNs) have produced numerous advances in image classification. These architectures have two major components: an automated feature extractor and a classifier. The former generates low-level, mid-level, and high-level features of the object of interest, characterizing simple, moderate, and complex textures, respectively. In general, a powerful classifier learns the target from a large number of high-level features, so the network should be trained on a large amount of data. Deep learning techniques are widely used in image classification problems such as ImageNet, PASCAL VOC, CIFAR, and the Facial Expression Recognition Challenge 2013.

These networks can be leveraged on the FER2013 dataset to produce task-specific results. Using transfer learning, we trained a pre-trained network on our dataset; the pre-trained network serves as both the feature extractor and the classifier. Transfer learning takes features and weights from previously trained models and applies them to a new model, even when less data is available for the new task.

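The sketch below illustrates the transfer-learning idea together with a simple real-time webcam loop. The backbone (MobileNetV2 pre-trained on ImageNet), the input size, and all hyperparameters are assumptions made for the example, not necessarily the choices used in this project: the pre-trained base is frozen and reused as the feature extractor, and only a small new classification head is trained on FER2013.

```python
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def build_transfer_model(input_shape=(96, 96, 3), num_classes=7):
    """Frozen ImageNet-pretrained backbone plus a small trainable head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights="imagenet")
    base.trainable = False                         # reuse pretrained features as-is
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # new task-specific classifier
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def run_webcam(model):
    """Grab frames from the default webcam and overlay the predicted emotion."""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # A full pipeline would first detect and crop the face (e.g. a Haar cascade);
        # here the whole frame is resized for brevity. Scaling must match training.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        face = cv2.resize(rgb, (96, 96)).astype("float32") / 255.0
        probs = model.predict(face[np.newaxis], verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.putText(frame, label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("Emotion", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Freezing the base keeps the pretrained weights intact so that only the small head is learned from FER2013, which is the usual motivation for transfer learning when labelled data for the new task is limited.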
