Manuscript received March 16, 2025; revised April 23, 2025; accepted May 21, 2025; published September 17, 2025.
Abstract—Emotion detection is a technique for recognizing human emotions by analyzing facial expressions. It is essential for psychology, security systems, and human-computer interaction. The ability to perceive and interpret an individual's facial expressions helps in understanding their behavior and improves the interaction between a person and a computer. Facial Emotion Recognition (FER) is instrumental wherever human-computer interaction is needed for behavioral assessment, such as in clinical settings. When machine learning models are applied to FER, achieving high accuracy and robustness remains difficult because of the diversity of human faces and variations in images, such as differences in pose and lighting. This research used the FER2013 dataset, which contains approximately 30,000 images divided into seven classes (anger, disgust, fear, happiness, sadness, surprise, and neutral), and evaluated two Convolutional Neural Network (CNN) models: VGG19 and a Sequential CNN. The VGG19 model achieved 68% training accuracy and 66% validation accuracy, while the Sequential model achieved 78% training accuracy and 67% validation accuracy. To address the limitations of single-stream models, a novel hybrid architecture is proposed that integrates ResNet50, MobileNetV2, and a Convolutional Block Attention Module (CBAM)-enhanced CNN through feature-level fusion. This design enables the model to capture diverse and salient facial features, significantly improving recognition accuracy on the FER2013 dataset. The proposed method achieved 96% training accuracy and 91% validation accuracy.

Keywords—face emotion, FER2013, Sequential model, VGG19 model, Convolutional Neural Networks (CNN), deep learning

Cite: Lujain Y. Abdulkadir, Hiba A. Saleh, Omar I. Alsaif, and Rana K. Sabri, "Evaluating Facial Emotional Proportion Based on Computer Vision Technique," Journal of Image and Graphics, Vol. 13, No. 5, pp. 469-475, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
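To make the feature-level fusion described in the abstract concrete, the following is a minimal Keras sketch of a three-stream model combining ResNet50, MobileNetV2, and a small CBAM-enhanced CNN. The input resolution, layer sizes, dropout rate, and use of ImageNet weights are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_CLASSES = 7
INPUT_SHAPE = (96, 96, 3)  # assumption: FER2013 48x48 grayscale images upscaled and replicated to 3 channels

inputs = layers.Input(shape=INPUT_SHAPE)

# Stream 1: ResNet50 backbone (ImageNet weights assumed, frozen here for simplicity)
resnet = applications.ResNet50(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
resnet.trainable = False
f1 = layers.GlobalAveragePooling2D()(resnet(inputs))

# Stream 2: MobileNetV2 backbone
mobilenet = applications.MobileNetV2(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
mobilenet.trainable = False
f2 = layers.GlobalAveragePooling2D()(mobilenet(inputs))

# Stream 3: small CNN with a CBAM-style attention block
def cbam_block(x, reduction=8):
    """Channel attention followed by spatial attention (CBAM)."""
    channels = x.shape[-1]
    # Channel attention: shared MLP over average- and max-pooled descriptors
    avg = layers.GlobalAveragePooling2D()(x)
    mx = layers.GlobalMaxPooling2D()(x)
    mlp1 = layers.Dense(channels // reduction, activation="relu")
    mlp2 = layers.Dense(channels)
    ca = layers.Activation("sigmoid")(layers.Add()([mlp2(mlp1(avg)), mlp2(mlp1(mx))]))
    x = layers.Multiply()([x, layers.Reshape((1, 1, channels))(ca)])
    # Spatial attention: 7x7 conv over concatenated channel-wise average and max maps
    avg_map = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_map = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    sa = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg_map, max_map]))
    return layers.Multiply()([x, sa])

x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = cbam_block(x)
f3 = layers.GlobalAveragePooling2D()(x)

# Feature-level fusion: concatenate the three feature vectors, then classify into 7 emotions
fused = layers.Concatenate()([f1, f2, f3])
fused = layers.Dense(256, activation="relu")(fused)
fused = layers.Dropout(0.5)(fused)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

In this sketch, fusion happens by concatenating the pooled feature vectors of the three streams before the classification head, which is one common way to realize feature-level fusion; the paper's exact fusion details may differ.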