Manuscript received July 12, 2022; revised September 20, 2022; accepted October 20, 2022.
Abstract—Different parts of the human face contribute in distinct ways to overall facial expressions such as anger, happiness, and sadness. This paper investigates the degree to which different face parts affect the accuracy of Facial Expression Recognition (FER). In the context of machine learning, FER refers to the problem of training a computer vision system to automatically detect the facial expression in a presented facial image. This difficult image classification problem is not yet fully solved and has received significant attention in recent years, mainly due to the growing number of possible applications in daily life. To establish the extent to which different face parts contribute to overall facial expression, various sections were extracted from a set of facial images and then used as inputs to three different FER systems. The recognition rates obtained for each facial section confirm that different regions of the face vary in their importance to the accuracy achieved by an associated FER system.
Keywords—facial expression recognition, facial features, principal component analysis, convolutional neural networks
Cite: Yining Yang, Branislav Vuksanovic, and Hongjie Ma, "The Performance Analysis of Facial Expression Recognition System Using Local Regions and Features," Journal of Image and Graphics, Vol. 11, No. 2, pp. 104-114, June 2023.
Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
Copyright © 2012-2023 Journal of Image and Graphics, All Rights Reserved