JOIG 2025 Vol.13(6):604-620
doi: 10.18178/joig.13.6.604-620

Facial Expressions in Virtual Reality-Based Education: Understanding Recognition Approaches and Their Integration for Immersive Experiences

Anass Touima 1,* and Mohamed Moughit 1,2
1. Laboratory Science and Technology for Engineering (LaSTI), National School of Applied Sciences-Khouribga, Sultan Moulay Slimane University, Morocco
2. Laboratory Artificial Intelligence, Modeling and Computational Engineering (AIMCE), ENSAM Casablanca, Hassan II University, Casablanca, Morocco
Email: anass.touima@usms.ac.ma (A.T.); mohamed.moughit@usms.ac.ma (M.M.)
*Corresponding author

Manuscript received March 27, 2025; revised April 15, 2025; accepted June 10, 2025; published November 25, 2025.

Abstract—Integrating facial expressions into Virtual Reality (VR) for education is hindered by the cost and technical limitations of current Facial Expression Recognition (FER) systems, which limits accessibility and the enrichment of remote learning. Our research aimed to develop and assess a cost-effective, webcam-based FER system that replicates a teacher’s facial expressions onto a VR avatar in real time, in order to enhance emotional interactivity and pedagogical effectiveness in distance education. A Convolutional Neural Network (CNN)-Deep Neural Network (DNN) deep learning model with Correlation-based Feature Selection (CFS) was developed for FER and integrated into a Unity-based VR classroom, using OpenFace for landmark detection from webcam input. Accuracy was validated on benchmark datasets (CK+, JAFFE, Oulu-CASIA VIS), followed by an empirical study with 65 instructors. The FER model achieved high accuracy (e.g., 100% on CK+), and our VR application successfully mapped expressions in real time. Instructors reported improved emotional communication (74%) and engagement (72%), with the system’s affordability (approx. $1,200 per user) being a key advantage, though adoption barriers and occasional misclassifications were noted. In conclusion, an affordable, webcam-based FER system can enhance VR education by improving emotional interactivity. While promising, addressing real-world robustness, facial occlusion by VR headsets, and user acceptance is crucial for wider deployment. Future work includes predicting occluded facial features and multimodal emotion detection.
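The abstract does not specify the layer configuration of the CNN-DNN model, so the sketch below is only an illustrative interpretation: a two-branch Keras classifier in which a CNN branch processes the webcam face crop and a DNN branch processes flattened OpenFace landmark coordinates, with the two branches fused for a seven-class expression output. The 48x48 input size, layer widths, class count, and the assumption that CFS is applied offline to the landmark features before training are all hypothetical choices, not the authors' published architecture.

# Minimal sketch (assumed architecture, not the paper's exact model):
# CNN branch over a grayscale webcam face crop + DNN branch over OpenFace
# landmarks, fused and classified into 7 basic expressions.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 7        # e.g., anger, disgust, fear, happiness, sadness, surprise, neutral
NUM_LANDMARKS = 68     # 2D facial landmarks exported by OpenFace

# CNN branch: 48x48 grayscale face crop (size is an illustrative assumption).
img_in = layers.Input(shape=(48, 48, 1), name="face_crop")
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# DNN branch: flattened (x, y) landmark coordinates; in this sketch,
# correlation-based feature selection (CFS) is assumed to have been applied
# to these features beforehand.
lmk_in = layers.Input(shape=(NUM_LANDMARKS * 2,), name="landmarks")
y = layers.Dense(128, activation="relu")(lmk_in)
y = layers.Dense(64, activation="relu")(y)

# Fuse both branches and classify the expression.
z = layers.Concatenate()([x, y])
z = layers.Dense(128, activation="relu")(z)
z = layers.Dropout(0.5)(z)
out = layers.Dense(NUM_CLASSES, activation="softmax")(z)

model = Model(inputs=[img_in, lmk_in], outputs=out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

In a pipeline like the one the abstract describes, the same per-frame landmarks that feed such a classifier could also drive the Unity avatar's facial animation, which is one plausible way the real-time expression mapping would sit alongside the FER model.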

Keywords—virtual reality, e-learning, facial expression recognition

Cite: Anass Touima and Mohamed Moughit, "Facial Expressions in Virtual Reality-Based Education: Understanding Recognition Approaches and Their Integration for Immersive Experiences," Journal of Image and Graphics, Vol. 13, No. 6, pp. 604-620, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
