Manuscript received April 5, 2025; revised May 7, 2025; accepted June 9, 2025; published October 17, 2025.
Abstract—In the context of public health and safety, particularly during pandemics, real-time monitoring of mask compliance in public spaces is critical. This study proposes an advanced face mask detection framework that integrates deep learning and green computing within an Internet of Things (IoT) environment. A Faster Regions with Convolutional Neural Network (R-CNN) model with a ResNet-50 backbone is fine-tuned on a small but targeted dataset of 2000 training and 400 testing images. Although relatively small, this dataset covers a variety of mask-wearing conditions, which enables the model to generalize to public settings. The system demonstrates high accuracy, low latency, and robustness to varying lighting, occlusions, and mask orientations. Green computing techniques, including model compression and quantization, are employed to ensure the system is resource-efficient and deployable on edge devices. The methodology comprises preprocessing, training, and evaluation using performance metrics such as precision, recall, and F1-score. A comparative analysis with existing face mask detection models shows the proposed model's competitive performance. Privacy concerns related to surveillance applications are addressed with a focus on data anonymization and secure processing. The proposed system has strong potential for deployment in smart city applications such as public transportation, healthcare, and educational institutions.

Keywords—mask detection, Internet of Things (IoT), object detection, integration, traffic monitoring infrastructure, compatibility, Regions with Convolutional Neural Network (R-CNN), face mask detection, data exchange, green technology

Cite: Yousef Farhaoui, Ahmad E. Allaoui, Jawad Rasheed, and Onur Osman, "Fine-Tuned Object Detection for Mask Recognition Using Green Computing in IoT Systems," Journal of Image and Graphics, Vol. 13, No. 5, pp. 561-569, 2025.

Copyright © 2025 by the authors.
This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.