JOIG 2025 Vol.13(2):198-212
doi: 10.18178/joig.13.2.198-212

Achieving High-End Image Localization via Causality Infused ResNet50 Model

Chaitanya Kapoor
Mangalayatan University, Beswan Aligarh, India
Email: kapoorchaitanya42@gmail.com
*Corresponding author

Manuscript received October 22, 2024; revised December 12, 2024; accepted January 10, 2025; published April 25, 2025.

Abstract—Achieving high-precision image localization is a critical objective in computer vision, particularly for applications requiring spatially accurate object identification. This study proposes a causality-infused ResNet50 model that integrates causal inference techniques with deep learning to enhance localization accuracy and robustness. ResNet50, a widely adopted convolutional neural network, is employed for feature extraction, while causal mechanisms mitigate confounding factors and improve generalization across diverse datasets. The dataset comprises images annotated with bounding boxes corresponding to ground truth labels and predicted labels for object localization tasks. The evaluation metric compares predicted and ground truth boxes on label consistency and the extent of spatial overlap. The training set comprised 70% of the total dataset, and the remaining 30% served as the validation set. The model leverages advanced algorithms, including Granger Causality and principal component analysis, to optimize feature relevance during training. Evaluated on the ImageNet dataset, the approach achieves a validation accuracy of 99.7%. Training was performed on an Intel Core i7 processor using the LAMB optimizer. The proposed implementation delivers high precision and efficiency.
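The label-and-overlap criterion described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact metric: the function names and the 0.5 Intersection-over-Union (IoU) threshold are assumptions for the sketch.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def localization_correct(pred_label, pred_box, gt_label, gt_box, iou_threshold=0.5):
    """A prediction counts as correct when the class label matches the ground
    truth AND the predicted box overlaps it by at least the IoU threshold."""
    return pred_label == gt_label and iou(pred_box, gt_box) >= iou_threshold
```

For example, two unit-offset 2x2 boxes share one unit of area out of seven in their union, giving an IoU of 1/7, which would fail a 0.5 threshold even with a matching label.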

Keywords—image localization, residual networks, causality, principal component analysis

Cite: Chaitanya Kapoor, "Achieving High-End Image Localization via Causality Infused ResNet50 Model," Journal of Image and Graphics, Vol. 13, No. 2, pp. 198-212, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.