Manuscript received June 17, 2025; revised July 14, 2025; accepted August 8, 2025; published November 25, 2025.
Abstract—Image inpainting, the task of restoring missing or corrupted regions in images, remains a critical challenge in computer vision, with applications ranging from photo editing to scene understanding. Motivated by the limitations of existing Generative Adversarial Network (GAN)-based methods in preserving contextual integrity and texture realism, this paper presents a deep learning framework that leverages both GANs and attention mechanisms to improve inpainting quality. Our approach integrates a multi-stage architecture with a context-aware attention module to better capture semantic coherence and fine-grained details in the reconstruction process. Extensive experiments on benchmark datasets, including CelebA-HQ, ADE20K, and Paris StreetView, demonstrate that our method outperforms recent state-of-the-art techniques in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Fréchet Inception Distance (FID). The proposed model achieves notable gains in realism and structure preservation, making it a promising solution for both academic research and practical deployment. The results validate the effectiveness of our contributions and highlight potential avenues for further advances in deep image completion.

Keywords—image inpainting, Generative Adversarial Networks (GANs), deep learning, context-aware image completion, structural consistency

Cite: Mahesh Patil and Vikas Tiwari, "Advances in Image Inpainting: A Deep Learning and GAN-Based Perspective with Defined Research Objectives," Journal of Image and Graphics, Vol. 13, No. 6, pp. 590-603, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.