JOIG 2025 Vol.13(4):336-347
doi: 10.18178/joig.13.4.336-347

Dynamic Attention for Enhancement of Weak Contrast Images Using Advance ASENet

Muddapu Harika 1,*, Gottapu Sasibhushana Rao 1, and Rajkumar Goswami 2
1. Department of Electronics and Communication Engineering, Andhra University College of Engineering, Visakhapatnam, India
2. Department of Electronics and Communication Engineering, Gayatri Vidya Parishad College of Engineering for Women, Visakhapatnam, India
Email: jalluharika427@gmail.com (M.H.); sasigps@gmail.com (G.S.R.); rajkumargoswami@gmail.com (R.G.)
*Corresponding author

Manuscript received March 10, 2025; revised April 27, 2025; accepted May 26, 2025; published July 17, 2025.

Abstract—Low-light photography often results in images with significant noise and insufficient brightness, making enhancement of such images a persistent and challenging task in computer vision. Although numerous techniques have been proposed to address this issue, many of them inadvertently amplify noise or fail under extremely poor lighting conditions. To overcome these limitations, this research introduces the Advanced Attention-Shift Enhancement Network (Adv-ASENet), an innovative deep learning-based approach designed to effectively enhance Weak Contrast Low-Light (WCLL) images. Adv-ASENet leverages a dynamic attention mechanism that allows the model to selectively focus on the most informative regions of a low-light image. This selective focus enables the network to enhance poorly illuminated areas while minimizing noise amplification in already well-lit regions, resulting in a more balanced and visually coherent enhancement. Weak contrast images often exhibit localized deficiencies in brightness and contrast; Adv-ASENet addresses these with dynamic attention blocks that selectively enhance such regions without overprocessing well-contrasted areas. The spatial attention module further helps preserve well-exposed parts of the image, ensuring that enhancement is applied only where needed. Experimental results demonstrate that the proposed network achieves competitive performance with manageable complexity. Quantitative evaluations show that Adv-ASENet attains a Structural Similarity Index Measure (SSIM) of 87.9%, a Peak Signal-to-Noise Ratio (PSNR) of 36.05 dB, a Mean Squared Error (MSE) of 0.031, and a correlation coefficient of 98.8%, outperforming several existing state-of-the-art methods across standard metrics including SSIM, PSNR, MSE, and entropy.
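To make the attention mechanism described above concrete, the sketch below shows how a spatial attention block of this general kind is commonly built in PyTorch: a per-pixel attention map gates a residual enhancement so that poorly illuminated regions receive most of the correction while well-exposed regions pass through largely unchanged. The class name SpatialAttentionBlock, the layer sizes, and the residual gating are illustrative assumptions, not the published Adv-ASENet design.

```python
# Hedged sketch of a spatial attention block for low-light enhancement.
# Architecture details (channels, kernel sizes, gating) are assumptions,
# not the authors' Adv-ASENet implementation.
import torch
import torch.nn as nn

class SpatialAttentionBlock(nn.Module):
    """Per-pixel attention gates a residual enhancement branch."""
    def __init__(self, channels: int):
        super().__init__()
        # Enhancement branch: refines features before gating.
        self.feature = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Attention map computed from channel-wise average and max descriptors.
        self.attention = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.feature(x)
        avg_map = feats.mean(dim=1, keepdim=True)
        max_map, _ = feats.max(dim=1, keepdim=True)
        attn = self.attention(torch.cat([avg_map, max_map], dim=1))  # (B, 1, H, W)
        # Residual gating: regions with high attention receive more enhancement,
        # already well-exposed regions are left mostly untouched.
        return x + attn * feats

if __name__ == "__main__":
    block = SpatialAttentionBlock(channels=32)
    dummy = torch.rand(1, 32, 64, 64)  # dummy low-light feature map
    print(block(dummy).shape)          # torch.Size([1, 32, 64, 64])
```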

Keywords—image enhancement, RENOIR dataset, convolutional neural network, attention shift enhancement network

Cite: Muddapu Harika, Gottapu Sasibhushana Rao, and Rajkumar Goswami, "Dynamic Attention for Enhancement of Weak Contrast Images Using Advance ASENet," Journal of Image and Graphics, Vol. 13, No. 4, pp. 336-347, 2025.

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
