Manuscript received December 27, 2024; revised February 12, 2025; accepted April 7, 2025; published June 12, 2025.
Abstract—Ultrasound imaging is one of the key noninvasive diagnostic methods in modern medicine. Many Deep Learning (DL) speckle-denoising algorithms, in particular autoencoder models and Convolutional Neural Network (CNN) based techniques, tend to overfit, achieve low accuracy, or generalize poorly across datasets. To address these problems, this study proposes UNet-Elu, a new CNN-based architecture that uses the Exponential Linear Unit (ELU) as its activation function. ELU endows the model with non-linearity while facilitating gradient flow. Batch normalization and dropout layers are added to improve accuracy and prevent overfitting. The proposed framework is evaluated in two stages: in stage 1, it is compared with fine-tuned state-of-the-art UNet, UNet-ReLU, CNN autoencoder, and other filtering methods; in stage 2, optimized transfer-learning models are compared. The proposed framework shows no sign of performance degradation or overfitting when tested on different datasets. The model was evaluated using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Mean Square Error (MSE) at different levels of speckle noise to determine the effectiveness of these techniques. The UNet-Elu model achieved a PSNR of 37.76 dB and an SSIM of 98%, indicating strong denoising performance. The optimized architecture and ELU activation function of the proposed model mark a significant improvement in ultrasound image denoising.

Keywords—deep learning, speckle reduction, ultrasound imaging, denoising, improved framework

Cite: Nilima Patil, M. M. Deshpande, and V. N. Pawar, "A Novel Deep Learning Approach for Speckle Denoising Using Hyperparameter Tuning," Journal of Image and Graphics, Vol. 13, No. 3, pp. 267-274, 2025.
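As a hypothetical illustration of the evaluation metrics named in the abstract (not code from the paper itself), MSE and PSNR for a denoised image against its clean reference can be computed as follows; the function names and the 8-bit peak value of 255 are assumptions for the sketch:

```python
import numpy as np

def mse(ref, test):
    """Mean Square Error between a reference image and a test image."""
    diff = ref.astype(np.float64) - test.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; max_val is the peak pixel value
    (255 assumed for 8-bit ultrasound images)."""
    m = mse(ref, test)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / m)

# Example: a uniform error of 10 gray levels gives MSE = 100
clean = np.zeros((4, 4))
noisy = clean + 10.0
print(mse(clean, noisy))   # 100.0
print(psnr(clean, noisy))  # ~28.13 dB
```

SSIM, the third metric used in the study, compares local luminance, contrast, and structure statistics and is typically taken from an existing library rather than reimplemented.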
Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.