Manuscript received July 15, 2025; revised July 31, 2025; accepted September 23, 2025; published February 27, 2026.
Abstract—Edge detection is a crucial step in computer vision, serving as a foundation for applications such as object detection, segmentation, and scene understanding. Traditional edge detection methods often fail to capture complex boundaries in natural images. This study proposes a novel deep learning-based architecture, Clip-MobileNetV2-U-Net, that integrates the lightweight, efficient MobileNetV2 encoder with the segmentation capabilities of U-Net and the stabilizing properties of the Clip Rectified Linear Unit (ReLU) activation function. The MobileNetV2 backbone significantly reduces computational cost and model size, making the network suitable for edge detection on resource-constrained platforms such as mobile and embedded devices. The Clip ReLU activation, a clipped version of the standard ReLU, is employed throughout the network to keep activations bounded, helping prevent exploding gradients and stabilizing training. This modification preserves fine-grained features, improving generalisation, reducing overfitting, and sharpening detected edges. The U-Net decoder with skip connections recovers spatial details lost during downsampling. The training and validation sets were artificially enlarged using rotation-based data augmentation. The Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500) dataset was used to train and evaluate Clip-MobileNetV2-U-Net, with 3200 training, 1000 validation, and 500 testing images. The proposed model achieved competitive performance, with a mean Dice Coefficient (mDC) of 0.9256 and a mean Intersection over Union (mIoU) of 0.8419, and outperformed other deep architectures in edge detection. Cross-validated on the Barcelona Images for Perceptual Edge Detection (BIPED) dataset, the proposed model offers a reliable, precise, and scalable solution with low computational complexity and high accuracy, making it well suited to real-time computer vision applications.
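The Clip ReLU described above can be illustrated with a minimal sketch. The paper's exact clipping threshold is not stated in the abstract, so the ceiling value of 6.0 below (as in the common ReLU6 variant) is an assumption for illustration only:

```python
import numpy as np

def clip_relu(x, ceiling=6.0):
    """Clipped ReLU: max(0, min(x, ceiling)).

    Negative inputs are zeroed as in standard ReLU; positive inputs
    are capped at `ceiling`, keeping activations bounded.
    The ceiling of 6.0 is an illustrative assumption, not the
    paper's confirmed setting.
    """
    return np.minimum(np.maximum(x, 0.0), ceiling)

x = np.array([-2.0, 0.5, 3.0, 10.0])
print(clip_relu(x))  # -> [0.  0.5 3.  6. ]
```

Because the output range is bounded, downstream layers never receive arbitrarily large activations, which is the stabilizing property the abstract attributes to Clip ReLU.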
Keywords—U-Net, Berkeley Segmentation Data Set and Benchmarks 500 (BSDS500), Barcelona Images for Perceptual Edge Detection (BIPED) datasets, Clip-MobileNetV2-U-Net, encoder-decoder, edge detection

Cite: Wazir Lakra and Rajeshwar Dass, "Edge Detection Using Clip ReLU-Based Enhanced Hybrid Network," Journal of Image and Graphics, Vol. 14, No. 1, pp. 84-95, 2026.

Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.