Manuscript received June 26, 2025; revised July 14, 2025; accepted September 5, 2025; published January 16, 2026.
Abstract—Accurate segmentation of medical images is essential for reliable breast cancer detection and for downstream tasks such as identifying nuclei and cell membranes in histopathology slides. However, traditional deep learning models like U-Net often encounter limitations in precision, computational efficiency, and adaptability when confronted with complex, multimodal datasets. To address these challenges, we propose AfroNet, a novel U-shaped deep learning architecture that integrates advanced attention mechanisms specifically designed for breast cancer image segmentation. AfroNet introduces three complementary modules: (1) a cross-attention module that adaptively recalibrates encoder–decoder interactions to emphasize diagnostically relevant semantic features; (2) a multi-scale feature-fusion block that captures fine spatial details across varying resolutions to enhance boundary delineation; and (3) an adaptive skip-connection enhancement strategy that strengthens gradient flow and preserves contextual information throughout the network. Extensive experiments on benchmark breast cancer histopathology datasets demonstrate that AfroNet consistently outperforms state-of-the-art segmentation methods in terms of Dice Coefficient (DC), Intersection-over-Union (IoU), and inference speed. These results highlight AfroNet’s potential as a robust and efficient framework for high-precision breast cancer histopathology analysis and clinical decision support.

Keywords—breast cancer, semantic segmentation, U-Net, breast image, pyramidal network

Cite: Vuppula Manohar, P. S. Rao, Sreedhar Kollem, Karri Chiranjeevi, B. Jaya, M. Shashidhar, Syed M. Ahamed, Appala S. Kumar, and Manasa Koppula, "AfroNet: A Cross-Attention Enhanced U-Net for Breast Cancer Image Segmentation," Journal of Image and Graphics, Vol. 14, No. 1, pp. 1-14, 2026.

Copyright © 2026 by the authors.
This is an open access article distributed under the Creative Commons Attribution License (CC-BY-4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.
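As background for readers, the two evaluation metrics named in the abstract, Dice Coefficient (DC) and Intersection-over-Union (IoU), admit a direct definition on binary segmentation masks. The following is a minimal NumPy sketch of those standard formulas; it is illustrative only and is not the authors' evaluation code, and the function names and the small epsilon smoothing term are our own choices.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (pred) and B (target)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B| (Jaccard index)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth, union of 4
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(dice_coefficient(pred, target))  # 2*2 / (3+3) ≈ 0.667
print(iou(pred, target))               # 2 / 4 = 0.5
```

Note that Dice is always at least as large as IoU on the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is why papers typically report both.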