Abstract—In this paper we developed a Deep Learning (DL) method to assist radiologists in quickly and accurately labeling and classifying lesions in breast ultrasound images. A Faster R-CNN detector was trained to label and classify lesions according to the Breast Imaging Reporting and Data System (BI-RADS). The initial model was trained on 2,000 labeled images; testing on 6,000 images yielded poor accuracy. We therefore developed a second DL model with a 4,294-image set from which the BI-RADS 4 images were removed. The second model was then tested on 1,000 images and used to classify 1,836 BI-RADS 4 images. The results show that classification accuracy, sensitivity, and specificity reach 92.37%, 98.34%, and 82.46%, respectively, when classifying the BI-RADS 4 images into categories 4A and 4B, and 98.10%, 97.78%, and 98.13%, respectively, when the model is used for breast cancer screening.
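The abstract reports accuracy, sensitivity, and specificity. As a minimal illustrative sketch (not code from the paper), these metrics are conventionally derived from a binary confusion matrix as follows; the counts below are hypothetical and for demonstration only:

```python
# Illustrative only: standard definitions of the three reported metrics
# from confusion-matrix counts (TP, FP, TN, FN).
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true positive rate (recall)
    specificity = tn / (tn + fp)  # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts, not results from the study.
acc, sens, spec = classification_metrics(tp=90, fp=5, tn=80, fn=3)
print(round(acc, 4), round(sens, 4), round(spec, 4))  # → 0.9551 0.9677 0.9412
```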
Index Terms—deep learning, breast ultrasound image, DL labeling and classification
Cite: Lei Wang, Biao Liu, Shaohua Xu, Ji Pan, and Qi Zhou, "AI Auxiliary Labeling and Classification of Breast Ultrasound Images," Journal of Image and Graphics, Vol. 9, No. 2, pp. 45-49, June 2021. doi: 10.18178/joig.9.2.45-49
Copyright © 2021 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.