JOIG 2023 Vol.11(2): 121-126
doi: 10.18178/joig.11.2.121-126

Classification Model Based on U-Net for Crack Detection from Asphalt Pavement Images

Yusuke Fujita *, Taisei Tanaka, Tomoki Hori, and Yoshihiko Hamamoto
Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Ube, Japan
*Correspondence: y-fujita@yamaguchi-u.ac.jp (Y.F.)

Manuscript received August 5, 2022; revised September 5, 2022; accepted October 22, 2022.

Abstract—The purpose of our study is to detect cracks accurately in asphalt pavement surface images, which include unexpected objects, non-uniform illumination, and surface irregularities. We propose a method to construct a classification Convolutional Neural Network (CNN) model based on a pre-trained U-Net, a well-known semantic segmentation model. First, we train the U-Net on a limited asphalt pavement surface dataset obtained by a Mobile Mapping System (MMS). Then, we use the encoder of the trained U-Net as a feature extractor to construct a classification model, which we train by fine-tuning. We describe comparative evaluations with VGG11, ResNet18, and GoogLeNet, well-known models constructed by transfer learning on ImageNet, a large-scale dataset of natural images. Experimental results show that our model achieves higher classification performance than the models constructed by transfer learning on ImageNet. Our method is effective for constructing a CNN model from a limited training dataset.
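The encoder-reuse idea in the abstract can be sketched in PyTorch as follows. This is not the authors' code: the layer sizes, the two-class (crack / no-crack) head, and the tiny encoder are illustrative assumptions; only the overall pattern (pre-train a U-Net encoder on segmentation, then attach a classification head and fine-tune) follows the paper's description.

```python
# Sketch (assumed architecture, not the authors' implementation): reuse a
# U-Net encoder, pre-trained for segmentation, as the feature extractor of
# a crack/no-crack classifier, then fine-tune the whole model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Contracting path of a small U-Net-style network (sizes illustrative)."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.blocks(x)

class Classifier(nn.Module):
    """Classification head stacked on the (pre-trained) encoder."""
    def __init__(self, encoder, num_classes=2):
        super().__init__()
        self.encoder = encoder  # weights would come from U-Net pre-training
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        features = self.pool(self.encoder(x)).flatten(1)
        return self.fc(features)

# Suppose `encoder` had been trained inside a U-Net on the MMS pavement
# dataset; here it is instantiated untrained purely for illustration.
encoder = Encoder()
model = Classifier(encoder)

# Fine-tuning leaves every parameter trainable, consistent with the
# abstract's description of training the classification model.
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```

The key design point is that the pre-training task (segmentation on pavement images) is much closer to the target task than ImageNet classification, which is why a small in-domain dataset can still produce useful encoder features.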

Keywords—Convolutional Neural Network (CNN), crack detection, pavement inspection, Mobile Mapping System (MMS), pre-training, U-Net

Cite: Yusuke Fujita, Taisei Tanaka, Tomoki Hori, and Yoshihiko Hamamoto, "Classification Model Based on U-Net for Crack Detection from Asphalt Pavement Images," Journal of Image and Graphics, Vol. 11, No. 2, pp. 121-126, June 2023.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.