Abstract—Real-time road scene understanding is a crucial challenge for vision-based Advanced Driver Assistance Systems (ADAS). In our previous work, we proposed a method that exploits the advantages of an enhancement-based segmentation approach to improve road segmentation results at reasonable computational cost. However, its performance suffered from the poor efficiency and generalizability of Conditional Random Field (CRF) models. To overcome these drawbacks, we propose a novel semi-supervised refinement strategy based on a modified Cycle Generative Adversarial Network (Cycle-GAN). Our contributions are the following: first, our method learns a mapping between unpaired 4-channel images and a label domain. Second, a new pair-wise metric learning on a subset of images is added to improve the robustness of the learning procedure. Third, we propose a generative network with fewer parameters than the original Cycle-GAN. Fourth, the adversarial learning procedure is restricted to the road boundary already predicted in our recent work; together, these contributions boost segmentation performance. Experiments on the KITTI benchmark show a 4-7% improvement over our previous work based on super-pixels and a Convolutional Neural Network (CNN), and achieve performance comparable to the top-performing algorithms in recent un/semi-supervised semantic segmentation tasks.
Index Terms—super-pixel, semantic segmentation, CNN, deep learning, conditional adversarial network, road segmentation, CycleGAN, un(semi)-supervised method
Cite: Farnoush Zohourian and Josef Pauli, "Coarse-to-Fine Semantic Road Segmentation Using Super-Pixel Data Model and Semi-Supervised Modified CycleGAN," Journal of Image and Graphics, Vol. 10, No. 4, pp. 132-144, December 2022.
Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.