Manuscript received December 4, 2022; revised April 22, 2023; accepted May 10, 2023.
Abstract—The recent spread of smartphones and social networking services has greatly increased opportunities to view images of human faces. In particular, the generation of face images via facial expression transformation has already been realized using deep learning–based approaches. However, existing deep learning–based models can generate only low-resolution images due to limited computational resources; consequently, the generated images are blurry or aliased. To address this problem, in our previous work we proposed a two-step method that enhances the resolution of the generated facial images by cascading a super-resolution network after the generative model, which can be considered a serial model. We further proposed a parallel model that trains a generative adversarial network and a super-resolution network jointly through multitask learning. In this paper, we propose a new model that integrates self-supervised guidance encoders into the parallel model to further improve the accuracy of the generated results. Using the peak signal-to-noise ratio as an evaluation index, image quality was improved by 0.25 dB for the male test data and 0.28 dB for the female test data compared with our previous multitask-based parallel model.
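The abstract reports gains in peak signal-to-noise ratio (PSNR), the standard pixel-level fidelity metric for generated and super-resolved images. As a reference for how such dB values are computed, the following is a minimal sketch of PSNR for 8-bit images; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def psnr(reference, generated, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy example: a flat 4x4 "image" and a copy with one perturbed pixel.
ref = np.full((4, 4), 128, dtype=np.uint8)
gen = ref.copy()
gen[0, 0] = 130  # introduce a small error
print(round(psnr(ref, gen), 2))  # → 54.15
```

Because PSNR is logarithmic in the mean squared error, the reported improvements of 0.25–0.28 dB correspond to a modest but consistent reduction in pixel-wise error over the earlier parallel model.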
Keywords—image processing, deep learning, facial expression transformation, generative adversarial networks, super-resolution
Cite: Tatsuya Hanano, Masataka Seo, and Yen-Wei Chen, "Generation of High-Resolution Facial Expression Images Using a Super-Resolution Technique and Self-Supervised Guidance," Journal of Image and Graphics, Vol. 11, No. 3, pp. 302-308, September 2023.
Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.