Relighting Humans in the Wild:
Monocular Full-Body Human Relighting with Domain Adaptation

Daichi Tajima, Yoshihiro Kanamori, Yuki Endo

University of Tsukuba

Pacific Graphics 2021



Abstract:

Modern supervised approaches for human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data with a small number of individuals) or limited to diffuse materials (e.g., commercial 3D scanned human models). Thus, existing human relighting techniques suffer from poor generalization capability and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network for enhancing non-diffuse reflection by learning residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve higher generalization capability to various cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior that greatly reduces flickering artifacts even in challenging settings under dynamic illuminations.
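
The following is a minimal PyTorch sketch of the two-stage pipeline described above: a diffuse-only relighting network, followed by a second network that adds a learned non-diffuse residual on top of the diffuse rendering. All architectures, tensor shapes, and names here are illustrative assumptions for readers, not the authors' released code (see the Code link below for the official implementation).

    # Hypothetical sketch of the two-stage relighting pipeline (assumptions, not the authors' code).
    import torch
    import torch.nn as nn

    class DiffuseRelightNet(nn.Module):
        """Stage 1 (assumed): from a masked human image, predict albedo and a
        diffuse shading map conditioned on a target spherical-harmonics light."""
        def __init__(self, sh_coeffs: int = 9):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.albedo_head = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())
            self.light_proj = nn.Linear(sh_coeffs * 3, 64)  # condition on target SH light
            self.shading_head = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Softplus())

        def forward(self, img, target_sh):
            feat = self.backbone(img)                      # (B, 64, H, W)
            light = self.light_proj(target_sh.flatten(1))  # (B, 64)
            feat = feat + light[:, :, None, None]          # simple additive conditioning
            return self.albedo_head(feat), self.shading_head(feat)

    class ResidualNet(nn.Module):
        """Stage 2 (assumed): predict a non-diffuse residual on top of the
        diffuse rendering, trained on real photos vs. diffuse reconstructions."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, img, diffuse):
            return self.net(torch.cat([img, diffuse], dim=1))

    def relight(img, target_sh, stage1, stage2):
        """Two-stage relighting: diffuse rendering plus learned non-diffuse residual."""
        albedo, shading = stage1(img, target_sh)
        diffuse = albedo * shading
        residual = stage2(img, diffuse)
        return (diffuse + residual).clamp(0.0, 1.0)

    if __name__ == "__main__":
        stage1, stage2 = DiffuseRelightNet(), ResidualNet()
        img = torch.rand(1, 3, 256, 256)   # masked input photo
        target_sh = torch.rand(1, 9, 3)    # 2nd-order SH coefficients per RGB channel
        out = relight(img, target_sh, stage1, stage2)
        print(out.shape)                   # torch.Size([1, 3, 256, 256])

In this sketch, stage 2 only needs the input photo and the diffuse reconstruction, which mirrors how the paper's residual network can be trained on real photographs where no ground-truth reflectance is available.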

Keywords: Image manipulation; Neural networks


Video:


Code: (published on Oct. 18, 2021)


Real Photo Dataset:


Publication:

  1. Daichi Tajima, Yoshihiro Kanamori, Yuki Endo: "Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation," Computer Graphics Forum (Proc. of Pacific Graphics 2021), 2021. [arXiv][PDF (14 MB)]

BibTeX Citation

@article{tajimaPG21,
  author  = {Daichi Tajima and Yoshihiro Kanamori and Yuki Endo},
  title   = {Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation},
  journal = {Computer Graphics Forum (Proc. of Pacific Graphics 2021)},
  volume  = {40},
  number  = {7},
  pages   = {205--216},
  year    = {2021}
}

Acknowledgments

The authors would like to thank ZOZO, Inc. for providing a real photograph dataset, without which this work would not have been possible. The authors would also like to thank the anonymous referees for their constructive comments.

Last modified: Oct. 2021
