GitHub - NVlabs/few-shot-vid2vid
source link: https://github.com/NVlabs/few-shot-vid2vid
Few-shot vid2vid
Project | YouTube | arXiv
PyTorch implementation of few-shot photorealistic video-to-video translation. It can be used to generate human motion videos from poses, synthesize talking heads from edge maps, or turn semantic label maps into photorealistic videos. The core of video-to-video translation is image-to-image translation; some of our work in that space can be found in pix2pixHD and SPADE.
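To make the few-shot setting concrete, here is a minimal, purely hypothetical sketch (not the repository's actual API, which has not been released at the time of this README): a generator receives a handful of example images of the target subject plus a per-frame sequence of condition maps (poses, edge maps, or semantic labels), and emits one output frame per condition map. The toy "generator" below just modulates the mean of the examples by each map; a real model would use a learned network.

```python
import numpy as np

def few_shot_generate(example_images, condition_maps):
    """Toy stand-in for a few-shot vid2vid generator (illustrative only).

    example_images: list of (H, W, 3) arrays -- a few shots of the target.
    condition_maps:  list of (H, W) arrays   -- per-frame poses/edges/labels.
    Returns one (H, W, 3) frame per condition map.
    """
    # Aggregate the few example shots into a single reference appearance.
    reference = np.mean(np.stack(example_images), axis=0)  # (H, W, 3)
    frames = []
    for cond in condition_maps:
        # Hypothetical "conditioning": scale the reference by the map.
        frames.append(reference * cond[..., None])
    return frames

# Usage: 2 example images, a 3-frame condition sequence -> 3 output frames.
examples = [np.ones((4, 4, 3)), np.zeros((4, 4, 3))]
conds = [np.full((4, 4), 0.5) for _ in range(3)]
video = few_shot_generate(examples, conds)
print(len(video), video[0].shape)  # 3 (4, 4, 3)
```

The point of the sketch is the interface, not the math: at test time the model adapts to an unseen subject from only the example images, rather than being retrained per subject.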
Few-shot Video-to-Video Synthesis
Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, Bryan Catanzaro
NVIDIA Corporation
In Neural Information Processing Systems (NeurIPS), 2019
Example Results
- Dance Videos
- Talking Head Videos
- Street View Videos
Code Coming Soon