


SAFA: Structure Aware Face Animation (3DV2021)
Official Pytorch Implementation of 3DV2021 paper: SAFA: Structure Aware Face Animation.
Getting Started
git clone https://github.com/Qiulin-W/SAFA.git
Installation
Python 3.6 or higher is recommended.
1. Install PyTorch3D
Follow the guidance from: https://github.com/facebookresearch/pytorch3d/blob/master/INSTALL.md.
2. Install Other Dependencies
To install other dependencies run:
pip install -r requirements.txt
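To verify the installation, a quick import check can confirm that PyTorch and PyTorch3D are importable and that CUDA is visible. This is a convenience sketch, not part of the repository:

```python
# check_env.py: sanity-check the core dependencies (convenience sketch, not part of SAFA)
import torch
import pytorch3d

print("torch:", torch.__version__)
print("pytorch3d:", pytorch3d.__version__)
print("CUDA available:", torch.cuda.is_available())
```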
Usage
1. Preparation
a. Download the FLAME model, choose FLAME 2020, unzip it, and put generic_model.pkl under ./modules/data.
b. Download head_template.obj, landmark_embedding.npy, uv_face_eye_mask.png and uv_face_mask.png from DECA/data, and put them under ./modules/data.
c. Download the SAFA model checkpoint from Google Drive and put it under ./ckpt.
d. (Optional, required by the face swap demo) Download the pretrained face parser from face-parsing.PyTorch and put it under ./face_parsing/cp.
A quick sanity check for this file layout is sketched below.
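Since the demos expect these assets at fixed relative paths, a minimal layout check can save a failed run. The face-parser filename and the checkpoint name below are assumptions:

```python
# check_assets.py: verify the asset layout described above (sketch, not part of SAFA)
from pathlib import Path

required = [
    "modules/data/generic_model.pkl",       # FLAME 2020
    "modules/data/head_template.obj",       # from DECA/data
    "modules/data/landmark_embedding.npy",  # from DECA/data
    "modules/data/uv_face_eye_mask.png",    # from DECA/data
    "modules/data/uv_face_mask.png",        # from DECA/data
]
optional = [
    "face_parsing/cp/79999_iter.pth",  # face parser; filename is an assumption
]

missing = [p for p in required if not Path(p).is_file()]
ckpt_dir = Path("ckpt")
if not (ckpt_dir.is_dir() and any(ckpt_dir.iterdir())):  # checkpoint filename varies
    missing.append("ckpt/<SAFA checkpoint>")

print("required, missing:", missing or "none")
print("optional, missing:", [p for p in optional if not Path(p).is_file()] or "none")
```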
2. Demos
We provide demos for animation and face swap.
a. Animation demo
python animation_demo.py --config config/end2end.yaml --checkpoint path/to/checkpoint --source_image_pth path/to/source_image --driving_video_pth path/to/driving_video --relative --adapt_scale --find_best_frame
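Following First Order Motion Model conventions, --relative transfers relative rather than absolute motion, --adapt_scale adapts the motion scale to the source, and --find_best_frame starts the animation from the driving frame whose pose best matches the source image. In the FOMM demo, best-frame selection compares scale-normalized facial landmarks; a rough sketch of that idea using the face_alignment package (the exact logic in this repository may differ):

```python
# Sketch of --find_best_frame: pick the driving frame whose normalized landmarks
# are closest to the source. Mirrors the First Order Motion Model demo; the exact
# logic in this repository may differ.
import numpy as np
import face_alignment
from scipy.spatial import ConvexHull

# Note: newer face_alignment releases rename LandmarksType._2D to TWO_D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=True)

def normalize_kp(kp):
    kp = kp - kp.mean(axis=0, keepdims=True)      # remove translation
    area = np.sqrt(ConvexHull(kp[:, :2]).volume)  # remove overall scale
    kp[:, :2] = kp[:, :2] / area
    return kp

def find_best_frame(source, driving):
    # source: HxWx3 float image in [0, 1]; driving: list of such frames
    kp_source = normalize_kp(fa.get_landmarks(255 * source)[0])
    norms = []
    for frame in driving:
        kp = normalize_kp(fa.get_landmarks(255 * frame)[0])
        norms.append(np.sum((kp - kp_source) ** 2))
    return int(np.argmin(norms))
```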
b. Face swap demo
We adopt face-parsing.PyTorch to indicate the face regions in both the source and driving images.
For preprocessed source images and driving videos, run:
python face_swap_demo.py --config config/end2end.yaml --checkpoint path/to/checkpoint --source_image_pth path/to/source_image --driving_video_pth path/to/driving_video
For arbitrary images and videos, we use a face detector to detect and swap the corresponding face parts. Cropped images are resized to 256×256 to fit our model.
python face_swap_demo.py --config config/end2end.yaml --checkpoint path/to/checkpoint --source_image_pth path/to/source_image --driving_video_pth path/to/driving_video --use_detection
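The detection path boils down to: detect the face, crop it with some margin, and resize the crop to 256×256. Purely as an illustration, here is that step with OpenCV's bundled Haar cascade; the detector choice and margin are assumptions, not necessarily what face_swap_demo.py uses:

```python
# crop_face.py: illustrative detect-crop-resize to 256x256 (detector and margin
# are assumptions; face_swap_demo.py may use a different detector)
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(img_bgr, size=256, margin=0.2):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    m = int(margin * max(w, h))                         # pad the box a little
    x0, y0 = max(x - m, 0), max(y - m, 0)
    crop = img_bgr[y0:y + h + m, x0:x + w + m]
    return cv2.resize(crop, (size, size))
```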
Training
We modify the distributed training framework used in the First Order Motion Model: instead of torch.nn.DataParallel (DP), we adopt torch.nn.parallel.DistributedDataParallel (DDP) for faster training and a more balanced GPU memory load. The training procedure is divided into two steps: (1) pretraining the 3DMM estimator, and (2) end-to-end training.
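For reference, the process-per-GPU pattern that torch.distributed.launch drives looks roughly like this generic skeleton (not the repository's run_ddp.py):

```python
# ddp_sketch.py: generic DDP skeleton as driven by
# `python -m torch.distributed.launch --nproc_per_node N` (not SAFA's run_ddp.py)
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")  # one process per GPU
torch.cuda.set_device(args.local_rank)

model = torch.nn.Linear(10, 10).cuda()   # stand-in for the actual SAFA networks
model = DDP(model, device_ids=[args.local_rank])

dataset = TensorDataset(torch.randn(64, 10))
sampler = DistributedSampler(dataset)    # shards batches across processes
loader = DataLoader(dataset, batch_size=8, sampler=sampler)
# training loop: call sampler.set_epoch(epoch), then forward/backward as usual
```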
3DMM Estimator Pre-training
CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 4 run_ddp.py --config config/pretrain.yaml
End-to-end Training
CUDA_VISIBLE_DEVICES="0,1,2,3" python -m torch.distributed.launch --nproc_per_node 4 run_ddp.py --config config/end2end.yaml --tdmm_checkpoint path/to/tdmm_checkpoint_pth
Evaluation / Inference
Video Reconstruction
python run_ddp.py --config config/end2end.yaml --checkpoint path/to/checkpoint --mode reconstruction
Image Animation
python run_ddp.py --config config/end2end.yaml --checkpoint path/to/checkpoint --mode animation
3D Face Reconstruction
python tdmm_inference.py --data_dir directory/to/images --tdmm_checkpoint path/to/tdmm_checkpoint_pth
Dataset and Preprocessing
We use VoxCeleb1 to train and evaluate our model. The original YouTube videos are downloaded, cropped, and split following the instructions in video-preprocessing.
a. To obtain the facial landmark meta data from the preprocessed videos, run:
python video_ldmk_meta.py --video_dir directory/to/preprocessed_videos --out_dir directory/to/output_meta_files
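Conceptually, the meta file for each video stores per-frame facial landmarks. Here is a sketch of that extraction with the face_alignment package; the output naming and format are assumptions, and video_ldmk_meta.py defines the real layout:

```python
# ldmk_meta_sketch.py: per-frame landmark extraction (naming/format are assumptions;
# video_ldmk_meta.py defines the actual meta layout)
import os
import numpy as np
import imageio
import face_alignment

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device="cuda")

def extract_meta(video_path, out_dir):
    landmarks = []
    for frame in imageio.get_reader(video_path):
        preds = fa.get_landmarks(frame)
        # NaNs mark frames where no face was found
        landmarks.append(preds[0] if preds else np.full((68, 2), np.nan))
    name = os.path.splitext(os.path.basename(video_path))[0]
    np.save(os.path.join(out_dir, name + ".npy"), np.stack(landmarks))
```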
b. (Optional) Extract images from videos for 3DMM pretraining:
python extract_imgs.py
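This step just samples frames from each training video and writes them out as images, roughly as below (sampling rate and naming are assumptions; extract_imgs.py is the authoritative script):

```python
# extract_frames_sketch.py: dump every k-th frame of a video as a PNG
# (sampling rate and naming are assumptions; see extract_imgs.py)
import os
import imageio

def extract_frames(video_path, out_dir, every=10):
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(imageio.get_reader(video_path)):
        if i % every == 0:
            imageio.imwrite(os.path.join(out_dir, f"{i:06d}.png"), frame)
```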
Citation
If you find our work useful to your research, please consider citing:
@article{wang2021safa,
  title={SAFA: Structure Aware Face Animation},
  author={Wang, Qiulin and Zhang, Lu and Li, Bo},
  journal={arXiv preprint arXiv:2111.04928},
  year={2021}
}
License
Please refer to the LICENSE file.
Acknowledgement
Codes are heavily borrowed from First Order Motion Model. Some codes are also borrowed from DECA, generative-inpainting-pytorch, face-parsing.PyTorch, video-preprocessing.