source link: https://github.com/PeterL1n/RobustVideoMatting
Robust Video Matting (RVM)
English | 中文
Official repository for the paper Robust High-Resolution Video Matting with Temporal Guidance. RVM is specifically designed for robust human video matting. Unlike existing neural models that process frames as independent images, RVM uses a recurrent neural network to process videos with temporal memory. RVM can perform matting in real time on any video without additional inputs. It achieves 76 FPS at 4K and 104 FPS at HD on an Nvidia GTX 1080 Ti GPU. The project was developed at ByteDance Inc.
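The key difference from per-frame matting is that RVM carries recurrent state from one frame to the next. As a minimal, framework-free sketch of this calling pattern (the `model_step` function and the four-element state list here are illustrative stand-ins, not the real model):

```python
def run_video(model_step, frames):
    """Run a recurrent model over a frame sequence, carrying state forward."""
    rec = [None] * 4    # RVM keeps four recurrent states; None means "no history yet".
    outputs = []
    for frame in frames:
        # The model returns the output for this frame plus the updated states,
        # which are fed back in on the next iteration.
        out, *rec = model_step(frame, *rec)
        outputs.append(out)
    return outputs
```

The actual PyTorch loop later in this README follows exactly this shape, with `fgr, pha, *rec = model(src, *rec, downsample_ratio)`.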
- [Aug 25 2021] Source code and pretrained models are published.
- [Jul 27 2021] Paper is accepted by WACV 2022.
Showreel
Watch the showreel video (YouTube, Bilibili) to see the model's performance.
All footage in the video is available in Google Drive and Baidu Pan (code: tb3w).
Demos
- Webcam Demo: Run the model live in your browser. Visualize recurrent states.
- Colab Demo: Test our model on your own videos with free GPU.
Download
We recommend the MobileNetv3 models for most use cases. The ResNet50 models are a larger variant with small performance improvements. Our model is available on various inference frameworks. See the inference documentation for more instructions.
| Framework | Download | Notes |
| --- | --- | --- |
| PyTorch | rvm_mobilenetv3.pth<br>rvm_resnet50.pth | Official weights for PyTorch. Doc |
| TorchHub | Nothing to download. | Easiest way to use our model in your PyTorch project. Doc |
| TorchScript | rvm_mobilenetv3_fp32.torchscript<br>rvm_mobilenetv3_fp16.torchscript<br>rvm_resnet50_fp32.torchscript<br>rvm_resnet50_fp16.torchscript | If inferencing on mobile, consider exporting int8 quantized models yourself. Doc |
| ONNX | rvm_mobilenetv3_fp32.onnx<br>rvm_mobilenetv3_fp16.onnx<br>rvm_resnet50_fp32.onnx<br>rvm_resnet50_fp16.onnx | Tested on ONNX Runtime with CPU and CUDA backends. Provided models use opset 12. Doc, Exporter |
| TensorFlow | rvm_mobilenetv3_tf.zip<br>rvm_resnet50_tf.zip | TensorFlow 2 SavedModel. Doc |
| TensorFlow.js | rvm_mobilenetv3_tfjs_int8.zip | Run the model on the web. Demo, Starter Code |
| CoreML | rvm_mobilenetv3_1280x720_s0.375_fp16.mlmodel<br>rvm_mobilenetv3_1280x720_s0.375_int8.mlmodel<br>rvm_mobilenetv3_1920x1080_s0.25_fp16.mlmodel<br>rvm_mobilenetv3_1920x1080_s0.25_int8.mlmodel | CoreML does not support dynamic resolution. Other resolutions can be exported yourself. Models require iOS 13+. `s` denotes `downsample_ratio`. Doc, Exporter |
All models are available in Google Drive and Baidu Pan (code: gym7).
PyTorch Example
- Install dependencies:
```sh
pip install -r requirements_inference.txt
```
- Load the model:
```python
import torch
from model import MattingNetwork

model = MattingNetwork('mobilenetv3').eval().cuda()  # or "resnet50"
model.load_state_dict(torch.load('rvm_mobilenetv3.pth'))
```
- To convert videos, we provide a simple conversion API:
```python
from inference import convert_video

convert_video(
    model,                           # The model, can be on any device (cpu or cuda).
    input_source='input.mp4',        # A video file or an image sequence directory.
    output_type='video',             # Choose "video" or "png_sequence".
    output_composition='output.mp4', # File path if video; directory path if png sequence.
    output_video_mbps=4,             # Output video mbps. Not needed for png sequence.
    downsample_ratio=None,           # A hyperparameter to adjust, or None for auto.
    seq_chunk=12,                    # Process n frames at once for better parallelism.
)
```
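The `seq_chunk` argument groups consecutive frames so they can be processed in one forward pass. The grouping itself can be sketched in plain Python (a hypothetical helper for illustration, not the repo's actual implementation):

```python
def chunk_frames(frames, seq_chunk):
    """Group an iterable of frames into lists of up to seq_chunk frames."""
    buf = []
    for frame in frames:
        buf.append(frame)
        if len(buf) == seq_chunk:
            yield buf
            buf = []
    if buf:          # Flush the final, possibly shorter chunk.
        yield buf
```

Larger chunks improve GPU parallelism at the cost of memory; the recurrent states still flow across chunk boundaries, so results are unchanged.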
- Or write your own inference code:
```python
import torch
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor
from inference_utils import VideoReader, VideoWriter

reader = VideoReader('input.mp4', transform=ToTensor())
writer = VideoWriter('output.mp4', frame_rate=30)

bgr = torch.tensor([.47, 1, .6]).view(3, 1, 1).cuda()  # Green background.
rec = [None] * 4                                       # Initial recurrent states.
downsample_ratio = 0.25                                # Adjust based on your video.

with torch.no_grad():
    for src in DataLoader(reader):                     # RGB tensor normalized to 0 ~ 1.
        fgr, pha, *rec = model(src.cuda(), *rec, downsample_ratio)  # Cycle the recurrent states.
        com = fgr * pha + bgr * (1 - pha)              # Composite to green background.
        writer.write(com)                              # Write frame.
```
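The `com = fgr * pha + bgr * (1 - pha)` line is standard alpha blending: the alpha matte `pha` weights the predicted foreground against the chosen background. Per pixel it reduces to the following (a plain-Python illustration, not part of the repo):

```python
def composite_pixel(fgr, pha, bgr):
    """Alpha-blend one RGB foreground pixel over a background pixel.

    fgr, bgr: (r, g, b) tuples with channels in 0..1; pha: alpha in 0..1.
    """
    return tuple(f * pha + b * (1 - pha) for f, b in zip(fgr, bgr))
```

With `pha = 1` the foreground shows through unchanged; with `pha = 0` only the background remains, which is why a solid green `bgr` produces a green-screen composite.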
- The models and converter API are also available through TorchHub.
```python
# Load the model.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3")  # or "resnet50"

# Converter API.
convert_video = torch.hub.load("PeterL1n/RobustVideoMatting", "converter")
```
Please see the inference documentation for details on the `downsample_ratio` hyperparameter, more converter arguments, and more advanced usage.
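When `downsample_ratio=None`, the converter picks a ratio automatically from the input resolution. A heuristic consistent with the values quoted in the Speed notes below (roughly capping the longer side at 512 px before the matting pass) could look like this; treat it as a sketch of the idea, not necessarily the repo's exact code:

```python
def auto_downsample_ratio(h, w):
    """Pick a downsample ratio so the longer side is at most 512 px."""
    return min(512 / max(h, w), 1)
```

For 1920x1080 input this yields about 0.267, close to the 0.25 recommended for HD; for 3840x2160 it yields about 0.133, close to the 0.125 recommended for 4K.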
Training and Evaluation
Please refer to the training documentation to train and evaluate your own model.
Speed
Speed is measured with `inference_speed_test.py` for reference.
- Note 1: HD uses `downsample_ratio=0.25`, 4K uses `downsample_ratio=0.125`. All tests use batch size 1 and frame chunk 1.
- Note 2: GPUs before the Turing architecture do not support FP16 inference, so GTX 1080 Ti uses FP32.
- Note 3: We only measure tensor throughput. The provided video conversion script in this repo is expected to be much slower, because it does not use hardware video encoding/decoding and does not transfer tensors on parallel threads. If you are interested in implementing hardware video encoding/decoding in Python, please refer to PyNvCodec.
Project Members