GitHub - THUDM/CogVideo: Text-to-video generation.
source link: https://github.com/THUDM/CogVideo
This is the official repo for the paper: CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers.
Video samples generated by CogVideo. The actual text inputs are in Chinese. Each sample is a 4-second clip of 32 frames; 9 frames are sampled uniformly for display purposes.
CogVideo can generate relatively high-frame-rate videos; one such 4-second, 32-frame clip is shown below.
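The uniform 9-of-32 frame sampling used for the display grids above can be sketched as follows. This is an illustrative snippet, not code from the CogVideo repo: it picks evenly spaced frame indices spanning the first through last frame of a clip.

```python
def uniform_frame_indices(num_frames: int, num_samples: int) -> list[int]:
    """Pick `num_samples` evenly spaced frame indices from a clip of
    `num_frames` frames, always including the first and last frame."""
    if num_samples >= num_frames:
        return list(range(num_frames))
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

# Sampling 9 display frames from a 32-frame clip, as described above.
indices = uniform_frame_indices(32, 9)
print(indices)
```

The first and last indices are pinned to frames 0 and 31 so the sampled strip covers the whole clip rather than truncating its ending.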