
[2202.09061] VLP: A Survey on Vision-Language Pre-training


[Submitted on 18 Feb 2022 (v1), last revised 21 Feb 2022 (this version, v2)]

In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial work has shown that such models benefit downstream uni-modal tasks and avoid training a new model from scratch. Can such pre-trained models also be applied to multi-modal tasks? Researchers have explored this question and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. We then summarize specific VLP models in detail. Finally, we discuss the new frontiers of VLP. To the best of our knowledge, this is the first survey on VLP. We hope that this survey can shed light on future research in the VLP field.
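
For readers unfamiliar with what a "pre-training objective" looks like in practice, the sketch below illustrates one widely used family in image-text VLP, contrastive image-text alignment, written in PyTorch. This is a minimal illustration under stated assumptions, not code from the survey or any specific model it covers; the encoder classes, feature dimensions, and the contrastive_loss helper are hypothetical names chosen for the example.

    # Minimal sketch of an image-text contrastive pre-training objective.
    # Assumes pre-extracted pooled features (2048-d visual, 768-d textual);
    # all module and function names here are illustrative, not from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyImageEncoder(nn.Module):
        def __init__(self, dim=256):
            super().__init__()
            self.proj = nn.Linear(2048, dim)  # project visual features to a shared space
        def forward(self, feats):
            return F.normalize(self.proj(feats), dim=-1)

    class ToyTextEncoder(nn.Module):
        def __init__(self, dim=256):
            super().__init__()
            self.proj = nn.Linear(768, dim)   # project text features to the same space
        def forward(self, feats):
            return F.normalize(self.proj(feats), dim=-1)

    def contrastive_loss(img_emb, txt_emb, temperature=0.07):
        # Symmetric InfoNCE: matched (image, text) pairs in the batch are positives,
        # all other pairings are negatives.
        logits = img_emb @ txt_emb.t() / temperature
        targets = torch.arange(img_emb.size(0), device=img_emb.device)
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    # Usage: a toy batch of 8 aligned image-text feature pairs.
    img = ToyImageEncoder()(torch.randn(8, 2048))
    txt = ToyTextEncoder()(torch.randn(8, 768))
    loss = contrastive_loss(img, txt)

Other objective families surveyed in the VLP literature (e.g. masked language/region modeling or image-text matching) follow the same pattern of defining a self-supervised loss over paired image-text data, but differ in what is predicted.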

Comments: A Survey on Vision-Language Pre-training
Subjects: Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL)
Cite as: arXiv:2202.09061 [cs.CV]
  (or arXiv:2202.09061v2 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2202.09061
