
Momentum Contrast for Unsupervised Visual Representation Learning

source link: http://ppwwyyxx.com/publication/moco/

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick

November 2019

Computer Vision and Pattern Recognition (CVPR), 2020 (Oral)
Best Paper Nomination (top 30)

Abstract

We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
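
To make the abstract's two mechanisms concrete, here is a minimal PyTorch sketch of one MoCo training step, modeled on the PyTorch-style pseudocode in the paper. The momentum coefficient m = 0.999 and temperature t = 0.07 are the paper's defaults; the function name moco_step, the optimizer plumbing, and the assumption that the two augmented views arrive pre-computed are illustrative simplifications (shuffling BN and distributed details are omitted).

```python
import torch
import torch.nn.functional as F

m = 0.999  # momentum coefficient for the key encoder (paper default)
t = 0.07   # softmax temperature (paper default)

def moco_step(f_q, f_k, queue, x_q, x_k, optimizer):
    """One contrastive update (illustrative sketch).

    f_q, f_k : query/key encoders with identical architectures
    queue    : (dim, K) tensor of negative keys; K a multiple of batch size
    x_q, x_k : two randomly augmented views of the same mini-batch
    """
    q = F.normalize(f_q(x_q), dim=1)       # queries: (N, dim)
    with torch.no_grad():                  # no gradient flows to the key encoder
        k = F.normalize(f_k(x_k), dim=1)   # keys: (N, dim)

    # positive logits (N, 1) and negative logits against the queue (N, K)
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
    l_neg = torch.einsum("nc,ck->nk", q, queue)
    logits = torch.cat([l_pos, l_neg], dim=1) / t

    # the positive key sits at index 0, so InfoNCE is plain cross-entropy
    labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    loss = F.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                       # gradient update for f_q only

    # momentum update: key encoder is a moving average of the query encoder
    with torch.no_grad():
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)

    # dictionary as a queue: enqueue the newest keys, dequeue the oldest
    queue = torch.cat([queue[:, k.shape[0]:], k.T], dim=1)
    return loss, queue
```

Because m is close to 1, the key encoder evolves slowly, so keys already sitting in the queue stay consistent with the current encoder; this is what lets the dictionary grow far beyond the mini-batch size without the representations in it going stale.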


