

[2006.10803] Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations
source link: https://arxiv.org/abs/2006.10803

[Submitted on 18 Jun 2020 (v1), last revised 1 Dec 2020 (this version, v2)]
Supervision Accelerates Pre-training in Contrastive Semi-Supervised Learning of Visual Representations
We investigate a strategy for improving the efficiency of contrastive learning of visual representations by leveraging a small amount of supervised information during pre-training. We propose a semi-supervised loss, SuNCEt, based on noise-contrastive estimation and neighbourhood component analysis, that aims to distinguish examples of different classes in addition to the self-supervised instance-wise pretext tasks. On ImageNet, we find that SuNCEt can be used to match the semi-supervised learning accuracy of previous contrastive approaches while using less than half the amount of pre-training and compute. Our main insight is that leveraging even a small amount of labeled data during pre-training, and not only during fine-tuning, provides an important signal that can significantly accelerate contrastive learning of visual representations. Our code is available online at this http URL.
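The abstract describes SuNCEt only at a high level: a noise-contrastive, neighbourhood-component-analysis-style term over the small labelled set, added to the usual self-supervised instance-discrimination objective. The PyTorch sketch below is a minimal illustration of that reading, not the paper's implementation (the official code is linked from the arXiv page); the function names nt_xent and suncet, the cosine similarity scoring, the temperatures, and the simple sum of the two terms are all assumptions made for illustration.

```python
# Minimal sketch of a SuNCEt-style objective, inferred from the abstract only.
# Assumptions (not from the paper): cosine similarity, temperature 0.1,
# and an unweighted sum of the supervised and self-supervised terms.
import torch
import torch.nn.functional as F


def nt_xent(z1, z2, temperature=0.1):
    """Self-supervised instance-wise contrastive loss on two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))         # exclude self-pairs
    # the positive for view i is the other view of the same image: i <-> i + n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def suncet(z, labels, temperature=0.1):
    """NCA/NCE-style supervised term over the labelled subset: for each anchor,
    maximise the softmax mass assigned to examples of the same class."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))
    logp = F.log_softmax(sim, dim=1)
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_class &= ~self_mask
    # log of the total probability placed on same-class neighbours
    pos_logp = torch.logsumexp(logp.masked_fill(~same_class, float('-inf')), dim=1)
    valid = same_class.any(dim=1)                      # anchors with >= 1 positive
    return -pos_logp[valid].mean()
```

A hypothetical training step would compute nt_xent on two augmented views of each unlabelled image and add suncet on the small labelled minibatch, e.g. loss = nt_xent(h1, h2) + suncet(h_lab, y_lab); how the labelled examples are sampled and how the two terms are weighted is not specified by the abstract.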
Recommend
- (26) This article originally appeared on blog.zakjost.com. Introduction: I have recently worked to understand Noise Contrastive Es...
- (40) One-shot learning is a classification task where one, or a few, examples are used to classify many new examples in the future. This characterizes tasks seen in the field of face recognition, such as face identifi...
- (54) README.md ...
- (28) Contrastive self-supervised learning techniques are a promising class of methods that build representations by learning to encode what makes two things similar or different. The prophecy that self-supervised...
- (25) In this article, researchers and interns at Microsoft Research Asia propose a simple and efficient unsupervised pre-training method: Parametric Instance Classification (PIC). Unlike the non-parametric contrastive learning methods most commonly used today, PIC adopts a framework similar to supervised image classification, treating each instance or image as a separate class...
- (139) Self-Supervised Vision Transformers with DINO. PyTorch implementation and pretrained models for DINO. For details, see Emerging Properties in Self-Supervised Vision Transformers.
- (9) PAWS: Predicting View-Assignments with Support Samples. This repo provide...
- (13) Paper title: Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning. Authors: Ming Jin, Yizhen Zheng, Yuan-Fang Li, Chen Gong, Chuan Zhou, Shirui Pan. Venue: IJCAI 2021. Paper link...
- (7) This post uses semi-supervised text classification as an example and focuses on how the adversarial training trick can further improve system performance. Adversarial training is a regularization method for supervised learning; virtual adversarial training extends this regularization to the semi-supervised setting...
- (17) Paper title: Adversarial training methods for semi-supervised text classification. Author: Taekyung Kim. Venue: ICLR 2017. Paper link: download. Paper code...