Self-supervised learning



Self-supervised learning (SSL) is a learning paradigm based on the idea of using information contained within the training data itself to build better representations of it. Self-supervised models are typically trained to predict hidden parts of the input data from its visible parts.
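The core trick is that the training pairs come from the data alone: hide part of an example and use the original values as the target. A minimal sketch of this pair construction (the function name, mask token, and mask rate are illustrative choices, not from any particular library):

```python
import random

def make_ssl_pair(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Build a self-supervised training example from unlabeled text:
    the visible input has some tokens hidden, and the targets are the
    original tokens at the hidden positions."""
    rng = random.Random(seed)
    visible = list(tokens)
    targets = {}  # position -> original token the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            visible[i] = mask_token
            targets[i] = tok
    return visible, targets
```

No human labels are involved: the "supervision" is recovered from the data itself, which is what lets SSL scale to raw text or image corpora.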


Self-supervised learning has long been used in NLP. In language modeling, one tries to predict each word from the previous ones. Recent deep language models have introduced other techniques, such as partially masking words and letting a transformer attend to the context both before and after them (Devlin et al. 2019).
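Next-word prediction needs nothing beyond running text. As a toy illustration of the idea (not a neural model), a bigram counter already turns a raw corpus into a next-word predictor:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Count word bigrams: the supervision signal is simply the next
    word in the running text, so no labels are needed."""
    counts = defaultdict(Counter)
    for sent in corpus:
        words = sent.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen after `word`."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]
```

Neural language models replace the count table with a learned function, but the training objective is the same self-supervised one.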

Word vectors are another example of successful self-supervised learning, whose goal is to learn rich vector representations of words from their context.
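The signal these models exploit is distributional: words that appear in similar contexts should get similar vectors. A minimal sketch using raw co-occurrence counts (word2vec-style models compress this same signal into dense vectors by training; the function names here are illustrative):

```python
from collections import Counter
from math import sqrt

def context_vectors(corpus, window=1):
    """Represent each word by the counts of words seen within its
    context window."""
    vecs = {}
    for sent in corpus:
        words = sent.split()
        for i, w in enumerate(words):
            ctx = vecs.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    ctx[words[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm
```

On a corpus like `["the cat sat", "the dog sat"]`, "cat" and "dog" occur in identical contexts and end up with identical vectors, while "cat" and "sat" do not.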

SSL in computer vision

In 2019 and 2020, self-supervised learning became increasingly widespread in the computer vision community as results on standard benchmarks started to match those of regular supervised learning.


  1. Devlin et al. (2019). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv.

