| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:Machine_Learning_Paradigm |
| gptkbp:advantage | Enables pretraining of models |
| gptkbp:advantage | Improves generalization |
| gptkbp:advantage | Reduces need for labeled data |
| gptkbp:distinctFrom | Supervised Learning uses labeled data |
| gptkbp:distinctFrom | Unsupervised Learning does not use labels or pseudo-labels |
| gptkbp:example | gptkb:Masked_Language_Modeling |
| gptkbp:example | gptkb:BYOL |
| gptkbp:example | Autoencoders |
| gptkbp:example | MoCo |
| gptkbp:example | SimCLR |
| gptkbp:example | Contrastive Learning |
| gptkbp:field | gptkb:Machine_Learning |
| gptkbp:field | gptkb:artificial_intelligence |
| gptkbp:goal | Learn representations from unlabeled data |
| gptkbp:method | Generate pseudo-labels from the data itself |
| gptkbp:notableFor | gptkb:BERT |
| gptkbp:notableFor | gptkb:Vision_Transformers |
| gptkbp:notableFor | GPT (Generative Pre-trained Transformer) |
| gptkbp:notableFor | Self-supervised Speech Models |
| gptkbp:originatedIn | Early 2010s |
| gptkbp:output | Learned Representations |
| gptkbp:popularizedBy | gptkb:Yann_LeCun |
| gptkbp:popularizedBy | gptkb:Facebook_AI_Research |
| gptkbp:relatedConcept | gptkb:Representation_Learning |
| gptkbp:relatedConcept | Data Augmentation |
| gptkbp:relatedConcept | Contrastive Loss |
| gptkbp:relatedConcept | Masked Prediction |
| gptkbp:relatedConcept | Pretext Task |
| gptkbp:relatedConcept | Pseudo-labeling |
| gptkbp:relatedConcept | Self-labeling |
| gptkbp:relatedTo | gptkb:Unsupervised_Learning |
| gptkbp:relatedTo | Supervised Learning |
| gptkbp:trainer | Unlabeled Data |
| gptkbp:usedFor | Transfer Learning |
| gptkbp:usedFor | Feature Extraction |
| gptkbp:usedFor | Pretraining Neural Networks |
| gptkbp:usedFor | Downstream Tasks |
| gptkbp:usedIn | gptkb:Computer_Vision |
| gptkbp:usedIn | gptkb:Representation_Learning |
| gptkbp:usedIn | gptkb:Natural_Language_Processing |
| gptkbp:usedIn | gptkb:Speech_Recognition |
| gptkbp:bfsParent | gptkb:Unsupervised_Learning |
| gptkbp:bfsParent | gptkb:Foundation_Model |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | Self-supervised Learning |
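The `gptkbp:method` value ("Generate pseudo-labels from data itself") can be made concrete with a minimal sketch. This is one illustrative instance of a pretext task (rotation prediction, in the style of RotNet), not *the* canonical method; the helper name `make_rotation_pretext` and the toy data are assumptions for illustration only.

```python
import numpy as np

def make_rotation_pretext(images):
    """Turn unlabeled images into a pseudo-labeled dataset.

    Each image is rotated by 0, 90, 180, and 270 degrees; the rotation
    index (0-3) becomes the pseudo-label, so the supervision signal is
    derived entirely from the data itself -- no human annotation needed.
    """
    xs, ys = [], []
    for img in images:
        for k in range(4):                # k quarter-turns
            xs.append(np.rot90(img, k))   # transformed view of the input
            ys.append(k)                  # pseudo-label from the transform
    return np.stack(xs), np.array(ys)

# Example: five unlabeled 8x8 "images" yield twenty pseudo-labeled samples.
unlabeled = np.random.rand(5, 8, 8)
X, y = make_rotation_pretext(unlabeled)
print(X.shape, y.shape)  # (20, 8, 8) (20,)
```

A model trained to predict `y` from `X` must learn image structure to succeed, and its learned representations can then be transferred to downstream tasks, which is the pattern the `gptkbp:usedFor` entries describe.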