Statements (13)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:research |
| gptkbp:aimsTo | connect vision and language |
| gptkbp:conductedBy | gptkb:OpenAI |
| gptkbp:demonstrates | zero-shot learning capabilities |
| gptkbp:enables | image classification without task-specific training |
| gptkbp:focusesOn | contrastive language–image pre-training |
| gptkbp:influenced | multimodal AI research |
| gptkbp:publishedIn | 2021 |
| gptkbp:resultedIn | CLIP model |
| gptkbp:usedDataset | 400 million image-text pairs |
| gptkbp:bfsParent | gptkb:OpenAI,_Inc. |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | OpenAI CLIP Research |
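The statements above list zero-shot image classification as something CLIP enables (gptkbp:enables) through contrastive language–image pre-training (gptkbp:focusesOn). As a minimal sketch of what that looks like in practice, the snippet below uses the publicly available Hugging Face `transformers` implementation of the CLIP model; the checkpoint name, image file, and candidate labels are illustrative assumptions, not values taken from the statements.

```python
# Illustrative sketch only: zero-shot image classification with a public
# CLIP checkpoint. The image path and candidate labels are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image file
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and the text prompts; CLIP scores their similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-to-text similarity scores; a softmax over
# them yields label probabilities without any task-specific training.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The candidate labels act as the "classifier": swapping in a different label list changes the task with no retraining, which is the zero-shot behaviour the statements describe.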