Multimodal Neurons in Artificial Neural Networks
GPTKB entity
Statements (27)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:scientific_theory |
| gptkbp:defines | Neurons in artificial neural networks that respond to multiple types of input modalities, such as images and text. |
| gptkbp:enables | zero-shot learning; multimodal generation; cross-modal understanding; multimodal retrieval; semantic alignment |
| gptkbp:field | gptkb:artificial_intelligence; gptkb:machine_learning |
| gptkbp:firstDescribed | 2021 |
| gptkbp:inspiredBy | biological neurons |
| gptkbp:notableExample | gptkb:DALL-E; gptkb:GPT-4; CLIP model |
| gptkbp:notablePublication | Multimodal Neurons in Artificial Neural Networks (OpenAI, 2021) |
| gptkbp:property | can be probed with both text and image inputs (see the probing sketch after this table); emerge in large-scale neural networks; respond to abstract concepts across modalities |
| gptkbp:relatedTo | gptkb:artificial_neural_networks; deep learning; representation learning; multimodal learning |
| gptkbp:studiedBy | gptkb:OpenAI; gptkb:Google_Research |
| gptkbp:bfsParent | gptkb:Gabriel_Goh |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Multimodal Neurons in Artificial Neural Networks |
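
The gptkbp:property row states that these neurons can be probed with both text and image inputs. Below is a minimal sketch of what such a probe might look like, assuming the Hugging Face transformers CLIP implementation; the layer index, unit index, and image path are illustrative placeholders, not values taken from the OpenAI study.

```python
# Sketch: probe one unit in CLIP's vision tower with a photograph and with the
# same concept rendered as text (a typographic probe), then compare activations.
# LAYER, UNIT, and the image path are hypothetical choices for illustration.
import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

LAYER, UNIT = 8, 1234  # real probing would sweep many layers and units

activations = {}

def hook(module, inputs, output):
    # output shape: (batch, num_patches + 1, intermediate_dim) for this MLP sub-layer
    activations["value"] = output.detach()

handle = model.vision_model.encoder.layers[LAYER].mlp.fc1.register_forward_hook(hook)

def unit_activation(image: Image.Image) -> float:
    """Peak activation of the chosen unit over all image patches."""
    pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        model.get_image_features(pixel_values=pixel_values)
    return activations["value"][0, :, UNIT].max().item()

# Natural-image probe (placeholder file path).
photo = Image.open("spider_photo.jpg").convert("RGB")

# Typographic probe: the concept's name rendered on a blank image.
typographic = Image.new("RGB", (224, 224), "white")
ImageDraw.Draw(typographic).text((40, 100), "spider", fill="black")

print("photo activation:      ", unit_activation(photo))
print("typographic activation:", unit_activation(typographic))

handle.remove()
```

In the 2021 OpenAI study, a unit is considered multimodal when it responds strongly to conceptually matching probes across such different presentation forms; this sketch only shows the mechanics of reading one unit's activations, not the neuron-selection procedure itself.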