Statements (28)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:machine_learning_concept |
| gptkbp:appliesTo | large language models |
| gptkbp:compatibleWith | gradient descent during inference, parameter updates |
| gptkbp:contrastsWith | fine-tuning, traditional supervised learning |
| gptkbp:demonstrates | Brown et al. 2020 |
| gptkbp:enables | dynamic task specification, learning from context, models to learn from prompts, task adaptation without retraining |
| gptkbp:format | prompt with examples |
| gptkbp:introducedIn | GPT-3 paper |
| gptkbp:relatedTo | few-shot learning, zero-shot learning, prompt engineering |
| gptkbp:requires | large model capacity |
| gptkbp:studiedIn | natural language processing, machine learning research |
| gptkbp:usedBy | gptkb:Gemini, gptkb:OpenAI_GPT-3, gptkb:Claude, gptkb:LLaMA, gptkb:PaLM, gptkb:OpenAI_GPT-4 |
| gptkbp:bfsParent | gptkb:Large_Language_Models |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | In-Context Learning |
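The `gptkbp:format` statement above ("prompt with examples") can be illustrated with a minimal sketch: a few labeled demonstrations are concatenated ahead of an unanswered query, and the model continues the pattern without any parameter updates. The function name, labels, and demonstration pairs below are illustrative assumptions, not part of any particular API.

```python
# Minimal sketch of the "prompt with examples" format behind in-context
# learning: k demonstrations precede the query, and the model is expected
# to fill in the final, empty output slot. The examples are hypothetical.

def build_few_shot_prompt(examples, query, input_label="Input", output_label="Output"):
    """Concatenate labeled demonstrations followed by the unanswered query."""
    parts = [f"{input_label}: {x}\n{output_label}: {y}" for x, y in examples]
    # The query ends with an empty output slot for the model to complete.
    parts.append(f"{input_label}: {query}\n{output_label}:")
    return "\n\n".join(parts)

examples = [
    ("I loved this movie!", "positive"),
    ("Terrible service, never again.", "negative"),
]
prompt = build_few_shot_prompt(examples, "The food was amazing.")
print(prompt)
```

This string would then be sent to a sufficiently large language model (see `gptkbp:requires`), which adapts to the sentiment-classification task purely from the context, in contrast with fine-tuning.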