Statements (27)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:machine_learning_technique |
| gptkbp:alternativeTo | full fine-tuning, prefix tuning |
| gptkbp:appliesTo | gptkb:Stable_Diffusion, gptkb:BERT, gptkb:GPT, transformer models |
| gptkbp:benefit | enables multi-task adaptation, faster training, reduces memory usage |
| gptkbp:citation | gptkb:arXiv:2106.09685 |
| gptkbp:enables | parameter-efficient training |
| gptkbp:fullName | gptkb:Low-Rank_Adaptation |
| gptkbp:hasConcept | injects trainable low-rank matrices into weights |
| gptkbp:openSource | gptkb:Hugging_Face_PEFT_library |
| gptkbp:proposedBy | gptkb:Microsoft_Research |
| gptkbp:publicationYear | 2021 |
| gptkbp:reduces | number of trainable parameters |
| gptkbp:relatedTo | gptkb:PEFT, adapter methods, prompt tuning |
| gptkbp:usedFor | fine-tuning large language models |
| gptkbp:usedIn | computer vision, natural language processing |
| gptkbp:bfsParent | gptkb:Diffusers |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | LoRA |
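
The gptkbp:hasConcept and gptkbp:reduces statements summarize the mechanism from the cited paper (gptkb:arXiv:2106.09685): the pretrained weight matrix W stays frozen while a trainable low-rank product BA is added to it, with B of shape (d, r) and A of shape (r, k) for a rank r much smaller than d and k, so only r(d + k) parameters train instead of d·k. A minimal PyTorch sketch of this idea (the class name `LoRALinear` and the hyperparameter values are illustrative, not from the source):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    h = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only A and B receive gradients.
        for p in self.base.parameters():
            p.requires_grad = False
        # A: (r, in_features) with small Gaussian init; B: (out_features, r)
        # initialized to zero, so BA = 0 and the adapted layer initially
        # matches the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# How "reduces number of trainable parameters" plays out for one layer:
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 65,536 trainable params vs. ~16.8M for full fine-tuning
```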
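The gptkbp:openSource statement points to the Hugging Face PEFT library, which packages this technique for transformer models. A hedged usage sketch, assuming a Hugging Face causal language model is already available ("gpt2" and the `target_modules` entry are example choices, not fixed by the source):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Any Hugging Face causal LM works; gpt2 is used here as a small example.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# r, lora_alpha, and target_modules are tuning choices for the chosen model.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total param counts
```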