LoRA

GPTKB entity

Statements (27)
Predicate Object
gptkbp:instanceOf machine learning technique
gptkbp:alternativeTo full fine-tuning
gptkbp:alternativeTo prefix tuning
gptkbp:appliesTo gptkb:Stable_Diffusion
gptkbp:appliesTo gptkb:BERT
gptkbp:appliesTo gptkb:GPT
gptkbp:appliesTo transformer models
gptkbp:benefit enables multi-task adaptation
gptkbp:benefit faster training
gptkbp:benefit reduces memory usage
gptkbp:citation gptkb:arXiv:2106.09685
gptkbp:enables parameter-efficient training
gptkbp:fullName gptkb:Low-Rank_Adaptation
gptkbp:hasConcept injects trainable low-rank update matrices alongside frozen pretrained weights (see the sketch after this table)
https://www.w3.org/2000/01/rdf-schema#label LoRA
gptkbp:openSource gptkb:Hugging_Face_PEFT_library (usage sketch after this table)
gptkbp:proposedBy gptkb:Microsoft_Research
gptkbp:publicationYear 2021
gptkbp:reduces number of trainable parameters
gptkbp:relatedTo gptkb:PEFT
gptkbp:relatedTo adapter methods
gptkbp:relatedTo prompt tuning
gptkbp:usedFor fine-tuning large language models
gptkbp:usedIn computer vision
gptkbp:usedIn natural language processing
gptkbp:bfsParent gptkb:Diffusers
gptkbp:bfsLayer 6
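
The gptkbp:hasConcept, gptkbp:enables, and gptkbp:reduces statements describe the mechanism: a pretrained weight matrix W is frozen, and a trainable low-rank product B·A is added to it, so the layer computes h = x(W + BA)ᵀ. Below is a minimal PyTorch sketch of that idea; the class name LoRALinear and the hyperparameters rank and alpha are illustrative, not taken from the cited paper or any library.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a linear layer with a LoRA-style low-rank update."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight (stands in for a layer of BERT, GPT, etc.).
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Trainable low-rank factors: A starts small and random, B starts at
        # zero, so the adapted layer is initially identical to the frozen one.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        frozen = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T * self.scaling
        return frozen + update
```

This illustrates gptkbp:reduces: for a 4096x4096 layer, full fine-tuning trains ~16.8M weights, while a rank-8 LoRA update trains only 2 * 4096 * 8 = 65,536, which is also why training is faster and uses less memory.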
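The gptkbp:openSource statement points to the Hugging Face PEFT library, which implements LoRA. The following is a hedged usage sketch: the base model "gpt2" and target_modules=["c_attn"] (GPT-2's fused attention projection) are illustrative assumptions; check the PEFT documentation for the modules your model needs.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works here
config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor alpha
    target_modules=["c_attn"],  # which layers receive LoRA adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```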