LoRA: Low-Rank Adaptation of Large Language Models

GPTKB entity

Statements (37)
Predicate Object
gptkbp:instanceOf machine learning method
gptkbp:abbreviation LoRA
gptkbp:appliesTo large language models
gptkbp:arXivID 2106.09685
gptkbp:author gptkb:Edward_J._Hu
  gptkb:Phillip_Wallis
  gptkb:Yelong_Shen
  gptkb:Yuanzhi_Li
  gptkb:Zeyuan_Allen-Zhu
  gptkb:Weizhu_Chen
  gptkb:Lu_Wang
  Shean Wang
gptkbp:benefit lower memory usage
  faster training
  enables adaptation to multiple tasks
gptkbp:citation 1000+
gptkbp:contrastsWith full fine-tuning
gptkbp:enables parameter-efficient fine-tuning
gptkbp:field gptkb:machine_learning
  natural language processing
  transfer learning
gptkbp:hasConcept adapts pre-trained models by freezing their weights and injecting trainable low-rank decomposition matrices into each Transformer layer (see the sketch below)
https://www.w3.org/2000/01/rdf-schema#label LoRA: Low-Rank Adaptation of Large Language Models
gptkbp:openSource gptkb:Hugging_Face_PEFT_library (usage sketch below)
gptkbp:proposedBy gptkb:Microsoft_Research
gptkbp:publicationYear 2021
gptkbp:publishedIn gptkb:arXiv
gptkbp:reduces number of trainable parameters (worked example below)
gptkbp:relatedTo prefix tuning
  prompt tuning
  adapter tuning
gptkbp:url https://arxiv.org/abs/2106.09685
gptkbp:usedIn gptkb:T5
  gptkb:GPT-3
  gptkb:BERT
gptkbp:bfsParent gptkb:LoRA_adapters
gptkbp:bfsLayer 7
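The gptkbp:hasConcept statement above summarizes the method: the pre-trained weight matrix W0 is frozen, and the update is learned as a product of two small matrices, h = W0 x + (alpha / r) * B A x, where B (out x r) starts at zero and A (r x in) starts random, so training begins from the unmodified model. A minimal PyTorch sketch of one such layer follows; the class name, rank r, and scaling alpha are illustrative choices, not the paper's reference code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained W0
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Paper-style init: A random Gaussian, B zero, so delta W = BA = 0 at start.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W0 x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because BA can be merged into W0 after training (W = W0 + (alpha / r) * BA), inference adds no latency, and swapping the small A, B pairs per task is what makes the "enables adaptation to multiple tasks" benefit above practical.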
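The gptkbp:reduces statement can be made concrete with a little arithmetic; the 4096 x 4096 projection size below is an assumed example dimension (typical of GPT-3-scale attention layers), not a figure from this entry.

```python
# Trainable parameters for one weight matrix: full fine-tuning vs. LoRA.
d, k, r = 4096, 4096, 8
full_ft = d * k          # 16,777,216 parameters updated by full fine-tuning
lora = r * (d + k)       # 65,536 parameters updated by LoRA (A: r*k, B: d*r)
print(f"full={full_ft:,}  lora={lora:,}  fraction={lora / full_ft:.2%}")
# full=16,777,216  lora=65,536  fraction=0.39%
```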
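The gptkbp:openSource statement points to the Hugging Face PEFT library, which ships a LoRA implementation. A hedged usage sketch follows; the "gpt2" checkpoint, target module, and hyperparameter values are placeholder choices for illustration, not recommendations from this entry.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                         # rank of the low-rank update
    lora_alpha=16,               # scaling factor alpha
    target_modules=["c_attn"],   # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameters
```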