gptkbp:instanceOf
|
machine learning technique
|
gptkbp:abbreviation
|
gptkb:LoRA
|
gptkbp:appliesTo
|
large language models
transformer models
|
gptkbp:arXivID
|
2106.09685
|
gptkbp:author
|
gptkb:Edward_J._Hu
gptkb:Phillip_Wallis
gptkb:Yelong_Shen
gptkb:Yuanzhi_Li
gptkb:Zeyuan_Allen-Zhu
gptkb:Weizhu_Chen
gptkb:Lu_Wang
gptkb:Shean_Wang
|
gptkbp:enables
|
efficient adaptation of pre-trained models
faster fine-tuning
training with less GPU memory
|
https://www.w3.org/2000/01/rdf-schema#label
|
Low-Rank Adaptation
|
gptkbp:introducedIn
|
2021
|
gptkbp:openSource
|
gptkb:PEFT_library
Hugging Face LoRA
|
gptkbp:paperTitle
|
gptkb:LoRA:_Low-Rank_Adaptation_of_Large_Language_Models
|
gptkbp:proposedBy
|
gptkb:Microsoft_Research
|
gptkbp:purpose
|
parameter-efficient fine-tuning
|
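The parameter-efficient fine-tuning idea can be sketched in a few lines: the pre-trained weight W stays frozen, and only a low-rank update BA (scaled by alpha / r) is trained. This is a minimal NumPy sketch, not the reference implementation; the dimensions, variable names, and initialisation scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4    # illustrative dimensions; r is the LoRA rank
alpha = 8                     # LoRA scaling hyperparameter

W = rng.standard_normal((d_in, d_out))      # frozen pre-trained weight
A = rng.standard_normal((d_in, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d_out))                    # trainable up-projection, zero-initialised

def lora_forward(x):
    # Frozen path plus the low-rank update, scaled by alpha / r.
    return x @ W + (x @ A @ B) * (alpha / r)

x = rng.standard_normal((2, d_in))
y = lora_forward(x)

# Because B starts at zero, the adapted layer initially matches the frozen one,
# so adding the adapter does not perturb the pre-trained model at step 0.
assert np.allclose(y, x @ W)
```

Zero-initialising one of the two factors is what makes the adapter a no-op at the start of training.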
gptkbp:reduces
|
number of trainable parameters
|
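The reduction in trainable parameters follows from the factorisation: a full d x k weight has d*k trainable entries, while a rank-r LoRA update trains only r*(d + k). The dimensions below are illustrative (a square 4096 x 4096 projection, rank 8), not taken from any specific model.

```python
# Trainable-parameter count: full fine-tuning vs. a rank-r LoRA update.
d, k, r = 4096, 4096, 8   # illustrative layer dimensions and rank

full = d * k              # every entry of W is trainable
lora = r * (d + k)        # only B (d x r) and A (r x k) are trainable

print(full, lora, full / lora)   # 16777216 65536 256.0
```

At rank 8 this single layer trains roughly 0.4% of the parameters that full fine-tuning would.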
gptkbp:relatedTo
|
fine-tuning
adapters
prompt tuning
parameter-efficient transfer learning
|
gptkbp:usedIn
|
computer vision
natural language processing
|