Statements (31)
Predicate | Object |
---|---|
gptkbp:instanceOf | machine learning technique |
gptkbp:appliesTo | transformer models |
gptkbp:arXivID | 2106.09685 |
gptkbp:author | gptkb:Edward_J._Hu, gptkb:Phillip_Wallis, gptkb:Yelong_Shen, gptkb:Yuanzhi_Li, gptkb:Zeyuan_Allen-Zhu, gptkb:Weizhu_Chen, gptkb:Lu_Wang, Shean Wang |
gptkbp:citation | gptkb:LoRA:_Low-Rank_Adaptation_of_Large_Language_Models |
gptkbp:enables | multi-task learning, domain adaptation, efficient adaptation of large models |
gptkbp:fullName | gptkb:Low-Rank_Adaptation |
https://www.w3.org/2000/01/rdf-schema#label | LoRA adapters |
gptkbp:introduced | gptkb:Microsoft_Research |
gptkbp:introducedIn | 2021 |
gptkbp:openSource | gptkb:PEFT_library, gptkb:Hugging_Face_Transformers |
gptkbp:publishedIn | gptkb:arXiv |
gptkbp:reduces | number of trainable parameters |
gptkbp:relatedTo | prefix tuning, prompt tuning, adapter layers |
gptkbp:usedFor | parameter-efficient fine-tuning |
gptkbp:usedIn | computer vision, natural language processing |
gptkbp:bfsParent | gptkb:Stable_Diffusion_XL |
gptkbp:bfsLayer | 6 |
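The statements above identify LoRA as a parameter-efficient fine-tuning technique that reduces the number of trainable parameters, with open-source implementations in the PEFT library and Hugging Face Transformers. The core idea from the cited paper is to freeze the pretrained weight matrix W and train only a low-rank update ΔW = BA, where the factors B and A have a small rank r. Below is a minimal usage sketch, assuming the PEFT library's `LoraConfig`/`get_peft_model` API; the checkpoint name, rank, and target-module choices are illustrative assumptions, not values taken from the statements above.

```python
# Minimal LoRA fine-tuning setup via the PEFT library (a sketch; the
# "facebook/opt-350m" checkpoint and all hyperparameters are illustrative
# assumptions, not values from this page).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Inject rank-r update matrices into the attention query/value projections;
# the pretrained weights stay frozen and only the low-rank factors train.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the update delta-W = B @ A
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # which linear layers to adapt
    lora_dropout=0.05,
)

model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```

Because only the rank-r factors are trained, the trainable count per adapted d×k matrix drops from d·k to r·(d + k), which for typical ranks is a small fraction of the full model — the "reduces: number of trainable parameters" statement in the table.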