DeBERTa: Decoding-enhanced BERT with Disentangled Attention
GPTKB entity
Statements (26)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:model, large language model |
| gptkbp:author | gptkb:Pengcheng_He, gptkb:Weizhu_Chen, gptkb:Xiaodong_Liu, gptkb:Jianfeng_Gao |
| gptkbp:availableOn | gptkb:Hugging_Face_Model_Hub |
| gptkbp:basedOn | gptkb:BERT, Transformer architecture |
| gptkbp:developedBy | gptkb:Microsoft_Research |
| gptkbp:hasFeature | disentangled attention mechanism, enhanced mask decoder, absolute and relative position embeddings |
| https://www.w3.org/2000/01/rdf-schema#label | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| gptkbp:improves | gptkb:BERT, gptkb:RoBERTa |
| gptkbp:introducedIn | 2021 |
| gptkbp:language | English |
| gptkbp:notablePublication | gptkb:DeBERTa:_Decoding-enhanced_BERT_with_Disentangled_Attention |
| gptkbp:openSource | true |
| gptkbp:publishedIn | International Conference on Learning Representations (ICLR) 2021 |
| gptkbp:usedFor | question answering, natural language understanding, text classification |
| gptkbp:bfsParent | gptkb:DeBERTa |
| gptkbp:bfsLayer | 6 |