Sparse Transformer: Generating Text with Sparse Attention
GPTKB entity
Statements (25)
Predicate | Object |
---|---|
gptkbp:instanceOf | gptkb:academic_journal |
gptkbp:arXivID | 1904.10509 |
gptkbp:author | gptkb:Rewon_Child, gptkb:Scott_Gray, gptkb:Alec_Radford, gptkb:Ilya_Sutskever |
gptkbp:citation | gptkb:Sparse_Transformer:_Generating_Text_with_Sparse_Attention |
gptkbp:describes | reducing computational complexity of transformers |
gptkbp:field | gptkb:machine_learning, natural language processing |
gptkbp:focusesOn | efficient transformer models |
gptkbp:hasKeyword | gptkb:transformation, language modeling, sparse attention |
https://www.w3.org/2000/01/rdf-schema#label | Sparse Transformer: Generating Text with Sparse Attention |
gptkbp:influenced | gptkb:Big_Bird, gptkb:Longformer, Reformer |
gptkbp:language | English |
gptkbp:proposedBy | sparse attention mechanism |
gptkbp:publicationYear | 2019 |
gptkbp:publishedBy | gptkb:OpenAI |
gptkbp:url | https://arxiv.org/abs/1904.10509 |
gptkbp:bfsParent | gptkb:Google_Brain_(former) |
gptkbp:bfsLayer | 7 |
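The `gptkbp:describes` and `gptkbp:focusesOn` statements above refer to the paper's sparse attention mechanism, which lowers the cost of self-attention by letting each position attend to only a structured subset of earlier positions instead of all of them. The NumPy sketch below illustrates one such pattern, the strided pattern from the paper (arXiv:1904.10509); the function names, shapes, and the single-head formulation are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of strided sparse attention, assuming a single head
# and unbatched (n, d) inputs. Names here are illustrative, not from
# the paper's code.
import numpy as np

def strided_sparse_mask(n: int, stride: int) -> np.ndarray:
    """Boolean (n, n) mask: True where query i may attend to key j.

    Combines the paper's two strided components:
      - local: j lies in the previous `stride` positions of i
      - strided: (i - j) is a multiple of `stride`
    Both are restricted to j <= i (causal / autoregressive).
    """
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    causal = j <= i
    local = (i - j) < stride
    strided = (i - j) % stride == 0
    return causal & (local | strided)

def sparse_attention(q, k, v, mask):
    """Masked scaled dot-product attention; q, k, v are (n, d)."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # block disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

n, d, stride = 16, 8, 4
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = sparse_attention(q, k, v, strided_sparse_mask(n, stride))
print(out.shape)  # (16, 8)
```

With `stride` chosen near sqrt(n), each position attends to O(sqrt(n)) keys rather than O(n), which is the source of the complexity reduction the table's `gptkbp:describes` statement summarizes.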