Statements (23)
Predicate | Object |
---|---|
gptkbp:instanceOf | gptkb:convolutional_neural_network |
gptkbp:appliesTo | image generation, language modeling |
gptkbp:attentionPattern | fixed sparse attention, local attention, strided attention |
gptkbp:author | gptkb:Ilya_Sutskever, gptkb:Scott_Gray, gptkb:Alec_Radford, gptkb:Rewon_Child |
gptkbp:designedFor | efficient sequence modeling |
gptkbp:developedBy | gptkb:OpenAI |
gptkbp:enables | scaling to longer sequences |
https://www.w3.org/2000/01/rdf-schema#label | Sparse Transformer |
gptkbp:improves | Transformer efficiency |
gptkbp:introducedIn | 2019 |
gptkbp:publishedIn | gptkb:arXiv:1904.10509 |
gptkbp:reduces | computational complexity |
gptkbp:relatedTo | gptkb:transformation, Attention mechanism |
gptkbp:uses | sparse attention mechanism |
gptkbp:bfsParent | gptkb:transformation |
gptkbp:bfsLayer | 5 |
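The `gptkbp:attentionPattern` values above (fixed sparse attention, local attention, strided attention) refer to the factorized attention masks described in the Sparse Transformer paper (gptkb:arXiv:1904.10509). The sketch below is a minimal illustration, not the paper's implementation: the function names, the single `stride` parameter, the merging of both attention heads into one boolean mask, and the use of a single summary column per block in the fixed pattern are all simplifying assumptions made here for clarity.

```python
import numpy as np

def strided_mask(n, stride):
    # Causal strided pattern (simplified): token i may attend to the
    # previous `stride` tokens (local window) and to every `stride`-th
    # token before that. Both heads of the paper's factorization are
    # merged into a single mask here for illustration.
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):
            if i - j < stride or (i - j) % stride == 0:
                mask[i, j] = True
    return mask

def fixed_mask(n, stride):
    # Causal fixed pattern (simplified): token i may attend within its
    # own block of width `stride` and to the last position of every
    # earlier block (assumes c = 1 summary column per block).
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):
            if j // stride == i // stride or j % stride == stride - 1:
                mask[i, j] = True
    return mask

if __name__ == "__main__":
    # 16-token sequence with stride 4: each row shows which earlier
    # positions that token may attend to (1 = allowed, 0 = masked out).
    print(strided_mask(16, 4).astype(int))
    print(fixed_mask(16, 4).astype(int))
```

With the stride chosen near sqrt(n), each position attends to on the order of sqrt(n) others rather than all n, which is the sense in which the table's `gptkbp:reduces | computational complexity` and `gptkbp:enables | scaling to longer sequences` statements apply.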