gptkbp:instanceOf
|
large language model
|
gptkbp:architecture
|
gptkb:Transformer
|
gptkbp:arXivID
|
1910.10683
|
gptkbp:author
|
gptkb:Adam_Roberts
gptkb:Colin_Raffel
gptkb:Katherine_Lee
gptkb:Michael_Matena
gptkb:Noam_Shazeer
gptkb:Peter_J._Liu
gptkb:Sharan_Narang
gptkb:Wei_Li
gptkb:Yanqi_Zhou
|
gptkbp:availableOn
|
gptkb:TensorFlow
gptkb:PyTorch
gptkb:Hugging_Face_Transformers
|
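The model is distributed for TensorFlow, PyTorch, and the Hugging Face Transformers library. A minimal sketch of loading one of the released checkpoints with the Transformers API (assuming the transformers and sentencepiece packages are installed; "t5-small" stands in for any of the published variants):

```python
# Minimal sketch: load a released T5 checkpoint via Hugging Face Transformers.
# "t5-small" can be swapped for t5-base, t5-large, t5-3b, or t5-11b.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # SentencePiece-based tokenizer
model = T5ForConditionalGeneration.from_pretrained("t5-small")  # encoder-decoder model
```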
gptkbp:citation
|
high
|
gptkbp:developedBy
|
gptkb:Google_Research
|
gptkbp:encoderDecoder
|
yes
|
gptkbp:fineTunedWith
|
yes
|
gptkbp:fullName
|
gptkb:Text-To-Text_Transfer_Transformer
|
gptkbp:hasVariant
|
gptkb:T5-11B
gptkb:T5-3B
gptkb:T5-Base
gptkb:T5-Large
gptkb:T5-Small
|
https://www.w3.org/2000/01/rdf-schema#label
|
T5
|
gptkbp:input
|
gptkb:text
|
gptkbp:introducedIn
|
2019
|
gptkbp:language
|
English
|
gptkbp:license
|
Apache 2.0
|
gptkbp:notableFor
|
state-of-the-art results on NLP benchmarks
unified text-to-text framework
|
gptkbp:notablePublication
|
gptkb:Exploring_the_Limits_of_Transfer_Learning_with_a_Unified_Text-to-Text_Transformer
|
gptkbp:openSource
|
yes
|
gptkbp:output
|
gptkb:text
|
gptkbp:parameter
|
60 million
220 million
770 million
3 billion
11 billion
|
gptkbp:pretrainingObjective
|
span corruption
|
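The span-corruption pretraining objective replaces contiguous spans of the input with sentinel tokens and trains the model to reproduce the dropped spans in order. A small illustration (example adapted from the T5 paper; the <extra_id_N> sentinel names follow the Hugging Face tokenizer convention):

```python
# Span corruption: random spans are masked by sentinels, and the target lists
# the masked spans in order, each introduced by its sentinel.
original = "Thank you for inviting me to your party last week."
corrupted_input = "Thank you <extra_id_0> me to your party <extra_id_1> week."
reconstruction_target = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"
```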
gptkbp:taskFormat
|
text-to-text
|
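In the unified text-to-text format, every task is expressed as mapping an input string, prefixed with a task description, to an output string. A hedged sketch of running a translation prompt with the Transformers generate API (the prefix "translate English to German:" comes from the original T5 setup; the decoded output in the comment is only indicative):

```python
# Text-to-text usage: prepend a task prefix, encode, generate, decode.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: That is good.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "Das ist gut."
```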
gptkbp:tokenizerType
|
gptkb:SentencePiece
|
gptkbp:trainer
|
gptkb:Colossal_Clean_Crawled_Corpus
C4
|
gptkbp:type
|
self-attention
|
gptkbp:usedFor
|
translation
question answering
summarization
text generation
text classification
|
gptkbp:bfsParent
|
gptkb:large_language_model
gptkb:transformation
gptkb:convolutional_neural_network
gptkb:GPT-3
gptkb:Large_Language_Models
|
gptkbp:bfsLayer
|
5
|