gptkbp:instanceOf | large language model
gptkbp:architecture | gptkb:Transformer
gptkbp:author | gptkb:Jacob_Devlin, gptkb:Kenton_Lee, gptkb:Ming-Wei_Chang, gptkb:Kristina_Toutanova
gptkbp:availableOn | gptkb:GitHub
gptkbp:developedBy | gptkb:Google_AI
gptkbp:fineTunedWith | yes
gptkbp:hiddenSize | 1024
https://www.w3.org/2000/01/rdf-schema#label | BERT Large
gptkbp:input | gptkb:text
gptkbp:language | English
gptkbp:level | 24
gptkbp:notablePublication | gptkb:BERT:_Pre-training_of_Deep_Bidirectional_Transformers_for_Language_Understanding
gptkbp:numberOfAttentionHeads | 16
gptkbp:openSource | yes
gptkbp:output | contextual embeddings
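As a rough illustration of the input/output relation recorded above (text in, contextual embeddings out), the sketch below extracts per-token embeddings from a BERT Large checkpoint. The Hugging Face `transformers` library and the `bert-large-uncased` checkpoint are assumptions for illustration and are not named in this entry.

```python
# Minimal sketch: text in, contextual embeddings out.
# Assumes the Hugging Face `transformers` package and the
# `bert-large-uncased` checkpoint (not part of this entry).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertModel.from_pretrained("bert-large-uncased")
model.eval()

inputs = tokenizer("BERT produces contextual embeddings.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 1024-dimensional vector per input token (the hidden size of BERT Large).
print(outputs.last_hidden_state.shape)  # (batch, tokens, 1024)
```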
gptkbp:parameter | 340 million
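The size-related values in this entry (hidden size 1024, 16 attention heads, the value 24 under gptkbp:level, which corresponds to the number of Transformer encoder layers, and roughly 340 million parameters) can be cross-checked with a config object. The sketch below uses `BertConfig` and `BertModel` from Hugging Face `transformers` as an assumed stand-in for the original TensorFlow configuration and counts the parameters of a randomly initialized model.

```python
# Sketch: instantiate an untrained model with the BERT Large hyperparameters
# from this entry and count its parameters (expected to be roughly 340M).
# `BertConfig`/`BertModel` from Hugging Face `transformers` are an assumption;
# the original release used a TensorFlow config with the same fields.
from transformers import BertConfig, BertModel

config = BertConfig(
    hidden_size=1024,          # gptkbp:hiddenSize
    num_hidden_layers=24,      # gptkbp:level (24 encoder layers)
    num_attention_heads=16,    # gptkbp:numberOfAttentionHeads
    intermediate_size=4096,    # 4 * hidden_size, per the BERT paper
)
model = BertModel(config)

total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")  # roughly 340M (gptkbp:parameter)
```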
gptkbp:pretrainingObjective | masked language modeling, next sentence prediction
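To make the masked language modeling objective concrete, the sketch below asks the model to fill in a masked token. The `fill-mask` pipeline from Hugging Face `transformers` and the `bert-large-uncased` checkpoint are illustrative assumptions.

```python
# Sketch of the masked language modeling objective: the model predicts the
# token hidden behind [MASK]. Assumes the Hugging Face `transformers`
# `fill-mask` pipeline and the `bert-large-uncased` checkpoint.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-large-uncased")
for prediction in fill_mask("BERT was developed by [MASK] AI."):
    print(prediction["token_str"], round(prediction["score"], 3))
```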
gptkbp:publishedIn | gptkb:NAACL_2019
gptkbp:releaseDate | 2018
gptkbp:tokenizerType | gptkb:WordPiece
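As an illustration of WordPiece tokenization, the sketch below splits a sentence into sub-word pieces. The `BertTokenizer` class and the `bert-large-uncased` vocabulary are assumptions, not part of this entry, and the exact split shown in the comment may differ.

```python
# Sketch of WordPiece tokenization: words outside the vocabulary are split
# into sub-word pieces marked with "##". Assumes Hugging Face `transformers`
# and the `bert-large-uncased` vocabulary.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
print(tokenizer.tokenize("Pretraining transformers with WordPiece"))
# e.g. ['pre', '##tra', '##ining', 'transformers', 'with', 'word', '##piece']
```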
gptkbp:trainer | gptkb:English_Wikipedia, gptkb:BooksCorpus
gptkbp:usedFor | question answering, natural language understanding, sentence similarity, text classification, named entity recognition
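For the downstream tasks listed under gptkbp:usedFor, BERT Large is typically fine-tuned with a small task-specific head on top of the pretrained encoder. The sketch below shows the text classification case; `BertForSequenceClassification`, the `bert-large-uncased` checkpoint, and the two-label setup are all illustrative assumptions rather than details from this entry.

```python
# Sketch of fine-tuning for text classification: a classification head is
# placed on top of the pretrained encoder and trained on labeled examples.
# `BertForSequenceClassification`, `bert-large-uncased`, and the two-label
# setup are illustrative assumptions.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=2
)

batch = tokenizer(["great movie", "terrible movie"],
                  return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])

outputs = model(**batch, labels=labels)
outputs.loss.backward()            # an optimizer step would follow in training
print(outputs.logits.shape)        # (2, 2): one score per label per sentence
```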
gptkbp:bfsParent | gptkb:BERT
gptkbp:bfsLayer | 6