gptkbp:instance_of
|
gptkb:Model
|
gptkbp:architecture
|
gptkb:Transformers
|
gptkbp:available_in
|
gptkb:Hugging_Face_Transformers
|
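Since the entry lists availability in Hugging Face Transformers, a minimal usage sketch may help; this follows the Transformers ELECTRA discriminator API, with the checkpoint name `google/electra-small-discriminator` taken from the public model hub (the specific sentence strings are illustrative only):

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

# Load the small discriminator variant (see has_variants: small, base, large).
model_name = "google/electra-small-discriminator"
discriminator = ElectraForPreTraining.from_pretrained(model_name)
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)

# A sentence with one token deliberately replaced ("fake" instead of "jumps").
fake_sentence = "The quick brown fox fake over the lazy dog"
inputs = tokenizer(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits

# Positive logits mark tokens the discriminator believes were replaced.
predictions = (logits > 0).int().squeeze(0).tolist()
```

The discriminator scores every token position as original or replaced, which is the pretraining task ELECTRA is evaluated on before fine-tuning.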
gptkbp:can_be_used_for
|
question answering
sentiment analysis
text generation
text classification
named entity recognition
|
gptkbp:developed_by
|
gptkb:Google_Research
|
gptkbp:has_achieved
|
higher efficiency
lower training time
|
gptkbp:has_applications_in
|
language translation
search engines
content moderation
chatbots
information retrieval
|
gptkbp:has_variants
|
small
base
large
|
https://www.w3.org/2000/01/rdf-schema#label
|
ELECTRA
|
gptkbp:improves
|
fine-tuning tasks
|
gptkbp:influenced_by
|
GANs
|
gptkbp:introduced_in
|
gptkb:2020
|
gptkbp:is_based_on
|
gptkb:Transformers
|
gptkbp:is_compared_to
|
gptkb:GPT-2
|
gptkbp:is_evaluated_by
|
gptkb:GLUE_benchmark
gptkb:SQuAD
gptkb:MNLI
WNLI
RTE
QQP
STS-B
CoLA
|
gptkbp:is_known_for
|
flexibility
high accuracy
robustness
fast inference
|
gptkbp:is_optimized_for
|
computational efficiency
training speed
model size
|
gptkbp:is_part_of
|
gptkb:AI_technology
NLP research community
|
gptkbp:is_supported_by
|
research papers
community contributions
open-source code
|
gptkbp:is_tasked_with
|
natural language processing
|
gptkbp:is_trained_in
|
large text corpora
|
gptkbp:is_used_by
|
gptkb:engineers
gptkb:developers
gptkb:researchers
data scientists
|
gptkbp:is_used_in
|
AI applications
|
gptkbp:mission
|
discriminative
|
gptkbp:performance
|
state-of-the-art
|
gptkbp:related_to
|
gptkb:BERT
|
gptkbp:supports
|
multiple languages
|
gptkbp:type
|
pre-trained language model
|
gptkbp:uses
|
masked language modeling (generator)
replaced token detection (discriminator)
generator-discriminator framework
token embeddings
|
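The generator-discriminator framework listed above can be sketched in plain Python. This is a hypothetical, dependency-free illustration of replaced token detection, not the actual ELECTRA implementation: a stand-in "generator" corrupts a fraction of tokens, and the labels it produces are what the discriminator would be trained to predict at every position.

```python
import random

def corrupt(tokens, replace_prob=0.15, vocab=("cat", "dog", "run", "sky")):
    """Stand-in generator step: replace ~15% of tokens with sampled ones.

    Returns the corrupted sequence and per-token labels
    (1 = replaced, 0 = original). Names and vocab are illustrative.
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            replacement = rng.choice(vocab)
            corrupted.append(replacement)
            # Count as "replaced" only if the sample actually differs,
            # mirroring how ELECTRA treats accidental identity samples.
            labels.append(1 if replacement != tok else 0)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
corrupted, labels = corrupt(tokens)
```

Because the discriminator receives a label for every position rather than only the ~15% of masked positions used in masked language modeling, the loss covers all tokens; this is the source of the higher efficiency and lower training time noted under has_achieved.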
gptkbp:bfsParent
|
gptkb:T5-large
gptkb:Attention_Is_All_You_Need
gptkb:GLUE_benchmark
gptkb:Transformer_Models
gptkb:Hugging_Face
gptkb:Super_GLUE_benchmark
|
gptkbp:bfsLayer
|
5
|