gptkbp:instance_of
|
gptkb:language
|
gptkbp:applies_to
|
self-supervised learning
|
gptkbp:architecture
|
gptkb:Transformers
|
gptkbp:developed_by
|
gptkb:Microsoft_Research
|
gptkbp:has_achieved
|
state-of-the-art performance
|
gptkbp:has_function
|
over 1.5 billion parameters
|
https://www.w3.org/2000/01/rdf-schema#label
|
DeBERTa-v2
|
gptkbp:improves
|
gptkb:De_BERTa
|
gptkbp:is_adopted_by
|
various organizations
|
gptkbp:is_available_on
|
gptkb:Hugging_Face_Model_Hub
|
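The Model Hub availability noted above can be exercised with the `transformers` library. A minimal sketch, assuming `transformers` is installed and network access is available on first call; `microsoft/deberta-v2-xlarge` is a published checkpoint (the larger `deberta-v2-xxlarge` is the ~1.5B-parameter variant), while the helper name itself is ours:

```python
def load_deberta_v2(model_id="microsoft/deberta-v2-xlarge"):
    """Sketch: load a DeBERTa-v2 checkpoint from the Hugging Face Model Hub.

    Requires `pip install transformers`; the first call downloads weights.
    The import lives inside the function so merely defining it is cheap.
    """
    from transformers import AutoModel, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    return tokenizer, model
```

The same `AutoModel`/`AutoTokenizer` entry points work under both PyTorch and TensorFlow installs of `transformers`, matching the compatibility claims below.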
gptkbp:is_based_on
|
BERT architecture
|
gptkbp:is_compared_to
|
gptkb:GPT-3
gptkb:T5
gptkb:XLNet
ALBERT
|
gptkbp:is_compatible_with
|
gptkb:Tensor_Flow
gptkb:Py_Torch
|
gptkbp:is_discussed_in
|
AI workshops
NLP conferences
|
gptkbp:is_documented_in
|
research papers
technical blogs
GitHub repositories
|
gptkbp:is_evaluated_by
|
gptkb:GLUE_benchmark
gptkb:Super_GLUE_benchmark
gptkb:SQu_AD_dataset
CoLA dataset
FEVER dataset
HANS dataset
MNLI dataset
MRPC dataset
PUD dataset
QQP dataset
RTE dataset
SNLI dataset
SST-2 dataset
STS-B dataset
TREC dataset
WNLI dataset
XNLI dataset
|
gptkbp:is_influenced_by
|
transformer models
self-attention mechanisms
|
gptkbp:is_known_for
|
high accuracy
efficiency in training
|
gptkbp:is_optimized_for
|
question answering
text generation
text classification
|
gptkbp:is_part_of
|
DeBERTa family
|
gptkbp:is_supported_by
|
community contributions
|
gptkbp:is_trained_in
|
large text corpora
masked language modeling
|
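The masked-language-modeling objective listed above can be illustrated with a toy corruption routine. This is a sketch under standard BERT-style assumptions (15% selection rate, 80/10/10 mask/random/keep split, string tokens instead of subword ids), not DeBERTa-v2's actual pretraining code:

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary for random swaps

def mlm_mask(tokens, mask_prob=0.15, rng=None):
    """Toy sketch of BERT-style masked-LM corruption.

    Returns (corrupted, targets): targets[i] is the original token where
    position i was selected for prediction, else None.
    """
    rng = rng or random.Random(0)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            targets.append(tok)                      # model must recover this
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK)               # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted.append(rng.choice(VOCAB))  # 10%: random token
            else:
                corrupted.append(tok)                # 10%: keep unchanged
        else:
            corrupted.append(tok)
            targets.append(None)                     # not a prediction target
    return corrupted, targets
```

During pretraining the model sees `corrupted` and is trained to predict every non-None entry of `targets` from context alone.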
gptkbp:is_used_for
|
natural language processing tasks
|
gptkbp:is_used_in
|
research papers
industry applications
|
gptkbp:outperforms
|
gptkb:BERT
gptkb:Ro_BERTa
|
gptkbp:performance
|
NLP models
|
gptkbp:release_year
|
gptkb:2021
|
gptkbp:supports
|
multiple languages
|
gptkbp:uses
|
disentangled attention
|
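The disentangled attention listed above scores each token pair with three terms over separate content and relative-position embeddings: content-to-content, content-to-position, and position-to-content. A toy single-head sketch, assuming no learned projections; the sqrt(3d) scaling and relative-distance clamping follow the spirit of the DeBERTa paper, not its exact code:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def disentangled_scores(content, rel_pos_emb, max_rel=2):
    """Toy sketch of DeBERTa-style disentangled attention scores.

    content:     list of content vectors, one per token.
    rel_pos_emb: table of 2*max_rel + 1 relative-position vectors,
                 indexed by clamped (i - j) shifted by max_rel.
    """
    n = len(content)
    d = len(content[0])
    scale = math.sqrt(3 * d)  # one sqrt(d) factor per score term
    scores = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # clamp the relative distance into the embedding table
            delta = max(-max_rel, min(max_rel, i - j)) + max_rel
            c2c = dot(content[i], content[j])          # content-to-content
            c2p = dot(content[i], rel_pos_emb[delta])  # content-to-position
            p2c = dot(rel_pos_emb[delta], content[j])  # position-to-content
            scores[i][j] = (c2c + c2p + p2c) / scale
    return scores
```

Separating "what a token says" (content) from "where it sits" (relative position) in this way is the architectural change the DeBERTa family credits for its gains over BERT-style absolute-position attention.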
gptkbp:bfsParent
|
gptkb:De_BERTa
|
gptkbp:bfsLayer
|
6
|