gptkbp:instance_of
|
gptkb:Vision_Transformer
|
gptkbp:bfsLayer
|
4
|
gptkbp:bfsParent
|
gptkb:Transformers_character
|
gptkbp:application
|
Image Classification
|
gptkbp:architectural_style
|
gptkb:Transformer
|
gptkbp:layers
|
32
|
gptkbp:collaborations
|
Various Universities
|
gptkbp:community_support
|
Active
|
gptkbp:training_cost
|
gptkb:High
|
gptkbp:contribution
|
Advancement of Vision Transformers
|
gptkbp:developed_by
|
gptkb:Google_Research
|
gptkbp:economic_impact
|
gptkb:Significant
|
gptkbp:activation_function
|
GELU
|
gptkbp:field_of_study
|
Computer Vision
|
gptkbp:focus
|
Multi-head Attention
|
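The focus entry above names multi-head attention, the core operation of the Transformer encoder. A minimal NumPy sketch of the mechanism follows; the dimensions in the usage note are illustrative toy values, not ViT-H's actual configuration (hidden size 1280, 16 heads):

```python
import numpy as np

def softmax(x, axis=-1):
    # Shift by the max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, wq, wk, wv, wo, num_heads):
    """Multi-head self-attention over a token sequence x of shape (n, d)."""
    n, d = x.shape
    dh = d // num_heads  # per-head dimension
    # Project tokens and split into heads: (num_heads, n, dh)
    q = (x @ wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention within each head
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(dh)   # (heads, n, n)
    attn = softmax(scores, axis=-1)
    out = attn @ v                                     # (heads, n, dh)
    # Concatenate heads and apply the output projection
    out = out.transpose(1, 0, 2).reshape(n, d)
    return out @ wo
```

For example, with 5 tokens of width 64 and 8 heads, the output keeps the input shape (5, 64).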
gptkbp:future_plans
|
Ongoing
|
https://www.w3.org/2000/01/rdf-schema#label
|
ViT-H
|
gptkbp:hyperparameters
|
Tunable
|
gptkbp:influenced_by
|
gptkb:BERT
|
gptkbp:input_output
|
224x224 input
Softmax output
|
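The 224x224 input and softmax output fit together arithmetically. Per the paper's ViT-H/14 naming, the model splits the image into 14x14-pixel patches, so a 224x224 input becomes 256 patch tokens (plus one class token), and the classification head ends in a softmax over the class logits. A short sketch of both:

```python
import numpy as np

# ViT-H/14 (per the paper's naming) tiles a 224x224 input with 14x14 patches
image_size, patch_size = 224, 14
num_patches = (image_size // patch_size) ** 2   # 256 patch tokens (+1 class token)

def softmax(logits):
    # Classification head: turn logits into probabilities summing to 1
    e = np.exp(logits - logits.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # highest logit gets highest probability
```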
gptkbp:implemented_in
|
gptkb:PyTorch
|
gptkbp:runs_on
|
gptkb:Graphics_Processing_Unit
|
gptkbp:citation_impact
|
gptkb:High
|
gptkbp:is_evaluated_by
|
F1 Score
|
gptkbp:is_open_source
|
Yes
|
gptkbp:optimizer
|
gptkb:Adam
|
gptkbp:losses
|
Cross-entropy Loss
|
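The cross-entropy loss listed above can be written out directly. This is the standard formulation (negative log-probability of the true label under the softmax), not tied to any particular ViT implementation:

```python
import numpy as np

def cross_entropy(logits, label):
    """Negative log-probability that the softmax assigns to the true label."""
    shifted = logits - logits.max()              # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

# A confident, correct prediction yields a near-zero loss
confident = cross_entropy(np.array([5.0, 0.0, 0.0]), 0)
# Uniform logits over 1000 classes yield exactly log(1000)
uniform = cross_entropy(np.zeros(1000), 0)
```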
gptkbp:transfer_learning
|
Supported
|
gptkbp:parameter_count
|
over 600 million
|
gptkbp:performance
|
Top-1 Accuracy (results available)
|
gptkbp:provides_information_on
|
gptkb:CIFAR-10
|
gptkbp:publishes
|
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
|
gptkbp:dropout_rate
|
0.1
|
gptkbp:related_model
|
ViT-B (Base), ViT-L (Large)
|
gptkbp:related_to
|
gptkb:Deep_Learning
|
gptkbp:output_classes
|
1000
|
gptkbp:input_channels
|
3
|
gptkbp:successor
|
ViT-G
|
gptkbp:training
|
gptkb:ImageNet
0.001 (learning rate)
Days (training duration)
Self-supervised Learning
|
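The Adam optimizer and the 0.001 figure listed above combine into the standard Adam update rule. A minimal sketch of a single step (textbook formulation with default betas, not a ViT-specific training loop):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; lr=0.001 matches the learning rate listed above."""
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)               # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Running repeated steps against the gradient of a simple quadratic moves the parameter toward its minimum at roughly `lr` per step.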
gptkbp:tuning
|
Possible
|
gptkbp:user_base
|
Researchers, Developers
|
gptkbp:uses
|
Self-Attention Mechanism
|
gptkbp:parameter_count
|
632 million
|
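The 632 million figure can be sanity-checked against the ViT-Huge configuration reported in the ViT paper (32 layers, hidden size 1280, MLP size 5120, 14x14 patches), here assuming a 1000-class head. This is a back-of-the-envelope sketch, not an exact checkpoint size:

```python
# ViT-H configuration from the paper; the 1000-class head is an assumption
d, mlp, layers = 1280, 5120, 32
patch, channels, tokens, classes = 14, 3, 257, 1000  # 256 patches + class token

attention = 4 * (d * d + d)                    # Q, K, V, and output projections
feedforward = (d * mlp + mlp) + (mlp * d + d)  # two-layer MLP with biases
layernorms = 2 * 2 * d                         # two LayerNorms per block
block = attention + feedforward + layernorms

embeddings = (patch * patch * channels * d + d) + d + tokens * d
head = 2 * d + (d * classes + classes)         # final LayerNorm + classifier

total = layers * block + embeddings + head
print(round(total / 1e6))  # 632 (million parameters)
```

The total lands within about 0.05M of the commonly quoted 632M.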
gptkbp:year_created
|
gptkb:2021
|