ViT-H

GPTKB entity

Statements (53)
Predicate Object
gptkbp:instance_of gptkb:Transformers
gptkbp:application Image Classification
gptkbp:architectural_style gptkb:Transformers
gptkbp:architecture gptkb:Transformers
gptkbp:attention_mechanism Multi-head Attention
gptkbp:num_layers 32
gptkbp:collaborator Various Universities
gptkbp:community_support Active
gptkbp:computational_cost gptkb:High
gptkbp:contribution Advancement of Vision Transformers
gptkbp:developed_by gptkb:Google_Research
gptkbp:dropout_rate 0.1
gptkbp:economic_impact gptkb:Significant
gptkbp:evaluation_metric F1 Score
gptkbp:feature_extractor gptkb:Yes
gptkbp:field_of_study gptkb:Computer_Vision
gptkbp:future_prospects Ongoing
gptkbp:gpu_support gptkb:Yes
gptkbp:has_publications An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
https://www.w3.org/2000/01/rdf-schema#label ViT-H
gptkbp:hyperparameters Tunable
gptkbp:influenced_by gptkb:BERT
gptkbp:activation_function GELU
gptkbp:input_resolution 224x224
gptkbp:implemented_in gptkb:TensorFlow
gptkb:PyTorch
gptkbp:citation_impact gptkb:High
gptkbp:is_open_source gptkb:Yes
gptkbp:optimizer gptkb:Adam
gptkbp:learning_rate 0.001
gptkbp:trained_on gptkb:ImageNet
gptkbp:loss_function Cross-entropy Loss
gptkbp:transfer_learning Supported
gptkbp:model_variants ViT-B, ViT-L
gptkbp:num_parameters 632 million
gptkbp:output_activation Softmax
gptkbp:performance_metric Top-1 Accuracy
gptkbp:evaluated_on gptkb:CIFAR-10
gptkbp:related_to gptkb:Deep_Learning
gptkbp:num_classes 1000
gptkbp:input_channels 3
gptkbp:successor ViT-G
gptkbp:training_time Days
gptkbp:training_method Self-supervised Learning
gptkbp:tuning Possible
gptkbp:user_base Researchers, Developers
gptkbp:uses Self-Attention Mechanism
gptkbp:uses_attention gptkb:Yes
gptkbp:uses_patch_embedding gptkb:Yes
gptkbp:uses_positional_encoding gptkb:Yes
gptkbp:year_established gptkb:2021
gptkbp:bfsParent gptkb:Transformers
gptkbp:bfsLayer 4
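
Taken together, the statements above describe a standard Vision Transformer pipeline: a 224x224 RGB image (3 input channels) is split into patches, each patch is embedded as a token, positional encodings are added, the token sequence passes through stacked multi-head self-attention blocks with GELU activations and dropout 0.1, and a softmax head scores 1000 ImageNet classes. The following is a minimal sketch of that forward pass, assuming PyTorch; the widths and depths are illustrative toy values, far smaller than the real ViT-H (32 layers, 632 million parameters).

import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Toy Vision Transformer mirroring the properties listed above."""

    def __init__(self, image_size=224, patch_size=16, dim=256,
                 depth=4, heads=8, num_classes=1000, dropout=0.1):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: a strided convolution cuts the image into
        # patches and projects each one to a dim-dimensional token.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        # Learned positional encoding, one vector per token (+1 for [CLS]).
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        self.dropout = nn.Dropout(dropout)
        # Encoder blocks: multi-head self-attention plus a GELU MLP.
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            dropout=dropout, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                            # x: (B, 3, 224, 224)
        tokens = self.patch_embed(x)                 # (B, dim, 14, 14)
        tokens = tokens.flatten(2).transpose(1, 2)   # (B, 196, dim)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.dropout(tokens)
        tokens = self.encoder(tokens)
        # Classify from the [CLS] token; softmax is applied at inference.
        return self.head(tokens[:, 0])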
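
The training-related statements (Adam optimizer, learning rate 0.001, cross-entropy loss, ImageNet classes) translate into an equally standard update step. A hypothetical single step, using the MiniViT sketch above with random tensors standing in for a real ImageNet batch:

model = MiniViT()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()      # softmax is folded into this loss

images = torch.randn(8, 3, 224, 224)   # stand-in for an image batch
labels = torch.randint(0, 1000, (8,))  # stand-in for class labels

logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()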