gptkbp:instance_of
|
gptkb:Person
|
gptkbp:affiliation
|
gptkb:Google_Brain
|
gptkbp:author
|
gptkb:Attention_Is_All_You_Need
|
gptkbp:award
|
gptkb:Best_Paper_Award_at_NeurIPS_2017
|
gptkbp:birth_place
|
gptkb:India
|
gptkbp:birth_year
|
gptkb:1985
|
gptkbp:collaborator
|
gptkb:Noam_Shazeer
gptkb:Aidan_N._Gomez
gptkb:Jake_Shlens
gptkb:Niki_Parmar
Llion Jones
|
gptkbp:contribution
|
gptkb:GPT_model
Multi-Head Attention
Positional Encoding
Self-Attention Mechanism
|
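The contribution list above names the Self-Attention Mechanism, Multi-Head Attention, and Positional Encoding introduced in "Attention Is All You Need". A minimal single-head NumPy sketch of scaled dot-product self-attention follows; the function name and toy sizes are illustrative assumptions, not KB entities.

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        # softmax(Q K^T / sqrt(d_k)) V, per "Attention Is All You Need"
        d_k = q.shape[-1]
        scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)
        # numerically stable row-wise softmax over attention scores
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        return weights @ v

    # Self-attention: queries, keys, and values all come from the same sequence.
    x = np.random.randn(4, 8)                    # 4 tokens, model width 8 (toy sizes)
    out = scaled_dot_product_attention(x, x, x)
    print(out.shape)                             # (4, 8)

Multi-head attention repeats this computation over several learned projections of Q, K, and V in parallel and concatenates the results.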
gptkbp:education
|
gptkb:University_of_Southern_California
gptkb:Indian_Institute_of_Technology,_Delhi
|
gptkbp:field
|
gptkb:Natural_Language_Processing
gptkb:machine_learning
|
https://www.w3.org/2000/01/rdf-schema#label
|
Ashish Vaswani
|
gptkbp:influenced_by
|
gptkb:Yoshua_Bengio
gptkb:Geoffrey_E._Hinton
gptkb:Yann_LeCun
|
gptkbp:invention
|
gptkb:Transformer_Architecture
gptkb:Neural_Machine_Translation
Attention Mechanism
Text Generation Models
Text Summarization Techniques
Transfer Learning in NLP
Model Compression Techniques
Attention-based Models
Bias Mitigation in NLP Models
Contextual Word Embeddings
Data Augmentation Techniques for NLP
Efficient Training Methods for NLP
Ethics in AI and NLP
Evaluation Metrics for NLP Models
Explainability in NLP Models
Fine-tuning Techniques for NLP
Language Understanding Models
Pre-trained Language Models
Robustness in NLP Models
Scalable Neural Networks
Sequence Modeling Techniques
|
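The Transformer Architecture listed above has no recurrence, so token order is injected via the Positional Encoding named in the contribution list. A short sketch of the sinusoidal encoding defined in the paper, assuming an even model width; names and sizes are illustrative.

    import numpy as np

    def sinusoidal_positional_encoding(seq_len, d_model):
        # PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
        # PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
        pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
        i2 = np.arange(0, d_model, 2)[None, :]       # even dimension indices 2i
        angles = pos / np.power(10000.0, i2 / d_model)
        pe = np.zeros((seq_len, d_model))            # assumes even d_model
        pe[:, 0::2] = np.sin(angles)
        pe[:, 1::2] = np.cos(angles)
        return pe

    print(sinusoidal_positional_encoding(6, 8).shape)   # (6, 8)

The encoding is added element-wise to the token embeddings before the first attention layer.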
gptkbp:known_for
|
gptkb:Transformers
|
gptkbp:nationality
|
Indian
|
gptkbp:occupation
|
gptkb:researcher
|
gptkbp:research_interest
|
gptkb:neural_networks
gptkb:Deep_Learning
Sequence-to-Sequence Learning
|
gptkbp:bfsParent
|
gptkb:Attention_Is_All_You_Need
|
gptkbp:bfsLayer
|
5
|