gptkbp:instanceOf | gptkb:computer_scientist
gptkbp:award | gptkb:Best_Paper_Award_at_ICLR_2018
gptkbp:coauthor | gptkb:Marc'Aurelio_Ranzato, gptkb:Alexis_Conneau, gptkb:Ludovic_Denoyer
gptkbp:education | gptkb:INRIA, gptkb:École_normale_supérieure
gptkbp:employer | gptkb:Meta_AI
gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning, natural language processing
gptkbp:github | glample
https://www.w3.org/2000/01/rdf-schema#label | Guillaume Lample
gptkbp:knownFor | transformer models, unsupervised machine translation, contributions to large language models
gptkbp:language | gptkb:French, English
gptkbp:nationality | gptkb:French
gptkbp:occupation | gptkb:researchers
gptkbp:publishedIn | gptkb:Cross-lingual_Language_Model_Pretraining, gptkb:Phrase-Based_&_Neural_Unsupervised_Machine_Translation, gptkb:Unsupervised_Machine_Translation_Using_Monolingual_Corpora_Only, Faster Transformers with Better Attention Patterns
gptkbp:twitter | @GuillaumeLample
gptkbp:bfsParent | gptkb:Mistral
gptkbp:bfsLayer | 5
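The rows above are predicate/object pairs for a single subject. As a minimal sketch, assuming rdflib and placeholder namespace IRIs (this entry does not state the real gptkb:/gptkbp: IRIs or the subject's identifier), a few of the rows could be loaded into an RDF graph and serialized like this:

```python
# Sketch only: gptkb:/gptkbp: IRIs and the subject identifier are assumed
# placeholders, not values stated in the entry above.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed placeholder IRI
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed placeholder IRI

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

subject = GPTKB["Guillaume_Lample"]  # assumed subject identifier

# One triple per (predicate, object) pair, mirroring a few rows above.
g.add((subject, GPTKBP.instanceOf, GPTKB.computer_scientist))
g.add((subject, GPTKBP.employer, GPTKB.Meta_AI))
g.add((subject, GPTKBP.field, Literal("natural language processing")))
g.add((subject, RDFS.label, Literal("Guillaume Lample")))

# Serialize the graph to Turtle for inspection.
print(g.serialize(format="turtle"))
```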