gptkbp:instanceOf
|
speech representation learning model
|
gptkbp:architecture
|
gptkb:Transformer
|
gptkbp:arXivID
|
2106.07447
|
gptkbp:author
|
gptkb:Wei-Ning_Hsu
Benjamin Bolte
Yao-Hung Hubert Tsai
Kushal Lakhotia
gptkb:Ruslan_Salakhutdinov
gptkb:Abdelrahman_Mohamed
|
gptkbp:basedOn
|
self-supervised learning
|
gptkbp:citation
|
HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
|
gptkbp:developedBy
|
gptkb:Facebook_AI_Research
|
gptkbp:evaluationBenchmark
|
gptkb:LibriSpeech
gptkb:TIMIT
CommonVoice
|
gptkbp:format
|
feature vectors
waveform
|
gptkbp:fullName
|
Hidden-Unit BERT
|
https://www.w3.org/2000/01/rdf-schema#label
|
HuBERT
|
gptkbp:input
|
raw audio
|
gptkbp:language
|
English
|
gptkbp:license
|
gptkb:MIT
|
gptkbp:hasTask
|
automatic speech recognition
unsupervised speech representation learning
|
gptkbp:openSource
|
yes
|
gptkbp:pretrainingData
|
gptkb:Libri-Light
gptkb:LibriSpeech
|
gptkbp:pretrainingMethod
|
masked prediction
|
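The masked-prediction objective listed above can be sketched in a few lines: offline clustering (e.g. k-means) assigns each audio frame a discrete "hidden unit", spans of frames are masked, and cross-entropy is computed only over the masked positions. This is a minimal NumPy illustration under those assumptions, not the fairseq implementation; all function names and hyperparameters here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_targets(features, centroids):
    # Assign each frame to its nearest centroid -> discrete "hidden units"
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def mask_spans(num_frames, span=10, p=0.08):
    # Sample span starts with probability p; mask `span` consecutive frames each
    mask = np.zeros(num_frames, dtype=bool)
    for s in np.flatnonzero(rng.random(num_frames) < p):
        mask[s:s + span] = True
    return mask

def masked_prediction_loss(logits, targets, mask):
    # Cross-entropy computed ONLY on masked frames (the key design choice)
    logits, targets = logits[mask], targets[mask]
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

T, D, K = 200, 39, 8                 # frames, feature dim, number of clusters
feats = rng.normal(size=(T, D))      # stand-in for MFCC / intermediate features
cents = rng.normal(size=(K, D))      # stand-in for k-means centroids
units = kmeans_targets(feats, cents)
mask = mask_spans(T)
logits = rng.normal(size=(T, K))     # stand-in for Transformer outputs
loss = masked_prediction_loss(logits, units, mask)
```

Restricting the loss to masked frames forces the model to infer the hidden units of hidden regions from context, which is what distinguishes HuBERT's objective from plain frame-wise classification.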
gptkbp:publishedIn
|
2021
|
gptkbp:relatedTo
|
gptkb:wav2vec_2.0
gptkb:BERT
|
gptkbp:repository
|
https://github.com/pytorch/fairseq/tree/main/examples/hubert
|
gptkbp:usedFor
|
speech recognition
speech representation
|
gptkbp:bfsParent
|
gptkb:pre-trained_WavLM_models
gptkb:pre-trained_wav2vec2_models
gptkb:Hugging_Face_models
gptkb:Wei-Ning_Hsu
|
gptkbp:bfsLayer
|
7
|