|
gptkbp:instanceOf
|
gptkb:neural_network
|
|
gptkbp:advantage
|
captures rare word representations
|
|
gptkbp:disadvantage
|
computationally intensive for large vocabularies
|
|
gptkbp:hyperparameter
|
learning rate
window size
number of negative samples
embedding dimension
|
|
gptkbp:implementedIn
|
gptkb:Python
gptkb:TensorFlow
gptkb:PyTorch
|
|
gptkbp:input
|
target word
|
|
gptkbp:introduced
|
gptkb:Tomas_Mikolov
|
|
gptkbp:introducedIn
|
2013
|
|
gptkbp:objective
|
predict context words given a target word
|
|
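The objective above, predicting context words given a target word, amounts to generating (target, context) training pairs over a sliding window. A minimal sketch in Python (the function name and window default are illustrative, not from the source):

```python
def skipgram_pairs(tokens, window=2):
    # For each target word, emit one (target, context) pair for every
    # word within `window` positions on either side.
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

# With window=1, each interior word pairs with its two neighbours.
pairs = skipgram_pairs(["the", "quick", "brown", "fox"], window=1)
```

These pairs are what a Skip-gram network is then trained on: the target word is the input and each paired context word is a prediction target.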
gptkbp:output
|
context words
|
|
gptkbp:partOf
|
gptkb:Word2Vec
|
|
gptkbp:relatedTo
|
gptkb:GloVe
gptkb:FastText
Continuous Bag of Words model
|
|
gptkbp:trainer
|
large text corpora
|
|
gptkbp:usedFor
|
information retrieval
text classification
analogy tasks
semantic similarity
word embedding
|
|
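Of the uses listed above, semantic similarity between learned word vectors is conventionally scored with cosine similarity. A minimal sketch (the 3-dimensional vectors are made-up toy values, not real embeddings):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors:
    # dot(u, v) / (||u|| * ||v||), in [-1, 1] for real-valued vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" (illustrative only): related words should score higher.
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.75, 0.2]
apple = [0.1, 0.2, 0.9]

sim_king_queen = cosine_similarity(king, queen)
sim_king_apple = cosine_similarity(king, apple)
```

The same scoring underlies the analogy tasks listed above, where vector arithmetic (e.g. king - man + woman) is ranked against the vocabulary by cosine similarity.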
gptkbp:usedIn
|
natural language processing
|
|
gptkbp:bfsParent
|
gptkb:DeepWalk
gptkb:Efficient_Estimation_of_Word_Representations_in_Vector_Space
gptkb:Distributed_Representations_of_Words_and_Phrases_and_their_Compositionality
|
|
gptkbp:bfsLayer
|
8
|
|
https://www.w3.org/2000/01/rdf-schema#label
|
Skip-gram model
|