Statements (22)
Predicate | Object |
---|---|
gptkbp:instanceOf | mathematical optimization |
gptkbp:advantage | handles non-stationary objectives; reduces oscillations in parameter updates |
gptkbp:category | gradient descent optimization |
gptkbp:commonIn | training neural networks |
https://www.w3.org/2000/01/rdf-schema#label | RMSprop |
gptkbp:hyperparameter | learning rate; decay rate; epsilon |
gptkbp:implementedIn | available in Keras; available in PyTorch; available in TensorFlow |
gptkbp:proposedBy | gptkb:Geoffrey_Hinton; 2012 |
gptkbp:purpose | adaptive learning rate optimization |
gptkbp:relatedTo | gptkb:Adam; gptkb:AdaGrad |
gptkbp:updateRule | divides learning rate by moving average of squared gradients (see the sketch after this table) |
gptkbp:usedIn | gptkb:machine_learning; deep learning |
gptkbp:bfsParent | gptkb:machine_learning |
gptkbp:bfsLayer | 4 |
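The update rule above is stated in one phrase; the following is a minimal Python sketch of that rule, assuming the standard RMSprop formulation with the hyperparameters listed in the table (learning rate, decay rate, epsilon). The function name `rmsprop_update`, its defaults, and the toy example are illustrative, not taken from the source.

```python
import numpy as np

def rmsprop_update(params, grads, avg_sq_grad, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop step (illustrative names and defaults, not from the source)."""
    # Exponentially decayed moving average of squared gradients
    avg_sq_grad = decay * avg_sq_grad + (1 - decay) * grads ** 2
    # Divide the learning rate by the root of that moving average
    params = params - lr * grads / (np.sqrt(avg_sq_grad) + eps)
    return params, avg_sq_grad

# Toy usage: minimize f(x) = x^2 starting from x = 5
x = np.array([5.0])
avg = np.zeros_like(x)
for _ in range(400):
    grad = 2 * x
    x, avg = rmsprop_update(x, grad, avg, lr=0.05)
print(x)  # approaches 0
```

Because the gradient is divided by the root of its own moving average, each parameter takes steps of roughly the same magnitude regardless of how large its raw gradient is, which is the adaptive learning rate behavior the `gptkbp:purpose` statement refers to.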