RMSProp

GPTKB entity

Statements (24)
Predicate Object
gptkbp:instanceOf optimization algorithm
gptkbp:advantage handles non-stationary objectives
gptkbp:disadvantage may require tuning of hyperparameters
reduces the effective learning rate for frequently updated parameters
gptkbp:application neural network training
gptkbp:category gradient descent optimization
gptkbp:feature adaptive learning rate
element-wise learning rate adjustment
https://www.w3.org/2000/01/rdf-schema#label RMSProp
gptkbp:implementedIn Keras
PyTorch
TensorFlow
gptkbp:parameter learning rate
decay rate
epsilon
gptkbp:proposedBy gptkb:Geoffrey_Hinton
gptkbp:proposedIn 2012
gptkbp:relatedTo gptkb:Adam
gptkb:AdaGrad
gptkbp:updateRule divides the learning rate by the square root of a moving average of squared gradients
gptkbp:usedIn gptkb:machine_learning
deep learning
gptkbp:bfsParent gptkb:Adam_optimizer
gptkbp:bfsLayer 5
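
The update rule and parameters listed above (learning rate, decay rate, epsilon) can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the reference implementation from Keras, PyTorch, or TensorFlow; the function name `rmsprop_step` and default values are chosen here for demonstration only.

```python
# Minimal sketch of the RMSProp update rule: keep an exponential moving
# average of squared gradients per parameter, then divide the learning
# rate element-wise by the square root of that average (plus epsilon).
# Defaults (lr=0.01, decay=0.9, eps=1e-8) are illustrative assumptions.

def rmsprop_step(theta, grad, avg_sq, lr=0.01, decay=0.9, eps=1e-8):
    """One element-wise RMSProp update; returns new params and new state."""
    # Moving average of squared gradients (the "mean square" part).
    avg_sq = [decay * s + (1 - decay) * g * g for s, g in zip(avg_sq, grad)]
    # Adaptive step: learning rate scaled by 1 / (sqrt(avg_sq) + eps).
    theta = [t - lr * g / ((s ** 0.5) + eps)
             for t, g, s in zip(theta, grad, avg_sq)]
    return theta, avg_sq

# Usage: minimize f(x) = x^2 (gradient 2x), starting from x = 5.0.
theta, state = [5.0], [0.0]
for _ in range(1000):
    grad = [2 * t for t in theta]
    theta, state = rmsprop_step(theta, grad, state)
```

Because the denominator tracks recent gradient magnitudes, the step size stays close to the learning rate even as gradients shrink, which is why RMSProp handles non-stationary objectives better than plain gradient descent.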