Alternative names (2)
ruleChange • ruleModification

Random triples
| Subject | Object |
|---|---|
| gptkb:Momentum_optimizer | velocity-based |
| gptkb:Adam_optimizer | uses bias-corrected first and second moment estimates |
| gptkb:Ross_Chastain's_'Hail_Melon'_move_at_Martinsville | NASCAR banned wall-riding moves in 2023 |
| gptkb:NaSch_model | acceleration |
| gptkb:Para_7-a-side_National_Team | throw-ins can be one-handed |
| gptkb:AdaGrad | per-parameter learning rate |
| gptkb:Particle_Swarm_Optimization | position update |
| gptkb:RMSprop | divides learning rate by moving average of squared gradients |
| gptkb:Least_Mean_Squares | w(n+1) = w(n) + μ e(n) x(n) |
| gptkb:Self-Organizing_Map | neighborhood function |
| gptkb:State-Action-Reward-State-Action | Q(s,a) ← Q(s,a) + α [r + γ Q(s',a') - Q(s,a)] |
| gptkb:Hopfield_network | asynchronous |
| gptkb:LMS_algorithm | w(n+1) = w(n) + μ e(n) x(n) |
| gptkb:Heavy_ball_method | x(k+1) = x(k) − α ∇f(x(k)) + β (x(k) − x(k−1)) |
| gptkb:Trust_Region_Policy_Optimization | constrained optimization |
| gptkb:Adam_optimizer | parameter update based on moving averages of gradient and squared gradient |
| gptkb:NaSch_model | deceleration |
| gptkb:Flag_Football_Under_14 | no tackling |
| gptkb:binary_PSO | sigmoid function |
| gptkb:Global_Rapid_Rugby | no kicking out on the full |
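Several rows above state parameter-update rules as formulas, e.g. the LMS rule w(n+1) = w(n) + μ e(n) x(n). A minimal sketch of that rule in Python (the filter length, step size, and toy target filter are illustrative choices, not from the source):

```python
import numpy as np

def lms_step(w, x, d, mu=0.05):
    """One LMS update: e(n) = d(n) - w(n)·x(n); w(n+1) = w(n) + mu·e(n)·x(n)."""
    e = d - w @ x          # instantaneous estimation error
    return w + mu * e * x, e

# Toy usage: identify a fixed 2-tap filter from noiseless samples.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])   # hypothetical target weights
w = np.zeros(2)
for _ in range(500):
    x = rng.standard_normal(2)   # input sample
    d = w_true @ x               # desired response
    w, e = lms_step(w, x, d)
```

With unit-variance inputs and μ = 0.05, the iteration satisfies the usual stability condition (μ < 2 / tr(R)) and the weights converge toward the target filter.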