Statements (27)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:deep_reinforcement_learning_algorithm |
| gptkbp:application | gptkb:artificial_intelligence, gptkb:reinforcement_learning |
| gptkbp:author | gptkb:Ioannis_Antonoglou, gptkb:David_Silver, gptkb:Tom_Schaul, John Quan |
| gptkbp:basedOn | gptkb:Deep_Q-Network |
| gptkbp:citation | gptkb:Prioritized_Experience_Replay, gptkb:arXiv_preprint, 2015 |
| gptkbp:contribution | prioritized experience replay |
| gptkbp:fullName | Prioritized Experience Replay Deep Q-Network |
| gptkbp:improves | sample efficiency, learning speed |
| gptkbp:introduced | gptkb:Tom_Schaul |
| gptkbp:introducedIn | 2015 |
| gptkbp:publishedIn | arXiv:1511.05952 |
| gptkbp:relatedTo | gptkb:Reinforcement_Learning, Deep Q-Learning, Experience Replay |
| gptkbp:url | https://arxiv.org/abs/1511.05952 |
| gptkbp:usedIn | Atari 2600 benchmarks |
| gptkbp:uses | TD-error for prioritization |
| gptkbp:bfsParent | gptkb:Rainbow_DQN |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Prioritized DQN |
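The `gptkbp:uses` statement above (TD-error for prioritization) can be sketched in code. The following is a minimal, illustrative proportional prioritized-replay buffer in the spirit of the cited paper, not the authors' implementation: transitions are stored with priority p_i = (|TD-error| + eps)^alpha, sampled with probability P(i) proportional to p_i, and corrected with importance-sampling weights (N * P(i))^(-beta). The class name, parameter defaults, and flat-array storage (the paper uses a sum-tree for efficiency) are assumptions for this sketch.

```python
import numpy as np


class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized replay (hypothetical sketch).

    Samples transition i with P(i) = p_i / sum_k p_k, where the priority
    p_i = (|TD-error| + eps)^alpha; importance-sampling weights
    w_i = (N * P(i))^(-beta) correct the bias the sampling introduces.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6, seed=0):
        self.capacity = capacity
        self.alpha, self.beta, self.eps = alpha, beta, eps
        self.data = []                                    # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                                      # ring-buffer write index
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error):
        # New transitions get a priority derived from their TD-error.
        self.priorities[self.pos] = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.data)
        probs = self.priorities[:n] / self.priorities[:n].sum()
        idx = self.rng.choice(n, size=batch_size, p=probs)
        weights = (n * probs[idx]) ** (-self.beta)
        weights /= weights.max()                          # normalize to [0, 1]
        return idx, [self.data[i] for i in idx], weights

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities with the new TD-errors.
        self.priorities[idx] = (np.abs(td_errors) + self.eps) ** self.alpha
```

A transition with a large TD-error (here the third one) ends up with the largest sampling probability, which is the mechanism behind the improved sample efficiency listed above:

```python
buf = PrioritizedReplayBuffer(capacity=4)
for i, err in enumerate([0.1, 0.2, 5.0, 0.1]):
    buf.add(("state", "action", "reward", "next_state", i), err)
idx, batch, weights = buf.sample(32)
```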