Dopamine reward prediction error theory
GPTKB entity
Statements (26)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:scientific_theory |
| gptkbp:appliesTo | animal learning; human learning |
| gptkbp:citation | gptkb:Nature; gptkb:science |
| gptkbp:describes | role of dopamine in learning |
| gptkbp:explains | gptkb:reinforcement_learning; dopamine neuron activity |
| gptkbp:field | neuroscience; psychology |
| gptkbp:hasConcept | gptkb:reward_prediction_error |
| gptkbp:influenced | gptkb:artificial_intelligence; computational neuroscience |
| gptkbp:predicts | dopamine response increases with unexpected rewards; no change in dopamine response for fully predicted rewards; dopamine response decreases with omitted expected rewards |
| gptkbp:proposedBy | gptkb:Wolfram_Schultz; 1990s |
| gptkbp:relatedTo | temporal difference learning; reinforcement learning algorithms |
| gptkbp:state | dopamine neurons signal difference between expected and received rewards |
| gptkbp:supportedBy | fMRI studies; electrophysiological studies |
| gptkbp:bfsParent | gptkb:Peter_Dayan |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | Dopamine reward prediction error theory |
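The gptkbp:predicts and gptkbp:relatedTo statements above correspond to the temporal-difference prediction error used in reinforcement learning. The Python sketch below is illustrative only: the function name, the numeric values, and the single-step simplification (no successor state) are assumptions added here, not part of the KB entry. It shows how the three predicted dopamine responses follow from the sign of the error.

```python
# Minimal sketch of the reward prediction error (RPE) underlying the theory,
# in temporal-difference form: delta = r + gamma * V(s') - V(s).
# Names and values are illustrative assumptions, not data from the KB entry.

def prediction_error(reward, expected_value, next_value=0.0, gamma=0.9):
    """Temporal-difference prediction error: received minus expected reward."""
    return reward + gamma * next_value - expected_value

# The three cases listed under gptkbp:predicts, in a single-step setting
# where there is no successor state (next_value = 0):
unexpected = prediction_error(reward=1.0, expected_value=0.0)  # > 0: dopamine burst
predicted  = prediction_error(reward=1.0, expected_value=1.0)  # ~0: no change in firing
omitted    = prediction_error(reward=0.0, expected_value=1.0)  # < 0: dip below baseline

print(unexpected, predicted, omitted)  # 1.0 0.0 -1.0
```

In this simplified reading, a positive error maps to increased dopamine firing, a zero error to baseline activity, and a negative error to a pause in firing, matching the predictions listed in the table.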