Statements (23)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:algorithm, gptkb:machine_learning_method |
| gptkbp:approach | Bayesian inference |
| gptkbp:author | gptkb:Deepak_Ramachandran, Eyal Amir |
| gptkbp:citation | high |
| gptkbp:describedBy | Bayesian Inverse Reinforcement Learning (Ramachandran & Amir, 2007) |
| gptkbp:field | inverse reinforcement learning |
| gptkbp:goal | recover reward function from demonstrations |
| gptkbp:influenced | apprenticeship learning, preference-based RL |
| gptkbp:input | expert demonstrations |
| gptkbp:output | posterior distribution over reward functions |
| gptkbp:publicationYear | 2007 |
| gptkbp:relatedTo | gptkb:Markov_Decision_Process, gptkb:Reinforcement_Learning, Maximum Entropy IRL |
| gptkbp:usedIn | gptkb:artificial_intelligence, autonomous systems, robotics |
| gptkbp:bfsParent | gptkb:Inverse_Reinforcement_Learning |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | Bayesian IRL |
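The statements above record that Bayesian IRL takes expert demonstrations as input and, via Bayesian inference, outputs a posterior distribution over reward functions. The following is a minimal illustrative sketch of that idea in Python: a random-walk Metropolis-Hastings sampler over reward vectors for a toy MDP, in the spirit of the paper's PolicyWalk algorithm. The MDP, the demonstrations, and all hyperparameters (`gamma`, `alpha`, proposal scale, iteration counts) are hypothetical assumptions for illustration, not taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MDP: 3 states, 2 actions. The dynamics below are an illustrative
# assumption, not from Ramachandran & Amir (2007).
n_states, n_actions = 3, 2
gamma = 0.9   # discount factor (assumed)
alpha = 5.0   # Boltzmann rationality of the expert (assumed)

# Transition tensor T[s, a, s'], rows normalized over s'.
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

def q_values(R, n_iter=200):
    """Value iteration: optimal Q under reward vector R (one reward per state)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iter):
        V = Q.max(axis=1)                  # greedy state values
        Q = R[:, None] + gamma * (T @ V)   # Bellman optimality backup
    return Q

def log_likelihood(demos, R):
    """Boltzmann demonstration likelihood: P(a|s,R) proportional to exp(alpha * Q*(s,a))."""
    Q = q_values(R)
    logp = alpha * Q - np.log(np.exp(alpha * Q).sum(axis=1, keepdims=True))
    return sum(logp[s, a] for s, a in demos)

# Hypothetical expert demonstrations as (state, action) pairs.
demos = [(0, 1), (1, 1), (2, 0), (0, 1)]

# Random-walk Metropolis-Hastings over reward functions with a uniform prior
# on [-1, 1]^n_states; acceptance reduces to the likelihood ratio.
R = np.zeros(n_states)
log_post = log_likelihood(demos, R)
samples = []
for _ in range(2000):
    R_new = np.clip(R + rng.normal(scale=0.1, size=n_states), -1.0, 1.0)
    log_post_new = log_likelihood(demos, R_new)
    if np.log(rng.random()) < log_post_new - log_post:  # accept/reject step
        R, log_post = R_new, log_post_new
    samples.append(R.copy())

# The chain approximates the posterior over rewards; its mean is one point estimate.
posterior_mean = np.mean(samples[500:], axis=0)  # discard burn-in
print("posterior mean reward:", posterior_mean)
```

The output is a set of reward samples rather than a single reward vector, which is exactly the "posterior distribution over reward functions" listed under `gptkbp:output`; downstream uses such as apprenticeship learning can then plan against the posterior mean or integrate over the samples.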