Statements (19)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:person |
| gptkbp:almaMater | gptkb:University_of_California,_Berkeley |
| gptkbp:doctoralAdvisor | gptkb:Pieter_Abbeel, gptkb:Anca_Dragan |
| gptkbp:employer | gptkb:Massachusetts_Institute_of_Technology |
| gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning, AI safety |
| gptkbp:nationality | gptkb:American |
| gptkbp:notableWork | gptkb:cooperative_inverse_reinforcement_learning, AI alignment research |
| gptkbp:occupation | gptkb:computer_scientist |
| gptkbp:position | assistant professor |
| gptkbp:researchInterest | human-AI interaction, value alignment, reward modeling |
| gptkbp:bfsParent | gptkb:AI_Research_Priorities_Open_Letter |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Dylan Hadfield-Menell |