gptkbp:instanceOf
    gptkb:person

gptkbp:activeYearsStart
    1996

gptkbp:author
    gptkb:Harry_Potter_and_the_Methods_of_Rationality
    gptkb:Inadequate_Equilibria
    gptkb:Rationality:_From_AI_to_Zombies

gptkbp:birthDate
    1979-09-11

gptkbp:birthPlace
    gptkb:Chicago,_Illinois,_USA

gptkbp:education
    self-taught

gptkbp:field
    gptkb:artificial_intelligence
    decision theory
    ethics
    rationality

gptkbp:founder
    gptkb:Machine_Intelligence_Research_Institute
    gptkb:LessWrong

gptkbp:genre
    gptkb:science_fiction
    non-fiction
    fan fiction

gptkbp:hasBlog
    gptkb:LessWrong
    gptkb:Overcoming_Bias

https://www.w3.org/2000/01/rdf-schema#label
    Eliezer Yudkowsky

gptkbp:influenced
    gptkb:Scott_Alexander
    gptkb:Gwern_Branwen
    the rationalist community

gptkbp:influencedBy
    gptkb:Nick_Bostrom
    gptkb:Vernor_Vinge

gptkbp:knownFor
    gptkb:LessWrong
    gptkb:Harry_Potter_and_the_Methods_of_Rationality
    rationality
    artificial intelligence safety

gptkbp:memberOf
    gptkb:Machine_Intelligence_Research_Institute

gptkbp:nationality
    gptkb:American

gptkbp:notableIdea
    gptkb:Coherent_Extrapolated_Volition
    gptkb:Friendly_AI
    AI alignment
    AI risk
    Bayesian rationality

gptkbp:notableWork
    gptkb:AI_as_a_Positive_and_Negative_Factor_in_Global_Risk_(paper)
    gptkb:Coherent_Extrapolated_Volition_(paper)
    gptkb:Creating_Friendly_AI_(paper)
    gptkb:Harry_Potter_and_the_Methods_of_Rationality
    gptkb:Inadequate_Equilibria
    gptkb:Rationality:_From_AI_to_Zombies
    gptkb:Sequences_(LessWrong)

gptkbp:occupation
    gptkb:computer_scientist
    gptkb:writer

gptkbp:religion
    atheism

gptkbp:spouse
    gptkb:Brienne_Yudkowsky

gptkbp:twitter
    @ESYudkowsky

gptkbp:website
    https://www.lesswrong.com/
    https://www.yudkowsky.net/

gptkbp:bfsParent
    gptkb:Rationality_Enhancement

gptkbp:bfsLayer
    4