gptkbp:instance_of: gptkb:Person
gptkbp:affiliation: gptkb:Singularity_Institute_for_Artificial_Intelligence
gptkbp:alma_mater: gptkb:University_of_Chicago
gptkbp:associated_with: gptkb:Machine_Intelligence_Research_Institute
gptkbp:birth_date: 1979-09-11
gptkbp:birth_place: gptkb:Chicago,_Illinois
gptkbp:children: gptkb:unknown
gptkbp:contribution:
  gptkb:effective_altruism
  AI alignment
gptkbp:field:
  rationalist community
  artificial intelligence safety
https://www.w3.org/2000/01/rdf-schema#label: Eliezer Yudkowsky
gptkbp:influenced: rationalist movement
gptkbp:influenced_by:
  gptkb:Elon_Musk
  gptkb:David_Deutsch
  gptkb:Nick_Bostrom
gptkbp:known_for:
  gptkb:Artificial_Intelligence
  rationality
gptkbp:nationality: gptkb:American
gptkbp:notable_work: gptkb:Harry_Potter_and_the_Methods_of_Rationality
gptkbp:occupation:
  gptkb:Writer
  gptkb:researchers
gptkbp:published_work:
  gptkb:Rationality:_From_AI_to_Zombies
  The Future of Humanity
  The Ethics of Artificial Intelligence
  The Nature of Rationality
  The Role of AI in Society
  The AI Revolution
  The Sequences
  The Impact of AI on Society
  Causal Influence
  The Simple Truth About AI
  The AI Alignment Problem
  The Challenge of AI Alignment
  The Dangers of Unaligned AI
  The Ethics of AI Development
  The Future of AI Ethics
  The Future of AI Safety
  The Future of Humanity and AI
  The Importance of AI Safety
  The Importance of Rational Thinking
  The Limits of AI
  The Path to AI Safety
  The Problem of AI Control
  The Role of Ethics in AI
  The Role of Rationality in Society
  The Yudkowsky Sequence
gptkbp:resides_in: gptkb:California
gptkbp:spouse: gptkb:unknown
gptkbp:website: gptkb:lesswrong.com
gptkbp:bfsParent:
  gptkb:Lifeboat_Network
  gptkb:Nick_Bostrom
  gptkb:Ray_Solomonoff
  gptkb:AI_Alignment_Forum
gptkbp:bfsLayer: 5