Statements (73)
| Predicate | Object |
|---|---|
| gptkbp:instance_of | gptkb:Buddhism, gptkb:philosopher |
| gptkbp:author | gptkb:Superintelligence:_Paths,_Dangers,_Strategies |
| gptkbp:birth_date | 1973-03-10 |
| gptkbp:birth_year | gptkb:1973 |
| gptkbp:born_in | gptkb:Sweden |
| gptkbp:contributed_to | AI safety, future studies, philosophy of technology |
| gptkbp:field | gptkb:technology, gptkb:philosophy, ethics |
| gptkbp:founded | gptkb:Future_of_Humanity_Institute |
| gptkbp:has_publications | Global Catastrophic Risks, The Ethics of Artificial Intelligence, Human Enhancement, Anthropic Bias, The Vulnerable World Hypothesis |
| gptkbp:has_written | gptkb:Artificial_Intelligence, gptkb:philosophy_of_mind, gptkb:Public_School, gptkb:moral_philosophy, gptkb:philosophy_of_science, theoretical computer science, risk assessment, philosophy of mathematics, AI safety, decision theory, philosophy of language, philosophy of history, social choice theory, posthumanism, philosophy of physics, bioethics, philosophy of biology, philosophy of religion, rationality, future studies, technological singularity, human enhancement, normative ethics, applied ethics, philosophy of action, philosophy of ethics, philosophy of artificial intelligence, philosophy of law, philosophy of economics, philosophy of technology, philosophy of culture, philosophy of chemistry, philosophy of politics, philosophy of social science, machine ethics, philosophy of cognitive science, philosophy of neuroscience, future of humanity, global catastrophic risks, long-term future, existential risks, moral uncertainty, value of life |
| https://www.w3.org/2000/01/rdf-schema#label | Nick Bostrom |
| gptkbp:influenced_by | gptkb:Eliezer_Yudkowsky, gptkb:David_Chalmers, gptkb:John_Searle |
| gptkbp:known_for | existential risk, superintelligence |
| gptkbp:nationality | gptkb:Swedish |
| gptkbp:works_at | gptkb:University_of_Oxford |
| gptkbp:bfsParent | gptkb:John_Mc_Carthy_IV, gptkb:Dr._John_D._Mc_Carthy, gptkb:Transhumanism |
| gptkbp:bfsLayer | 4 |
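The statements above are predicate/object pairs about a single entity, which maps naturally onto subject-predicate-object triples. A minimal sketch of that representation, assuming a subject IRI of `gptkb:Nick_Bostrom` (the table gives only the label "Nick Bostrom"; the `Triple` type and `objects_of` helper are illustrative, not a real API):

```python
# Represent a few of the statements above as subject-predicate-object
# triples and query them in plain Python. The gptkb:/gptkbp: prefixes
# come from the table; the subject IRI is an assumption.
from typing import NamedTuple


class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str


TRIPLES = [
    Triple("gptkb:Nick_Bostrom", "gptkbp:born_in", "gptkb:Sweden"),
    Triple("gptkb:Nick_Bostrom", "gptkbp:author",
           "gptkb:Superintelligence:_Paths,_Dangers,_Strategies"),
    Triple("gptkb:Nick_Bostrom", "gptkbp:known_for", "existential risk"),
    Triple("gptkb:Nick_Bostrom", "gptkbp:known_for", "superintelligence"),
]


def objects_of(predicate: str) -> list[str]:
    """Return every object attached to the given predicate."""
    return [t.obj for t in TRIPLES if t.predicate == predicate]


print(objects_of("gptkbp:known_for"))
# -> ['existential risk', 'superintelligence']
```

Multi-valued predicates such as `gptkbp:known_for` simply become multiple triples sharing the same subject and predicate, which is why a predicate lookup returns a list.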