Superintelligence (AI)

GPTKB entity

Statements (51)
Predicate Object
gptkbp:instanceOf gptkb:artificial_intelligence
gptkbp:instanceOf concept
gptkbp:agingPotential autonomous decision making
gptkbp:agingPotential outperform human intelligence
gptkbp:agingPotential rapid self-improvement
gptkbp:cause unintended consequences
gptkbp:cause global catastrophic risk
gptkbp:cause loss of human control
gptkbp:concerns future of humanity
gptkbp:concerns ethics of artificial intelligence
gptkbp:debatedBy philosophers
gptkbp:debatedBy AI researchers
gptkbp:debatedBy technology ethicists
gptkbp:describedBy gptkb:Nick_Bostrom
gptkbp:discusses AI safety
gptkbp:discusses AI alignment
gptkbp:goal achieve superhuman capabilities
gptkbp:goal autonomous problem solving
gptkbp:goal maximize intelligence
https://www.w3.org/2000/01/rdf-schema#label Superintelligence (AI)
gptkbp:mainTopicOf gptkb:Superintelligence:_Paths,_Dangers,_Strategies
gptkbp:predicts gptkb:Ray_Kurzweil
gptkbp:predicts gptkb:Nick_Bostrom
gptkbp:predicts gptkb:Vernor_Vinge
gptkbp:relatedTo gptkb:DeepMind
gptkbp:relatedTo gptkb:OpenAI
gptkbp:relatedTo gptkb:machine_learning
gptkbp:relatedTo gptkb:Future_of_Humanity_Institute
gptkbp:relatedTo gptkb:Machine_Intelligence_Research_Institute
gptkbp:relatedTo gptkb:orthogonality_thesis
gptkbp:relatedTo artificial general intelligence
gptkbp:relatedTo technological singularity
gptkbp:relatedTo control problem
gptkbp:relatedTo AI takeover
gptkbp:relatedTo friendly AI
gptkbp:relatedTo recursive self-improvement
gptkbp:relatedTo instrumental convergence thesis
gptkbp:relatedTo paperclip maximizer
gptkbp:relatedTo Center for the Study of Existential Risk
gptkbp:requires value alignment
gptkbp:requires robust control mechanisms
gptkbp:riskFactor existential risk
gptkbp:studiedBy gptkb:Max_Tegmark
gptkbp:studiedBy gptkb:Nick_Bostrom
gptkbp:studiedBy gptkb:Stuart_Russell
gptkbp:studiedBy gptkb:Eliezer_Yudkowsky
gptkbp:subjectOf gptkb:military
gptkbp:subjectOf gptkb:science_fiction
gptkbp:subjectOf academic research
gptkbp:bfsParent gptkb:Superintelligence_(film)
gptkbp:bfsLayer 6
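The predicate–object rows above are RDF-style triples whose implicit subject is this page's entity, with multi-valued predicates (e.g. gptkbp:studiedBy) contributing one row per object. A minimal sketch of how the statements could be loaded and queried in plain Python, assuming this flat triple structure; the entity and predicate IDs are copied from the list, and the helper name objects_of is illustrative:

```python
# Statements for one GPTKB entity, kept as (predicate, object) pairs;
# the subject is implicit, as on the entity page itself.
SUBJECT = "gptkb:Superintelligence_(AI)"

# A subset of the 51 statements listed above.
statements = [
    ("gptkbp:instanceOf", "gptkb:artificial_intelligence"),
    ("gptkbp:describedBy", "gptkb:Nick_Bostrom"),
    ("gptkbp:riskFactor", "existential risk"),
    ("gptkbp:studiedBy", "gptkb:Max_Tegmark"),
    ("gptkbp:studiedBy", "gptkb:Nick_Bostrom"),
    ("gptkbp:studiedBy", "gptkb:Stuart_Russell"),
    ("gptkbp:studiedBy", "gptkb:Eliezer_Yudkowsky"),
]

def objects_of(predicate):
    """Return every object attached to SUBJECT via the given predicate."""
    return [obj for pred, obj in statements if pred == predicate]

# Multi-valued predicates return all their objects, in page order.
print(objects_of("gptkbp:studiedBy"))
```

Full prefixed IRIs (gptkb:, gptkbp:) are kept as opaque strings here; an RDF library such as rdflib could expand them against the KB's namespaces and serialize the same data as proper triples.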