Machine Super Intelligence

GPTKB entity

Statements (61)
Predicate                  Object
gptkbp:instanceOf          concept
gptkbp:alsoKnownAs         gptkb:Superintelligent_AI
gptkbp:benefit             technological progress
gptkbp:challenge           gptkb:orthogonality_thesis
                           AI ethics
                           AI alignment research
                           AI safety research
                           AI governance
                           AI forecasting
                           AI policy
                           corrigibility
                           AI regulation
                           AI transparency
                           control problem
                           instrumental convergence
                           AI arms race
                           AI boxing
                           AI catastrophic risk
                           AI containment
                           AI dual use
                           AI existential risk mitigation
                           AI global coordination
                           AI interpretability
                           AI misuse
                           AI risk assessment
                           AI robustness
                           AI security
                           AI superalignment
                           AI value learning
                           goal specification
                           value alignment problem
gptkbp:containsScenario    singularity
                           intelligence explosion
                           AI takeover
                           friendly AI
                           unfriendly AI
gptkbp:debatedBy           gptkb:Bill_Gates
                           gptkb:Elon_Musk
                           gptkb:Geoffrey_Hinton
                           gptkb:Ray_Kurzweil
                           gptkb:Stephen_Hawking
                           gptkb:Yoshua_Bengio
                           gptkb:Max_Tegmark
                           gptkb:Nick_Bostrom
                           gptkb:Demis_Hassabis
                           gptkb:Sam_Altman
                           gptkb:Stuart_Russell
gptkbp:describedBy         gptkb:Nick_Bostrom
                           gptkb:Superintelligence:_Paths,_Dangers,_Strategies
gptkbp:hasProperty         autonomous decision making
                           intelligence surpassing human level
                           self-improvement capability
https://www.w3.org/2000/01/rdf-schema#label  Machine Super Intelligence
gptkbp:relatedTo           gptkb:artificial_intelligence
                           gptkb:Artificial_General_Intelligence
gptkbp:riskFactor          existential risk
gptkbp:studiedIn           ethics
                           philosophy of mind
                           AI safety
gptkbp:bfsParent           gptkb:Shane_Legg
gptkbp:bfsLayer            6
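The statement list above is a rendering of RDF triples. A minimal Turtle sketch of a few of them might look like the following; note that the namespace IRIs behind the `gptkb:` and `gptkbp:` prefixes and the subject IRI are assumptions for illustration, as the page itself does not state them (only the `rdfs:label` IRI appears verbatim above):

```turtle
# Assumed namespace IRIs -- not given on this page.
@prefix gptkb:  <https://gptkb.org/entity/> .
@prefix gptkbp: <https://gptkb.org/property/> .
@prefix rdfs:   <https://www.w3.org/2000/01/rdf-schema#> .

# Hypothetical subject IRI for this entity, with a sample of its statements.
gptkb:Machine_Super_Intelligence
    rdfs:label          "Machine Super Intelligence" ;
    gptkbp:instanceOf   "concept" ;
    gptkbp:alsoKnownAs  gptkb:Superintelligent_AI ;
    gptkbp:challenge    gptkb:orthogonality_thesis , "AI ethics" , "AI alignment research" ;
    gptkbp:debatedBy    gptkb:Nick_Bostrom , gptkb:Stuart_Russell ;
    gptkbp:riskFactor   "existential risk" ;
    gptkbp:bfsLayer     6 .
```

Multi-valued predicates such as `gptkbp:challenge` and `gptkbp:debatedBy` use Turtle's comma-separated object lists, which is why the table above shows many object rows under a single predicate; literal-valued objects ("concept", "AI ethics") are quoted strings, while entity-valued objects keep their `gptkb:` prefixed names.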