gptkbp:instanceOf
|
gptkb:artificial_intelligence concept
|
gptkbp:abilities
|
outperform humans in virtually all cognitive domains
|
gptkbp:concerns
|
unintended consequences
value alignment
control problem
instrumental convergence
paperclip maximizer
|
gptkbp:debatedBy
|
gptkb:Bill_Gates
gptkb:Elon_Musk
gptkb:Stephen_Hawking
gptkb:Stuart_Russell
|
gptkbp:describedBy
|
gptkb:Nick_Bostrom
gptkb:Superintelligence:_Paths,_Dangers,_Strategies
|
gptkbp:discusses
|
gptkb:science_fiction
AI safety research
ethics of artificial intelligence
|
gptkbp:exampleInFiction
|
gptkb:HAL_9000
gptkb:Ultron
gptkb:Westworld
gptkb:Skynet
gptkb:Colossus:_The_Forbin_Project
gptkb:Her
gptkb:Person_of_Interest
gptkb:The_Matrix
gptkb:Ex_Machina
gptkb:Samantha_(Her)
I, Robot
Transcendence
|
gptkbp:firstDiscussed
|
20th century
|
gptkbp:goal
|
problem solving
self-improvement
autonomous decision making
|
https://www.w3.org/2000/01/rdf-schema#label
|
Superintelligent AI
|
gptkbp:potentialOutcome
|
utopian future
global transformation
human extinction
|
gptkbp:relatedTo
|
gptkb:machine_learning
artificial general intelligence
technological singularity
|
gptkbp:riskFactor
|
existential risk
AI alignment problem
|
gptkbp:solvedBy
|
gptkb:cooperative_inverse_reinforcement_learning
gptkb:tripwire_mechanism
AI alignment research
corrigibility
AI boxing
value learning
oracle AI
interpretability research
|
gptkbp:studiedBy
|
gptkb:DeepMind
gptkb:OpenAI
gptkb:Future_of_Humanity_Institute
gptkb:Machine_Intelligence_Research_Institute
|
gptkbp:bfsParent
|
gptkb:Samaritan_(AI_in_Person_of_Interest)
|
gptkbp:bfsLayer
|
6
|
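The rdf-schema#label row above indicates this entry is RDF-shaped data. As a minimal sketch only, the snippet below shows how a few of the property-value pairs could be expressed as RDF triples using Python's rdflib; the namespace IRIs are illustrative placeholders and not part of this entry, and rdflib is an assumed tool, not the knowledge base's own serialization.

```python
# Sketch: express a few of the rows above as RDF triples with rdflib.
# The namespace IRIs are placeholders (assumption), not taken from this entry.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/")     # placeholder entity namespace
GPTKBP = Namespace("https://example.org/gptkbp/")   # placeholder property namespace

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

subject = GPTKB["Superintelligent_AI"]

# rdfs:label mirrors the entry's rdf-schema#label row.
g.add((subject, RDFS.label, Literal("Superintelligent AI", lang="en")))

# One linked value and two plain-literal values, mirroring the mixed style above.
g.add((subject, GPTKBP["describedBy"], GPTKB["Nick_Bostrom"]))
g.add((subject, GPTKBP["concerns"], Literal("value alignment")))
g.add((subject, GPTKBP["studiedBy"], GPTKB["Future_of_Humanity_Institute"]))

print(g.serialize(format="turtle"))
```

Values prefixed with gptkb: correspond to linked entities (URIRefs), while unprefixed values such as "value alignment" are plain literals, which is why the sketch mixes both forms.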