Statements (21)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:concept |
| gptkbp:category | gptkb:existential_risk, AI alignment |
| gptkbp:debatedBy | gptkb:Ben_Goertzel, gptkb:Nick_Bostrom, gptkb:Eliezer_Yudkowsky |
| gptkbp:defines | The process by which an intelligent agent improves its own architecture or algorithms, leading to rapid increases in intelligence. |
| gptkbp:discusses | AI safety literature |
| gptkbp:field | gptkb:artificial_intelligence, gptkb:machine_learning |
| gptkbp:firstMentioned | 1965 |
| gptkbp:potentialOutcome | intelligence explosion |
| gptkbp:proposedBy | gptkb:I.J._Good |
| gptkbp:relatedTo | gptkb:superintelligence, technological singularity |
| gptkbp:requires | access to computational resources, self-modification capability |
| gptkbp:riskFactor | uncontrolled AI development |
| gptkbp:bfsParent | gptkb:Welcome_Oblivion |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Recursive Self-Improvement |
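Since the statements above are predicate–object pairs about a single subject, they can be read as RDF triples. The following is a minimal sketch of loading a few of them into an in-memory graph with rdflib; the base URIs chosen for the `gptkb:` and `gptkbp:` prefixes are placeholders assumed for illustration, not the knowledge base's official namespaces.

```python
# Minimal sketch: a subset of the statements above expressed as RDF triples.
# The namespace URIs below are assumed placeholders, not official ones.
from rdflib import Graph, Literal, Namespace, RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed base URI
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed base URI

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

subject = GPTKB["Recursive_Self-Improvement"]

# A few rows from the table, added as (subject, predicate, object) triples.
g.add((subject, RDFS.label, Literal("Recursive Self-Improvement")))
g.add((subject, GPTKBP.instanceOf, GPTKB.concept))
g.add((subject, GPTKBP.proposedBy, GPTKB["I.J._Good"]))
g.add((subject, GPTKBP.firstMentioned, Literal(1965)))
g.add((subject, GPTKBP.relatedTo, GPTKB.superintelligence))
g.add((subject, GPTKBP.relatedTo, Literal("technological singularity")))

# Print the graph back out in Turtle syntax.
print(g.serialize(format="turtle"))
```

Multi-valued predicates such as gptkbp:relatedTo simply become multiple triples sharing the same subject and predicate, which is how the table's comma-separated objects map onto the graph.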