Statements (13)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:concept |
| gptkbp:associatedWith | AI alignment problem |
| gptkbp:consequence | human extinction |
| gptkbp:contrastsWith | gptkb:Friendly_AI |
| gptkbp:describedBy | gptkb:Nick_Bostrom |
| gptkbp:discusses | AI safety literature |
| gptkbp:goal | misaligned with human values |
| gptkbp:relatedTo | gptkb:artificial_intelligence |
| gptkbp:riskFactor | gptkb:existential_risk |
| gptkbp:topic | gptkb:Superintelligence:_Paths,_Dangers,_Strategies |
| gptkbp:bfsParent | gptkb:Friendly_AI |
| gptkbp:bfsLayer | 8 |
| https://www.w3.org/2000/01/rdf-schema#label | Unfriendly AI |
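The table above is a set of RDF-style triples about a single subject. A minimal Turtle sketch of the same statements follows; the prefix IRIs and the subject identifier `gptkb:Unfriendly_AI` are assumptions (inferred from the `rdfs:label`), not taken from the source:

```turtle
# Assumed namespace IRIs -- the source only shows the prefixes, not their expansions.
@prefix gptkb:  <https://gptkb.org/entity/> .
@prefix gptkbp: <https://gptkb.org/property/> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .

# Subject IRI is a hypothetical choice based on the rdfs:label "Unfriendly AI".
gptkb:Unfriendly_AI
    gptkbp:instanceOf      gptkb:concept ;
    gptkbp:associatedWith  "AI alignment problem" ;
    gptkbp:consequence     "human extinction" ;
    gptkbp:contrastsWith   gptkb:Friendly_AI ;
    gptkbp:describedBy     gptkb:Nick_Bostrom ;
    gptkbp:discusses       "AI safety literature" ;
    gptkbp:goal            "misaligned with human values" ;
    gptkbp:relatedTo       gptkb:artificial_intelligence ;
    gptkbp:riskFactor      gptkb:existential_risk ;
    # Commas in a Turtle local name must be backslash-escaped; colons may appear unescaped.
    gptkbp:topic           gptkb:Superintelligence:_Paths\,_Dangers\,_Strategies ;
    gptkbp:bfsParent       gptkb:Friendly_AI ;
    gptkbp:bfsLayer        8 ;
    rdfs:label             "Unfriendly AI" .
```

Objects shown as plain strings in the table (e.g. "AI alignment problem") are written as literals here, since they carry no `gptkb:` prefix; whether the knowledge base actually stores them as literals or as unminted entities is not determinable from the table alone.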