Statements (17)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:concept |
| gptkbp:category | AI ethics, AI safety |
| gptkbp:coinedBy | gptkb:Eliezer_Yudkowsky |
| gptkbp:contrastsWith | gptkb:Unfriendly_AI |
| gptkbp:describedBy | gptkb:LessWrong |
| gptkbp:discusses | gptkb:Future_of_Humanity_Institute, gptkb:Machine_Intelligence_Research_Institute |
| gptkbp:focusesOn | safety, value alignment |
| gptkbp:goal | ensure AI acts in humanity's best interests |
| gptkbp:relatedTo | gptkb:artificial_intelligence, AI alignment, machine ethics |
| gptkbp:bfsParent | gptkb:Eliezer_Yudkowsky |
| gptkbp:bfsLayer | 7 |
| https://www.w3.org/2000/01/rdf-schema#label | Friendly AI |
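These rows are RDF-style triples: each predicate in the `gptkbp:` namespace links the subject (the Friendly AI concept) to either another `gptkb:` entity or a literal value. A minimal sketch of how the same statements could be built programmatically with `rdflib` follows; the base IRIs for the `gptkb:`/`gptkbp:` namespaces are hypothetical placeholders, since the table does not state them, and the subject IRI is inferred from the `rdf-schema#label` row.

```python
# A minimal sketch, assuming hypothetical base IRIs for the gptkb/gptkbp
# namespaces (not given in the table above).
from rdflib import Graph, Literal, Namespace, RDFS

GPTKB = Namespace("https://example.org/gptkb/")    # assumed base IRI
GPTKBP = Namespace("https://example.org/gptkbp/")  # assumed base IRI

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

subject = GPTKB["Friendly_AI"]  # subject inferred from the rdfs:label row

# Entity-valued objects become IRIs; free-text objects become literals.
g.add((subject, GPTKBP["instanceOf"], GPTKB["concept"]))
g.add((subject, GPTKBP["coinedBy"], GPTKB["Eliezer_Yudkowsky"]))
g.add((subject, GPTKBP["contrastsWith"], GPTKB["Unfriendly_AI"]))
g.add((subject, GPTKBP["category"], Literal("AI ethics")))
g.add((subject, GPTKBP["category"], Literal("AI safety")))
g.add((subject, RDFS.label, Literal("Friendly AI")))

# Serialize the graph as Turtle to inspect the resulting statements.
print(g.serialize(format="turtle"))
```

Multi-valued predicates such as `gptkbp:category` are simply repeated triples with the same subject and predicate, which is why the table lists several objects in one row.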