Statements (17)
Predicate | Object |
---|---|
gptkbp:instanceOf | concept |
gptkbp:category | AI ethics; AI safety |
gptkbp:coinedBy | gptkb:Eliezer_Yudkowsky |
gptkbp:contrastsWith | gptkb:Unfriendly_AI |
gptkbp:describedBy | gptkb:LessWrong |
gptkbp:discusses | gptkb:Future_of_Humanity_Institute; gptkb:Machine_Intelligence_Research_Institute |
gptkbp:focusesOn | safety; value alignment |
gptkbp:goal | ensure AI acts in humanity's best interests |
https://www.w3.org/2000/01/rdf-schema#label | Friendly AI |
gptkbp:relatedTo | gptkb:artificial_intelligence; AI alignment; machine ethics |
gptkbp:bfsParent | gptkb:Eliezer_Yudkowsky |
gptkbp:bfsLayer | 5 |