Statements (21)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:organization, gptkb:AI_safety_research_organization |
| gptkbp:abbreviation | gptkb:ARC |
| gptkbp:collaboratesWith | gptkb:OpenAI, gptkb:Anthropic, gptkb:Redwood_Research |
| gptkbp:focusesOn | AI safety, AI alignment |
| gptkbp:foundedBy | gptkb:Paul_Christiano |
| gptkbp:foundedYear | 2021 |
| gptkbp:location | gptkb:United_States |
| gptkbp:mission | Ensure that advanced AI systems are aligned with human interests |
| gptkbp:notableMember | gptkb:Paul_Christiano, gptkb:Mark_Xu, gptkb:Beth_Barnes |
| gptkbp:notableWork | gptkb:ELK_(Eliciting_Latent_Knowledge), AI evaluation research |
| gptkbp:website | https://alignment.org/ |
| gptkbp:bfsParent | gptkb:Paul_Christiano |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | Alignment Research Center |
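Since these statements are RDF-style triples, they can be loaded into a standard triple store for programmatic querying. Below is a minimal sketch using Python's rdflib; the namespace IRIs for the `gptkb:`/`gptkbp:` prefixes and the subject IRI `gptkb:Alignment_Research_Center` are assumptions for illustration, as the table above does not define them.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Assumed namespace IRIs -- the table does not specify the real ones.
GPTKB = Namespace("http://example.org/gptkb/")
GPTKBP = Namespace("http://example.org/gptkbp/")

g = Graph()
# Assumed subject IRI, derived from the rdfs:label above.
arc = GPTKB["Alignment_Research_Center"]

# A few of the statements from the table, as (subject, predicate, object) triples.
g.add((arc, GPTKBP["foundedBy"], GPTKB["Paul_Christiano"]))
g.add((arc, GPTKBP["foundedYear"], Literal(2021)))
g.add((arc, GPTKBP["collaboratesWith"], GPTKB["OpenAI"]))
g.add((arc, RDFS.label, Literal("Alignment Research Center")))

# List every statement about the entity.
for s, p, o in g.triples((arc, None, None)):
    print(p, o)
```

The same graph could also be queried with SPARQL via `g.query(...)`; the plain triple-pattern loop is shown here because it mirrors the predicate/object layout of the table directly.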