Statements (23)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:AI_alignment_technique |
| gptkbp:aimsTo | align advanced AI systems with human values |
| gptkbp:category | AI research<br>AI alignment |
| gptkbp:contrastsWith | gptkb:Debate_(AI_alignment)<br>Imitation learning |
| gptkbp:describedBy | gptkb:OpenAI_blog<br>gptkb:Alignment_Forum |
| gptkbp:developedBy | gptkb:Paul_Christiano |
| gptkbp:goal | create scalable supervision for AI |
| gptkbp:involves | breaking down complex questions into simpler sub-questions<br>recursive amplification process |
| gptkbp:proposedBy | 2018 |
| gptkbp:publishedBy | gptkb:OpenAI |
| gptkbp:relatedTo | Machine learning<br>AI safety<br>Scalable oversight |
| gptkbp:uses | AI assistants<br>decomposition of tasks<br>human feedback |
| gptkbp:bfsParent | gptkb:Paul_Christiano |
| gptkbp:bfsLayer | 6 |
| https://www.w3.org/2000/01/rdf-schema#label | Iterated Amplification |
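The gptkbp:involves and gptkbp:uses statements above describe the core mechanism: complex questions are recursively broken into simpler sub-questions, each sub-question is answered with help from AI assistants and human feedback, and the partial answers are recombined. The Python sketch below illustrates that recursion in toy form only; the function names `decompose`, `answer_directly`, and `combine` are hypothetical placeholders, not part of any published Iterated Amplification implementation.

```python
# Toy sketch of the recursive amplification loop described in the statements above.
# All helper names are hypothetical placeholders; in practice decomposition and
# direct answering would be done by a human overseer or a trained model.

def decompose(question: str) -> list[str]:
    """Placeholder: split a complex question into simpler sub-questions."""
    return []  # an empty list means the question is simple enough to answer directly

def answer_directly(question: str) -> str:
    """Placeholder: answer a question that needs no further decomposition."""
    return f"direct answer to: {question}"

def combine(question: str, sub_answers: list[str]) -> str:
    """Placeholder: aggregate sub-answers into an answer to the original question."""
    return f"answer to '{question}' built from {len(sub_answers)} sub-answers"

def amplify(question: str, depth: int = 0, max_depth: int = 3) -> str:
    """Recursive amplification: decompose the question, answer each
    sub-question (recursively), then combine the results."""
    sub_questions = decompose(question) if depth < max_depth else []
    if not sub_questions:
        return answer_directly(question)
    sub_answers = [amplify(q, depth + 1, max_depth) for q in sub_questions]
    return combine(question, sub_answers)

if __name__ == "__main__":
    print(amplify("How should an advanced AI system weigh competing human preferences?"))
```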