Statements (25)
| Predicate | Object |
|---|---|
| gptkbp:instanceOf | gptkb:network_protocol, red teaming initiative |
| gptkbp:acceptsMembersFrom | global community |
| gptkbp:focusesOn | AI safety, AI alignment, AI misuse, AI security |
| gptkbp:goal | to improve the safety of OpenAI models, to provide feedback on AI model behavior, to test AI models for vulnerabilities |
| https://www.w3.org/2000/01/rdf-schema#label | OpenAI Red Teaming Network |
| gptkbp:launched | 2023 |
| gptkbp:membersInclude | gptkb:engineer, gptkb:researchers, policy experts, security professionals |
| gptkbp:operatedBy | gptkb:OpenAI |
| gptkbp:purpose | to identify and mitigate risks in AI systems |
| gptkbp:relatedTo | gptkb:OpenAI_DALL-E, gptkb:OpenAI_GPT-4, gptkb:OpenAI_GPT-3.5, OpenAI safety research |
| gptkbp:website | https://openai.com/red-teaming-network |
| gptkbp:bfsParent | gptkb:OpenAI,_Inc. |
| gptkbp:bfsLayer | 7 |
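The statements above are predicate–object pairs about the entity `OpenAI Red Teaming Network`. As a minimal sketch of how a few of them could be loaded as RDF triples with `rdflib`, see the code below; the `gptkb`/`gptkbp` namespace IRIs used here are assumptions, since the page does not state the base IRIs.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

# Assumed base IRIs for the gptkb/gptkbp prefixes (not given on this page).
GPTKB = Namespace("https://gptkb.org/entity/")
GPTKBP = Namespace("https://gptkb.org/property/")

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

# Assumed subject IRI for the entity described by this table.
subject = GPTKB["OpenAI_Red_Teaming_Network"]

# A handful of the statements from the table, expressed as triples.
g.add((subject, RDFS.label, Literal("OpenAI Red Teaming Network")))
g.add((subject, GPTKBP.operatedBy, GPTKB["OpenAI"]))
g.add((subject, GPTKBP.launched, Literal("2023")))
g.add((subject, GPTKBP.focusesOn, Literal("AI safety")))
g.add((subject, GPTKBP.website, URIRef("https://openai.com/red-teaming-network")))

# Serialize the graph back out as Turtle for inspection.
print(g.serialize(format="turtle"))
```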