Singularity Institute for Artificial Intelligence

GPTKB entity

Statements (50)
Predicate Object
gptkbp:instance_of gptkb:non-profit_organization
gptkbp:activities gptkb:education
    gptkb:Research_Institute
    advocacy
gptkbp:affiliation gptkb:Machine_Intelligence_Research_Institute
gptkbp:collaborations gptkb:Future_of_Humanity_Institute
    gptkb:Open_AI
    gptkb:Future_of_Life_Institute
gptkbp:community_involvement gptkb:podcasts
    workshops
    blog posts
    online courses
    social media presence
    videos
    publications
    public lectures
gptkbp:events gptkb:AI_Safety_Conference
    gptkb:Singularity_Summit
gptkbp:focus artificial intelligence safety
gptkbp:founded gptkb:2000
gptkbp:founder gptkb:Eliezer_Yudkowsky
gptkbp:goal promote safe AI development
https://www.w3.org/2000/01/rdf-schema#label Singularity Institute for Artificial Intelligence
gptkbp:impact academics
    general public
    policy makers
    AI research community
    AI practitioners
gptkbp:location gptkb:California
gptkbp:mission ensure beneficial AI
gptkbp:notable_person gptkb:Eliezer_Yudkowsky
    gptkb:Nick_Bostrom
    gptkb:Stuart_Russell
gptkbp:publishes gptkb:Coherent_Extrapolated_Volition
    The Sequences
    Artificial Intelligence as a Positive and Negative Factor in Global Risk
gptkbp:receives_funding_from grants
    donations
gptkbp:research_areas gptkb:software_framework
    decision theory
    AI alignment
    philosophy of AI
    existential risk
    ethics of AI
    AGI safety
    policy implications of AI
gptkbp:type gptkb:Research_Institute
gptkbp:website http://intelligence.org
gptkbp:bfsParent gptkb:Lifeboat_Network
gptkbp:bfsLayer 4
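
Read as RDF, each row in the statements table above is a predicate-object pair about the same subject entity, with the predicate shown once and any further objects indented beneath it. Below is a minimal sketch of how a few of these statements could be modelled as triples with Python's rdflib; the gptkb and gptkbp namespace URIs are placeholder assumptions, not GPTKB's published namespaces.

```python
# Minimal sketch: a few of the statements above as RDF triples using rdflib.
# The namespace URIs below are placeholders, not the official GPTKB namespaces.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

GPTKB = Namespace("https://example.org/gptkb/entity/")      # assumed URI
GPTKBP = Namespace("https://example.org/gptkb/predicate/")  # assumed URI

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

entity = GPTKB["Singularity_Institute_for_Artificial_Intelligence"]

# rdfs:label corresponds to the https://www.w3.org/2000/01/rdf-schema#label row
g.add((entity, RDFS.label,
       Literal("Singularity Institute for Artificial Intelligence")))
g.add((entity, GPTKBP["instance_of"], GPTKB["non-profit_organization"]))
g.add((entity, GPTKBP["founder"], GPTKB["Eliezer_Yudkowsky"]))
g.add((entity, GPTKBP["focus"], Literal("artificial intelligence safety")))

# Serialize the small graph to Turtle for inspection
print(g.serialize(format="turtle"))
```

Running this prints a short Turtle document with one subject and four statements, mirroring the table's single predicate column and object column.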