Machine Intelligence Research Institute

GPTKB entity

Statements (42)
Predicate Object
gptkbp:instanceOf gptkb:nonprofit_organization
gptkbp:abbreviation gptkb:MIRI
gptkbp:acceptsDonations yes
gptkbp:affiliation gptkb:Effective_Altruism_movement
gptkbp:director gptkb:Nate_Soares
gptkbp:focus AI alignment
    artificial intelligence safety
    existential risk from AI
gptkbp:formerName gptkb:Singularity_Institute_for_Artificial_Intelligence
gptkbp:founded 2000
gptkbp:founder gptkb:Eliezer_Yudkowsky
gptkbp:headquarters gptkb:Berkeley,_California
https://www.w3.org/2000/01/rdf-schema#label Machine Intelligence Research Institute
gptkbp:location gptkb:Berkeley,_California
gptkbp:mission Ensure that the creation of smarter-than-human intelligence has a positive impact
gptkbp:notableContributor gptkb:Paul_Christiano
    gptkb:Benya_Fallenstein
    gptkb:Eliezer_Yudkowsky
    gptkb:Nate_Soares
gptkbp:notableEvent hosted Singularity Summit
    rebranded from SIAI to MIRI in 2013
gptkbp:notablePublication gptkb:AI_Alignment:_Why_It’s_Hard,_and_Where_to_Start
    gptkb:Intelligence_Explosion_Microeconomics
    gptkb:The_Basic_AI_Drives
gptkbp:regionServed global
gptkbp:relatedOrganization gptkb:OpenAI
    gptkb:Future_of_Humanity_Institute
    gptkb:Centre_for_the_Study_of_Existential_Risk
    gptkb:Future_of_Life_Institute
gptkbp:researchArea decision theory
    AI forecasting
    corrigibility
    logical uncertainty
    value alignment
gptkbp:taxStatus gptkb:nonprofit_organization
gptkbp:website https://intelligence.org/
gptkbp:bfsParent gptkb:Open_Philanthropy
    gptkb:Thiel_Foundation
    gptkb:MIRI
    gptkb:LessWrong_community
    gptkb:Eliezer_Yudkowsky
gptkbp:bfsLayer 5
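The listing above flattens multi-valued predicates: a line that starts with a `gptkbp:` prefix (or a full IRI, as in the `rdf-schema#label` row) opens a new predicate, and any following unprefixed line is an additional object of that same predicate. A minimal sketch of reading such a listing back into subject–predicate–object triples, assuming a fixed subject IRI (the entity name and the parsing convention here are inferred from this page, not from any official GPTKB tooling):

```python
# Hypothetical parser for a GPTKB-style flat statement listing.
# Assumption: the subject IRI below is illustrative, not confirmed by the page.
SUBJECT = "gptkb:Machine_Intelligence_Research_Institute"

def parse_statements(lines):
    """Turn a flat predicate/object listing into (subject, predicate, object) triples.

    A line beginning with 'gptkbp:' or an absolute IRI starts a new predicate;
    any other non-empty line continues the previous predicate with one more
    object (this is how multi-valued predicates are flattened in the source).
    """
    triples = []
    predicate = None
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line.startswith("gptkbp:") or line.startswith("http"):
            predicate, _, obj = line.partition(" ")
        else:
            obj = line  # continuation line: reuse the previous predicate
        if predicate and obj:
            triples.append((SUBJECT, predicate, obj))
    return triples

# A few rows from the listing above, as extracted:
statements = [
    "gptkbp:instanceOf gptkb:nonprofit_organization",
    "gptkbp:focus AI alignment",
    "artificial intelligence safety",
    "existential risk from AI",
]
for triple in parse_statements(statements):
    print(triple)
```

On this sample the `focus` predicate yields three triples, matching the three-line group in the listing; applied to the full page it should recover all 42 statements.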