gptkbp:instanceOf | gptkb:research_institute
gptkbp:abbreviation | gptkb:MIRI
gptkbp:acceptsDonations | yes
gptkbp:collaboratesWith | gptkb:DeepMind, gptkb:OpenAI, gptkb:Future_of_Humanity_Institute, gptkb:Centre_for_the_Study_of_Existential_Risk
gptkbp:focus | AI alignment, artificial intelligence safety
gptkbp:formerName | gptkb:Singularity_Institute_for_Artificial_Intelligence
gptkbp:founded | 2000
gptkbp:founder | gptkb:Eliezer_Yudkowsky
gptkbp:fullName | gptkb:Machine_Intelligence_Research_Institute
rdfs:label (https://www.w3.org/2000/01/rdf-schema#label) | MIRI
gptkbp:influenced | gptkb:Effective_Altruism_movement, AI safety community
gptkbp:influencedBy | gptkb:Ray_Kurzweil, gptkb:Nick_Bostrom, gptkb:Vernor_Vinge
gptkbp:language | English
gptkbp:location | gptkb:Berkeley,_California,_United_States
gptkbp:mission | ensure that the creation of smarter-than-human intelligence has a positive impact
gptkbp:notableMember | gptkb:Rob_Bensinger, gptkb:Eliezer_Yudkowsky, gptkb:Nate_Soares
gptkbp:publishedIn | gptkb:Agent_Foundations, gptkb:Embedded_Agency, gptkb:Logical_Induction, Technical Agenda
gptkbp:regionServed | global
gptkbp:researchArea | decision theory, AI risk, logical uncertainty, value alignment
gptkbp:taxStatus | gptkb:nonprofit_organization
gptkbp:type | gptkb:nonprofit_organization
gptkbp:website | https://intelligence.org/
gptkbp:bfsParent | gptkb:James_Webb_Space_Telescope
gptkbp:bfsLayer | 4
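Read as RDF-style triples about the gptkb:MIRI entity, the listing above can be rebuilt and re-serialized with a small rdflib sketch. This is a minimal illustration only: the namespace URIs https://gptkb.org/entity/ and https://gptkb.org/property/ are assumptions (the source shows only the gptkb:/gptkbp: prefixes), and only a handful of the property/value pairs are included.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Assumed namespace URIs; the listing gives only the prefixed names.
GPTKB = Namespace("https://gptkb.org/entity/")
GPTKBP = Namespace("https://gptkb.org/property/")

g = Graph()
g.bind("gptkb", GPTKB)
g.bind("gptkbp", GPTKBP)

miri = GPTKB["MIRI"]

# A few of the property/value pairs from the listing above.
g.add((miri, GPTKBP["instanceOf"], GPTKB["research_institute"]))
g.add((miri, GPTKBP["fullName"], GPTKB["Machine_Intelligence_Research_Institute"]))
g.add((miri, GPTKBP["founded"], Literal(2000)))
g.add((miri, GPTKBP["founder"], GPTKB["Eliezer_Yudkowsky"]))
g.add((miri, RDFS.label, Literal("MIRI", lang="en")))
g.add((miri, GPTKBP["website"], Literal("https://intelligence.org/")))
for collaborator in ("DeepMind", "OpenAI", "Future_of_Humanity_Institute",
                     "Centre_for_the_Study_of_Existential_Risk"):
    g.add((miri, GPTKBP["collaboratesWith"], GPTKB[collaborator]))

# Serialize back to Turtle to check the reconstruction.
print(g.serialize(format="turtle"))
```

The same graph can then be queried with SPARQL or iterated over triple by triple to recover the property/value rows shown above.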