| Property | Value |
|---|---|
| gptkbp:instance_of | gptkb:Research_Institute |
| gptkbp:aims_to | gptkb:AI_technology |
| gptkbp:collaborates_with | academic institutions |
| gptkbp:conducts | workshops on AI safety |
| gptkbp:contributes_to | AI safety literature |
| gptkbp:focus | artificial intelligence safety |
| gptkbp:founded | gptkb:2000 |
| gptkbp:founder | gptkb:Eliezer_Yudkowsky |
| gptkbp:goal | ensure that artificial general intelligence benefits humanity |
| gptkbp:has_a_presence_in | social media |
| gptkbp:has_a_research_agenda | yes |
| gptkbp:has_community | gptkb:researchers |
| gptkbp:has_hosted | conferences on AI safety |
| gptkbp:has_mission | promote safe AI development |
| gptkbp:has_partnerships_with | other AI research organizations |
| gptkbp:has_publications | yes; AI alignment research; AI risk; AI safety strategies |
| gptkbp:has_research_focus | value alignment |
| gptkbp:has_team | research scientists |
| gptkbp:hosts | workshops |
| https://www.w3.org/2000/01/rdf-schema#label | Machine Intelligence Research Institute |
| gptkbp:is_a_member_of | gptkb:Effective_Altruism_community |
| gptkbp:is_a_source_of | AI safety resources |
| gptkbp:is_a_thought_leader_in | AI governance |
| gptkbp:is_active_in | AI ethics discussions |
| gptkbp:is_funded_by | donations |
| gptkbp:is_involved_in | gptkb:public_outreach; policy discussions; AI forecasting; grant-making for AI safety projects |
| gptkbp:is_known_for | Eliezer Yudkowsky's writings; advocating for careful AI development |
| gptkbp:is_recognized_for | thought leadership in AI safety |
| gptkbp:location | gptkb:Berkeley,_California |
| gptkbp:member | yes |
| gptkbp:offers | fellowships |
| gptkbp:publishes | research papers |
| gptkbp:research_areas | gptkb:machine_learning; decision theory; AI alignment |
| gptkbp:type | gptkb:non-profit_organization |
| gptkbp:vision | beneficial AI |
| gptkbp:was_a_proponent_of | long-termism in AI development |
| gptkbp:website | intelligence.org |
| gptkbp:bfsParent | gptkb:Lifeboat_Network |
| gptkbp:bfsLayer | 5 |
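
For working with this entry programmatically, here is a minimal sketch of loading a few of these property/value pairs as RDF triples with Python's rdflib. The entry does not say what IRIs the `gptkb:`/`gptkbp:` prefixes expand to, so the namespace URIs below are placeholders, not the knowledge base's real identifiers.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Placeholder namespace IRIs: the entry above does not state what the
# gptkb:/gptkbp: prefixes expand to, so these URIs are illustrative only.
GPTKB = Namespace("https://gptkb.example/entity/")
GPTKBP = Namespace("https://gptkb.example/property/")

g = Graph()
miri = GPTKB["Machine_Intelligence_Research_Institute"]

# A few property/value pairs transcribed from the table above.
g.add((miri, GPTKBP["instance_of"], GPTKB["Research_Institute"]))
g.add((miri, GPTKBP["founder"], GPTKB["Eliezer_Yudkowsky"]))
g.add((miri, GPTKBP["website"], Literal("intelligence.org")))
g.add((miri, RDFS.label, Literal("Machine Intelligence Research Institute")))

# rdfs:label is the one standard-vocabulary predicate in the entry
# (spelled out above as a full w3.org URL), so it can be queried with
# rdflib's built-in RDFS namespace.
print(g.value(miri, RDFS.label))   # Machine Intelligence Research Institute

# List every property/value pair recorded for the entity.
for prop, value in g.predicate_objects(miri):
    print(prop, value)
```

The same pattern extends to the remaining rows: entity-valued cells (those written with a `gptkb:` prefix) become URI references, while free-text cells such as "academic institutions" become literals.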