Future of Humanity Institute
GPTKB entity
Statements (55)
Predicate | Object |
---|---|
gptkbp:instance_of | gptkb:Research_Institute |
gptkbp:affiliated_with | gptkb:University_of_Oxford |
gptkbp:aims_to | improve decision-making |
gptkbp:analyzes | technological trends |
gptkbp:collaborated_with | gptkb:Machine_Intelligence_Research_Institute, gptkb:Open_AI, gptkb:Centre_for_the_Study_of_Existential_Risk |
gptkbp:collaborates_with | other research organizations |
gptkbp:conducts | interdisciplinary research |
gptkbp:engages_in | gptkb:public_outreach |
gptkbp:focuses_on | biosecurity, global catastrophic risks, long-term future, artificial intelligence safety |
gptkbp:founded | gptkb:2005 |
gptkbp:founder | gptkb:Nick_Bostrom |
gptkbp:has_goal | ensuring a positive future for humanity, mitigating risks from advanced technologies, promoting beneficial uses of technology |
gptkbp:has_impact_on | AI governance, global security discussions, future technology policies |
gptkbp:has_influence_on | policy making |
gptkbp:hosts | workshops |
https://www.w3.org/2000/01/rdf-schema#label | Future of Humanity Institute |
gptkbp:is_active_in | community engagement, research funding, international collaborations |
gptkbp:is_involved_in | gptkb:Research_Institute, socioeconomic studies, policy advocacy, public education, ethical discussions, philosophical inquiries |
gptkbp:is_known_for | ethical implications of AI, advocacy for safe AI development, research on superintelligence |
gptkbp:is_part_of | gptkb:Future_of_Life_Institute |
gptkbp:is_recognized_by | policy makers, academic community, technology leaders |
gptkbp:is_supported_by | gptkb:charity, grants, academic institutions |
gptkbp:located_in | gptkb:Oxford,_England |
gptkbp:offers | fellowships |
gptkbp:provides | advisory services |
gptkbp:publishes | gptkb:books, gptkb:report, research papers, articles |
gptkbp:receives_funding_from | donations |
gptkbp:research | existential risks |
gptkbp:bfsParent | gptkb:Lifeboat_Network |
gptkbp:bfsLayer | 4 |