Future of Humanity Institute

GiveWiki
Accepting donations
$20,862,717 · 16 donations · Support score: 212
Tags: Organization, AI governance, AI safety

The Future of Humanity Institute is a world-leading research centre that works on big-picture questions for human civilisation and explores what can be done now to ensure a flourishing long-term future. Its multidisciplinary research team includes several of the world’s most prominent minds working in this area. Its work spans the disciplines of mathematics, philosophy, computer science, engineering, ethics, economics, and political science.

FHI has originated, or played a pioneering role in developing, many of the key concepts that shape current thinking about humanity’s future. These include the simulation argument, existential risk, nanotechnology, information hazards, strategy and analysis related to machine superintelligence, astronomical waste, the ethics of digital minds, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, prediction markets, infinitarian paralysis, brain emulation scenarios, human enhancement, the unilateralist’s curse, the parliamentary model of decision-making under normative uncertainty, the vulnerable world hypothesis, and many others.

FHI has individual researchers working across many topics related to humanity’s future. It also currently hosts the following research groups:

  • Macrostrategy: How long-term outcomes for humanity are connected to present-day actions; global priorities; crucial considerations that may reorient our civilizational scheme of values or objectives.
  • Governance of Artificial Intelligence: How humanity can best navigate the transition to advanced AI systems; how geopolitics, governance structures, and strategic trends shape the development or deployment of machine intelligence.
  • AI Safety: Techniques for building artificially intelligent systems that are scalably safe or aligned with human values (in close collaboration with labs such as DeepMind, OpenAI, and CHAI).
  • Biosecurity: How to make the world more secure against (both natural and human-made) catastrophic biological risks; how to ensure that capabilities created by advances in synthetic biology are handled well.
  • Digital Minds: Philosophy of mind and AI ethics, focusing on which computations are conscious, what kinds of moral status digital minds may have, and what political systems would enable a harmonious coexistence of biological and nonbiological minds.

Other areas in which the Institute is active and interested in expanding include (but are not limited to) the following:

  • Philosophical Foundations: When and how might deep uncertainties related to anthropics, infinite ethics, decision theory, computationalism, cluelessness, and value theory affect decisions we might make today? Can we resolve any of these uncertainties?
  • Existential Risk: Identification and characterisation of risks to humanity; improving conceptual tools for understanding and analysing these risks.
  • Grand Futures: Questions related to the Fermi paradox, cosmological modelling of the opportunities available to technologically mature civilizations, implications of multiverse theories, the ultimate limits to technological advancement, counterfactual histories or evolutionary trajectories, new physics.
  • Cooperative Principles and Institutions: Theoretical investigations into structures that facilitate future cooperation at different scales, and the search for levers to increase the chances of cooperative equilibria, e.g. among rival AI developers, between humans and digital minds, or among technologically mature civilizations.
  • Technology and Wisdom: What constitutes wisdom in choosing which new technological paths to pursue? Are there structures that enable society to act with greater wisdom, both in choosing what to develop and in deciding when or how to deploy new capabilities?
  • Sociotechnical Information Systems: Questions concerning the role of surveillance in preventing existential risks and how to design global information systems (e.g. recommender systems, social networks, peer review, discussion norms, prediction markets, futarchy) to mitigate epistemic dysfunctions.
  • Reducing Risk from Malevolent Humans: Defining and operationalizing personality traits of potential concern (e.g. sadism, psychopathy) or promise (e.g. compassion, wisdom, integrity), especially ones relevant to existential risk; evaluating possible intervention strategies (e.g. cultural and biological mechanisms for minimising malevolence; personnel screening tools; shaping incentives in key situations).
  • Concepts, Capabilities, and Trends in AI: Understanding the scaling properties and limitations of current AI systems; clarifying concepts used to analyze machine learning models and RL agents; assessing the latest breakthroughs and their potential contribution towards AGI; projecting trends in hardware cost and performance.
  • Space Law: What would be an ideal legal system for long-term space development, and what opportunities exist for adjusting existing treaties and norms?
  • Nanotechnology: Analysing roadmaps to atomically precise manufacturing and related technologies, and the potential impacts and strategic implications of progress in these areas.

The Institute is led by its founding director, Professor Nick Bostrom. He is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller that helped spark a global conversation about artificial intelligence. His pioneering work has been highly influential in several of the areas in which the Institute is now active.

[This project was created by the GiveWiki team. Please visit the organization’s website for more information on its work.]
