Rational Animations
Support score: 217 · Tags: Organization, Field-building, AI safety
Truth-seeking, science, technology, the future, and more. With animations! We aim to help humanity safely transition to a world with artificial general intelligence (AGI) and artificial superintelligence (ASI). In particular, we want to make it less likely for AGI and ASI to cause human extincti…

Alignment Research Center: Theory Project
Support score: 217 · Tags: Organization, Eliciting latent knowledge (ELK), AI safety
The Theory team is led by Paul Christiano, furthering the research initially laid out in our report on Eliciting Latent Knowledge. At a high level, we’re trying to figure out how to train ML systems to answer questions by straightforwardly “translating” their beliefs into natural language rather tha…

Ought
Support score: 216 · Tags: Organization, AI safety
Our mission is to scale up good reasoning. We want ML to help as much with thinking and reflection as it does with tasks that have clear short-term outcomes. In the near future, ML has the potential to reshape any area of life where we have clear metrics a…

FAR AI – Incubate new technical AI safety research agendas
Support score: 216 · Tags: Organization, Field-building, AI safety
FAR accelerates neglected but high-potential AI safety research agendas. We support projects that are either too large to be led by academia or overlooked by the commercial sector as they are unprofitable. We solicit proposals from our in-house researchers and external collaborato…

Simon Institute for Longterm Governance
Support score: 216 · Tags: Organization, AI governance, Existential and suffering risks, AI safety
We founded the Simon Institute for Longterm Governance (SI) in April 2021 to enhance the multilateral system’s ability to anticipate and mitigate global catastrophic risks. Our current work is focused on promoting the responsible development of artificial intelligence (AI) through multilateral gover…

Center for AI Safety
Support score: 215 · Tags: Organization, Field-building, AI safety
CAIS exists to ensure the safe development and deployment of AI. AI risk has emerged as a global priority, ranking alongside pandemics and nuclear war. Despite its importance, AI safety remains remarkably neglected, outpaced by the rapid rate of AI development. Currently, society is ill-prepared to …

AI Impacts
Support score: 215 · Tags: Organization, Compute governance, AI safety, Forecasting
This project aims to improve our understanding of the likely impacts of human-level artificial intelligence. The intended audience includes researchers doing work related to artificial intelligence, philanthropists involved in funding research related to artificial intelligence, and policy-makers w…

Alignment Jam
Support score: 215 · Tags: Education, Field-building, AI safety
A weekend of intense, fun, and collaborative research on the most interesting questions of our day in machine learning safety. With speakers from top institutions and companies in artificial intelligence development, you will get a chance to work on impactful research. Alignment Jams happen acros…

Faunalytics
Support score: 214 · Tags: Organization, Farmed animal welfare, Wild animal welfare
Faunalytics' mission is to empower animal advocates with access to research, analysis, strategies, and messages that maximize their effectiveness to reduce animal suffering. We conduct essential research, maintain an online research library, and directly support advocates and organizations in their…

Rethink Priorities
Support score: 214 · Tags: Organization, Field-building, AI governance, Compute governance, AI safety, Global priorities research
Rethink Priorities has an AI Governance and Strategy (AIGS) team as well as an Existential Security Team (XST). The AIGS team works to reduce catastrophic risks related to AI by conducting research and strengthening the field of AI governance. We aim to bridge the technical and policy worlds, and w…

Future of Humanity Institute
Support score: 212 · Tags: Organization, AI governance, AI safety
The Future of Humanity Institute is a unique world-leading research centre that works on big picture questions for human civilisation and explores what can be done now to ensure a flourishing long-term future. Its multidisciplinary research team includes several of the world’s most brilliant and fam…

Legal Priorities Project
Support score: 212 · Tags: Organization, AI governance, Biosecurity, Global priorities research
The Legal Priorities Project is an independent, global research and field-building project founded by researchers at Harvard University. We conduct strategic legal research that mitigates existential risk and promotes the flourishing of future generations, and we b…

80,000 Hours
Support score: 211 · Tags: Communication, Education, Organization, Field-building, Existential and suffering risks, AI safety
80,000 Hours provides research and support to help students and recent graduates switch into careers that effectively tackle the world’s most pressing problems. We’re a Y Combinator–backed nonprofit funded by philanthropic donations, and everything we pro…

Center on Long-Term Risk
Support score: 208 · Tags: Organization, Multipolar scenarios, Existential and suffering risks, AI safety
Our goal is to address worst-case risks from the development and deployment of advanced AI systems. We are currently focused on conflict scenarios as well as technical and philosophical aspects of cooperation. We do interdisciplinary research, make and recommend grants, and build a community of pro…

Global Catastrophic Risk Institute
Support score: 207 · Tags: Organization, Nuclear security
GCRI’s mission is to develop the best ways to confront humanity’s gravest threats. The Global Catastrophic Risk Institute (GCRI) is a nonprofit, nonpartisan think tank. GCRI works on the risk of events that could significantly harm or even destroy human civilization at the global scale. As a thin…

Alliance to Feed the Earth in Disasters
Support score: 206 · Tags: Research, Organization, Existential and suffering risks, Civilizational resilience, Climate change, Nuclear security
The mission of the Alliance to Feed the Earth in Disasters is to help create resilience to global food shocks. We seek to identify various resilient food solutions and to help governments implement these solutions, to increase the chances that people have enough to eat in the event of a global cat…

Pour Demain
Support score: 204 · Tags: Organization, AI governance, Existential and suffering risks, AI safety, Biosecurity
Our vision is a positive and secure future. Pour Demain is a non-profit think tank that develops policy proposals on neglected issues positively impacting Switzerland and beyond. We are committed to fact-based policies, developing scientifically sound recommendations and facilitating the exchange be…

Center for Reducing Suffering
Support score: 203 · Tags: Organization, Existential and suffering risks, AI safety
Our mission is to reduce severe suffering, taking all sentient beings into account. We develop ethical views that give priority to suffering, and research how to best reduce suffering. Our top priority is to avoid worst-case futures. The Center for Reducing Suffering (CRS) is a research center that…

AI Safety Camp
Support score: 198 · Tags: Organization, Field-building, AI safety
AI Safety Camp connects you with a research lead to collaborate on a project – to see where your work could help ensure future AI is safe. [This project was created by the GiveWiki team. Please visit the website of the organization for more information on their work.]

Centre for Enabling EA Learning and Research (CEEALAR)
Support score: 196 · Tags: Education, Organization, Infrastructure, AI governance, Existential and suffering risks, AI safety
A centre for talented individuals to upskill, collaborate and work on AI safety, existential risk and other EA cause areas. CEEALAR is a research and career development hub that supports EAs to achieve high impact goals by providing: financial assistance (free/subsidised accommodation, vega…

Virtual AI Safety Unconference
Support score: 175 · Tags: Field-building, AI governance, AI safety
Virtual AI Safety Unconference (VAISU) 2024 is an online event designed to foster meaningful collaboration among both seasoned and emerging AI safety researchers. The event promotes discussions on questions such as: "How do we ensure that AI systems are safe and beneficial, both in the s…

Quantified Uncertainty Research Institute
Support score: 13 · Tags: Organization, Global priorities research, Institutional decision-making, Metascience
QURI researches systematic practices to specify and estimate the most important parameters for the most important or scalable decisions. Research areas include forecasting, epistemics, evaluations, ontology, and estimation. We emphasize technological solutions that can heavily scale in the next 5 to…

Apollo Research
Support score: 13 · Tags: Research, Organization, AI safety
Apollo Research is a fiscally sponsored project of Rethink Priorities. AI systems will soon be integrated into large parts of the economy and our personal lives. While this transformation may unlock substantial personal and societal benefits, ther…

Future of Life Institute
Support score: 7 · Tags: Organization, Biosecurity, Climate change, Nuclear security
Steering transformative technology towards benefiting life and away from extreme large-scale risks. We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ens…

The Future Society
Support score: 4 · Tags: Organization, AI governance
The Future Society (TFS) is an independent 501(c)(3) nonprofit organization based in the US and Europe. Our mission is to align artificial intelligence through better governance to ensure that AI systems are safe and adhere to fundamental human values. We develop, advocate for, and facilitate th…

Machine Intelligence Research Institute
Support score: 4 · Tags: Organization, Decision theory, Embedded agency, Inner alignment, Multipolar scenarios, Outer alignment, Agent foundations, AI boxing (containment), AI safety
The Machine Intelligence Research Institute is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable …

Global Priorities Institute
Support score: 3 · Tags: Organization, Global priorities research
The Global Priorities Institute exists to develop and promote rigorous, scientific approaches to the question of how appropriately motivated actors can do good more effectively, with a particular focus on areas that are not already well addressed by existing academic research. This line of thought …