Tags: Organization, AI governance (Support score: 0)
AI Objectives Institute
The AI Objectives Institute is a nonprofit research lab of leading builders and researchers. We create tools and projects to ensure that AI and economic systems of the future are built with genuine human objectives. Our aim is to avoid large-scale economic, institutional and environmental catastroph…

Tags: Factored cognition, AI governance (Support score: 0)
A Paper on Collaborative Design and Formal Verification of Factored Cognition Schemes (CoT, Tree of Thought, selection inference, prompt chaining etc)
Preprint Title: Towards Formally Describing Program Traces of Language Model Calls with Causal Influence Diagrams: A Sketch. Google Doc (Last Updated 15 June 2023). **Extended Abstract…

Tags: Organization, AI governance, Other cause area (Support score: 0)
Balsa Research
We are in the process of evaluating the most impactful area for research and policy development. On this page, you can see examples of policies that make our shortlist.

Tags: Research, Organization, AI safety, Longevity (Support score: 0)
Foresight Institute
Foresight Institute is a research organization and non-profit that supports the beneficial development of high-impact technologies. Since our founding in 1986 on a vision of guiding powerful technologies, we have continued to evolve into a many-armed organization that focus…

Tags: Research, Organization, AI safety (Support score: 0)
Basis Research Institute
Basis is a nonprofit applied research organization with two mutually reinforcing goals. The first is to understand and build intelligence. This means to establish the mathematical principles of what it means to reason, to learn, to make decisions, to understand, and to expl…

Tags: Organization, AI safety (Support score: 0)
Catalyze Impact
Working on a better future with AI by incubating AI Safety research organizations. Catalyze is a non-profit organization which focuses on providing support to individuals in setting up AI Safety research organizations. Through our work, we try to …

Tags: Education, Organization, AI safety (Support score: 0)
ML Alignment & Theory Scholars (MATS)
The ML Alignment & Theory Scholars (MATS) Program is an educational seminar and independent research program that aims to provide talented scholars with talks, workshops, and research mentorship in the field of AI alignment and connect them with the Berkeley AI safety…

Tags: Education, Organization, AI safety (Support score: 0)
aisafety.info
New to AI safety? Start here. In recent years, we’ve seen AI systems grow increasingly capable. They may soon attain human and even strongly superhuman skill in a wide range of domains. Such systems could bring great benefits, but if their goals don’t line up with human values, they could also caus…

Tags: Organization, AI safety (Support score: 0)
Oxford China Policy Lab
Oxford China Policy Lab is an interdisciplinary hub of China-focused research at Oxford. We produce policy-relevant research to mitigate long-term risks resulting from US-China great power competition. Our work is divided into four main comp…

Tags: Organization, AI safety (Support score: 0)
Palisade Research
AI capabilities are improving rapidly. Palisade Research studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever. We research dangerous AI capabilities to better understand misuse risks…

Tags: Organization, AI safety (Support score: 0)
Probably Good
Want your career to have meaningful impact? Probably Good aggregates the best evidence, analysis, and expert opinions to help you do more good with your career. [Click to read more on global catastrophic risks.](https://probablygood.org/cause-areas/global-catastrophic-r…

Tags: Individual, Restrain AI development, AI safety (Support score: 0)
Pause AI Wholly
I [Holly Elmore] want to organize public support for a pause on the development of artificial general intelligence (AGI). The companies building these systems do not understand how they work and cannot control them. Development needs to be paused until it can be done safely. The US public is support…

Tags: Research, Organization, Inner alignment, Outer alignment, Value learning, AI governance, Existential and suffering risks, AI safety, Farmed animal welfare, Moral enhancement, Wild animal welfare (Support score: 0)
Creating AI systems free of speciesist bias to contribute to broader AI safety and alignment efforts.
Open Paws is dedicated to creating AI systems free of speciesist bias, contributing to broader AI safety and alignment efforts. Our mission is to ensure that future AI systems respect and protect all sentient beings, reducing the likelihood that superintelligent AI will adopt harmful biases, such as…

Tags: Field-building, AI governance, AI safety, Biosecurity, Institutional decision-making, International development (Support score: 0)
Good Ancestors Policy
At Good Ancestors Policy, we advocate for Australian-specific policies aimed at solving this century’s most challenging problems. We develop policy, engage directly with senior decision-makers, and support communities and individuals across Australia to raise their voices in a way that is coordinate…

Tags: Communication, Education, Organization, Infrastructure, AI governance, AI safety, Democratic decision-making (Support score: 0)
AIGS Canada: Enable Canada to lead on AI governance & safety
AI Governance & Safety Canada is a cross-partisan not-for-profit and community of people across the country. We started with the question "What can we do in Canada, and from Canada, to ensure positive AI outcomes?" Our primary focus is education, advocacy and community building to shape domestic and…

Tags: Communication, Organization, Other type of work, AI governance, AI services (CAIS), Existential and suffering risks, AI safety, Biosecurity, Climate change, Democratic decision-making, Institutional decision-making (Support score: 0)
Future Matters, an AI safety strategy consultancy
Future Matters is a 3-year-old nonprofit strategy consultancy and think tank. We help clients in AI safety and other cause areas who are trying to create change through policy, politics, coalitions, and/or social movements. Future Matters offers a wide range of services, such as policy prioritization…

Tags: Research, Multipolar scenarios, AI-assisted alignment, Debate (AI safety technique), Transparency/interpretability, Value learning, Evaluations, AI safety, Forecasting (Support score: 0)
Act I: Exploring emergent behavior from multi-AI, multi-human interaction
Explore and predict potential risks and opportunities arising from a future that involves many independently controlled AI systems. Act I treats researchers and AI agents as coequal members. This is important because most previous evaluations and investigations give research…

Tags: Research, RLHF, Value learning, AI governance, AI safety, Democratic decision-making, Information security (Support score: 0)
Tournesol - Secure Collaborative Governance of Large AI Models
Tournesol is a research project focused on developing collaborative algorithmic governance for large-scale AI systems. The research project is sustained by an association founded in 2021 and builds upon multiple years…

Tags: Organization, AI governance, Compute governance, Existential and suffering risks, AI safety (Support score: 0)
Institute for AI Policy & Strategy
The Institute for AI Policy & Strategy (IAPS) works to reduce risks related to the development & deployment of frontier AI systems. We do this by: producing and sharing research that grounds concrete recommendations in strategic considerations; strengthening coordination and talent pipelines acr…