Institute for AI Policy & Strategy
The Institute for AI Policy & Strategy (IAPS) works to reduce risks related to the development & deployment of frontier AI systems. We do this by:
- producing and sharing research that grounds concrete recommendations in strategic considerations, and
- strengthening coordination and talent pipelines across the AI governance field.
We do both deep research driven by our research agendas and rapid-turnaround outputs and briefings driven by decision-makers’ immediate needs.
We do both public and nonpublic work. Across all our work, intellectual independence is a core value; we are nonpartisan and do not accept funding from for-profit organizations. Read more about our funding and intellectual independence policy here.
We aim to bridge the technical and policy worlds. We have expertise on frontier models and the hardware underpinning them, and staff in San Francisco, DC, London, Oxford, and elsewhere.
OUR FOCUS AREAS
AI REGULATIONS
This team is currently focused on US AI standards, regulations, and legislation. We aim to answer questions such as how an agency regulating frontier AI should be set up and operated, and how AI regulations could be updated rapidly while remaining well-informed. Our methods include drawing lessons from regulation in other sectors.
COMPUTE GOVERNANCE
This team works to establish a firmer empirical and theoretical grounding for the fledgling field of compute governance, inform ongoing policy processes and debates, and develop more concrete technical and policy proposals. Currently we are focused on understanding the impact of existing compute-related US export controls, and researching what changes to the controls or their enforcement may be feasible and beneficial.
INTERNATIONAL GOVERNANCE & CHINA
This team works to improve decisions at the intersection of AI governance and international governance or China. We are interested in international governance regimes for frontier AI, China-West relations concerning AI, and relevant technical and policy developments within China.
LAB GOVERNANCE
This team works to identify concrete interventions that could improve the safety, security, and governance of frontier AI labs’ systems and that could be implemented through voluntary lab commitments, standards, or regulation. Currently we are focused on pre-deployment risk assessments, post-deployment monitoring and incident response, and corporate governance structures.
We also do some work outside the above four areas.