Advancing the Fundamental Science of Trustworthy AI
The science of trustworthy AI is nascent. Today, many demonstrations of unsafe behavior are isolated, and mitigations do not generalize reliably across model families, deployment contexts, or capability regimes. Therefore, we aim to support basic technical research that improves our ability to understand, predict, and control risks from frontier AI systems while simultaneously enabling their trustworthy deployment.
Opportunities for Funding
- 2026 Science of Trustworthy AI RFP (Apply)
The Challenge
AI technology is becoming more consequential every day, and the potential harm from safety failures is growing with it.
- We need mature, decision-relevant evaluations.
Today’s evaluations often fail exactly where we need them most: under distribution shift, long-horizon interaction, tool use, and optimization pressure. Many tests are brittle, highly correlated, or easy to “train to”, and stylized scenarios can be misleading if they do not reflect deployment-like contexts. We need a rigorous science of evaluation with construct validity, predictive validity, and clear evidence standards for when results justify real-world decisions.
- The research capacity needed for trustworthy AI lags the pace of deployment.
Frontier systems are being deployed rapidly, but the infrastructure for trustworthy assessment and oversight is not keeping pace—especially for large, ambitious projects that require substantial compute, multidisciplinary expertise, and time. Accelerating progress will require sustained support for high-ambition, field-shaping research rather than incremental work.
- Academics are underleveraged in trustworthy AI research.
Currently, safety research for the largest AI models is conducted primarily by leading AI labs. Yet despite vast private capital flowing into AI development, commercial incentives for foundational, pre-product safety science are often weaker than incentives for capability and product improvements, especially when benefits are diffuse or long-horizon. Foundational advances in the science of trustworthy AI therefore function as a global public good, motivating targeted philanthropic support for academic and nonprofit researchers.
Program Goals
- Deepen our understanding of the safety properties of AI systems
- Build a rigorous science of evaluation with construct and predictive validity
- Advance trustworthy AI approaches resistant to obsolescence from fast-evolving technology
- Support a global community of researchers advancing the science of trustworthy AI
Research Agenda
Our research agenda organizes priorities around three connected aims:
- Characterize and forecast misalignment in frontier AI systems
Understand why frontier training-and-deployment safety stacks still produce models whose learned effective goals fail under distribution shift, pressure, or extended interaction.
- Develop generalizable measurement and intervention
Advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).
- Oversee AI systems with superhuman capabilities and address multi-agent risks
Develop oversight and control methods for settings where direct human evaluation of correctness or safety isn’t feasible, and address risks that emerge from interacting AI systems.
Featured Projects
- Dr. Sanjeev Arora, Princeton University
- Dr. Eugene Bagdasarian and Dr. Shlomo Zilberstein, University of Massachusetts Amherst
- Dr. Yoshua Bengio, Mila - Quebec Artificial Intelligence Institute
- Dr. Nicolas Flammarion, EPFL (Swiss Federal Institute of Technology Lausanne)
- Dr. Adam Gleave and Kellin Pelrine, FAR.AI, and Dr. Thomas Costello, American University and MIT
- Dr. Tatsu Hashimoto, Stanford University
- Dr. Matthias Hein, University of Tübingen, and Dr. Jonas Geiping, ELLIS Institute Tübingen
- Dr. Zhijing Jin, University of Toronto, and Dr. Mrinmaya Sachan, ETH Zürich
- Dr. Daniel Kang, University of Illinois Urbana-Champaign
- Dr. Mykel Kochenderfer, Stanford University
- Dr. Zico Kolter, Carnegie Mellon University
- Dr. Sanmi Koyejo, Stanford University
- Dr. David Krueger, University of Cambridge
- Dr. Anna Leshinskaya, University of California-Irvine
- Dr. Bo Li, University of Illinois Urbana-Champaign
- Dr. Sharon Li, University of Wisconsin-Madison
- Dr. Evan Miyazono and Alexandre Rademaker, Atlas Computing
- Dr. Karthik Narasimhan, Princeton University
- Dr. Arvind Narayanan, Princeton University
- Dr. Maarten Sap and Dr. Graham Neubig, Carnegie Mellon University
- Dr. Dawn Song, University of California-Berkeley
- Dr. Huan Sun, Dr. Yu Su, and Dr. Zhiqiang Lin, The Ohio State University
- Dr. Florian Tramèr, ETH Zürich
- Dr. Ziang Xiao, Johns Hopkins University, and Dr. Susu Zhang, University of Illinois Urbana-Champaign