Science of Trustworthy AI

Advancing the Fundamental Science of Trustworthy AI

The science of trustworthy AI is nascent. Today, many demonstrations of unsafe behavior are isolated, and mitigations do not generalize reliably across model families, deployment contexts, or capability regimes. Therefore, we aim to support basic technical research that improves our ability to understand, predict, and control risks from frontier AI systems while simultaneously enabling their trustworthy deployment.

Opportunities for Funding

  • 2026 Science of Trustworthy AI RFP

The Challenge

Every day, AI technology becomes more consequential. As a result, the potential harm from safety failures grows as well.

  • We need mature decision-relevant evaluations.

    Today’s evaluations often fail exactly where we need them most: under distribution shift, long-horizon interaction, tool use, and optimization pressure. Many tests are brittle, highly correlated, or easy to “train to”, and stylized scenarios can be misleading if they do not reflect deployment-like contexts. We need a rigorous science of evaluation with construct validity, predictive validity, and clear evidence standards for when results justify real-world decisions; the sketch after this list illustrates two such validity checks in miniature.

  • The research capacity needed for trustworthy AI lags the pace of deployment.

    Frontier systems are being deployed rapidly, but the infrastructure for trustworthy assessment and oversight is not keeping pace—especially for large, ambitious projects that require substantial compute, multidisciplinary expertise, and time. Accelerating progress will require sustained support for high-ambition, field-shaping research rather than incremental work.

  • Academics are underleveraged in trustworthy AI research.

    Today, safety research on the largest AI models is conducted primarily by leading AI labs. Yet despite the vast private capital flowing into AI development, commercial incentives for foundational, pre-product safety science are often weaker than those for capability and product improvements, especially when the benefits are diffuse or long-horizon. Foundational advances in the science of trustworthy AI therefore function as a global public good, motivating targeted philanthropic support for academic and nonprofit researchers.
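
To make the validity concerns above concrete, here is a minimal sketch of two checks an evaluation scientist might run: rank correlation between supposedly distinct benchmarks as a redundancy red flag, and rank correlation between a benchmark and a downstream deployment outcome as a crude predictive-validity signal. All numbers, benchmarks, and the “incident rate” are synthetic stand-ins invented for illustration, not data from any real evaluation.

```python
# Illustrative sketch only: benchmark scores and the "incident rate" below
# are synthetic stand-ins, not real evaluation data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models = 12

# Hypothetical per-model scores on two nominally distinct safety benchmarks.
# Benchmark B is generated as a noisy copy of A to mimic redundant tests.
bench_a = rng.uniform(0.4, 0.9, n_models)
bench_b = np.clip(bench_a + rng.normal(0, 0.03, n_models), 0.0, 1.0)

# Hypothetical deployment outcome the benchmarks are supposed to predict
# (e.g., rate of unsafe behavior observed in realistic use).
incident_rate = np.clip(0.5 - 0.4 * bench_a + rng.normal(0, 0.1, n_models),
                        0.0, 1.0)

# Construct-validity red flag: near-perfect rank correlation between
# "distinct" benchmarks suggests they measure the same underlying thing.
redundancy, _ = spearmanr(bench_a, bench_b)

# Crude predictive-validity check: does the benchmark rank models the way
# the deployment outcome does? (Negated so higher rho means more valid.)
validity, _ = spearmanr(bench_a, -incident_rate)

print(f"benchmark redundancy (A vs. B):        rho = {redundancy:.2f}")
print(f"predictive validity (A vs. incidents): rho = {validity:.2f}")
```

In a real study these correlations would be estimated across many models and deployment contexts, with uncertainty quantified; the point here is only the shape of the check.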

Program Goals

  • Deepen our understanding of safety properties of AI systems

  • Build a rigorous science of evaluation with construct and predictive validity

  • Advance trustworthy AI approaches that resist obsolescence as the technology rapidly evolves

  • Support a global community of researchers advancing the science of trustworthy AI

Research Agenda

Our research agenda organizes priorities around three connected aims:

  • Characterize and forecast misalignment in frontier AI systems

    Understand why frontier AI training-and-deployment safety stacks still result in models learning effective goals that fail under distribution shift, optimization pressure, or extended interaction; the sketch after this list shows this failure mode in miniature.

  • Develop generalizable measurement and intervention

    Advance the science of evaluations with decision-relevant construct and predictive validity, and develop interventions that control what AI systems learn (not just what they say).

  • Oversee AI systems with superhuman capabilities and address multi-agent risks

    Develop oversight and control methods for settings where direct human evaluation of correctness or safety isn’t feasible, and address risks that emerge from interacting AI systems.
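
As a minimal illustration of the first aim above, the toy sketch below trains a classifier in a setting where a spurious shortcut feature correlates with the label, then evaluates it after that correlation breaks. Everything here is synthetic and didactic; it stands in for, and vastly simplifies, the goal-misgeneralization phenomena the agenda targets in frontier systems.

```python
# Toy illustration of goal misgeneralization under distribution shift.
# All data is synthetic; this is a didactic sketch, not a frontier-model result.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_reliability):
    """Binary task with a weak causal feature and a spurious shortcut."""
    y = rng.integers(0, 2, n)
    causal = y + rng.normal(0, 1.5, n)  # weakly informative causal feature
    # The shortcut matches the label with probability `shortcut_reliability`.
    agrees = rng.random(n) < shortcut_reliability
    shortcut = np.where(agrees, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([causal, shortcut]), y

# Training distribution: the shortcut is almost always right, so the model's
# "effective goal" becomes tracking the shortcut, not the causal feature.
X_train, y_train = make_data(5000, shortcut_reliability=0.95)
# Deployment-like shift: the shortcut no longer carries any information.
X_shift, y_shift = make_data(5000, shortcut_reliability=0.50)

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", round(clf.score(X_train, y_train), 3))
print("shifted accuracy:        ", round(clf.score(X_shift, y_shift), 3))
# Expect high accuracy in-distribution and a sharp drop under shift.
```

The analogous question for frontier systems is why safety training produces such shortcut-like effective goals, and how to detect them before the shift occurs.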
