New $10 Million AI Safety Science Program Launched for Foundational Research
Schmidt Sciences | Feb 11, 2025

Schmidt Sciences tackles underfunded challenges in AI safety to help ensure reliable and robust AI systems.
MEDIA CONTACT: Carlie Wiener, cwiener@schmidtsciences.org
NEW YORK—Amid the rapid advance of AI technology, Schmidt Sciences announced the selection of 27 projects developing the fundamental science critical to understanding the safety properties of AI systems, an essential yet underfunded area of study. The projects—the first to be supported through Schmidt Sciences’ new AI Safety Science program—seek to develop well-founded, concrete, implementable technical methods for testing and evaluating large language models (LLMs) so that they are less likely to cause harm, make errors, or be misused.
By fostering a collaborative global research community and offering computational support from the Center for AI Safety and API access from OpenAI, Schmidt Sciences seeks to make safety science an integral part of AI innovation. The program is also designed to develop robust tools to measure and evaluate risks, provide funding to support long-term research, and elevate underutilized academic expertise in AI. Current safety benchmarks are limited, funding falls far short of what’s needed to address future risks, and university researchers lack access to the resources required to contribute effectively.
A recent Wall Street Journal op-ed by Eric Schmidt, who co-founded Schmidt Sciences with his wife, Wendy, addressed the pressing need for safety protocols in AI development and deployment, a key goal of the AI Safety Science program.
“As AI systems advance, we face the risk that they will act in ways that contradict human values and interests—but this risk is not inevitable,” said Eric Schmidt. “With efforts like the AI Safety Science program, we can help build a future in which AI benefits us all while maintaining safeguards that protect us from harm.”
The cohort of researchers—including “godfather of AI” Yoshua Bengio at Mila – Quebec Artificial Intelligence Institute, OpenAI board member Zico Kolter at Carnegie Mellon University, and “AI Snake Oil” author Arvind Narayanan at Princeton University—will focus on the critical safety challenges facing AI.
“Addressing these gaps now is essential to ensure that future AI systems are safe, trustworthy, and beneficial for society. This is more than a funding program—it’s a movement to energize the scientific community into action,” said Stu Feldman, Schmidt Sciences chief scientist and president. “By building the foundation of AI safety science, we will be able to ensure that AI systems serve humanity responsibly and equitably.”
“The science of AI safety is a crucial new field that is underfunded by philanthropy, commercial AI labs, and the government,” said Michael Belinsky, a director in Schmidt Sciences’ AI and Advanced Computing Institute and lead of the AI Safety Science program. “We are proud to support these dedicated researchers as they work to ensure that AI is safe and aligned with human values.”
Later this year, the AI Safety Science program will convene its awardees in California, where they will share their work with each other and with organizations interested in AI safety. The program also plans additional calls for proposals to bring new awardees into the program.
The full list of awardees is available on the Schmidt Sciences website.
Schmidt Sciences is a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to accelerate scientific knowledge and breakthroughs with the most promising, advanced tools to support a thriving planet. The organization prioritizes research in areas poised for impact, including AI and advanced computing, astrophysics, biosciences, climate, and space, and supports researchers across a variety of disciplines through its science systems program.
# # #