
AI & Advanced Computing

Schmidt Sciences joins global research effort to safeguard AI

Schmidt Sciences | Jul 30, 2025

UK AISI, Canada AISI, Amazon Web Services, Anthropic, and others collaborate to offer grants, computational power and venture capital to build trust in AI

MEDIA CONTACT: Carlie Wiener cwiener@schmidtsciences.org

NEW YORK—Schmidt Sciences announced today it will participate in a global public-private coalition that plans to direct up to $20 million to support research into making AI safe, secure and aligned with human values.

“Keeping AI systems aligned with human values is the great scientific challenge of our time—and meeting it will require the same creativity that powered past scientific revolutions,” said Mark Greaves, executive director of AI and advanced computing at Schmidt Sciences. “Schmidt Sciences is proud to collaborate with a global consortium to develop techniques to ensure that AI system behavior is aligned with human values.”

Spearheaded by leading government bodies for AI safety and security research, and guided by an expert advisory board, the AI Alignment Project plans to provide grants of up to $1.3 million per project, along with compute resources, to academic researchers addressing the understudied but essential challenges of AI safety and alignment. The Project will also seek venture capital for commercial efforts that accelerate adoption of AI safety tools and practices by the for-profit sector.

The AI Alignment Project is an international collaboration led by the UK AI Security Institute (UK AISI). Alongside Schmidt Sciences, additional organizations joining this effort include Canada AI Safety Institute (Canada AISI), Amazon Web Services, Anthropic, UK Research and Innovation, Halcyon Futures, the Safe AI Fund and the Advanced Research and Invention Agency. The Project is actively seeking additional supporters from any sector to contribute research grants, cloud compute resources or venture funding. 

“It’s crucial we’re driving forward research to ensure this transformative technology is behaving in our interests,” said U.K. Secretary of State for Science, Innovation and Technology Peter Kyle. “This fund will help us make AI more reliable, more trustworthy, and more capable of delivering growth, better public services and high-skilled jobs.”

As AI models continue to rapidly improve their capabilities, including demonstrating expert-level knowledge in some fields, the nascent study of AI safety has not kept pace. Today’s methods for controlling AI, according to the 2025 International AI Safety Report, are unlikely to be capable of reining in tomorrow’s AI systems.

“The general goals of alignment may be simple to state: preventing AI systems from carrying out actions that pose a risk to our collective security. But there is still a large amount of research to be done to make these objectives concrete, and to build AI that can achieve this goal in the face of continually improving capabilities,” said AI Alignment Project Advisory Board Member and Schmidt Sciences AI Safety Sciences grantee Zico Kolter. “Efforts like the Alignment Project are fundamental to moving this research agenda forward.”

The advisory board, which will guide the strategy and vision of the effort, will include Kolter as well as Yoshua Bengio, founder of LawZero, pioneering cryptographer Shafi Goldwasser, and Boston University computer scientist Andrea Lincoln. Schmidt Sciences has also supported Yoshua Bengio’s AI safety research. 

Schmidt Sciences’ participation in this initiative builds on the work of its AI Safety Science program. Announced in February, the program has already awarded more than $10 million in grants to 27 research projects.

The Project builds on the leadership of the UK AI Security Institute and Canada’s AI Safety Institute, which are dedicated to driving progress on safe, controllable AI that can be deployed with confidence. 

 

About Schmidt Sciences

Schmidt Sciences is a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to accelerate scientific knowledge and breakthroughs with the most promising, advanced tools to support a thriving planet. The organization prioritizes research in areas poised for impact including AI and advanced computing, astrophysics, biosciences, climate, and space—as well as supporting researchers in a variety of disciplines through its science systems program.

 

About UK AISI 

The AI Security Institute is a research organisation within the Department for Science, Innovation and Technology. AISI works to test advanced AI systems and inform policymakers about their risks; foster collaboration across companies, governments, and the wider research community to mitigate risks and advance publicly beneficial research; and strengthen AI development practices and policy globally.

# # #