Yoshua Bengio Launches LawZero To Make AI Safe By Design

AI visionary Yoshua Bengio unveils LawZero, a nonprofit aiming to put safety and transparency at the core of artificial intelligence, shifting the field from risky agentic models to non-agentic, truth-focused systems that protect human interests.

Image Credit: BestForBest / Shutterstock

Université de Montréal computer science professor Yoshua Bengio today announced the launch of LawZero, a nonprofit organization dedicated to advancing research and developing technical solutions for "safe-by-design" artificial intelligence systems.

With Bengio as its president and scientific director, the organization brings together a team of AI researchers who are building the next generation of AI systems in a way that prioritizes safety over commercial interests.

Bengio, founder of Mila – Quebec AI Institute, said he founded LawZero in response to evidence that today's "frontier AI" models are developing dangerous capabilities and behaviours, including deception, self-preservation, and misalignment of goals.

His research team will strive to unlock the potential of AI in ways that reduce the likelihood of a range of dangers associated with today's systems, including algorithmic bias, intentional misuse, and loss of human control. 

Structured as a nonprofit to insulate it from market and government pressures, "LawZero is the result of the new scientific direction I undertook in 2023 after recognizing the rapid progress made by private labs toward artificial general intelligence (AGI) and ... its profound implications for humanity," said Bengio.

"Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase," added Bengio, a pioneer of machine learning and co-winner of the A.M. Turing Award in 2018.

'A constructive response'

"LawZero is my team's constructive response to these challenges," said Bengio. "It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every AI frontier system, there should be one guiding principle: the protection of human joy and endeavour."

Already, LawZero has a growing technical team of over 15 researchers, pioneering a new approach called Scientist AI, which Bengio describes as a practical, effective, and more secure alternative to today's uncontrolled agentic AI systems.

Unlike the approaches of frontier AI companies, which are increasingly focused on developing agentic systems, Scientist AI systems are non-agentic and primarily learn to understand the world rather than act in it, giving truthful answers to questions based on transparent, externalized reasoning.

Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery, and advance the understanding of AI risks and how to avoid them, Bengio said. 

A host of donors, including the Future of Life Institute, Estonian investor and programmer Jaan Tallinn, Open Philanthropy, Schmidt Sciences, and the Silicon Valley Community Foundation, are supporting the LawZero project as part of its incubation phase.

Journal reference:
  • Bengio, Y., Cohen, M., Fornasiere, D., Ghosn, J., Greiner, P., MacDermott, M., Mindermann, S., Oberman, A., Richardson, J., Richardson, O., & Rondeau, M. (2025). Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? ArXiv. https://arxiv.org/abs/2502.15657
