AI visionary Yoshua Bengio unveils LawZero, a nonprofit aiming to put safety and transparency at the core of artificial intelligence, shifting the field from risky agentic models to non-agentic, truth-focused systems that protect human interests.
Image Credit: BestForBest / Shutterstock
Université de Montréal computer science professor Yoshua Bengio today announced the launch of LawZero, a nonprofit organization dedicated to advancing research and developing technical solutions for "safe-by-design" artificial intelligence systems.
With Bengio as its president and scientific director, the organization brings together a team of AI researchers who are building the next generation of AI systems in a way that prioritizes safety over commercial interests.
Bengio, founder of Mila – Quebec AI Institute, said he founded LawZero in response to evidence that today's "frontier AI" models are developing dangerous capabilities and behaviours, including deception, self-preservation, and misalignment of goals.
His research team will strive to unlock the potential of AI in ways that reduce the likelihood of a range of dangers associated with today's systems, including algorithmic bias, intentional misuse, and loss of human control.
The organization is structured as a nonprofit to insulate it from market and government pressures. "LawZero is the result of the new scientific direction I undertook in 2023 after recognizing the rapid progress made by private labs toward artificial general intelligence (AGI) and ... its profound implications for humanity," said Bengio.
"Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase," added Bengio, a pioneer of machine learning and co-winner of the A.M. Turing Award in 2018.
'A constructive response'
"LawZero is my team's constructive response to these challenges," said Bengio. "It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every AI frontier system, there should be one guiding principle: the protection of human joy and endeavour."
LawZero already has a growing technical team of more than 15 researchers who are pioneering a new approach called Scientist AI, which Bengio describes as a practical, effective, and more secure alternative to today's uncontrolled agentic AI systems.
Unlike the approaches of frontier AI companies, which are increasingly focused on developing agentic systems, Scientist AI systems are non-agentic and primarily learn to understand the world rather than act in it, giving truthful answers to questions based on transparent, externalized reasoning.
Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery, and advance the understanding of AI risks and how to avoid them, Bengio said.
A host of donors, including the Future of Life Institute, Estonian investor and programmer Jaan Tallinn, Open Philanthropy, Schmidt Sciences, and the Silicon Valley Community Foundation, are supporting the LawZero project as part of its incubation phase.
Journal reference:
- Bengio, Y., Cohen, M., Fornasiere, D., Ghosn, J., Greiner, P., MacDermott, M., Mindermann, S., Oberman, A., Richardson, J., Richardson, O., & Rondeau, M. (2025). Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? arXiv. https://arxiv.org/abs/2502.15657