A new Perspective highlights why the future of AI lies in the art of symbolization, uniting human-designed abstractions with machine learning to overcome the limits of scaling and spark genuine discovery.

Advances in artificial intelligence, particularly large language models (LLMs), have been driven by the "scaling law" paradigm: performance improves with more data, computation, and larger models. However, this approach reveals a critical vulnerability when confronting frontier domains where usable data is inherently scarce. In these environments, LLMs often fail at the complex, multi-step reasoning required for true innovation.
Beyond Scaling: The Case for Symbols
In a new Perspective article published in the journal National Science Review, Professor Xiaotie Deng and co-author Hanyu Li from Peking University argue that the path forward requires a fundamental shift in strategy. Instead of relying on scaling alone, they propose augmenting the powerful statistical intuition of LLMs with symbols derived from human wisdom. They term this process “quotienting” – the creation of compact symbolic spaces that simplify vast problem domains, much like quotient spaces in mathematics.
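For readers new to the analogy, the standard mathematical notion works like this (our illustrative gloss, not notation from the Perspective): a quotient collapses a large space into a small one by treating equivalent elements as a single point.

```latex
% Quotient of a set X by an equivalence relation ~ :
% every element is replaced by its equivalence class.
X/\!\sim \;=\; \{\,[x] : x \in X\,\}, \qquad [x] = \{\, y \in X : y \sim x \,\}
% Example: the infinitely many integers collapse, modulo n,
% into just n classes, so reasoning needs only finitely many cases:
\mathbb{Z}/n\mathbb{Z} \;=\; \{\,[0],\,[1],\,\dots,\,[n-1]\,\}
```

In the authors' analogy, a well-chosen symbol system plays the role of the equivalence relation: it decides which distinctions a reasoner may safely ignore, shrinking a vast problem domain to a tractable one.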
"The challenges LLMs face in data-scarce environments don't mean the scaling paradigm has reached its ceiling," explains Dr. Deng. "Rather, it signals that we need to integrate a distinctly human capability: using symbols as a cognitive technology to map and navigate complexity."
Lessons from History: When Symbols Shape Thought
The authors illustrate this principle with historical examples, from the Pirahã people, whose language lacks number words, limiting their ability to recall exact quantities, to the triumph of Leibniz's calculus notation over Newton's, which provided a more intuitive framework for thought. These cases, they argue, show that a well-designed symbol system does more than label concepts: it encodes heuristics and serves as a powerful tool for reasoning.
AlphaGeometry and Neuro-Symbolic Synergy
This synergy is powerfully demonstrated by AlphaGeometry, an AI system that solved Olympiad geometry problems at a level approaching an International Mathematical Olympiad gold medalist. The system combines an LLM, trained on a human-designed symbolic language for geometric constructions, with a deductive solver. The LLM makes an intuitive leap to propose an auxiliary construction, whose consequences the symbolic engine then efficiently and exhaustively deduces.
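This division of labor can be captured in a few lines. The sketch below is our own minimal illustration of the propose-then-deduce loop, with stub functions standing in for the trained model and the geometry engine; none of the names reflect DeepMind's actual code.

```python
def deductive_closure(facts, rules):
    """Symbolic engine: apply deduction rules until no new facts appear."""
    facts = set(facts)
    while True:
        derived = set()
        for rule in rules:                 # each rule maps facts -> new facts
            derived |= set(rule(facts)) - facts
        if not derived:
            return facts                   # fixed point reached
        facts |= derived

def llm_propose(facts, goal):
    """Stub for the LLM's intuitive leap: suggest one auxiliary construction.
    A real system samples from a model trained on a symbolic geometry
    language; here we return a canned placeholder."""
    return "construct midpoint(M, A, B)"

def solve(premises, goal, rules, max_rounds=5):
    facts = set(premises)
    for _ in range(max_rounds):
        facts = deductive_closure(facts, rules)   # exhaust pure deduction
        if goal in facts:
            return facts                          # proof found
        facts.add(llm_propose(facts, goal))       # stalled: add a construction
    return None                                   # gave up within budget
```

The key design point is that the statistical model is only ever asked for the step deduction cannot reach, while correctness is guaranteed by the symbolic side.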
"This neuro-symbolic synthesis overcomes the limitations of both purely statistical and purely symbolic paradigms," says Hanyu Li. "If scaling laws have given models their powerful intuition, it is the art of the symbol that will provide the compass for genuine discovery."
Future Directions: The Art of Symbolization
The authors suggest this approach opens promising new research directions, including automated algorithm design with theoretical guarantees, combinatorial optimization, and code generation optimized for specific hardware. Examples cited include LegoNE, in which an LLM guided by symbolic structures devised an algorithm for approximate Nash equilibria that surpassed previous human designs; AutoSAT, which iteratively edits the heuristic rules of a SAT solver to improve performance; and NVIDIA's verifier-guided LLM, which generated GPU attention kernels outperforming expert hand-tuned libraries. All three share the propose-verify-keep pattern sketched below.
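As a rough common denominator (our abstraction, not a description of any one of these systems), each can be read as a verifier-guided search: an LLM proposes an edit to a candidate program, an automatic checker rules out incorrect candidates, and a benchmark decides whether the edit is kept.

```python
def verifier_guided_search(initial, llm_edit, is_correct, benchmark, budget=100):
    """Generic verifier-guided improvement loop (illustrative only).
    llm_edit, is_correct, and benchmark are caller-supplied stand-ins for
    the language model, the symbolic verifier, and the scoring harness."""
    best = initial
    best_score = benchmark(initial)
    for _ in range(budget):
        candidate = llm_edit(best)        # statistical intuition proposes
        if not is_correct(candidate):     # symbolic check filters
            continue
        score = benchmark(candidate)
        if score > best_score:            # empirical measurement selects
            best, best_score = candidate, score
    return best
```

The symbolic layer here is what makes the LLM's unreliable proposals safe to use: nothing incorrect survives the filter, so the search can only move toward better verified programs.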
The central task ahead, they conclude, is the art of symbolization itself—crafting abstractions that can guide AI beyond scaling and toward genuine discovery.