Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates that U.S. AI and data centers used about 415 terawatt-hours of electricity in 2024, more than 10% of that year's nationwide electricity generation, and consumption is expected to double by 2030.
Seeking to head off this unsustainable trajectory of power consumption, researchers at the School of Engineering have developed a proof of concept for efficient AI systems that could use 100 times less energy than current systems while delivering more accurate results on the same tasks.
The approach developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI, a combination of conventional neural network AI with symbolic reasoning, similar to the way humans break down tasks and concepts into steps and categories. The research will be presented at the International Conference on Robotics and Automation (ICRA) in Vienna in May and published in the conference proceedings.
Scheutz and his team focus their work on robots interacting with humans, so the AI technologies they employ are not screen-based large language models (LLMs) such as ChatGPT or Gemini. Instead, they study vision-language-action (VLA) models, which extend LLMs with visual and movement capabilities for robots. These models take camera and language inputs and respond by generating actions in the real world, such as moving a robot's wheels, legs, arms, and fingers.
Using conventional, resource-intensive VLA approaches, if a robot were asked to pile blocks into a simple tower, the system might scan the environment, identify the blocks' locations, shapes, and orientations, and interpret the instruction to place each block on top of the others. In the attempt, it might misread a block's shape because of shadows, misplace a block, or stack the blocks so that they tip over.
To return to the LLM analogy, the robot's failed attempts are akin to a chatbot producing inaccurate or outright false text or images. Famous examples include fabricating court cases for legal briefs or rendering people with six fingers.
Symbolic reasoning is more efficient than the conventional approach, yielding more general planning strategies based on puzzle rules and abstract categories such as block shape and centers of mass.
How Neuro-Symbolic Systems Work Better
"Like an LLM, VLA models act on statistical results from large training sets of similar scenarios, but that can lead to errors," said Scheutz. "A neuro-symbolic VLA can apply rules that limit the amount of trial and error during learning and get to a solution much faster. Not only does it complete the task much faster, but the time spent on training the system is significantly reduced."
In tests using a standard Tower of Hanoi puzzle, the neuro-symbolic VLA system had a 95% success rate, compared with 34% for standard VLAs. For a more complex version of the puzzle that the robot had not seen in training, the neuro-symbolic system had a 78% success rate, while standard VLAs failed every attempt.
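The Tower of Hanoi illustrates why symbolic rules can beat statistical trial and error: the puzzle has a well-known recursive solution that a symbolic planner can apply directly rather than learning move patterns from data. A minimal sketch of that rule in Python (an illustration only, not the authors' system):

```python
def hanoi(n, source, target, spare, moves):
    """Symbolic rule for Tower of Hanoi: to move n disks from source
    to target, move the top n-1 disks to the spare peg, move the
    largest disk to the target, then move the n-1 disks on top of it."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))  # move disk n
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 7, i.e. 2**3 - 1: the provably optimal move count
```

Because the rule is guaranteed correct for any number of disks, a planner built on it generalizes to larger, unseen versions of the puzzle, which is consistent with the neuro-symbolic system's 78% success rate on a variant it was never trained on.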
The neuro-symbolic system could be trained in just 34 minutes, while the standard VLA model took over a day and a half. Significantly, training the neuro-symbolic model used only 1% of the energy required to train a VLA model, and the energy savings persisted during task execution, with the neuro-symbolic model using only 5% of the energy required by the VLA.
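These percentages line up with the article's headline figure: using 1% of the training energy is a 100-fold reduction. A quick arithmetic check in Python:

```python
# Figures reported in the article: training the neuro-symbolic model
# used 1% of the energy of training the standard VLA, and running it
# used 5% of the VLA's task-execution energy.
training_fraction = 0.01
execution_fraction = 0.05

# Expressed as fold reductions:
print(round(1 / training_fraction))   # 100: the "100 times less" headline
print(round(1 / execution_fraction))  # 20: a 20-fold saving at run time
```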
Scheutz draws parallels to familiar LLMs like ChatGPT or Gemini. "These systems are just trying to predict the next word or action in a sequence, but that can be imperfect, and they can come up with inaccurate results or hallucinations. Their energy expense is often disproportionate to the task. For example, when you search on Google, the AI summary at the top of the page consumes up to 100 times more energy than the generation of the website listings."
With the explosion in user demand for AI systems and their integration into industrial applications, there is a competitive arms race to build ever-larger data center facilities whose power usage can reach hundreds of megawatts, far more than is typically needed to power a small city.
The researchers conclude that current LLMs and VLAs, despite their popularity, may not be the right foundation for energy-efficient, reliable AI and may push us up against a wall of resource limitations. Instead, they suggest that hybrid neuro-symbolic AI could provide a more sustainable and dependable path forward.

Journal reference:
Duggan, T., Lorang, P., Lu, H., & Scheutz, M. (2026). The Price Is Not Right: Neuro-Symbolic Methods Outperform VLAs on Structured Long-Horizon Manipulation Tasks with Significantly Lower Energy Consumption. arXiv. https://arxiv.org/abs/2602.19260