As autonomous AI agents gain the power to browse, act, and make decisions on users’ behalf, researchers warn that robust guardrails are essential to prevent hidden prompts, excessive permissions, and compliance failures from turning productivity gains into serious breaches.

Image Credit: innni / Shutterstock
Artificial intelligence is no longer limited to passive chatbots awaiting instructions. A new class of systems, known as agentic AI, can interpret user intent and take independent action across digital environments. These agents can browse websites, complete forms, plan tasks, and even execute decisions without step-by-step human input.
Saint Louis University’s Flavio Esposito, Ph.D., emphasizes that users should understand both the advantages and the inherent risks of these more autonomous systems.
“As an example, OpenAI recently released ChatGPT Atlas, a new Internet browser guided by an AI agent,” Esposito explains. “Unlike traditional browsers that wait for user input, these systems can act on your behalf. Users simply state what they want, and the browser executes the entire workflow - even completing purchases.”
Esposito is currently leading a National Science Foundation–funded collaboration between Saint Louis University and Northeastern University that stress-tests agentic AI in next-generation open radio (O-RAN) cellular networks. By simulating adversarial conditions, the team aims to detect vulnerabilities and remove corrupted model behavior without fully retraining the AI.
Pros of Agentic AI
- Transforms how information is accessed and processed, similar to past disruptions caused by search engines.
- Automates repetitive digital tasks, enabling humans to focus on reasoning and creativity.
- Expands access for users with disabilities or limited technical skills.
- Allows general workers to reduce routine work hours to minutes.
Cons of Agentic AI
- Systems like Atlas require user-granted permissions; inattentive users may authorize more access than intended.
- Once granted, permissions can dissolve traditional safeguards that rely on active human control.
- Agentic systems are vulnerable to “Indirect Prompt Injection,” in which hidden website instructions manipulate an AI into performing harmful actions.
- Blurring the boundary between user and agent complicates compliance with privacy laws such as HIPAA and FERPA, which assume human-controlled data access.
“Agentic AI can become either the next major productivity breakthrough or the next major breach of trust,” Esposito says. “The outcome depends on the guardrails we establish now.”
Esposito’s Broader Research Contributions
Throughout his career, Esposito has secured NSF funding to advance resilient, adaptive, and efficient computing systems. A 2022 grant supported work on integrating learning algorithms with core network mechanisms, enabling a programmable wireless testbed at SLU. Earlier projects involved strengthening virtual cloud-computing networks and advancing distributed system design to reduce environmental impact.
Esposito earned his Ph.D. in Computer Science from Boston University in 2013 and holds a Master of Science in Telecommunication Engineering from the University of Florence. His research focuses on network management, virtualization, and distributed systems.