Experts Call For Flexible ‘Leash’ Regulation To Better Manage AI Risks

Instead of static guardrails, researchers say AI oversight needs dynamic “leashes” that let innovation flourish while keeping risks in check, offering a smarter path for regulators navigating tomorrow’s technology.

Research: Leashes, not guardrails: A management-based approach to artificial intelligence risk regulation. Image Credit: aniqpixel / Shutterstock

Many policy discussions on AI safety regulation have focused on the need to establish regulatory "guardrails" to protect the public from the risks of AI technology. In a new paper published in the journal Risk Analysis, two experts argue that, instead of imposing guardrails, policymakers should demand "leashes."

Cary Coglianese, director of the Penn Program on Regulation and a professor at the University of Pennsylvania Carey Law School, and Colton R. Crum, a computer science doctoral candidate at the University of Notre Dame, explain that management-based regulation (a flexible "leash" strategy) will work better than a prescriptive guardrail approach because AI is too heterogeneous and dynamic to operate within fixed lanes. Leashes "are flexible and adaptable, just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration," the authors write. Leashes "permit AI tools to explore new domains without regulatory barriers getting in the way."

The applications of AI are varied, including social media, chatbots, autonomous vehicles, precision medicine, fintech investment advisors, and many more. While AI offers benefits for society, such as the ability to find evidence of cancerous tumors that even well-trained radiologists can miss, it can also pose risks.

In their paper, Coglianese and Crum provide three examples of AI risks: autonomous vehicle (AV) collisions, suicide associated with social media, and bias and discrimination introduced by AI through various applications and digital formats, including AI-generated text, images, and videos. 

Under flexible management-based regulation, firms using AI tools that pose risks in each of these settings, as well as others, would be expected to put their tools on a leash by creating internal systems that anticipate and mitigate the range of possible harms from their use.

Management-based regulation can flexibly respond to "AI's novel uses and problems and better allows for technological exploration, discovery, and change," write Coglianese and Crum. At the same time, it provides "a tethered structure that, like a leash, can help prevent AI from 'running away.'"

Journal reference:
Coglianese, C., & Crum, C. R. Leashes, not guardrails: A management-based approach to artificial intelligence risk regulation. Risk Analysis.