Nearly half of in-house legal professionals say they would not detect an unauthorized or incorrect action taken by an AI agent until after it had already occurred, sometimes days or even weeks later, according to new survey research released today by Icertis, the contract lifecycle management company.
The findings, drawn from a survey of more than 1,000 U.S. corporate legal practitioners, point to a governance gap that the company argues has emerged as AI tools have grown more autonomous.
While the vast majority of respondents said they primarily use AI in an assistive capacity, roughly one in four said AI sometimes handles tasks on its own, and for nearly 10% of respondents, human review of AI activity is already the exception rather than the rule.
Despite that growing autonomy, only about 23% of legal teams said they have a comprehensive, documented agentic AI policy in place, and only 34% said their existing general AI policies adequately cover agentic use cases. About 60% expressed confidence that their policies would be ready to govern AI agents within the next 12 to 24 months.
Accuracy concerns compound the governance challenge. Only 26% of respondents said they were very confident that the AI their team uses is accurate enough for high-stakes business decisions, and roughly half said they need to apply human judgment before trusting AI-generated outputs.
The survey also found fragmented visibility into AI activity. About 39% of respondents said they are confident in real-time visibility into their AI agents' actions, while an equal share said they would likely catch a problem, but only after the fact. Eight percent said an AI action could go undetected for days or even weeks.
On the question of accountability when AI makes a mistake, responses were split nearly evenly: 23% said responsibility would fall on the team that deployed the agent, 23% said the team managing its day-to-day operations, and 22% said it would depend on the circumstances. Only 10% said legal would bear accountability when compliance is compromised, even though 35% said legal is the primary owner of AI usage policies.
The survey also flagged concerns about data connectivity. More than 70% of respondents said their teams use generic large language models, such as ChatGPT or Claude, in their legal work, with 65% using those tools directly on contracts. Only 17% said their AI tools both send and receive data from other enterprise systems, while 23% said their AI data stays entirely within legal systems.
Icertis, whose platform is built around contract intelligence and management, frames the problem in terms of a solution it is positioned to offer: contracts as a governance layer. The report argues that contract data can give AI agents the business context needed to act accurately, and that only 38% of legal teams currently view contracts as a tool for governing AI, though another 32% said they see the potential.
“The pace of AI innovation is outpacing the governance meant to oversee it,” said Bernadette Bulacan, chief evangelist at Icertis, “and legal is feeling this pressure on two fronts: with the use of AI agents in their own department, and through growing usage by other functions.”
The full report is available on the Icertis website.
