
Most attorneys assume the hard part of AI is the technology. It isn't. The hard part is that the law is moving at a fraction of its speed. If you are in-house, you're already feeling the pressure. Your business wants to deploy a new AI capability, buyers are asking for commitments you've never seen before, and your executives want a straight answer about risk in a landscape where even regulators seem unsure.
In my conversation with John Pavolotsky, technology transactions lawyer and co-head of the AI practice at Stoel Rives, he put it plainly: "You draft to the lay of the land right now, and to where things might go in the next six to 12 months." For in-house teams, that window is already uncomfortably small. This is the moment when legal teams either adapt or fall behind the speed of their own companies.
Understanding this tension is the first step. Acting on it is the second.
The Regulatory Terrain Is Shifting Under Your Feet
John described the current patchwork of AI regulation as a moving target. California alone has dozens of bills that are labeled "AI-related." The EU AI Act categorizes systems into risk tiers that many U.S. companies will feel the effects of, even if they aren't directly subject to it.
For in-house teams, the problem isn't tracking every bill. The problem is staying aligned with the small subset that actually intersects your business. That requires more than scanning headlines. It requires ongoing conversations within the company about how the technology is designed, deployed, updated, and used.
John's point here is useful: the states remain laboratories of governance, and they will continue experimenting ahead of federal frameworks. In-house attorneys should assume that a "stable" AI regulatory landscape is years away. The job is not to predict the outcome but to build contracting strategies that survive the volatility.
High-Risk Use Cases Are Already Defined. The Market Is Paying Attention.
One practical insight John shared is that the definition of "high-risk" is not as mysterious as people think. The EU AI Act and the Colorado AI Act list the categories clearly: education, housing, financial services, government services, and any domain with a material impact on a person's livelihood.
Most in-house counsel already know whether their company's products or internal use cases touch these areas. The gap is often operational, not conceptual. Has the organization mapped its AI use cases? Do product managers know how the company defines "high-risk"? Are procurement workflows flagging these systems before a contract hits legal? If the answer is no, the issue is not regulatory uncertainty. The issue is internal clarity.
This is where legal can lead.
AI Is Software, But Contracting for AI Is Not SaaS 2.0
John made a point that sounds simple but has big implications: AI is still software. Yet once AI becomes more agentic, "the entire risk model shifts." If systems begin taking actions on a user's behalf, making decisions without human sign-off, or interacting with other systems autonomously, the SaaS analogy breaks down.
In SaaS, we negotiate availability, uptime, data rights, SLAs, disaster recovery, audits. With agentic systems, we shift toward questions about delegation, autonomy boundaries, and failure modes. We shift toward:
What happens when the system does something unanticipated?
What is the chain of accountability when a system acts on incomplete or misleading data?
How do you evaluate risk when the system's internal reasoning is not deterministic?
This isn't theoretical. John gave the example of a future AI travel concierge. You tell it to plan your hiking trip in the Bavarian Alps. It books your flights, pays for your accommodations, coordinates guides, and executes decisions across multiple vendors. Today, that would be a cute demo. In a few years, it could be real. And once AI tools begin transacting, negotiating, and executing autonomously, contract clauses built for SaaS workflows will collapse under their own assumptions.
In-house counsel should anticipate this shift, not react to it.
Experimentation Is Now A Professional Obligation
One of John's most valuable pieces of advice is simple: legal teams can't meaningfully advise on AI unless they're using it. He encourages attorneys to pick a couple of tools and get comfortable with them. Feed them real prompts. Ask them to draft clauses. Stress-test the outputs. Learn where the seams are. Learn where they hallucinate, misinterpret, or oversimplify. Learn where they shine.
This isn't about becoming a prompt engineer. It's about understanding the mechanics of the tools shaping modern contracting. If the business is experimenting and legal is not, legal will not be ready when the real risk decisions show up.
Experimentation also forces clarity. It helps you define what "good enough" looks like in your organization. As John noted, humans still struggle to agree on shared language, and AI will inherit those struggles. Using the tools gives you a stronger foundation to establish drafting standards, review checklists, and guidance your teams can rely on.
The In-House Advantage: You Sit Closest To The Technology
John spent years at Intel and Roku before returning to private practice, and he emphasized something in-house counsel underestimate: proximity to the business is the superpower. You see product roadmaps before outside counsel does. You see design discussions. You see experimentation. You see failures. That visibility is the raw material needed to draft contracts that reflect how the technology actually behaves, not how a product sheet describes it.
AI risk will always look different inside the company than from the outside. Your engineers know where the model is brittle. Your product teams know what happens in edge cases. Your security team knows the real data flows. If legal isn't in those conversations, your contracts will over-index on theoretical risk and under-index on the risks your company is actually exposed to.
This is the moment to lean in.
Focus Your AI Contracting Strategy On Your Actual Sandbox
John ended with a point that deserves more attention: trying to track every bill, proposal, and headline is a waste of time. Your job is to understand your slice of the world and tailor your contracting playbook to it. That starts with mapping:
What AI are we building?
What AI are we buying?
What AI are we embedding in third-party platforms?
Where are the autonomy boundaries?
Where does data go?
What decisions are being delegated?
Once you have that map, you can structure contracts around the real risks, not speculative patterns.
The temptation right now is to boil the ocean. Resist it. Build targeted frameworks. Train your team on those frameworks. Revisit them quarterly. Align them with product reality, not headlines. That is how you build a contracting function that stays ahead of regulatory changes without chasing every draft bill.
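For teams that want to turn this mapping exercise into a working inventory, a minimal sketch might look like the following. All names here are hypothetical, and the high-risk domain list is an illustrative subset drawn from the categories mentioned above, not a legal determination.

```python
# A minimal, illustrative AI use-case register for the mapping exercise
# described above. Every name is hypothetical; adapt to your own sandbox.
from dataclasses import dataclass, field

# Illustrative subset of domains flagged as high-risk by frameworks such as
# the EU AI Act and the Colorado AI Act (education, housing, financial
# services, government services).
HIGH_RISK_DOMAINS = {"education", "housing", "financial services", "government services"}

@dataclass
class AIUseCase:
    name: str
    source: str                          # "built", "bought", or "embedded"
    domains: set = field(default_factory=set)
    decisions_delegated: list = field(default_factory=list)
    data_destinations: list = field(default_factory=list)

    def is_high_risk(self) -> bool:
        # Flag the use case if it touches any high-risk domain.
        return bool(self.domains & HIGH_RISK_DOMAINS)

# Example entries: one bought tool touching financial services, one internal tool.
register = [
    AIUseCase(
        name="underwriting-assistant",
        source="bought",
        domains={"financial services"},
        decisions_delegated=["preliminary credit scoring"],
        data_destinations=["vendor cloud"],
    ),
    AIUseCase(name="internal-search", source="built", domains={"internal ops"}),
]

flagged = [uc.name for uc in register if uc.is_high_risk()]
print(flagged)  # → ['underwriting-assistant']
```

Even a spreadsheet version of this register answers the questions above; the point is that the inventory exists before a contract hits legal, not that it lives in code.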
The Only Sustainable Strategy Is Continuous Dialogue
When I asked John for one takeaway, he said: "Have more conversations." He's right. None of us will get this right in isolation. The technology is evolving quickly, and expertise will come from talking with each other, testing ideas, comparing notes, and refining our approaches over time.
In-house counsel don't need perfect foresight. They need adaptable frameworks, grounded risk assessment, and a willingness to revise their approach as the landscape shifts. The companies that thrive will be the ones whose legal teams stay engaged, curious, and close to the technology, not the ones waiting for regulators to hand them the answers.
AI contracting is moving fast. Your organization needs you to move with it.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business circumstances. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology, delivered six TEDx talks, and her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.

