
Contracts

March 2, 2026

The AI Clause Gap: Why 83% of Your Vendor Contracts Are Missing Critical Protections

Your enterprise probably signed its first AI-powered vendor agreement in 2024 or early 2025. The contract was likely based on a standard SaaS template — the same framework you’d use for a project management tool or an HR platform.

That template is not adequate for what AI vendors are actually doing with your data.

Recent analysis of enterprise AI vendor agreements reveals a striking gap between what vendors claim and what they contractually commit to: 92% of AI vendors assert broad data usage rights in their terms of service, but only 17% commit to full regulatory compliance. Just 33% provide indemnification for third-party intellectual property claims. And only 17% include performance warranties tied to documentation compliance — compared to 42% in traditional SaaS contracts.

If your legal department hasn’t revisited its AI vendor agreements with these gaps in mind, you are carrying unpriced risk on your books.

The five clauses most contracts are missing

1. Data training restrictions

The default position of most AI vendors is that they can use your data to train and improve their models. This is buried in terms of service under language like “improve our services” or “enhance product functionality.”

For an enterprise, this is unacceptable. Your proprietary data — customer information, financial records, internal communications, strategic documents — should never be used to train a model that serves your competitors.

What the clause should say: The vendor shall not use Customer Data, including inputs, outputs, prompts, or metadata, to train, fine-tune, or improve any model, product, or service, whether for Customer or any third party, without prior written consent.

2. Disclosure requirements

Many AI-powered tools use third-party models (OpenAI, Anthropic, Google) as their underlying infrastructure. Your contract with the vendor may say nothing about this dependency — meaning your data may flow to a sub-processor you haven’t evaluated, approved, or even identified.

What the clause should say: Vendor shall disclose all AI models, sub-processors, and third-party services used to process Customer Data. Vendor shall provide 30 days’ prior written notice before introducing any new AI model or sub-processor. Customer shall have the right to object and terminate without penalty.

3. Output ownership and IP

Who owns the output of an AI tool when it’s trained on your data and prompted by your employees? The legal landscape is unsettled, but your contract doesn’t have to be.

What the clause should say: Customer owns all rights in inputs, prompts, and outputs generated through Customer’s use of the Service. Vendor retains no rights in Customer-generated outputs. Vendor represents that outputs do not, to Vendor’s knowledge, infringe third-party intellectual property rights.

4. Bias and discrimination liability

If your vendor’s AI tool makes or influences employment decisions, credit decisions, or customer-facing recommendations, you may be liable for discriminatory outcomes under the Colorado AI Act (effective 2026), NYC Local Law 144 (effective 2023), and potentially the Illinois Artificial Intelligence Video Interview Act and the incoming Illinois AIDA.

What the clause should say: Vendor shall conduct and provide results of annual bias audits for any AI system that materially influences decisions affecting individuals. Vendor shall indemnify Customer against claims arising from discriminatory outputs of the Service, including claims under applicable federal, state, and local anti-discrimination laws.

5. Regulatory compliance commitment

The state-level AI regulatory landscape is fragmenting rapidly. Twenty states now have comprehensive privacy laws. Colorado, Illinois, and New Jersey have specific AI governance requirements. The EU AI Act’s obligations phase in through 2027.

Most vendors promise “compliance with applicable laws” in general terms. Few specify which laws, commit to ongoing compliance monitoring, or accept liability for regulatory failures.

What the clause should say: Vendor shall comply with all applicable AI governance requirements, including [enumerate specific statutes relevant to Customer’s operations]. Vendor shall maintain documentation sufficient to demonstrate compliance and shall make such documentation available to Customer upon request.

The negotiation reality

Vendors will push back on these clauses. Large platform vendors (Microsoft, Google, Salesforce) may refuse to negotiate individually. Smaller, specialized AI vendors — the ones most likely to be handling your most sensitive use cases — are more likely to negotiate.

The leverage point is simple: you are the customer. If a vendor won’t commit to basic data protections, training restrictions, and regulatory compliance, that tells you something important about their operational practices.

What to do now

Inventory your AI vendor agreements. Identify every contract that involves AI-powered processing of your data. This is broader than you think — many traditional SaaS vendors have added AI features through updates that didn’t require contract amendments.

Conduct a clause gap analysis. For each agreement, check for the five clauses above. Flag contracts that are silent on data training, sub-processor disclosure, output ownership, bias liability, and regulatory compliance.

Prioritize by risk. Contracts involving employee data, customer data, financial data, or decision-making automation are highest priority. A marketing analytics tool with AI features is lower risk than an AI-powered hiring screener.

Negotiate amendments. For existing contracts, propose amendments addressing the gaps. For new contracts, build these clauses into your standard procurement templates.
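As a rough illustration of the gap analysis described above, the five clauses can be tracked per contract and the results sorted by data-sensitivity risk. This is a hedged sketch, not legal tooling: the vendor names, clause keys, and data-type categories below are hypothetical placeholders you would replace with your own inventory.

```python
# Hypothetical sketch: flag which of the five AI clauses each vendor
# contract is missing, and surface the riskiest contracts first.
# All contract data here is illustrative, not real.
REQUIRED_CLAUSES = [
    "data_training_restrictions",
    "sub_processor_disclosure",
    "output_ownership",
    "bias_liability",
    "regulatory_compliance",
]

# Data categories the article treats as highest priority.
HIGH_RISK_DATA = {"employee", "customer", "financial", "decision_automation"}

def gap_report(contracts):
    """Return each contract's missing clauses, highest-risk first."""
    rows = []
    for c in contracts:
        missing = [cl for cl in REQUIRED_CLAUSES if cl not in c["clauses"]]
        risk = len(HIGH_RISK_DATA & set(c["data_types"]))
        rows.append({"vendor": c["vendor"], "missing": missing, "risk": risk})
    # Sort by risk exposure, then by number of missing clauses.
    return sorted(rows, key=lambda r: (-r["risk"], -len(r["missing"])))

contracts = [
    {"vendor": "HiringScreenerAI",
     "data_types": ["employee", "decision_automation"],
     "clauses": ["regulatory_compliance"]},
    {"vendor": "MarketingAnalytics",
     "data_types": ["marketing"],
     "clauses": ["data_training_restrictions", "output_ownership"]},
]

for row in gap_report(contracts):
    print(row["vendor"], "missing:", ", ".join(row["missing"]) or "none")
```

In this toy inventory, the AI hiring screener surfaces first: it touches employee data and automated decisions while carrying only one of the five clauses, matching the article’s prioritization logic.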

The AI clause gap is not a future problem. Your vendors are using AI on your data right now, under contracts that were written before the technology and the regulatory landscape existed. Close the gap before regulators or litigants do it for you.