Can Lawyers Use AI for Client Documents?
Yes, lawyers can use AI for client documents - and many already are. But here is the problem: most attorneys using tools like ChatGPT for client work are likely violating their ethics obligations without realizing it. The rules permit AI use. They do not permit the way most lawyers are currently using it. The gap between "allowed" and "how it is actually being done" is where disciplinary risk lives.
The Ethics Rules That Govern AI Use in Legal Practice
Three ABA Model Rules form the backbone of every state bar opinion on lawyers and AI. If you practice in any U.S. jurisdiction, these apply to you.
Rule 1.1: Competence (The Duty to Understand Technology)
Model Rule 1.1 requires lawyers to provide "competent representation" to clients. Comment 8 to that rule, updated in 2012, explicitly states that competence includes staying current with "the benefits and risks associated with relevant technology." This is not optional guidance - it is a formal comment to a binding rule.
In practical terms, this means you need to understand how an AI tool processes data before you use it for client work. You cannot claim competence while feeding client information into a system whose data handling practices you have never reviewed.
Rule 1.6: Confidentiality
Rule 1.6 requires lawyers to make "reasonable efforts to prevent the inadvertent or unauthorized disclosure" of client information. When you paste client data into a cloud-based AI tool, you need to know:
- Whether the provider retains your inputs
- Whether your data is used to train or improve the model
- Who at the provider company can access your prompts
- Where the data is stored and under what security controls
- Whether inputs could surface in responses to other users
If you cannot answer these questions, you have not made "reasonable efforts" under Rule 1.6.
Rule 5.3: Supervision of Nonlawyer Assistance
Rule 5.3 holds lawyers responsible for ensuring that nonlawyer assistants act compatibly with the lawyer's professional obligations. ABA Formal Opinion 512 (July 2024) - the ABA's first formal ethics guidance on generative AI - confirmed that this rule extends to AI tools. You are responsible for the output, regardless of who or what produced it.
What State Bars Are Saying
State bar associations have been issuing AI guidance at a rapid pace. The consensus is clear: AI is permitted, but with significant guardrails.
- ABA Formal Opinion 512 (July 2024): The ABA's landmark opinion covers competence, confidentiality, informed consent, and billing. It states that lawyers must understand how generative AI tools use data and, in many circumstances, secure informed client consent before processing confidential information through AI - and that boilerplate consent provisions in engagement letters will not suffice.
- NYC Bar Formal Opinion 2024-5 (August 2024): Provides detailed guidance on ethical obligations when using generative AI, including requirements for confidentiality safeguards and verification of AI-generated work product.
- Florida Bar Opinion 24-1 (January 2024): Permits AI use but requires lawyers to protect client confidentiality, ensure accuracy and competence of AI-assisted work, avoid improper billing for AI-generated output, and comply with advertising rules when AI is client-facing.
- California State Bar Practical Guidance (November 2023): One of the earliest comprehensive guides, offering practical principles for generative AI use including data security, competence obligations, and supervisory duties.
The through-line across all of these: you can use AI, but you own the outcome. Every output requires human verification. Every input requires a confidentiality analysis.
The Real Risks of Using Cloud AI for Client Documents
The risks are not theoretical. They break down into four categories:
1. Data Retention and Training
Most cloud-based AI services retain user inputs for some period. Some use those inputs to improve their models. OpenAI's consumer ChatGPT, for example, uses conversations for model training by default unless you opt out; its API traffic is governed by separate terms. If a client's privileged information enters a training pipeline, you have lost control of that data permanently.
2. Confidentiality Breach by Design
When you use a cloud AI tool, client data leaves your control. It travels to third-party servers, gets processed by systems you did not build, and is governed by terms of service written for the provider's benefit - not yours. Even with enterprise agreements, you are trusting a third party with information your clients trusted exclusively to you.
3. Hallucination and Fabrication
Large language models generate plausible-sounding text. They do not verify facts. They will fabricate case citations, invent statutes, and produce legal analysis that sounds authoritative but is completely wrong. This is not a bug that will be patched - it is a fundamental characteristic of how these systems work.
4. No Privilege Protection
Attorney-client privilege can be waived by disclosure to third parties. Whether sending client information to an AI provider constitutes a waiver is an open question in most jurisdictions, and that uncertainty alone should give you pause.
Mata v. Avianca: The Case Every Lawyer Should Know
In June 2023, Judge P. Kevin Castel of the Southern District of New York sanctioned attorney Steven Schwartz and his firm after they submitted a brief containing six fabricated case citations generated by ChatGPT. The fictional cases included invented party names, fake docket numbers, and entirely fabricated judicial opinions.
When the court questioned the citations, Schwartz went back to ChatGPT and asked if the cases were real. The tool confirmed they were and claimed they could be found on Westlaw and LexisNexis. They could not, because they did not exist.
Schwartz, his co-counsel, and their firm were jointly fined $5,000 and required to notify the judges falsely identified as authors of the fabricated opinions. The case - Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023) - became the most widely cited example of AI misuse in legal practice.
The lesson is not "do not use AI." The lesson is: AI output is a first draft, never a final product. Every citation must be verified. Every legal conclusion must be checked against actual authority.
What "Competent Supervision" of AI Actually Looks Like
Treating AI like a junior associate who needs their work checked is a popular analogy, but it undersells the problem. A junior associate went to law school, passed the bar, and understands that making up case law is wrong. An AI tool has none of those constraints.
Competent supervision means:
- Verify every factual claim. Citations, dates, holdings, statutory references - check them all against primary sources.
- Review for hallucinated authority. AI does not distinguish between real and fabricated cases. You must.
- Evaluate legal reasoning independently. AI can produce analysis that sounds correct but applies the wrong standard, misidentifies the jurisdiction, or mischaracterizes a holding.
- Document your review process. If your AI use is ever questioned, you want a record showing you treated AI output as a starting point, not a finished product. A minimal logging sketch follows this list.
- Assess the tool before using it. Understand data handling, retention policies, and security practices. Do this once per tool, not once per use.
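For the documentation point, a simple contemporaneous log is enough. The sketch below is purely illustrative - the `ReviewRecord` fields and file name are hypothetical, not drawn from any bar guidance - but it shows the kind of record that demonstrates AI output was treated as a starting point rather than a finished product.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ReviewRecord:
    """One entry documenting attorney review of AI-assisted work product.
    Hypothetical structure for illustration; adapt to your firm's policy."""
    matter_id: str              # internal matter number (never client PII)
    tool: str                   # which AI tool produced the draft
    task: str                   # what the tool was asked to do
    citations_verified: bool    # every citation checked against Westlaw/Lexis
    reasoning_reviewed: bool    # legal analysis independently evaluated
    reviewer: str               # attorney responsible for the output
    notes: str = ""
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example entry: a first draft of a routine motion, reviewed before filing.
record = ReviewRecord(
    matter_id="2024-0137",
    tool="self-hosted model (internal)",
    task="first draft of motion to extend time",
    citations_verified=True,
    reasoning_reviewed=True,
    reviewer="A. Attorney",
    notes="Two citations replaced after verification; standard of review corrected.",
)

# Append to a simple JSON-lines log the firm can produce if AI use is questioned.
with open("ai_review_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

The format matters far less than the habit: a dated record, per matter, showing that citations and reasoning were actually checked.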
Where AI Works Well in Legal Practice
Despite the risks, AI can be genuinely useful for legal work when deployed properly. The key is matching the tool to tasks where its strengths matter and its weaknesses can be managed.
Strong use cases:
- Summarizing lengthy documents, depositions, or discovery materials
- Generating first drafts of standard documents (NDAs, engagement letters, routine motions)
- Legal research assistance (with mandatory verification of all citations)
- Reviewing contracts for specific clause types or missing provisions
- Translating complex legal language into client-friendly summaries
Use cases requiring extra caution:
- Documents involving privileged communications
- Client-specific litigation strategy
- Court filings of any kind
- Matters involving trade secrets or highly sensitive business information
- Work product that will be submitted to regulators
AI Use Case Risk Assessment for Law Firms
| AI Use Case | Risk Level | Recommended Approach | Self-Hosted vs. Cloud |
|---|---|---|---|
| Document summarization (internal use) | Low | Review summary for accuracy; confirm key details against source | Either, with data handling review |
| First drafts of standard documents | Low-Medium | Use templates; review all terms; never send without attorney sign-off | Either, with data handling review |
| Legal research assistance | Medium | Verify every citation against Westlaw/Lexis; treat output as leads, not authority | Either, with data handling review |
| Contract review and analysis | Medium | Use as a second set of eyes; attorney review remains primary | Self-hosted preferred for client contracts |
| Client-specific strategy memos | High | Limit AI to structure/outline; substantive analysis must be attorney-driven | Self-hosted strongly recommended |
| Privileged communications | High | Avoid cloud AI entirely; privilege waiver risk is unresolved | Self-hosted only |
| Court filings and briefs | Very High | AI for outline/structure only; all substance independently verified | Self-hosted strongly recommended |
| Discovery and e-discovery processing | High | Use purpose-built tools with audit trails; validate sampling accuracy | Self-hosted required for sensitive matters |
How Self-Hosted AI Changes the Calculus
Most of the ethics concerns around AI in legal practice trace back to a single problem: client data leaving the firm's control. Cloud-based tools create confidentiality risks, training data risks, and privilege questions that are difficult to resolve within existing ethics frameworks.
Self-hosted AI eliminates these issues at the infrastructure level. When the model runs on your own servers - or on an air-gapped system within your network - client data never leaves your environment. There is no third-party retention, no training pipeline, no terms of service that give a tech company rights over your clients' information.
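To make that concrete, here is a minimal sketch of what a self-hosted workflow can look like. It assumes a locally hosted model runtime such as Ollama listening on its default localhost port; the model name and the summarization task are placeholders, not a recommendation of any particular stack.

```python
import json
import urllib.request

# The request goes to a model server running on the firm's own machine
# (here, a local Ollama instance on its default port), not to a
# third-party cloud API. No external network hop, no provider retention,
# no training pipeline.
LOCAL_URL = "http://localhost:11434/api/generate"

def summarize_locally(document_text: str, model: str = "llama3") -> str:
    """Ask the locally hosted model for a summary. Assumes `ollama serve`
    is running and the named model has already been pulled."""
    payload = json.dumps({
        "model": model,
        "prompt": f"Summarize the following document:\n\n{document_text}",
        "stream": False,
    }).encode("utf-8")
    request = urllib.request.Request(
        LOCAL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(summarize_locally("[contents of an internal memo]"))
```

Whatever runtime a firm chooses, the property that matters for Rule 1.6 is the same: the request never crosses the firm's network boundary.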
Platforms like Compass AI are built specifically for this use case: deploying capable AI models in environments where data sovereignty is non-negotiable. For law firms handling sensitive client matters, self-hosted deployment turns the ethics analysis from "can we mitigate these risks?" into "these risks do not exist in our setup."
That does not eliminate the need for supervision. Hallucination risk exists regardless of where the model runs. But it does remove the entire category of confidentiality and data security concerns that make cloud AI a difficult fit for legal work.
Building an AI Policy for Your Firm
Every law firm using AI - or whose attorneys might use AI on personal accounts - needs a written policy; a short sketch of how parts of one can be encoded for automated checks follows the list. At minimum, the policy should address:
- Approved tools and platforms: Which AI tools are permitted, and which are prohibited for client work
- Data classification rules: What types of client information can and cannot be processed through AI
- Verification requirements: Mandatory steps before AI-assisted work product is finalized
- Client disclosure obligations: When and how clients are informed about AI use in their matters
- Billing practices: How AI-assisted work is billed (ABA Formal Opinion 512 addresses this directly)
- Training requirements: Ongoing education for all attorneys and staff on approved AI use
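Parts of a policy like this can be enforced mechanically rather than left to memory. The sketch below is hypothetical - the tool names and data classifications are placeholders - and illustrates one way a firm might encode its approved-tools and data-classification rules for automated checks:

```python
# Hypothetical policy-as-code sketch: which data classifications may be
# processed by which AI tools. Tool names and classes are illustrative only.
POLICY = {
    "self_hosted_internal": {"public", "internal", "confidential", "privileged"},
    "cloud_enterprise":     {"public", "internal"},
    "cloud_consumer":       set(),  # prohibited for all client work
}

def is_permitted(tool: str, data_class: str) -> bool:
    """Return True if firm policy allows this data class in this tool.
    Unknown tools are treated as prohibited (fail closed)."""
    return data_class in POLICY.get(tool, set())

# Examples: privileged material is allowed only on the self-hosted system.
assert is_permitted("self_hosted_internal", "privileged")
assert not is_permitted("cloud_enterprise", "privileged")
assert not is_permitted("unknown_tool", "public")  # fail closed
```

Failing closed on unknown tools mirrors the policy goal: anything not explicitly approved is prohibited for client work.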
Frequently Asked Questions
Do I need client consent before using AI on their documents?
ABA Formal Opinion 512 recommends obtaining informed consent before processing client information through generative AI tools, particularly cloud-based ones. Boilerplate language in engagement letters is not sufficient. The consent should explain what tools you use, how client data is handled, and what safeguards are in place. Some state bars may require it; nearly all that have issued guidance recommend it.
Can I use ChatGPT for legal research?
You can, but treat every result as an unverified lead. Never cite a case, statute, or regulation from AI output without confirming it exists in an authoritative legal database like Westlaw or LexisNexis. The Mata v. Avianca sanctions demonstrate exactly what happens when attorneys skip verification. AI is useful for generating research starting points, not for producing citable authority.
Does using a cloud AI tool waive attorney-client privilege?
This question is not fully resolved in most jurisdictions. Sharing privileged information with a third-party AI provider could be argued as a voluntary disclosure that waives privilege. Until courts provide clearer guidance, the safest approach is to avoid processing privileged communications through any cloud-based AI tool. Self-hosted AI running within your firm's infrastructure avoids this issue entirely.
What is the difference between self-hosted AI and cloud AI for legal ethics purposes?
Cloud AI sends data to third-party servers governed by the provider's terms of service. Self-hosted AI runs entirely within your firm's infrastructure, so client data never leaves your control. From an ethics perspective, self-hosted deployment eliminates the confidentiality, data retention, and privilege waiver concerns that make cloud AI problematic for sensitive legal work. The supervision and verification obligations remain the same regardless of deployment model.
Are courts requiring lawyers to disclose AI use?
A growing number of courts are adopting standing orders or local rules requiring disclosure of AI use in filings. Federal judges in districts across the country have implemented such requirements since 2023. Check your jurisdiction's local rules and any standing orders from your assigned judge. Even where disclosure is not yet required, proactive transparency is the safer approach given the current trajectory of judicial attention to AI use.