Machines in the Courthouse: Artificial Intelligence as Research Associate, Co-Counsel, and Law Clerk

“By presenting to the court a pleading, written motion, or other paper — whether by signing, filing, submitting, or later advocating it — an attorney or unrepresented party certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances . . . the claims, defenses, and other legal contentions are warranted by existing law . . . [and] the factual contentions have evidentiary support . . .”
Federal Rules of Civil Procedure, Rule 11(b)

Introduction

Imagine this: A New York lawyer is in federal court on a case involving an international treaty and a bankruptcy question, but his 25+ years of practice are exclusively in New York state law. The other side files a motion to dismiss, and an opposition is due soon. Our lawyer, inexperienced in the subject area, reaches for the latest and greatest computer-assisted research tool: the artificial intelligence chatbot ChatGPT. He prepares the opposition, hands it to his partner to review, and they file.

Two weeks later, opposing counsel replies: “the undersigned has been unable to locate most of the case law cited.” Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 450 (S.D.N.Y. 2023). It turns out that at least six of the cases cited in the opposition are fake. Made up. Nonexistent. Worse, the passages quoted from these cases read as gibberish. When questioned, the lawyers deflect, delay, and fail to own their mistake. They are ordered to pay a $5,000 fine and to mail a letter to each real judge falsely identified as the author of one of the fabricated opinions, sort of the judicial equivalent of making children apologize for telling tall tales on the playground.

Embarrassing, yes. These lawyers (real lawyers, in the real case Mata v. Avianca) were sanctioned for misrepresenting case law, lying to the court, and acting in bad faith throughout. Their offense was not merely misusing a new technology. But surely many experienced lawyers read that part of the case and thought to themselves, “There but for the grace of God go I.”

Use of artificial intelligence is growing in the legal profession. Can lawyers rely on these tools? Should we make AI our research associate, co-counsel, or clerk? When does computer assistance become wholesale delegation to a machine, and a dereliction of ethical obligations?

AI-Assisted vs. AI-Generated

Computer-assisted research, writing, and editing tools fall along a spectrum. At one end are traditional aids like the spell- and grammar-check functions baked into most word processors. These are ubiquitous and uncontroversial: no lawyer has ever been sanctioned for letting Word autocorrect “teh” to “the.” Before the AI chatbot boom, you could fairly call these features a form of “artificial intelligence,” and that label remains technically accurate today, though a typical user now reserves the phrase for services with far more advanced capabilities.

More recent tools like Grammarly, Microsoft Editor, and Google’s Smart Compose go further, suggesting sentence-level edits, stylistic and tone improvements, and even completing sentences as you type. These tools support lawyers by catching passive voice, offering more concise phrasing, and proposing synonyms. Importantly, though, they do not introduce new ideas or legal reasoning. They help a lawyer write and polish their own work.

We might call these “AI-Assisted” writing tools. Like a copyeditor, they help refine what the human has already written. They are a natural evolution of software most lawyers already use without a second thought.

Generative AI services like ChatGPT, Claude, and Gemini move far beyond spelling and style. These models can produce complete documents from scratch: contracts, memos, motions, and briefs. They can summarize cases and draft arguments, complete with source citations. That power raises a new question: is a lawyer who turns to such a service merely using a smarter spellchecker and a more thorough writing assistant, or attempting to outsource the thinking altogether?

The risk is not just hallucinated cases. It’s that lawyers, under pressure to move quickly and bill efficiently, will delegate their entire role to AI services. When an AI service becomes a lawyer’s wholesale ghostwriter, ethical and professional boundaries start to blur.

Ethical Obligations

An AI service cannot exercise judgment. That ability, and the essential role it anchors, remains with the human lawyer. Lawyers who use these tools must stay firmly in control, guided by three core responsibilities: verify, disclose, and review.

Verify:

A lawyer must confirm every assertion and citation generated by an AI service. This includes independently checking each and every case, statute, and quotation. Shirking this responsibility has consistently backfired, because eager-to-please AI tools will invent sources to support virtually any claim. A lawyer who fails to catch a fabricated source, and is caught in turn, risks sanctions and reputation alike.

Disclose:

When AI-generated content is used with little or no modification, lawyers should clearly disclose its use. Transparency helps ensure accountability. However, if the lawyer substantially edits or critically reviews the content, such that the final work reflects the lawyer’s own professional judgment and the lawyer adopts full responsibility for it, disclosure may not be required. The threshold is the meaningful application of human judgment.

Review:

Final work product must always be reviewed. The lawyer, not the machine, must make the decisions and draw the conclusions. AI can assist with drafting and propose ideas, it can correct language and suggest citations, and it can even generate content, but the final capstone must be placed by a human hand. Any artificially generated content must be confirmed by human judgment: keep the human in the loop.

Conclusion

An AI service is not the same as a human research associate or co-counsel. It is a software tool. As with any tool, the researcher, lawyer, or judge wielding it is ultimately responsible for what it produces. AI tools are exceptionally good at generating material, but their output should be approached critically: interrogate it, do not blindly adopt it, and always apply human judgment.