
Professionalism as guardrails for GenAI research

 

by Joe Lawson   |   Michigan Bar Journal

Libraries & Legal Research

As use of generative artificial intelligence (“Gen AI”) in the legal profession expands, its misuse in legal research continues to grab headlines. Examples of unprofessional conduct now abound, from lawyers who claim they were hacked rather than tell a court that they used Gen AI to lawyers who, in 2025, still claim to be unaware that Gen AI can hallucinate legal materials.1 We have reached a point when all lawyers should know that uncritical reliance on Gen AI for legal research falls short of general standards of professionalism. One judge described it in the following words:

At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud.2

Nevertheless, missteps with this evolving technology continue, and with each new sanction, the contours of what constitutes professional use of Gen AI for legal research become clearer. Additionally, bar associations have issued guidance to help attorneys find their way. The common thread that has emerged throughout the guidance and sanctions orders is that the rigors of existing ethical and professional standards provide the guardrails for lawyers when researching with Gen AI.

COMPETING PARADIGMS

When the epidemic of lawyers submitting Gen AI research to courts made headlines in 2023, it was far from clear how the bench and bar should respond. The landmark sanctions case that drew nationwide attention to hallucinated citations in court documents was Mata v. Avianca, Inc.3 Attorney Steven Schwartz provided research services for the plaintiff. When pressed by opposing counsel and the court, Schwartz produced excerpts from the cited cases, and at a subsequent hearing he admitted that the fake citations and excerpts had been produced by ChatGPT. Schwartz had not consulted his firm’s Fastcase subscription, which covered only state materials even though he was researching a federal matter, and he had no access to another database. The court levied a $5,000 sanction and approved the firm’s plan to expand its “Fastcase subscription and CLE programming.”4 Schwartz was sanctioned under Federal Rule of Civil Procedure 11, which requires all claims to be “warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law.”5 After a multi-point analysis, the court found that this standard was not met and that “relying on fake opinions is an abuse of the adversary system.”6

Many courts have followed this path. One researcher, Damien Charlotin, is keeping track. His database titled AI Hallucination Cases tracks the worldwide phenomenon of attorneys and parties being sanctioned for submitting unvetted Gen AI outputs to courts.7 Within the dataset, Michigan fares quite well, with only one reported case involving a pro se plaintiff. If a case following the egregious fact pattern of the Schwartz case appeared in a Michigan state court, Michigan Court Rule 1.109(E) would provide a similar mechanism for sanctioning attorneys who sign and file documents that include claims not “warranted by existing law or a good-faith argument for the extension, modification, or reversal of existing law.”8

Following the Schwartz sanctions case, courts recognized the limits of sanctions alone to solve the problem of attorney overreliance on Gen AI legal research. Many courts, including the U.S. Court of Appeals for the Fifth Circuit, explored requiring a certificate for each filing detailing whether the attorney had reviewed all Gen AI outputs used.

Although some trial courts adopted the rules, many did not. The Fifth Circuit ultimately did not impose the rule after a public comment period in which attorneys and scholars argued that existing court and ethical rules already required attorneys to check their research and take responsibility for Gen AI outputs presented as attorney work product.9 Not only did this decision end the prospect of a standalone Gen AI certificate in the Fifth Circuit, but it also likely dissuaded many other courts from pursuing a similar solution.10 Nevertheless, some courts continue to adopt ad hoc certification requirements, so it remains a possibility to watch for going forward.11

PROFESSIONALISM WINS OUT

Despite the competing paradigms, the bench and bar seem to have come to the consensus that prevailing professionalism standards already require such things as researching the law competently and not misleading the court, even if these missteps are accomplished with new technologies. In July 2024, the American Bar Association issued Formal Opinion 512, in which duties of competence and candor to the tribunal were identified as sources of lawyers’ professional duty to check all outputs produced by Gen AI prior to using any research results in client matters or court filings.12 Additionally, commentators across the country have regularly pointed to technology competence as part of professional competence when noting that attorneys need to understand the positives and pitfalls of using Gen AI for legal research.13

The State Bar of Michigan expanded on this view in its June 2025 report, Transforming the Legal Landscape in the Age of AI.14 The report identifies legal research as an area where Gen AI can increase productivity, but not without awareness of the hallucinated citations, misinterpretations, and other errors that attorneys must catch during review. In addition to the duty of competence, the duty of diligence prompts Michigan attorneys to check all citations and sources used in their work product. The report specifically points to the Schwartz sanctions case as an example of a failure of diligence when Gen AI was used for legal research.15 Further, the duty of candor to the tribunal can be violated when “overreliance on AI [results] in false statements of fact or law if not checked and reviewed prior to submission to the tribunal.”16 Based on this language, the lawyer remains responsible for the document upon filing it, and an argument that hallucinated content was generated by AI should not allow an attorney to shirk professional responsibility.

SO WHAT IS REQUIRED?

Several lessons can be gleaned from Gen AI sanctions cases. First, the Schwartz sanctions case teaches that access to Gen AI does not supplant the need for attorneys to have access to materials for the relevant jurisdiction (e.g., federal materials when researching and citing cases in federal court). Second, an attorney should read all materials cited. Of course, this helps avoid citing nonexistent cases in one’s own work product, but it may also be expected of opposing counsel. In Nolan v. Land of the Free, a California appellate court sanctioned an attorney who cited hallucinated cases but also denied attorney fees to opposing counsel because the court, not opposing counsel, found the bogus citations.17 Finally, misleading the court to cover up Gen AI use is never a good idea. In a recent New York civil case, an attorney attempted to blame hackers for the “incoherent document” he filed. When the court discovered it was produced by Gen AI, it not only sanctioned the attorney but also reported his ethical violations so that his fitness to practice could be investigated.18

FURTHER READING

Gen AI for legal research is evolving quickly, and the ethical responsibility to stay up to date is growing. The University of Michigan Law Library has assembled a guide on Generative AI for the legal community. It includes sections on ethical implications, tools and resources, news, and more.19 Add it to your current awareness resources to stay professional.


The views expressed in “Libraries & Legal Research,” as well as other expressions of opinions published in the Bar Journal from time to time, do not necessarily state or reflect the official position of the State Bar of Michigan, nor does their publication constitute an endorsement of the views expressed. They are the opinions of the authors and are intended not to end discussion, but to stimulate thought about significant issues affecting the legal profession, the making of laws, and the adjudication of disputes.


ENDNOTES

1. Belanger, You won’t believe the excuses lawyers have after getting busted for using AI, Ars Technica (Nov 11, 2025) https://arstechnica.com/tech-policy/2025/11/lawyers-keep-giving-weak-sauce-excuses-for-fake-ai-citations-in-court-docs/ (all websites visited March 23, 2026).

2. In re Martin, 670 BR 636, 647 (ND Ill, 2025).

3. Mata v Avianca, Inc, 678 F Supp 3d 443 (SD NY, 2023).

4. Id.

5. FRCP 11(b)(2).

6. Mata, supra n 3 at 461.

7. Charlotin, AI Hallucination Cases https://www.damiencharlotin.com/hallucinations.

8. MCR 1.109(E)(5) et seq.

9. US Fifth Circuit Decides Against Its Proposed Rule Amendment on AI Use in Legal Filings, King & Spalding (June 14, 2024) https://www.kslaw.com/news-and-insights/us-fifth-circuit-decides-against-its-proposed-rule-amendment-on-ai-use-in-legal-filings.

10. Martinson, Law Scholars Hope 5th Circuit Decision Deters More AI Rules, LAW360 Pulse (June 14, 2024) https://www.law360.com/pulse/articles/1847796.

11. Ash, 11th and 17th Circuits Order Disclosure, Certification of AI Use in Court Filings, Florida Bar News (Feb 09, 2026) https://www.floridabar.org/the-florida-bar-news/11th-and-17th-circuits-order-disclosure-certification-of-ai-use-in-court-filings/.

12. ABA issues first ethics guidance on a lawyer’s use of AI tools, American Bar Association (July 29, 2024) https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/.

13. See, e.g., Brown, Generative Artificial Intelligence: Legal Ethics Issues, 104 Mich B J 48 (Jan 2025) https://www.michbar.org/journal/Details/Generative-artificial-intelligence-Legal-ethics-issues?ArticleID=5022.

14. Transforming the Legal Landscape in the Age of AI, State Bar of Michigan (June 2025) https://www.michbar.org/Portals/0/publications/pdfs/Age_of_AI_Report_June25.pdf.

15. Id. at 28.

16. Id. at 33.

17. Ambrogi, A New Wrinkle in AI Hallucination Cases: Lawyers Dinged for Failing to Detect Opponent’s Fake Citations, LawSites (Sept 16, 2025) https://www.lawnext.com/2025/09/a-new-wrinkle-in-ai-hallucination-cases-lawyers-dinged-for-failing-to-detect-opponents-fake-citations.html.

18. Belanger, supra n 1.

19. Generative AI, University of Michigan Law Library https://libguides.law.umich.edu/c.php?g=1456750&p=10830769.