THE ETHICAL IMPERATIVE OF AI COMPETENCE IN LEGAL PRACTICE
The use and integration of large language model generative AI (GAI), such as ChatGPT and Lexis+ AI, into the legal profession has sparked significant debate over its ethical implications. Concerns include algorithmic bias, hallucinations, inadvertent disclosure of client confidences, maintaining independent judgment, and the possible necessity of disclosure.1 While much attention has focused on whether using AI might be unethical, an equally legitimate question remains underexamined: Could failing to adopt and properly use AI in legal practice itself constitute a breach of a lawyer’s or judge’s ethical obligations?
WHAT IS LARGE LANGUAGE MODEL GENERATIVE AI?
Large machine learning models, such as ChatGPT, operate on deep neural networks (DNNs), architectures loosely modeled on the layered structure of biological neural networks.2 These networks consist of interconnected layers of nodes, or “neurons,” each processing input data and passing it to the next layer.3 The depth of these networks, ranging from a few to hundreds of layers, allows them to learn highly complex representations of data.4 Training DNNs involves feeding them large datasets and refining their connections through supervised and unsupervised learning, reinforcement learning, and evolutionary computation, enabling them to minimize errors and improve predictions.5
ChatGPT, as a generative pre-trained transformer (GPT), exemplifies this advanced architecture. Put simply, GAI operates as an advanced word prediction system.6 It leverages statistical patterns and contextual relationships learned from vast datasets to predict the most likely sequence of words in response to a given prompt.7 This prediction process involves complex computations within a transformer architecture, allowing the model to generate outputs that appear contextually coherent and humanlike.8 While its “knowledge” is derived from patterns in its training data, it lacks true understanding or reasoning, functioning instead as a sophisticated synthesis of probabilities.9
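The word-prediction idea described above can be made concrete with a deliberately tiny sketch. The following toy model counts which word follows which in a miniature corpus and predicts the most frequent successor; it is only an analogy for the statistical principle, not a transformer, and the sample corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): predict the next word from
# bigram frequencies in a tiny corpus -- the same statistical idea
# that transformer models apply at vastly greater scale and depth.
corpus = "the court held that the court found the motion moot".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # tally each observed word pair

def predict_next(word):
    """Return the most frequently observed word following `word`."""
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "court" -- it follows "the" twice, "motion" once
```

A model like GPT does the same kind of prediction, but over subword tokens, conditioned on the entire preceding context rather than a single word, and with probabilities learned by a neural network rather than raw counts.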
TECHNOLOGICAL COMPETENCE UNDER MICHIGAN’S RULES OF PROFESSIONAL CONDUCT
Of course, Michigan lawyers are required to provide competent representation.10 However, this competence encompasses more than zealous advocacy combined with knowledge of the relevant laws, their application, proper procedures, and the like. The commentary to this rule provides that lawyers must also maintain technological proficiency to ensure they have the knowledge and skills needed to competently represent clients in specific matters.11 Furthermore, State Bar of Michigan Ethics Opinion JI-155 provides that “Judicial officers must maintain competence with advancing technology, including but not limited to artificial intelligence.”12
The 2025 State Bar of Michigan AI Report13 extends this duty to lawyers, emphasizing that the duty of competence “requires continuing study and education, including the knowledge and skills regarding existing and developing technology that are reasonably necessary to provide competent representation,” expressly including artificial intelligence.14 It further provides that judges and lawyers alike “have a duty to understand technology, which includes competence in artificial intelligence, generative artificial intelligence, and future technologies of which we are not yet aware.”15 In this way, Michigan aligns the traditional ethical duty of competence under MRPC 1.1 with the modern realities of legal practice, recognizing that mastery of emerging technologies is now essential to competent and responsible representation. Thus, legal professionals must familiarize themselves with the foundational mechanics of GAI, as discussed briefly in the preceding section. This knowledge helps lawyers critically evaluate the reliability and potential biases of GAI outputs.
Furthermore, technological competence includes mastering advanced utilization strategies, such as prompt engineering, which refines AI-generated results and reduces the risk of inaccuracies or “hallucinations.”16 By combining technical understanding with practical application, lawyers can responsibly leverage GAI to enhance their practice, ensuring they meet their ethical obligations of competence and diligence in an increasingly digital landscape.
The State Bar of Michigan’s AI Report also indicates that technological competencies are linked with the duty of reasonable fees under MRPC 1.5, observing that “failing to use AI technology that materially reduces the cost of providing legal services arguably could result in a lawyer charging an unreasonable fee to a client.”17 Thus, the duty of competence is not merely about capability but about ethical efficiency, using available tools to provide better, more economical service.
As GAI advances toward becoming an integral tool in legal research, drafting, analysis, and even trial litigation, both lawyers and judges must understand its implications to uphold the integrity of the justice system. Neglecting competency relative to GAI could lead to inefficiencies and subpar client service, potentially breaching a lawyer’s ethical duties.
Conversely, overreliance on AI without adequate verification may violate duties of diligence, candor, and supervision under MRPC 1.3, 3.3, and 5.3. The State Bar of Michigan’s AI Report concludes that competent representation in the AI age “includes educating oneself, setting expectations with clients, and continuous monitoring.”18
BROADER ETHICAL OBLIGATIONS: ABA AND OTHER STATES
In July 2024, the American Bar Association issued Formal Opinion 512, its first comprehensive ethics opinion addressing generative artificial intelligence in legal practice.19 The Opinion emphasizes that the existing duties of competence, confidentiality, communication, supervision, and reasonable fees under the Model Rules of Professional Conduct fully apply when lawyers use AI-powered tools.20 It cautions that lawyers must understand both the benefits and risks of these technologies and must take “reasonable steps” to verify the accuracy of AI-generated work before relying upon or sharing it.21 This national guidance aligns closely with the State Bar of Michigan’s AI Report, which likewise stresses that competent representation in the AI era requires “educating oneself, setting expectations with clients, and continuous monitoring.”22
Both authorities make clear that lawyers cannot delegate professional judgment to a machine: The lawyer remains personally responsible for the work product and representations made to a client or tribunal, even when assisted by generative systems. Together, these documents signal a maturation of professional standards from general awareness of technological change to a concrete ethical framework for responsible AI integration, placing accountability squarely on the human professional rather than the technology itself.
These obligations to learn about and ethically use advancing technologies in one’s practice of law, including GAI, are not unique to Michigan. A LexisNexis survey suggests that 40 states and the District of Columbia have formally adopted the American Bar Association’s Model Rule 1.1, Comment 8, or its equivalent.23 This rule requires lawyers to stay informed about technological changes and the benefits and risks associated with relevant technologies, including tools used in litigation and client communication.
Many states have adopted Comment 8 verbatim, including Arkansas, Connecticut, Delaware, Illinois, and Wisconsin.24 Delaware’s rule states that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology,” and emphasizes that “deliberate ignorance of technology is inexcusable.”25 Attorneys are warned that if they cannot master suitable technology, they must associate with tech-savvy lawyers or consultants who can ensure competence in the technological aspects of their practice.26
Florida goes further, requiring attorneys to complete three hours of continuing legal education in technology and mandating that they safeguard client confidentiality when using these tools.27 Florida also acknowledges the value of outside expertise, stating that “competent representation may also involve the association or retention of a non-lawyer advisor of established technological competence in the field in question.”28
Some states have taken a more cautious approach. New Hampshire amended its comments to note that lawyers should “keep reasonably abreast of readily determinable benefits and risks associated with applications of technology used by the lawyer,” rather than imposing a broad requirement.29 This adjustment acknowledges disparities in resources and capabilities among practitioners.
The widespread adoption of technological competence rules underscores the growing expectation for lawyers to integrate advanced tools like GAI into their practice responsibly. States like Florida and Michigan provide clear guidance on safeguarding confidentiality and ensuring technological proficiency. Generative AI, with its reliance on complex transformer neural networks, requires lawyers to understand not only how to use such tools effectively but also how to mitigate risks associated with their application.
Moreover, since GAI is rapidly evolving, ethical obligations may soon require law firms to take proactive steps, such as conducting vendor audits of AI systems, ensuring transparency of AI decision-making, and documenting human oversight of AI output.30 The LexisNexis survey highlights the importance of prompt engineering and rigorous oversight when utilizing GAI, particularly to align with ethical obligations like client confidentiality and accuracy. Lawyers who fail to engage with these technologies responsibly risk falling short of the evolving standards of competence demanded by the profession.
TECHNOLOGICAL COMPETENCE AND THE ART OF PROMPT ENGINEERING
Technological competence in using GAI goes beyond the skills required for familiar tools like Google or Westlaw. While these platforms rely on relatively straightforward input, GAI demands a more sophisticated approach to interaction, one that includes understanding how to guide the technology effectively through carefully designed prompts. This skill, known as prompt engineering, is critical for ensuring that GAI delivers precise and useful outputs tailored to the complexities of legal practice.31
A prompt is essentially a set of natural language instructions that programs the AI to perform a specific task. Unlike traditional coding, which relies on symbols and syntax, prompt engineering allows users to guide AI behavior using plain language. For instance, a naive prompt32 for a legal task might be: “Explain the duty of technological competence for lawyers.” While this could produce a general response, it may lack depth or specificity.
An engineered prompt refines the instructions to achieve more targeted results: “Summarize the duty of technological competence for lawyers under the ABA Model Rules, including Rule 1.1 and its commentary, with specific emphasis on how this applies to generative AI.” This version specifies the context (ABA Model Rules) and sets clear expectations for the depth and focus of the response, reducing the likelihood of irrelevant or superficial results.
Beyond basic prompts, more advanced techniques offer even greater control and versatility. Persona prompts, for example, instruct the AI to adopt a specific perspective, such as that of a legal scholar or an experienced litigator.33 Flipped interaction prompts restructure the AI’s role, asking it to critique or refine a user’s input.34 Cognitive verifier prompts add another layer of rigor by requiring the AI to explain its reasoning or justify its conclusions.35 Similarly, fact-check prompts compel the AI to identify and verify the sources underlying its responses, thereby enhancing transparency and reducing the risk of hallucination or unsupported claims.36 Ultimately, as lawyers refine their skill, efficiency, and strategic awareness in prompting, the precision and reliability of AI-generated legal output will improve in direct proportion, transforming prompting itself into a form of professional competency.
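The difference between the naive and engineered prompts above can be sketched in code. The helper below assembles a prompt from a task plus optional persona, authority, and verification components; the function and its field names are purely illustrative conventions for this article, not any vendor’s API.

```python
# Hedged sketch: composing an "engineered" prompt from reusable parts.
# The helper and its parameters are illustrative, not a real tool's API.
def build_prompt(task, authority=None, persona=None, verify=False):
    parts = []
    if persona:
        # Persona prompt: instruct the model to adopt a perspective.
        parts.append(f"Act as {persona}.")
    parts.append(task)
    if authority:
        # Anchor the response to a specific body of law.
        parts.append(f"Ground the answer in {authority} and cite specific provisions.")
    if verify:
        # Cognitive verifier / fact-check component.
        parts.append("Explain your reasoning and identify the sources supporting each claim.")
    return " ".join(parts)

naive = build_prompt("Explain the duty of technological competence for lawyers.")
engineered = build_prompt(
    "Summarize the duty of technological competence for lawyers.",
    authority="ABA Model Rule 1.1 and its commentary",
    persona="a legal ethics scholar",
    verify=True,
)
```

The design point is that an engineered prompt is not a longer sentence but a structured set of constraints: context, role, and verification requirements, each of which narrows the space of plausible outputs.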
These approaches demonstrate the breadth of possibilities within prompt engineering, providing lawyers with powerful tools to tailor AI outputs to meet the demands of their practice. Perhaps even more importantly, by crafting well-designed prompts, attorneys can set guardrails that guide AI to produce responses that are accurate, relevant, and less susceptible to hallucinations or bias.37
RISKS ASSOCIATED WITH AI IN LEGAL PRACTICE – AND HOW TO AVOID THEM
The use of AI in legal practice offers significant potential for creativity, efficiency, and precision but also introduces ethical challenges that must be responsibly managed. This management must occur at both the individual and the supervisory level. For example, Florida requires partners and supervisory-level attorneys to establish policies and procedures that protect the firm’s use of technologies, such as generative artificial intelligence, while ensuring that less-experienced lawyers are properly supervised in their application of these advanced tools.38
Perhaps the foremost ethical concern in using GAI is the protection of client confidentiality. Cloud-based AI platforms pose significant risks, as they can expose sensitive client information to breaches, misuse, and unauthorized access. Compounding this issue is the troubling potential for these platforms to monitor and monetize user input, further threatening the confidentiality that lawyers are ethically bound to safeguard.39 The Florida Bar addressed this issue in a recent ethics opinion, emphasizing the importance of secure, private AI systems and informed client consent.40 Recent reporting highlights how users of AI chatbots have inadvertently exposed deeply personal data, which may then be leveraged for targeted advertising and surveillance.41 Even more alarming is the use of AI-shared information in generating or supporting criminal suspicion, investigation, and prosecution, demonstrating that data once presumed private can reemerge as evidence.42 In this environment, lawyers must exercise heightened vigilance, ensuring that every interaction with AI tools preserves the sanctity of privileged communications and prevents client data from becoming a digital breadcrumb trail available to third parties, or worse, to the state itself.
One way to address the issue of client confidentiality is to create, maintain, and use an “on-premises” local GAI tool.43 This is a GAI system or software that enables users to create outputs, such as text, images, music, or other data, using GAI models on their local hardware instead of relying on cloud-based services.44 These tools provide the functionality of generative AI while prioritizing privacy, customization, and often reduced latency, since data processing happens locally.
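To make the on-premises idea concrete, the sketch below shows the shape of a request a lawyer’s machine might send to a locally hosted model server (an Ollama-style REST payload is shown as one example; field names vary by tool, and the model name is hypothetical). The actual network call is deliberately omitted so the sketch stays self-contained; the point is that the prompt is addressed to localhost and never leaves the firm’s hardware.

```python
import json

# Hedged sketch: a request payload for a locally hosted model server.
# Shown in the style of Ollama's /api/generate endpoint; other local
# tools use different shapes. Model name and prompt are placeholders.
payload = {
    "model": "llama3",   # a model whose weights live on local disk
    "prompt": "Summarize the key deadlines in the attached scheduling order.",
    "stream": False,     # ask for a single complete response
}
body = json.dumps(payload)

# A real deployment would POST `body` to http://localhost:11434/api/generate.
# Because the endpoint is local, no client data transits a cloud provider.
```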
THE BROADER IMPERATIVE TO EMBRACE GAI RESPONSIBLY
As the ABA and various state bar associations continue to grapple with how to integrate cutting-edge technological competence into their ethical frameworks, the imperative for lawyers to learn and responsibly utilize GAI intensifies. Early adopters who master the variety of GAI tools available to the legal profession are likely to gain a competitive edge, delivering more effective and efficient client service. Conversely, lawyers who fail to appropriately engage with these advancements risk falling behind, possibly jeopardizing their professional standing or even breaching their ethical obligations.
Generative AI represents a transformative force in the legal profession, akin to the advent of the internet decades ago. Integrating GAI into one’s legal practice requires diligent training and careful navigation of complex ethical considerations. However, the effort is well worth it, as the potential benefits of AI will pay significant dividends for the lawyer and client alike. For Michigan criminal defense lawyers, and the profession as a whole, the path forward lies in striking a balance: leveraging GAI to enhance practice while upholding the principles of competence, confidentiality, diligence, and integrity that define our profession.