Redefining Competence: The Ethical Use of AI Tools in the Modern Law Firm  

The legal profession stands on the edge of a profound technological shift. While artificial intelligence has long been a part of litigation support and contract analytics, the rise of generative AI (GAI) introduces a new paradigm. These tools, capable of producing human-like text and performing complex reasoning, present unprecedented opportunities for legal innovation. They also demand a careful recalibration of our ethical compass.

On July 29, 2024, the American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, offering essential guidance on the use of GAI by lawyers. The opinion does not reject these technologies; far from it. It recognizes the significant value GAI can offer in enhancing the efficiency and quality of legal services. But it also draws clear ethical boundaries grounded in longstanding duties under the Model Rules of Professional Conduct.

At its core, the opinion makes one thing clear: AI may assist, but it cannot replace human judgment.

The duty of competence under Model Rule 1.1 requires lawyers to understand not only the legal matters they handle, but also the tools they use to deliver services. This includes understanding both the capabilities and the limitations of GAI. Lawyers are not expected to become technologists, but they must remain informed, through continuing legal education, consultation with experts, or internal testing, about how these tools function and where they fall short.

Indeed, GAI’s potential for error is not theoretical. Large language models are known to “hallucinate,” fabricating legal citations, mischaracterizing precedent, or confidently presenting inaccurate information. The lawyer who fails to verify such output before using it in client work or court submissions risks violating their duty of competence, and potentially the duties of candor and honesty under Rules 3.3 and 8.4(c).

Equally critical is the obligation to maintain confidentiality under Rule 1.6. Many GAI tools are self-learning, meaning they retain input data to improve future outputs. If client information is entered into such a system without appropriate safeguards or without the client’s informed consent, confidentiality may be compromised. The ABA opinion emphasizes that lawyers must conduct a risk assessment, understand the terms of service and privacy policies of the tools they use, and, in certain cases, obtain explicit client consent.

This ethical vigilance extends to communication. Under Rule 1.4, lawyers have a duty to inform clients when GAI use materially affects their representation. For example, if AI-generated work significantly alters timelines, costs, or methods of service delivery, clients must be made aware. Transparency builds trust, and as the legal profession increasingly integrates advanced technologies, that trust will be paramount.

Another key focus of the opinion is the responsibility of supervising lawyers under Rules 5.1 and 5.3. Firms must establish clear policies around GAI usage and ensure that all personnel, lawyers and non-lawyers alike, are trained on the ethical and operational implications of these tools. This includes overseeing third-party GAI vendors to ensure they meet standards for data protection, reliability, and confidentiality.

Even billing practices are not immune to the changes GAI introduces. The opinion reinforces that fees must remain reasonable under Rule 1.5. If GAI tools reduce the time required to complete a task, billing must reflect that efficiency. Flat fees may need to be adjusted, and time spent learning to use GAI tools generally cannot be billed to clients. Furthermore, the classification of expenses, whether GAI is treated as overhead or passed through to the client, must be transparent and justified.

What emerges from ABA Formal Opinion 512 is a nuanced and pragmatic framework. It recognizes that while GAI is transforming legal work, the foundational duties of a lawyer, such as exercising judgment, maintaining confidentiality, communicating openly, and delivering value, remain unchanged. It is not a rejection of innovation but a reminder that technology must be implemented with intentionality and integrity.

In practice, this means developing internal protocols for AI use, investing in firm-wide training, and establishing verification standards appropriate to the task and tool. It means choosing platforms based on data security, transparency, and alignment with the firm’s values. And most importantly, it means centering the client, not the technology, in every decision.

At Mavacy, we view GAI not as a threat but as an opportunity: a powerful tool in the hands of a competent, ethical, and forward-thinking legal team. We believe the future of law will be built not only on results, but on relationships, trust, and clarity. ABA Opinion 512 is not a constraint; it is a compass. And if we follow it, we can navigate this new frontier with both confidence and care.

As the legal landscape continues to evolve, the challenge before us is not simply to adopt new technologies, but to lead with them. Ethically. Responsibly. And always in the service of our clients.

Author

Michael Melfi and Martin T. Shepherd