Last time, we continued our “nuts and bolts” series on artificial intelligence (AI) for legal professionals with a look at transparency, explainability, and interpretability of AI – what these concepts are, how they differ, and the considerations associated with them. Now, we’ll discuss the importance of staying informed as it relates to legal technology.

When it comes to ethics for lawyers, one of the most important organizations for guidance is the American Bar Association (ABA), which has established ethical rules for conduct by lawyers. The ABA Model Rules of Professional Conduct provide a framework for ethical legal practice in the United States.

The ABA Model Rules guide lawyers in various aspects of their professional conduct, including client-lawyer relationships, duties to the legal system, public service, and the legal profession. They cover duties that include client confidentiality, conflicts of interest, competence, legal advertising, and pro bono work. The duty of competence extends to technology competence through ABA Model Rule 1.1, comment 8, which states: “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology” – an important expectation when it comes to rapidly changing technology like AI.

ABA Resolutions Regarding AI

The ABA also periodically passes resolutions that provide additional guidance to lawyers and other legal professionals, and in the past few years, it has passed three resolutions related to AI. They are:

ABA Resolution 112: Urges courts and lawyers to address the emerging ethical and legal issues related to the usage of AI in the practice of law (passed in August 2019)
ABA Resolution 700: Urges governments to refrain from using pretrial risk assessment tools unless data supporting risk assessment is transparent, publicly disclosed, and validated to demonstrate the absence of bias (passed in February 2022)
ABA Resolution 604: Urges organizations that design, develop, deploy, and use AI systems and capabilities to follow several guidelines (passed in February 2023)

Let’s look at each of these in more detail.

ABA Resolution 112

The full text of Resolution 112 is as follows:

RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.

The resolution is followed by a 15-page report with several sections, beginning with an introduction. Section II provides an overview of AI and the different AI tools used in the practice of law (an overview that is now rather outdated, more than four years after the resolution was adopted).

However, the next three sections remain quite relevant. Section III analyzes a lawyer’s ethical duties (competence, communication, confidentiality, and supervision) in connection with AI technology; recent high-profile court filings citing fake cases are examples of lawyers failing to meet their duty of competence. Section IV explores how bias can affect AI (including the Microsoft Tay and COMPAS examples we discussed in our post on bias) and the importance of using diverse teams when developing AI technology. Section V discusses questions to ask when adopting an AI solution or engaging an AI vendor.

ABA Resolution 112 is the most significant AI guidance directed at practicing lawyers, who should heed its advice before adopting AI in their practice.

ABA Resolution 700

As discussed above, this resolution focuses on governments’ use of pretrial risk assessment tools. The full text of the resolution is five paragraphs long and is available on the ABA’s website.

As the Executive Summary at the end of the 8-page report states: “This resolution advances the need to align court decisions on pretrial release from jail with the presumption of innocence by refraining from the use of risk assessment tools and pretrial release evaluations where data demonstrates continued conscious or unconscious racial and economic bias.” It highlights that the algorithms and mathematical models used in these pretrial assessments are only as effective and unbiased as the data that feeds into them.
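To make the report’s point about data-driven bias concrete, here is a minimal, hypothetical sketch of one common bias check – the disparate impact ratio, borrowed from the EEOC’s “four-fifths rule” in employment law – applied to a risk tool’s outputs. The group labels, counts, and the 0.8 threshold are illustrative assumptions for this example, not part of Resolution 700 or any actual validation standard it mandates:

```python
# Hypothetical illustration: a "four-fifths rule" check on the outputs
# of a pretrial risk assessment tool. All numbers below are invented.

def selection_rate(favorable: int, total: int) -> float:
    """Fraction of a group receiving the favorable outcome
    (here, being recommended for pretrial release)."""
    return favorable / total

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's. Under the EEOC's four-fifths guideline, a ratio
    below 0.8 is commonly treated as evidence of adverse impact."""
    return rate_protected / rate_reference

# Invented outcomes: of 200 defendants in each group, the tool
# recommends release for 150 in Group A but only 90 in Group B.
rate_a = selection_rate(150, 200)   # 0.75
rate_b = selection_rate(90, 200)    # 0.45

ratio = disparate_impact_ratio(rate_b, rate_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.60
print("Potential adverse impact" if ratio < 0.8 else "Within four-fifths guideline")
```

A single summary ratio like this is only a starting point – it says nothing about why the rates differ or whether the underlying arrest and conviction data itself encodes historical bias, which is precisely the concern the resolution raises.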

ABA Resolution 700 represents a significant step toward addressing the challenges posed by the integration of AI and algorithmic tools into the criminal justice system, particularly in the context of pretrial risk assessments. It will be important to continually evaluate and improve these tools to ensure they do not perpetuate existing biases and inequalities in the legal system.

ABA Resolution 604

This resolution is directed at organizations that design, develop, deploy, and use AI systems and capabilities; its guidelines focus on ensuring accountability, transparency, and traceability in AI applications. The full text of this resolution is also five paragraphs long and is available on the ABA’s website.

As discussed in the 21-page report, key aspects of Resolution 604 include:

Human Oversight and Control: Developers of AI should ensure their products, services, systems, and capabilities are subject to human authority, oversight, and control.
Accountability for AI Consequences: Organizations should be accountable for consequences related to their use of AI, including any legally recognizable harm or injury caused by their AI systems, unless they have taken reasonable steps to prevent such harm.
Transparency and Traceability of AI: Key decisions made regarding the design, risks, data sets, procedures, and outcomes underlying AI systems should be documented to ensure the transparency and traceability of AI systems.
Prevention of Discrimination and Bias: This includes efforts by various organizations and governmental bodies to ensure AI complies with anti-discrimination and privacy laws.
Legal Responsibility and AI: Legal responsibility for actions should not be shifted to computers or algorithms but should remain with responsible individuals and legal entities.
Guidance for Legal Professionals: Legal professionals should stay informed about AI-related issues, as understanding and addressing these issues is seen as part of their responsibility as lawyers.

Just about every organization today designs, develops, deploys, and/or uses AI systems and capabilities, so this resolution is an important one for addressing an organization’s responsibilities in this area.

Three resolutions on AI in three and a half years indicate the level of significance the ABA places on the responsible and ethical use, design, development, and deployment of AI. The potential importance of Resolution 700 to the criminal justice system is immeasurable as law enforcement embraces AI in more ways than ever. And the importance of Resolutions 112 and 604 is universal, as nearly everyone uses AI today, and many of us deploy AI solutions. It’s important not only to be aware of these three resolutions, but also to stay informed about additional guidance to come from the ABA.