Most lawyers know their main task is communication—explaining issues and educating clients, courts, or regulatory agencies so they can reach informed decisions, or so a dispute can be resolved in the client’s favor.

So why do the words “artificial intelligence” cause some communicators to stammer, freeze, or lean on poorly defined jargon? Lawyers who counsel clients on acquiring AI tools, or who offer those tools’ output as evidence in litigation, must know what to ask to understand and advise on the technology.

This basic element of a lawyer’s work is more than just a matter of business; it goes to the heart of the lawyer’s obligation to provide competent representation. It’s been 12 years since the American Bar Association updated Model Rule 1.1 on competence to state that lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

A lawyer advising on AI-related matters doesn’t need a computer science degree or coding experience; they just need to know the right questions to ask and be familiar with enough terminology to explain the legal issues—that is, to render sound advice or make cogent arguments on their clients’ behalf.

This simple framework of questions, known as explainable AI, is critical to establishing trustworthiness for the stakeholders funding an AI tool’s use and for those relying on its outputs. Legal admissibility of the tool’s output requires a showing of reliability, which in turn requires explainability to establish trustworthiness—or the lack of it, for a party challenging admissibility.

What is the design objective of the AI tool? In other words, what does or should the tool do? What is its business purpose? These questions help determine whether an AI tool is right for the job.

Is the tool truly AI, or is it machine learning? AI mimics human intelligence, acting on its own once prompted, according to its algorithm (that is, its rules of operation), while machine learning uses algorithms to train a machine to perform very specific tasks.

What type of AI is involved? Is it predictive AI, where the tool analyzes a defined data set and its outputs are forecasts or trends, such as cost predictions or optimal shipping routes? Or is it generative AI, where the tool creates text, art, or code based on its training data? Hybrid AI tools also exist, such as those that generate reports based on a predictive AI component’s findings.

What was the data set used to train the AI tool, and from where was it derived? Poor training data, including biased data, can undermine reliability and pose discrimination risks. It also can create the risk of intellectual property claims if the data is protected by copyright, trademarks, trade secrets, or patents and the developer failed to obtain proper licenses.

How is the quality of the tool’s output monitored? Generative AI is notorious for making mistakes or inventing “facts” that are untrue, known as “hallucinations.” A defined quality assurance or quality control process is necessary to ensure reliability, and clients should be advised that such a process requires headcount and budget—critical elements of any business plan for acquiring and deploying an AI tool. Reviewing output is also essential to establishing its reliability as evidence.

Litigators face similar issues, but with additional hurdles for the admissibility of evidence. The US District Court for the District of Nevada based its guidance on evaluating AI evidence on the National Institute of Standards and Technology AI Risk Management Framework. This framework equates reliability with trustworthiness, requiring explainability, accuracy (or validity), security, and bias mitigation.

Considerations for AI evidence include whether a challenge will entail a request by the challenging party for proprietary information, such as trade secrets embodied in the algorithm, the tool’s training and testing data, or its source code.

If so, counsel should request a strong confidentiality or protective order signed by the court. Some clients will still be skittish about revealing this information. This could become a factor in settlement negotiations or deciding whether to bring the claim at all.

Another consideration is whether the evidence can withstand a reliability challenge under Daubert v. Merrell Dow Pharmaceuticals, which weighs factors such as error rates, general acceptance, testability, and peer review.

Proponents must build an admissibility argument, addressing concerns that new technologies such as generative AI may struggle with these standards. Most AI hasn’t faced such scrutiny, especially peer review. This is where explainable AI and effective communication are crucial. The proponent must articulate the tool’s purpose, function, and relevance to argue for its admission.

This navigational aid to explaining and arguing for or against AI tools or AI-based or generated evidence will evolve with changes in the relevant tools and technologies. As the landscape shifts, understanding these fundamentals will be crucial for effectively advising clients and meeting the challenges of AI-related legal issues.

Author Information
Kenneth N. Rashbaum is a partner at Barton focusing on privacy, cybersecurity, and e-discovery.
Lani E. Medina is a senior associate in Barton’s corporate practice.