With the development of Artificial Intelligence (“AI”) technology, some lawyers have argued that a transformation of the practice of law may be coming. AI programs reportedly can perform legal research, analyze contracts, review documents, and handle many other lawyer tasks. There is even software that purports to manage cases, draft briefs, and make legal judgments without the need for human involvement.
But any theoretical time savings or efficiencies gained by using AI are accompanied by ethical issues that likely outweigh the expected benefits.
One of the biggest problems with attempting to use AI for real legal work is that many of the currently available systems are plagued by inaccurate or false output, commonly called “hallucinations.” The reason that AI hallucinates is beyond the scope of this article, but regardless of why it happens, it makes AI unreliable in a setting where factual and analytical accuracy is critical. Mata v. Avianca, Inc. is a recent, and already well-known, case from New York in which a lawyer filed a legal brief that included non-existent judicial opinions with fake quotes and fake citations, all written by an AI program. The court sanctioned the lawyer for the misrepresentations, citing Federal Rule of Civil Procedure 11, which codifies the principle that a lawyer certifies the merit and truth of papers filed with the court. Mata v. Avianca, Inc. (S.D.N.Y. 2023) 678 F.Supp.3d 443, 459.
Some courts have begun to address the problem of AI-generated legal documents with prohibitions and rules. For instance, some now require attorneys to file a certification attesting that their papers were not created using AI. On the other hand, some lawyers and organizations have essentially accepted that adoption of AI in the legal context is inevitable and therefore should be regulated. For instance, the Pennsylvania Bar Association recently published an opinion clarifying that lawyers who choose to use AI must still fulfill their ethical obligations, including the duties of competence, confidentiality, truthfulness, and candor to the tribunal, among others. The Pennsylvania Bar Association approach requires a lawyer to verify any representations and citations written by AI.
If the lawyer must independently read the cases and verify the research anyway, the question becomes: why wouldn’t the lawyer simply write the paper himself? Unless there are significant advancements in the technology, for now it is probably not worth using AI in the legal field.