Article Excerpt:
There has been an interesting case pending in the U.S. District Court for the Southern District of New York, being Mata v. Avianca Inc., Case No. 22-CV-1461 (S.D.N.Y., filed Feb. 22, 2022). The case involves two lawyers who used artificial intelligence, in the form of ChatGPT, to draft their legal brief in the case. It didn’t turn out well, as related in a May 27, 2023, article by Benjamin Weiser of the New York Times. Essentially, ChatGPT just made up citations and quotations out of thin air, and when opposing counsel and eventually the court started to check those citations, they simply didn’t exist. This is part of a larger problem with AI: it has shown a disturbing tendency, like its human creators, to tell porkies and just outright lie about things. For their part, the lawyers threw themselves on the mercy of the court and begged forgiveness — the smartest thing they had done so far. Ultimately, each lawyer was fined $5,000 and was required to apologize to the judges whom they (or, rather, ChatGPT) had blatantly misquoted. That fine was probably the least of their punishment, as they have now been immortalized as professional buffoons.
In federal courts, attorneys are required by Federal Rule of Civil Procedure 11 (popularly known as “Rule 11”) to sign off on all things filed with the court, and that signature essentially certifies that everything in the filing is true and correct to the best of that attorney’s knowledge and belief. That includes not just evidence put in front of the court, but also the legal authorities cited to argue a particular position. Rule 11 applies to the signing attorney even if the filing was drafted by a law student clerking during their first summer of law school who, as such clerks are so apt to do, slops together a bunch of authorities having nothing to do with anything; the attorney’s duty is to carefully check those authorities to make sure they actually say what the filing claims they say. Here, the attorneys were not even using a wet-behind-the-ears law clerk, but simply (and quite lazily) had ChatGPT generate their filing and then failed to check whatever it was that ChatGPT had ginned up. It was a blatant violation of Rule 11, and they got dinged for it.
The point of this article isn’t about this particular case, but rather the dangers to clients of their own attorneys using AI programs to generate planning documents, things such as contracts, trusts, wills, and even legal memoranda on which a client may later desire to rely as the “advice of counsel”. This isn’t just a 2023 problem; it has been going on for some years. The first time I saw it was in 2018, when one of my own clients told me about draft documents that were being created for him by another law firm’s AI program for a series of transactions that firm was handling. This saved the client a lot of time and cost, since the AI program was belting out these documents in seconds, before an associate attorney tasked with the job could even grab a legal pad. The documents were then carefully reviewed by the firm’s attorneys and presented to my client at a much lower cost — albeit the firm did charge a hefty fee to recoup its development costs for the AI program in the first place. The point is that AI drafting of legal documents is already happening, has in fact been happening for some years, and there is certainly the potential for benefits if it is used correctly.