Legal Ethics: The promise and ethical pitfalls of ChatGPT

By Sari W. Montgomery

It’s hard to believe that, only months ago, few people had ever heard of OpenAI or ChatGPT. Yet in that short time, the platform (and others like it, such as Google’s Bard) has become an indispensable tool that promises to make a multitude of tasks exponentially more efficient and, potentially, of higher quality than most humans could produce given unlimited time. In fact, in February 2023, Krystal Hu of Reuters reported that ChatGPT had reached 100 million monthly active users in the two months since its release. To put this in perspective, the same Reuters article reported that, according to data from Sensor Tower, it took TikTok nine months, and Instagram two and a half years, to reach that many users.

According to OpenAI, ChatGPT’s developer, the model is trained on vast amounts of data drawn from the internet (although it is not actually connected to the internet) and learns from its inputs and interactions. It is also trained using Reinforcement Learning from Human Feedback (RLHF), which uses human demonstrations and preference comparisons to guide the model toward desired behavior. To respond to a user’s query, ChatGPT draws on its training data and RLHF training to predict what it determines is the preferred answer, which generally results in much more “human”-sounding output. Depending on the circumstances, users may be able to adopt the output with minimal changes.

Although this technology is game-changing in ways we have yet to fully comprehend, it is not without its challenges and, like any new technology, it still has many “bugs” to work through. For example, by OpenAI’s own account, ChatGPT can produce incorrect answers. It also “has limited knowledge of world and events after 2021, and may also occasionally produce harmful instructions or biased content.” Even more alarming, if the platform doesn’t know the answer to a user’s query, it can “hallucinate” (i.e., make things up). These “hallucinations” sound perfectly plausible and, while they may contain elements of truth, they are often factually inaccurate, nonsensical, or both.

A recent example of ChatGPT gone wild in the legal profession is the well-publicized Avianca case, in which two New York lawyers and their firm were sanctioned after one of the lawyers (Schwartz) used the platform to draft a response to a motion to dismiss, which the other lawyer (LoDuca) signed and filed in the U.S. District Court for the Southern District of New York. Unbeknownst to Schwartz, many of the cases ChatGPT cited in support of his argument, along with the quotes and principles purportedly contained in those cases, did not exist, even though the cases named real judges as their authors. In other instances, the cited cases did exist but did not stand for the propositions ChatGPT attributed to them.

Schwartz and LoDuca might have been forgiven for their initial reliance on this new technology, but for the fact that, when opposing counsel filed a reply brief pointing out that they could not locate many of the cases cited in Schwartz’s brief, and the court subsequently ordered LoDuca to produce copies of those cases, Schwartz doubled down and maintained that the cases were real, even though he acknowledged that he was unable to find full copies of any of them. LoDuca then signed and filed an affidavit, prepared by Schwartz, attaching fragments of the purported cases in an effort to affirm that the cases were real. The judge was not amused and, in a scathing 43-page opinion, sanctioned Schwartz, LoDuca, and their firm $5,000 for their misconduct arising from the improper use of ChatGPT.

The lawyers’ conduct in the Avianca case highlights some of the ethical issues that must be considered when using generative AI platforms like ChatGPT in conjunction with the practice of law. Fundamentally, lawyers have a duty of competence (Model Rule 1.1), which requires, at a minimum, that they verify that the information they incorporate into their work product, whether produced by humans or by technology platforms, is true and accurate.

In addition, using tools like ChatGPT raises concerns about the confidentiality of the information a lawyer may input into the platform (see Model Rule 1.6). OpenAI’s website states that, depending on a user’s settings, any queries entered into the platform may be retained and used to “train and improve our models.” Even information protected by the platform’s privacy settings is retained for 30 days and may be reviewed for abuse. As such, any client information that a lawyer enters into ChatGPT may no longer be confidential and could also be exposed in a data breach. Lawyers should therefore refrain from entering client information into these platforms without the client’s informed consent.

Similarly, lawyers have a duty to supervise the use of technology tools whenever they are employed in providing legal services to clients, whether by other firm lawyers, non-lawyer assistants, or outsourced service providers, and must ensure that such use is consistent with the lawyer’s ethical obligations (see Model Rules 5.1 and 5.3).

Although it may be tempting to rely on generative AI platforms like ChatGPT to improve efficiency, at this early stage in these tools’ development lawyers must seriously weigh their potential ethical implications, only some of which are explored here, and exercise extreme caution. Lawyers and firms should also consider adopting a policy on the permitted uses of AI tools in connection with client matters.

Sari Montgomery is a partner at Robinson, Stewart, Montgomery & Doppke in Chicago, Ill. Her practice involves representing attorneys in legal ethics and professional responsibility proceedings. She can be reached at [email protected].
