Legal Ethics: Lawyers should be wary of divulging case information through AI chats

By Jim Doppke

By now, the fact pattern is familiar: Lawyers use generative AI to draft a pleading, then are found out when their opponents or the court detects hallucinated case citations that counsel never verified before filing. Sanctions and other consequences often ensue.

We’re familiar, too, with the ethical implications of this scenario: Did the lawyers act competently (as required by Rule 1.1 of the ABA’s Model Rules of Professional Conduct)? Did they make misrepresentations to the court (Rule 3.3) or otherwise act dishonestly (Rule 8.4)?

But there are other ways that the use – or misuse – of generative AI can give rise to ethical concerns for attorneys. One such scenario came to light recently with the publication of thousands of prompts and responses generated by consumers using OpenAI’s ChatGPT. The users, having requested and received information, clicked a button marked “share,” intending to create a link to the conversation that they could then share with someone else – but not with anyone else, and not in a public way.

But that’s not what happened. Instead, the users – prompted by the app – affirmed that they wanted to make the conversations “discoverable.” That made them both public and searchable via Google and other search engines.

Before we dive into the ethical implications of this situation, we should note that the provenance of the prompts is not specifically known. The blog Digital Digging, whose investigation led to the discovery of the publicly searchable information, declined to publish the specific and identifying information that was available in at least some of the chats. That was the judicious and correct call. It does, however, leave a reader unable to determine which prompts are genuine inquiries, and which might be flights of fancy.

Alarm bells

Assuming the genuineness of the prompts, though, there’s reason for ethical alarm bells to be ringing (if the word “discoverable” wasn’t enough all by itself). There are prompts that appear to have been submitted by lawyers and businesspeople that are chock-full of information we would normally consider confidential. One questioner identified himself or herself as a lawyer for a multinational energy corporation seeking advice on acquiring land in the Amazon. That alone could be restricted by Model Rule 1.6, which generally prohibits a lawyer’s release of information “relating to a representation” of a client without the client’s consent. But the questioner went further, inquiring about how to ensure that his or her client could obtain the lowest possible price in negotiations with the indigenous people who owned the land, given the supposed lack of sophistication of the people.

The lawyer-questioner might claim that Rule 1.6 was not violated if the information used was not sufficiently specific to identify the parties involved. But that’s not entirely clear. Our adversaries, or even just people at large, can be motivated and resourceful in ways that seemed impossible just a few years ago. They can collect and connect information quickly and accurately.

OpenAI’s CEO, Sam Altman, acknowledged last month that conversations on ChatGPT do not have the same “legal privilege” protections as direct, confidential interactions with professionals such as lawyers, doctors, or therapists. OpenAI is currently seeking to prevent the disclosure of ChatGPT conversations in a copyright case filed against it by the New York Times in federal court in Manhattan.

“We haven’t figured that out yet for when you talk to ChatGPT,” he said of those protections on the “This Past Weekend” podcast.

Best practices

The best way to protect information – even if you think it’s vague enough to escape notice – is simply not to release it in any form or forum, especially not one whose commitment to professional-level privacy is in serious question. Lawyers can, of course, rely on Rule 1.6’s concept of implied consent to release some information when it’s necessary to accomplish the goals of the representation. But asking a chatbot for strategic advice might fall short of “necessary,” especially when balanced against the risks of the information becoming freely accessible.

Even when questioners did not identify themselves as lawyers, their prompts raised issues that should nevertheless trouble lawyers trying to guide clients through sophisticated matters. There were prompts requesting information on corporate strategies, trade secrets, and even how to engage in criminal conduct without detection. It’s possible that at least some of those posts cannot be tied to particular individuals. But what if they can be, and what if it was your client? They could have unwittingly placed their own interests in jeopardy, and you might have to help mitigate or reckon with that damage.

In another affecting anecdote, a questioner identified as a lawyer trying to take over a case for a colleague who had suffered a sudden accident. The questioner depicted themselves as completely at sea on even the basics of the case, and they appeared ready to rely entirely on ChatGPT for strategic advice: “Please take the role of a litigation expert.” Here was a lawyer seeking much-needed mentorship, understandably, but in a way much more likely to put client information at risk than would a conversation with a colleague.

Ethical considerations

I am not suggesting that the lawyers – if they were lawyers – were or are guilty of misconduct by virtue of their ChatGPT prompts. Nor are the other questioners who posed business-related questions necessarily at risk, or their lawyers necessarily putting out fires.

But the ready availability of their stories suggests that the chatbot’s security practices are not always adequate to meet the legal profession’s ethical requirements. Whatever tools we use to obtain, review, or synthesize information must allow us to retain control over the information we submit, or at the very least to understand the uses to which that information is put. Our clients can consent to such uses, where appropriate, but even then, we must guard against foreseeable negative consequences.

The more stories of AI errors and infirmities we see, the more we can learn about what might come next, and how best to protect against it.

 

Jim Doppke is a partner at Robinson, Stewart, Montgomery & Doppke in Chicago, Ill. His practice involves representing attorneys in legal ethics and professional responsibility proceedings. He can be reached at [email protected].
