Notice to lawyers
November 23, 2023

We have all been hearing about OpenAI's ChatGPT, Meta's LLaMA, the Midjourney platform, and a host of other artificial intelligence (AI) tools. While AI has been in use for a very long time (both in the legal profession and more broadly), generative AI is a new phenomenon in which the AI system algorithmically generates content, rather than simply providing or analyzing data. Generative AI is designed to mimic human intelligence and to take the information provided to it into account when generating responses.

Recent generative AI breakthroughs have brought tremendous opportunities for efficiency and effectiveness in all professions, including the practice of law. However, all lawyers must be aware of and consider the risks, including those set out below, before adopting AI into their practice.

Client Confidentiality

Lawyers must keep their clients' data confidential and secure. Because most law firms will not host and run their own internally developed generative AI tools, there is a risk that sharing confidential or privileged information with an online AI tool, like ChatGPT or other generative AI, will make that information public. Why? Because (a) the lawyer's prompts must travel over the internet to the AI provider's servers in order to be processed, resulting in the sharing or transfer of information, and (b) generative AI learns from the information provided to it and may, either immediately or over time, use that information to respond to others.

What can you do? Before using any AI tool, as with any other cloud computing tool, consult the Law Society of BC's Cloud Computing Checklist and in particular ask: Where is the data stored? Who is the data shared with? What security is in place to minimize data breaches? What steps are taken to encrypt client confidences and prevent them from being used inappropriately? Will the data be used to train the AI in order to generate future results for you or others? Is it possible to redact or reduce the confidential client information in your data before using generative AI tools and, if not, is client consent necessary? You likely already have systems in place to protect client data, and you should apply them to AI tools as you would to any other technology, but when you use AI you also need to consider what specific safeguards the AI technology itself has in place to protect against a data breach.

Inaccurate Results

One of the reasons that tools like ChatGPT have proven so alluring is that they seem to be able to provide accurate, fast results. However, generative AI is designed first and foremost to generate convincing, confident, and human-sounding results; accuracy is secondary.

This risk is best illustrated by the recent decision in Roberto Mata v. Avianca, Inc., No. 1:22-cv-01461 (S.D.N.Y.). In that case, a New York lawyer relied entirely on ChatGPT to prepare his legal submission and did not confirm or verify the authorities ChatGPT provided. Judge Castel held:

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases… Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.

One of the non-existent authorities cited in that case was Shaboon v. EgyptAir, 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013). When opposing counsel questioned the authenticity of this case, the lawyer asked the AI to provide the full text of the decision, which it did (again, entirely fictitiously), and which the lawyer then submitted to the Court!

Before using any AI, but in particular generative AI, every lawyer must understand that it is designed to replicate human-like responses, not necessarily to be accurate. Why? First, many generative AI technologies lack direct access to proprietary legal databases, or even to publicly available ones, and may be drawing on outdated materials. Second, responses are based only on the information the model has been trained on, the easiest source of which is publicly available Internet information (and we all know what an unreliable source the public Internet can be), with no guarantee of reliability or timeliness. In fact, at the time of writing, the dataset used to train ChatGPT only extends to 2021. Third, most generative AI tools do not verify the accuracy of the information they provide and have no training in legal concepts. Last, generative AI tools want to provide a response and, in many cases, will creatively generate content in order to provide something they statistically determine the user will accept as a useful answer.

What can you do? Lawyers remain responsible for their work product, even where it was assisted or produced by generative AI. A lawyer should always double-check AI-generated results for accuracy, completeness and underlying assumptions, as they may contain not only inaccuracies but also entirely fictitious information. Source checks and alternative research methods should also be used to confirm that the results accord with information from reliable, human-verified sources. Generative AI in its current state should never be treated as a substitute for a lawyer's own knowledge; it should only supplement that knowledge. Last, check the AI policies of any courts, tribunals or other authorities before whom you appear; some courts in Canada have started requiring disclosure of the use of generative AI tools in submissions.

Biased Results

Inherently, all AI results are biased to some degree, whether because of a bias in the output of the algorithms being applied, a bias in the underlying data on which the AI is trained, or a bias in the prompts.

What can you do? Assess the datasets used by your AI tools for quality and potential biases, and use the right AI for the right task. Be aware of your own unconscious (and conscious) biases when reviewing AI-generated results, and take steps to mitigate the risk of bias in those results.

Cybersecurity and Fraud

Finally, do not forget about cybersecurity. Any time a new technology is adopted, its security measures must be reviewed to ensure that no new vulnerability has been created. As AI tools emerge, firms will need to remain vigilant, not only educating all staff about the risks of AI but also enhancing security training. Firms should also consider adopting policies and guidelines on the use of generative AI.

What can you do? In an AI-driven world, lawyers must be increasingly vigilant and skeptical when clicking on any link or relying on any digital medium, to ensure that the content is authentic. Read our tips on how to minimize the risk of compromising your business email.

Conclusion

Technology has certainly seen its share of fads that come and go, but generative AI is here to stay. Take the time to become knowledgeable about AI and review the Law Society’s Guidance on Generative AI: Key Points.

The Lawyers Indemnity Fund extends a special thanks to Ryan Black from DLA Piper (Canada) LLP for his assistance in preparing this Notice.