Guidelines Relating to Use

When using generative AI, which is changing at a rapid pace, lawyers must be aware not only of the potential benefits but also of the associated risks and of the continuing requirement to respect their professional obligations. The cautions enumerated below follow particularly relevant sections of the Model Code1 and provide guidance on identifying and mitigating the risks associated with the use of AI.

4.1 Be Competent: Rule 3.1

Lawyers must be competent in the delivery of legal services.  Most Canadian jurisdictions include specific commentary in their respective Codes of Conduct about the need to be technologically competent. 

Part of the duty of competent representation as required by Rule 3.1 of the Model Code is mindfulness of the risks associated with the use of innovative technologies (including but not limited to misinformation).2 This also includes regularly refreshing and rechecking policies and practices relevant to the use of AI.

When consciously using AI, legal professionals must be aware of any attendant risks in using a particular technology and must appropriately mitigate those risks, including by having policies and procedures in place for testing and verifying content created by generative AI to ensure its accuracy, relevance and reliability.
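One concrete element of such policies and procedures can be a routine step that isolates every authority generated by an AI tool for human verification. The following is a minimal illustrative sketch in Python; the regular expression and workflow are assumptions for illustration, and no pattern-matching step replaces confirming each authority in a trusted source such as CanLII.

```python
import re

# Minimal illustrative sketch only: the regex and workflow are assumptions,
# not a complete verification procedure. It flags anything shaped like a
# Canadian neutral citation in AI-generated text so that a human can confirm
# each authority actually exists in a trusted source before it is relied on.

NEUTRAL_CITATION = re.compile(r"\b(\d{4})\s+([A-Z]{2,6})\s+(\d{1,5})\b")

def citations_to_verify(ai_text: str) -> list[str]:
    """Return every neutral-citation-shaped string found in the text."""
    return [" ".join(match) for match in NEUTRAL_CITATION.findall(ai_text)]

# Example draft; the second citation is deliberately fictitious.
draft = "As held in Zhang v Chen, 2024 BCSC 285, and in 2023 FCA 9999 ..."
for cite in citations_to_verify(draft):
    print(f"VERIFY BEFORE FILING: {cite}")
```

The point of such a step is procedural: every extracted citation is checked by a person, whether or not it looks plausible.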

Failing to use AI competently puts clients at risk and may result in sanctions ordered by the court or even disciplinary sanctions imposed by the regulator. For example, a British Columbia (BC) lawyer was ordered to personally pay costs for citing non-existent, AI-generated case law in the BC Supreme Court. The court cautioned: "competence in the selection and use of any technology tools, including those powered by AI, is critical. The integrity of the justice system requires no less."3 As a result, the Law Society of BC announced it would investigate the lawyer’s conduct. It noted that it has issued guidance to lawyers on the appropriate use of artificial intelligence technology and stated its expectation that lawyers who rely on AI in providing client services comply with the standards of conduct expected of a competent lawyer.

As noted, and as this case illustrates, LLMs can present misinformation with striking confidence, often leading even seasoned lawyers to doubt their own judgment.4

Key Takeaway: Generative AI can create inaccurate, false or misleading information.

Lawyers have professional obligations – to clients and the Court – to verify and validate any information created by generative AI to ensure its accuracy. They must be aware of and adhere to law society guidelines and court directives on point.

Understanding AI’s Limitations Rather than Understanding the Technology Itself

In jurisdictions that have adopted the specific requirement that lawyers be technologically competent, lawyers are not expected to be information technology experts. For example, as set out in the Model Code, at commentaries 4A and 4B to Rule 3.1,5 lawyers should develop an understanding of, and ability to use, technology relevant to the nature and area of the lawyer’s practice and responsibilities. Lawyers should understand the benefits and risks associated with the relevant technology. The required level of technological competence will depend on whether the use or understanding of technology is necessary to the nature and area of the lawyer’s practice and whether the relevant technology is reasonably available to the lawyer.6

It is not possible for lawyers to fully “understand” the complexity of how generative AI functions. Instead, they should focus on understanding the risks associated with its use. These risks, like the technology itself, are constantly evolving.

“It is inscrutable because, even for AI developers, it’s often impossible to understand how a model reaches an outcome, or even identify all the data points it’s using to get there — good, bad, or otherwise.”7

While the sophisticated technologies underlying generative AI cannot simply be “understood”, even by trained humans, given their inherent complexity and lack of transparency, a competent legal practitioner must be aware of their risks and limitations as they bear on core legal practice.

The European Bars Federation notes the following:

[While] we acknowledge the significant advantages of using generative AI in the legal field, we caution against hastily and inappropriately applying GenAI tools to tasks that lie at the core of legal competence and the lawyer-client relationship.8

In the context of creating policies and practices relating to the use of generative AI, a useful distinction may be drawn between engaging AI for the rote tasks of legal practice and deploying it for strategy formation and analysis. Thus, for instance, the guidance provided by the European Bars Federation favours “complementary” use of AI in legal practice.9

Key Takeaway: Being aware of and acknowledging AI’s limitations is crucial as this will influence decisions to use certain types of AI and how to use AI in a manner consistent with professional obligations. A high degree of customization is needed for legal practice.

4.2 Maintain Confidentiality: Rule 3.3

The duty to maintain confidentiality is a fundamental ethical obligation under Rule 3.3 of the Model Code. Lawyers have an obligation to safeguard client information.

When a lawyer chooses to use a generative AI tool to create new content, there is a real risk that they may input confidential or case-specific information into a general-purpose, “free” third-party commercial system or prompt tool that trains on data scraped from the internet, memorizes data and repurposes it, thereby compromising the security of client information and breaching the duty of confidentiality.

As set out in the guidance provided by the European Bars Federation:

Attorneys must be mindful that inputting personal data into genAI systems requires a proper legal basis and assessment in compliance with data protection and privacy provisions. Remember that GenAI tools not only process data to generate responses to prompts but also employ the provided data to enhance the system itself. However, these risks can be partially mitigated by utilizing GenAI tools through application programming interfaces (APIs) and/or by using a special "opt-out" option, which can help separate the input data from system development. To ensure data privacy and protection, it is essential to implement robust security measures encompassing both technology and processes, guarding against unauthorized access, use, or disclosure of personal data.10

Lawyers and firms must understand that further processing of prompt content (as contemplated in many companies’ contracts with AI service providers) can lead to leakage, unauthorized sharing or re-use of client data, and other issues concerning the confidentiality of inputs.11
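The mitigations the European Bars Federation points to – APIs and opt-outs – can be paired with firm-side controls. As a minimal sketch, assuming a simple “redact before you prompt” policy (the patterns and the client-matter format below are invented for illustration), obvious identifiers can be stripped before a prompt ever reaches a third-party system:

```python
import re

# Minimal illustrative sketch, assuming a firm policy that obvious
# identifiers are stripped before any prompt leaves the firm's environment.
# The patterns and the matter-number format are invented for illustration;
# redaction alone cannot guarantee confidentiality and does not replace
# vendor terms that exclude prompts from model training.

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "matter_no": re.compile(r"\b[A-Z]{2,4}-\d{4,6}\b"),  # hypothetical format
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with placeholders before submission."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

draft = ("Summarize the limitation issues for Jane Roe, jroe@example.com, "
         "matter ABC-12345, tel 604-555-0199.")
print(redact(draft))
```

Such a step reduces, but does not eliminate, the risks described above: the substance of a prompt can still identify a client even after obvious identifiers are removed.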

Moreover, as has been recently revealed, generative AI’s privacy settings can, in fact, be relatively easily bypassed to reveal information previously believed to be inaccessible to third parties.12

Furthermore, the scope of information used by generative AI commercial models is unknown and the legality of its use has been called into question.13 The confidentiality issues associated with the use of third-party models present significant risks in the context of legal practice.  

Key Takeaway: Using generative AI in your legal practice may result in improper, negligent and unethical disclosure of confidential information. However, there are ways to mitigate – although not eliminate – these risks. To improve data privacy and protection, it is essential to implement robust security measures encompassing both technology and processes, guarding against unauthorized access, use or disclosure of personal data. Lawyers must take reasonable steps to satisfy themselves of the security of an AI system before use.

Key Takeaway:  Lawyers and law firms must be mindful of legislation that may impact their delivery of legal services in accordance with professional obligations. 

4.3 Supervise and Delegate Appropriately: Rule 6.1

Rule 6.1-1 of the Model Code sets out that lawyers have complete professional responsibility for all business entrusted to them and must directly supervise staff and assistants to whom tasks and functions are delegated.

AI should be used as a tool, not as a crutch, in the delivery of legal services. As one author states: “LLMs represent a decoupling of agency and intelligence…they should not be relied upon for complex reasoning or crucial information but could be used to gain a deeper understanding of a text’s content and context, rather than as a replacement for human input.”14

Some AI products specific to legal practice use AI to answer or solve legal problems (sometimes referred to as “Legal Tech” or “RegTech”). They are sometimes marketed as having “the answer” to a legal dispute. This type of product, tempting in its ease, should nevertheless raise doubts: law is in great part about judgment, context and nuance, and exercising appropriate professional judgment is part and parcel of a legal practitioner’s obligations to clients.

Rule 6.1-3 of the Model Code sets out ethical obligations when delegating tasks and subsection (b) specifically states that a lawyer must not permit a “non-lawyer” to give legal advice.

Lawyers should guard against misplaced over-reliance on generative AI tools, as it may compromise or even prevent independent legal judgment. Thinking “fast and slow”, or ambidexterity15 – the ability to work in the “old ways” in areas that require judgment and nuanced reflection while integrating new tools for rote tasks – is a helpful approach.

Whether lawyers ask an articling student to conduct legal research or use a generative AI tool to assist with legal research or the handling of a legal matter, their professional obligations remain. Lawyers must be competent and must provide appropriate supervision when tasks and functions are delegated.

Even if lawyers are not yet using generative AI themselves, they must be aware of whether other staff are using it, develop clear policies, provide training relating to its use, and exercise adequate oversight.

Training should include both lawyers and non-legal staff, with direction provided on how to prevent inadvertent disclosure of confidential data and on how to mitigate risks relating to the accuracy of newly created content.

Key Takeaway: Law firms should implement clear policies and training relating to the use of generative AI by lawyers and other staff in the provision of legal services. Information about the policies should be disseminated, regularly reviewed, audited, and complemented with regular training in the appropriate use of generative AI. Given the ever-changing nature of generative AI technology, law firm policies, practices and training should be reviewed and updated on a regular basis.

4.4 Communicate with Clients – Rule 3.2-1 and Rule 3.2-2

Lawyers have a duty to provide clients with a quality of service that is competent, timely, conscientious, diligent, efficient and civil as set out in Rule 3.2-1 of the Model Code.  Numerous examples of expected practices are given such as keeping a client reasonably informed, ensuring that work is done in a timely manner so that its value to the client is maintained, and offering quality work.  Also, when advising clients, as set out in Model Code Rule 3.2-2, lawyers must be honest and candid and inform clients of all information known to the lawyer that may affect the interests of the clients in their legal matters.

Lawyers may use generative AI tools to perform certain legal tasks that clients may typically expect them to do.  Lawyers should consider disclosing if they intend to use generative AI and provide explanations about how the technology will be used (e.g., research, analysis, document review, discovery functions, trial preparation).  Effective communication may require that a lawyer explicitly inform a client about how generative AI is being or will be used in their matter and obtain consent.  Disclosure should include information about any benefits and risks relating to such use, including any risks related to potential breaches of confidentiality and loss of privilege.

Key Takeaway: Consider whether clients are aware that AI is being used and how it is being used, and whether they have given meaningful consent. Even if they have consented to its use, are the implications (such as the potential loss of control over what is inputted into the generative AI tool) such that use may cause harm?

Key Takeaway: Obtaining client consent to the use of generative AI is not a panacea for the risks associated with its use. Many legal instruments and policies now recognize that AI systems are not sufficiently comprehensible to allow for meaningful consent relating to their use, and that harm may be caused and professional obligations breached even with consent.

4.5  Treat Tribunals with Courtesy and Respect: Rule 5.1

As per Rule 5.1-1, lawyers must treat the tribunal with candour, fairness, courtesy and respect. Treating tribunals and courts with respect includes informing oneself of the guidance, processes and practices created by the relevant tribunals and adhering to them.

Many regulators and courts have issued guidance on the responsible use of AI in legal practice. As members of specific jurisdictions and officers of the court, it is imperative that legal practitioners consult and strictly adhere to these rules (to which they are subject, depending on jurisdiction and practice) and not substitute their own judgment for that of the courts before which they appear.

Treating a tribunal with courtesy and respect also requires that lawyers take care not to provide the court with authorities that are inaccurate, non-existent or misleading.  It is imperative that lawyers review all materials that are submitted to court to ensure that the contents are accurate.

Key Takeaway: As officers of the court, lawyers are expected to comply with relevant court notices and directions including those relating to the use of AI and generative AI in proceedings before a particular court.

Key Takeaway: Lawyers must take special care to ensure that they do not supply the courts with misleading or inaccurate information when filing written materials with the court or when making submissions relating to a legal matter.

4.6  Charge Appropriate Fees and Disbursements: Rule 3.6

As set out in Rule 3.6 of the Model Code, fees and disbursements must be fair and reasonable and disclosed in a timely fashion. Legal practitioners must be mindful of technological advances that may assist them in providing legal services in ways that are beneficial to their clients, such as improving upon efficiencies in service delivery and costs. Especially in times of rapid change relating to client service delivery models and methods, lawyers and law firms should periodically and meaningfully revisit their billing practices to ensure continued compliance with their ethical obligations relating to fees and disbursements.

Lawyers should inform clients about their intended use of generative AI in the delivery of legal services and explain how such use may impact the processing of legal matters, including by generating efficiencies and reducing the amount of time spent on some legal tasks. Should the use of generative AI reduce a lawyer’s time or costs, those savings should be accurately reflected in any billings to the client.

Key Takeaway: Effective client communication is foundational and may in some circumstances include sharing and consulting clients on AI practices, including how the use of generative AI tools in the delivery of legal services may improve efficiencies and impact costs.

4.7 Guard Against Discrimination, Harassment and Bias: Rule 6.3

Lawyers are prohibited from discriminating against or harassing colleagues, employees, clients or any other persons.  The Model Code, in Commentary 1 of Rule 6.3, sets out that lawyers:

  • are uniquely placed to advance the administration of justice, requiring them to commit to equal justice for all within an open and impartial system;
  • are expected to respect the dignity and worth of all persons and to treat all persons fairly and without discrimination;
  • have special responsibilities to respect and uphold the principles and requirements of human rights and workplace health and safety laws in force in Canada, its provinces and territories, and, specifically, to honour the obligations enumerated in such laws.

Lawyers and law firms must be cognisant that generative AI may be trained on biased or discriminatory information, and it may systematically ossify bias in a way that is difficult (if not impossible) for the user to perceive, perpetuating discriminatory practices and outcomes.16

Discriminatory input data is one of the main sources of discrimination by AI systems. To illustrate, suppose that an organisation’s human resources (HR) personnel has discriminated against women in the past. Let’s assume that the organisation does not realise that its HR personnel discriminated in the past. If the organisation uses the historical decisions by humans to train its AI system, the AI system could reproduce that discrimination. Reportedly, a recruitment system developed by Amazon ran into such a problem. Amazon abandoned the project before using it in real recruitment decisions. AI systems can make discriminatory decisions about job applicants, harming certain ethnicities for instance, even if the system does not have direct access to data about people’s ethnicity. Imagine an AI system that considers the postal codes where job applicants live. The postal codes could correlate with someone’s ethnicity. Hence, the system might reject all people with a certain ethnicity, even if the organisation has ensured that the system does not consider people’s ethnicity.17
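The postal-code mechanism described in this passage can be made concrete. The following deliberately simplified synthetic sketch (all data, correlations and thresholds are invented for illustration) shows a scoring rule that never sees group membership yet reproduces historical bias through the correlated postal code:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic applicants: postal prefix correlates with group membership.
# "group" is never given to the model; it exists only to audit outcomes.
def make_applicant():
    group = random.choice(["A", "B"])
    postal = random.choice(["V5K", "V5L"] if group == "B" else ["V6B", "V6C"])
    qualified = random.random() < 0.5  # both groups equally qualified
    return {"group": group, "postal": postal, "qualified": qualified}

history = [make_applicant() for _ in range(5000)]

# Biased historical decisions: qualified group-B applicants were hired
# far less often than equally qualified group-A applicants.
for a in history:
    base = 0.9 if a["qualified"] else 0.1
    penalty = 0.5 if a["group"] == "B" else 0.0
    a["hired"] = random.random() < max(base - penalty, 0.05)

# "Model": per-postal-code historical hire rate (group is not a feature).
by_postal = defaultdict(list)
for a in history:
    by_postal[a["postal"]].append(a["hired"])
model = {p: sum(v) / len(v) for p, v in by_postal.items()}

# Score new applicants, then audit selection rates by group.
outcomes = defaultdict(lambda: [0, 0])
for a in (make_applicant() for _ in range(2000)):
    picked = model[a["postal"]] > 0.3  # illustrative cut-off
    outcomes[a["group"]][0] += picked
    outcomes[a["group"]][1] += 1

for g, (picks, total) in sorted(outcomes.items()):
    print(f"group {g}: selection rate {picks / total:.0%}")
```

On this invented data the rule selects virtually every group-A applicant and virtually no group-B applicant, even though group membership was never an input – only the audit in the final lines reveals it.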

In light of these risks, lawyers need to be careful when making decisions about using generative AI.  As one author states:

Currently, there is no way to peer into the inner workings of an AI tool and guarantee that the system is producing accurate or fair output. We must acknowledge that some opacity is a cost of using these powerful systems. As a consequence, leaders should exercise careful judgment in determining when and how it’s appropriate to use AI, and they should document when and how AI is being used. That way people will know that an AI-driven decision was appraised with an appropriate level of skepticism, including its potential risks or shortcomings.18

Lawyers must learn about the potential for AI biases and their implications for legal practice. They should also consider putting systems in place to help identify and root out bias and discriminatory outputs (e.g., conducting audits, as sketched below). Taking such action is necessary to mitigate – though not obviate – these kinds of risks.
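As one illustrative sketch of such an audit, selection rates across groups can be compared and outliers flagged for human review. The 80% cut-off below is borrowed from the US “four-fifths” rule of thumb in employment-selection guidance; it is an assumption for screening purposes, not a Canadian legal standard.

```python
# Minimal sketch of a disparate-impact screen. The 80% threshold is an
# assumption borrowed from the US "four-fifths" rule of thumb; flagged
# groups warrant human review, not automatic conclusions.

def four_fifths_check(selection_rates: dict[str, float]) -> list[str]:
    """Flag groups whose selection rate is below 80% of the highest rate."""
    top = max(selection_rates.values())
    return [g for g, r in selection_rates.items() if r < 0.8 * top]

# Example: rates observed in the output of an AI-assisted shortlisting step.
rates = {"group A": 0.50, "group B": 0.35, "group C": 0.48}
print(four_fifths_check(rates))  # -> ['group B']
```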

Legal workplaces should establish policies and mechanisms to identify, report and address potential AI bias. As noted in one article about the responsible use of AI: “AI will only be trustworthy once it works equitably, and that will only happen if we prioritize diversifying data and development teams and clearly document how AI has been designed for fairness.”19

Responsibly anticipating risks and guarding against them includes developing human resources management tools for ensuring organizational awareness of, and compliance with, AI policies and procedures internally. As noted, these must be regularly revised, tested and updated.

A thorough human rights-based review of AI outputs is required in sensitive contexts such as where grounds for discrimination may be a factor in decision-making.

Key Takeaway: Lawyers must provide meaningful oversight when using generative AI and critically screen results obtained for evidence of bias. They must not rely on content created by generative AI without independent verification, given the significant risk that the content may be affected by biases inherent in the specific tool.

4.8 Comply with the Law – Rules 2.1 and 3.2-7

Lawyers must act with integrity and comply with applicable laws when providing legal services. When acting for a client, a lawyer is prohibited from doing anything that the lawyer knows or ought to know assists in or encourages any dishonesty, fraud, crime or illegal conduct. These obligations apply to all aspects of a lawyer’s practice, including a lawyer’s use of generative AI tools.

There are many relevant and applicable legal issues surrounding generative AI including compliance with AI-specific laws, privacy laws, cross-border data transfer laws, intellectual property laws as well as cybersecurity concerns.

These and other laws and rules are likely to evolve as the field of generative AI develops.  Lawyers need to stay up-to-date on the applicable laws governing generative AI use to ensure they are in compliance when using generative AI tools in their legal practice and to ensure they are appropriately advising their clients, as competent counsel, about applicable laws and regulations.

In addition to addressing increasingly complex data security and data governance issues, lawyers must cultivate awareness of the potentially corrosive impact of “deep fakes” and disinformation seeping into evidence in the justice system,20 as well as of access to justice considerations and of issues of trust and alienation that may arise when utilizing generative AI.

Key Takeaway: There are many relevant and applicable legal issues surrounding generative AI including compliance with a variety of laws. Lawyers must educate themselves and stay current to ensure compliance when using generative AI tools themselves and to ensure they are giving competent advice to clients about applicable laws and regulations. 

Endnotes

1 Supra note 3.

2 See, for example, LinkedIn Pulse. (n.d.). The Liar’s Dividend: How Are Deep Fakes Impacting the Justice System? Retrieved October 31, 2024. Commenting on American Bar Association. (2024, January). The Impacts of Deep Fakes on the Justice System [Webinar featuring Prof. Maura Grossman]. See also Austin, Doug. (2023, October 12). Rules Change to Address AI Evidence Proposed by Grossman and Grimm: Artificial Intelligence Trends. eDiscovery Today. Retrieved October 31, 2024.

3 Zhang v Chen, 2024 BCSC 285 (CanLII). Retrieved October 31, 2024. See also the now infamous decision Mata v. Avianca, Inc. (n.d.). Retrieved October 31, 2024.

4 See Eltis, Karen. (n.d.). Judicial Independence and the Corporate ‘Custodians’ of Digital Tools: A Call to Scrutinize Reliance on Private Platforms as ‘Essential Infrastructure.’ In Céline Castets-Renard & Jessica Eynard (Eds.), Artificial Intelligence Law Between Sectoral Rules and Comprehensive Regime Comparative Law. Bruylant. Retrieved October 31, 2024, from https://ssrn.com/abstract=4599274.

5 Supra note 3.

6 Neeley, Tsedal. (2023, May 9). 8 Questions About Using AI Responsibly, Answered. Harvard Business Review. Retrieved October 31, 2024.

7 Ibid.

9 Ibid. at p. 9.

10 Ibid. at p. 11.

11 Depending on the circumstances, the legal practitioner relinquishes control over the ultimate use of the data entered/queries formulated and over the retention of input data. Generative AI is evolutive and ‘self-learning’, meaning that a client’s information, once revealed in a query/prompt, may be stored and re-used in response to other practitioners’ subsequent prompts on point. This might constitute inadvertent disclosure, effectively violating Rule 3.3 of the Model Code and potentially various additional normative instruments, including but not limited to the extraterritorially applicable Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ EU L 119/1 (5 April 2016) ("GDPR"), which can apply to client data.

12 Jeremy White, “How Strangers Got My Email Address From ChatGPT’s Model”, New York Times (22 December 2023), online: “Mr. Zhu had managed to extract from GPT-3.5 Turbo in the fall of this year. With some work, the team had been able to ‘bypass the model’s restrictions on responding to privacy-related queries’”. See also Xiangyu Qi et al, “Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!”, arXiv (5 October 2023): “We note that while existing safety alignment infrastructures can restrict harmful behaviors of LLMs at inference time, they do not cover safety risks when fine-tuning privileges are extended to end-users. … Disconcertingly, our research also reveals that, even without malicious intent, simply fine-tuning with benign and commonly used datasets can also inadvertently degrade the safety alignment of LLMs, though to a lesser extent. These findings suggest that fine-tuning aligned LLMs introduces new safety risks that current safety infrastructures fall short of addressing – even if a model’s initial safety alignment is impeccable, it is not necessarily to be maintained after custom fine-tuning.”

13 University of Toronto. (n.d.). Generative AI Tools and Copyright Considerations. Retrieved October 31, 2024.

14 Floridi, Luciano. (2023). AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models. Philosophy and Technology. Retrieved October 31, 2024.

15 Eastwood, Brian. (2023, July 19). How to Set Technology Strategy in the Age of AI. MIT Sloan School. Retrieved October 31, 2024.

16 See, for example: von Ungern-Sternberg, A. (2021, June 29). Discriminatory AI and the law – legal standards for algorithmic profiling. Draft chapter in S. Vöneky, P. Kellmeyer, O. Müller, & W. Burgard (Eds.), Responsible AI. Cambridge University Press (forthcoming); and see Van Bekkum, M., & Zuiderveen Borgesius, F. (n.d.). Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? Retrieved October 31, 2024.

17 Supra note 2. 

18 Neeley, Tsedal. (2023, May 9). 8 Questions About Using AI Responsibly, Answered. Harvard Business Review. Retrieved October 31, 2024.

19 Supra note 2.

20 Supra note 17.