Episode 34: Amanda Chaboryk and Alex Hawley on how to use AI in a legal practice

Yves Faguy: Welcome back to another episode of Modern Law. I’m your host, Yves Faguy.

Today we return to the topic of AI. But this time we explore its impact in the legal industry. Needless to say, generative AI has made significant advancements in terms of its capabilities and uses. In law, there are several key areas where generative AI is showing a lot of promise, namely in the efficiencies it can create in contract generation, document review, legal research and predictive analytics.

It’s expected to become an indispensable productivity tool across the legal profession. But how does that get implemented in a legal practice or department? I have two guests joining us today on this latest episode, which we recorded just before the holiday break. Amanda Chaboryk is our guest for the second time on the show. She’s the Head of Legal Data and Systems within Operate for PricewaterhouseCoopers in London.

Having started her career in legal project management, she now focuses on leading the operational delivery of complex managed legal programs and helping clients navigate through emerging technologies.

Also joining us is her colleague, Alex Hawley, an ESG Regulatory Solicitor for PWC in London. She started her career as a commercial litigator and now works with clients to help them navigate new developments in ESG and figure out how to incorporate emerging technologies like generative AI.

I should add that PWC announced last year an exclusive partnership with the legal startup Harvey, a platform built on AI technology from OpenAI, the creator of ChatGPT. So the idea here is to get a sense of how lawyers should think about approaching generative AI tools, how to work with them, and how to adapt their skillset. We also talk about some of the ethical and transparency considerations they need to think about.

Yves: Amanda Chaboryk, Alex Hawley, welcome to Modern Law.

Alex Hawley: Thank you, it’s great to be here. Thank you for having us.

Amanda Chaboryk: Thank you for having me again.

Yves Faguy: Yes, it’s your second time. There are a rare few who have managed to make it on the show twice now, so congratulations.

Amanda: Thank you so much.

Yves: I hope it’s an honour. We’re talking about generative AI and this is a topic that’s been of great interest to our listeners over the last little while. We’ve talked about it in the context of developing laws and regulation around how to govern AI.

But today, we’re going to focus a little bit more on the practice of law and the use of generative AI tools by legal professionals. So maybe Amanda, I can start with you. Perhaps you could paint us a general portrait of how you see generative AI impacting the legal industry and profession as a whole. What are the kinds of work environments where it’s being used and tested? What are the different attitudes toward getting to know AI and becoming familiar with it? Give us a general sense of what you’re seeing in the marketplace from your vantage point at PWC.

Amanda: Yes, of course. To start off, I would say the reason generative AI has been compared to the internet, to the impact the internet had, is its broad functionality and accessibility. For example, if someone wanted to create their own large language model, they can, and they don’t even need to be a developer.

The reason why I think it will be particularly transformational for the legal industry is the potential it has to automate so many tasks that are conventionally administration-heavy. For example, in terms of low effort, high reward, generative AI is very effective at summarization. A really interesting use case I actually experienced today was looking at financial reports from Power BI that showed spend against budget. I asked a gen AI model to summarize the findings for me, and it did it in seconds. That would conventionally take me much longer: printing out the visuals, using a calculator to determine the variances of spend against budget. That is just one use case.

But the reason why I think this is going to be transformational is that work that would conventionally take a long time and is known to be arduous, reviewing questions, summarizing them, extraction, can be done very effectively by gen AI, so more time can go into the core legal technical work.

What I’m also seeing in terms of usage is that different organizations are on different journeys. Some organizations are just starting out, doing lunch-and-learn sessions on ‘This is gen AI and this is how it can be used in legal.’ Some firms will be running active proofs of concept for legal gen AI applications. Some will be in the discovery stage, asking, “Do we need to deploy a gen AI tool? What parts of our business would be prime for using gen AI? What is our stance on gen AI, and our position on gen AI for clients?”

So in the banking industry, for example, or a highly regulated industry such as financial services, regulators insist on knowing why decisions are made in a certain way. And this is why the black box problem in gen AI can be quite problematic. For this reason, different firms will have different positions, and they’ll be working with their different SMEs in data privacy and IP to try and navigate the risks. And some will also be developing AI governance frameworks that say, “This is what we can do. This is what we can’t do. This is high-risk data. This is low-risk data,” and so on.

Yves: What is your mandate now in terms of use of these AI tools? What is it that you’re trying to do at PWC?

Alex: So at the heart of it, this technology is fairly new, but as Amanda said, it has the potential to be an enormous accelerator and to take away, or speed up, some of that administrative or time-consuming work. We’re trying to find the places where the new tools – which, by the way, are coming at us thick and fast; there’s a new technology almost every week at the moment – are going to make the greatest contribution, while also being safe in terms of safeguarding client data, where the outputs are going, the reliability of the output, and our professional commitments to privilege and to data protection.

So I’ve had this described, and this is from a paper that some researchers at Harvard Business School released in September, as a jagged frontier. When you’re exploring with trial and error what you can do with large language models in particular, you can think of it a little bit like a castle wall. It has some battlements that jut out, it has some coves that go in; it’s not a straight line. And unless you explore it, it’s not easy to see which tasks fall within that castle wall. Inside, gen AI is fantastically good and will accelerate your business performance and allow you to deliver the best outcome safely for your clients.

And what actually falls outside the castle walls of the new technology can become either risky or ineffective. So a lot of our mandate at the moment is really exploring those use cases to see what’s going to fall within our castle walls, what’s going to be a great accelerator and deliver really good service to our clients, and what’s falling outside, and, as we navigate the new technology, learning a little bit more how to tell the difference between the two.

Yves: Amanda?

Amanda: So I would say, building on what Alex mentioned, it’s this complex task where we need to keep pace with the development and, as these models improve, the different use cases, understanding how our clients are using them. So for example, in the insurance industry, AI has been used for fraud detection for a very long time, and it’s also used widely in claims management. An immediate use case, for example, is a generative AI model or chatbot providing improved, more human responses to queries, like when you engage with a chatbot while you are waiting for a delivery.

The primary challenge is keeping up to date with the rapid developments and the regulation, such as the EU AI regulation for example, while at the same time applying caution and making sure you’re compliant with these strict standards. So it’s not just down to the technology, it’s also training the humans. If we want people to engage with this technology, we need to make sure they understand things such as: what is the transformer model? What is supervised learning? What is tokenization? As well as being able to translate commercial requirements into technical requirements, and technical requirements into commercial requirements. So bridging the expertise between, let’s say, an ESG SME like Alex and a full stack developer, someone who might be fine-tuning the models.

I don’t want to say it’s the first time, but this is probably one of the most critical times when very different skillsets need to work together, not just to provide education, but also to develop these applications and use cases safely while making sure they’re accessible.

Yves: I want to get to that. Just perhaps first, how are you seeing people, particularly in the legal profession, respond to this? So responding to a new technology that has been billed as something that’s going to be quite disruptive for their lives, not necessarily meaning that it’s going to get rid of their jobs or anything like that, but that they are going to have to adapt to a fairly different way of practicing. And how are they reacting to sort of some of these challenges that come with having to collaborate with people who have different skillsets? Who wants to take that one?

Alex: If I take the first bit, I think gen AI actually hits quite a fundamental tension of being a lawyer: we’re always told to be very commercial, but we’re also trained to be extremely cautious, to foresee every eventuality and to point out problems, or scan the horizon for problems, that others might not foresee.

So from my own point of view as a practitioner, I’ve seen a real conflict within people who really see the transformative potential and would like to be early adopters, but who are approaching it, correctly I think, with a great deal of caution, conscious that this can address quite a lot of the tensions of the profession, not least cost, which clients are always pushing us as a whole profession to look at.

But they’re very mindful that, as Amanda said, the black box problem means that we can never truly have full confidence in the output. They’re very keen to learn about it, but also keen to make sure that we have proper protections and an overlay of human review, and keen to ensure that when it comes to the use of gen AI for some of those more junior tasks, we’re securing the future of the profession by continuing to train those juniors in the skills that gen AI may, on the surface, be quicker at.

Yves: We’ll get to the skills. So what is the black box problem and how does it apply in the legal context for legal practitioners using this?

Amanda: What’s the black box problem? It’s the challenge of understanding how an AI model generates its output. You put in an input, which is a prompt, and the question is: how did it get to this result? In gen AI, for instance with generative adversarial networks, also known as GANs, the models are trained to generate new data that resembles a given data set. The process by which they do this is often difficult to interpret; it’s opaque.

That goes back to the thing I mentioned about regulators wanting to understand how decision-making works. In the UK, under the Insurance Act 2015, risks need to be quantified, so there are always application forms that quantify risk. It can be quite problematic when there isn’t full transparency, particularly in high-stakes scenarios. Let’s say autonomous driving or medical diagnosis: how did the model get to this result?

There are some tools, for example, where you can upload a document, ask questions of the document, and it will provide the page reference; that speaks to ‘trust but verify’. You can say, this is a 300-page report, but the answer came from this page, so you can verify the output. With other models, that isn’t a possibility. When you receive an output from a model trained on millions of documents, it’s really hard to determine where that output came from. Which documents? And that alludes to the black box problem.

Yves: This is a big question. So how do we ensure that lawyers remain in a position in which they can critically assess the outputs of the AI tool? Is there kind of a peer review component to lawyers checking the AI that needs to be developed, or are we working on this? Do we need to teach that to lawyers? Do we need to teach that to lawyers coming out of law school? It raises a lot of questions, because I think we’ve been used to, over the decades, as lawyers having our own methods about how to verify information, how to make sure it all stacks up. This is quite a game changer in this respect. So what do we do now?

Amanda: In terms of the saying ‘trust but verify’, in delivering any legal work there’s often a senior person who will check the work, and work is often looked at by quite a few sets of eyes. But it’s leveraging the SME expertise and experience as well as verifying the output. For example, if someone did research on, say, the role of the Financial Conduct Authority, they would go and verify it against a different legal research source, or they would go onto the site directly and confirm their output. That would probably be a simple version of verification.

But this is where the whole AI governance piece comes into practice: having rules, regulations, structures and procedures at different levels. Let’s say someone was doing a tax calculation. In a given tax year there will be different thresholds, so instead of just asking the model, you go onto the government site directly and put those specific thresholds into the model. So you have that human augmentation, but it will still aid efficiency.
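As a rough illustration of that workflow, and not a description of any specific tool, here is a minimal sketch in which the practitioner supplies verified thresholds from the official source directly in the prompt; the llm_complete call and the figures shown are assumptions for illustration only.

```python
# Sketch of the "human augmentation" step described above: the practitioner copies
# the thresholds from the official source into the prompt instead of relying on the
# model's memory. `llm_complete` is a hypothetical wrapper around whatever gen AI
# service is in use, and the figures below are illustrative placeholders.

def build_tax_prompt(question: str, thresholds: dict[str, str], source_url: str) -> str:
    """Embed verified thresholds (copied from the official site) into the prompt."""
    facts = "\n".join(f"- {name}: {value}" for name, value in thresholds.items())
    return (
        "Use ONLY the figures provided below; do not rely on remembered rates.\n"
        f"Figures (taken from {source_url}):\n{facts}\n\n"
        f"Question: {question}\n"
        "Show the calculation step by step so a reviewer can verify it."
    )

prompt = build_tax_prompt(
    question="Estimate the income tax due on a salary of 60,000 for the 2023/24 tax year.",
    thresholds={"personal allowance": "12,570", "basic rate": "20% up to 50,270"},
    source_url="https://www.gov.uk/income-tax-rates",
)
# answer = llm_complete(prompt)   # hypothetical call; the output is still human-reviewed
```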

Alex: I completely agree, Amanda. And to pick up the second part of your question about training the human, my perspective is that, never mind the trepidation, these models exist. And if you don’t think you’ve already got an email drafted by ChatGPT from a trainee, I dare you to look again, because you possibly have. And that’s the inherent problem, or beauty, of the model: sometimes it’s indistinguishable from a human output.

I see it a little bit like learning to drive a car. These things are tools that may seem very intuitive, because you’re querying them with the natural language we all use in our day-to-day life. So it can be easy to lose sight of the fact that there are certain levers, buttons, tips, tricks and ways you can use them, things they’re good at, and things they can actually be quite dangerous at. But in the same way that we’ve assimilated as a culture how cars fit into our day-to-day life, what tasks they’re good for, what speed you should drive at, and the rules of the road, that is how I think we should be training a human when we’re looking at gen AI.

Some of that, yes, to answer your question, is that out of law school I think people should be trained on prompt engineering and on how this is going to be a factor in the legal practice of the people graduating today. But that doesn’t mean, exactly as Amanda says, that we should all just jump in and use them for everything without being a little bit critical about what their use is actually going to be.

Yves: There is a lot of talk today about prompt engineering, and not just in the legal industry or the legal profession, obviously. How does that translate into lawyers’ skills? There’s a fellow here by the name of Jordan Furlong who’s a legal analyst, and he’s actually shown quite a bit of interest lately in AI and lawyer formation. One of the points he made, which I thought was a little amusing, was about how lawyers historically have not always been the best at instructing others. He uses that illustration of the poor young junior coming into the senior partner’s office, being told to do something with minimal direction and minimal understanding of the background of the case, and running off to sort of fend for himself.

But this notion, and I know we shouldn’t be anthropomorphizing AI by any stretch of the imagination, but generative AI does sort of behave like a human being. We speak to it, we prompt it. And because it does, I’m wondering: do lawyers have to learn how to work with it as if it were a person? Is this a skillset we need to start developing? How do you approach the skillset of prompt engineering? I think a lot of people have questions about that outside the legal industry too. But if we are going to be using this in legal, what do young lawyers and older lawyers need to know about that?

Alex: Well, I’m happy to go first because personally, my background, I’ve got an English degree. So I love the genesis of prompt engineering because it gives me an excuse to justify all those years studying. Literature and linguistics were worth it.

So I think the analogy of approaching gen AI like a trainee or interacting with gen AI like a human is a good one, because it makes people realize, and this is for young lawyers and old lawyers alike, that gen AI is not going to solve human problems in the sense of how humans interact with others they work with in the workplace, giving adequate instructions, and sometimes taking the time to think through what it is that you actually want, rather than giving, in your example, a flippant instruction to a trainee that maybe wasn’t properly thought through. And just as you get a bad output from a trainee, you’re going to get a bad output from a gen AI if you do that.

The other piece that I think is key to this training and again goes all the way up from junior to the most senior, is there are pretty much as many versions of legal practice as there are lawyers. There’s not going to be a centralized solution. There are specialist models, sure. Maybe we’ll talk about PWC’s Harvey model which is a legal specialist gen AI large language model.

But there’s not going to be a centralized solution. It’s on each of us to actually engage with the technology and to train ourselves, and to gain the skills necessary to actually keep up with the pace of development in our own legal practice.

Amanda and I work together frequently, and I can guarantee that the way each of us uses the same model on the same project is completely different.

Yves: So how do we teach that? Is it just by doing or are there rules that we should be following, Amanda?

Amanda: What I would say is that what Alex and I have in common is that we both actually studied English as well as law. And if you think about it, a large language model is all about language, so there’s already a really unique relationship there. Prompt engineering and lawyers asking the right questions are similar in that both involve eliciting information, from a human or from a system, and both law and large language models turn on the importance of language and how to use it to convey meaning and extract information.

But one of the most important elements is still leveraging that human expertise and lived experience. An example: I was asked to extract information from a series of contracts using a gen AI model, and I had to change the prompts six times. Once I found the prompts that worked most successfully, I included them in a playbook. Then, when other members of the team had to do that extraction, they used those prompts and provided a rating.

So from the experience of engaging with the tool, we determined the prompts that would elicit the best output. And the other benefit is that because the tool provides the page number, we had someone do the quality assurance. That entire system for a set of contracts we put into a playbook, and we said we can leverage this system again, noting that it’s based on X contract. If someone has to do a contract extraction exercise again, they can do the same thing. But it’s that cycle of experimenting, improving and learning, and knowing how to carry those findings over to other tasks.

Yves: So you’re prompting and you’re testing a series of prompts to extract information from this contract. Meanwhile on the side, you’re having a human do the same exercise just to sort of quality test it?

Amanda: Yeah. We have a human go back, because the tool provides the page, a reference for where the information came from. The human would verify that yes, this extraction was done correctly, and provide a ranking of how well the tool completed the extraction.
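As a rough sketch of how such a playbook might be recorded, assuming a hypothetical run_extraction helper rather than any particular product, each prompt that survived the iteration could be stored with the contract type it was tuned on, and every reuse could record the reviewer’s rating once the cited pages have been checked.

```python
# Sketch of the prompt "playbook" workflow described above: prompts that worked are
# recorded with the contract type they were tuned on, reused for later extractions,
# and each run gets a human QA rating once the cited pages have been checked.
# `run_extraction` is a hypothetical call into whichever gen AI tool is in use.
from dataclasses import dataclass, field

@dataclass
class PlaybookEntry:
    task: str                   # e.g. "termination clause extraction"
    prompt: str                 # wording that worked after several iterations
    tuned_on: str               # the contract type the prompt was refined against
    reviewer_ratings: list[int] = field(default_factory=list)  # 1-5 human QA scores

    def average_rating(self) -> float:
        return sum(self.reviewer_ratings) / len(self.reviewer_ratings) if self.reviewer_ratings else 0.0

playbook = [
    PlaybookEntry(
        task="termination clause extraction",
        prompt=("Extract the termination provisions. For each one, quote the clause "
                "verbatim and give the page number it appears on."),
        tuned_on="master services agreements",
    )
]

def extract_and_review(entry: PlaybookEntry, document_path: str, rating: int) -> None:
    """Run a playbook prompt, then record the reviewer's rating after they check the cited pages."""
    # output = run_extraction(document_path, entry.prompt)  # hypothetical gen AI call
    # ...a human opens the cited pages, confirms the extraction, then assigns a rating...
    entry.reviewer_ratings.append(rating)
```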

Yves: Okay, and my next question is this: if I go into a generative AI tool and ask it to perform a task for me, presented in a certain way, it will give me an output. And the very next time, I can ask it in exactly the same way and it will provide me with a slightly different output. So how do you factor that in?

Amanda: Because of how sensitive these models are, even adding an additional word can change the result, so in terms of prompting it’s so important to provide context as well as limitations, such as “keep this to 250 words.” It is very natural to get a different response, but that’s where you would verify the output. Let’s say you asked a question about capital gains tax in Ontario: you would run the prompt, and then you would go to the relevant authority or relevant case law and verify that the output provided is correct.

Yves: Okay. How does it work in the context of e-discovery?

Amanda: E-discovery, so technology-assisted review, is probably one of the first uses of AI in legal. And it was transformative, being able to review volumes of documents at speeds and with accuracy that hadn’t been possible before. I would say that some e-discovery applications are looking into gen AI additions and ways to incorporate gen AI into their existing infrastructures. But it varies, and it is used for a very specific purpose. Alex, you might want to chip in on this. In e-discovery it’s to find the smoking gun. That has a very different purpose than gen AI.

Alex: Yes. And as in Amanda’s previous example of reviewing contracts or documents and then having a human repeat part of the process, some of the use cases we’re exploring in e-discovery are about the ability of a large language model to actually identify a concept. We already have tools that can identify a key word, related key words or words within a cluster, sort of intelligent review tools.

But really, if you’re trying to be a little bit more intuitive and to ask about a concept, say a tenet of your case, then it can really accelerate the querying. A human could do it as well, but they’d have to read back through the documents, or a certain set of documents, multiple times. So really, it’s just allowing those documents to be brought to your attention for human review so much quicker and so much more intuitively than previous tools.
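A minimal sketch of the contrast being described, with embed standing in for whatever embedding model a given review platform supplies (an assumption, not a reference to any specific tool): keyword review surfaces documents containing agreed terms, while concept review ranks documents by semantic similarity to a described tenet of the case before passing them to a human reviewer.

```python
# Sketch contrasting keyword matching with concept-based ranking for document review.
# The caller supplies `embed`, a stand-in for whatever embedding model the review
# platform provides; ranked documents still go to a human reviewer.
import math
from typing import Callable

def keyword_hits(documents: list[str], keywords: list[str]) -> list[str]:
    """Classic approach: surface documents containing any of the agreed keywords."""
    return [d for d in documents if any(k.lower() in d.lower() for k in keywords)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def concept_hits(documents: list[str], concept: str,
                 embed: Callable[[str], list[float]], top_k: int = 20) -> list[str]:
    """Concept approach: rank documents by semantic similarity to a described tenet
    of the case, even where none of the literal keywords appear."""
    query_vec = embed(concept)
    scored = [(cosine(embed(d), query_vec), d) for d in documents]
    return [d for _, d in sorted(scored, reverse=True)[:top_k]]
```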

Yves: Is that different from the use of large language models, or is it the same thing? Because as far as I understand large language models, we’re really divining the next word, or rather the tool is divining the next word. So how does this work? If we transpose this to the task of e-discovery, where we’re trying to get it to be more intuitive about finding a concept, is there actually a different exercise going on there?

Amanda: So gen AI models are designed to learn patterns and structures within data and use that information to generate new content. In the legal industry they’re often used for contract analysis, document review and legal research, and gen AI is also effective at creating synthetic data sets. E-discovery is a bit different. It’s for identifying and collecting electronic documents for use in legal proceedings.

E-discovery tools are designed to search large volumes of data, so emails, social media posts and other electronic documents, to find relevant information that can be used in litigation.

Where it’s a bit different from gen AI is this: gen AI models can be trained to work autonomously with little or no human intervention, but e-discovery tools typically require quite a lot of human oversight, defining key words and ensuring the data collected is processed in a legally defensible manner. So they can both assist with legal tasks, but they have quite different purposes.

Yves: I think another part of this conversation that’s very important and probably one that worries a lot of practitioners is everything that touches upon – we talked a little bit about transparency earlier – but also the data privacy component of this. And obviously, lawyers deal in very sensitive information, information of their clients. And I’m wondering, from your vantage point again, because legal regulators, certainly in Canada – I’m sure it’s the case in the UK although you can educate me on that one – legal regulators are trying to figure this out.

I mean, in fact, our national governments are trying to figure this out. The UK has contemplated its own artificial intelligence legislation, although it seems to have decided to go for more of a laissez-faire approach. But in the EU, you have the EU AI Act, which is likely to become law shortly. So I realize that those are probably going to impose some privacy obligations on outfits that are using these tools.

Here in Canada, we are also making a bit of a hash of it, but we’re trying to debate our own AI legislation. So help me understand in the context of the legal industry itself, how should legal operations teams, how should law firms, how should legal departments be approaching the privacy issue, from its use to the protection and collection of that data?

Alex: So in this kind of, as you say, pre-specialist-regulation period for our use of AI, there are a number of already relevant regulations, and also professional obligations in how we conduct ourselves, AI or not. And they become really relevant here because we’ve got a professional duty to our clients to safeguard their confidentiality and, as you said, extremely sensitive data.

And therefore I think it’s incumbent on each of us to actually understand that if you put information into a public, non-ringfenced large language model, you might not know exactly where it’s going, but you know that it’s going into the ether, potentially into the training set of that large language model. That is not consistent with safeguarding the confidentiality of that data.

We’ve also got issues of privilege, which is something the legal community is talking about a lot but which has not been fully bottomed out. As you know, privilege belongs to the client, and if you broadcast or disseminate information, it loses that inherent quality of confidentiality that’s a prerequisite for privilege. So by putting client information into a non-ringfenced large language model, are you actually putting the client’s privilege, or claim to privilege, and the information itself at risk?

This is something where, as Amanda alluded to, we’ve got very strict policies and procedures in place as PWC: not to use public large language models with any client-identifiable data or confidential data in any shape or form. There are new models being developed that are ringfenced and that do help safeguard that confidentiality. But it’s for the practitioner, I think, to actually engage with the different models out there, to engage with the impact on the client’s data and with what the professional duties are in each case, because it will only ever be on the individual practitioner to select the correct tools and to make sure they are fulfilling their professional duties.

Yves: How far do we need to go? You can remove easily identifiable information about an individual or about a client. But once you start cross-referencing context and other information and all that, you can sometimes start finding ways to identify that person or that client. Is this a concern for you? And how do the regulators need to address this?

Amanda: What I was going to say on this topic is the importance of accountability in governance: supervising AI systems and staff, making sure they work as expected, verifying results, keeping up to date with regulation, working closely with IT teams and developers to understand transparency and, for example, how the models work, making sure people do training before they engage with the models, and making sure fairness is understood so that personal data is only processed in ways people would reasonably expect.

The benefit law firms have in general is that they have SMEs in data privacy and IP who are familiar with these issues and who are checking. For example, in the UK we have the ICO, the Information Commissioner’s Office; you can peruse their recommendations and learn what kind of impact these technologies can have and how to engage with them optimally.

Yves: So we are recording this by the way, just before the break. It’s December 19th. And this probably will get broadcast early in January. But looking at the year ahead, what is it that you anticipate is going to happen? Let’s talk maybe first about how wide do you think the adoption of generative AI will be in firms big and small?

Alex: So, looking at what I alluded to earlier, the pace of development, and looking forward to 2024 and the rumours of, I think, ChatGPT Enterprise, which is going to make it a lot easier for non-programmers to build specialist iterations of large language models, I think even though there will have to be caution and trepidation around how we do it properly, the market will simply move on such that it’s expected that this is incorporated into legal practice.

And I think there will be two drivers, and they’re both quite human, not technological at all. One of them is the constant press for cost reduction and efficiency in law. And that’s not for its own sake; a lot of the time it’s a driver of access to justice for people. You know, we see it in the courts, we see it in the approved rates for litigation. This is going to be a pressure on our industry, and it’s one that large language models promise to address.

The other one is: on the whole, in my experience, lawyers want to do interesting work. We’re curious folks. We’re interested in the intricacies of new cases and new people and new clients. Nobody comes into law to want to do that repetitive, administrative work that large language models, again, promise to do so well.

So I think the adoption will also be driven from a very human place of people pressing their employers, of saying, “Look, I will get a more fulfilling career and I will be a better lawyer and I will have exposure to higher level, more specialist training and work if we’re able to in effect have a bit of an industrial revolution of how we adapt to some of that administrative work.”

So I think there will be two pressures in 2024. One will be from clients and from the market. And the other will be from lawyers themselves pressuring their firms and saying, “This is something I’ve read about and maybe use in my personal life. Why has it not trickled into my employment?”

Yves: Is there an advantage? Does it level the playing field between smaller outfits, smaller firms, or is it the big incumbents who are going to really be able to leverage this to its utmost capacity? Or does everybody win from it?

Amanda: I think it will depend on an organization’s strategic priorities and the SMEs it has in that area. Companies should look at AI through the lens of capabilities rather than as a technology. What are the important business needs? Is it automating processes? Is it gaining insight? Or is it engaging with customers and employees, for example?

Some organizations will say, “Well, maybe this is more suited to robotic process automation,” which has been around for quite some time. And some organizations might say, “We need something more sophisticated,” and they’ll start looking at use cases for generative AI, and they’ll make a decision. I do think that gen AI has been a lot more accessible than other technologies, and in a sense a bit more inclusive for that reason.

So companies will need to leverage the capabilities of their key employees. They’ll have to look at the skills and ask: what can gen AI be doing in terms of extraction? How can we almost supercharge our people with the tools that are available? In different industries there will be a wide abundance of use cases, and companies will be thinking: where do we have the bottlenecks? What technology will work best?

But where I see the opportunities in 2024 is that, where people have been doing proofs of concept and discovery, I think some of these models will actually start being built and tested, and the ideas that previously seemed far-fetched will be executed, improved and fine-tuned.

Alex: To pick up on that point about the different players and whether one has an advantage and agility over the other, it’s interesting. One of our partners, Sandy [Pagawell] who’s spearheading some of our gen AI partnerships, his perspective was that in the coming months or year, gen AI is not actually going to replace humans, and it's not going to replace lawyers, but it’s going to replace lawyers who haven’t engaged with gen AI.

And I think we’ll see more and more of that, small outfit or big outfit. These tools are democratized and access is so universal now. Even though bigger outfits that are able to be very agile will be able to train specialist models and will have really good SMEs and fantastic specialist tools, I think a big determining factor will be willingness to engage, willingness to find those new ways of working, and not becoming one of those lawyers who is left in the dust simply through not engaging with the new era.

Yves: Now, you have luddites. There are 25-year-old luddites out there. There are 75-year-olds who are on to the latest gadget. But how are you seeing it play out in the workplace, and I don’t want you to name names or anything like that. How is it playing out generationally right now?

Alex: Actually, in my experience, and I’ve been running some sessions on prompt engineering and on responsible use of AI throughout our tax business, some of the people who have been most engaged, most excited, and who have seen the possibilities of gen AI with maybe more perspective, have been the more senior, on the whole the older, generation.

So maybe this is a benefit of working somewhere as agile as PWC, but I’m not seeing a generational divide personally. A lot of the push is coming from the juniors, where there is that bit of tension: us qualified lawyers saying, “No, I’m sorry. You need to go through the motions just like we did, to learn the skills that you’ll need in the future.” But that’s always been there. It was there as a tension between print resources and online resources, and now it’s gen AI.

But save for that, I think people with 15, 20, 25 years of legal experience have actually seen far more, and in a way, are getting more excited about how it could have helped them at different points of their career.

Yves: Amanda, if you had some parting words of advice for lawyers who now a year into generative AI are beginning to say, “Okay, well this is probably more than just a fad, and how do I approach this,” what would you tell them?

Amanda: What I would recommend is taking advantage of the abundance of free resources available online to learn about generative AI and how the models work. There are free courses, for example, by Deep Learning. You could even invest three minutes a day; that’s 21 minutes in a week, and over a month that adds up to quite a lot of learning. So take advantage of the free resources available. Read how people in your respective industry or practice area are utilizing the technology. Engage with it. Think about your practice: what generates the most administration? What parts of my practice do I wish could be automated? Do I spend a long time replying to emails? Do I spend a long time having to summarize judgements?

Think of how you can supercharge your practice and start testing these technologies, and have a structured format for doing this testing, such as: is this a response I would be comfortable sharing with a colleague? Can I rely on this? Or: I used gen AI to do this and it was very successful at summarization, or it summarized this Companies House report for me quite well. From there, determine how you can find efficiencies in the work you do, and also find opportunities to help your clients do the same.

Yves: So this is probably my last question. You’re both working with clients, and as I understand it, you’re really collaborating with clients on the use of these tools. There might be lawyers out there who are working with clients and haven’t really had this conversation. So how should they address the issue with their clients? Because I think some people worry, “Well, if I’m going to be putting things through a generative AI tool, that’s a time saver, and therefore I’m worried that the client might think that I should be charging them less, or I should be taking that into account in my costs.” So how should lawyers be having this conversation with their clients?

Alex: Well, to answer the first piece, yes, we are working with clients. And actually, some of the most direct challenges of “why aren’t you using this technology more” have come directly from my clients. And maybe I’ve been that cautious version of a lawyer and had to actually deliver the bad news and say, “Well, it’s because I’m safeguarding your data confidentiality, or I’m not confident that this is going to do a better job.”

So some of it I think is engaging enough that you can have that conversation and you can actually stick to your guns when you are being pressed. It’s your own practice and if you don’t think that your client is going to be best served, then equally, you should learn to say no.

But the other piece is actually having done that thinking behind the scenes about where it could help. It is incumbent on us, as part of doing the best for our clients, to give them the right service efficiently. So actually engage in that conversation with the client and say, “Actually, where do you have concerns? Where would you like to see us use it more? What’s your level of understanding? What’s your risk tolerance?” Because there will be a range.

I was speaking to a law lecturer at King’s College London, at the School of Law there. She was working on gen AI and access to justice. And actually, for some people, your client will come to you and say, “I can’t afford for a lawyer to go through all of these documents, and functionally I would have no access to justice if that was the only model available to me. How can we explore my risk tolerance, and how can we actually go forward and explore new ways of doing this that wouldn’t have been available to me a year ago?”

Whereas another client will come to you and say, “Look, I expect the very top level of going through this with a fine-tooth comb, and that cannot be achieved confidentially or properly with gen AI.” So to an extent it’s about listening to your individual client, and having done that thinking in your practice, sticking to your own boundaries of where you can have confidence that you’re fulfilling your professional duties. But it’s not one size fits all, and no one shape of large language model use will be appropriate for every client.

Yves: Alex Hawley, Amanda Chaboryk, thank you very much for taking the time to speak with us about this. It’s been a great conversation.

Alex: Thank you.

Amanda: Thank you so much for having us.

Yves: You’re listening to Modern Law, presented by the Canadian Bar Association’s National Magazine.