The Law With AI: Regulating AI: Can the Law Keep Up? with Paul Schwartfeger
In this podcast, host Will Charlesworth speaks with barrister Paul Schwartfeger about how AI is reshaping legal practice, emphasising its impact on data protection and privacy.
They discuss the polarised views on AI's capabilities and its potential to democratise access to justice by providing legal tools to individuals who might not otherwise have access.
They also explore the evolving role of lawyers in ensuring the ethical use of AI, including the importance of human oversight in legal processes.
Join them as they navigate the complexities of AI in law and its implications for the future.
Transcript
The information provided in this podcast is for general information purposes only and does not constitute legal advice. Although Will Charlesworth is a qualified lawyer, the content of this discussion is intended to provide general insights into legal topics and should not be relied upon as specific legal advice applicable to your situation. The discussion also reflects Will's personal opinions. No solicitor-client relationship is established by your listening to or interacting with this podcast.
Paul Schwartfeger:In the sort of nearish-term time frame, AI is very much going to be handling more of our routine tasks, looking at things like disclosure, contract analysis, admin tasks.
I think we'll increasingly use and rely on AI as an assistant as well, so it might even be small things like paying court fees on time or making sure that documents are filed by a given date. Rather than just setting reminders as we might now, we might actually hand over or delegate some responsibility to those tools to do so.
Will Charlesworth:You're listening to WithAI FM. Welcome to the Law with AI podcast. I'm your host Will Charlesworth.
This podcast is about breaking down and understanding how artificial intelligence is challenging the world of law, policy and ethics.
Every couple of weeks, I will look at important topics such as how AI is impacting established areas of legal practice, how it's challenging the law itself on issues such as privacy and intellectual property rights, and how it's raising new ethical concerns and essentially reshaping the regulatory landscape.
To help me in this task, I'm having some candid and engaging conversations with some fascinating guests, including fellow lawyers, technologists and policymakers, in order to gain a real insight into what it means, not just for the legal profession, but for the commercial landscape and society as a whole. As always, this podcast is for general information purposes only and does not constitute legal advice from myself or any of my guests.
And it's also the personal opinions of myself and my guests. So whether you're a lawyer or just someone curious about how AI and the law mix, you're in the right place. So let's jump in.
Today I have the pleasure of being joined by barrister Paul Schwartfeger. Paul is a barrister at 36 Stone, specialising in commercial litigation and international arbitration.
Drawing on his extensive background as a technology and business consultant across Europe and in the US before he was called to the Bar, Paul focuses on engagements and disputes involving data and those grounded in or intersecting with technology.
His deep expertise in system architecture, networks, data, AI, blockchain and other emerging technologies enhances his legal practice, enabling him to tackle complex legal challenges and deliver meaningful solutions for clients in the ever-evolving digital landscape. So with that having been said, with that impressive introduction, Paul, it's a pleasure to have you here today. Thank you very much for joining me.
Paul Schwartfeger:Thanks, Will. You'll make me blush.
Will Charlesworth:Before we jump into the questions, I just wanted to quickly congratulate you on the excellent series that is Artificial Intelligence: Navigating the Legal Frontier.
Paul Schwartfeger:Thank you.
Will Charlesworth:And that was a seamless plug for that. But if the people listening to this or watching this haven't caught it yet, I would encourage you to check it out.
It expertly breaks down some really interesting legal and technological concepts into something that I can understand, which is fantastic, because that's always a bit of a challenge, and it has some amazing visuals and some insightful comment.
I will put a link in the show notes for that. We will do that. But yes, congratulations.
Paul Schwartfeger:Thank you. I mean, it brought together a few of my favorite topics, dinosaurs and space. So it was quite an exciting one to make.
Will Charlesworth:So you have a background in technology, and probably my first question is: how has AI impacted on your practice as a barrister?
Paul Schwartfeger:Yeah, I think my starting point's probably a little different to most because, as you say, I do have a background in technology, so I use artificial intelligence tools quite extensively in practice, but I've always used technology extensively in practice. So I used text summarizers long before we had LLMs and ChatGPT, and I used k-means clustering algorithms to look for patterns in data and the like.
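(For illustration: a minimal sketch, assuming scikit-learn is installed, of the kind of k-means pattern-finding Paul describes. The sample documents and the choice of two clusters are hypothetical, not his actual workflow.)

```python
# A minimal sketch of k-means clustering over text documents,
# assuming scikit-learn. The sample texts are hypothetical stand-ins
# for real case documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Supplier failed to deliver the software milestones on schedule.",
    "Dispute over late delivery of a cloud migration project.",
    "Personal data was shared with third parties without consent.",
    "Customer data processed unfairly by an analytics vendor.",
]

# Turn each document into a TF-IDF vector so similarity between
# documents can be measured numerically.
vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

# Group the documents into two clusters; n_init=10 reruns k-means
# from several random starts and keeps the best result.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, doc in zip(labels, documents):
    print(label, doc)
```

Clustering like this groups similar documents together, so recurring themes can surface without anyone reading every file first.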
So for me, artificial intelligence really feels like something of an evolution. It's an expansion and augmentation of tools and practices that I've long had.
But what I'm seeing, and certainly when speaking with clients and speaking with others, is that AI is very much seen by some as a revolution. So I think we have slightly different starting points and slightly different expectations as a result of that.
One of the things that came through, I think, both in the series that you mentioned and also just in my work generally, is that I find that you tend to get relatively polarized views.
So some are astonished and impressed by the technology and are therefore optimistic about it, and others are perhaps astonished and terrified about it. And so they're somewhat more pessimistic about these things.
And I think I even see that sort of polarization playing out in terms of the nature of the claims that I get.
So with the slightly more optimistic people, perhaps sometimes those clients are coming to me because they're disappointed: there's been a delivery failure, something's been overhyped, or there are issues of misrepresentation and they might have expected more from a solution.
But you also find that some people come to you thinking very much that AI is a threat and looking at issues of unfair data processing and perhaps reliance on data points within AI that oughtn't have been relied on.
One thing, though, regardless of your starting point, is that there's a definite impact on my business, and I would expect on others', which is greater consideration for data protection and privacy.
So as these tools are coming out and coming into the mainstream, being alive to what's happening to your data and to clients' data, and how that information is being used, is probably one of the biggest changes for many.
I've always been one for reading the terms and conditions and checking the settings on these apps, but I think this is something that we all have to do now because it's a real concern not knowing necessarily where that data is going, how it's being used, whether it's being incorporated into models and the like.
So I think that's probably the most significant impact for me on my business, but I can certainly see that it has quite substantial impacts on others as well.
Will Charlesworth:Yes, and I've certainly noticed that there is, or there certainly has been a thought amongst many people that perhaps AI isn't quite delivering what it has promised to deliver.
And obviously in the press, the articles that get the most traction are those which perhaps over-promise or certainly hype up the capabilities and the delivery of AI technology, and how it will revolutionize things in a very short period of time. However, if we compare where we are now with where we were promised to be, there's probably quite a large gap between the two.
Paul Schwartfeger:I think there is.
I think the thing that's perhaps slightly more interesting is thinking about how AI is impacting on my life rather than just my business. Because I'm acutely aware that AI does actually have small but quite regular impacts on my life in that way.
And they're not necessarily measurable and often they're invisible.
But there are definitely things that I know, through my work as a technologist before I came to the Bar and through my own research into legal issues arising now, that AI is doing to all of us. So AI already shapes the music that I listen to. It's very much present in curating my playlists for me.
It shapes the food that I buy and the food that I eat because, of course, the loyalty apps that I use for the supermarkets I shop at are all plugged into algorithms and artificial intelligence systems looking to make recommendations that will encourage me to buy and shop at the store concerned. And AI also obviously shapes the news that I read, in some ways, because it's informing what sort of information goes into my newsfeed.
So my newsfeed has quite a strong technology bent, but that's coming about as a result of algorithms learning about the sorts of information that I'm looking at.
So, like all of us, I imagine, I'm increasingly becoming boxed in by these sorts of filter bubbles that are being imposed on us and that are deciding what it is that we see or listen to or buy or engage with. And this sort of issue is small, but it's fairly pervasive.
And thinking about it over the slightly longer term, it risks limiting our discovery, it risks reinforcing bias, and it really is going to start having an impact on the sorts of choices we make. And I think this is where I become slightly more concerned about what freedom I will have to make those choices in future.
So when it comes to picking a credit card or looking for a particular financial product or service, am I going to have a true choice of the market? Am I going to be able to look at all the parameters and think about, well, what actually suits me best?
Or am I going to find that I'm narrowed in on products that artificial intelligence systems think are best for me?
And the cumulative effect of that is, of course, slightly worrying over the longer term, because these are the sorts of influences that have the potential to shape our identities and our experiences. And they're very hard to unwind once they're overlaid onto you and you've been exposed to them for a long time.
So there's definitely sort of small impacts on my life as well that I am aware of that I can see growing going forward.
Will Charlesworth:Yes, and I agree with you. I think it is the thin end of the wedge, and it's obviously been happening for quite a long time because we don't think about it.
But to go to your good examples of shopping: Amazon probably knows me better than most of my close friends do. Netflix is certainly something that you rely upon to deliver, I suppose, content that you probably like.
But as you've said, we lose the option or the opportunity, or it becomes more difficult, to go outside of what we are being fed and experience a different culture or different content, perhaps because as human beings we're naturally wired to make efficiencies in our brains, and if things are fed to us, it's easier just to click and say buy. We think you might be interested in this. It's interesting you mentioned the music point as well. I had this this morning on my music app.
It was just suggesting things I might be interested in.
And it sends you down a rabbit hole, perhaps, which I never really would have thought of going down, but then you are boxed into that rabbit hole as well. And it's interesting you mentioned it at the start.
So as you said, you take the time to go through, say the terms and conditions or the settings on AI apps or just generally on apps or websites and you take obviously a keen interest in where your data is going and how much is being shared.
I mean, possibly it's a straightforward answer, but do you think enough is being done to raise awareness amongst people about how much they should be sharing with AI?
Do you think there's enough awareness out there, and if not, do you think that awareness would actually change how people interact with AI on a big scale? Or do you think they would still be willing to click through and just accept that their data is then becoming part of a global LLM?
Paul Schwartfeger:I mean, experience tells me that people will just click through. But putting that aside, I think it's slightly more difficult now, because on the one hand we don't want to inhibit adoption and scare people off using these tools, because they are amazing and, as I say, I use a lot of AI in my life and my business. And for many years people have been uploading their files to various cloud providers without necessarily considering which jurisdiction those providers are in, and how that is relevant to their responsibilities as a data controller under the GDPR. So there are these sorts of issues that I think people should have been grappling with anyway, and that a number of users won't have been.
The difficulty with the new world is it's not just storing that data somewhere in the belief that it's yours and yours alone, it's that that data may, as you say, become somebody else's data.
It may be incorporated into a model in some way, not actually stored as the files that you have, but broken down and understood and potentially then made available in answers or responses from some form of LLM. And what I'm seeing when I do look at the terms and conditions of various providers is that they are not always clear. Some are very clear.
There's a prominent one out there that has a very clear warning label on it about what is going to happen to your data if you use their product. But there are others that sort of give you options to fine-tune your privacy.
And it's a bit like some of those cookie consent lists you sometimes see, when they ask whether you will accept cookies, and you click no, and then you're presented with 100 different options you have to say no to.
And they're all sort of buried and nested in convoluted ways. I've seen a little bit of that as well, where it's not immediately clear which options are actually wholesale stopping your data from being used in ways that you don't intend. So even if you take a keen interest, it's not necessarily straightforward. I think that's one of the problems that we need to address.
Will Charlesworth:Something as well, which I'm not sure how much is actually being tackled or considered, is the cost of AI. So if we're looking at, say, large language models, there is a free version of pretty much everything.
But I wonder whether it will become a luxury to be able to keep hold of your data and not have it shared with the LLM, because you pay more for a subscription.
So a tiered subscription system, whereby you know the service will still be as accurate as it can be but your data isn't being shared, or you pay for free access through the sharing of your data.
So whether it will become a two-tier system, where those with resources are able to have more control over those essential items of data,
and other people, who don't necessarily have that much resource or the ability to do that, have to just surrender to the system, because it is all-pervasive and you can't really function efficiently without it. I don't know if that's me just being overly pessimistic or a little bit frightened.
Paul Schwartfeger:I think it's an issue.
I think there's also another angle to it, which is that if you are in that free camp, then obviously you're the data, you're the revenue stream, and it's worth thinking about how that might affect the nature of the service offered to you. So if it's then driven by advertising, how does that advertising, for example, feature within the responses that are given to you?
So it's one thing to have a list of adverts down the side of your browser which you can just ignore, but it's another if you're asking questions and actively being given responses and if those responses are themselves being shaped by advertisers.
So if that were the model, then the commoditization of that tool would become quite worrying, and, as you say, the different sorts of information and levels of access and service that you would have depending on whether or not you're a paying subscriber.
Will Charlesworth:Yes, exactly. Because the solutions that are put out there are commercial solutions.
It's a business; in a lot of cases it's not necessarily an altruistic endeavour for society's general good, because at some point somebody has to pay for the huge data centers and for all of the processing of that information. And you touched upon it a bit at the start:
the impact of AI on legal practice and how that has changed. We've seen at least one judge making reference to using AI as a useful way of summarising a case within a judgment. What are you finding, in terms of your experience, of how legal practice has changed,
or probably more how it's going to change, say over the next three to five years? How do you think AI is going to impact on that?
Paul Schwartfeger:I think this comes back to the starting point issue in some ways. So for me, I think I'm probably already in this phase of the three to five year time frame, and I think many others will be as well.
The sorts of tools that we're using and seeing will, I think, just amplify, and more lawyers will be doing the things with AI tools that some of us already are, and we'll be doing them more and more and perhaps in some new ways. So I think in the nearish-term timeframe AI is very much going to be handling more of our routine tasks.
Looking at things like disclosure, contract analysis, admin tasks. I think we'll increasingly use and rely on AI as an assistant as well.
So it might even be small things like paying court fees on time or making sure that documents are filed by a given date, rather than just setting reminders as we might now, we might actually hand over or delegate some responsibility to those tools to do so. So those sorts of routine tasks I think increasingly we'll see in the very short term being taken on by AI and thus relying on AI for them.
But that's going to free us up to work smarter in this sort of short term as well.
So looking again at tools that, to certain extents, we already have: document creation and automation tools. We already have basic drafting capabilities, but those are going to become more sophisticated and more streamlined, and I think more of us will be using them for certain aspects of our work. One thing that I do think is going to be interesting is a potential growth area of predictive analytics to forecast outcomes.
So obviously as lawyers we're often asked about the prospects of the case and we arrive at our conclusions on prospects based on our experiences and knowledge and also legal research. But I think this is an area where it's still going to require human oversight for the foreseeable future.
But the number of data points that we'll be able to bring to bear by increased use of AI in this space is quite interesting, because it's going to be possible for the systems to look across far more reference points and sources of information than we could in the time available to produce that prospects advice. And therefore it's going to be better informed, with the vastly greater quantities of data that are going to be available.
But it's still going to need lawyers to ensure that it's accurate and also to weigh in and think about, well, what's the sort of innovative argument here, what's novel or new that I'm adding that the system won't be able to forecast because AI systems are inherently backward looking by their nature. They're obviously making predictions on the basis of pools of information, past events, things that have already happened.
And if you're about to do something amazing and creative and new, it's not necessarily going to be able to weigh that up in the balance when it's contributing to a view on prospects. So I think AI will be used in this sort of assistive way to provide more information, richer data, help us with analysis.
But we still have very much a human function in that space, making sure that we're adding the value and adding that sort of human factor which AI can't.
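(For illustration: a minimal sketch, assuming scikit-learn and NumPy, of the sort of outcome-forecasting Paul describes. The features, figures and labels below are hypothetical, not a real legal dataset or any specific product.)

```python
# A minimal sketch of forecasting case prospects from past data.
# The dataset is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past case: [claim value in £k, evidence strength 0-1,
# number of supporting witnesses]; y is 1 where the claim succeeded.
X = np.array([
    [50, 0.9, 3],
    [200, 0.4, 1],
    [75, 0.8, 2],
    [300, 0.2, 1],
    [120, 0.7, 4],
    [90, 0.3, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])

# Fit a simple classifier on the historical outcomes.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate prospects for a new matter. As Paul notes, the model only
# extrapolates from past cases, so human review remains essential.
new_case = np.array([[110, 0.6, 2]])
print(f"Estimated chance of success: {model.predict_proba(new_case)[0, 1]:.0%}")
```

The point stands as Paul makes it: the probability is only as informative as the historical data behind it, so a lawyer still has to weigh the novel argument the model cannot see.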
Will Charlesworth:Yes. And that's certainly good for lawyers, at the moment at least. And we'll come on to a bigger prediction in a bit.
But I agree with you on how AI can really help on the data-processing side, where you're having to absorb and process large amounts of data, from chronologies to discovery; email discovery now runs to hundreds of thousands or millions of emails in some cases. So being able to process and go through those is an incredible advantage to us.
But something that, certainly in the short term, I can't see AI doing is what you mentioned: the human factor.
So the interface between the AI and, say, the client or the court, and the emotional intelligence side: picking up on behaviors and being able to understand what the client wants, which may be very different from the tone of, say, an email or of other instructions. I see the role of the lawyer as still sitting within that, and perhaps it focuses more on being
the interface between the technology that sits behind it and the actual delivery of that to the client.
Paul Schwartfeger:Yes, I think you're right. I think that's actually perhaps where my slightly longer-term view of how the legal profession will be shaped falls as well.
It's one thing for us to be using these tools as assistants and as a means of informing or assisting with our research, but it's very much another if the technology itself is starting to take on more and more of the delivery. And I think this is where the skills that lawyers will need in future probably are different in some ways from where they are now.
And there'll be a need for greater development in some areas: some areas which we already have, and also some new areas as well.
Will Charlesworth:No, certainly. And in terms of the development over the next five to ten years, it's incredibly difficult to see where AI is going to be.
But yes, in terms of those future skills, I can see the focus coming back onto the lawyer again, because at the end of the day, a lot of the time, particularly in disputes, which is mostly where I work, the client wants to interact with a human being: to discuss advice and instructions, to get reassurance, and to interact with somebody, which you don't get with AI.
Paul Schwartfeger:I think this issue of empathy and communication has always been relevant and important as a skill for lawyers.
But I can see that the nature of the sort of empathy and our communication skills will need to adapt because it won't just be us giving that sort of reassurance as we ordinarily would as part of our practice. It will be helping clients understand why it is that they can rely on the output from a machine.
So thinking about, you know if an AI system has been used in the drafting of a contract, helping them to understand why it is that they can rely on that output.
Now, it might be that the lawyer goes on to explain that it's because there's still human oversight, they've been actively involved and they've reviewed it, or it might be that they're actually bringing some form of statistical information to bear and making some form of presentation on why there should still be trust in the output, even though aspects of that work may be being done by some form of AI system.
So I think that's an example of a skill that I would expect lawyers to already have, certainly, but I think it will change slightly what needs to happen, and the nature of the advice, the reassurances and the communications that flow, certainly.
Will Charlesworth:I mean, we're talking about law and AI, and obviously a hot topic around that is regulation of AI within the UK, and obviously we're aware that different territories and jurisdictions are approaching regulation in very different ways. What are your thoughts on how the UK has approached this, and where we are now? Maybe that's the bigger question.
I don't mean that to be too political. But where are we now?
Paul Schwartfeger:We're obviously in a different place to the EU. So the EU has implemented quite thick regulation, which attempts to regulate all aspects of artificial intelligence through the EU AI Act.
My view is that it's possibly premature to do so. I think there is a place for some specific regulation.
So looking at things like autonomous vehicles, or anywhere where strict liability arises, obviously we're going to need to address that through legislative instruments now. But I do think we need to be alive to the issue of overregulating and stifling innovation.
And another concern that I have, and this comes through, I think, in a number of my articles and also the video series, is this issue of how do you define artificial intelligence? And if you say that something is going to be regulated because it is AI, how do you deal with things that aren't AI?
So an algorithm isn't inherently intelligent, it's not artificial intelligence, it's just an algorithm. You can have an algorithm as part of an AI system.
You can equally have an algorithm sitting to the side of that sort of system, or within some form of procedural process or workflow that's not intelligent. The algorithm could be harmful regardless of whether it's within an AI system or not.
My concern is that when you start saying we're regulating AI, if that algorithm is harmful but whether or not it's regulated depends on the context, then the legislation is not particularly effective in protecting consumers. So looking at this issue of how else it might be dealt with: might it be caught by the GDPR? That very well could be the case
if, for example, it was processing personal data, but doing so in a way that was unfair or harmful.
But again, having a different regulatory instrument for dealing with that might mean that we get some form of divergence, even though essentially it's the same algorithm. So I have these sorts of concerns about trying to define AI within any instrument that we produce.
I think where our focus needs to be is making sure that we consider whether or not existing regulations do provide sufficient public protection.
So recognizing that some AI systems and some algorithms are dangerous or harmful, and thinking about how they are dealt with: there's an interesting class action going on in the US at the moment against a healthcare insurer.
Reportedly, they're using algorithms to predict when patients in hospitals should be discharged.
So looking at the profile of the patient, the individual concerned, and also at the nature of the illness or the reason they're in hospital, and then determining that after X days it's appropriate for them to be discharged, and therefore stopping their payments, even if that goes against the doctor's advice. That's an example of a potentially harmful algorithm, and an issue where we need to think about how the public are protected from this.
Now, we have a different healthcare model here, and we also have different regulations available to us, and it's arguable that you might, under Article 22 of the GDPR, be able to object to being subjected to some form of decision that's based solely on automated processing of your information. But again, thinking about it in practice, is that right really accessible to people? Is it practically enforceable? Is it responsive enough?
Because in a healthcare environment it could be a matter of life and death. You've very much got to be able to answer that question quickly.
So what's the sort of status quo whilst you're waiting for some form of human intervention?
We've got the Insurance Act, we've got doctors' professional codes of conduct, we've got all these sorts of instruments, and we need to think about whether they are sufficiently targeted, whether they need updating, and whether they respond appropriately to the sorts of scenarios that could arise in the new AI world.
So I think that that's definitely an area we do need to focus on, and I do feel it would be better for government to be doing so proactively, rather than us necessarily having to wait for legal action to be taken and the courts to then respond reactively in that way.
Will Charlesworth:Yes, because part of it is identifying where the real harms could take place, where the real social harms could take place, and trying to craft legislation around that. But, I mean, is legislation going to be too slow?
Paul Schwartfeger:Oh, I'm not sure. I'll give you the lawyer's answer: it depends. But we do occasionally see appetite for it, and then we see it getting kicked into the long grass.
I think there is certainly an awareness that more needs to be done; quite how quickly, and of what nature, is not clear. But government does have a role to play in this, and we have seen various proposals and bills put forward that might assist in some ways.
I think a couple of months ago I probably would have said that there was a fairly strong appetite.
It doesn't necessarily feel as though that's quite where the focus is at the moment. But more does need to be done, that's for sure.
Will Charlesworth:Yes, certainly. And it can be easier to be reactive rather than proactive, rather than sticking your head above the parapet.
I mean, there may be something to be said for seeing how other jurisdictions fare with varying approaches, from the more prescriptive EU approach to other jurisdictions where it may be a bit more hands-off and a bit more lax in terms of regulation. It's difficult.
Paul Schwartfeger:Reflecting on the original Data Protection Act, we were then able to evolve that into a more useful and certainly more powerful instrument.
There is something to be said for doing something, but it doesn't necessarily need to be quite as heavy as it is.
Looking at the EU AI Act, and when I look through the Act myself, I think it is really important that we consider how to balance innovation with regulation, to make sure that we're not stifling development and stifling uptake. There are heavy burdens under the AI Act.
We have to see how they play out still, of course, but developers have to maintain extensive technical documentation. They're required to conduct regular audits and assessments of their systems.
I think even looking at the nature of certain practices that are banned, on the face of it, it looks straightforward and sensible, and I don't dispute that protections are needed. But there is an article, Article 5, for example,
which bans purposefully manipulative or deceptive techniques that materially distort the behaviour of a person and that are reasonably likely to cause harm.
So this idea of being able to influence a user's behavior in some way, and if you read through the recitals, there are various examples of what constitutes harm and one of them is financial harm.
So you potentially have a case here where an app designed for weight-loss purposes, which nudges people and tries to manipulate their behavior, potentially for positive reasons, to assist with weight loss or other health issues, might encourage them to buy more expensive food in response, or various supplements, and that would potentially be a financial harm. So that tool, in and of itself, might be unlawful.
Now, in all likelihood, I expect that it would pass, but it's those sorts of issues that we have to look at. How is the act actually going to work in practice? And what is the value of these regulations?
Because a number of them are quite onerous and quite heavy in that way.
Will Charlesworth:And it's always a balance. You don't want to stifle technological innovation for the sake of it.
And it may be that technological innovation, if it's not stifled, actually helps and brings in greater safeguards itself.
But perhaps in the short term it comes back around to making people aware, as much as you can, and I appreciate it can be difficult, of what's happening, for example, with their data, or how the models that they're using potentially influence their behavior. And we saw an interesting example of this, and it could be that AI is shaping this view for me because I'm in a legal bubble,
but around the US elections and the UK elections, and definitely around the US elections: how much social media and the technology around it, and how people interact with it, has polarized people into one camp or the other, and how that influences their behavior and just about every element of their lives. It's quite scary, and perhaps we're too deep into it now to start trying to unwind it. I mean, how do you see AI influencing access to justice?
Do you see any particular sway on that at all?
Paul Schwartfeger:I do. I think it's an amazing tool myself, just from my use of it. And I think that AI has the potential to democratize access to justice in many ways.
So for litigants in person, for example, making available to them the sorts of legal research tools and drafting tools that traditionally would be the preserve of quite well-resourced legal teams.
So ensuring that they're able to better prepare for court, better understand what it is that they need to do, better prepare their arguments and their submissions. I think it's amazing in that respect that these sorts of tools are there.
I think we need to bear in mind all the considerations you raised at the start, and we were discussing around ensuring that answers are accurate and that the tools are used responsibly and that they're not being influenced unduly by advertisers or the like. But we also need to think about the sort of safeguards that we need against unfairness when it comes to the use of AI.
So this might come down to issues of transparency, ensuring that people know when and how it's been used. I think a good example of this is the witness statement that's been translated by AI.
If you use certain generative AI tools, then it's not just a case of producing a true translation. You may very well find that certain gaps are filled in by the GPT aspect of it.
And in that way, it's no longer an accurate representation of the witness's account. It's something that meets the requirements in order to be able to be advanced as part of the case.
So procedural directions are going to be important here, thinking about those issues of transparency, and whether or not we need directions from the court or tribunal that ensure parties are aware of what tools have been used and how. We might even need to rethink things like the statement of truth, and consider whether or not it needs expanding to ensure that it's adapted to the potential for these sorts of harms.
And we should think also about how we're going to avoid this issue of misleading the courts, and ensure that things like witness statements, statements of case and other documents aren't improperly the creation of machines, but really are an accurate account of the facts of a matter.
But those sort of issues aside, I very much do view AI as something that's powerful and positive and able to provide greater access to justice for a number of people.
Will Charlesworth:It's a very interesting point, because I have heard there have been some calls from litigants in person.
The number of enquiries to tribunals as to whether they can use an AI assistant during hearings, and also in mediations, has increased dramatically, as a way of levelling the playing field effectively, which is obviously what access to justice is primarily about.
It's interesting how that's grown and how that will certainly help with access to justice.
Paul Schwartfeger:Yes, it will.
I have seen witness statements that have had the hand of AI on them, so I'm alive to the issue, and I think this comes back to those skills that lawyers will need in future as well. As lawyers, we are going to need to exercise far greater ethical oversight.
We're going to need to be able to recognize when a witness statement clearly doesn't match the author it's alleged to come from.
We're going to need to be alive to these issues of bias, looking for unfairness and understanding the limits of artificial intelligence and the functions of the system itself.
We're really going to need to step into this role and ensure that AI is used ethically, accurately and responsibly by all the parties in court proceedings.
Will Charlesworth:So it's an interesting shift in our roles as lawyers going forward. And it certainly keeps things interesting and dynamic from that point of view.
I mean, you've mentioned some really interesting points there, Paul. And as I mentioned at the very start of this episode,
you've created this amazing series, Artificial Intelligence: Navigating the Legal Frontier.
And that's obviously not the total of your content, because I'm aware that you also feature your articles on a website, and I'll put the link in the show notes, but I think it's commerciallawbarister.com if I've got that correct.
Paul Schwartfeger:You have, yes. Thank you.
Will Charlesworth:So how did you find creating your artificial intelligence series that you've just put up there?
Paul Schwartfeger:It was fun.
I mean, I think everyone knows what the punchline of the series is now: shall we say that the series was brought to life by the very technology it critiques, should we put it that way?
And, you know, this was very much an experiment for me, understanding what was possible and also gauging reactions, because people were in a lot of ways surprised, and I was surprised by their surprise.
I think the nature of the series itself, it really underscores exactly the tensions that both we've discussed today and that are also discussed in the series.
So this promise of innovation versus the risks to trust and identity, and thinking about what AI does, how we control it, and how we make sure it's used responsibly. I enjoyed producing it enormously. It was a lot of work, I have to say.
There were a few of us involved, and it would have been an awful lot easier not to have used AI in the production, I think. But it was certainly eye-opening, and it all helps with that process of education and ensuring you really understand the technology.
If you have to stand up in front of a judge at some point and explain something about the tech, it's always helpful if you have that insight and understanding that you can really ground in reality from your own experiences with it. So, yes, it was great fun.
Will Charlesworth:Exactly. If people want to find out more and to contact you, whereabouts are they best to go in terms of websites or social media?
Paul Schwartfeger:Well, they can either visit my website, which is commerciallawbarister.com or they can get in touch with Chambers. We've got really lovely clerks, so they're also another option for reaching out and getting a hold of me.
And my member profile is on the 36stone website as well, so contact details and information is available there.
Will Charlesworth:As we come to the end of the episode, I just want to say thank you so much, Paul, for joining me on this. It is always fascinating talking technology and AI and all things with you.
Thank you very much for sharing your experience and your expertise as well. I really appreciate it.
Paul Schwartfeger:It was a real pleasure. Thank you for having me, Will.
Will Charlesworth:And thank you everybody else for tuning in as well. And don't forget to like and subscribe to the podcast if you haven't already. And I will catch you in the next episode.
And I will leave links to everything we've discussed in our show notes as well, so you can contact Paul and continue any conversations around what we've discussed today.