Balancing Innovation and Accountability: Simon Deane-Johns on Navigating AI’s Legal and Ethical Challenges
Artificial intelligence is rapidly reshaping the landscape of law and regulation, and Simon Deane-Johns joins Will Charlesworth to explore the implications of these changes.
With a wealth of experience in fintech and legal frameworks, Simon discusses the challenges AI presents to existing regulatory structures, particularly in consumer protection and financial services. He emphasises the need for regulators to keep pace with technological advancements, highlighting the risks of AI's potential inaccuracies and biases in decision-making processes.
The conversation examines the different regulatory approaches taken by the EU and the UK, with Simon advocating a more robust stance on privacy and copyright issues.
As they navigate the intersection of AI, law, and societal impact, Simon and Will also reflect on the importance of ensuring that technology serves the public good rather than exacerbating existing vulnerabilities.
Takeaways:
- AI technologies are often overhyped and underutilised, making it essential to understand their true capabilities and limitations.
- Regulators face challenges in keeping pace with the rapid advancements in AI and its implications for consumer protection.
- The conversation around AI should prioritise immediate concerns such as data privacy and copyright issues, not just futuristic threats.
- A well-defined regulatory framework can provide businesses the confidence to innovate while maintaining compliance with legal standards.
- AI's potential for bias and inaccuracies raises questions about its deployment in critical areas like loan approvals and law enforcement.
- Legal professionals and developers must communicate effectively to ensure compliance while fostering innovation in technology.
Companies mentioned in this episode:
- Reuters
- Amazon
- Worldpay
- Zopa
- Nutmeg
- Rooster Money
- Peer to Peer Finance Association
- AOL
Transcript
Voiceover
The information provided in this podcast is for general information purposes only and does not constitute legal advice.
Although Will Charlesworth is a qualified lawyer, the content of this discussion is intended to provide general insights into legal topics and should not be relied upon as specific legal advice applicable to your situation. It also reflects Will's personal opinions. No solicitor-client relationship is established by your listening to or interacting with this podcast.
Simon Deane-Johns
And, you know, let's not get carried away. Humans don't necessarily do a great job of that either.
I mean, the subprime crisis involved awarding loans to people with no income, no job, and no assets. I don't necessarily think you could blame the machine for that.
Voiceover
You're listening to WithAI FM.
Will Charlesworth
Hello and welcome to the Law with AI podcast. I'm your host, Will Charlesworth.
I'm a solicitor specializing in intellectual property and reputation management with a keen interest in AI. This podcast is about breaking down and understanding how artificial intelligence is challenging the world of law, policy and ethics.
Every couple of weeks I'll be looking at important topics such as how AI is impacting on established areas of legal practice, how it's challenging the law itself on issues such as privacy and intellectual property rights, how it's raising new ethical concerns, and ultimately how it's reshaping the regulatory landscape.
To help me in this task, I will be having some candid and engaging conversations with some fascinating guests, including fellow lawyers, technologists and policy makers, to gain a real insight into what AI means not just for the legal profession, but for the commercial landscape and society as a whole. As always, this podcast is for general information purposes only and does not constitute legal advice from myself or from any of my guests.
It also reflects the personal opinions of myself and my guests. So whether you're a lawyer or just someone curious about how AI and the law mix, you're in the right place.
So let's jump in and keep you informed and ahead of the game. So today I have the pleasure of being joined by Simon Deane-Johns.
Simon is a solicitor with extensive experience in online financial services, e-commerce, personal data, IT, software as a service, crypto assets, smart contracts and AI. He's admitted as a lawyer both in England and Wales and in Ireland.
Simon has had a remarkable career advising clients such as e-money providers, peer-to-peer lending platforms, Canadian consumer credit firms, and cutting-edge fintech innovators.
He's advised on the launches of some of the most interesting and innovative financial services, including Nutmeg and Rooster Money, and the launch of the Peer to Peer Finance Association. He's consulted at companies such as Amazon and Worldpay, and is the co-founder and general counsel of Zopa, if I'm pronouncing that correctly.
He's worked for Reuters in London and New York and is a fellow of both the Society for Computers and Law and the Finance Innovation Lab.
He is recognized for his thought leadership and, as if that weren't enough, he is a sought-after speaker, regularly presenting at conferences and webinars on issues such as crypto regulation, payment services and the evolving legal landscape for financial technology.
It's a privilege to welcome Simon here today as we delve into the complexities, and some of the more interesting and hopefully light-hearted areas, of fintech, law and AI. So welcome, Simon, thank you for joining me.
Simon Deane-Johns
Thanks, Will. Lovely to be here. Thank you very much for having me on, and for that glowing introduction; it was almost as if I was listening to somebody else.
Will Charlesworth
It's absolutely my pleasure, and it is a very impressive introduction. But probably one of the first questions I want to ask, and possibly it's one that lots of people want to ask, is this: your career spans an incredible range of expertise. Can you tell us about your journey into the intersection of law and technology?
What sparked your interest in this field, and how did you come to be where you are today?
Simon Deane-Johns
Yeah, I suppose the first real contact I had with technology in a professional sense, other than having a laptop in 1990 and one of those old Motorola flip phones, was in 1995, when I joined the Reuters legal department in London. That involved putting together software licensing agreements, data licensing agreements, the various things that went into creating Reuters' real-time and historic financial and news information services. About a year later, I went over to the New York legal department, where it was client facing, or customer facing, and I got involved with supplying Reuters financial information to big investment banks, but also news products to the emerging e-commerce platforms. So I remember going down to Virginia to do a deal with AOL.
Steve Case was in the room, and the deal was to supply the top ten headlines of the day, just the headlines, mind, to AOL subscribers for a dollar a subscriber per year: a $6 million deal.
So from that I got to see the very early years of eBay and various other web platforms, and I took that back to DLA in July 97. As an associate, I was given a job in the communication technology team at DLA on the basis of my understanding of how data flowed on networks, and I had a business plan approved that had 'Information Society services' at the top. But that was July 97.
By September, I'd replaced that with 'e-commerce', because the FT had started using that term. September 97. There you go.
Will Charlesworth
It's incredible how one organization just changes the terminology and instantly an industry grows out of that.
Simon Deane-Johns
Yes, yes. Fintech is a more recent term that also spawned an industry, it would seem.
Will Charlesworth
I mean, that's an incredible first deal, involving AOL.
And it's always interesting to see how far we've come since then, and how rapidly technology changes. You know, as you said, your career spans from fintech and crypto assets to AI and smart contracts.
As somebody with extensive experience in all of that, how do you see AI challenging existing regulatory frameworks, particularly in areas such as, say, consumer protection and financial services?
And do you think regulators are keeping pace with technological advancements?
Simon Deane-Johns
So there's obviously a lot in that. I think a big feature of AI, like a lot of new technology, and I include generative AI in that, is that it's overhyped and underutilized.
I've been brought up to speed now on AI three times. And each time they've talked about general intelligence, the ability for computers to do what humans can do, being just around the corner and so on. And it never is.
Artificial intelligence has been around for a while, obviously, and in a narrow sense, you know, you can teach a computer to play Go better than a human.
So there are definitely narrow use cases, and there are obviously lots of cases now where AI is present without us really realizing it, chatbots and so on and so forth. And I suppose from a regulator's standpoint, that's kind of alarming.
And you see these questionnaires go out from the regulators from time to time saying, hey, is anyone using AI? And if so, explain how.
Which presumes that the board or the compliance team who gets these missives realizes that AI is being used, and they may not. And AI has so many different kinds of permutations. And then I suppose the other thing to consider is where AI can go wrong.
And there's a fair old list. I mean, if we're talking about generative AI, it hallucinates and makes stuff up. It's inaccurate. The Australian securities regulator recently published a study where they got people to rate summaries produced by AI against summaries produced by humans, and the human summaries were scoring 81% on average, I think, while the AI summaries were only scoring 47%.
So there's also bias and so on going on, and the experts tell me you can't weed this stuff out. You can't stop an AI hallucinating.
You can interrogate it and say, hey, have you just shown me an actual ancient Greek poem or one you made up? And it can say, oh no, that's an example of an ancient Greek poem, you know. So you need to be very careful about how you use it.
But of course people are being encouraged to use it kind of willy-nilly without checking the output, or they might be under time pressure and told to trust it from above, and so on. So these are all areas of concern generally, and particularly for financial regulators.
Will Charlesworth
Is there anything you think they could be doing, or anything that could be done that isn't immediately obvious, apart from trying to keep a human in the loop as much as possible?
Simon Deane-Johns
I mean, they have rules.
If you look at the FCA rules, there are high-level principles which should guide management in how they approach the introduction of AI, for example. It's like any large IT project, but a much more complex one than is typical.
So I think the regulators would expect people to follow the rules in how they implement these things: how they specify the requirements, follow through in implementing and coding them, deploy them, and check that they continue to work appropriately.
You know, there are situations where they could save a lot of work, and there may be situations where, say, two commercial parties are prepared to accept a level of inaccuracy within a certain tolerance, because that's cheaper than employing humans to crawl through the detail.
But I think it would be unwise to deploy AI in a situation where you're deciding whether or not to give a consumer a loan, or their likelihood to default, or anything that really deprives somebody of their rights, or determines their rights or liberty, dare I say it, police forces, for example. I think that's where it starts to get quite problematic. Even if you're able to deploy the AI appropriately, it's going to cost you a huge amount to make sure it stays on the rails.
Will Charlesworth
So we're not quite there at the moment. And you're right, you touched upon some interesting points there about AI being involved in decision-making around loans and offering people finance.
And of course that touches on other areas too, such as employment and police monitoring.
Simon Deane-Johns
And, you know, let's not get carried away. Humans don't necessarily do a great job of that either.
I mean, the subprime crisis involved awarding loans to people with no income, no job and no assets. I don't necessarily think you could blame a machine for that.
Will Charlesworth
No. So in that case, do you think a machine could have provided some valuable oversight?
Maybe not necessarily in that case, but going forward, is there a way of having both side by side, each as a check on the other, or is that too simplistic?
Simon Deane-Johns
I think potentially. I mean, I go back to my Amazon days, and we all hated the legal documentation process being used as what we call the straw man for governance.
And I think there's a danger of that with AI. But it's an interesting thought experiment to run through what you would need to do in a particular business process context to automate it, whatever goal you had, using AI, because that would force you to figure out how the process works. Is it stable? Is it being done properly now? Does the whole process need re-engineering before you'd encode it?
And I think that in itself would be a revelation. Whether you ever got to an AI would be another thing, and you possibly wouldn't. You'd just be horrified.
When I was at GE, we looked at several legal processes which weren't producing the right outcome fast enough.
There were two sets of people addressing what you would think to be the same process, certainly aimed at the same outcome, but they were doing it very differently. So first we did a twin-track Six Sigma project, which I led.
We had to figure out what both sets of people were doing, identify the process which was going to be the most efficient, get them both to commit to doing it, and then price that. We saved over £7 million with a volume deal on the back of the reformed process. But their process was largely encoded, if you like, in traditional software, so hardly AI.
If you wanted to create that as a sort of AI-type arrangement, then you would need to teach the computers what the humans were doing, to get the computers to then say, okay, I'm now going to optimize that, or replicate it in another situation, or whatever. So it would be a very intensive business process engineering, or re-engineering, project. Huge, really very expensive indeed,
in terms of pulling together all the teams who do that stuff now, getting them to educate you in how they do it, trying to figure out how to improve it, and then encoding it.
Will Charlesworth
Going through those processes, and I think you touched upon it before:
do you find that there's a tension between the lawyer in the room, advising on, say, the regulatory or other legal frameworks that sit behind new technology, and the developers, who are chomping at the bit with some very exciting and very innovative approaches to solving a problem or creating a new efficiency?
Or, I imagine, you're extremely experienced now at navigating that, and not just saying no to everything.
Simon Deane-Johns
No, well, it's a fair question, actually. What I've found is that the lawyers and the developers are pretty much taking the same approach.
The lawyers have certain legal rules that have to be met. The developers write in certain code; they need to know what they're encoding.
The tricky bit is in the middle: the business rules, which are fed through the lawyers, who ask, are we even allowed to do that? And, you know, there are some changes there.
And then somebody really has to explain the business rules in a way that the developers can understand, to code it into a system. And they hate change at that stage. Everyone hates change. Lawyers hate change, the developers hate change.
So I found that lawyers and compliance people and the software developers are pretty much on the same page in terms of what they require to get the job done. The tricky part is the sales and marketing and business development people.
I would say the Sam Altmans of this world, who are busy pushing a vision of what can be. What they preach, or what they promise to the world, is very much informed by what they perceive the world to want. And the further we move away from traditional computing, the more fluid that becomes. And you see them promising all sorts of stuff.
You know, I saw something the other day from Microsoft's AI CEO saying, oh look, Copilot sings.
So there are all sorts of issues involved in that, not least of which is the sheer amount of energy that must be required to generate a song from text. But somewhere behind the scenes, as I say, is a lawyer going, it does what? And what are you promising now?
And then the developer saying, well, how do we fit that into the next release? What are they going to dream up next?
Will Charlesworth
It's that natural pressure there must be to constantly have something shiny and new in the next update. People often criticize, and I have recently criticized, the iPhone for having very small, very incremental changes or improvements.
But certainly in the world of AI, I suspect that in order to get funding, or to guarantee funding, there need to be lots of headlines, and to grab headlines you need something more extreme, something with more excitement.
Simon Deane-Johns
Yeah, I mean, it's the new thing, right? From Michael Lewis's book The New New Thing in 99.
These guys are looking for a liquidity event as soon as possible, and it's: how can you convince Wall Street?
What's the minimum viable product, the minimum amount of traction, that they need to show in order to get someone to buy this thing off them or throw money at it?
Will Charlesworth
I mean, with much of the AI hype focused on the exciting new things that it can do, and, in the more extreme cases, and we've touched upon it, on job replacement,
or a Terminator-style existential threat:
how do we ensure that attention stays on what I might be bold enough to call the real and immediate challenges, such as, and forgive me, I am a lawyer, data privacy, copyright issues, and the regulation of AI-driven disinformation, particularly in election years such as we've been living through?
Simon Deane-Johns
Yeah, I mean, it's tough, right?
The latest thing I saw was that climate change is going to cost us seven and a half percent of GDP if we don't do something about it.
Now, humans are really good at adapting, but they're really bad at adapting to something that might happen later. They're really good at adapting to something that's happening right now: I'm in a car accident right now, how am I going to get out of the wreck?
So I think focusing people on the immediate consequences of using a new technology is going to be the best way for people to adapt to its presence and address those consequences, right? And I think that's tough, because the business development folk want to dangle the next new bit in front of you.
The shiny thing, as you say. And some of the people who use it, like, if I'm a politician going for re-election, I want to be able to tell a lie that will move as fast as possible before anyone can correct it.
And I think regulators are at risk of getting distracted by all sorts of techniques. And even if they understand the immediate consequences, maybe they're worried about being Chicken Little, the sky's falling in; if they're always talking about the sky falling in, people will ignore them.
So yeah, I think it's probably down to lawyers and compliance people and anyone who really engages with these technologies to really understand the immediate consequences and call them out. But it's super tough, and it doesn't really seem to work.
Will Charlesworth
I think perhaps, and I don't know if you agree, but perhaps our attitude towards personal data may have changed. This may be an unfair generalization of the general public, but we've become more comfortable in giving up some, or a lot, of our very personal data in exchange for the latest shiny new thing, something that's free. And not everybody is in a position to pay more to get something that actually takes less of your data.
Simon Deane-Johns
Not even in exchange, Will.
I mean, if you look at a lot of the plans for AI from the big platforms, they've been telling people, oh, by the way, we've been training our AI on your data, and here's how you can tell us to stop, surfacing some arcane route to doing so. But a lot of people, I mean, how many billion people use Facebook, for example?
And I don't mean to necessarily call them out, but there are huge numbers of people using these platforms who wouldn't have understood what their data is being used for, or couldn't be bothered to go and switch it off. Do they understand the consequences? Do they trust the government to protect them?
Or are they using the platform because it somehow escapes government control? I think inertia alone is enough. It's not even an exchange of functionality for data.
They say people only use about 8% of Copilot functionality. I don't think there's necessarily even an exchange of functionality for their data. They just kind of let it happen.
Will Charlesworth
I mean, obviously there's a different approach taken to data, regulation and AI between different territories; different countries have different approaches to it. There's the European approach to AI regulation, and then the UK's approach, and I'm not a hundred percent sure exactly what the very latest UK government position is, because it's been relatively quiet, or hasn't given as much detail as I would have wanted.
But do you have any thoughts on whether the more prescriptive, more detailed approach of the EU AI Act is more beneficial, or something with a lighter touch, as in the UK? Or can you not say one or the other, and perhaps there is something in the middle which is a good compromise?
Simon Deane-Johns
Well, first of all, with Europe, you have to understand that there's a very different ethos in a civil law jurisdiction, which virtually all of the EU is, bar Ireland and potentially Malta, which are still common law.
And that ethos, adopting the old Roman law approach, is that the state is there to tell you what you can do and how to do it. There isn't this idea that all things are possible without state intervention.
The French approach to national ID cards, for example, is very different to the British.
The French say the government is useless if it can't prove who you are, which is kind of ironic, but that's how the law is viewed in the EU, versus in a common law jurisdiction, where we're used to doing anything until the government says stop or do it differently. So I've always liked to say that an Englishman's red tape is a Frenchman's business plan.
So I think it's actually, ironically, incredibly important that the EU has come out saying, yes, you can develop and deploy AI, and this is how you do it.
And it may seem very prescriptive, but it's at a level, I think, where a lot of people would require it to be prescriptive in order for them to feel confident to do it. So I think that approach is quite important.
It seems hugely restrictive to Americans, and I guess Canadians and Australians and the English, anyone in a common law country. But I think if you view it through that lens, you should feel encouraged that at least the EU has come out saying, hey, it's cool, you can do this.
And then you argue about where things are too restrictive.
I think, as you've hinted at, the UK government approach is just kind of delinquent, given the things we've talked about already: privacy, copyright infringement and so on.
They should have been much firmer in saying, hey, those things have to be respected, copyright and privacy and so on, and if you don't respect them, then we will legislate immediately.
And I know there are lots of legal academics who've said for a long time, actually, that kind of soft regulation and self-regulation in this sort of space doesn't work. You really do have to come in fairly heavy.
Whether you need to be as heavy as the EU in a common law jurisdiction is a bit moot, given the different ethos at play.
I think the Chinese approach is quite interesting, because China is probably more of a civil law jurisdiction, I think.
And, you know, I'm not into controlling the population in this kind of way, but the Chinese have embedded in their regulation the kind of cultural norms they expect to see. If you think, well, gee, I see where they're coming from, it would be better if there were more liberal norms, but it's quite interesting to see ethical-type or cultural-type outcomes being embedded in legislation. So ironically, I suppose the Chinese model may have something to teach us, in terms of, hey, you can sort of encode ethics.
And we kind of do that, I suppose. But I think the UK government's been too slow to do that so far, and kind of the genie's out of the bottle.
The US, on the other hand, is interesting, because if you look at the FTC approach to regulating AI, they've been extremely aggressive in requiring the destruction of large language models which have used training data, for example, that is deemed to be somehow infringing, I think copyright and maybe even privacy and so on. And they say that they have the regulatory tools they need.
So the common law approach, in some ways, is not to be underestimated on points like that.
Will Charlesworth
I'm aware, through LinkedIn but also through your blogs, and we should probably share that you comment on various things either through your legal blog, The Fine Print, or through Pragmatist, from a consumer tech angle. I love reading about how you approach these difficult issues when they come up, whether through the press or in society.
And something that you touched upon, I think in your most recent article, which was inspired by the recent American elections, is the role of television and social platforms in influencing society. And certainly everybody's aware how polarizing it is in every form, from television news stations to social platforms.
And there's certainly a risk there, looking at it from an AI point of view, a technology point of view. Is there anything to stop the algorithms further entrenching us on one side or the other?
Is there anything that can be done to counter this trend of simplifying and dumbing things down on either side of the political fence, where we are just constantly pitted against each other, whichever side of the fence you fall on? Do you think we're inevitably heading towards an 'idiocracy', I think was one of the phrases?
Simon Deane-Johns
Yeah, thanks. You're referring to the latest blog, where I quoted the words of David Foster Wallace, an author writing about TV back in 93. He was saying that the unfortunate thing is that, as humans, we can all unite around the vulgar, the prurient and the dumb.
No matter how bright or how uneducated you are, we can all laugh at, and, for want of a better word, appreciate, those three things: the vulgar, the prurient and the dumb. We can all laugh at those sorts of jokes, for example.
And yeah, I guess as humans, therefore, we are kind of vulnerable to being approached at that level.
And I once held a view that humans uniting bottom-up would somehow overcome the sort of top-down greed and stupidity. But I actually think I was wrong. I think we're captives to human nature, if you like.
And AI, unfortunately, reflects our biases. Again, experts will tell me that you can't fully randomize AI. You can't overcome bias.
It continues unabated, and you lose sight of what random really is.
So I suppose, just as TV was disrupted by social media tools that were only used in pockets early on, perhaps by more educated people, in academia and so on,
maybe people will lose interest in the vulgar, the prurient and the dumb, and develop other places where they will exchange more noble, rational views.
But if anything, I worry that AI is just going to put wheels under the vulgar, the prurient and the dumb, put it on skids, so that it will further exacerbate that type of content and the extent to which we unite around those sorts of things. How you guard against that, and whether one can develop AIs that counter it, would be a question in my mind.
But it doesn't look terribly hopeful if you trace the trend in politics or cultural exchanges in the Western world over the last 20 or 30 years; they've seemingly only got worse in certain areas. Obviously we've moved human knowledge forward in many ways, and healthcare and so on have made huge leaps.
But in terms of cultural exchange, it seems to have got worse and worse and more polarized.
Will Charlesworth
Perhaps there is a gap there for a more nuanced and insightful cultural exchange, which could perhaps bridge that divide.
Simon Deane-Johns
Yeah, but I think that could be popular in niches, right? At scale, though, I think we've got to recognize that those things will have to dumb down.
I mean, if you're Netflix and you're trying to grow the user base incrementally year on year, quarter on quarter, then I think you'd have to coalesce around the vulgar, the prurient and the dumb.
That's where television had got to in the nineties, as Foster Wallace was describing.
Will Charlesworth
Hopefully there will be some light at the end of the tunnel on that. But I fully understand where you're coming from, and the concerns around that.
Simon Deane-Johns
Sorry, mate, I just wonder whether AIs, or technology generally, could be used to improve things like public services or infrastructure.
You know, if we can all agree that we unite around the vulgar, the prurient and the dumb in the entertainment world, but acknowledge that, hey, let's not approach public services, or bridge building, in the same way. Somehow reserve areas of endeavour for more rational treatment.
Will Charlesworth
Yes, and you're perfectly right.
And I definitely fell into that trap we often fall into of thinking of AI in terms of what makes the headlines: the exciting new AI that generates movies or can sing songs. Of course, AI on the whole adds so much, and has so much potential benefit, in areas such as medicine and construction, as you say, and other public services: making the way we live our lives more efficient, helping people. There's a potential for that there.
So I suppose we need a wider view around AI. But it's certainly easy to fall into that trap, because that's perhaps what's more attention-grabbing, certainly on our screens,
as soon as we switch on our phones, and as we rely more and more on ChatGPT and other LLMs for creating content and helping us with how we operate each day.
Simon Deane-Johns
Yeah, I mean, I doubt we use them to anything like their full functionality or potential, even in that.
And I do question whether people will actually pay enough to justify the power, the water, the people and so on required to maintain these platforms. Whereas, embedded in the infrastructure, there may be funds available.
Will Charlesworth
That's an interesting point.
And perhaps there needs to be more publicity about it, or perhaps there isn't as much thought going into it: the environmental impact around AI, or the potential impact. Because, as you say, it requires water, it requires cooling, it requires immense amounts of computing power to create these things which appear on the face of it, or have been engineered to appear on the face of it, very simple, like the ChatGPT app that sits on your phone. Behind it, there's potentially huge impact elsewhere in the world. It will be interesting to see how that comes to the fore in the near future.
Simon Deane-Johns
If you think about it, somebody said that a single image generated from text by one of these AI models takes about the same power as it does to charge an iPhone. Okay, that's not happening instantly on your own electricity bill; it's happening in a data center somewhere. But by doing that, you are kind of bidding against yourself in the electricity market.
You know, there is a demand for power there, and you could be bidding up your own electricity bill by using these AIs to their full. One song, and you've added, I don't know, 10p to your electricity bill.
I don't think that's been fully priced in yet, though. I think the last time that was looked at was some time ago.
Will Charlesworth
Well, that's quite a time away in terms of AI technology and technology in general.
Simon Deane-Johns
That's why they're raising billions in every funding round: it's so expensive in terms of power, and I guess people, but also computing power, and where to locate data centers and how to keep them cool and so on. Yeah, it's big money.
And whether consumers will pay enough to cover the power and computing required is somewhat questionable, or even businesses, because I don't think the AI companies have yet fully priced these offerings. They're still in the business of attracting customers and eyeballs and so on.
Will Charlesworth
And certainly you can see it in the way that the streaming services, Netflix and Amazon Prime, to take examples, and Disney as well, have changed, even recently, in terms of how they operate.
Initially the model was a very low price for access to everything, to attract as many people as possible. And once you have that captive audience, then you actually have to start thinking, well, we need to reprice this. And you introduce the tiers, and you introduce advertising back.
So streaming starts to look like traditional broadcast television, terrestrial TV, anyway, once the reality of having to pay for all of that kicks in, or they go to a different model of having consumers pay more for that content. But by that time it's already become a way of life, so perhaps you accept it more readily.
Simon Deane-Johns
Well, even so, I mean, the New York Times reported, when I was in the States recently, that Netflix was junking movies that they'd actually completed, on the basis that they weren't going to hold enough of the audience's attention to be worthwhile keeping on the schedule. They also recently junked their games division without completing a single game.
I think on a similar analysis: it just wasn't going to be, if you like, sufficiently vulgar, prurient or dumb. You've got to feed the beast, right? Feed the beast what it wants.
Will Charlesworth
You do indeed. And yes, that seems like a very good place for us to wrap up. But thank you so much, Simon, for those insights.
Before we do, if people want to contact you, how do they go about it, in terms of work and also in terms of your blogs? How do they get in touch with you?
Simon Deane-Johns
I think the best way to get in touch is probably via LinkedIn, prosaic as it seems. I came off Twitter; I think there's still an account, but I'm not active there.
But yeah, LinkedIn seems to be the last island of rationality. I'm on Mastodon as well, but I don't expect anyone to necessarily find me there.
I should do a better job of promoting the blogs, but they're not vulgar, prurient or dumb, so I'm not sure the algorithms are really promoting them very much. We'll have to pass on that.
But yes, my name on LinkedIn should find me.
Will Charlesworth
Fantastic. And we'll put a link in the show notes for this as well.
And yes, I can definitely testify that your LinkedIn is extremely good for up-to-date, relevant, insightful comments, and the latest news and updates as well. It's a very good place to be, and a very entertaining place to be as well.
Simon Deane-Johns
Good.
Will Charlesworth
Thank you. Yes, thank you again. We'll post a link to mine as well. And no problem at all; it's been excellent.
And thanks for sharing your expertise and your insight as well, because your experience is absolutely incredible. And thank you, everybody, for tuning in as well.
And please remember to like and subscribe if you haven't already, catch us on the next podcast, and definitely do check out Simon's LinkedIn. Thank you very much.
Simon Deane-Johns
Thank you. Great fun.