136. Marco Ramilli: Understanding AI: The Importance of Detecting Fake Content
Marco Ramilli joins us to discuss the urgent need for technology that can identify whether images and videos have been generated by artificial intelligence. He shares that the idea for his software arose from a viral image of the Pope in a designer jacket, which sparked widespread debate and confusion over its authenticity. As the digital landscape becomes increasingly cluttered with manipulated content, Marco emphasizes the critical importance of distinguishing reality from fabrication. He explains how his software leverages advanced AI models to analyze visual content and determine its origins. This conversation sheds light on the broader implications of AI-generated media and the challenges we face in maintaining trust in what we see online.
Takeaways:
- Marco Ramilli discusses the importance of distinguishing real images from AI-generated content, especially in today's digital world.
- He shares how the viral fake image of the Pope in a puffer jacket inspired him to develop software for identifying AI-generated media.
- The technology developed by Marco can analyze photos, videos, and sounds to determine their authenticity, which is crucial for preventing misinformation.
- Marco emphasizes that the responsibility lies with technology developers to incorporate safeguards against misuse of AI-generated content.
- He notes that the rise of fake content can dilute public trust and complicate issues surrounding information verification in society.
- Marco believes that collaboration among companies is essential to address the challenges posed by the proliferation of AI-generated media.
Transcript
Marco, welcome to the show.
Marco Ramilli:Hi, David, thank you very much for having me.
David Brown:For those listening, this is about the fourth time Marco and I have tried to record this podcast, and for various reasons around scheduling and technical issues and everything else, we finally managed, hopefully, to get something down. Thanks for your patience and your tenacity in sticking with it so we can actually have a conversation.
Marco Ramilli:Yes, thank you. And looking forward to starting this podcast as well.
David Brown:Yeah, well, thank you.
And we were just talking about this before we started recording, and actually I think we can now have a great conversation, because there's a fantastic example happening in real time in the world right now where I think your software is going to be hugely important. But we'll get to that in just a second, so a little teaser for everybody: if you wait five or ten minutes, we'll get to that.
So basically, for everybody out there: you've developed a software product that can analyze photos and videos and determine whether they've been made by AI. I think that's the elevator pitch. But what I'm really curious about is, how did you come up with the idea?
Like, what was the thing that made you think, I've got to build this, like right now?
Marco Ramilli:Yeah, so the idea came about a couple of years ago, when a picture of the Pope, who at that time was Francis, circulated online.
And the Pope was wearing this very fancy jacket, and a lot of people were arguing about how fancy it was, and how expensive that kind of dress was for a Pope, who is supposed to present himself in another way.
David Brown:Yeah.
Marco Ramilli:And that made me feel like, you know, it's not a problem for me. I mean, it's just clothing, and it's cold outside.
It was winter, if I remember correctly, late winter two years ago or something like that.
But people were starting to polarize, and after a couple of hours the Holy See said: this is not a real image, so please stop. It's not real. It never happened. But the debate was still alive, and people continued yelling at each other.
There were very big rumors and a lot of argument about it. And I felt at that time that if human beings are not able to distinguish what is real and what is not, then even if after a while somebody says this is true and this is false, it's not enough. We need to stop it at the beginning.
So we need to guarantee that if somebody publishes something that does not exist, that is not real, we can know that. Because when we see something, we are like San Tommaso, doubting Thomas, the saint who believed something only if he saw it.
David Brown:Yeah.
Marco Ramilli:And if you see a picture, you start to feel something inside yourself. And when you feel something many different times, you build your opinions out of those feelings.
So there's nothing wrong with generative artificial intelligence and with building generated images or videos, but people need to know that what they are seeing is not real; I mean, that it has been generated.
But, you know, not all people think like that. And so there are many cases where people generate content without saying that it was generated by artificial intelligence.
And so that is the point.
From that specific case, I started to think that we need technology to detect this and to communicate to people: hey, this is a great picture, but it never existed.
David Brown:Yeah. And for the people listening: for those watching, I will have shown the picture in question.
And it's the picture of the Pope in a white puffer jacket.
But I think it was like a Balenciaga, like a super expensive one. And it's actually a really funny picture if you know that it's fake, and you're kind of like, yeah, okay. I do remember when that came out, and instantly I looked at it and went: okay, that's a fake picture.
But again, like you said, a lot of people can't distinguish the difference. And I think this is becoming more and more relevant as we move forward.
And I think on the show, even last year when we were looking at the presidential election in the U.S., I said this is going to be the last election where we can trust anything that we see. Because in the future, you're not going to be able to trust anything.
There's going to be some politician at a rally, and it's going to look like they said one thing, and it's just completely fake. Or people are going to make the audience look like it was different sizes. You know, there was this Trump parade thing that happened, and I think we're already seeing some images where people are going: well, no one was there.
You know, there were like a thousand people there. And then there are other images that look like there are a million people there. And I don't know which one's true.
And unless you're physically there yourself and can look around to see what it is, you almost can't believe anything anymore. And I think it is really dangerous. At the same time, I think there's a lot of funny stuff coming out of it.
Like, I don't know if you've seen the ones where they have, like, Bigfoot and it's Bigfoot talking and stuff like that.
A lot of stuff on Instagram and TikTok and social media and all that, and there's a whole account built around it. It's obviously just a guy running around and talking about stuff, but he'll be, like, crouching down, and it'll be Bigfoot going: wow, that was a really close call.
There are some humans just over there and they nearly saw me. Stuff like that. And you look at that and you go, that's super creative. It's really funny and quite entertaining.
But that's obviously, you know, fake. And yeah, it's massive. Sorry, I've sort of gone off on a little train of thought there. I might just edit some of that out. Bringing it back.
So I know that you have a background in cybersecurity, and I know from when we've spoken before that you see this as almost a cybersecurity type of problem. Can you talk to me a little bit about that?
Marco Ramilli:Well, I think it's a bigger problem than cybersecurity; cybersecurity is probably a subset of it. As you said, there are use cases where people can use artificial intelligence to build interesting and funny things to watch or to listen to.
But on the other hand, since we build our opinions online today, and we build what we believe online, and probably decide what we will vote online, and everything today is happening online, it's much more than cybersecurity. It's like a human right to me. So it's something very fundamental for us.
So let me argue that a bit more. We have artificial intelligence in cars, for instance.
We have cars that can drive by themselves, and they work in most cases, and we have a lot of regulation around that.
There is a lot of regulation on what a self-driving assistant can do, the different levels of autonomy, who can use those levels, and so on and so forth. And then we have this technology, a technology that interacts with us, interacts with our minds, that can literally talk to us.
So if you enable a technology to talk with us, it is not just helping us do something; by talking with us, it can change what we believe. And today there is no regulation on that. So there is a kind of unrestricted technology that is touching us from the inside.
You know, there are many people who turn artificial intelligence into a girlfriend, other people who build a best friend on top of artificial intelligence, others whose parents have died and who want to talk with them again, so they bring them back using artificial intelligence. There are a lot of different, very scary ways to use this, ways that touch us inside.
So we really need something, on one hand, to regulate, and on the other hand, to check and see whether what is happening is real or not real.
So yes, cybersecurity is a topic, because with these technologies you can build phishing attacks, or spear phishing attacks, very successfully. You can even build pieces of code, so you will be able to write new malware families, and you can use the technology to iterate over vulnerabilities, or to find new vulnerabilities if you've got the code. But that's just the technical point of view; actually, this is a higher-level problem that we need to face and solve.
And, you know, hopefully somebody will solve that problem as well.
David Brown:Yeah, no, you're right. How does it work?
I mean, I assume you've got some super special IP and you don't want to tell everybody exactly how it works, but just in general, how does it work? How can it tell?
Marco Ramilli:Yes, so we developed a kind of artificial intelligence model that we call degenerative models.
The other way around from the generative ones: those models take as input images, videos, or voices, and compute the probability that those contents have been generated or not. In other words, we are able to decompose the input into different tokens.
And for us, the tokens are bits or bytes, for images, for videos, and for sound as well. And we know the probabilities of each byte depending on the generator.
So we know what the probability of the tokens is for generator number one, generator number two, and so on and so forth. And we train the network to recognize them.
So, making a very long story very short, we developed an adversarial artificial intelligence network that is able to recognize, with certain probabilities, whether content has been generated or not. We offer those networks in two different ways today: one is through a simple web interface, and the other one is through an API.
So those trained models can be integrated into different products, depending on different use cases around the world.
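For readers who want a concrete picture of the API route Marco mentions, here is a minimal sketch of what calling such a detection service from client code might look like. Everything specific here is an illustrative assumption: the endpoint URL, request fields, and response shape are hypothetical placeholders, not the documented interface of Marco's product.

```python
# Hypothetical sketch of a client for an AI-content-detection API of the
# kind Marco describes: upload media, get back a generated-or-not probability.
# The URL, field names, and response schema are assumptions for illustration.
import requests

DETECTOR_URL = "https://api.example-detector.com/v1/analyze"  # placeholder endpoint

def generated_probability(image_path: str) -> float:
    """Upload an image and return the model's probability that it was AI-generated."""
    with open(image_path, "rb") as f:
        resp = requests.post(DETECTOR_URL, files={"file": f}, timeout=30)
    resp.raise_for_status()
    # Assumed response body: {"generated_probability": 0.87}
    return resp.json()["generated_probability"]

if __name__ == "__main__":
    p = generated_probability("pope_puffer_jacket.jpg")
    print(f"Probability this image was AI-generated: {p:.0%}")
```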
David Brown:Right. And through the trials and the clients that you're working with, what kind of stuff are you seeing?
Like, have you found some really good examples, a great case that people might recognize?
Marco Ramilli:Sure. So, for instance, today our degenerative models are mostly used in banks and insurance.
In banks it's for fraud, like image morphing. When you open up a new bank account, they need to verify that you are actually you.
And so there are a couple of ways to do that. The first one is by asking you for ID cards.
And there are people who can use artificial intelligence to modify ID cards or Social Security numbers.
And the other way is by having a conversation, like we're doing today, with cameras, where the verifier who needs to verify the other person asks them to touch their nose or their hair and things like that. And there is a kind of attack on that. I can show you one if you want, right now.
David Brown:Yeah, go for it.
Marco Ramilli:That is called morphing. It's very quick. I just click a button and, for the people that can see us...
David Brown:Yeah.
Marco Ramilli:...you can see that I'm changing faces.
David Brown:It's a bit creepy, but it looks real. Like, if somebody got on and didn't actually know what you looked like and you just started off that way, there are loads of people who would never know.
Marco Ramilli:Absolutely, absolutely.
So we with our models, the, you know, by passing out these images from here, they are able to say, hey, on the other side of the camera there is not a real guy. It's some somebody who is changing his face. So back to the reality. I'm sorry if I'm not.
David Brown:That's good. I was about to ask you to turn it off because it's really creepy. I guess the best word is uncanny, because it looks almost real, but not quite.
But maybe that's because I know you. Like I said, if you had come on in the beginning with it on, and I'd never met you and didn't know what you looked like, I might not recognize it. But because I know what you look like, it's just so creepy. I'm like, that's just wrong. Turn it off.
Marco Ramilli:Yeah, yeah. And, you know, I'm doing that with just a laptop.
With a bigger PC on my desk, I could probably get better and better quality on the morphing. So that was just a simple example. Another case is insurance.
So when two cars crash and they have some damage on their bodies, if the damage is below a certain amount of money, let's say €2,000, the insurance company does not send assessors or people to check that the damage is real.
David Brown:Right.
Marco Ramilli:They just trust pictures from, you know, mechanics and body shops. So in that case, a mobile phone is enough to change the damage size and claim more money for that damage.
So again, that one is pretty straightforward. But we have very strange and interesting examples as well. One is soccer players, and somebody might say: soccer players? Yeah.
I'm not a soccer player and I'm not a soccer enthusiast, so be patient with me if my terms are not the right ones.
David Brown:Yeah.
Marco Ramilli:But, you know, there are a lot of soccer players who are not playing in the first leagues and who want to play in the first league, because of money, probably, and because it's a career path for them.
And there are many of them, and just a few of them will be selected for the premier league.
So today there are agents, or parents, or the soccer players themselves, who send videos of themselves playing, saying: hey, look how good I am. Please let me in.
David Brown:Yeah.
Marco Ramilli:And many of these videos today have been modified with artificial intelligence. So at the end of the day, the soccer team will send real scouts to see the soccer player.
And eventually they figure out that the guy is not as good as he was in the video, but they have spent time and money to send scouts to see the player.
And in the meantime, probably the good ones have gone to another team. And since being a soccer player is a very well-paid job, it's important to pick the right one and not the worst one.
David Brown:That's nuts. I would have never thought about that.
Marco Ramilli:Yeah. And I have many more examples like that.
David Brown:I'm sure, I'm sure. Yeah, I totally get it. And I guess there's a subtext to this, right? Any technology, any new technology will immediately be used by people to try and get some sort of an advantage. And it's people, at the end of the day. The formal name for it, I guess, is the dual-use problem, right?
It's like a kitchen knife. A kitchen knife is handy: you cut your food with it, you use it in the house and whatever.
But it can also be used to kill somebody, right? And everything is that way. Guns are that way, right? Guns can be used to hunt, they can be used for self-defense, but it's the people that use them that are the problem. It's not anything inherent in the tool itself. And this is exactly what we're seeing play out with AI.
You know, the tool itself is not the problem, and it can be used massively for good things.
But you're always going to have those people who will say: oh, how can I use this to my personal advantage? How can I use it to get ahead?
Marco Ramilli:Yeah, it's just people. I truly agree with you and what you said.
However, I believe that the people and companies that are building those technologies must put some friction inside them, so that they are used less for evil and mostly for good.
David Brown:But you've lived in California before. We haven't gone into your background, but I know you've lived in California.
Marco Ramilli:Right.
David Brown:And, like, the Silicon Valley area.
And you know as well as I do, because I worked there for a long time and I know loads of different companies and people there: the tech bros, as the term goes, genuinely want to do good. They really do.
They develop these tools and technologies because they want to solve a problem. Yes, okay, they want to make money at the same time. But if you sit and talk to them, they absolutely, 100%, want to do well and mean well.
And most of them are good people, so they don't even have a concept that someone would misuse their tool, whatever it is. It could be anything. Take social media: in the beginning, literally all they ever saw was, this is going to be amazing, it's going to let people connect. And genuinely, they just believed it.
Marco Ramilli:Right.
David Brown:They didn't think people were going to be cyberbullied, or called out, or that people were going to put porn up.
All of that never even entered their minds in the beginning.
And there's no one there playing the realist, going: hello, I think you need to think about what you're doing here. Because no one wants to slow down. They're like: no, no, no, we need to get this out immediately. Was that your experience when you were there?
Because it certainly was mine when I worked there.
Marco Ramilli:Yeah, absolutely it was. And I think it still is for most people. However, we probably need to change that a little bit.
We see a lot of technologies that are very disruptive and that could be dual use, as you said.
But if, from the beginning, when we think about the technology and develop the technology, we start to put in some friction, it can be used on one side only and not on the other. For instance, in Italy some time ago, during the nineties, we had a chocolate spread called Nutella; I think it still exists today.
It's very, very well known around the world. Nutella's maker shipped the jar of Nutella with a knife inside. And that was a knife,
yeah, but that knife was designed in a very interesting way: it had a rounded point, a rounded tip, so it was impossible to use it as an offensive tool.
And it was very light, so it was not really a knife.
David Brown:To spread the Nutella, you mean? Right, yeah. I thought you meant somebody put one in a jar of Nutella.
Marco Ramilli:No, no.
David Brown:Okay, I see what you mean now. Yeah. Okay.
Marco Ramilli:Yeah.
And so, you know, that was a knife that was designed with the thought not of hurting somebody, but just of spreading Nutella on bread.
David Brown:Shout out to Nutella. That's good. I like Nutella.
So I mentioned earlier that, you know, I think because we weren't able to record earlier, it's given us an opportunity to talk about a situation that's developing at the minute in the world, and that's this whole situation between Israel and Iran. Now, we're not going to get into the politics of it because the politics don't matter for this conversation.
But personally, I'm on social media all the time. I'm on TikTok, I'm on X, or Twitter or whatever.
I'm on LinkedIn, I'm on Facebook, I'm on everything, because my company accounts are all on those platforms, so I have to be on them. And I see videos and stuff all the time of people saying, oh,
Iran has used hypersonic missiles. But I watch that video and I'm like, that doesn't seem real. It just doesn't feel real to me.
And it hasn't been reported anywhere else except on this one random Twitter account. And so I start to go: is that actually real or is it not? The feeling I get now, with almost 90% of the videos, is that I just look at them and go, I don't think that's real, but I have no idea. And this is exactly the problem that you're trying to help with.
And it would help, I mean, just having somebody out there to go: hey, our confidence in this is 85%; we think this is probably a real video. Or: look, there's like a 15% chance that this is real, so you should probably ignore it.
That would be enormously helpful right now, because I just have no idea what's real and what's not.
Marco Ramilli:It's true. I mean, we have been observing videos, images, and even voices that have been modified over the past days.
And there's not actually a left side and a right side here; almost everybody today uses this technology to change the information a little bit.
It's enough to change the crowd behind your shoulders, or your background, to change people's perception and make them say: oh, how many people are there? This must have been important. Or just deleting an object, or adding an object in the sky for instance, could be very important for the perspective, for what people feel. So yes, that is totally true.
And beyond that, I'm worried that we are flooding the Internet, the digital space, with a lot of this fake content, and soon we will have more fake content than real content. And if you think about that, why would you need to spend hours on, or be interested in, fake content?
I mean, if you want fake content, you just go on GPT, Midjourney, Stable Diffusion or whatever and generate it whenever you want. So I think we are facing a very important piece of history where everything could change.
I'm talking even about the economy, say the economy of social media. Why would you need to go on a social media network if everything there is generated?
If you want something generated, you can just go and generate it. So the platforms will need to protect, to defend, reality, even for their business, even to defend their job and what they actually do.
So we are kind of flooding the Internet with stuff, with content without anything real, without depth, I would even say without flavor, since nothing is real anymore. So what you said is totally true, and we are trying to give a contribution to solving this problem.
This problem is too big to be solved by a single company. There is probably a need for many companies to work together to solve this issue.
But in the meantime, we are here and we try to do our best to contribute, to decrease this problem.
David Brown:Yeah, I think you've got a massive battle ahead of you, mate. You're fighting an uphill battle, that's for sure.
And the other thing that just popped into my mind while you were saying that is: not only is it the sheer volume of it, but the knock-on effect is the added expense on top of everything else. Like, just having real content would be costly enough for the platforms and the people.
In the UK we're quite good at having unlimited bandwidth, and we're on Wi-Fi, but I know a lot of people, my in-laws for example, who don't have unlimited broadband at home. So if they're on Facebook or whatever and there's loads of fake content being downloaded,
there's this subtle effect where it's using up their bandwidth, costing them money and time and everything else, and memory on their systems. It's clogging up their phones and doing all this stuff in the background, for stuff that's totally fake. And that's a whole separate conversation, I think; I don't think we have time to get into the economics of it.
But there was a discussion about this in the past around online advertising, back when mobile companies didn't give you unlimited Internet: if you went to a news site and ads covered 80% of the site, then 80% of your bandwidth was actually being used up by ads.
That's nothing you asked for and nothing you wanted, and it's costing you money and using up your data. There's a whole side to that; it's just clogging up the whole system across the board, which is annoying. So, I'm conscious of time. How can people help you?
Marco Ramilli:Well, by helping to spread our voice, like you are doing right now. Thanks a lot for that.
We have a LinkedIn account and an X account that people can follow. Or, if they want, there is even a free version of our technology, because we want to give that technology to everybody. It's a free version which does not use the premium and business, super cool and, let's say, perfect models.
David Brown:That's fair.
Marco Ramilli:But it's free, so if somebody wants to use it, they can just go to our website, identifai.net, click on the try button, and enter and use it.
David Brown:Brilliant. And I'll put the links and everything in the show notes so people can just click and go straight there. Anything else you want to add to the conversation before we go?
Marco Ramilli:I just want to say thank you to you and to all our and your listeners and watchers, and that's it. Thank you.
David Brown:Brilliant. Thank you. Cheers, Marco.
Marco Ramilli:Same to you.