Google's Magic Editor is a photo editing tool available on Pixel phones and Google Photos. It allows users to remove objects or people from photos, adjust colors, and even add AI-generated content, such as replacing a cloudy sky with a sunny one. The feature uses AI to identify and remove unwanted elements, replacing them with contextually appropriate content based on pattern recognition trained on millions of images.
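To make the fill-in step concrete, the sketch below uses classical inpainting from OpenCV, which propagates surrounding pixels into a masked region. This is only a rough, non-AI analogue for illustration; Google's Magic Editor relies on a proprietary generative model, and none of the names here come from Google's code.

```python
# Illustrative only: a classical, non-AI analogue of "erase and fill".
# Magic Editor itself uses a proprietary generative model.
import cv2

def erase_region(image_path: str, mask_path: str, out_path: str) -> None:
    """Fill the region marked white in the mask using surrounding pixels."""
    img = cv2.imread(image_path)                        # the photo (BGR)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)  # white = remove
    # Telea's method propagates nearby texture into the masked area.
    filled = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite(out_path, filled)
```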
AI technology has made photo manipulation more accessible and sophisticated. Tools like Google's Magic Editor, Samsung's AI features, and Apple's editing capabilities allow users to instantly alter photos, removing or adding elements with ease. This has eliminated the need for specialized skills, making it possible for anyone to create highly realistic but manipulated images, which can distort shared realities and spread misinformation.
AI-driven photo manipulation raises concerns about the erosion of shared reality. It enables the creation of false narratives, misinformation, and manipulated evidence, which can impact legal proceedings, journalism, and public perception. The ease of altering photos also challenges the authenticity of visual evidence, making it harder to discern truth from fiction in critical areas like news reporting and court cases.
Google incorporates safety filters and metadata (IPTC labels) to indicate AI-edited images. However, critics argue these measures are insufficient, as metadata can be easily removed, and social media platforms often strip it during upload. Google also relies on user feedback to improve safeguards, but the rapid deployment of these tools often outpaces the implementation of robust safety mechanisms.
Courts and insurance companies struggle to verify the authenticity of visual evidence due to widespread photo manipulation. While insurance companies are adopting specialized apps to authenticate photos at the point of creation, courts lack similar tools. This raises concerns about the reliability of evidence in legal cases, where manipulated photos could lead to wrongful convictions or fraudulent claims.
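The point-of-creation approach those insurance apps take (often called content credentials, as in the C2PA standard) can be illustrated in greatly simplified form: a device-held key signs the image bytes at capture, so any later edit breaks verification. This is a hedged sketch of the general idea, not any vendor's implementation.

```python
# Greatly simplified sketch of point-of-capture authentication. Real
# content-credential systems (e.g., C2PA) embed signed provenance manifests;
# here we just sign the raw bytes with a hypothetical device key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # would live in secure hardware

def sign_at_capture(image_bytes: bytes) -> bytes:
    """Signature proving these exact bytes came from this device."""
    return device_key.sign(image_bytes)

def verify_claim(image_bytes: bytes, signature: bytes) -> bool:
    """Any edit changes the bytes, so verification fails."""
    try:
        device_key.public_key().verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False
```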
AI is a double-edged sword: it enables the creation of highly realistic manipulated media but also provides tools to detect such manipulations. Researchers use AI to identify artifacts left behind by editing tools, but this is an ongoing arms race. Effective solutions require a combination of AI detection, policy changes, corporate responsibility, and public education to address the broader societal impact of manipulated media.
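One long-standing public detection heuristic, error level analysis, illustrates the artifact-hunting idea: recompress a JPEG and look where the compression error differs, since regions pasted in or regenerated by an editor often recompress differently from the rest of the frame. This sketch is illustrative only and is not the method any particular lab uses.

```python
# Error level analysis (ELA): a classical forensic heuristic, shown only to
# illustrate artifact-based detection; production detectors are far subtler.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)  # recompress once
    buf.seek(0)
    recompressed = Image.open(buf)
    # Bright regions in the difference image recompressed unusually,
    # which can flag spliced or AI-filled areas for closer inspection.
    return ImageChops.difference(original, recompressed)
```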
Social media amplifies the spread of manipulated images by enabling rapid dissemination to global audiences. Platforms often strip metadata that could indicate AI manipulation, making it harder for users to discern authenticity. This creates an environment where misinformation can thrive, particularly during crises, as seen with fake images of the Hollywood sign on fire during real disasters.
Metadata, such as IPTC labels, can indicate AI manipulation, but it is easily removed with commercially available tools. Additionally, social media platforms often strip metadata during upload, rendering it ineffective for consumers. This makes metadata a weak safeguard against the misuse of manipulated photos, especially in contexts where authenticity is critical, such as legal evidence or news reporting.
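How little the label protects is easy to show: re-encoding only the pixel data produces a visually identical file with no EXIF or IPTC blocks at all, which is roughly what platforms do at upload. A minimal sketch, assuming Pillow:

```python
# Why metadata is a weak safeguard: copying only the pixels silently drops
# every EXIF/IPTC block, including any "edited with AI" label.
from PIL import Image

def strip_metadata(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # pixel data only, no metadata
    clean.save(out_path)
```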
Tech companies prioritize rapid deployment of creative tools, often backfilling safety measures later. While these tools enable users to enhance personal memories, they also risk misuse for misinformation and fraud. Companies like Google rely on user feedback to improve safeguards, but critics argue that safety considerations should be integrated from the outset to prevent harm.
AI photo manipulation undermines public trust in journalism by blurring the line between reality and fiction. While reputable outlets adhere to ethical standards, the proliferation of manipulated images on social media creates skepticism about all visual content. This challenges journalists to maintain credibility and highlights the need for transparency in how images are edited and presented.
Support for this podcast comes from On Air Fest. WBUR is a media partner of On Air Fest, the festival for sound and storytelling happening February 19th through 21st in Brooklyn. This year's lineup features SNL's James Austin Johnson, Anna Sale of Death, Sex & Money, and over 200 other creators. OnAirFest.com. This is On Point. I'm Meghna Chakrabarti.
I'm going to start with a question today. I'm not going to answer it immediately, but it provides a compelling sort of mental backdrop for the conversation that we're going to have. So right now, which is 10 o'clock in the morning Eastern time on Monday, January 13th, that's when we first broadcast this show. The question is, is the Hollywood sign in Los Angeles on fire? All right. Keep that in mind.
I don't know if you maybe have heard or seen some things about that already. Now, we live in a world where the visual sense is the most dominant sense. I mean, for sighted people, what our eyes take in is the most powerful confirmation of reality. And that's why there are sayings like, you've got to see it to believe it. Or in the social media age, a more common one is picture or it didn't happen.
Which is what people tend to say when someone posts an extraordinary claim. Give us a picture of the event or we will not believe that what you're saying is true. But of course, ever since the earliest daguerreotypes, photography has also been a medium of visual manipulation. Well now, I will venture that with the latest smartphone technologies, you would be wise to be skeptical of just about any photograph. And why?
Well, let's take Google, for example. Google's hugely popular line of Pixel smartphones has a photo feature called Magic Editor. You take a photo with your phone camera, and then with just a tap, you can completely erase something or someone from that picture. Like that weird tourist that photobombed your picture at the Lincoln Memorial or something like that. You can just remove that person.
You can also move objects around within the photo. You can make them bigger or smaller. And, of course, you can change colors. Now, with Magic Editor, you can even add AI-generated content to your photos. Say, a sunny sky instead of a cloudy one. In other words, you can create a picture of something that never actually happened.
Now, millions of people around the world own Google Pixel phones. In 2023, the tech company shipped some 10 million of these phones, according to Nikkei Asia. And Google's newest phone, the Pixel 9, reportedly helped Google's share of the North American smartphone market jump from 5% to nearly 13% from September to October of 2024.
In the spring of last year, Google took another step. It announced it was making Magic Editor available for free to anyone using Google Photos, not just people who have Pixel phones.
Now, of course, Google is not the only player in this photo manipulation game. Samsung and Apple are also rolling out or working on similar technology. So between the three of them, you're talking about basically the entire global smartphone market. It's a cool tool, but what will it do to our sense of reality? How could instantaneous, in-pocket editing make the false
feel real? So how do we strike the right personal and societal balance when it comes to making tools that can unleash both wild creativity and wild lies? Well, we're going to start today with Google itself. And Isaac Reynolds joins us. He's group product manager for the Pixel camera at Google. Isaac, welcome to On Point.
Hi. Very nice to be with you today. So first of all, let's just nerd out technologically. How does it work in the Pixel phone, Magic Editor and Reimagine, these new features? Yeah. So obviously, when you get your Pixel phone, you have this full-featured camera and editing package. And Magic Editor comes inside Google Photos when you've
browsed through your pictures. You've picked the perfect one to share, the one that most represents the memory that you have and want to share with maybe grandma of your holiday morning. But before you share, there are just some little things you need to clean up before you actually try to tell that story. And
you can use Magic Editor in Google Photos to, like you said, remove little things in the background that maybe distract from your memory of an event. Because I don't know about you, but what I remember from holiday mornings is joy and happiness and family. I don't tend to remember the coffee mug on the windowsill that I maybe forgot to pour out last night. And so those things I can take away with the Magic Editor or the Magic Eraser feature. And then they don't distract from the story at all. So I get a photo that better matches my memory.
And when you do that, it's pretty straightforward. We like to make things really easy and automatic for folks. So when you first enter Magic Editor and you choose to erase, you can just tap the mug with one finger.
And the algorithm will go through and try to figure out whether you're tapping the entire windowsill or just the mug or maybe a person way deep in the background. Identify that and its perfect outline and then automatically remove it. Now, AI is really the perfect tool for this, right? Because AI is extraordinarily good at pattern recognition, which is essentially what's going on here. It is. You could imagine what I used to call easy cases where things like
I don't know, a blue sky. Blue sky is really easy to figure out what's in between because it's blue sky on the left, it's blue sky on the right. And you just put some blue sky in the center and everything looks perfect and your eye skims right over it. But as patterns get more complex, like I would call maybe a brick wall a moderately complex pattern.
You really need larger models that can recognize greater bits of context and figure out what grout looks like in between those two bricks and make sure the grout lines line up on the left and right side of the bricks as well. And so then...
In a sense, though, I would guess that detecting the edges of the thing that the person wants to remove, right, the coffee mug, although I will say that we've got some quirky coffee mugs in our house and they each...
have narratives in and of themselves. So maybe I wouldn't want to remove the coffee mug off the windowsill in our house. But so finding the edge is one thing. But then tell me a little bit more because, you know, one thing that visual AI generators have been accused of is like putting crazy things in, like people with nine fingers, essentially. How does it know how to replace it with an extension of that brick pattern as you're talking about?
Because in most cases when you see a brick pattern, what's in between the brick pattern is more brick pattern. That's the common thing. And these algorithms are trained on millions and millions and millions of images to know that generally what falls between a piece of bark on a tree and a piece of bark on the tree is another piece of bark. Or when you take a photo of a beach full of sand, that probably what's behind that, I don't know, piece of garbage that you're trying to remove is another piece of sand. Right.
And there are tools that are more creatively expressive that allow you to describe what you'd like to put there instead of sand. And you can find those tools in Adobe Tools and Google Tools and in others as well, where you can say, in this particular case,
I do want to maybe replace that piece of garbage with, oh, I don't know, a beach towel. Maybe that looks more real for your scene and your memory. And you can do those things. So is the replacing feature, is that the Reimagine feature?
Yeah, there are a variety of features. You can make an image a little bit wider, like uncrop it, you might call that. You can use Reimagine to replace a little bit of an image with something that maybe is a little more real to your context and your greater memory. You can erase things and replace them with, let's just say, something your eye skims over like it was never there in the first place.
There are tons and tons of different things you can do with these tools to make a photo that better matches your memory and the context in which you remember it. I got you. Okay. So, Isaac, hang on here for just a second because we also spoke with Allison Johnson. She's a staff writer at The Verge. She covers
smartphones and mobile technology there. And we reached out to her because she's been experimenting with some of these features on her own family photos. And she's been using both the Google Pixel phone and the Samsung Galaxy S24. My child, he's three. He constantly has like boogers on his nose.
I don't have a problem like cloning out some boogers on his nose. What I struggle with myself is like, I don't really want to mess with the sky or a lot of other elements in the scene. If you take all the other tourists out of your shot of a waterfall in Iceland, it sort of just looks like the rapture happened. You lose kind of the flavor of like,
I was there and this is how it felt that day. And the ability of the photo to take you back to that moment. So Isaac Reynolds, respond to that because, you know, Allison there is talking about taking a photo, taking people back to a moment. But it sounds like you and Google are thinking about it more in terms of taking people back to a memory, which is different.
It's true. I think, well, I think the first thing to step back and remember is that a picture is a piece of communication. A photo is a story. And it's a story you tell someone else through the act of sharing it. So Allison had said something really interesting, which is that she thinks that when she shares that photo of the waterfall, other people feel like it's the rapture.
And I'll tell you right now that I would go to that waterfall, and I have done this before on hikes or national parks. I will go at a time when I know that no one else is going to be there so that I can get this beautiful shot of nature in solitude and wonder and
And I love going up before sunrise and seeing the sun rise above the hills with no one else around. And it's just this moment that only you get to experience. And that's the feeling I like to share with other people, this special, unique moment that only I got to experience, that no one else was around for, and that is totally unique in that way. But you actually did experience it, right? You physically experienced that blissful solitude of having an amazing vista to yourself. Yeah.
In those cases, yeah, you can. And in other cases, you might have felt like it was perfectly alone in solitude because maybe you were embedded in this romantic moment on your honeymoon and it felt like it was just the two of you running around the world. And all these pesky tourists are just getting in the way of the memory that you had and the true experience that you had. And you can tell now that there's three different contexts for one picture.
And I think what's important is that every participant in our society, the people and the tools responsible for creating and sharing these pictures, provide the right level of context. Well, Isaac Reynolds, hang on here for just a minute because there's more I want to chat with you about regarding...
why Google decided to put this feature not just in the Pixel phones, but also for Google Photos, for anyone who uses Google Photos. So we'll do that in just a moment. This is On Point. ♪
Support for On Point comes from Indeed. You just realized that your business needed to hire someone yesterday. How can you find amazing candidates fast? Easy, just use Indeed. There's no need to wait. You can speed up your hiring with Indeed.
and On Point listeners will get a $75 sponsored job credit to get your jobs more visibility at Indeed.com slash On Point. Just go to Indeed.com slash On Point right now and support the show by saying you heard about Indeed on this podcast. Indeed.com slash On Point. Terms and conditions apply. Hiring? Indeed is all you need.
Support for this podcast comes from On Air Fest. WBUR is a media partner of On Air Fest, the festival for sound and storytelling happening February 19th through 21st in Brooklyn. This year's lineup features SNL's James Austin Johnson, Anna Sale of Death, Sex & Money, and over 200 other creators.
You're back with On Point. I'm Meghna Chakrabarti. And today we're talking about how AI technology in smartphones (not just the Google Pixel phones; Samsung and Apple are at work on similar technologies) can make it possible for you to basically instantly alter a photograph and make it into the memory that you wish it were. Meaning you can remove things, objects, people, you can add stuff back in. Sounds like a really awesome tool for creativity.
That also begs the question, though, of how does it impact our relationship with the realities that photographs are supposed to represent? Isaac Reynolds is joining us today. He's group product manager for Pixel Cameras at Google. And Isaac, I will fully acknowledge that since the birth of photography, it has been a medium, as I said earlier, that has been subject to all sorts of manipulation. I used to do a lot of darkroom editing back in my day when darkrooms were more commonplace.
actual dark rooms. And just even the act of printing a black and white photo, that is a subjective decision, right? Because we're not seeing in black and white. But I'd also spend hours and hours and hours and hours trying to figure out, like, contrast, et cetera. These are all subjective decisions. So it's not the existence of that capability that comes into question, but rather the extent to which you can take it.
So let me ask you about this. Are there any guardrails around the technology in the phones, for example, that says, well, no, you can't put a dead body in the background of a photo?
Yeah, I think you make an interesting point about editing. And it's true to start with that editing is by no means a new thing. Most people are familiar with the concept of "to Photoshop it out" as a verb. But I'll give you a throwback really quick. The 1930s photo by Dorothea Lange of the Migrant Mother, in many versions, has had a thumb removed from it.
Going back almost a century, people have been able not just to modify contrast and exposure, but to truly remove bits from a picture. I think it's a good shout-out from you that editing is by no means a new thing. What I did talk about was context. I think context is the part that we all as players in this technology and communication ecosystem have to get right.
Not only today, but honestly, we needed to be doing these things as a society 10, 15, 20, 30, 50 years ago. One thing Google does, for example, is offer these guided experiences so that the things that the overwhelming majority of people want to do, the overwhelming majority of the time, are automated and easy. And those are things like removing little tiny people in the background.
Or putting in safety filters so that things that are obviously inappropriate just can't be created. Or attaching this metadata called IPTC, the industry-standard metadata that Google attaches to images that have been edited with certain kinds of AI. And those are things that all tools, including sharing tools and social networks, can do to make this kind of a safer place for images. Not new, but things we should have been doing for a long, long time.
So to that point around those guided experiences or even putting up some guardrails on inappropriate stuff, let's turn back to Alison Johnson. Again, she's at The Verge. And as I said earlier, she and her colleagues experimented with Reimagine on a Google Pixel 9 device.
They, however, found that they could get the AI to generate some pretty creepy stuff in the pictures. Car crashes, bombs that were still smoking in public places, sheets that seemed to cover bloody corpses, drug paraphernalia. We first read about this in an article that she wrote in the middle of last year. And here she is again. You can add something like...
gross to a plate of food, like a bug and it looks like someone served your food to you with a cockroach in it. Or making it look like there's a bicycle accident in the street or
The big, like, red flag words like crash or fire, there were like guardrails around that, but we're writers. So we got creative, but it wasn't outside the realm of anything anybody else can do. At first glance, if you're just scrolling through social media especially, it looks completely realistic. The lighting, the way it's rendered, it spooked us out, honestly. Yeah.
So, Isaac Reynolds, I suppose the implication there is that those guardrails are not robust enough? I think it depends on the picture and the context around the photo. So, The Verge presented, I think she described a picture of drug paraphernalia, which could be interpreted as actually a pretty benign image, except that The Verge put it alongside this particular context.
that this is some sort of illicit bad thing. But the same quote unquote drug paraphernalia is used by people who self-inject medication all the time. And there's nothing wrong about self-injecting prescription medication under doctor's supervision. And then having a bottle of wine later, which I think was also in the image that Allison made. So it is the context that's important.
and how we describe them in the metadata that we associate with them and the captions that we put alongside them that need to be transmitted all the way to that end customer and end user. That's what I think is really important is making sure the images come with context. And images have been used out of context for decades and decades. There are so, so many examples of that happening.
Well, so to that point, Allison did include a statement from Google that they sent her when she did this story. And Google at that time said that some of the images violated what they called, quote unquote, clear policies and terms of service on what kind of content we allow and don't allow. I mean, the allow and don't allow stuff seems a little weird.
It's gossamer and thin to me because obviously, as you just said, it's all context dependent. So I would say technically anything is allowed depending on the context. But Allison found the statement dissatisfying. And here's what she told us. Putting drug paraphernalia is a violation of the terms of service, which is like, OK, yeah, absolutely.
We broke the rules, but anyone else can break the rules. It's such a rush to get these things out and put them into people's hands. But the safety guardrails are so far behind. Google is certainly aware, and I think these companies are aware, the tools that we have to...
fight the creepy misinformation aspect of it are so far behind the availability of the tools to create the misinformation. So, Isaac Reynolds, this is my last question to you, based on that. And that is, I completely see how billions of people would love to have at their fingertips a tool that does exactly the kinds of things that you said. Just remove distracting things from images that you don't want. Like, really create the moment
that lives in your mind and heart. Totally understand that. But this misinformation part is a big, big deal. And that brings me back to the question I started off with, which, as you heard, was, is the Hollywood sign on fire right now? And I ask that because...
There has been, you know, rampant misinformation on social media along with pictures. I'm not saying, I am not saying, that they were generated by Google Pixel phones or Google Photos. I don't know. But it's out there, like the Hollywood sign on fire. And given the velocity, the instantaneous velocity with which social media can propel things around the world...
It stands to question: does a company like Google feel that it bears any responsibility whatsoever to make it as hard, as virtually impossible as it can be, to use your tools to create misinformation like that? I think that when images get spread around the internet through social media,
and people misinterpret what that image is trying to say. Because you and I both know, we've both had conversations, and I think everyone listening has had conversations with maybe a loved one where they remember the same conversation, but somehow remember the outcomes or the intent or the details very differently, even though it was the same conversation and you were both physically present. So meaning is something that's created by the creator and the audience in equal measure.
And I think when things spread around the internet like that, we need to make sure they come with context. I told you, for example, that Google embeds IPTC labels in the images that we modify with AI. I would like to see those labels presented really clearly so that people have a better sense of what they're looking at. But also, people should be a little skeptical, and they always should have been skeptical of images.
Every time you look at an image at the top of a New York Times article, you should wonder to yourself, what narrative are they trying to tell with this image? And how many images did they sort through to pick that particular one? And what was it about that particular image that told the story they wanted to tell?
That's always been true with images. It's always going to be true with images. And I think that people should always be skeptical, regardless of who's publishing the image or where it's being shared or how old that image might be. Because like I said, you've been able to edit images for a very long time. So let me press this point a little bit more. I totally take your...
example of the fungibility of meaning. Absolutely. But we're talking about fact, right? And the misrepresentation of facts.
So to your point about the metadata, I mean, would Google even consider saying, well, every time you generate or manipulate a photo using one of our tools, that instead of the information of the AI influence being buried in metadata, we're just going to put a watermark on it, a visual watermark, so that when you put it on social media, it's going to say this photo was AI manipulated. Would Google be willing to do something like that?
I think that we are constantly listening to what users are asking for and what's working and what's effective. For example, you asking me that question here today is part of the feedback that we get on what needs to be done right and what should change and what's being effective. So if adding that kind of watermark that you're describing ends up being the right solution, wonderful.
But what we'd like to do is just have everyone use the industry standard metadata such as IPTC, and then it's effectively what you're describing, but without having to have this awkward watermark slapped on top of an image that has to go through every single different platform. So we're going to try the best solutions first. And then if folks like you and our customers keep saying, maybe this, maybe that, maybe that, then of course we'll listen and we'll shift.
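For contrast with the metadata approach Reynolds prefers, the visible watermark Chakrabarti floats could look like the following minimal sketch: the disclosure is burned into the pixels themselves, so it survives metadata stripping at upload, at the cost of the "awkward" overlay Reynolds describes. The function and placement are hypothetical, not any vendor's design.

```python
# The visible-disclosure idea: burn the label into the pixels so it survives
# upload. Hypothetical sketch; no vendor ships exactly this.
from PIL import Image, ImageDraw

def burn_in_disclosure(in_path: str, out_path: str,
                       label: str = "Edited with AI") -> None:
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent white text in the lower-left corner (default font).
    draw.text((12, img.size[1] - 24), label, fill=(255, 255, 255, 180))
    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)
```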
Well, Isaac Reynolds is Group Product Manager for the Pixel camera at Google. Thank you so much for joining us, Isaac. Great to be here. Okay, let's turn to Hany Farid now. He's a professor at the University of California, Berkeley, Schools of Information and Electrical Engineering and Computer Science. He's also co-founder and chief science officer at Get Real Labs, which develops techniques to detect manipulated media. Professor Farid, welcome back
to On Point. Thank you, good to talk to you again, Meghna. Oh, so much to discuss. Why don't we just pick up where Isaac Reynolds left off? That, you know, is metadata, is AI identification in metadata, satisfactory? Is it enough? Or do we need more when it comes to understanding how AI has manipulated a photograph? It is not enough. And Isaac knows it's not enough. And here's the thing he didn't tell you:
metadata is fine. We love metadata. But what he didn't tell you is that all social media platforms, including Google's, including Facebook's, including Twitter's, rip out metadata at the point of upload. And so it's only useful if somebody sends me that image without it being uploaded to one of the many, many platforms where these things get uploaded.
The other thing he didn't tell you is that it's extremely easy to rip out metadata. Anybody can do this with commercially available tools. You asked him the right question about the watermarks, which I think he dodged a little bit. But yeah, I think again, and I think Allison got this right, is we...
The Silicon Valley motto is deploy, deploy, deploy, and then backfill in the safety. The safety is not the first consideration or the second consideration or frankly, even the third consideration. And I think the other thing that Isaac sort of missed the point on here is he was talking about, you know, you go on vacation, you have these wonderful memories of photographs. And yeah, but that's one relatively narrow branch of photography. We also use images to talk about world events, the fires in Los Angeles, the hurricanes in Florida.
We use photographs as evidence in courts of law where people's civil liberties are at stake. We use photographs to reason about conflict zones and to prosecute crimes in The Hague. Photography is not just about honeymoon photos. It is about our shared sense of reality in the world. And Google and others are now moving further and further to distort that shared sense of reality.
Okay, so I want to go back and for the sake of clarity, underscore something that you said. It is very easy to rip out metadata from photographs. Yes? Okay. And secondly, when you upload a photograph to social media, to the major social media platforms, the metadata is automatically pared down? Yeah, they rip it out. I mean, they store it server side because there's a lot of information, but it's all ripped out.
Okay. Which means me as a consumer, so even though Google puts it at the point of creation, which is great. I'm thrilled they do that. It doesn't really help the consumer on upload because that information is lost. So Isaac Reynolds and Google saying, it's okay, you'll know it's AI because it's in the metadata is essentially –
That's a rickety platform. I think that's a generous interpretation. Okay. So I want to go to – so that's really, really important to understand, right? Because, I mean, I would assert and I think you would agree, Professor Farid, that it's not so much that a very famous –
1930s black and white photograph of a suffering mother had a thumb removed off a few prints, and maybe when it appeared in newspapers. It's the scale and the immediacy with which social media can propel images, ideas, facts and fiction around the world. The scale is what matters. That's what makes this technology much more pressing to understand now.
It's two things. So Isaac was right. Photographs have always been manipulated, but the way they were distributed, to your point, has changed dramatically. But what Isaac also didn't tell you is that to manipulate photographs like that famous photo required a fair amount of skill, and not everybody had that skill. And what Google has done is...
eliminated barriers to entry to create and distort reality. And that also is fundamentally different. So when we say photographs have always been manipulated, you could sit in a dark room for hours to manipulate it, yes, but not a lot of people could do that. And to his own point, touch an image and you can modify it now. Elimination of any barrier
So we just have a minute before our next break here. But Professor Farid, I mean, I deliberately started with the Hollywood sign as an example, because of this, you know, catastrophe that is going on in Los Angeles right now.
But do you know how quickly from the start of the fires did those fake images of a flaming Hollywood sign get distributed? I don't know when the first one came out, but I saw them...
within 24 hours, which by the way is not surprising. Every natural disaster in the last, God, 10 years, as soon as they are hitting, you are starting to see stupid images that are, I think, you know, you can make, you can sort of say, well, okay, whatever, they're fake images. But, you know, at a time when this is a natural disaster and people's lives are at stake,
and their properties are being lost. We are diverting attention. We are confusing people. And, frankly, it's really egregious behavior to allow this type of content and then to actually amplify it online. Well, we'll be back in just a moment. This is On Point.
Support for AI coverage in On Point comes from MathWorks, creator of MATLAB and Simulink software for technical computing and model-based design. MathWorks, accelerating the pace of discovery in engineering and science. Learn more at mathworks.com.
and from Olin College of Engineering, committed to introducing students to the ethical implications of artificial intelligence in engineering through classes like AI and Society, olin.edu.
It's On Point. I'm Meghna Chakrabarti. Just a quick note on something we are working on for later this week. It's about those ubiquitous, Tribble-like customer surveys. They're like everywhere now. You know, when you go into the drugstore, the grocery store, maybe even your doctor's office, and they send, on the receipt at the drugstore, it's like, hey, take a survey and tell us, how did we do? How likely would you be to recommend us to a friend? At almost
every touchpoint that you have in your life with a business, they seem to want to know how they're doing. Do you fill those out? Do you think the survey has made a difference in improving the service that you get from these businesses? So we want to know. If you work in customer service, especially at any of these companies that are deploying these surveys all the time, we want to know why the company is doing it, what's done with that information, if the surveys make any difference to businesses at all.
That's for later this week.
Today I'm talking with Hany Farid. He's a professor at the University of California, Berkeley, and co-founder at Get Real Labs, which develops techniques to detect manipulated media. And we're talking with Professor Farid because basically now in the world's smartphones, pretty soon most of the world's smartphones, you will have the ability, with a tap of your finger, to completely change the content of a picture. Almost everything about it.
And we really want to discuss what is that going to do to our relationship with visual reality?
And Professor Farid, before we reconnect here, I just wanted to make one little note: Instagram is possibly the exception among the social media platforms, as far as I understand. They do have an AI info label that should pop up when a photograph appears on Instagram that has AI manipulation in it. But they also note that not all AI content contains the information needed to identify it.
So, but Professor Farid, I want to go to one thing about why photos are manipulated. I mean, to your point about misinformation, it's not just the photo that's being manipulated. The goal is to manipulate someone's understanding of reality. Right. I mean, and that's the troubling thing, which I don't know how we regulate. Right.
Well, I think you're getting right to it. So of course there are the benign manipulations where people are taking out something that is distracting, making the sky a little bluer. I don't care. But we also know that that same technology is a double-edged sword and that people are distorting reality for political purposes, for purposes of fraud. By the way, we haven't talked about the creation of non-consensual intimate imagery, where people are using this to remove clothing in photographs and then extorting people.
We know that people are doing bad things. This isn't unexpected and frankly, it's not surprising. We know that this is the road we have been on for many years now. And what the AI has done is simply made it more accessible, more powerful, and has enabled both incredibly cool, creative things and incredibly harmful things. And the question, to your point, is how do we find that balance?
And I think these conversations are part of it. But I think the tech companies also have to be thinking about these things before, not after, the technology is developed and deployed. And they continue to do the latter. It's continually backfilling in the safety. And you heard Isaac say it: well, we'll listen to our customers and we'll hear what they have to say and then we'll go in. But what do you do in that interim? Because years go by, and you can say the same thing about social media for the last 20 years.
We've waited too long, and then the harm grows year after year after year.
So I do worry that our shared sense of reality in social media, in the media, in the courts is deteriorating. And I don't know how we have a stable society when we're disagreeing on what two plus two is. Exactly. I think that's the world we occupy right now. Well, and social media in and of itself is a perfect example of how political the companies themselves are, right? Like, let's be honest. Sure. A corporation is in the business of keeping itself alive and highly profitable. That is what it does. Right.
And, you know, I'm thinking about Meta's recent decision, as Mark Zuckerberg talked about, like removing all of its fact checkers or moving them to Texas or whatever. And, you know, he's swaying with the political winds. Let's just be honest about that. Of course he is. Did you want to say something about that? Yeah. I mean, first of all, I think he's following in Musk's trail. But here's the other thing to keep in mind, too. This is not a
U.S. issue. This is a global issue. We are impacting 8 billion people in the world with the political whims of two multi, multi, multi billionaires. And I don't think that's good for anybody, particularly not our democracy. Yeah. So when you talk about like the tech companies should be thinking about this at the ground level, that's the other thing that Isaac Reynolds said that really caught my attention is
When he describes how they are conceiving this technology at the ground level, he didn't use the word reality at all. No. He said they're in the business of creating technology that helps you match your desired memory.
So, I mean, that's how they look at it. They don't look at it as like, well, you know, we want to help you take better pictures so that you remember what actually happened. Like that mug. I'm sorry, but the mug that he said he wants to erase from a windowsill.
Well, there is a world in which someone looks at that mug and remembers, oh, that belonged to my uncle who passed away a month after that. And how wonderful was it that he was, you know, with the family for that particular Christmas. And there's his mug in the photograph. I mean, that is reality. Versus: it's an inconvenient reality, so let's get rid of it to match our memory.
Yeah, it's funny that you mention that, because as he was talking, I literally wrote down on my pad of paper: reality, with an arrow to memory.
And this is, I think, frankly, a convenient talking point is that you talk about photographs that, well, what this is really about is your memory. It's not about reality. And that's fine if you're sharing your photo with your family members. It is not fine when you're posting that on social media and saying this is what is happening in Los Angeles. It is not fine when you submit that into a court of law as a piece of evidence. So I think this is a convenient talking point.
to make us not think about the fact that they are distorting reality. And I think to your point exactly, I thought his example where he said, well, I wake up really early in the morning to go hiking so I have this pristine sunrise so I can take a photo. And I'm like, yes, that's what you do. And if you don't do that, well, then that's the photo you get. And I think there's a little bit of a disconnect here between what is photography and
And maybe part of the problem is that it's many different things. And it's convenient for Google to say, well, this is really just about your memories. When the reality is that's not what it really is. It's much, much more complicated and impactful than that. Yeah. And you know,
I agree with Isaac that I want to be in a place of natural beauty when there's no one else there. But it was so interesting to me that that was the example that he went to because even he still thirsts for reality, right? Yeah. I mean, you could ask the question, well, why wake up so early? Just go, take a photo, and then manipulate it if that's the memory you want. So there's a little bit of a disconnect in the narrative there. Yeah. Okay. So...
We'll get back to sort of what this does to our own personal sense of reality, or our relationship with the visual image, in just a bit. But you've said something several times that I must talk to you about. All the other places in which visual evidence is critical. I mean, let's talk about courts of law, for example, or even things like insurance claims. Right. What's the first thing you do when you get in a fender bender? You get outside your car, you take a picture.
Do those worlds, the legal world, the insurance world, etc., do they even begin to have the tools to be able to deal with the fact that they might be dealing with millions of manipulated photographs now? Yeah.
Well, the courts certainly don't, and they are struggling. Insurance companies have gotten a little bit better because what they are starting to require are specialized apps that will take a photograph and authenticate it at the point of creation. These are called content credentials. So the insurance companies where they can control the process have more control over it.
The courts don't. Evidence is evidence. It's introduced, right? It's a voicemail. It's a photograph. It's whatever we have. The courts are definitely struggling, and they've been struggling for a while, but I think AI has been an accelerant. I think every piece of evidence, civil, criminal, national security cases, is now suspect. Even body cams. I get emails all the time about body cam footage from police officers.
How do we know this is what happened? So suddenly everything is suspect. And that is really worrisome about where we are going, because the stakes are very high. I mean, don't get me wrong. The fake Hollywood sign was bad and it was disturbing, but it is nothing. It is nothing compared to people going to jail based on fake evidence.
Does AI present the solution to the problem that it's creating, in terms of tools to fight the AI-manipulated photographs, or at least identify them? Yeah, yes and no. So certainly we need to be thinking about policy, guardrails, technology to fight this manipulated world we are entering. I will tell you, for example, I bought one of the Pixel 9s as soon as it came out, because I'm in the business of figuring out what these things are.
And it's really good. I was sort of blown away. And we did, as a matter of fact, find an artifact that Google leaves behind when they edit it. It's unintentional. It wasn't the IPTC metadata. There was something unintentional in the way they do the AI editing and manipulation, which for me is great. And I'm not going to tell them what it is, by the way, because I don't want them to fix it. But this is very much an arms race.
We are very much in a, we build a better defense, they build a better offense, if you will. I think AI is part of the solution, but it can't be all of it. We can't just rely on technology. We need better policy, we need better corporate governance, we need regulation, we need liability, we need the social media companies to take more responsibility. We need education, we need these conversations so people are aware of what the world is today.
Collectively, I think it moves the needle, but, you know, we're always playing catch-up, because the fact is there's more money to be made in distorting reality than in revealing reality. And we have to admit that that's the reality of the world we live in now. There's more money to be made in distorting reality than to actually live in it. Sobering, Professor Farid. Okay. Now, again, it's the nexus of
the imagery technology and social media, right? Because, um,
Sticking with the catastrophe that's going on in L.A., you're right. It's not just like, oh, here's a picture, a fake picture of the Hollywood sign on fire. But first responders and emergency services, like they're using social media as much as anybody else. And if they get a lot of images from certain places that they think, oh, my God, do we need to deploy more people there? Do they have to start now being skeptical of, well, is that part of the city really on fire or not?
A hundred percent. And you've seen this: every natural disaster for the last five years, people are creating distorted images. FEMA is now having to field these questions or getting phone calls. Oh my God, are the Hollywood Hills on fire? And now we're distracting them. You distract them for five minutes and you have a problem, when we have these types of emergencies at the scale that we are talking about.
And so, by the way, I think there's a special place in hell for people who propagate these things. Like, what is wrong with you? I mean, honestly, there are people's lives at stake, their property at stake, and you think it's funny
to distort reality like this and to confuse and distract people. What is wrong with people, honestly? Like we can blame the technology. We can blame the Googles of the world and the Facebooks of the world. But at the end of the day, there is a knucklehead human being behind the screen somewhere doing this. And you gotta ask yourself, like, what is wrong with us? Yeah. Yeah.
Well, in some cases, it's a very concerted effort to discredit other ways of viewing the world, politicians, et cetera, that comes both internally and externally to the United States. But in terms of the individual knucklehead, which is a – I appreciate your soft words there, Professor Farid. But isn't that – you're right. Isn't that the core problem, that what these companies are doing is simply
leveraging kind of like terrible desires in us already? And what should we do as individuals to resist the urge to, you know, be the factories for these social media companies for the creation of misinformation and disreality? I think that's exactly right. What social media has done is bring out the worst in us. Let's be honest, both in terms of what we share and how we interact with people. Because the fact is that outrage sells.
that when you are outrageous, when you post things that are salacious and conspiratorial and angry, you get more interactions and you get your little monkey brain reward system. It likes that. And social media has tapped into that because that's what drives engagement. That's what drives delivering ads. And that's what drives profit. It's a simple equation.
And frankly, if you are still on Facebook, if you are still on Instagram, if you are still on X, you should get off. It is terrible. It is terrible for you. It's terrible for society. It is just a failed experiment in my opinion. You wanna talk to your friends, great. Talk to your friends. But social media has become absolutely radioactive and I think is net negative to the world at this point.
I'm hearing you say that we are in a world now where we should be skeptical of almost every visual image that we encounter. I mean, is that how you're walking around in this world? You're in the business of trying to identify signs of manipulation. Yeah.
No, it's not. And I think Isaac said something that is misleading. He said, well, if you see a photo in the New York Times, you should be skeptical. First of all, let's not equate the New York Times with X and Facebook. The people at the New York Times are serious people who do a serious job. And there are journalistic ethics and standards. So when photographs are published, they know something about them. You can trust that somebody has done their job. Is it perfect? No. But you can't say that about X, and you can't say that about Elon Musk. So let's not
treat the New York Times and NPR and the BBCs of the world in the same way we treat a social media feed. Those are not the same thing. So when I go to a serious news outlet, no, I'm not inherently skeptical. I trust that those people did their job. Now, am I skeptical anytime somebody sends me something or anytime I see something online? Absolutely, if I don't know the source. But once I know the source, I've got a pretty good prior of what I know to believe or not to believe. Mm-hmm.
Although I have to say, media companies could, if they wanted to, be on the forefront of helping create a culture where there's more transparency, right? Like, why not also publish: this photo was cropped; this photo, the contrast was tweaked; this photo, you know, whatever AI may or may not have been used to create the image that finally ends up on the website or in the newspaper.
I completely agree. And if you look at most media outlets, they have a pretty clean set of rules of what you can and can't do. And there's a world of difference between cropping an image, brightening it, and white balancing it, and adding in an entire human being or removing objects in the image. But I agree with you. I think we should be transparent on what was done with the photo, and show me the original photo, because there's no reason not to do that at this point. You know, also, I just have to say, I don't want to live in a world where every image is a magazine cover.
You know what I mean? The beauty is in the imperfection. Yeah, we haven't even talked about this, but the bizarre reality that is fashion photography and the body image issues that it has created, because we have gone from airbrushing out boogers from people's faces, as Allison was saying earlier, to creating completely implausible human beings.
And so, yeah, and maybe there will be a throwback. I like seeing my students around campus now with Polaroid cameras. This makes me so happy. They're out there taking Polaroids and, you know, waving the film and then hooking it up. And maybe there's going to be a throwback to real authenticity, because say what you will about the Internet, authenticity would not be in the top 10 list of things I would enumerate. Yeah.
Well, Hany Farid is a professor at the University of California, Berkeley, Schools of Information and Electrical Engineering and Computer Science. He's co-founder of Get Real Labs, which develops techniques to detect manipulated media. Thank you so much. Great being with you again. This is On Point.