I got into huge trouble with eBay this week because my kids have got into Pokémon cards, which, I don't know if your kids... did you do baseball cards or whatever when you were young? Baseball. It's such a racket, isn't it? You're just selling people a piece of paper for several pounds. Oh, come on. Pokémon cards are great. But why are you in trouble with eBay? My daughter bought lots of Pokémon cards off a cheap Chinese website and didn't tell me they were fake. She said, Mommy, please put these on eBay. I put them on eBay, and suddenly the bidding started going wild. It was 25 quid, 30 quid. It was crazy.
These cards are worth a lot of money. She chuckled away. She's seven. And then I got an email from an eBay user saying, these look fake to me, because if they were real, you would see a special surface on the card. So I had to withdraw them. And then I got an email from eBay rapping me on the knuckles for selling fakes. Oh, so you're like Uber. You'd have a one-out-of-five rating and no one would pick you up anymore. Exactly. I'm an eBay fraudster. Wow. Well, congratulations. Congratulations.
Hello and welcome back to the Times Tech Podcast with me, Katie Prescott, here in London. Katie in the city. And Danny in the valley. That's me, Danny Fortson, out here in Silicon Valley. And what is going on, Danny Fortson, in Silicon Valley at the moment? Things are weird. Things are very strange. So I just want to give you a little vibe check.
So out here, well, it feels like a lot of the big cities around the world, there's Teslas everywhere. And if you pull up next to a Tesla, or behind one, you'll see it a lot. All of a sudden there's these bumper stickers everywhere, all saying some version of the same thing, which is: don't judge me, I bought it before we knew. And, seriously, they're everywhere. Right.
That's brilliant. Before we knew.
Kind of funny. That's absolutely brilliant. Will you do me a favor and get me one? Oh, for sure. For sure. I'll keep an eye out. Thank you. But the other thing, and this is a broader move happening out here, is the rightward lurch of Silicon Valley. So all of the big AI companies, Meta, Google, and Anthropic, just in the past week, have either changed their policies or said outright: we are going to start selling our stuff, our AI, to the military. Which again is a huge departure, especially for Alphabet, which had an employee walkout several years ago over a Pentagon drone contract. They just changed their terms and said, this stuff is so powerful, we are effectively now in an arms race
with China, and we are going to provide the West with the proper tools. And again, when you step back and look at all of the swirl happening together, it's kind of head-spinning being out here and watching these companies change so quickly. How fascinating. So they're all kowtowing to the Trump administration. Yeah. And of course, they're all getting rid of DEI as fast as possible as well, which again kind of mimics what is happening within the government. It's just happening so quickly. It's everywhere, all at once, kind of. It's quite extraordinary. Should we talk about DeepSeek a bit? Obviously, it was the subject of last week's episode, and we double-clicked on it a lot, didn't we? We double-clicked, and we clicked and dragged. Since then, we've had tech earnings season, and it was quite interesting to see what some of the big companies were saying about it, and also just how much they're still spending on AI infrastructure. I mean, we were talking about DeepSeek being developed for far cheaper than many of the other models. And yet, looking at some of the numbers: Google saying that their expenditure in the fourth quarter went up to $14 billion. Yep. Up from $11 billion. Yep. The numbers are amazing. Yeah, and I think there's a complete lack of deterrence, at least at the big companies, around what DeepSeek might mean. I think they're all going to try to integrate some of the lessons that DeepSeek showed around what you can do with less data
But Sundar Pichai at Alphabet said he's going to spend $75 billion this year. Zuck said he's spending $65 billion. Nadella at Microsoft, $80 billion. So that's $220 billion between three companies just on AI infrastructure just this year.
So this idea of, oh my goodness, DeepSeek has shown a way you can do it for a tiny fraction of the cost: I think if you step back, the idea is, yes, they've shown a better way to build a mousetrap, but we're still going to need all these mousetraps. It's just going to get cheaper. And I think the more powerful and cheaper these things get, the more demand there will be, like electricity. So there's kind of no stopping this train in terms of the money they're spending. It'd be really interesting to compare it to how much governments spend, which is just a fraction of that, right? And it's a really good time to be talking about this, because the big news here in Europe is that we are gearing up for a global invasion next week. The Paris AI Action Summit. AI Action Summit. Action stations. It's being jointly held with India. And it's the third of these that we've had. They started off with the AI Safety Summit here in the UK, at Bletchley Park.
Gosh, that was November 2023. And then there was one in South Korea, and now Paris. And the idea is to bring together senior leaders of all of the big nations. J.D. Vance is going to be there for you. The Chinese are going to be there, the Europeans, and then the bosses of the big companies as well.
And so it will be all of the big players like the ones we've just been talking about, with their massive investment in infrastructure. Yeah. And then the governments too, talking about what they're doing. I never quite know what's meant to come out of these things. I know there'll be some sort of declaration, but I wonder what it really means in practice. Yeah, it feels like a lot of these summits, especially with the roving nature of it, South Korea and London and now Paris, have become a bit like the climate summits, where every year the great and the good get together and talk about what this all means for the planet. But again, it's a question of, okay, well, what does that mean? Because this stuff, as we saw with DeepSeek, is proceeding at such a pace. And the AI Safety Summit, there was so much heat and light in the UK around that. And what came out of it? What has happened? Has the AI Safety Institute done anything? Has it been a brake or a guide at all
over this past year plus? I would say, in their defense, they've been looking inside models, and a lot of that hasn't been made public, but it's certainly been a presence there. And what I think is quite exciting and quite heartening is that they did bring together some really key players very, very early on in generative AI's public life. So if we think about the November 2022 release of ChatGPT,
It felt like very, very quickly leaders were jumping on that and saying, we need to talk about it. It may well just be a talking shop, but at least they're getting together. And I'm quite gutted actually not to be at the Paris event. I was at the Bletchley Park one, and it was amazing to see all these people together. And you thought, well, at least they're talking together.
And at least there's some thought going into this. And just picking up on some of the coverage so far: Politico, which is always brilliant at looking at these sorts of events, says that there's going to be some sort of discussion about distributing AI's benefits to developing nations: cheaper models, so DeepSeek, obviously, and Mistral, the French startup. And then a kind of fund is expected to be announced as well, which is going to distribute AI around the world. It doesn't touch the size of the numbers you were just talking about; they're talking 500 million, going up to 2.5 billion over five years. But I think it's interesting that they're talking about it so early, because you know what it's like in tech. It always takes such a long time
for the regulators to catch up with it. For sure. For sure. And that's what I think is really interesting also. What I'll be keeping an eye out for is the geopolitical aspect of it, this idea that we are effectively in an arms race with China. And what does that mean? Is this a kind of new version of the Cold War? As I've mentioned before, you talk to people in the defense world and they're like, this is like the nuclear bomb: whoever has AI supremacy just automatically renders everybody else defeated before a bullet is fired. And so I think a lot of people are looking at it through that framing. And then you have something like DeepSeek, which kind of freaks everybody out. Not to mention that OpenAI and the White House have said, look, it looks like they violated OpenAI's terms of service and used this technique called distillation, which is basically using OpenAI's model as a kind of teacher model that teaches the student model, DeepSeek, how it works, and then using those outputs to train its own model. They're like, hey, that's not cool, man. You've violated our terms of service. I'm sure you did as well. I had a little chuckle at this.
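To make the distillation idea concrete: in its textbook form, a small "student" network is trained to match a large "teacher" network's output distribution rather than the original training data. Below is a minimal sketch in PyTorch. The networks, sizes and random inputs are toy stand-ins of our own invention, not anything from OpenAI's or DeepSeek's actual pipelines; in the alleged case, the teacher signal would be text sampled from an API rather than raw logits.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins: a larger "teacher" and a smaller "student" (illustrative only).
teacher = torch.nn.Sequential(
    torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
student = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution into a richer signal

for step in range(100):
    x = torch.randn(64, 32)          # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher is only queried, never updated
    student_logits = student(x)
    # The student minimises the KL divergence to the teacher's softened outputs,
    # i.e. it learns to imitate the teacher's behaviour, not the underlying data.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the technique, and of the complaint, is that the student never needs the teacher's weights or training data; access to its outputs is enough to transfer much of the behaviour.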
Actually, I mean, obviously no one infringing anyone else's stuff is funny. But OpenAI and Microsoft and the other AI developers have faced so much criticism for allegedly using other people's content without permission. Correct. Which I think is... I'm not sure here, but I know you're a professional... that's a... is that a segue? It was meant to be.
That's a beautiful, beautiful segue into what we're talking about today, correct?
which is copyright, because I interviewed the boss of Getty Images recently. Fascinating. So Getty is one of the biggest photography agencies on the planet. Oftentimes, if you find a photograph on a news article or on a website or in marketing material, you'll see a little credit on the bottom right that says Getty Images. Yes. And they have got an extraordinary archive here in London. But Getty, owning all of this incredibly rich content used by so many publishers around the world, is understandably a bit nervous about how it's being used by the AI companies. And it's suing a British company, Stability AI, which has a product called Stable Diffusion, which generates images. So you can say to it, I want to see a picture of Danny Fortson wearing a really, really small helmet and brandishing a sword, and lo and behold, click of a mouse, there's Danny with a cell phone. And Getty said, hang on a second, we think our images are being used by you. So it's one of the companies, along with the New York Times, which is suing OpenAI and Microsoft, that has really turned on the AI companies over what has been used in the black box to train their models. And we should say that this is just one case of I don't know how many; the list goes on and on. But at the core of it all is one central question, the legal doctrine of fair use: if I, as a writer, read 10 books and draw inspiration from them and then create something of my own, that is fine under the law.
But what they're saying is that this is different. This is kind of illegal pirating of this work. This is not fair use. And I think that's what's going to be kind of litigated around the world.
Including in this case between Getty and Stability. Absolutely. These court cases that are going on right now will rip up the rulebook when it comes to copyright. All of the defendant companies have denied the claims. We should also say that there are lots of companies, like our own, that are doing licensing deals with the AI companies. It's not just a massive dispute; some content businesses see this as a huge opportunity. Yes.
We had the boss of Thomson Reuters on not long ago talking about that. But yes, there are all these extraordinary cases going on. Lots of wrangling here in the UK as well within government about how to set the rules around this because it's
completely uncharted waters. So with all of that in mind, we thought it'd be a good moment to sit down with Craig Peters, who's Getty's chief executive, based in New York but on a flying visit to London, to get his perspective on the debate and the case, and of course, what generative AI means for Getty. Can I start by asking you, Craig, just when did you realise that generative AI was going to be an issue for Getty?
I think that question implies that we believe it's an issue. I'm not sure that that's accurate. But we knew that generative AI was going to come onto the scene probably seven or eight years ago. We observed some research that was being done on our content, and some of the technology at that point in time was GANs, not diffusion models, but it was producing computer-generated imagery. And we started really collaborating at that time with entities like NVIDIA to really understand the capabilities, how these models were built,
and ultimately try to understand what they did or did not mean to our business. And when you say research on your content, what does that mean? It means training on our content. So you noticed that people were using your images? Oh, yeah. And that was something that was happening not only...
with companies like NVIDIA, but it was happening in research institutes and in universities, including those here in the UK. And when you say they were using your material, was that with your permission at the time, or was this something you just saw? It wasn't with our explicit permission at that point in time. But we are a believer in research and development, and it wasn't something we took action against. And again, in certain cases, we started working more collaboratively with companies like NVIDIA to understand where this was going to go. And we're talking about generative AI here, though. I know you said at the beginning that I was implying it was an issue for your business. But it is, isn't it?
It is an issue for all businesses that own copyright. I think it's something that you have to have strategies to understand and address. Clearly, we believe that as models are trained, they need to respect copyright, and they should be licensing the content used to train those models. They like to call it data; I like to call it content, because that's what it is.
And there shouldn't be things like what is currently being discussed here in the UK: government-sponsored exemptions from copyright that enable standalone models that are essentially only able to create imagery or only able to create music. Those models aren't really addressing the productivity and other items that you've referenced, and the exemptions allow them to go and compete in those markets with an unfair advantage.
And the creative industries and the media industries don't get to participate in that. What we hope for, and what we're seeking, is agency over how our content is used. If it's going to be used to create new technologies, even new competition, we should have a say about how our content is utilized, and not have government mandates and decision-making in that. And how has it affected the business over the last couple of years? It certainly means I do a lot more interviews like this. You were interested before, that's not...
And maybe not quite as much. We haven't really seen the technology impact our business. The Getty Images business is built on very unique content across the creative and editorial landscape. Our coverage in the editorial world is critical to the functioning of media across news, sport and entertainment. Our archive is incredibly rich in terms of the stories it enables us to tell.
And our creative library, which is north of 500 million images, is a small library relative to the universe of imagery. So there were 3 trillion images produced in 2024. Wow, 3 trillion. 2 trillion were produced via lens-based technologies; 1 trillion were produced via AI. We brought about 50 million new images to market last year.
And so we're a small drop in the overall universe of imagery. But what we do, and what we do very well, is enable the clients that use our imagery to engage end audiences in a way that cuts through the clutter of those 3 trillion images. And so our business has held up very well over the past couple of years, and we expect that to continue into the future. It still doesn't mean, though, that companies should be able to take our imagery and that of our partners and our contributors, and we represent over 600,000 photographers and videographers worldwide, and use it absent permission and absent a license.
You're very famously in a court case at the moment, suing Stability AI, a British AI business, for the alleged illegal use of your copyrighted material. Where are you on the case at the moment? Well, we're not as far along as we'd like to be. And it's actually one of the reasons I applaud the UK government's work to try to clarify the world of copyright. Now, we disagree on the copyright exemption issue.
But ultimately, I do appreciate the work to try to clarify because we launched that lawsuit now, you know, well over a year ago. And we don't expect to be in front of the judge until sometime later this year. But we're hopeful that it will set a precedent that will be useful not only for Getty Images, but for the creative and media industries as a whole.
And will reinforce the fact that ultimately you do need to have permission in order to train on
copyrighted content. And just to clarify, that case is happening both in the UK and in the US? Correct. And that is because there is some vagueness in terms of where Stability AI exists as a business and where that training may or may not have happened. Ultimately, we'll get the ability to diligence that as part of the trial. But yeah, that's why we had to do it in two jurisdictions. And obviously that's twice as much money. And we're fortunate as Getty Images that we have the ability to fund something like this, where individual creatives and individual members of the media maybe don't have those resources and can't take that on. But it should also be clear that Stability is only one of many companies that are doing this. And even we don't have the resources to pursue each and every company that has gone after and trained on our content absent permission.
Are there examples of AI companies that are using your content with permission? Have you started to sign licensing agreements in the way that some other content providers have? Yeah, and we certainly want to be constructive towards the development of these technologies. We are in a business that, over 30 years (Getty Images will celebrate its 30th anniversary this year), has been instrumental in allowing for the progression of the internet and social media. We figured out ways of working with those new innovations and technologies in order to bring content to them and facilitate their growth over time. And we do the same here. And I think most notably, I mentioned earlier that we started collaborating with NVIDIA.
Well, a bit over two years ago, we signed an agreement with NVIDIA to bring our intellectual property, so our content and the associated metadata, to the table. And this was only our creative content, so it excludes all of our news, sport and entertainment archives; it was our creative, released content that we brought to the table. We paired that up with their talent in the technology space, as well as their GPUs and processing power. And we built a jointly owned model. As we produce revenues from that model, we pay a percentage of those revenues back to the individuals whose content it was trained upon.
Because it wasn't scraped from the internet, it cannot produce deepfakes. It doesn't know who Taylor Swift is. It doesn't know who Donald Trump is. It can only produce safe, commercially sensible outputs.
And we think that's the right way to blend, and ultimately allow to coexist, AI, creativity, personal privacy, and intellectual property. But is the idea that it creates images? So, okay, it might not know who Taylor Swift is, but I could say I'm looking for a blonde pop star in a gold skirt playing at Wembley Stadium or something like that, and it will generate a picture. Well, it wouldn't know what Wembley Stadium is either. Okay. But yeah, it could produce that; it's just not going to be a deepfake facsimile of Taylor Swift. So you're going to have to be very, very descriptive, and you're still going to fall short. And is the idea then that you'll sell that to the likes of advertisers, for example? Yeah, we do. We launched it in October, just over a year and a half ago, and we make it available to our customers, and they pay for a service that not only gives them high-quality
generative outputs for advertising, but gives them safety in the legal sense. It can't produce third-party intellectual property. So we talked about Taylor Swift, but if you type in sneakers, it's not going to give you Nikes. And it knows and understands the complexities of intellectual property. For example, if you use an image of the Eiffel Tower at night, that is a violation of copyright, and you will likely receive a very nasty letter from the French government about it. So we understand those risks and requirements of third-party intellectual property, and the model can't produce those outputs. So yeah, we give a model to our customers that is fit for purpose for what they're trying to do, which is to connect again with those end audiences and get their brands out into the marketplace.
I didn't realize the French government had IP control of the Eiffel Tower. There's a lot. There are interesting things in our world. Tattoos are intellectual property and are copyrighted. And so if you replicate a tattoo, you can actually get sued; in fact, there's litigation right now involving LeBron James and one of his tattoo artists. So it's a complex world. And what we try to do as a company, again, I think we're really good at engaging audiences and allowing our customers to cut through the noise of all that other imagery, but we also are IP experts, and we bear that risk for our customers. Because we understand it, we can build imagery, services, etc. that help them manage that world, so you don't all need to be experts on the complexities of intellectual property and how it changes around the world.
When you look around the world at different jurisdictions, who do you think is landing in a sensible place when it comes to this new copyright world that we're looking at with AI and content providers? I think the EU has taken some positive steps within that.
And I think that the UK, again, the UK government, I applaud them, and specifically Minister Bryant, who I believe truly cares about the creative and media industries, for trying to tackle these issues. But there isn't a model that I can point to around the globe that's got it right. And again, in the case of the UK's current government dialogue and process, I think there are some things that need to be worked on. We don't need to undermine copyright; we need to embrace it. And we need to recognize that in the UK, the creative and media industries represent well over 100 billion pounds in annual GDP. And I would argue that is a small figure relative to the impact that those industries have on tourism, autos, fashion, and other UK industries that really project the UK's brand into the world and have a real knock-on effect. We shouldn't be undercutting those for some Faustian bargain of maybe somebody's going to invest a little bit in an AI sector within the UK. But we don't have a model that exists out there, and I applaud the UK government for at least getting into this debate. Because I do think there are solutions that can really help us navigate this into an ecosystem that works for all: AI, creative media, individuals and their rights, and ultimately intellectual property. When you talk about a Faustian pact, you think big tech's the devil, right?
I think in that analogy, yeah, you could say that, but I don't believe big tech is the devil. I think they have an important role in society, but society also has to put some level of guidelines around them and make sure that this technology is beneficial to society as a whole. There's an ethos that can be put forward by technology, which is that technology is good for society, full stop. And I don't think the research has proven that. I think what it's proven is that technology can be tremendously beneficial to society when society engages in a way that puts boundaries and constructs around it to make it so. And so I think they have an alternative point of view around this space. We take a different point of view, but ultimately I believe that we can get to the solutions.
Do you worry that the power and wealth of big tech might drown out the voices of the creative industries? I believe there's that potential. And that's why I think it's so important to engage in this conversation and ultimately arrive at solutions. Clearly, the resources that are amassed in technology are immense. And the political access that they have is immense.
And I believe that a world with technology and with creativity is the world that I want to live in. And so I believe that we have to find how we have both. Let's flip this around a bit. I wonder how you check that the images that are part of your library are not generated by AI. It's a really good question. It's one that, you know, this technology...
We talked about how we engaged to understand what it could create, how it could create it, and how we might work with NVIDIA and others to evolve it. But we also realized very quickly that this had the potential to create really inauthentic, fake imagery. And there isn't technology to catch that. I mean, this technology was released without any protections in place to identify it, right? So we kind of put it out into the world and opened Pandora's box, with no solutions to address those types of items. So we have to invest more resources in reviewing content, and we use technology, inclusive of AI, to do that. But it's an arms race, and there is no perfect technology. So we rely a lot on what we've always relied on in the world of editorial: know your sources. Source from trusted partners. Trust vetted journalists in the field, photojournalists and videojournalists. Maintain staff that you can train and deploy. And then make sure that you have really critical processes at each point along the way to catch problem imagery.
And, you know, that's going to be a challenge that is only going to increase going forward. Again, we're fortunate that as Getty Images we have the resources to invest in those types of things, to ensure that our product is reliable and that we can be a trusted source in the world. It's not that we're going to be 100% perfect.
The technology doesn't exist to do that. So there will be mistakes. I think the Kate Middleton photo was an example of that. It had been edited, and we caught that after the fact, and then we had to pull it. Oh, I hadn't realized that came through you. That came through us, and it was released out through a number of wire services. But yeah, we distributed it, and then we realized that it had been manipulated, and we had to pull it back. So we're prone to that technology being used in attempts to misrepresent, or not accurately represent, what's accurate from an editorial standpoint. And so have you had to up the number of fact-checkers that you have in place? How are you thinking about that extra work? We don't call them fact-checkers, because we're looking at pixels in photos and videos. But yes, we've had to fund them. What's the Getty term? At Getty, we call them reviewers. Right. Or editors.
And yeah, we've had to invest more, and we've had to invest a lot in technology. And that's just going to be an ongoing cat-and-mouse item that we're going to have to deal with. But I think the fundamentals still go back to trust in the individuals who are producing this content, making sure that that trust has been built up over time and vetted, and that you trust the processes they have put in place. Because ultimately, we view our business as a great business, but we also view our responsibility as significant, as we feed the world's media visual content from around the globe. It's interesting, because it seems like you're saying that places even more importance on people. It does. It does. I would like to think that there was some out-of-the-box computer AI program that would just tell me if things are fake or not fake.
But that isn't the case. And we've been advocating for technology standards that could potentially help on that. Most notably: when generative models provide outputs, they should create a watermarked version that can be stored in the cloud.
And that's something we'd like to see. We'd like every model, whether that's from Microsoft or OpenAI or Google or Mistral, and I can go down the line, to have a watermark that says: this was produced by this model, and it is AI-generated or AI-modified.
The problem up until now is that that has not been adopted across the technology landscape. And where it has been adopted, they aren't producing it from the model; they're relying on an individual to apply that credential.
And that is a big point of failure. Because as we know, not everybody in this world is incentivized to provide that, and there are malicious actors in this world who want to misrepresent, who want to misinform. And so when you put it on the individual rather than the model itself, it just leaves a gaping hole.
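To picture the model-side credential Peters is describing, here is a small sketch of a generator signing its own output at creation time, so provenance does not depend on the person publishing the image. This is an illustrative toy of our own, with made-up field names and a symmetric HMAC for simplicity; it is not C2PA or any vendor's actual scheme, and a real system would use public-key signatures so that anyone could verify without holding the secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the model provider (illustrative only).
PROVIDER_KEY = b"model-provider-secret-key"

def sign_output(image_bytes: bytes, model_id: str) -> dict:
    """Attach a provenance manifest inside the generation pipeline itself,
    rather than relying on the end user to add a credential afterwards."""
    manifest = {
        "model_id": model_id,
        "ai_generated": True,  # declares the output as AI-generated
        "timestamp": int(time.time()),
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches this exact image."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(image_bytes).hexdigest())
```

Because the signature is computed at generation time, the "individual applying the credential" failure point goes away, although a bad actor can still strip the manifest from a copy of the file, which is why Peters also wants the watermarked version stored in the cloud, where it can be looked up by the image's hash.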
We did ask Stability AI to comment on this, but we hadn't heard back in time for the podcast. They have said in previous statements that it forms no part of the training process for Stable Diffusion to memorize or reproduce individual images from the training dataset, and that, as a result, neither Stable Diffusion nor any individual element of the model holds a copy of any of the training content. So there we go. Fascinating. I love the nomenclature he used: content versus data. When he was like, we call it content, they call it data.
I think we're all going to be seen, or are already seen, by these big AI model developers as, I'll call it, ADPUs. I just came up with this. Go on. Autonomous data production units. We're all just here to generate stuff that is fodder for the models. That's what this podcast is, isn't it? But you know what I mean? We're just data creation units for these models.
That is the way these companies view everything that can be fed into their systems. And I think you have companies like Getty, and other human creators, saying: wait, wait, wait, no, we're not just autonomous data production units. We actually are creating stuff. We need to be valued, and we need to be paid for it. I'm a real boy. I'm a real person. But this point about language is so important. I remember having lunch with someone from Universal Music last year who was getting really exercised about the word training. It's like, we shouldn't be using the word training; we should be using the word scraping, because that's what they're doing. And it's the same, isn't it, with this data-versus-content thing: they're complete opposite ends of the spectrum in terms of how it's perceived. If you're training a model, it sounds very benevolent and very gentle. Scraping the internet sounds horrible; it sounds like you're taking things. Yeah, it sounds like you're pillaging. But the other thing is that when these new technologies arrive, it requires a whole new kind of vocabulary, like when he was talking about lens-based technologies. Yeah.
There were 3 trillion images generated last year, and 2 trillion were from lens-based technologies and the rest were from these AI models. Who would have thought even two years ago that we'd be talking like that? It sounds like a ridiculous way to describe this. But these things are kind of forced upon us: if you have a machine that can just magic up a photorealistic image or video, then you need to figure out how to talk about it in a new way. Yeah.
Have you been sued by the French government for any number of pictures you've taken in front of the Eiffel Tower and put up on Facebook? I'm never going to do that again. I was like, oh, gosh. Bon Dieu. Gotta hide those pictures away. You could just say, oh, it's just intelligence artificielle. Yeah, who knew? The Eiffel Tower, tattoos, all these things. I do feel like there's going to be some kind of middle ground to be found here. So in the UK, we had Peter Kyle on recently, who's the tech minister. Yes.
And he was explaining to us that there's this consultation out at the moment on proposals that would allow AI companies to use material that's online, the scraping, the training, without respecting copyright, but creators would have a rights reservation. So basically, they can say: we don't want you to use our stuff. That's what's going on over here. It's all up for consultation, which means everyone's now writing in with their thoughts, and it's still up in the air. What's going on over in the US?
Not much. I mean, mostly it's playing out in the courts. It's kind of full steam ahead. And I think I've mentioned this before: it reminds me so much of Napster. This tech arrives, disrupts everything, it's full steam ahead, and the legal wheels start to grind forward, but just super slowly. So I think there will end up being a new model. There will be some type of royalty system. But I think it's going to take years for that to settle out. And then the question is, when that settles out, what is the value of that content, or data, depending on which way you're thinking about it? What is that going to be? Is it going to be that kind of Spotify version of fractions of a cent for a beautiful picture that was taken on a mountaintop?
Or is it going to be worth more than that? So I think that's where we're going. It's just going to be very slow, at least out here. And not surprisingly, it's all playing out in the courts. Yeah, well, Europe, as usual, has been quicker to act on all of this and has got an AI act in place. So at the moment, AI companies have to say what copyrighted material is used in their training data and allow copyright holders to object to the use of their works. So basically, they're putting the burden back onto the tech companies, which fits with the European model, really, of how they're treating tech at the moment, and is really quite at odds with what they're doing in other parts of the world, like in Japan. So when I interviewed Satya Nadella last year, I asked him where he sat on this conundrum. And unsurprisingly, he said he was in favour of the Japanese model. Always in history...
Whenever there has been a transformative technology, ultimately a framework of law has been established on what fair use of transformation looks like, because otherwise there will be no new innovation. And that has to be deliberated. It can't be free-riding. So you can't have copyright infringement. But if it's just transformative, it's like, hey, I read a set of textbooks and I create new knowledge
having read it, is that fair use of having read it or not? Like that is the level at which you have to think about these LLMs, right? So they're not regurgitating. Regurgitation would be copyright infringement, right? So the key is to be able to understand what is real copyright infringement. So for example, if we are working with a news organization and we want to show citations or we want to be able to ground an LLM with actual facts that are output, that's where the deals are.
And then at some level, we have to also come up with, hey, if somebody is just using it to do transformation, then what are the bounds of fair use? And so I think that this is all where more than any given company, I think ultimately there needs to be a framework of law. And by the way, the interesting thing is different countries seem to be also taking a different view. I was very...
You know, quite frankly, delighted to see what Japan is doing. I think the calculus there seems to be that, okay, having perhaps gotten behind in software, they want to make sure they lead when it comes to this next phase. And so they have really said that the use of content to create a new generation of models falls under fair use. So they're falling on the tech companies' side of things. I don't know whether it's tech companies, right? What is a tech company, in another form? Like, is a media... The next big... Well, the companies that are using the data to power the models. But it could be a media company, right? So this is where...
It's not just us, right? So it's like a media company that has a lot of content can in fact parlay that into some models, right? So therefore there's multiple ways you can go at it, right? So it's not any one company that has the ability to use data. It's more what's the bounds for copyright, which obviously have to be protected. What's fair use?
For any society to move forward, you need to know what is fair use and what's copyright. If everything is just copyright, then I shouldn't be reading textbooks and learning, because that would be copyright infringement. There you go, Danny. Don't read books. Yeah. But anyway, it's a brave new world. And of course they'll be covering that; this will be one of those things at the AI Action Summit. Da-da, da-da, da-da.
Everyone's coming out swinging, aren't they? You heard that from Nadella there and the boss of Getty. Everyone's trying to forge the new rules. But if the last 20 years have been any guide, I feel like I know who's going to win. I feel like I know which way this is going to go, but we'll see. The party's in Paris. Anything could happen. That's true. David beat Goliath. Yeah, yeah. He did. Once.
We have an email address: [email protected]. Send us your thoughts, suggestions, even your criticisms. We can take it. [email protected]. Thank you for listening, and I'll see you next week, Katie. See you next week.