
Artificial intelligence, intellectual property and the creative industries

2025/3/4

LSE: Public lectures and events

Topics
Professor Martin Kretschmer: AI technology and its impact on copyright is a complex problem that cannot simply be reduced to an opposition between "big tech" and "the creative industries", or to a binary choice between "opt-in" and "opt-out". The AI lifecycle includes stages of data collection, organization, training, deployment and feedback, each of which may raise copyright issues. Current legislative proposals, such as the UK government's, attempt to resolve this by linking obligations from AI legislation with exceptions in copyright law, but this approach is complex and has limitations. Moreover, existing licensing agreements show that the large technology companies do not need legislation to obtain data, while newer AI companies depend more heavily on licensing deals. Finally, the long-running trend of digitization has driven down the incomes of artists and literary authors, and a simple "opt-in" or "opt-out" mechanism will not solve that problem.

Professor Tanya Aplin: The UK IPO's proposals on AI training and copyright fail to differentiate between types of AI technology, focusing excessively on generative AI and overlooking other applications. The existing section 29A of the CDPA is not fit for purpose and needs improvement. A pure licensing model is also undesirable, because it is too broad in scope and takes no account of non-expressive reproductions, or of expressive reproductions we may wish to encourage. The proposed "opt-out" mechanism is complex and unworkable, and may breach international copyright law. A better approach would be to improve section 29A: extend it to commercial as well as non-commercial scientific research, allow copies to be made and transferred, and extend it to the database right.

Dr Luke McDonagh: The disruption AI technology causes to the creative industries should be considered separately from the intellectual property questions; the focus should be on performers' rights over their voice, image and persona. Although the UK has no explicit personality or image right, existing law (performers' rights, data protection law and unfair competition, i.e. passing off) can offer some protection. However, these laws are limited in scope and may not adequately protect performers against AI technologies. Careful thought is therefore needed about whether new statute is required to fill these gaps, weighed against the implications for freedom of expression.

Professor Madhavi Sunder: Existing US right-of-publicity law, especially in California, already offers celebrities some protection against digital replicas, but gaps remain. For example, there is no federal right of publicity, and the scope of protection and of liability varies between states. Some states' laws protect only celebrities' commercial interests, not those of ordinary people. To address these gaps, the US is considering new federal legislation, such as the No AI FRAUD Act and the NO FAKES Act, to strengthen protection against digital replicas. In drafting such laws, however, free-speech considerations must be weighed, and digital replicas are not wholly harmful: they can also carry positive social and cultural meaning.


Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Good evening. So a very warm welcome to tonight's panel discussion on artificial intelligence, intellectual property and the creative industries.

Welcome to those of you who are here with us in person in the Sheikh Zayed Theater, and if you are one of the 70 or so joining us online, welcome. So before I introduce the stellar panel that we have tonight, let me introduce myself. I'm Dr. Siva Thambisetty, an associate professor of law at the Law School here at the LSE. This event is organized by the LSE, and it's my pleasure to chair it tonight. Before I go into the introductions of our panelists, a few housekeeping points. Please make a note of your closest fire exit.

Keep your phones on silent, please. But if you wish to tweet or post on Bluesky, whichever is your favorite method of doomscrolling, please do so. We have some hashtags we recommend you use: LSEEvents, all one word, LSELaw or LSEAI. So please do tweet during the event.

So we will be attempting to produce a podcast from tonight's event, so please do look out for the recording. If you've registered online, the podcast, when it is available, should be sent to your email. So over the last 60 to 70 years, we've seen several unprecedented shifts and technological revolutions: synthetic chemistry, genetic engineering, genome sequencing, software, computer-implemented services and products, developments in telecommunications. And at the forefront of all of these unprecedented shifts have been intellectual property rights, both from the perspective of individual rights and from the perspective of economic and strategic priorities.

So it gives me great pleasure to introduce our panelists this evening who together bring a great deal of international experience and expertise to the topic. So I'll introduce them in the order in which they will speak.

Professor Martin Kretschmer is Professor of Intellectual Property and Director of the CREATE Centre at Glasgow University. CREATE is the Centre for Regulation of the Creative Economy. Martin has been with us as a visiting fellow in the autumn term, and I know that some of the work he did during his time with us was to respond to the UK government's proposals for AI, so I hope you'll be hearing some of that from Professor Kretschmer. He will be followed by Professor Tanya Aplin, who is Professor of Intellectual Property at the Dickson Poon School of Law and currently Director of the King's Postgraduate Diploma in UK, EU and US Copyright Law. Professor Kretschmer and Professor Aplin will speak about copyright first.

Our third speaker today is our very own Luke McDonagh, my colleague and an associate professor of law here at the Law School. Luke's work straddles IP and public law, and he brings a global dimension to his work in IP. He will be followed by Professor Madhavi Sunder, who is the Frank Sherry Professor of Intellectual Property Rights at Georgetown Law. Her work traverses equity and developing-country interests, and brings a global dimension to the study of IP. She is currently Associate Dean of Graduate and International Programs at Georgetown Law. And Genevieve, our PhD student, will be handling the questions online.

So please have your questions ready. We hope to have at least 30 to 40 minutes of discussion at the end. So with that, Martin, perhaps you can kick us off.

Thank you. Well, good evening. Thank you very much for having me here. And I probably have the opening gambit because, as Siva already said, we did submit a rather lengthy submission to the government consultation. You will all know, and that's probably why you're here, that last week on Tuesday the consultation closed, and a lot of money was spent on that day. I may show you a couple of slides on that. But my attempt today really is to complicate the debate. I want, in some ways, to tone down the temperature. I read today somebody saying that the UK is facing a cultural genocide.

In some ways, the AI question has been hijacked by Gen AI, by issues which relate to copyright, and less by data protection, though those should be in play as well. And in many ways it has been skewed, it has been shaped into a kind of alternative. Either it's big tech or it's the creative industries and the creators. Either it's opt-in or it's opt-out. And in some ways I want to indicate that that worldview is problematic.

And because the world is complicated, I start with a complicated slide. I won't go through the detail here, but AI is a very complicated technology. The underlying lifecycle methodology is in some ways old; it hasn't changed for 10 years. We have these different phases, which all have some copyright implications: the phase of collecting the data set, the phase of creating and organizing it, the phase of training, the phase of deploying, and the feedback phase, with user data or linking it back to the original data set.
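The lifecycle described above (collection, organization, training, deployment, feedback) can be sketched as a pipeline of stage functions. This is a purely illustrative Python sketch: the function names and the toy word-count "model" are invented for exposition and do not correspond to any real training framework.

```python
# Illustrative AI lifecycle pipeline. Each function stands in for one of
# the stages discussed above; all names and the toy "model" are invented.

def collect(sources):
    """Collection stage: gather raw documents (reproductions can occur here)."""
    return [doc for src in sources for doc in src]

def organise(raw_docs):
    """Organization stage: deduplicate and order the corpus before training."""
    return sorted(set(raw_docs))

def train(corpus):
    """Training stage: stand-in for model training (here, word frequencies)."""
    model = {}
    for doc in corpus:
        for word in doc.split():
            model[word] = model.get(word, 0) + 1
    return model

def deploy(model, prompt):
    """Deployment stage: stand-in for inference over the trained model."""
    return [w for w in prompt.split() if w in model]

def feedback(corpus, user_data):
    """Feedback stage: fold user data back into the data set for the next cycle."""
    return corpus + user_data
```

Each stage is a point where copies may be made, which is why, as the talk notes, copyright can attach at every step of the loop.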

All this is not new at all. I looked through modules by IBM 10 years ago. It's structured exactly in the same way. So the lifecycle aspect and understanding of training is not new at all. What's happening in the training box is changing. But also what has changed is that in November 2022,

something was released to the public, a chatbot, and suddenly everybody could see that there were implications. So reproductions are taking place at many of these stages, and if reproductions are taking place, copyright in many ways always has a hand in it. That then opens the door for the legislator to try to steer the development of the technology. And that's the stage where we are at the moment. So just looking here very briefly.

This is the EU system, but the proposal for the UK is more or less to match the EU system. And in the EU, the new scheme links obligations at the scraping or collecting stage, which come from the AI Act, with exceptions which come from the Directive on Copyright in the Digital Single Market. So you have obligations to provide sufficiently detailed summaries coming from the AI Act, but from the copyright directive you have got what is now often called the opt-out. It's the rights reservation: certain activities under the exceptions are only possible if the rights have not been reserved.

So there's a clear exception for research, but a second one for everybody, which is only available if you have no right reservation. And then at the training stage, there's a lot of discussion here whether copyright law is really involved. We know here that reproductions are happening. So there may be copyright-relevant reproductions or not, but they are happening. But at the training stage,

There is a lot of debate about it. What is buried in the model, in the vectors and the model weights? Is there somewhere a work in there? Can I resurrect it by certain prompts? That's under discussion, and the technologies are changing there. And then at the deployment stage, we have more or less normal copyright principles. You have got outputs, you produce works, you produce new materials, and then you have got the questions which copyright lawyers deal with all the time. Is there substantial similarity? Is there a potential infringement?

Okay, so that's a big series of things, but I think it helps hopefully for everybody. So UK consultation.

This is the proposal by the government. So it's the preferred option here, which is put not that subtly. Do you agree that option three is most likely to meet the objectives? And the option three is a data mining exception which allows right holders to reserve their rights supported by transparency measures. So if you go back to how it is in the EU, essentially it says...

We have here something like that, an opt-out, and we have here transparency measures which are similar to a detailed summary of the input. And then there are four options: the zero option, no change in the law, and three others. And I'm sure we will come back to that in the discussion.

Our research center at the University of Glasgow has made proposals on how to take this forward. We reject all of these options, and some of my co-authors are in the room. I don't want to embarrass them all, but Bartolomeo is here, Lionel is here. There may be more. Luke? Who knows? There may be more in the audience.

We will certainly come back to the proposals we made as an independent research center at some point during the discussion. Okay, so why did the creative industries explode on this issue and why is, as I said already, quite a lot of money being spent on preventing this change in the law?

So those are, in some ways, the questions I want to address in the next few minutes. What's in it here, and for whom? Why has it been turned into an opt-in versus opt-out question? Why has it been turned into a kind of potential meltdown for the creative industries? In order to get a bit of an empirical handle on this, we've bothered to

analyze the licensing agreements which are already happening at a global level. So regardless of the legal system under which AI training operates at the moment, we're seeing licensing deals. The licensing deals did not start, you know, 10 years ago when the lifecycle was established. They started after November 2022. And more specifically, you can see here that it really starts to take off in 2024. I link this to the adoption of the AI Act in the EU, when the opt-out rights reservation became more of a concrete issue, but at the same time also to the sharp increase in litigation in the U.S. So there's legal uncertainty in the U.S., and in the EU we know that a rights reservation is coming. It has been in place since the implementation date of the copyright directive, which was June 2021. But with the adoption of the AI Act and the publication of that, we will now have a code of practice, which comes into force very likely in August this year. So we know that these two pieces of legislation will be linked, and there's great legal uncertainty. So against this background, we have the UK as a sovereign state in a global environment, with digital technology which can cross borders. In that situation, we've moved to a world where there are licensing deals left, right and center. So what kind of licensing deals do we have here between the content providers and the AI companies?

So here are the biggest licensees, which are OpenAI and Perplexity. And to me, unsurprisingly, what we know as the big tech companies are active, but not that active, because Google doesn't really need the data. It has the data. You clicked on the terms of service of YouTube; most of you have.

Google has Google Books. So they are really not in need of massive amounts of new data. And so the companies which need the data are the new kids on the block. These are the licensors. So these are content companies which have made license deals already. Familiar names in there.

You will have heard of some in the press. You may have heard of deals between OpenAI and social media companies. You may have heard of Reuters and Meta, Universal and Google. There are certain names you will recognize, but there are also quite a few that are more obscure. But if you look at them by sector, it's interesting that news media dominates.

The explanation for that probably is the integration of large language models into search. So the last bit of my graph, the feedback loop from the deployed model into the original data set, they play potentially an important function there.

But there may be other reasons, and we need to know more about these things before we adopt a particular legislative approach. The most money, and this is quantified by the number of deals, has gone into images: that's mostly image libraries, so kind of aggregators. But there are also social media deals and music deals. Okay, so,

What I've said so far is essentially big tech doesn't really need the choice between opt-in or opt-out. It will probably make very little difference because they have what they need.

At the same time, there is a general trend towards licensing. And the general trend towards licensing comes from legal uncertainty, but also from the need to get access to quality data. Both of these things drive the licensing economy. So let's go to the primary creators: the authors, the performers, the artists. In the current heightened atmosphere, they are at the front of the public discussion about this. These are the people who you see in the campaigns. And underpinning it is a quite disturbing trend in their income profile and earnings.

So you can see that over the last 20 years of digitization, this is data from repeat surveys which we have been conducting, there has been pretty much a halving of the income of artists and of literary authors over the last 20 years. And that's pre-AI.

The reasons for that are probably oversupply, because digital technology lowers the barrier to entry to the market, so you have many more people who want to enter it. But you have also got the platform economy, so the bargaining position with the intermediaries has shifted.

If that is the background and you have AI as a new technology, you can predict that this trend will keep going. So the professional career options for an illustrator or a translator are not great now.

But does that mean that offering the choice between opt-in and opt-out will improve their earnings? This story suggests it won't. So if you want some of this licensing money channeled to the creator, you have to do something else. You have to find something other than running with the opt-in model we have had for the last 20 years.

Okay, so that's sufficient, really, to indicate that maybe the story is more complicated than has been offered. What the solutions might be, we will discuss. Thank you. Thank you.

So thanks to Martin for that fantastic kind of setup of the copyright issues. And what I'm going to do is to limit my remarks to the proposed options in the UK IPO consultation that are relevant to AI training and copyright.

And as many of you know, the UK IPO frames a successful approach in this space as one that encourages more AI model training to take place in the UK and enables rights under UK copyright law to be respected.

Now what I find interesting about this stated policy goal, and indeed the entire UK IPO consultation document, is that it doesn't differentiate between the types of AI development that the government might want to encourage, or to which it wants UK users to have access.

The assumption is that all AI technology is equally desirable here, whether it's machine learning for scientific purposes, generative AI models such as ChatGPT, or whether it's image recognition or moderation, which might be used in medical diagnostics or facial recognition or autonomous vehicles.

And so I agree that AI and in particular generative AI is here to stay, but we should at the very least be thinking about which types of AI we want to actively encourage or facilitate through copyright regulation to the extent that copyright will have an impact on that. Or which types of developers we want to facilitate.

So in other words, I think the policy here, as Martin says, has somewhat been hijacked by generative AI. Policy on potential reforms should appreciate that there are multiple applications, uses and opportunities for AI that go beyond generative AI, and should not allow the debate to be transfixed solely by generative AI, by the newcomers on the block, or by commercial developers. So with those initial remarks in mind, I wanted to talk about the options that the UK IPO sets out. The first option is option zero, which is to do nothing.

In and of itself, if you say do nothing, that doesn't seem very appealing because it seems like we're sort of static and not reflective or dynamic. But aside from that, I would agree with the UK IPO that doing nothing is undesirable here, that we need to do something. But the question is what that something is.

And one area that's clearly not fit for purpose and hasn't been fit for purpose for some time is section 29A of the CDPA, which is our current text and data mining exception. So that is an area where we need to do something. But in terms of generative AI developers,

I would argue here that actually doing nothing might be fine, because any legal risks or uncertainties in the UK landscape, to do with transient reproductions or with what counts as a reproduction where there is a non-expressive reproduction, are already manifesting in risk-reducing behaviors, i.e. licensing. And the same goes for the risks or legal uncertainties that are apparent in terms of fair use in the US, and also in terms of the EU scheme. So doing nothing across the board is not an option, but here I think the focus should be on section 29A. So option one that the UK IPO sets out is to require licensing in all cases: to leave this entirely to the market and to not have any exceptions or other interventions.

And here I think that this option is unwise. Firstly, it frames the reproduction right far too broadly and doesn't take account of the fact that there are non-expressive reproductions that are made in the AI sphere or that there might be expressive reproductions that we want to encourage. So in particular, if we took a purely licensing approach, we would be getting rid of section 29A or something similar to this.

And this seems like a step too far. So, here I would advocate that there is space for exceptions for text and data mining, although we need to think very carefully about the purposes and the beneficiaries of this exception. Which then leads into the UK IPO's option two, which is to have a broad data mining exception.

And this could mean, for example, revisiting the proposal from 2022 that the UK IPO floated in which text and data mining was proposed to be permissible for any purpose provided there is lawful access. So that's one option for a broad data mining exception. Or according to the UK IPO, it could mean the introduction of a fair use exception or something similar.

Now in terms of a broad text and data mining exception like the one that was proposed in 2022, I have a difficulty with this, because it is framed around any purpose. The fact that the proposed exception allowed text and data mining for any purpose would arguably be contrary to international copyright law, in particular the three-step test. But moreover, it would place too much pressure on lawful access as the constraining requirement,

and it's not clear to me that lawful access can take the burden of that limitation. So I think that sort of previous incarnation of a proposal in 2022 is not the one that we should be adopting here.

As to whether we should be introducing a fair use exception, this of course is a question that has been discussed before in the UK, in particular in 2011 in the Hargreaves report. And at the time, Professor Hargreaves in his report was not in favor of introducing a fair use exception.

And in part those reasons were in response to the consultation which saw the problem or potential problem of legal uncertainty that went with having a fair use exception and the possibility of that only being resolved through litigation and potentially large amounts of litigation.

But also, at the time we were still part of the EU, and that policy choice wasn't really open to us because of the Information Society Directive and the way in which its Article 5 determines the architecture of a large part of the copyright exceptions. Given that we no longer have those constraints, it is a policy option worth revisiting. And we can see, for example, in Australia a very comprehensive assessment of this in the ALRC report from way back in 2013. But as much as I would support revisiting this debate, and as much as I think there is a lot to be said for introducing a fair use exception, this is not the occasion to do so.

It would need a wider consultation that goes beyond the issues raised by AI regarding the whole structure and scope of UK exceptions. And it would require engagement with a wider set of stakeholders and users than will have been involved in this particular consultation.

So yes, I think it's an important policy debate to have, but this doesn't seem like the right time or the right forum in which to make such a wholesale change to the copyright exceptions framework in the United Kingdom.

So that then brings us to option three, which is very clearly the preferred option of the UK IPO. And that is to have a data mining exception, which has a requirement of lawful access and which allows right holders to reserve their rights

and that this would also be underpinned by transparency obligations on AI developers. And this, according to the UK IPO, would seem to meet our objectives of control, access and transparency, and the balance between ensuring that there's encouragement of AI development and also respect for copyright.

The difficulty with this proposal is that the opt-out mechanism, the rights reservation mechanism, has a level of complexity and unworkability and a lack of nuance. This has been well documented in the literature, including in the CREATE paper. And so the inability to actually use rights reservation properly creates an obstacle.

There are obstacles in terms of how it's operationalized in practice and whether there's a technological standard that can assist with operationalizing. There are issues as to the timing of opt-outs, when they can occur, and the level of granularity of opt-outs in terms of location at the site level or at the level of the work, and also complex jurisdictional questions.
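One candidate technological standard for operationalizing site-level opt-outs is a robots.txt-style machine-readable policy. The sketch below, using Python's standard `urllib.robotparser`, shows only the simplest case: the crawler name "ExampleAIBot" and the policy text are hypothetical, and a file like this cannot express the work-level granularity, timing or jurisdictional questions just mentioned.

```python
# Minimal sketch of a robots.txt-style rights reservation check.
# The crawler name and policy below are hypothetical examples.

from urllib.robotparser import RobotFileParser

POLICY = """\
User-agent: ExampleAIBot
Disallow: /articles/
User-agent: *
Allow: /
"""

def may_mine(url: str, crawler: str = "ExampleAIBot") -> bool:
    """Return True if this (hypothetical) policy does not reserve rights
    against the named crawler for the given URL."""
    parser = RobotFileParser()
    parser.parse(POLICY.splitlines())
    return parser.can_fetch(crawler, url)
```

Even where such a file exists, a reservation only works if miners discover and honor it, which is exactly the enforcement gap the commentary on opt-outs worries about.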

So this idea of opt-out being workable, I think, has been challenged by a considerable number of commentators and scholars. And so I don't think we should be putting too much faith in that system. As well, I think there's a real concern about whether relying on such an extensive opt-out mechanism is contrary to international copyright law in terms of

conflicting with the spirit of Article 5(2) of the Berne Convention and the requirement that the enjoyment and exercise of rights must not be subject to any formality. Okay, so where does that leave us? Now, for completeness' sake, I'm going to mention a couple of other options which the UK IPO didn't mention but which you see reflected in the literature. One of those options is to create a statutory license.

And commentators such as Christophe Geiger, amongst others, have suggested this. They say that what we should actually do is create a system whereby AI developers are entitled to copy copyright works for the purposes of training their models, but in return there should be a general payment obligation for this use.

And according to Professor Geiger, the advantage of this system is that it would ensure that the copyright issues raised by generative AI are dealt with in a manner compliant with fundamental rights.

And in terms of the remuneration that would be paid, here the authors suggest that actually you could look at proportionate and appropriate remuneration looking at the actual potential economic value of the work. Of course, this kind of system would need to go hand in hand with transparency measures and there would be challenges around the way in which remuneration or the value of the work or works would be calculated.

But I think more problematic from my point of view is that actually this type of solution arguably goes too far and essentially converts the ability to control the training of AI models on copyright works into a right of remuneration.

for any type of AI use. And given that there are licensing models emerging, this seems to be too extreme a solution. So another solution that has been suggested by scholars, in particular Professor Martin Senftleben of the University of Amsterdam, is to move away from a statutory licensing model and instead to adopt a levy system. And here his suggestion is that we would apply levies to the AI tools themselves.

And he sees several advantages to this. One is that levies can be applied in a uniform manner to generative AI systems, and that we wouldn't have the questions of trust and transparency about what a model had been trained on. We would just be applying a levy to the use of those AI tools, and we could do that according to a lump sum or a percentage of revenue. And we could trust, according to Professor Senftleben, collective rights management organizations to collect and distribute these levies in a way that is beneficial to individual creators.
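The percentage-of-revenue variant of such a levy can be illustrated with a toy calculation. The 2% rate and the usage shares below are invented for illustration only; Senftleben's proposal does not fix particular figures, and real CMO distribution rules are considerably more involved.

```python
# Toy levy model: a uniform percentage-of-revenue levy on an AI tool,
# distributed pro rata by a collecting society. All figures are invented.

def collect_levy(revenue: float, rate: float = 0.02) -> float:
    """Levy owed on a tool's revenue at a uniform (hypothetical) rate."""
    return revenue * rate

def distribute(levy_pool: float, usage_shares: dict) -> dict:
    """Pro-rata distribution of the pool across rights holders by share."""
    total = sum(usage_shares.values())
    return {name: levy_pool * share / total
            for name, share in usage_shares.items()}
```

Note that the distribution step still needs per-rights-holder usage shares, so some of the transparency questions return at the payout stage.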

This puts a huge amount of faith in the levy system and I'm very skeptical. I think it's too much faith in the levy system and I'm basing this on the experience that we've had with levies in terms of private copying. I realize private copying is a different sort of use case but if we look at private copying we see that there have been a slew of Court of Justice references and rulings which deal with the private copying levy system in the EU.

Which products? Who pays: the manufacturer, the importer, the retailer? What should the levy levels be? Can there be variations between member states? What are the cross-border implications of levies? And how are they distributed by CMOs? So we've not had a great experience so far, and arguably this would be even more complicated in the case of applying levies to AI tools on the market.

So this brings me to my preferred option, and this happens to coincide, I think (I don't want to misrepresent it), with the preferred option of CREATE and some of the other submissions from academics that I've read. And that is that the focus of attention should be on improving section 29A. There is a case for having an exception,

but we should be focusing on an exception that is going to enhance scientific research using AI. The problem with Section 29A at the moment is that whilst it's available to everyone, it is restricted to non-commercial research purposes. It's also restricted to only making copies and doesn't allow for the transfer of those copies and therefore the knowledge exchange that might come from that activity.

There are also some concerns about the ability to override the exception using the lawful access requirement and also TPMs. And this exception doesn't apply to the database right, which is an important omission. So the focus could be on amending section 29A to widen the purposes to both commercial and non-commercial scientific research, to allow both the making and the transferring of copies, and to extend it to the database right. Of course, the challenge here is what will constitute scientific research purposes, whether we could be mired in legal uncertainty as a result, or whether there could be some kind of workaround by beneficiaries that we did not intend to benefit from this particular exception.

But that is something worth discussing, and it's something that the CREATE paper raises. One of their suggestions is that we can look at the fact that in other IP rights there are exceptions for experimental use which do envisage that there might ultimately be a commercial application or purpose in mind.

I would say that we should also look further, at some of the reverse engineering exceptions: looking at software, for example, or looking way back at integrated circuits. There you see a clear policy goal of being able to learn from the intangible work, but also of ensuring that competition downstream is not affected. And one of the tools that integrated circuits legislation has used is to ensure that any resulting product is itself original, and in the case of software, to ensure that any resulting product is not itself substantially similar. I'm not saying that precise terminology would work, but what I think it does show

is that we do have an example of exceptions in the past where it envisages both commercial and non-commercial research and where it's cabined by sort of forward design requirements, if you like. So the challenge here is to think about how we could build that into a reformed Section 29A. And I'll leave my remarks there. Thank you.

Good evening, everybody. I'm Dr. Luke McDonagh, and I'm going to begin this evening by saying, in a similar manner to what we heard from Professor Kretschmer, that we should separate the issue of the disruption caused by the new technologies of artificial intelligence and generative AI from the intellectual property issues.

The issue of automation disrupting the labor market, destroying areas of practice, and leading to the de-skilling of certain areas goes all the way back to the Jacquard loom and the effects that that disruption had on weavers. And we've seen that effect happen time and time again since the Industrial Revolution began. Right now, there is a strike going on in the labor market by video game voice actors. You can use generative AI technologies to create a podcast

at the touch of a button. It's very likely that these disruptions will change the creative industries profoundly. They are already doing so. Actors are already reporting a drop in work for voiceover activities.

This is, to me, a separate question from whether you, as an actor, as a performer, should be able to own your voice, your image, your persona. And it's to that issue I now turn.

I'm sure that you will have been aware of the cases that have come up, maybe not in court yet, but there have been a lot of famous cases that have played out in the media. Famously, OpenAI used Scarlett Johansson's voice for their AI tool without her permission. They eventually backed down once she raised the possibility of legal proceedings against them.

The universal national treasure of Britain is David Attenborough. His voice has been cloned, something that he describes as disturbing. There is also, of course, the recent Oscar-winning performance by Adrien Brody, where his voice was altered by AI to correct the mistakes in his Hungarian accent. How that isn't considered part of his performance in this context is very interesting for us to ponder. It's a very profound question for performance studies. If you're going back to correct the performance of the actor in post-production in this manner, it raises the question of how much else can be corrected.

As a very proud Irish person, I'm hoping this may be the end of the crap Irish accent in Hollywood movies. But it remains to be seen whether that promise will be borne out. In the UK, we do not have a specific right of personality or of persona. Instead, what we have is a patchwork

of different protections that may offer some help to the Scarlett Johanssons of the world in a situation like that, or the David Attenboroughs who consider that their voice has been taken, generated and used in a commercial context without their permission. There is of course the law on performers' rights, though it is very limited in scope in this context.

If the specific performance is not used, it might be hard to enforce a performer's right over your voice, your image in general. That is something that we haven't seen happen in the law as yet.

There is also the potential for some data protection rights here under the GDPR and its successor statutes in the UK. But again, it is limited in scope, and the application of the GDPR in practice has been quite limited; it hasn't had the same bite that perhaps people thought it would. Perhaps the most promising area to look at here would be the law of passing off.

There are three key elements, as many of you in the audience are aware: the idea that you should have some goodwill, reputation and so on attached to you; that there would be a misrepresentation, usually of your business name. But in principle, there's no reason in my mind why that couldn't be applied to something like a persona, an image, a voice.

And of course then there has to be harm. So if we look at cases like the Irvine case, the Fenty case, which of course involved the singer Rihanna, though not about her voice but about her image on a t-shirt by the high street retailer Topshop.

We can probably read into the law of passing off the possibility that, if the Scarlett Johansson circumstances arose in the UK, she might have been able to argue that the law should be extended to meet her in those circumstances.

One of the reasons why I think the law of passing off could be useful to a famous actor like Scarlett Johansson in that situation is that the common law does care about fairness. And although it might require some stretching of the principles of the law of passing off, it would seemingly be the fair thing to do, in my view. Nonetheless, the law of passing off does not fit the bill entirely. If you were an actor of lesser fame

than Scarlett Johansson, you may not meet the standard of goodwill. And there may be no misrepresentation: in some of the cases involving David Attenborough's voice, the users who've uploaded videos of nature with his AI clone voice narrating them have been quite open about the fact that it is not the real David Attenborough. So that is not a misrepresentation, as we would normally understand it.

And so this brings us to the key question that the consultation did not ask about in great detail, but it did allude to, which is, should we consider a new statutory right to fill in these gaps in the law? And my view is that we should be quite cautious in doing that. At the moment in the UK, there are established practices

to deal with things that we would consider to be replicas or digital replicas of people who would certainly be entitled to a certain level of protection of their persona under some sort of right of personality or right of publicity.

In the past few years we were treated not only to the real thing, in terms of the Emily Maitlis Prince Andrew interview, but also to two separate dramas about the circumstances of the interview and the aftermath. If we were to proceed with creating a new statutory right, could the scope of that right give a celebrity

the right to veto a documentary or in particular a drama that would involve an actor replicating them on screen, whether through AI or not. The scope of the right would need a lot of debate and discussion if we were going to limit it purely to the AI context.

And in situations such as Adrien Brody's, where would we draw the line between an actor that is aided by AI in their replication of somebody's accent and a completely AI-generated clone? We would need to think very carefully about the duration of this right. In certain jurisdictions, such as France, where there are some personality rights,

These can work on a perpetual basis. They can last even after the economic rights in the works that that author created have expired. Would this right be alienable? Would we allow people to sell or waive this right? If so, it's quite likely a lot of actors, a lot of performers would waive the right without too much thought. A lot of

authors and performers waive their moral rights under contract in the UK at present. So there are a lot of questions that would need to be answered before we go down the road of creating a new statutory right. We would need to look at jurisdictions like France and of course the United States that have more experience and case law on this issue.

And that allows me to turn us now towards the United States and my colleague Madhavi, who's going to explain the current moves there to create a similar publicity right at the federal level. So thank you for your attention.

Thank you so much. It is lovely to be here. I am Madhavi Sunder. I'm a professor of intellectual property law at Georgetown University Law Center in Washington, D.C. And just as Professor Kretschmer and Professor Applin did a bit of a tag team on copyright and AI,

I am going to be the follow-up here with Professor McDonagh on the right of publicity from the U.S. perspective. So Professor McDonagh introduced the popular controversy that you may have all heard of with respect to Scarlett Johansson's voice being appropriated to be the voice, or one of the voices, of OpenAI's ChatGPT. This controversy erupted last May. Well, as it so happens, I was in OpenAI's

offices meeting with general counsel and the head of IP on the very day, if not the very moment, that the Johansson story broke. So our meeting was in fact interrupted by the counsel's phones buzzing nonstop, and I learned in the cab ride on my way back to the hotel in San Francisco that day

that Ms. Johansson had gone public with her dispute. So the first point that I want to make about this dispute with respect to U.S. law, kind of like the option zero that Professor Applin talked about, is the contention that we might not really need much new law in the United States with respect to this issue of digital replicas or deep fakes, specifically in the context of creative

celebrities and producers. So one issue is that we actually already have pretty strong right of publicity law, particularly in the Hollywood circuit of California, which was arguably strongly in Johansson's favor. Indeed, we had a four-decades-old case under California state right of publicity law which was strikingly on point. The right of publicity is a tort recognized in most U.S. states,

some under specific statutes and others under common law. Broadly speaking, this right in the U.S. recognizes the right of celebrities to control the commercial value in their identity. So on point for Johansson's case was a well-known case about the grande dame of music and entertainment back in the 1980s, Bette Midler.

So what happened was, in 1985, Ford Motor Company advertised its Lincoln Mercury using different popular songs of the time, but Midler rejected doing any commercial for Ford. So what did Ford do? Well, they hired a previous backup singer for Midler to study Midler's style and to sing the song, quote, "to sound as much as possible like Bette Midler," to really just imitate her.

And in a decision that significantly expanded right of publicity law in the United States, the Court of Appeals for the Ninth Circuit for the first time announced protection of a celebrity's voice against sound-alikes. So the court said a voice is as distinctive and personal as a face is,

and concluded that to impersonate Midler's voice was to pirate her identity. Now the facts in Johansson's case were similar. So in 2013, Johansson voiced an AI girlfriend in the film "Her." And a decade later, OpenAI CEO Sam Altman, a fan of the film, asked Johansson to be the voice of ChatGPT. She refused.

A year later, when OpenAI launched an expressive voice for ChatGPT named Sky, which arguably sounded similar to Johansson's, Johansson said, cease and desist. Now, to be sure, the Sky voice was just one of a few different voice options that ChatGPT had rolled out at the time, and a jury would have had to have deliberated on how similar or not the voice really was to Johansson's.

But California right of publicity law has been interpreted very broadly, covering not just exact imitations or sound-alikes, but even anything that evokes the identity of a celebrity. So there was a famous earlier case that involved Vanna White, the glamorous co-host of the game show Wheel of Fortune, who had won a right of publicity case against the use of a robot that was wearing a blonde wig turning letters,

And though the defendant in that case never used White's name or her actual image, the US Court of Appeals said that it violated White's right of publicity just to evoke her image.

So given this broad reading and some additional factors, OpenAI may have been implicated here. So first, two days before the rollout of the Sky voice, Altman reached out again to Johansson and said, "Have you changed your mind?" And she hadn't. Second, this is probably the most damning: Altman tweeted about the launch of the new ChatGPT voices with the simple tagline, "Her,"

plainly evoking Johansson's identity from her voice portrayal in the earlier film. So as Professor McDonagh said, the case ultimately was never litigated, because Altman retired the Sky voice soon after Johansson objected. But it's worth noting that doing nothing, option zero, would actually, at least in the Hollywood circuit, probably go a pretty long way to protecting against voice appropriation like Scarlett Johansson's. Notably,

there are many other gaps in US law with respect to deep fakes. US copyright law, for example, would not protect against appropriation of the deep, perhaps sexy style of Johansson's voice work in Her. Indeed, US copyright law expressly says it protects neither voice nor style. And I'm going to come back to the style point later. So let me turn to the second point.

Still, many critics and observers are arguing today that despite this broad right in California, the Hollywood circuit, we really need more and stronger legal protection against unauthorized digital replicas, which include not just commercial appropriation of voice and image for advertisement and monetary gain, but also deep fake pornography. So these arguments are gaining traction.

And basically the argument is: we can't do nothing. And I think it really echoes what Professor McDonagh was also saying, that there's a lot of concern about other disruptions that may be on the periphery of intellectual property, but the call is to use intellectual property law to address these other disruptions, like deepfake pornography. Even Melania Trump has recently gotten in on the action, endorsing a new Take It Down Act

that would create liability for unauthorized deep fake pornography and require social media hosts to remove the imagery. So where are these renewed calls coming from for stronger and federal protection against unauthorized digital replicas? Well, there are several arguments for it. First and foremost, as I think Professor McDonagh said at the end of his remarks, there is no federal right of publicity in the United States.

And meanwhile, coverage and monetary liability vary greatly from state to state, leaving a celebrity like Taylor Swift, for example, who saw deep fake pornographic images of her reach tens of millions before social media hosts like X took them down without any clear and effective recourse across the country.

Second, some states, like California, protect only the commercial value in one's likeness, including voice, but that right does not extend to ordinary people, who typically don't trade in their image, likeness, or voice. Thus, many states leave ordinary individuals without a remedy against misinformation and pornographic deep fakes, which have become a large threat.

Another variation is with respect to whether a state right of publicity law applies to celebrities even after they're dead. So until very recently, the right in California was not descendable, meaning that the right was extinguished upon the celebrity's death. In addition, not all states' right of publicity laws cover voice. Some do, like California, but others don't.

And even where a statute might cover voice, like in California, it would again only apply to well-known, identifiable voices with commercial value. So it's this state of legal affairs that's led the U.S. Copyright Office to conclude in a report on digital replicas last summer that, quote, new federal legislation is urgently needed given the speed, precision, and scale of AI-created digital replicas.

So I'm going to highlight a couple of new pieces of legislation that we're seeing, both at the state level and two at the federal level, and then make some comments about these. So first, Tennessee was the first state to rush in. Not surprisingly, it's the home of a vibrant music scene in Nashville. So Tennessee passed the ELVIS Act last March, which stands for the Ensuring Likeness, Voice, and Image Security Act.

And this act is the first to address rights of musicians in the age of AI, and the act makes voice appropriation a criminal misdemeanor. Now this act is in response, no doubt, to the release of an AI-generated song called "Heart on My Sleeve," simulating Drake and The Weeknd in 2023, known as the fake Drake before Kendrick Lamar called out Drake himself as fake.

The AI-generated song drew some 15 million views before it was revealed as AI-generated and unauthorized. Now I note with some irony that the ELVIS Act is named after a musician who was himself one of the great appropriators of the voice, style, and movement of other musicians, particularly Black musicians.

So California has also amended its already broad right of publicity statute in light of the AI threat. In late 2023, it extended the law to allow for protection of a celebrity even after the celebrity's death. And already we see a lawsuit by the estate of the late comedian George Carlin.

George Carlin's estate is suing the podcast Dudesy under the new California law for using AI to impersonate his voice and style in a YouTube comedy special cheekily titled I'm Glad I'm Dead.

So the case of Carlin, I would suggest, raises issues not just of sound-alikes, but also of metaphysical voice, right? More akin to style, the protection of which is much more controversial. So it's not clear to what extent Carlin's style is protected under right of publicity law, though again, California's right of publicity

has been interpreted broadly to include uses that just evoke the celebrity. So we don't know whether that would be included. For its part, though, the US Copyright Office, which has urged new federal legislation in this area, still stops short of recommending outright protection for style, for fear of stifling innovation.

There are also two notable pieces of federal right of publicity legislation that are being considered by the US Congress, or at least they were

under the last administration, but I suspect they will continue to get close attention. So one is the No AI Fraud Act that was introduced in early 2024. This would create a federal intellectual property right in voice and likeness and protect against the use of unauthorized digital voice replicas and digital depictions that readily identify an individual.

So, again, I think there are important questions to be raised here: if the concern is other disruptions that have historically been outside of intellectual property, such as harms to personal dignity from unauthorized pornographic AI-generated images, is an intellectual property right, now perhaps at the federal level, the right answer? This

bill would allow these rights to be transferred or licensed during an individual's lifetime, and they would endure at least 10 years after the death of an individual, even if that person hadn't used their identity commercially during their lifetime.

The law would also punish trafficking in a quote, "personalized cloning service," unquote, designed to produce digital voice replicas, so it reaches the creators of this technology.

And there would, of course, be secondary liability for the social media hosts who materially contribute to or facilitate infringement when there's knowledge that the subjects of the replica have not consented to them. So the law would, of course, allow for some balancing of free speech considerations, though many critics are concerned that social media hosts will be pressured to limit liability by taking down otherwise speech-enhancing content.

So there's another proposed No Fakes Act, which would similarly create a federal right to control image, voice, and likeness. The key aspect that I want to highlight about the No Fakes Act is that it would expressly apply to all individuals, not just celebrities.

Again, there's an inclusion of exceptions for freedom of expression with respect to commentary, criticism, scholarship, satire, and parody, et cetera, and also provisions for online service providers

that would be required to quickly remove all instances of infringing material once they obtain knowledge of it. So the U.S. Copyright Office has opined in favor of such laws, especially to the extent that they extend federal right of publicity type protection to all individuals. So I find it very interesting that the Copyright Office seems quite

accepting of extending federal copyright-style protections to traditionally non-copyrightable subject matter, like deepfake pornography and privacy interests. And I think that we should be asking whether we're seeking to use intellectual property as a tool well beyond its traditional parameters. But I

want to end with a final note, and that is that in the scramble of state and federal legislators to push for more legal protections, we shouldn't forget that digital replicas are not all bad. And I think that Professor McDonagh alluded to this. So a recent AI-generated parody of Zelensky and Trump's fateful Oval Office meeting last week makes this point. Look up AI, Zelensky, and Trump Oval Office boxing match.

And you'll see a parody of a hostile showdown that ends in an all-out brawl. Now, while such portrayals may raise concerns about disinformation, if properly tagged,

AI-generated content can provide poignant social and political commentary that should not face the immediate threat of being taken down to limit the liability of service providers. And I'm going to end on a final optimistic note, which is that alongside the right to prevent AI-generated likenesses lies the right to license such uses. And on that score, the future of digital avatars is already here. Last month, I had the privilege of seeing

the ABBA Voyage Concert here in London. And it was really a wonderful example of how AI concerts can revive musicians from the past and create beautiful new works full of emotion, nostalgia, and innovation for new and old generations alike.

So AI technology allowed for the creation of life-sized CGI avatars, what they dubbed "ABBAtars," which replicate the pop stars in their prime, now performing new choreography and new routines, but to the old standbys. And in this case, all the members of ABBA are living and fully authorized the use of their voice, likeness, and image. But it wasn't just the machine doing the work.

Four decades after their last public concert together, the real ABBA recorded the show in a studio in Sweden over five weeks using motion capture technology. The real live stars of ABBA in recent times now sang the songs, danced the dances, and chatted between the songs, but they did it all in a quote, NASA style studio with monitors and cameras everywhere and 100 people capturing all the data, unquote.

As we think about regulations, we need to pay heed to the innovative and transformative possibilities of AI-generated replicas, as the Best Actor award this week to Adrien Brody perhaps helps us begin to recognize. So before we're too quick to ban, let's appreciate that hybridity between man and machine is our future. AI can allow us to bring back music legends of the past,

or perhaps give new physical life and animation to disabled performers and individuals. In short, we have the technology to bring back the dead; all you need is a license. AI and law together may yet give us a new lease on life. Thanks.

Thank you. That gives us about 20 minutes for questions from the audience and from online. I think, given the last two speakers, our choices seem to be: embrace AI or be assimilated. So we'll take questions from the audience. I think what I'd like to do is take two or three first and then give it to the panel while we collect some online questions. So please wait for the mic to come to you. Okay, so we have two: the lady in blue here, and the shirt. So if you go first there, and then. Thank you. Thank you very much. Is it working?

Yeah, I can hear you. I can hear you. Very, very exciting, very interesting. I have a question on the opt-in and opt-out. And I know that we've spoken a lot about the problems with the opt-out.

But I wonder, and that's more for Martin and Tanya, I suppose, how would an opt-in work? And have we thought about potential difficulties with it? And what, I might be totally overthinking this, but...

What happens if somebody actually purports to be an owner of a work? They opt in, and then there's a bit of a trust issue: the person who has opted in may not actually have the right to opt in for training. I tried to dig out a bit of literature on that, but I'm not really sure where we end up. Okay, great. So if you can have the second question, please. Hello. Yeah, I just wanted to ask about...

With any review of copyright, at the points where copyright reform happens, we have to remember that there's always a political overlay on top of the kind of legal discussions that we're having. And I wondered if, given the recent developments in the US

regarding who's in charge these days and their support for the AI industry and the UK's need for growth and what seems like betting quite heavily on the AI industry and the tech industry in general to pull us into a better place. I just wondered if you had any thoughts on how that might affect

the way that decision making goes when it comes to either new legislation being proposed or interpretation of the existing legislation in the courts. Okay, great. I'll take one more question, please. We had someone here in the middle. Yes, go ahead. In the middle, leading black.

I work in the UK film industry, so I'm a sole trader, I'm a freelance person. And I actually filled in the consultation, which is no mean feat if you're a lay person. But the people that it seems to be directed at are the owners of Warner Brothers or UK Netflix or whatever. And so they will reap

rewards from opting for option three, you know, let everybody rip and mine everything. The film industry brings into the UK 5.6 billion pounds every year and employs an enormous number of people. Where is our say?

Okay, brilliant. So we have money, politics, and law. So I'm going to let each of you choose the question that you think you can most effectively address. Please go ahead. Martin, do you want to? Okay, so I'll probably answer a few of

them together in some ways. So the opt-in in many ways is just what we have now. It's just an assessment of whether there's a copyright infringement or not, and whether there's a need for a license; that is still an open question. And my main point, really, underpinning everything I said before, was that big tech has got the data it needs.

So pulling up the drawbridge now for everybody else is, as a policy measure, crazy. Politically, in a setting where you think about regulatory competition between countries, it is a terrible move. And the key question really needs to be: what do you need for R&D and scientific research in the UK to flourish?

But that doesn't solve the issue for the creators. That needs to be addressed separately. They can't be conflated. I think that is the core of my message. How you concretely do this, that's a longer talk. I don't want to hog the floor here too long. I'll pass straight on. I think it's really interesting to talk about how we might do this.

I mean, if I'm right in thinking, the main proposal coming out of the CREATe consultation response was to say, let's have a right of equitable remuneration in terms of the licensors of data sets vis-à-vis the creators. And I was interested in that because it certainly would involve a certain amount of remuneration going back; whether it changes that graph dramatically is another matter.

I agree. You need to start somewhere. And we discussed it at length. In many ways, what we're proposing is just a contract regulation, really, isn't it, in some ways? So where there is a contractual relationship between you and the intermediary, and the intermediary licenses on, some money has to come back.

But we also point out at the different end that we're essentially proposing a broader research exception. And we propose that you need to focus on market entry. Before something enters the market, there is really no competitive threat.

And increasing the costs prior to market entry is bad for UK market entry and for R&D development in the UK. So prior to market entry, let people do what they need to do. But once they're interested in offering a product, a service, or the model to the market, at that moment transparency requirements kick in. And once you know what the sources of training data are,

the risks increase dramatically. So then as a right owner you can prompt the model and you can see whether your stuff comes back in that way. You can also assess whether the proposed product competes with the product which is already there, which is offered by an incumbent producer.

So the question is: how would you, if you're now a developer, want to enter the market when you have these licensing demands coming at you? So how do we deal with that? And there a kind of collective licensing solution may still be a second layer, apart from the contract regulation. It may well be that something also needs to happen which facilitates, as a sideline, mass digitization. So I think that there is probably a role for a different level of channeling money, but it needs work.

- Great, so I think-- - Sorry, is it too much now? - No, no, no, look, and by the way, if I can take this sort of second question and put it to both of you and ask: is intellectual property the wrong way of looking at some of these questions? Are we trying to shoehorn too much into an existing framework that is by definition narrow, and should we be looking beyond these rights? So briefly, if both of you can just sort of address that and pick up on any of the other three questions.

- Sure, I'll jump in on that. I was really intrigued by Professor McDonagh's framing of the disruption, and I was thinking, Professor Applin, when you were saying the statutory license might be going too far: how much of your feeling about that, or your sense of that, is because it is seeking to answer these other disruptions around labor rights, the existential threat to creators that AI creates,

it serves as more of like a guaranteed income for artists and creators in the age of AI. So I guess that's something that I'd love for us to talk about, but...

In the United States, what we have been seeing is the use of copyright, or at least an attempted use of copyright, to do things that go well beyond the traditional scope of copyright. So, for example, to claim copyright in photographs of yourself to prevent revenge porn, or to claim copyright in a work to shut down speech, perhaps, that you don't agree with.

And thus far, in the copyright context at least, courts have really pushed back on that. But now what we're seeing is this happening in the context of the right of publicity. And again, I think it's perhaps the one thing that everybody has consensus around: protect Taylor Swift. Except, of course, for Donald Trump and JD Vance. They don't want that.

But in the rush to do that, at least prior to the changeover in the White House, in Congress, members on both sides of the aisle, the current bills that are being debated would in fact give everyone an intellectual property right to

prevent the unauthorized pornographic deep fakes, really because of concerns about celebrities like Taylor Swift, but of course also ordinary people that are increasingly faced with these threats, including young girls. Again, I don't think that IP is the right

tool or framework to deal with that. And there are other bills in Congress that are being debated that would just specifically address this issue of the pornographic deep fakes and have tort remedies, et cetera, that would be created for that. So I think it's a better thing to keep them conceptually separate and to think about property rights largely as commercial rights

In the United States, also unlike in the UK and the continent, we don't have a strong, if any, tradition of moral rights.

So it is very much a commercial right, with respect to both the right of publicity as well as copyright, that we've at least historically been talking about. So I think it would be a really big shift for us that is not advisable. Great. Thank you. And do you think we need an Attenborough Act? It doesn't have the same ring to it. Well, just to respond to the question of the small-scale filmmakers...

I don't know if it's any consolation, but small-scale industries and SMEs always find it difficult to get a response in every sector. So you're not at all unique in that regard. I'm talking big scale, actually. I'm talking Warner Brothers, Netflix. I'm talking big, you know, Marvel. Yeah, well, I mean the...

There are obviously layers to this. So in the music industry, for example, what you hear in the responses, Kate Bush or Paul McCartney, the famous names are mentioned, but maybe some of the lesser known names are not mentioned.

But it's worth remembering, going back to the disruption of the technology, that the film industry was itself a huge disruptor of the theatre industry. And part of the way that theatre survived and is now so vibrant is through things like subsidy, also creators themselves doing incredible things with small resources. So it's very difficult when you're at the eye of the storm and you're in the industry that is being disrupted.

It's very scary. But creators, particularly in the UK where it is so vibrant, they will find ways to keep doing what they're doing and doing even more imaginative things than we can think of right now. Okay, great. Thank you for those answers. Let's take two questions from our online audience. Genevieve needs the mic.

So we have a really large global audience joining us from online. We have close to 200 unique viewers watching live across 28 countries. And so the first question for the panel is from David Walter from London.

So he asks, why not expand the traditional statutory law by applying a levy algorithm which can be designed and used by artists, authors, and performers, which can be attached to any image, performance, or writing? Was it not so in the past that authors and artists used symbols or marks on their work to protect their copyright? Okay, great. Let's take one more, shall we?

So another big theme was about the topic of authors and a question from Oshada Rodrigo from Sri Lanka is that

How do AI-generated deep fakes of artists and performers threaten their moral rights of integrity under UK law, and what lessons can be learned from the EU's more robust system of moral rights protection? Okay, great. Thank you. I think we've dealt with the moral rights to some extent, but Tanya, would you like to take the first question on levies? Yeah, so I think...

I think the question, if I understand it correctly, was driving at the fact that we could have a technological solution, so whether that's watermarking or some kind of embedded technology which allows for money to be more precisely collected. This has been a fantasy for many years in terms of the digital realm.

I think it's fair to say that that fantasy hasn't really become a reality in the last 20 to 25 years. So I wouldn't be too optimistic about that kind of practical workaround. I don't know what you think. It's just kind of an engineering solution to a legal problem, which is a nice way of framing it. Let's take a couple of questions from the in-person audience. I think we just barely have time for that.

At the back there, gentlemen at the back there, and then Leidy in the middle, please.

Hi. Thanks so much for really interesting talks, really interesting discussion. I just wanted to ask about something that wasn't directly touched on, which is the copyrightability of AI outputs. So the US Copyright Office just put out a report on that question. I wondered if you could comment on what you see the stakes there being and perhaps the slightly confused situation in the UK law and its difference maybe from the US.

Great question. Thank you. And then lady in the middle, please. Hi. I really enjoyed the discussion by all of you. I just had one question about one interest that hasn't been mentioned so much, which is the user. The last year or so, we've seen a lot about AI model marketplaces where you have users that are sharing and downloading and uploading different models that can be fine-tuned.

And I just wanted to understand whether you could see copyright expanding to cover new business models that use those type of fine-tuned models, particularly because they kind of come from this open source community. Thank you. Great. Thank you. So on the question about protectability of outputs, I do think the position under UK law

would be different from that specified by the US Copyright Office, even though we're using a standard of originality that is similar. And I think it's important to be clear that that's the US Copyright Office position in terms of registration of outputs. We haven't yet had a court decision which kind of tests how the US originality standard would apply. But

that kind of practice from the US Copyright Office is going to create certain sorts of understandings of protectability in the US. I wouldn't say that a UK court would take the same approach as the US Copyright Office. I think the other thing we didn't touch on is the computer-generated works provision in UK law, which is unique to UK law, which doesn't exist under US law. This could capture...

all sorts of AI outputs that didn't satisfy the originality requirement. But I think here, I mean, my position on that particular unique provision is that we can get rid of it. It doesn't seem to have been useful; we would need evidence that it has been useful. And we can just rely on the authorial works provisions, and to the extent that related rights actually kick in, for AI-generated sound recordings, for example.

Should I say something on the open source question? I think it's a really important question and almost impossible to solve. We can run Llama 3 and DeepSeek R1 on our laptop. So the compute required is fairly small, and to get...

any kind of legal regime that works here is terribly difficult. So if you start with transparency obligations for an open source model, which sits in a different jurisdiction,

Where do you start? If you offer a service in the UK based on that model, you probably can have an enforceable transparency obligation. But if you run it on your laptop, I can't see what we possibly could do.

So the question of extraterritoriality is really critical, I think, for open source. But it is one that, I think, globally has no solution. It's innovation we want, but it also comes sideways to our intellectual property system. So we struggle.

Great, thank you. So we have, I'm sorry that's all we can take, but we have a couple of minutes, so I'm going to take the opportunity to ask our panel, in about half a minute each, what do you think legislative and policy priorities should be for the next three to five years? So you have about half a minute each. Madhavi, shall we start with you? Yes, yes.

Well, I mean, just specifically, I think that in the US, it was interesting to hear the perspective of UK scholars and practitioners weighing in on whether to adopt our fair use approach. And I think the consultation is probably already way ahead of where we are. And once we are relying on a US fair use analysis, this is going to take possibly a decade to work itself out in the courts.

So I think we need to be thinking not just about the U.S. Copyright Office, which is making all sorts of pronouncements around things like authorship in AI-generated outputs that

I don't know how much validity they ultimately will have in courts of law. So we need, I think, a more focused effort on the part of Congress to address this in a more efficient and quick way, similar to the approach that you're all engaging in here. But good luck with that for the next four years. Thank you. So Luke? I think that the main thing that the law should do is tread very carefully here.

Don't rush into it. The law is a very blunt tool, so if we get this wrong, it could have lots of different downstream effects that we don't appreciate. The question of the user that was brought up by the audience, I think, is going to be very important for the future. What will we value in the future in terms of creativity? Will AI outputs find an audience? That will

be a crucial question. And if that is the case, and if they are generating huge amounts of value, then the law will be forced to regulate them in some way, whether that be through court cases extending principles that are already in the common law, or whether that be through new statutes. But I don't think we're there yet. I don't think we need to rush in. Great, thank you. So I would be looking to focus on

tools or policy areas that aren't just about IP in terms of innovation and remuneration. So thinking about how do our tax incentives, how do certain kinds of research grants

facilitate development in this area. And that's something that, you know, doesn't get discussed in terms of government policy; it's either copyright, or it's to what extent do we regulate. And so discussions around what are the kind of conditions which are helping to foster creativity and innovation, and those sorts of other tools. Thank you.

Yeah, with our team of IP lawyers, I think we all agree, you know, we probably think it's not the right toolkit here. In the concrete situation of the UK law now, I mean, I still think that the opt-out is the wrong line to go down, and I think, you know, copyright law can be in the way, it can be a problem, so it does matter, you know, but it's probably not the solution.

But I feel that we need to really think very carefully, more horizontally. It's probably a European word, but it's data protection law and many other layers which make a decision whether you as a researcher can compile a data set which then can produce new,

relevant research. So it's a layer of different rules which make it possible for a research environment in the UK to flourish. And that's what we need. If we can't manage that, all the other regulatory tools are kind of fraught.

Wow, that's quite a big word to end on. Well, thank you all for your attention. Thank you to our online audience. Thank you for joining us. And please join me in thanking our wonderful panelists. Just a great discussion. Thank you. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE Events soon.