The core problem is that social media platforms have been designed in a way that incentivizes polarization and the circulation of harmful content, leading to mental health issues and societal fragmentation.
Social media platforms are designed to maximize engagement, which often means promoting content that polarizes users, such as lies, false information, and offensive images, leading to increased division rather than unity.
Couldry suggests supporting federated social media platforms that operate on a smaller, community-based scale, allowing for better moderation and interaction. He also calls for regulatory changes to enable users to transfer their data and contacts freely between platforms.
The 'space of the world' refers to the artificial environment created by social media platforms, which has been designed in a way that allows for the global circulation of content, both good and bad, often leading to harmful consequences.
AI is both a problem and a potential solution. While it can help moderate content, it also exacerbates issues by enabling the proliferation of bots and deepfakes, and it reinforces the profit-driven business models of social media companies.
Human solidarity is essential because it is a prerequisite for addressing global challenges like climate change. Without solidarity, societies cannot effectively collaborate to solve these pressing issues, which are exacerbated by the divisive nature of current social media platforms.
The main argument is that AI cannot effectively determine what is offensive or harmful to humans without exposing people to vast amounts of harmful content, which is both unethical and impractical.
Couldry acknowledges that capitalism drives the profit-focused business models of social media, which prioritize engagement over user well-being. However, he does not propose an alternative to capitalism but suggests that addressing the toxic ecology of social media could lead to broader societal changes.
The second book, 'Corporatizing the Mind,' will focus on the risk of corporations redefining human knowledge and expertise through AI, potentially undermining human rationality and collective problem-solving abilities.
Couldry aims to address the fundamental challenges facing humanity, particularly climate change, by exploring how social media, AI, and the role of art can contribute to fostering solidarity and a better future.
Welcome to the New Books Network. I'm Joanne Kwai, your host for today. I'm a PhD candidate at the Department of Geography, Media and Communication at Karlstad University in Sweden. Joining me today is Nick Couldry, Professor of Media, Communications and Social Theory at the Department of Media and Communications at the London School of Economics and Political Science. We're here to talk about his latest book,
The Space of the World: Can Human Solidarity Survive Social Media and What If It Can't?, published by Polity Press in 2024. I'm very excited to have Professor Couldry here with me in person in Sheffield, the UK. We are here ahead of the official book launch at the annual conference of the Association of Internet Researchers. So Professor Couldry, thank you so much for joining me today. Oh, pleasure to be with you, Joanne.
Can you give us a little bit of an overview of the book? What is it about, and who did you write it for? Yes, sure. This book was written almost entirely during the pandemic. I had the idea just before the pandemic. And it's an idea addressing the fundamental problems we have with social media in many societies today. It's a book written for a general audience, so for everyone.
Students at any level, including undergraduate and any general reader who's troubled by the social harms that seem to be emerging from the commercial social media that we have. So what I do in the book is I start off by summarizing the mess we're in in social media with polarization, mental health problems for young people, especially young women across the world.
And then I try and tell the story of how we got to this problem in the form of a fable or a fairy tale. Because it doesn't really make sense that we got into this problem. When we tell a story to a child at night, we tell it as a fairy tale, as if everything really made sense, either for evil or for good. So I tell it that way, as if it made sense.
And then we realized the extraordinary, crazy steps that we made in the past 30 years, which have led us to the commercial social media we have that are doing so much harm. And then in the middle part of the book, I summarize all the evidence that exists for why the design choices that were made with social media, and the business models that drove them, were guaranteed to lead us into this toxic mess. And then the last part of the book tries to think of how we get out of the mess, what measures we can take. And they include things like supporting federated social media platforms that are run at levels much closer to communities.
And our regulators allowing us to transfer all our contacts to the platforms we want to be on, say, away from the big tech platforms, and maybe taxing those platforms much more so they actually put money into journalism, which has been largely destroyed because of the social media advertising model. We need radical change. And the conclusion of the book is that if we want solidarity...
If we want the solidarity we're going to need to confront the terrible challenges of our time, above all climate change, then we need a social media that's going to give us more solidarity. Right now, it's giving us less. And therefore, we have no choice but to confront the problem and rebuild the social media we've got into a better social media. And that's the challenge of the book.
You mentioned the word human solidarity, and we're experiencing this world that's probably increasingly fragmented and polarized. Do you think human solidarity is actually something that people would like to achieve? Or what do you mean by human solidarity? Well, that's a good question, because when we're in a polarized situation, when we're very angry with other people,
and we feel we can only trust those who are very close to us, our family or the people who think exactly like us, then maybe we don't feel we do want solidarity with different people. But that's a result of a bad situation. We know that all the big changes in human history, such as fighting slavery, fighting for civil rights in America for Black people, bringing solidarity across countries to build international alliances, fighting war and so on, they've all happened by building solidarity between people who didn't know they had something in common.
So without solidarity, we have no chance of having a better politics. So I think we should be wanting that if we think straight. The question is, can we get it in the social media we've got? And what I conclude in the book, looking at all the evidence from social psychologists, economists, sociologists, and data scientists, is that the social media we have are basically a polarization machine. They've been designed in a certain way
to do the exact opposite of what we needed from social media. Put it this way: if you decide to link up 3 billion people or more on the planet, which is what we've done, in fact, we've linked all of us up through the internet potentially, 3 billion people or more on social media, then it's really important to make sure that we don't
create problems by having bad stuff circulating. We know there are evil people out there in the world. We know there are people who want to stoke division. But what we've done is the exact opposite. We've built social media that encourages us to like things, to show that we are different from those other people. Social psychologists 50 years ago taught us that that's exactly the conditions under which polarization increases rather than decreases. And I add to that,
We've built business models inside social media companies that are designed to incentivize the content that will polarize people: lies, false information, offensive images that get people talking and doing stuff on the platform. We've designed social media exactly the wrong way.
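To make that incentive concrete, here is a minimal, purely illustrative sketch, with invented posts and scores, not any platform's actual ranking code. It shows why a feed that ranks solely by predicted engagement ends up promoting outrageous, false content: truthfulness never enters the objective at all.

```python
# Purely illustrative: invented posts and scores, not any platform's real
# ranking code. A feed ranked solely by predicted engagement promotes
# whatever gets the strongest reaction.

posts = [
    {"text": "Local library extends opening hours", "outrage": 0.1, "truthful": True},
    {"text": "THEY are hiding the truth from you!!", "outrage": 0.9, "truthful": False},
    {"text": "Study finds modest effect of exercise on mood", "outrage": 0.2, "truthful": True},
]

def predicted_engagement(post):
    # Outrageous, hard-to-believe claims reliably attract clicks and shares,
    # so an engagement model learns to weight them heavily.
    return 0.3 + 0.7 * post["outrage"]

# The business choice Couldry describes: sort by predicted engagement alone.
feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["text"] for p in feed])
```

The false, outrage-laden post rises to the top of the feed even though nothing in the code ever looks at the `truthful` field.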
We built the wrong social media, so we need to dismantle and build a better social media. Why do you focus on social media? Why did you choose that as a case? Well, I thought I wanted to write a book about social media above all, although I also in the book discuss commercial search engines like Google. For example, Google's YouTube platform also causes a lot of polarization too.
But I wanted to focus on particularly social media because we know most about them. And there's an enormous amount of evidence now, including the evidence gathered by the social media companies themselves, that their platforms are actually creating conditions which are mentally harmful.
for young people, particularly young girls in terms of their body image, but also in terms of the general polarizing context. And they know why, because they know their business model incentivizes engagement, stuff happening on the platform, because that activity is what generates the type of attention that advertisers are attracted by. And that's where the money comes from. The social media platforms know this. As came out in the evidence of Frances Haugen, who resigned from Facebook, they know this very well, but nonetheless, they want to go on making money through this problem. Of course, they're trying to do something to correct it. So people might think, well, it's just a matter of sorting out the problem. But that's where I think the problem goes deeper. And that's why I wanted to write the book, because...
For me, the underlying problem underlying all the debates about social media is the type of space that we built collectively by allowing social media of these forms to emerge.
When the internet became a thing in everyday life about 30 years ago, it potentially linked up everyone on the planet and no one wants to undo that. It's a great thing. We love being connected on some basis. The question is, do we want to be connected this way through these sorts of platforms? What we really needed about 10 years ago was for someone to say it would be cool for us to be connected more with our friends, our family, the people we feel something close with.
and maybe get more information from those we don't know so well, but circulating stuff we're interested in. The core idea was not a bad one.
But what we needed was for social media companies 20 years ago to see that there's a very deep risk here. Because it stands to reason that if you connect potentially 8 billion people on the planet, 0.01% of them are going to be evil. That's a guaranteed fact in human history. Or at the very least, they'll want to do harm to a lot of other people, maybe for political or other reasons. If you take that as the guaranteed situation, then what you want to do is...
is you don't want to incentivize the bad stuff to travel. You want to keep it locked up where it can be controlled, monitored, moderated, and so on. But we built exactly the opposite type of social media platform where there's an incentive to circulate bad stuff, troll material, bot material that's based on pure lies because it generates engagement. After all,
There are not many things that are true. There's an awful lot of things that are not true. And the things that are outrageously untrue get your attention. You think, that can't be true. But what if it is true? Let me send that to my friend. Sounds interesting. That's the engagement on which social media platforms decided, as a business choice, to capitalize. So basically, faced with the potential risk of having built a space that could become toxic if it wasn't controlled and dampened down, they made exactly the wrong design choices, ensuring that that circulation, that bad circulation, would be incentivized and increased. We created, in other words, bad echo chambers rather than what in the book I call resonance chambers.
If they were resonating, it would be up to me, as a sound object, to resonate at the frequency I want. That's the way resonance works in space. But this is not what we've got. We've got social media platforms deciding: you will benefit from receiving the sort of content that benefits us, so we're going to keep pushing it at you until you break. And societies are breaking under that sort of pressure. So that's why I thought social media was not just a really important
problem for us as societies and citizens. And it is, for example, spoiling the sorts of information we get on climate change. It's hard to see straight anymore. Or on the pandemic, people believing nonsense about vaccines and the harm they do. So it's harming us, but it's also intellectually a very interesting problem because it's a very deep mistake we've made. We've really designed the space of the world wrong. Until 30 years ago, before we had the internet, no one had ever designed the space of the world in the entire history of humanity, because there was no way of doing it. Any emperor or king
or ruler would have loved to design the space of the world and say, I can control what people are paying attention to at every moment in my empire. But they couldn't, until we designed something like the internet. And that's why we needed to be super careful about that opportunity. And that's exactly what we weren't. And that's really what the book's about. Do you think the problems that you mentioned are fixable? What can we do to actually address them? There are a lot of actors that you just named that are involved:
the trolls, the platform companies, the government? How can we actually address this problem? Well, that's a good question. That's what I tried to face head on in the book. Clearly, it can't be a solution to undesign the internet. There's no one in the world who wants the internet not to exist, not to be able to find things that are out there online. Everyone wants that.
That's why I discuss search engines. We do need search engines that have a public responsibility, because without search engines we can't find anything online; there's simply too much.
Search engines are a difficult problem because at the moment we don't have a credible model for designing a public value-based search engine, and that will require taking on the power of Google. Let's put that one to one side, because that is genuinely hard, but many are working on it. Social media, we have more of a chance to reform that. Why? Because we actually have alternative social media platforms right now, federated social media platforms in the so-called Fediverse, which are designed on a much smaller scale.
They're not governed by remote people in California. They're governed by people who are inside actual communities who know broadly the types of people on their version of federated social media. And they all interconnect through an underlying software protocol, which makes sure that if you are on one platform, you can nonetheless post to someone on another platform. It's technically difficult. It's caused a few problems, but it is potentially a very good solution.
to this problem that the spaces are too big, that the moderation happens at the level of a vast global corporation, which of course cannot control everything that's going on between three billion plus people. And the decisions are made by people who know what they're trying to moderate. They know something about the community
that they help build up because it's on a human or community-like scale. This seems to me a much more rational way of designing social media such as Twitter or versions of Instagram. These all exist in the Fediverse, Mastodon being the most famous one. So the design exists, but of course we do need help from governments and regulators to make it viable.
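The federation idea described above can be sketched as a toy model. This is not the real ActivityPub protocol that the Fediverse uses: the server names, the blocked-word moderation rule, and the in-memory delivery are all invented for illustration. The point is the structure Couldry describes: moderation rules live on each community's own server, while a shared delivery mechanism still lets posts cross servers.

```python
# Toy model of federated social media: each small server sets its own
# moderation policy, while a shared delivery function (standing in for a
# protocol like ActivityPub) carries posts between servers.

class Server:
    def __init__(self, name, blocked_words=()):
        self.name = name
        self.blocked_words = set(blocked_words)  # local, community-set rules
        self.inboxes = {}                        # user -> received posts

    def register(self, user):
        self.inboxes[user] = []

    def deliver(self, user, post):
        # Moderation happens here, at community scale, not at the level
        # of one global corporation moderating billions of people.
        if any(word in post.lower() for word in self.blocked_words):
            return False
        self.inboxes[user].append(post)
        return True

def federate(post, recipients):
    """Send one post to followers spread across independent servers."""
    return {f"{user}@{server.name}": server.deliver(user, post)
            for server, user in recipients}

arts = Server("arts.example", blocked_words={"spamcoin"})
town = Server("town.example")
arts.register("ana")
town.register("bo")

print(federate("New essay on media and space", [(arts, "ana"), (town, "bo")]))
```

A post from one server reaches users on both, yet a spam post containing "spamcoin" would be rejected by `arts.example` alone, under that community's own rule.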
People have said in the past two years, since many people wanted to leave X when Elon Musk took it over, that there is no alternative. People tried to join Mastodon, they found the platform difficult, and then they found their friends weren't there, their contacts weren't there, and they couldn't rebuild them.
That is a problem, and that's why we need regulators in every country to ensure that we actually do have a free market in social media. Why wouldn't we want that? A genuine market that's competitive. So if I fall out of love with Facebook or with X, I'm quite free to take all my contacts, the history of what I've posted, and
And all those things that matter to me, maybe chats with family, photos, or whatever it might be, and take it to a platform of my choice so that people that I want to connect with, I can still connect with from there. That will be a well-operating market. It's not what we have at the moment. We probably also need governments, maybe with help from regulators, also to
insist that social media change their business models. Maybe they will be forced to anyway if there's a more competitive market in social media. They'll have to change, because everyone will want to go to a less extractive business model where it's not run for profit. But even if that didn't happen, why not ban business models for social media that make profit out of people's engagement? Why should that be a valid business model? It's harming society.
So therefore, there's a responsibility on social media platforms to change it. And the hook would be if you don't change your business model, we'll tax you at a penal rate and use that money to sponsor alternative social media platforms that do have a better business model, such as Mastodon, which are not run for profit, which are run for community benefit.
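The data portability that Couldry argues regulators should mandate could be sketched like this. The "portable-social-v0" format name and its fields are hypothetical, invented here for illustration; no real platform uses this schema. The point is that a standard, platform-neutral bundle would let users carry their contacts and history to a competitor.

```python
# Hypothetical sketch of regulator-mandated data portability. The format
# name and fields are invented; the idea is a platform-neutral export that
# any competing platform can import.

import json

def export_account(contacts, posts):
    """Bundle an account into a portable, platform-neutral document."""
    return json.dumps({
        "format": "portable-social-v0",  # invented format name
        "contacts": contacts,            # the people I want to stay connected with
        "posts": posts,                  # my history: chats, photos, posts
    })

def import_account(bundle):
    """A competing platform reads the same neutral format back in."""
    data = json.loads(bundle)
    if data.get("format") != "portable-social-v0":
        raise ValueError("unsupported export format")
    return data["contacts"], data["posts"]

bundle = export_account(["ana@arts.example"], ["Hello, world"])
contacts, posts = import_account(bundle)
print(contacts, posts)
```

With a shared format like this, "falling out of love with Facebook or with X" would mean one export and one import, rather than rebuilding one's whole network by hand.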
So a lot of the social media you mentioned do run on a global scale, but some social media uses are also very localized. That's right. And the kinds of problems we see, are they global? And especially the kinds of solutions you're proposing, can they be translated into different cultural contexts, geographic contexts? I think they can, because the solution I'm suggesting is at a fundamental structural level. It's not culturally relative.
Every society, every culture is harmed by polarization. We have so many examples, which are discussed in the book, from Myanmar to Sri Lanka to India, Brazil, Australia, Europe, the US, parts of China and Africa. Virtually every country has faced increased polarization due to these underlying structural factors. So everyone has an interest in the change.
The problem, therefore, is at a general level, and we see it across the world. We won't solve it by trying to build a new massive social media platform that somehow works at the global scale. The scale is part of the problem.
We need to build social media on a smaller scale, but make sure they can interact because the software is compatible. And the solutions for that already exist. If we do that, then we can leave the detail to be solved at the local level, where people know where the solutions lie. No one in California knows how to solve polarization in Myanmar.
They don't have a clue. Why should they have a clue? The problem was asking them to do it in the first place, which was a design problem. So that's the idea that scale is everything. And that's why the book's called The Space of the World, because it's that spatial error that we made in building the wrong type of space that is the core of all the particular problems that we know about in social media. It's not just because of the business models. The whole design was wrong, and therefore we need a different design. Mm-hmm.
So I think it was around 20 years ago, you already started working on the concept of media space. You've continued to work on the social space. And now we have the space of the world. So how do you see space? How do you define it? And how do we make sense of the space? Yeah. Well,
I would say that I've been interested in things to do with space for a long time, since my PhD, you know, 30 years ago. I always felt that although I was interested in media, there was a lot we weren't understanding in terms of media's relations to space.
If you think about what media are, basically, it's about the transmission of messages and contents, images, through space rather than just staying locked where I'm sitting right now. It's about transmission in space. But we don't understand a lot about how media relate to space. With broadcast media, it was fairly easy. We know we've got a transmitting center. It transmits, the messages are picked up by local aerials, and we watch the TV sitting in our home, and the message, the image, comes from some center, a broadcasting center in Shanghai or Beijing or London or wherever. That was a relatively simple relationship between space and media to think about. And it wasn't until about five or six years ago that I had this idea that there was a problem
in our relations with social media, because polarization was already clearly becoming an issue six or eight years ago, and that at its core this was a problem of space.
And that this wasn't really being talked about in all the press commentary and all the public debate about what's these evil social media, they're doing bad things. And they have made a lot of mistakes. But underlying that is a deep problem with the type of space that we've built. We've built the wrong type of space. If we'd started off 20 years ago thinking internet space is a dangerous place,
We know people can be harmed. People were aware of that in the first 10 years of the internet, when trolling started on bulletin boards and things like that. It's hard to control. People understood that when you're protected behind anonymity, you do strange things and you act irresponsibly. People became aware of that. But when in 2003 and 2004 we had the beginnings of MySpace and Facebook...
which was a potentially exciting idea to create infrastructure to connect people more regularly so that there's a stable infrastructure for connection, they didn't see the danger that if you do that on a scale that can go up to the global, because the more it spreads, the more money there is, then you create a very big risk that the bad stuff that for sure is out there somewhere will circulate everywhere.
I mean, this is the thing about internet space. When I'm online, in principle, someone utterly distant from me, from a different cultural world, who knows nothing of me, who knows nothing about what will harm me, can send messages out that circulate fast to me and do me real psychic harm. This is what's happening with kids who...
almost accidentally commit suicide by following a challenge that was set for them on TikTok, that someone thought was fun somewhere in the world. And they die because of that, because TikTok didn't control that circulation. So the consequences can be tragic, but they follow only from the design of the space and the failure to understand the risks in internet space. If we'd had something like federated social media 20 years ago, with that philosophy of building on the scale that's needed, the smallest scale we can have to connect with each other in the ways we want to, then we would have avoided the problems. But that's not what we did. And we've had to go through 20 years of gradually learning what a mistake it was. And now is the time
to correct, to make that change. With the recent development of technology, especially when people talk about, for example, artificial intelligence and now even generative artificial intelligence. And before, I guess, within social media, we had algorithmic content distribution, targeted recommendation, advertising. And now with generative AI, we have all these problems with deepfakes.
and misinformation and disinformation, and people are having a lot of fear around these issues. How big a part has technology, technological development, or more specifically AI, played in the problems we were talking about? And how do we address those issues?
Well, some people, of course, including the management of the social media companies, have hoped that AI can be a solution here, as they do in relation to many problems. That AI can moderate bad content, profoundly offensive content, more effectively.
There's two problems with that. The first is that the only way AI can be taught what is offensive to a human being, given that AI is not a human being, is for human beings to watch an awful lot of that offensive content. And people in the global south are paying the price of that. They're being paid small amounts of money to watch endless streams of images that you and I will never, ever want to see in our life. That is an inhuman cost.
that is just unthinkable. But it is necessary if you believe AI is the solution to bad content. So it can't be the solution. And in fact, it doesn't work most of the time anyway because it's too hard to work out what is truly offensive for human beings.
It's beyond AI because AI is not human. So AI won't be the solution. The problem is that AI is actually making the problem worse. AI, as you said, lies behind the proliferation of bots, automated agents that circulate lies much faster than human beings can in a more coordinated way, like a weapon, a coordinated weapon in the way that human beings cannot.
So that's been making the social media situation quite a bit worse for a long time. And I'm afraid that situation with AI is going to get worse. The reason is that AI is now becoming even more crucial to the business models of social media. Take the example of Meta, the biggest social media company in the world. Three or four years ago, Apple caused huge damage to Meta because it changed the terms of service of the iPhone,
which meant that you as an iPhone user had to give your consent before a third-party advertiser could target ads through apps on your phone directly at you or through Facebook directly at you. Facebook had to change. As a result, Facebook's relationship with the advertisers was disrupted. Facebook started losing $10 billion a year or more.
However, it's recovered. It's now making bigger profits. Why? Because it's improved its own internal AI, its large language models such as Llama. And as a result, through the AI analysis of everything we do on the Meta platforms, it's now able to send beautifully packaged ads to the advertisers directly.
So they buy the ads rather than buying the information to generate the ads. The net result is exactly the same. And hence Meta's profits are rising from advertising because of the AI. So AI will only make the problem worse unless you force social media platforms to change their business models. And that's the core solution. That's what we have to do. AI will not, therefore, solve the problem. It will deepen it.
So a lot of the problems I'm hearing here and also in other research, it tends to point to the business models and at the end, the capitalist system. What's your take on that? Well,
I don't, like most people, have a solution to what comes after capitalism. No one has a clear answer to that question. I hope we can move to a more fair, less exploitative economic system. But I don't have the answer to that. I don't claim to in this book or any other book.
I think we can only move towards an alternative to capitalism by solving the problems that we have. We have this really deep problem now in the way we've built the space of the world, which is toxic for
so many people, particularly young people, the very people who are most vulnerable in society, are being harmed by the type of space of the world we've built. So we have to confront that problem. And maybe by confronting that together and thinking about the design issues that that raises, we will start to think about the harm that capitalism is causing in other aspects of our society. Because what I'm really saying in the book is that
We built an ecology of social media, an ecology of communication, and that ecology is guaranteed to be toxic. And when we've faced that in the physical environment, we've tried to do something about it, because we know we can't live sustainably in a toxic environment. But that's what we built in social media. So I believe we will take that challenge on. I think there are increasing signs that this is happening. Perhaps the collaborations that brings, around new design principles, could be the start of rethinking wider principles in society.
I mentioned Jeremy Gilbert's book on 21st-century socialism, in its final chapter, where he argues that we don't know the future of capitalism. We don't know the total answer anymore. It's clearly not a Soviet-style planned economy. It's not like that. It's something very different. But maybe collective creativity in addressing the very big problems we do face could be the start of a bigger transformation. So that's what I hope for.
But the thing we need to focus on is this. It could never have been the right decision to delegate to business people whose goal, quite rightly, is profit, the design of the space in which we live. Not just the buildings, but the very social air we breathe, the very way that what I think, what I feel, the way I express myself directly affects how you feel and express yourself and those on the other side of the planet.
It never made sense to allow business to make profit from the conditions under which we can be together, because that is the basis of our life together. That's not for business to decide. That's for all of us. That's why all of us have to make a choice to
change the space of the world that we have. As far as I understand, this book is also the first book of a trilogy titled Humanizing the Future. Can you tell us a little bit about the trilogy, or what's the plan? Yes, this was an idea that came to me in the first year of the
pandemic. Because to be honest, in the early months of the pandemic, like many people, I really doubted whether it was worth going on writing. What's the point of writing when the world is falling apart? And I did stop at one point and I decided,
at that time, in mid-2020, that it only made sense for me personally, at least towards the end of my career, to write books that seemed essential for me to write, books I absolutely needed to write and, hopefully, other people needed to read. And at that point, I decided that the most fundamental challenge to humanity is climate change and the desperate need for all of us to change our lives so
as to address the challenge of carbon emissions and other emissions. And we have to do that in coordination with each other to some degree; otherwise, this will not happen in time to save the planet. For that, we need various conditions to be met.
I'm not a climate scientist, and I don't pretend to be one; I can't write about what we need to do from a technical point of view. But as a social theorist and a sociologist of media and culture, I asked myself: what are the things that I do know about that could contribute to humanity being in a better place to answer the fundamental existential challenge of climate change?
And that's when I decided that the idea I'd had a year or so before, The Space of the World, which I wasn't quite sure about yet, would be about the challenge posed by designing the wrong space of social media. Because without solidarity, we can't confront climate change. So that's what led to the first book. The second book is going to be called Corporatizing the Mind.
It is going to answer a variation of the same question: if we don't value our human intuition, our human wisdom, our capacity as human beings to think rationally together about what we need to do together in the world; if we regard human intelligence as inferior to artificial intelligence,
as trivial, as endlessly outplayed by machines that are run by corporations; if we no longer value our rationality, in other words, then we won't value our ability to rationally confront climate change. We're often told that AI is the solution to climate change. It may or may not generate useful information. But if we no longer value human rationality, then we're not going to do anything together to solve
the fundamental problem of changing our lives to address climate change. And therefore, this book, Corporatizing the Mind, is about the risk that we allow corporations to redefine what we count as knowledge and expertise in their terms that profit them, and we no longer take account of
what we know about human intuition, and have known for thousands of years: that we do have a rational faculty which, under special conditions, we can sometimes use to act together and do the right thing. So that book is about, as it were,
arguing against the corporatizing of the mind, particularly through AI, though there are also dangerous aspects like neuromarketing. The third book I'm not so sure about yet, but it will be a more optimistic book, and it will ask: what is the role of art, the aesthetic, the sense of beauty and joy in the world? What role can that still play, in a supremely difficult world, in helping us have a better politics and live better together? And
I think I need to finish the next book before I can decide what the third book will be, but I'm determined to end it in a hopeful way. So that's the trilogy that I have in my mind. That's an exciting book project, and I wish you the best of luck. I hope to have the opportunity to welcome you back when the new books in the trilogy are out.
So thank you again, Professor Couldry, for joining me here today. Thank you very much, Joanne. It's been a pleasure to talk to you. To our listeners, the book is available now, and we'll be sharing the relevant information and links in the show notes. Do check it out. Thank you so much for listening. Until next time.