
The last human job: AI, depersonalization and the industrial clock

2025/2/19

LSE: Public lectures and events

People
Alison Pugh
Sonia Livingstone
Topics
Alison Pugh: I studied professions that depend on emotional connection between people (such as teachers, therapists, and nurses) and asked what happens when that work is systematized. I found that the application of AI brings both opportunities and challenges. On the one hand, AI can improve efficiency, address unequal access to services, and offer service that is more objective and comprehensive than humans can provide. On the other hand, AI may also trigger a depersonalization crisis, weakening human connection and leaving people feeling overlooked and misunderstood. We need to protect human connection even as we use AI to improve efficiency, and guard against its negative effects. Sonia Livingstone: I introduced Professor Alison Pugh and her research, and noted that the application of AI has raised concerns about jobs, bias, and privacy. But we also need to attend to AI's impact on human relationships and its role in the depersonalization crisis. Kevin Roose and Michael Barbaro: This exchange demonstrates AI's potential to offer emotional support and advice, but it also raises the question of whether AI can genuinely understand and respond to human emotion. Jenna: As a pediatrician at a community clinic, I face enormous work pressure and limited resources, which keep me from fully meeting my patients' needs and leave them feeling overlooked. Pamela Murray: I shared my experience, as a student, of forming an emotional connection with a teacher, which affected me deeply and inspires me to be a teacher who builds good relationships with her own students. Grace Bailey: As a therapist, I have realized that in communicating with clients I sometimes cause harm by not understanding them fully. Tim Bickmore: As an AI researcher, I develop AI applications to improve services in fields such as health care and education, and I believe AI can fill gaps where human service falls short. Katya Mudry: As a junior therapist, I found that the hospital's standardized procedures and data-collection practices can block the emotional connection between me and my patients, leaving them feeling misunderstood. Peter Almond: As an AI engineer, I believe AI can carry out many of the tasks humans can, but humans remain irreplaceable in creating meaningful connection. Other interviewees: Other interviewees shared their experiences of forming emotional connections with people in various occupations, and the effects of AI applications on their work.


Chapters
This chapter explores the allure of AI in deeply personal human interactions, examining the assumptions underlying this appeal. It introduces the concept of "connective labor"—the forging of emotional understanding—and highlights its value in various professions.
  • AI's appeal in personal spaces stems from its potential to solve problems inherent in connective labor.
  • Connective labor involves "seeing" and being seen by another person, requiring empathy, reflection, and emotional regulation.
  • The injection model of care, where expertise is simply transferred, mischaracterizes the interactive nature of connective labor.

Transcript


Welcome to the LSE Events podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences. Welcome everybody to tonight's hybrid event. What a pleasure. It's hosted tonight by the Department of Media and Communications here at LSE. And as you know, we're going to hear a lecture, The Last Human Job, AI Depersonalization and the Industrial Clock.

My name is Sonia Livingstone. I'm a professor of social psychology and director of the Digital Futures for Children centre at the Department of Media and Communications at LSE, and I'm very pleased to welcome Professor Alison Pugh

to our online audience and to our audience here. Alison Pugh is a professor of sociology at Johns Hopkins University, and her research focuses on how meaningful emotional connections between people are shaped by rationalization, precariousness, and inequalities of race, gender, and class. And she's going to be talking today about her fourth book, The Last Human Job,

the work of connecting in a disconnected world. And it's based on a federally funded study of the standardization of work that relies on relationships.

Reviews described The Last Human Job as "impeccably researched and beautifully written", which I can confirm, and as a timely and urgent argument about preserving the work that connects us in the age of automation, something I think that's very much on our minds at the moment. We're surrounded by public fears that AI will take over our jobs. We are witnessing the struggle of governments to regulate AI for the public good.

And we are witnessing the renewed energy of the industry yet again to move fast and break things. And it's striking that the New York Times review of Alison's book highlights the idea of human interaction as becoming a luxury good.

Professor Pugh is going to talk for about 45 minutes, then there's going to be a chance for you to put your questions to her. We'll have roving microphones for the audience here, and for our online audience, you can submit your questions via the Q&A feature, and please include your name and affiliation. The hashtag for today's event is #LSEEvents. As you can see, I think, Alison, the floor is now yours. I invite you to come and deliver your lecture.

Thanks very much for that. Okay, thanks so much to Sonia for the invitation. When I first got the invitation, come to England in February, I actually responded with, you know, I leapt at the chance because of her towering reputation and the extraordinary intellectual community that you have here. So February or, you know, it didn't have to be June.

And I'm really glad to be here. I got to go to an Arsenal game on Sunday and yeah, so it's been fun. So today's talk is from my latest book, "The Last Human Job." And in my research, I was lucky to follow around a hospital chaplain whom I call Erin. And she prays with the patients and she sings with them.

And they are facing some of the worst moments of their lives. But in addition to this spiritual care, Erin also had to wander around the hospital finding computers to enter data about her work on no fewer than three different platforms. No one was billed for her work.

And it was, at the time, a puzzle that I set out to decode. I'm showing you this: a kind of cheat sheet that she carried around so that she could figure out how to code her work on the platforms. And here's one of the platforms. She actually had to put it into the standard Epic electronic health record that doctors use, and also two others. A chaplain, coding whatever she's doing.

This is a puzzle and it actually was a really good example of the central question of the book, which looked at deeply personal, humane service jobs that relied on emotional connections between people and asked what happens when we try to systematize that work.

And from a critical perspective, the question gets right at the inherent tension between human feeling, human care, and efficiency. And as it happens, when I was looking at this question in 2015, the only other group that was also trying to answer it were AI engineers.

So it became clear that the issue of systematization involved not just simply streamlining practices with data and data analytics and checklists and manuals, but also automation through robots, AI, and apps. Fast forward seven years later, in 2022,

there was an episode of the New York Times podcast, The Daily, which millions of listeners listen to, hosted by Michael Barbaro. And this was the episode shortly after ChatGPT was introduced to the world. And the tech journalist, Kevin Roose, is talking to him. And Michael asks the ChatGPT bot

why he tended to be so critical of other people. And he reads aloud the response. He says, "Being overly critical can also be a sign of low self-esteem or lack of self-confidence. It may be that you are using criticisms of others as a way to feel better about yourself." And he stops. He says, "Ooh, I'm feeling seen."

And then he continues reading, "Or try to control a situation that you feel anxious or uncertain about." And then he stops and goes, "Really seen!" And when he finishes reading, his guest, Kevin Roose, asks him, "How does that land?" And he says, "It lands!" Yeah, I mean, it's conventional and a little rote, but it also feels like if it came out of the mouth of a relatively high-paid psychotherapist, I would take it very seriously.

And Roose agrees. And he says, for something that is free and instantaneous and available on your phone at all hours of the day, it is actually capable of some pretty remarkable kinds of advice and guidance. So we are living in an AI spring.

A moment in which artificial intelligence is being deployed to solve problems we thought were intractable, such as how to conquer drug-resistant bacteria in hospitals, or how to predict earthquakes, how to decode the language of sperm whales. That is my personal favorite. AI has ushered in a new era of economic possibility.

But we also know that AI brings serious problems. And the most common criticisms have involved algorithmic bias, surveillance or privacy issues, and job disruption. We have all heard of those kind of three major problems that AI brings with it.

We have heard that AI turns historical correlations, often based on bias and stereotyping, into built-in assumptions, so that sentencing algorithms, for example, are more likely to predict recidivism among Black defendants than white ones.

We hear that apps track Amazon drivers when they look away from the road, or that the Chinese government has deployed a social credit algorithm to assign citizens a risk score determining their ability to book a train ticket or get a loan.

We hear that AI will radically reduce many occupations, for instance, dermatologists or radiologists or truck drivers. These are all very worthy concerns. I'm not saying those are not what we should be talking about. But there is something that AI critics are not talking about. The conversation has not yet begun to think

about the impact of AI on relationships, on the connections between people that are forged in emotional, interpersonal work, work like teaching, counseling, primary care, even being a chaplain. And by omitting this concern for human connection, the critical conversation about AI is not just missing a vital issue, but also blinding us

to its role in what I call a depersonalization crisis. So here's the plan of the talk, and that's where we are, just at the beginning. Today I'm going to talk about the work of human connection in the AI spring and outline its implications for this crisis. But in doing so, I'm going to address three primary questions. The first is, what's the appeal of AI?

I should say it like this. What is the appeal of AI in this personal, clinical, human space? And then what kind of assumptions underlie this appeal? And then how and why should we protect human connection in an AI spring? So to answer these questions, first we have to think about what the work involves, like how people forge relationships in their work:

therapists, coaches, sex workers, even business managers and high-end sales staff. Many jobs require connection, and after five years of research, I concluded that what these connections had in common was that they involved seeing the other and the other feeling seen.

So that process involves some form of empathic listening and reflective witnessing and regulating your emotions to get out of the way of kind of hearing and reflecting back to the other. And most important, this process is deeply interactive, not least for something to land successfully, the other person has to kind of assent to some degree with the vision that's being reflected their way.

Like Michael Barbaro has to be like, "Yeah, I am kind of critical, I am kind of perfectionist," or whatever. I call this work, this interactive, emotional, interpersonal work, "connective labor." And with that term, I'm trying to capture that mutual achievement. And I define it as the forging of an emotional understanding of another person to create valuable outcomes. And its outcomes are indeed valuable.

Reviewing a battery of randomized controlled trials, for example, medical researchers found that the patient-clinician relationship has a detectable effect on health. It's an impact that they described as stronger than that of taking an aspirin every day to ward off heart attacks. In therapy,

the connection between the counselor and the client even outweighs the particular version of the therapy, the modality, the kind of therapy that the therapist is using. So the therapists fight about which one is more, I don't know, effective, but the thing that trumps the efficacy, the kind of variable efficacy of all of these different kinds of therapy is the relationship that happens between the therapist and the client.

And the value also comes out in the stories that I heard. So Pamela Murray, another person I interviewed, she's an African American middle school teacher, and she told me that as a child she had something called selective mutism. And her teacher at the time took the time to find out why she wasn't speaking.

And it turns out the teacher decided or ascertained that she wasn't speaking because her family was moving constantly. They would spend only months in a particular place.

So Pamela said to me, "Kids just get us talking to them all the time but not listening. And that's what she did. Just listened. Just sat and just listened. And that made a world of difference. I could have been tested and put into special ed. Instead I was tested and put into gifted and talented."

Yet despite its... before I get to that, I'm going to give you one more. That relationship also inspired her own exhaustive efforts to connect with her own students today. So she said, "I thought, I want to be that teacher for my middle school students. I want to be the teacher that I wanted and that I needed and that I finally got."

So, despite its ubiquity and importance, connective labor is essentially invisible, only partially understood, and not usually recognized, reimbursed, or rewarded. Its benefits have long eluded measurement,

making them easy to ignore, while the capacity to connect to others has long been presumed to be innately feminine, making it easy to devalue in women's caregiving jobs and to disregard or downplay in jobs where men predominate, like management or the law.

To date, the discussion about AI has mischaracterized this work, relying on a sort of injection model, where teaching or nursing or counseling

involves me, the expert, putting my expertise into you. And the model relies on a kind of shared individualistic cultural frame in which emotions are a particular skill or a trait attached to an individual, as in emotional intelligence or EQ.

When Kevin Roose, the tech journalist, tells Barbaro that ChatGPT actually is capable of some pretty remarkable kinds of advice and guidance, he kind of reflects this injection model, actually, when he's talking about therapy as simply advice or guidance.

But therapists, doctors, and teachers know that there is more to this work than simply downloading information. That it is instead the relationship that matters for the positive outcomes that they look for.

And people get this wrong for a lot of reasons, but first among these is that we seem to have trouble thinking about what happens between people. We don't even have much of a vocabulary for that. When we think of workers as individuals, we think about how innovations like ChatGPT might monitor or replace them. What is at risk, however, is more than an individual's privacy or his or her job,

but instead the connections that are the mutual achievement between and among human beings. And misunderstanding this work as an individual enterprise has also made us unable to see and address the depersonalization crisis. So, depersonalization crisis, what am I talking about? You have heard, I'm sure, about the loneliness crisis.

Political elites and some academics talk about it a lot, and there are loneliness ministers in government. But social scientists actually disagree about the evidence, what the evidence shows, whether or not people have fewer friends or family, etc. But instead, I argue that we face a depersonalization crisis, and this distinction matters for what we do about it.

Depersonalization is what happens when people feel not just lonely, but instead profoundly invisible. What is missing here is what philosophers have called recognition or psychologists have called mattering. The notion that you are seen and heard by the people around you as opposed to feeling insignificant or invisible to other people.

And I want to add here just parenthetically that I am not arguing that being seen as some sort of universal biological need. But instead, it is a nonetheless real historical and culturally specific one.

So even in the developed West, some don't share this. There's a sociologist, Freeden Blu Oeur, who discovered this when he studied a school serving mostly Black low-income boys. And Oeur found that while some sought respect or dignity, others actually wanted to be unknown, an urge strongest among those boys with prior formal contact with the criminal justice system. To them, relative anonymity felt like a privilege,

the privacy of being free from other people's presumptions, a way of belonging to their communities without the mark of a criminal. But despite some exceptions like these, the longing to be seen is currently widespread, acknowledged in popular culture, and supported by research. And I'll just give you one example that I found in Starbucks pretty recently. There is some evidence that being seen is in too short supply.

A sense of feeling invisible clearly animates working-class rage in the United States and in many other countries, and may have powered Donald Trump's victory in the US last fall. In the election aftermath, one op-ed even declared, "Voters to elites: do you see me now?"

Films like Parasite or Nomadland portray the intense effects of depersonalization for marginalized others who in the US suffer from the aptly termed deaths of despair, in which suicide and drug and alcohol overuse deaths have radically lowered life expectancy.

Similarly, the Black Lives Matter movement protested the brutality and othering that comes from being profoundly unseen as fellow human beings. So what can make people feel unseen? While there are a number of factors, one major contributor would clearly have to have something to do with standardization or feeling like a number.

which has certainly increased with the spread of industrial logics in interpersonal work, the rise of data analytics, and what some have called audit culture. And there is a debate among scholars that has addressed these trends, which we can apply to think about the systematization of connective labor.

And so you might call this a systematization debate or a dehumanization debate or a depersonalization debate. On one side, you have people saying that, like for instance, Braverman showed us that Taylorization led to de-skilling in manufacturing, where craft work was broken down into its component pieces and then split among cheaper labor. So people didn't make a whole thing. They instead were only in charge of their one windshield or whatever.

So what happens when that process hits emotional work? Standardization, also still over here in this corner of people who are critiquing this, they're saying standardization is inherently dehumanizing, workers are separated from their own expertise and discouraged from treating people as individuals, subject to demands for speed up, efficiency, commodification, which is alienating.

Then on the other side of who would be against this, but on the other side are people who say actually systems help us. First of all, they help prevent or kind of they ameliorate human unevenness in performance.

and they kind of elevate people from parochial concerns to a kind of, you know, a standard of fairness that applies to all. Other people say, don't worry about standardization, because people don't actually just submit to it:

human beings, when confronted with a standardized system, when they have to implement a standardized system, they're constantly messing with it. And there's a lot of research that shows this. So they'll be like, well, we have this standardized thing, but you know, under these circumstances, I do this. And under those circumstances, I do that. So standardization is more like something people are afraid of, but in actuality, it's a little more messy and humane.

Other people say, no, no, no, or that's fine, but don't worry about standardization because actually people use systems. It's not imported and imposed from without. People use systems to convey meaning and create their own relations, and it's like a human tool just like anything else. So there is a fight. There is a debate. But for all of these scholars...

Just like the injection model, they are all talking about individuals. They are all talking about people's individual skill or capacity to manage their emotions, to gin up the right feeling, to convey particular meanings, to mess with the standard. It's all about the individual worker or person doing this. And nobody's really talking about relationship and what happens between people.

So that's where we are. For their part, in response to excessive standardization, technologists also embrace the individual and they advocate greater customization or what they call personalization. So that involves

a process of ever more precise tailoring in which data is harnessed by technology to analyze someone's health history, how a person likes to drive, even the content of one's sweat. Personalized medicine and personalized education are each an effort to assess needs and produce recommendations tailored to the individual, akin to being seen, but by a machine.

So, socio-emotional AI has been burgeoning for over a decade, with engineers designing everything from AI couples counselors to virtual preschool to apps that advise diabetes patients

Of course, many of these forays remain in the lab, and critics caution against believing too much of the hype about AI's capacity to disrupt caregiving jobs, for example, or other interpersonal jobs. But since ChatGPT burst upon the scene, large language models have taken mechanized recognition to a new level.

For example, chatbots have been designed to teach, provide therapy, and give medical advice, in each case allegedly better than humans. So I ask again, what is the appeal of AI in this personal clinical human space? What kind of assumptions underlie its appeal? And how and why are we to protect human connection in an AI spring? Well, to answer these questions, this was what I did.

I talked to more than 100 people, mostly people who practice, supervise, or analyze connective labor. So a big chunk of therapists, physicians, and teachers, but also including a kind of raft of working-class occupations, including sex workers, hairdressers, home health care aides, et cetera. And I also interviewed people who supervised or evaluated this work or automated it, from administrators to engineers.

And then I also watched them in action. So hundreds of hours of observations of, like, clinical visits, of classrooms, conferences, in California, Virginia, Massachusetts, and Japan, actually. Observations included, for example, six months watching physicians, nurses, and patients in an HIV clinic.

A semester spent observing the training of school counselors, many hours watching videotaped therapy sessions with supervisors giving commentary to the clinicians who were in the videos, et cetera. I'm happy to answer more questions about the methods in the Q&A. So to understand why AI might be appealing, we have to think about not just what AI promises, but also what's wrong with connective labor as it's

currently experienced. And engineers I spoke to suggested that they were solving three kinds of problems and that AI was thus three kinds of better. And underlying these propositions were three core assumptions that I came to think of as fallacies. So the first problem that they were addressing, the first problem that plagues connective labor is its uneven distribution. For too long,

and this was really actually a motivating factor for doing this research in the first place, getting a good teacher or a doctor has depended on whether you were rich or lucky,

and the disadvantaged are served by public clinics and classrooms that are at best staffed by mission-driven individual heroes connecting as much as they can under unsustainable conditions compressed by either public austerity or profit-driven efficiency. "My patients, it's just like they're singing their siren song to whoever will listen because no one will take care of them. They're used to not getting needs met and they're just desperate."

This was Jenna talking to me. She's a pediatrician at a community clinic in the San Francisco Bay Area. And she told me that her patients were frantic for her attention. Their longing was so intense and so unrelenting that she said it overwhelmed her ability to meet it, given time constraints imposed by the extraordinary patient loads and limited resources at her community clinic.

So researchers report that working conditions contribute to clinician bias and stereotyping, in other words, to practitioners' inability to see the other well. To Jenna, this felt like a tragedy.

"Patients want so much more from me than I can give them. I don't invite people to open up because I don't have time. And that is such a disservice to the patients. Everyone deserves as much time as they need, and that's what would really help people, is to have that time, but it's not profitable."

The contemporary degradation of connective labor in which physicians and therapists work in increasingly sped up conditions that encourage rushed, dismissive, or even scripted interactions is directly linked to technologists' arguments that apps and automated connective labor are better than nothing.

"It's where we think we can have the most impact," said Tim Bickmore. He's a Northeastern University researcher.

who had developed an AI couples counselor, an exercise coach, a palliative care consultant, and a host of other applications. He says, "We try to find these areas where there's either no service provided or the service that there is doesn't meet the needs of the individual, and by automating, we can greatly improve the care that they're getting." And so hopefully, the AI is better than nothing. A few years ago, Bickmore created a virtual nurse.

an AI program to help low-income patients at Boston Medical Center with discharge procedures, hoping it would enable them to understand what are sometimes long and complicated instructions. And the virtual nurse, whom coders dubbed Louise or Elizabeth,

was no more than an animated figure on the screen, just like you see here. She had a mechanized voice asking viewers if they were Red Sox fans before going over the after-hospital care plan. And Bickmore was surprised, however, when 74% of the patients said they liked getting their discharge information from the virtual nurse more than a human one.

"I prefer Louise. She's better than a doctor. She explains more, and doctors are always in a hurry," one patient told Bickmore. The virtual nurse gave the patients more than information. She gave them time. Physicians spend an average of seven minutes with patients at discharge, Bickmore said, and it's clear from our studies that especially for low literacy patients, they're going to need somewhere like an hour.

The virtual nurse gave them time, while the unspoken message here was that the busy clinicians were offering disadvantaged patients a weak facsimile of connective labor, akin to nothing. So that sounds great, or, I don't know, maybe not great. Sounds like dystopia, but it certainly sounds plausible. So what's wrong with that? Well, the hidden fallacy underneath this is that the assumptions underlying better-than-nothing arguments

are themselves particular ways of seeing the world that are not inevitable. Instead, they reflect the extraordinary increases in inequality over the last few decades and the spread of efficiency campaigns and data analytics stemming from applying the industrial model to interpersonal work, as Jenna might have told us.

Current staffing levels and compromises about quality are not fixed or inexorable, and the only way to relieve the pressure on busy clinicians and strapped education systems is not necessarily to automate the work. Second problem. The second problem that plagues connective labor is uneven human performance: instances of misrecognition that can engender shame or other harms. So...

"Sometimes I just miss it. I just miss who they are." Grace Bailey, she's a white therapist, and she told me that. Recently, she said, she was treating a woman, a so-called serial monogamist, whose philosophy was to leave when a relationship stops being good. And when the woman's partner became ill, Grace was interested in what that meant for her plans.

"I asked her how that would affect the state of their relationship, and she was really, really hurt by that." "Why did the question hurt her?" I asked Grace. "Because I don't know who she is," Grace said. Still unsure, I asked whether the problem was that Grace had suggested that the woman's intentional approach to relationships, that she left when they were not good, would mean she might abandon someone when they were ill. And Grace nodded, "Exactly."

While the question may have been a reasonable one, I noted it was nonetheless one the client likely had to defend against all the time. "Absolutely," Grace said, "and I fell right in that camp. And then the next time I saw her, I said, 'I think I hurt your feelings,' and she was like, 'You really did.'" For some, machines solve problems like shame, stigma, vulnerability, unpredictability.

Human judgment is sometimes wrong, delivered harshly, or crippling in its impact, which we know, by the way, from research, is again more likely under pressurized working conditions. Automated connective labor then is not just better than nothing for these folks, it's actually better than humans. So people widely consider bots or apps as a judgment-free zone.

They are routinely more honest with computers than they are with humans. This is a surprising fact that researchers have found again and again and again. Adults will disclose more to a computer about their sexual practices before they give blood.

They'll tell a computer more about their financial troubles than they will another human being. Children are more willing to disclose bullying to a computer than to a person. And these findings recur again and again, even though machines are not private, not anonymous, and definitely judge people, as anyone subject to insurance rates or bank loan terms computed by algorithm might report.

But for some, these kind of judgments don't feel like the moral gaze of another human being. But in addition, humans give the benefit of the doubt to machines. And researchers again have repeatedly found that if programs or robots signal in the tiniest way understanding or relationality in some way, people will approach them as if they have feelings or empathy.

So as MIT psychologist Sherry Turkle points out, roboticists have learned those few triggers that help us fool ourselves. We don't need much. We are ready to enter the romance.

A recent report on chatbot therapy is typical. On the one hand, some users complained about the app's imperfect witnessing and how that made them feel invisible, but others suggested that people can indeed feel seen by the machines, à la Michael Barbaro. One of the users said, "This app has treated me more like a person than my family ever has." So what's the hidden fallacy here?

The hidden fallacy: in these heady days of the proliferation of empathic chatbots, it is tempting to believe that machines can do the work of seeing the other, and that any imperfections we're facing are momentary blips on their way to being ironed out. But first of all,

There's two points to make here. First of all, therapists told me that if people seek only to avoid shame, say by opting for an AI companion or counselor, then they might never be free of it. Although shame is piercing in human interactions, it is something to walk through together rather than run from. I actually think

They may be right. It's quite, sounds quite plausible, but I also think that argument is a tough sell because it asks people to kind of live with and face down their shame rather than run away from it. And it's, I think, perfectly human to run away from it. But nonetheless, it points to kind of a, it points to the crucial fallacy here, which is that judgment is the problem. In fact,

the power of human connective labor, ironically, tragically, rests in part on the very judgment that people fear.

So if you get rid of the risk of judgment in connective labor, you also get rid of its very profound impact. And here's Jenna again. She says, "So many other people could do what I do. Anybody could do that. But the patients don't want just anybody. They want me. It's just that one of the values society has placed on doctors is that somehow our ears are magical listening ears and that when we hear the story, we're going to magically extract whatever needs to be extracted and give it back to them in a way that's going to make everything better."

Jenna's patients seek her out because they value her opinion, in part because her expertise brings with it the risk of judgment. Engineers struggling with plummeting app retention rates know this already: humans keep other humans interested,

even at the risk of their judgment. Even for those who think computers are mostly better than humans, the cost of automation is that it removes the power of accountability, of aspiration, of the client wanting to do better. So I asked, for example, Peter Almond, he's an engineer who's actually making, may I say, an automated teaching assistant, what he thought humans still had to offer in this work. And he's a true believer.

And he said, "An audience that matters." In his view, robots would someday do most everything humans could do. This is what I mean by true believer, by the way. In education, for example, that includes grading papers and answering questions about the material. He still wasn't sure, however, if one could project enough humanness onto a robot that you want to make it proud of you. So, the third problem of connective labor

is not about the uneven performance, nor the risk of judgment faced by recipients, but instead it's a problem faced by workers. It's drudgery. Particularly the mind-numbing tasks involving data entry.

In recent decades, data collection and analytics have transformed the practice of primary care, of teaching, of nursing, leading to burnout rates of sometimes more than 50% in these occupations. And the mania for data analytics has extended even beyond these fields, as Aaron, the chaplain, might tell us.

So, Katya Mudry, for example, was a brand new therapist hired out of graduate school as an intake counselor at a busy county hospital in California. Part of her job was to screen patients for mental health problems. The hospital assumed the task would take her 15 minutes for each patient and gave her a questionnaire to help the process along. But she grew to hate both the clock and the survey.

She said, I'm the first person they're talking to about mental health and we have to do some stupid questionnaire and we have to ask about suicidal ideation. I've had somebody totally shut down during that part. I was doing a suicide risk assessment and when it came to the gun part, he said, I'm not answering anymore. I thought, oh shit, I've lost him. Whatever connection we had was just severed in the moment.

She says, "How can you, when it's the first time they've approached mental health and they're a male and they're crying, just say to them, 'Okay, what services would you like? Come back in three weeks'? I will just not follow typical protocol. It's such a disservice to the vulnerability that they've expressed. If that means somebody has to wait a lot longer, I'd rather offer that full part of me than pump people through."

Katya objected to the survey as a form of scripting, a standardization of her work that reduced both her discretion and the complexities of the people she saw. And the survey was also a form of data collection designed to generate measurable, comparable metrics about the man and his suicidal ideation. For the hospital, if not for Katya, pumping people through was her job.

The technologist's solution to this dilemma is that AI can help the human out. In other words, that AI and the humans are better together. A common phrase here is that AI will free us up, freeing us up for other, often more meaningful work. Raise your hand if you have heard that expression, that AI will free us up. Give me a break. It's got to be 100. Okay, maybe 70%.

researchers then try to ferret out which work might be considered rote and thus optimal for automation.

And here's one researcher, Jeffrey, who told me: "There are things that machines are starting to do that are presumed to be not rote. These so-called creative tasks. Is writing a news story, is that rote? Some people think it is, some people think it's not. You've got companies that are playing with automating a computerized intake nurse, which people would have thought was not rote, but on some level, it's just asking a bunch of questions."

So Jeffrey's comments make three things very clear. Number one, there is an automation frontier in which people can test just what is and is not appropriate to automate.

Number two, he and Katya would disagree about what counts as rote work. And number three, the degradation of connective labor by the industrial model in which it is subject to the demands of audit culture, data analytics, etc., is intricately connected to its susceptibility to AI. Okay, so what's the hidden fallacy here?

So, while it seems clear that automation will generate jobs and industries that we cannot foresee, the "free up" language seems strikingly optimistic to me. It is surely possible that with the cost savings of automated human work, employers could keep the same number of employees and instead imbue human jobs with newly meaningful tasks.

But it seems likely that, barring state regulation or effective labor action, employers will cut the people that they can. So I observed at an experimental school in Silicon Valley, for example, where the kids learned from apps in the morning and then sat down one-on-one in the afternoon with caring, credentialed teachers who served as their advisors.

And there were a lot of adults in this school. It's an expensive private school, and it actually looked pretty great. But at the same time, you don't have to look far to imagine how a different district with more financial constraints might at best replace those advisors with professional feelers who come with less training and could command less pay, but at worst, just assume that the kids can learn from the apps without them.

So, indeed, journalists in the United States repeatedly report stories of public schools that rely on Khan Academy videos to teach math without the advisors to make it stick. But even if we were somehow to address the concerns about job insecurity for the laid off, a deeper problem would remain.

When an intake nurse's job is boring and repetitive, and when any connected labor they offer is scripted and performative, automation begins to look like an appealing alternative. But it is a false choice.

One generated by decisions made even earlier to overload practitioners, to shrink the time they have for their tasks, to script their work so they can take on more. Decisions that corrode the connective labor that they are charged with giving. There is another path, one that involves respecting the power of connective labor to create social goods that we value, like shared dignity and purpose and understanding.

But if we opt to script this work and to respond to workers' inevitable alienation with apps and AI, we cannot then be surprised by a depersonalization crisis. Each kind of better-than argument sketches a vision of what AI and automation have to offer.

as reluctant stand-ins, as kind of triumphant replacements, as willing partners to the human workers, and they also rely on contradictory notions of how standardized or rationalized the human work is. For better than nothing arguments, humans are overly rationalized, automatons themselves, kind of unable to act with discretion or mercy in the face of human need. Think

you know, those automated discharge nurses and the bad doctors they were replacing. For the other two arguments, humans are insufficiently rationalized, either their messy imperfections threaten vulnerable patients and students a la Better Than Humans, or they need machines to handle the thinking jobs while they are relegated to full-time feelers in the Better Together version. And each of these arguments also paves the way for the widespread adoption of machines in connective labor.

But this adoption takes place within an existing ecology of modern capitalism and modern bureaucracy, where human connective labor is squeezed by twin imperatives of profit and austerity, erecting a social architecture that constrains their witnessing.

Hiring a human is already a luxury, and marginalized populations currently receive connective labor that is scripted, surveilled, and sped up.

Given that existing ecology, better than nothing arguments hold sway in the public sphere, convincing lawmakers and administrators to adopt AI virtual preschools, which are happening in Utah, for example, and virtual nurses like we saw with Louise.

On the part of affluent users, on the other hand, while busy people talk a lot about preferring automated services to avoid having to waste time interacting with others, the convenience and status of having personal services delivered personally continues to have meaningful appeal.

With the trajectory of automation shaped and determined by these inequities, we are hurtling towards a future where they are amplified, one in which less-advantaged workers provide an artisanal form of connective labor in person for the wealthy while receiving automated services for themselves. We are at a critical juncture where the decisions we make

or fail to make will affect the trajectory of AI and connective labor. On the one hand, remember, this is a wondrous moment in which AI is transforming our world, leading to unprecedented advances in biology, linguistics, the list is long.

But thanks to the depersonalization crisis and the corrosion of connective labor that contributed to it, AI is also being deployed as an alternative to human witnessing in fields from therapy to teaching to medicine and the like.

Turning to tech replacements for socio-emotional work is likely to have serious consequences, including the radical shrinking of the connective labor workforce and the spread of fairly shallow, judgment-free services that prioritize information over relationship. In education, we might call the likely outcome CliffsNotes degrees. I don't know whether CliffsNotes is a UK phenomenon. It is.

Well, who needs it? Because you have AI. But anyway: the extreme stratification of human contact, with personal connective labor as a luxury, and the loss of the human-to-human bonds that underlie our civic life.

Engineers trying to solve these problems with AI are doing so because they focus on the individual patient, the individual worker, but by not talking about the implications for relationship, we also make it impossible to treat the depersonalization crisis, which is a malady of our social health that begs for human intervention.

So we know that connective labor has profound impact on people, but it also has serious problems, among them inequality of service, misrecognition, shame and vulnerability, and mind-numbing data reporting burdens that lead to widespread burnout.

People report that they find bots convenient, cheaper, less judgmental, and sometimes even warmer than humans, who reflect all the time constraints and efficiency pressures that Jenna lamented at her clinic.

Somehow, we have found ourselves in a particularly absurd moment in the industrial timeline when people are too busy for us while machines have all the time in the world. This is the heart of the depersonalization crisis and by not recognizing its roots in the spread of industrial logic in emotional work, we find ourselves looking to machines to solve problems created by excessive rationalization.

AI scribes in doctors' offices or following chaplains around would help relieve their data entry problem, for example, but so could being more careful about what data we require and from whom. AI agents working as intake staff could relieve Katya's anguish about having to pump people through, but so could giving practitioners like her more time to meet people in crisis.

In this freewheeling era of so little regulation, when the tech industry fights back every criticism with accusations of being against progress,

It can be difficult to differentiate between what is valuable and what is not in the field. But we can commend some uses of new technology while discouraging others. We can implement a connection criterion in which we evaluate technology by how much it replaces, impedes, or enables human relationships.

In my research, I found several examples of clinics and schools that relied on tech to be sure, but also that already had the social architecture, the resources, the visionary leadership, the culture that enabled them to prioritize recognition. Finally, even if machines were to do connective labor well, why would we want them to?

Among the myriad human activities that we might disrupt or free ourselves from, it is not clear to me why we want to mechanize the relationships that give life meaning. Seeing others is how we experience connection, forge community, even conduct democracy, and we automate it at our peril. Thank you.

Fantastic. Thank you so much, Alison. So much there to think about. And I'm sure people are going to have lots of questions. I just wanted to make a comment about the book. One thing that really struck me on reading the book is that Alison practices what she preaches. So in the framework of a lecture, we met a lot of people doing this work together

quite briefly, but I think in the book you present their lives with the care that you yourself are trying to write about, and we get to know a lot of the people and their troubles and their journeys through their lives and through their work, which I really appreciated.

I'm going to ask a couple of questions, if I may, just to kind of kick off while people are thinking of their questions. So the first one, I think, might be the obvious question you're expecting, so I thought why don't I just ask it. So...

There are a lot of dystopian narratives out there right now, and in some ways your book is not as dystopian as many. And I really like the way you capture the ambivalence of the people living this journey into automation. But maybe the overarching historical arc

seems to be one from community and care and being seen and understanding to a world of alienation and automation and overwork. Is that the narrative? And do you maybe hold some hope for the kinds of efforts of responsible tech and ethical tech and meaningful tech, perhaps?

I love that question. Do you want me to take them one by one? Yeah, go for it. The question about nostalgia:

Is there a perfect era when we all were seeing each other with tons of time? There are so many inequities of the olden days that were codified and institutionalized, so it's hard to look back on all of that with a lot of nostalgia. And also, I want to say that the yearning to be seen is actually not a universal biological need, and you heard me talk about the

disadvantaged kids who were trying to not be seen by the criminal justice system. Thank you. It's also true that it's historically variable and it has not always been this thing that people have been yearning for and it's really a kind of

We didn't even have a word for empathy before like 1910. So these are kind of cultural constructs that are pretty new in the overall scheme of things. That's not to say they don't exist or they're not true or they're not real, but they're variable and important.

you know, cultural construction of the moment. And I would say, actually, that they are intensifying at the same time as the intensification of

this kind of industrial model and the kind of efficiency pressures and so on, and they're happening at the same time and they're colliding. So I actually am not sure I'm looking back to an era in which we all saw each other, although when I talk to my doctors in particular, the doctors did talk that way. They'd say, you know, my dad is a GP and he's retiring because it's just too horrible. And

But of course the person who told me that was a woman and would she have been able to be one? There were other things going on to make that era not something necessarily to hold up. But yeah, just in terms of this question of did people have time to see each other, the doctors remember when they did. Oh, let me say one more thing about responsible tech. Yeah, because that's like maybe a solution. Yeah, yeah, yeah. I'm of two minds.

People keep asking me this. They keep saying, isn't there something we can look forward to? And I keep coming up with, well, maybe this one will work. And so for a while, I've been talking about this book for about a year, and for a while I was saying, AI scribes.

That sounds great, because I've been hearing for years how the doctors are so burnt out, because, as is the case in all of these occupations, the people who are supposed to forge the connection are themselves the very people who are also supposed to be inputting the data. And that just seems crazy. So basically, as data demands get bigger, they squeeze

the relationship into less and less time, or they try to. As the doctors would say to me, "See how I can do this? I can type and not look at what I'm typing. I've perfected this way of having eye contact." And it's just deeply problematic. So I was like, an AI scribe, perfect.

But I recently talked to a diabetes physician and he was telling me a lot of problems with the AI scribe. I won't list all of them for you, but for example, two problems.

With diabetes, he said, you really need to listen to the lifestyle questions, you know, like, who's buying your food and do you have time to exercise? The lifestyle questions really matter. And he says the AI ignores all of that entirely and only starts talking when it's like, what are your blood levels? And so the part that sounds to the AI like just chatter is,

may I say, the connective labor. It doesn't even pay any attention to that. And then the second problem is it's

very cold in the way it talks. It says, you know, "Joe Smith, a diabetic, his levels are out of control and he confesses to having too much sugar." It uses this very patient-hostile language, and he's a very collaborative, patient-centered doc. And he's like, you know, the AI, it's a light, but it's a fluorescent light. It's cold and harsh, and I want an incandescent bulb.

So yeah, AI scribes: maybe in the future, but right now, not working.

I'm actually going to turn it over now because I realize I could spend the next half hour asking you all my questions. But I'm going to, so everyone is welcome to ask questions. And so please put up your hands if you have a question. There's a couple of people with roving mics who will come to you. And please say your name and affiliation. I'm going to start in the middle there, if I may.

My name is Helen and I'm a visiting fellow at the Department of Media and Communications. Thank you. It was absolutely thrilling. And I have a question that kind of goes back to, I think, the very beginning of what you were talking about, and also the lower right part of that chart you gave, which is compassionate capitalism. Oh, yeah. Because

What you're talking about, I'm not really sure if your argument is towards undermining capitalism because it's profit-driven and this is where we are and we're in the middle of techno-solutionism for problems that we are unable to deal with because we're in a capitalist society. And I mean, this is also what Shoshana Zuboff has been saying. Or are you actually making a

a slightly different argument that, well, there is a way that we can actually meaningfully maybe collaborate with the opportunities, the affordances given by all these new technologies while kind of maintaining our, what I'm guessing, kind of community spirit, because you're talking about individualism a lot and alienation.

an age-old Marxist argument. So I'm wondering, like, where can we go next with there? Is it a big argument about abolishing capitalism as such, or maybe finding some other ways out of this rut that we're in? Thank you. Yeah, thank you for that question. Let's think about abolishing capitalism. Well, let's take a few more. Oh, God, more than that. Okay, so...

Okay, can I just ask Jason Amarhaji, no affiliation. Hi Jason. Good to see you. Former student. So my question is very similar. The book is, at least as you've described, it seems like it's from the American context, which...

you know, I would assume is sort of like the archetype of the most atomized, individualized society on earth. And this seems like a natural progression of the logic of that culture. You know, how do you see this playing out in other places where that isn't the case? And also, you know, where the value equation is different, right? Where the cost of labor is different, where there's a communitarian sort of, you know, already society in place that's very different than the United States.

Thank you so much for your speech, it was really interesting. My name is Joanna, I'm a social psychologist, a professor here, and I'm still considering what's the cause and what's the effect, and where shall we begin to introduce some changes?

The fact is that we are at this point in our society, Western society, where we have this epidemic of loneliness. And this is a fact, right? Also, we have, as you said, the need for recognition. A great body of research that you may or may not know is by Professor Arie Kruglanski, on the significance quest, where he brilliantly showed in many studies how the quest for significance can lead to terrorism.

So we need to be seen, otherwise we may do some dangerous things. So for many reasons we become lonely and we need this human contact. However, I'm reluctant to agree that professionals, practitioners, medical doctors are those who are supposed to

you know, cover this demand because... Are those supposed to what? Are supposed to cover this demand, because people lack contact, and I'm sure many doctors said to you that people come to talk to me not only because they feel ill but just because they want to talk to somebody. So we cannot really rely on professionals to give people what they actually need, which is this human touch, this human connection,

which is so hard to gain from society because relationships become so difficult. So my question would be where do we begin because we created this problem now so obviously people lack this human contact so shall we give it to them by technology or shall we try to fix, go back and fix the problem which probably technology also created

It's like the diabetes that you mentioned, right? So we know that some people develop diabetes because of their lifestyle, but should we deprive them of the medicine and tell them you should eat right or shall we do both? Sorry. Thank you very much. Thank you. OK, I'm going to-- can you-- Sure. I'm going to ask questions-- Kind of these are all to be the same question. Not exactly.

Thank you. I'm Chris Heathcote. I was a government and history student here 18 years ago, and I now work in UK Public Services, who I think are grappling with the use of AI. And so my question is,

Do you think that what you've described is happening is happening because humans have become as consumers too needy and rights-based and as employees they've become so expensive that service-based organizations are reaching for AI to cope with demand and to try and filter and triage it. And if we don't like the destination then how do we change those trends or manage those trends? Thank you.

Yes. Super interesting. People entitled. Yes, exactly. Okay. So I'm sure that this is what standardization does. It kind of

papers over important differences. But to me the first three questions are very similar. Sorry for your individual particularities, but I kind of feel like all of them are saying: what do we do now, where are we going now, what's the right solution now? And of course, the "what do we do now" comes from "what is the diagnosis." So all three of them, I think, are related to that, and I'll try to address that.

I guess, in terms of the way I work it out with capitalism:

Sure, I think it would solve a lot of our problems. I guess I kind of think both sides of what you've said. Yes, I would diagnose a major contributor to the things I've described as capitalism, and the versions that we are living with, especially in the United States. But I also kind of don't want to wait around for revolution, and I get tired of that being the call, because I'm all in favor of it, but I need something much more immediate and practical right this second. And I actually think that we, this group, and kind of modern Western society right now face an urgent problem, which is this kind of flood of products

coming our way unrestricted, and people are a little bit on their back heels. They don't want to be left behind, and they're anxious, worried about

not being able to take advantage of it the way they should. And so they're not critical enough, and I'm worried about it. With this book and with these talks and these conversations, I'm trying to give people the language, or a language (you can make up your own if you want to), a vocabulary with which to meet the flood of news coming our way. So that's kind of

what I would say to that. Okay, so Jason, great question about the cultural specificity. Yes, a lot of these examples are from the United States and sometimes it can seem like an entire other world, but I am here to say that you all can recognize some of the kind of components. It's not like you here in the UK are living outside of a place with a lot of industrial logic.

you can just take it as maybe a canary, a cautionary tale. Or you can say, is this the future coming unless we do something? I don't think they're that far away. I agree that a more collective community, a place with a much more substantial labor movement, could do some important things here. And that's one of the reasons why I'm talking about the book to people, to try and give them a means to focus their organizing.

Cause and effect. I love the idea of the quest for significance and the heroism. That's so interesting. And the question of where do we begin. I agree also with your overall point, which is that these professionals will never meet the demand, because, and this is actually a little bit related to the question from the person in the back,

When people say there's a mental health crisis, it's not because there's not enough therapists.

The crisis of mental health and community feeling and alienation, all of that comes not just from not being met well enough by the professionals like the people I interviewed, but also from the society that we're living in and how it's organizing our work and our families and our commuting and our screen time, et cetera. So these are massive, socially produced problems.

And these people are themselves feeling, especially in the public sector, like the finger in the dike. They're feeling like they're holding back the flood. So I agree overall that we want to address it as a social issue.

But again, so not again, I'll say. So for people who are out there thinking about like, what can we do? One of the things you can do in addition to being quite critical and attuned to what AI is doing to relationship, you can kind of choose human whenever you can.

So my husband used to be a big believer in self-checkout. And now he doesn't really do it. I mean, sometimes he does and he's like, ah! But he thinks about it. Choose human when you can. And that...

Actually, it's not just good for society, but it's actually good for you. And there's an entire industry of psychological work that demonstrates that to be the case, including a great scholar down in Sussex called Gillian Sandstrom. In the back, are consumers too needy? Are workers too expensive? The problem that I have is

I mean, yes, when you look from a very particular perspective, which is about kind of shrinking the state and looking for private market solutions. And I've been swimming in that pool in the United States for decades. And it's my view that this is the work that matters. This is the work that is worth doing.

So there's something that economists call Baumol's cost disease. It's a way they explain the fact that human labor has become so expensive. It's actually not more expensive than it used to be. It's just that everything else is so much cheaper because you can automate it. So like clothes are cheaper, cars are cheaper, you know, everything's cheaper. But what doesn't get cheaper are the human laborers because you can't like add

You know, you can't say, "Okay, teacher, you're going to teach 300 kids now instead of 30." It actually ruins the things that they do. So that's what they call Baumol's cost disease. And I was having a kind of semi-argument with an economist a couple of weeks ago about this, and I was like, "You know, I understand that that's an economic problem, and a problem for economists to solve, but I actually think that

This kind of connective labor, that's why we have cheap clothing and cars and electronics. That's why we want to save money in those other things, because this is what is worth paying for.

Yeah. Are the workers too expensive? I would say no. Thank you. I just want to check if there are some online questions. Yeah, so we have two questions. One from Stuart MacIver. Given mankind's historical record in effective management of technology, what, in your view, is the likelihood we will be able to manage AI at all in the near future?

Managing AI what? Managing AI. In the future. In the future. Yes. Okay. And the second question is: how can we bring attention to connective labor and have it valued in a present society focused on hard skills? So how can we really value and recognize those skills? Yeah, yeah, yeah. Totally get that. Okay. So the first question is,

Can we manage AI in the future? Like, there's a conversation happening in Silicon Valley about like, you know, kind of, what do they call it? You know, kind of the scary future of AI. The existential crisis. Yeah, the existential crisis. I am not worried about that at all. Like, I don't.

I don't really care about that question. And so if that's where the question is coming from, I actually don't have anything to say. Like, I kind of feel like that's a question that people who don't care about what I'm dealing with right now are asking. Like, they're like, oh, what kind of major threat can we kind of wonder about? And I'm like, no, no, no, no. Let's deal with the threat that we're dealing with right this second. Like that...

But I'm going to move that question, because maybe I'm being uncharitable, sorry. Maybe what they're asking is: is this something that we can manage? More like, is there a kind of positive vision? And there is a positive vision. Like, my sister uses ChatGPT all the time. And, you know, my brother-in-law, he's a diplomat in Mexico.

He says he uses it to role-play difficult conversations. So he manages, I forget how many people, 20 people or something like that, and he has to tell one of his subordinates something, and he kind of works it out, you know, kind of says, well, what if I said this? Or, you know, he works it out with ChatGPT, and then he's going to be better in that conversation.

A connective labor moment. So I feel like that's a moment of technology helping, and that's a kind of positive story. And the question of will

humans manage it? I don't know. This is, again, what I'm trying to do: spread this gospel so that more people get emboldened to kind of effect change and manage it in some way, encouraging it in some ways and discouraging it in others.

I'm not sure whether that's a good enough answer. Tell that person to email me and I'll work on it more. The second question was... Soft skills. Oh, yeah. How can we value the soft skills? Yes, I mean... Okay. I have this idea that there are three futures that we are all seeing right now, by the way. So they're not really futures, because I'm not really a futurist, but they are coming. One is...

AI, you know, the kind of triage version. That's, you know, AI takes care of the low-level stuff that is easy, and then the human is in the back taking care of the complex situation. You already see that when

the call center AI is taking care of you at the beginning and you're, like, screaming at the, you know, kind of airline, saying, "Agent, agent," trying to get to a human. That is a triage future that is coming. The doctors are like, that sounds good to me. You know, like, anyway, so that's one.

Another one is the inequality future. And that is also happening. That's where the low income get the bots and the affluent, advantaged people get human labor. That's happening right now.

The fastest growing occupations in the United States, when you look at the top 10, I forget how many, seven or eight of them have the word "personal" in them. So personal trainer, personal chef, personal investment counselor, personal... Those are all connective labor for rich people. So that's happening already right now. The third one is a binary model in which

machines do the thinking and humans do the feeling. And I used to think that, if I had to choose among those three, that sounds like the best one. The problem is that feeling. So in answer to the person's question, that's one where maybe soft skills would be valued, actually, because that's what is particularly human.

And, you know, we'll give the machines the analytics, the tech, the coding, whatever doesn't take in this connective labor, and the connective labor will be what humans bring to the table. And actually, that's behind what I say to students today. I say to students, do not major in STEM fields. Do not try and become a coder. That is what ChatGPT does well now.

The thing that you should be doing right now, for whatever time you have left, is becoming really good at reading and seeing the other around you, at human relations. You should be in tons of group work. I know you all hate it. You should be doing tons of that, you know. That's what you should be constantly doing, getting really good at social skills, because that's your value add. So in answer to that person, maybe that's the impact of...

Large language models is that all of a sudden soft skills become useful. And actually, one last story here. I met a radiologist. Radiology

is one of the specialties that's maybe going to be disappearing, because those are the people that analyze X-rays for other doctors. They don't do any connective labor at all. It's all analyzing the X-rays and reporting the results to doctors. And so I talked to one while I was doing this research, and he was like, yeah, we've got to get more patient-facing. Mm-hmm.

So he's like, we've got to figure out how to be in charge of giving the results straight to the patient, trying to figure out a way in which their job will not be erased by this technology. And I was kind of interested in that, because radiologists make three times what primary care physicians make. Primary care physicians in the United States make about $150,000.

And radiologists make three times that on average. And they don't do any connective labor, and they're going to be all automated. And so I was like, and the primary care physicians are the ones who own patient time. Like, they're the people who would have to let the radiologist in the room. So I was like, huh, maybe they would get a little more pay equity there, you know. Anyway, so soft skills. There's going to be a reshuffling of soft skills. And I recommend students...

Provided employers are going to value it, which is kind of almost part of your story as well. Yes, they're trying not to with this, but I think maybe they'll have to. We'll see. We have just like a few minutes left and I'm wondering, yes, you see, okay, we're going to do a really quick fire round because I see three hands. Okay, I'm going to try and be really quick.

And then we can all go out and take as long as you like. Yeah. And talk to Alison and buy her book and have a drink. Yeah. Shani. Shani, I'm from the Department of Media and Communications here at LSE. Thank you, Alison. That was really fantastic and fascinating, and a really nice photo on your Tumbleweed Society. Oh, thank you.

So I wanted to ask you if you can reflect briefly on COVID, given the timing of your project, but also COVID in relation to the moment you're describing. And particularly, clearly, it's made health systems, education systems and so on overburdened. But I'm struck by a slight paradox that after COVID, we all heard about how we're all yearning for...

the human connection. So where has that gone? Yeah, I love that question. Thanks. Thank you. And a question here in the second row. Thank you very much, Alison. Abdul from UCL. So much to unpack from your talk. And I feel like one of the central themes, obviously, is human connection. And I was curious to hear your thoughts on the supply of human connection, because we've spoken a lot about the demand for it. It's

highly critical for anyone's quality of life. But I'm curious to get your thoughts on how easy it is to supply, because it really depends on the complex nuances between two people's communication styles, and forging that connection is something very difficult.

And it would seem that perhaps part of the appeal of sort of artificial agents is that that sort of flattens the expected value of the quality of the connection you get there. It's less variable. In actuality, it's personalized towards your own communication style. And then when you obviously apply that into an industrial context through connective labor, the profile becomes even more complex. Particularly when we consider that systems are not monolith. There are various different

types of systems; they're designed in a variety of different grades of quality and also used in very different ways as well. And so it appears that there's a balance there, a very unstable balance that we need to strike to extract maximum value. And I'm curious as to your thoughts around how we might extract that value in a maximal way. That's interesting. Thank you. And a question, I think, in the fourth row.

My name is Elizabeth, I'm an industrial engineer. Thanks for your time and your presentation. In relation to your research and interview, specifically during the observation process, did you identify whether there are ethical regulations for the use of AI or the use of machines during the interaction with patients?

And do you think the companies, professionals, or society are prepared for these transitions through the use of AI in patient and customer care? I didn't quite hear that. I think you're saying, are there ethical issues and are they handling it? Are they addressing it? Through regulations. Through regulations? Ethical regulations. Okay, thank you.

Yes, I think we're going to call it a day because we have like about three minutes left. Okay, three minutes. It's okay, go breathe, breathe, breathe. COVID, great question, totally true. I'll tell you, I was researching this before, during and after. So, I mean, mostly done before, but with some after. And what I noticed is the variable volume of the messages from Silicon Valley in particular, because before...

engineers were telling me all this kind of "better than nothing" language. There was all this "better than nothing" stuff.

But then it was silenced, not just among engineers, but also in advertising and in punditry. You know, these messages had been coming, and then during COVID it went silent. And I think it was because so many people were meeting the kind of deadening feeling of not having in-person interaction. So they just were like, this is not going to be a friendly audience. And they stopped. Yeah.

But then essentially ChatGPT changed the conversation. And this is my answer to you, to where has that gone, you said: I think it was replaced by panic. But that's my perception. When I get questions from people, they're worried about being left behind. So they're not open to, you know, "this is not

relevant or good for us today." That's not what they can hear. They want to hear much more like, is there a positive story of technology? Isn't there a way we can be friendly with them? Can't we have a little bit of it? You know, they're anxious. And that's where it's gone. It's moved the starting line of the debate, I would say. Okay, number two.

Yeah, super interesting question. You had me more with the part about supply. When you started talking about extracting, I started to feel a little more like, hmm. But I do think the supply of connective labor and seeing the other is an interesting question. I actually think the answer, or the supply, the story there, is more positive than you think.

Because in my experience in doing this research and observing people, you know, et cetera, you don't have to be that good at it. Actually,

there should be a term, "good enough" connective labor. That's the standard, because actually it's a give and take. And so, for instance, in-depth interviewing, which I used for this research, is a form of connective labor. I'm sitting there going, "What is your truth? It sounds like you're saying this. Is that about right?" And they're like, "No, no, no, no. It's more like this." And they're correcting me. It's not like I say it and it's wrong and so it's all over.

It's instead, I say it, it's wrong, they're like, "No, no, it's kind of like this." I adjust and actually the adjusting makes them feel more seen. And therapists actually know this. They call this a therapeutic rupture and they talk about redemption or the kind of therapeutic redemption. Like if you can correct it, it actually forges more of a relationship. So you don't have to be perfect. And it's like give and take. All you have to do is listen, kind of. So it's not like the perfect standard.

So, to my mind, the ability of practitioners to handle variability in their own abilities, and in the complexity of kind of mismatched communication styles, et cetera, is less daunting, or less of a fatal flaw, than perhaps you think. I can talk more about that later. Okay, number three, ethical issues. I don't actually...

Like, I would say no, there's certainly not enough regulation, because nobody's regulating anything, at least in the United States. There is some regulation here and in Europe. But I do think maybe I should end here with a more positive story, because I actually do think it's coming. People are starting. People are waking up.

We moved from ignorance to panic very quickly. But I actually think there's a light drumbeat coming of people going, "How can we make this better? Hold on a second." And that is what I'm feeling. So I had a high-level government official contact me by email today

saying, you know, I'm interested in what your work can do in terms of helping us get a handle on these issues. And that's novel. That's amazing and great.

So that's more positive. The answer is not a lot of regulation now, but I do think the day is coming, especially if all of you are now part of the army of true believers in connective labor. Thank you. I know there are lots more questions in the room, and we do have an opportunity, as I said, outside to pursue some of them. But I think it's testament to an excellent talk that there were so many questions.

In the Department of Media and Communications we have more public events coming up in the coming months, so do please keep an eye on the website and on the LSE Events website. But for now, let me give a very warm round of thanks to Alison. Thank you. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE Events soon.