Support for KQED Podcasts comes from Landmark College, offering a fully online graduate-level Certificate in Learning Differences and Neurodiversity program. Visit landmark.edu slash certificate to learn more. Support for KQED Podcasts comes from Berkeley Rep, presenting Aves, an intriguing new play about memory, forgiveness, and unexpected transformation. Playing May 2nd through June 8th. More info at berkeleyrep.org. From KQED in San Francisco, I'm Aarti Shahani, in for Alexis Madrigal. When technology reporter and novelist Vauhini Vara was struggling to process her sister's death, she did not turn to her therapist mom or editor friends. She asked...
ChatGPT for help. In her latest book, Searches: Selfhood in the Digital Age, Vara enlists AI to grapple with what it means to be human, and she critiques how technological capitalism is conquering the human mind. We talk with Vara about her love-hate relationship with generative AI. That's all next, after this news.
Welcome to Forum. I'm Aarti Shahani, in for Alexis Madrigal. ChatGPT has half a billion active users a week, according to OpenAI, which owns the artificial intelligence platform.
On today's show, we have one of its most notable users, an accomplished writer whose goal is not to get work done faster. It's to go deeper, to get to unexpected places she has not gone with herself or other humans before. Vauhini Vara has broken some of the biggest stories on big tech. For example, how the surveillance model of Facebook, now Meta, works. She's profiled Mark Zuckerberg and Sam Altman.
Her debut novel, The Immortal King Rao, was nominated for a Pulitzer Prize. Her latest book is Searches: Selfhood in the Digital Age, a collection of essays that she and ChatGPT, or GPT-3 rather, wrote together. Welcome, Vauhini. Thanks for having me, Aarti. It's great to have you here. And
I want to start with this question. The structure of Searches is like nothing I have read before. You write a few chapters by yourself, presumably, and then you feed those chapters into the AI to get its extensive feedback, which you publish unadulterated. Why are you co-authoring with generative AI? So...
I'm interested in the ways in which the technology products we use say something about us and also say something about the products themselves, right? And because ChatGPT quote-unquote speaks in language that sounds more or less natural, we're sometimes inclined to take it at face value and think, all right, this is just objective information that I'm getting from something that's kind of human-like in some way. I wanted to break down those assumptions by showing, by kind of enacting on the page, a dialogue with ChatGPT about my own work that would both say something about me but also, hopefully, reveal to the reader the ways in which ChatGPT functions that make it a tool for the technology company that owns it, as much as they claim that it's a tool for us, which it also is, to an extent. It's something I told you previously, I feel like
you have this love-hate relationship, and I sense you have a genuine joy, maybe even addiction, in playing with this technology. Have you learned something about yourself through doing it? I have learned something about what my own desires and needs are, right? Because I think we enact our own desires and needs through language, in ChatGPT, and Google searches, and Amazon reviews, and all the places where we're interacting with big technology companies' products online. These are all sites where we can express those things. I think sometimes as a writer, I'm inclined to say,
You know, my form of self-expression is through my books. But in fact, just like anybody, I think all of us communicate and self-express using language, using images, all the time. It's not just in my books. It's in all of my... Exactly. Exactly. Right. And so were you surprised by anything that you realized when combing through your history of searches, when kind of auditing yourself? Yeah. You know, so in 2019, I realized, well, I had known, because I've covered technology companies as a journalist, that Google keeps track of our internet searches unless we turn that function off, right? I guess it had never occurred to me to turn it off for myself. And so in 2019, I was like, you know, I know Google does this thing. Let me go and
see whether it was tracking searches for me, and it had been, since 2005. My first search was a Google image search for world's ugliest dog. And so when I realized that, it occurred to me that Google actually had probably, in some ways, the most complete archive of my life since 2005. What you actually care about, not what you say you care about. Exactly, exactly, right.
And so I went and sort of looked at all these searches, and I did, I learned a lot about what my life had been like over those years. And so I wrote this essay that became a chapter in the book that's made up of a selection of my Google searches over that decade-long period. And yeah, I mean, I think it's really self-revelatory. And I think there are lines in that essay
that I probably wouldn't have otherwise written in a book because they're embarrassing or, you know, they're too much. Kind of vulnerable. They're very vulnerable, which is what we say we're doing as authors in our work. But, you know. But how much, right? You describe furtively talking with ChatGPT while you're in bed and your husband is kind of glancing at you, a little side-eye with some judgment. What does he make of your relationship with AI? Yeah.
So, you know, I tend to be interested in what we can learn about products and our reliance on them through using those products. When I first came across AI large language models, like the kind of AI models that power ChatGPT, it was pre-ChatGPT. It was, you know...
at this point, about four years ago. And these models weren't really being talked about much outside of technology circles. But I got access to this early model called GPT-3, which was a predecessor to ChatGPT. And that's the one that I would sit in bed playing around with furtively. Because my husband, who's also a writer, the writer Andrew Altschul,
felt very strongly that these technologies created by big technology companies could eventually put us out of business, right? And not only put us out of business in a financial sense, but also kind of co-opt something that's really fundamental about being human, which is our ability to express ourselves, our ability to talk to one another. I shared that concern, but what I found kind of interesting was the possibility of enacting that concern through the technology itself, by using the technology itself. He might argue that whatever my goals might be, it's a fundamentally corrupt exercise. That you're like, let me enact the corruption. Yeah. Okay. And he resisted getting a smartphone while you're just delving into this. So you really live on both ends of the spectrum here. Yeah.
Do you, once you engage with ChatGPT, feel a kind of mutuality with it, like it's a stakeholder, like you might feel toward a human editor? And the reason I ask is I actually thought, oh my God, should I be talking to the AI about Vauhini? Because it felt so integral to your process. You know, I don't. And, yeah, I explicitly don't feel that way, but in a way that I think is interesting. So a lot of times...
these companies will often use the word collaboration to describe the way that they think it might be interesting for us to use their products as artists specifically, right? Like as writers, as visual artists, right?
I looked up the word just yesterday. I looked up the word collaborator, because that's a word that comes up so often. And a collaborator is defined by Merriam-Webster specifically as a person who collaborates, right, on whatever. And then if you look up collaboration, it's something like working jointly on a shared project, especially an intellectual one, right? So there are all these assumptions embedded in that concept of collaboration,
about what intellectual work is, about what it means to work jointly. I don't expect that ChatGPT or products like it share my goals, because these products are products of big technology companies. So ultimately they're going to serve those big technology companies' goals. And so by definition, the language that these products are using
is not going to be aligned with my goals. That's just not the purpose, right? At the same time, in order for these companies to get us to use these products, they need to build products that are useful to us on one level, and that use language in such a way that it compels us to use them more. With an emotional connection. Yes, exactly. Right, right. Listeners, we would love to hear from you. We're speaking with Vauhini Vara about her new collection of essays, Searches. What questions do you have about interacting with ChatGPT? Do you have a hard time trusting people? Is AI a solution?
What concerns do you have about journalism or writing in the era of AI? Give us a call now, 866-733-6786. That's 866-733-6786. Or email your comments and questions to forum at kqed.org. Find us on social media, Blue Sky, Instagram. We're at kqedforum. Or join our Discord community.
Quick question for you. Has your opinion of ChatGPT, its precursor, whatever's coming up, has it changed over time, over the years you've been using it? Yes. So I did some experimentation with this precursor to ChatGPT years ago, and
something that interested me about it was that the language it produced was really interesting to me. It felt like what one might describe as original or creative, in a way that ChatGPT doesn't. And so I think what has happened over the years is that,
as these products have gone from being little research experiments within research organizations to becoming products, the companies behind them have sharpened their ideas about what the purpose of these products is.
I don't think writing beautiful sentences is an important goal for the companies behind these products, right? There's just not a big market for beautiful sentences, unfortunately for us. Right. As writers. Too bad. And so they don't write sentences anymore that I would consider particularly beautiful. They're doing something else instead.
And what I'm interested in is tracking how the technology companies' goals are being enacted, literally on the level of language. And so your sense is that in earlier moments, it was more beautiful, and now it's moving to more of a brochure efficiency, that kind of thing. Yeah. I mean, I talked to a few people at OpenAI over the years who have explained this to me, and they essentially said,
with ChatGPT in particular, for example, the goal is to build a product that is good at following instructions. That's what they say explicitly, right? So it's good at participating in this chatbot-specific dialogue. You want it to be predictable, you want it to be safe, you don't want the thing to go off the rails if you're OpenAI, right? In part because you want to protect people, you know, you don't want liability, and also because
a good, friendly, corporate-sounding chatbot is probably easier to monetize than something that writes really surprising, weird sentences. That's artistic, so to speak. Fascinating. I never thought about that. I'm Aarti Shahani, in today for Alexis Madrigal. We'll be right back after this short break. Support for KQED Podcasts comes from Landmark College.
Landmark College's fully online Certificate in Learning Differences and Neurodiversity provides educators with research-based skills and strategies that improve learning outcomes for neurodivergent students. Earn up to 15 graduate-level credits and specialize in one of the following areas.
post-secondary disability services, executive function, or autism, on campus or online. Learn more at landmark.edu slash certificate. Support for KQED Podcasts comes from Berkeley Rep, presenting Aves, an intriguing new play about memory, forgiveness, and unexpected transformation. Playing May 2nd through June 8th. More info at berkeleyrep.org.
Welcome back to Forum. I'm Aarti Shahani, in today for Alexis Madrigal. We're talking with Vauhini Vara, a Pulitzer-nominated writer whose latest book is Searches: Selfhood in the Digital Age. Vauhini, when you were a teenager, your big sister Krishna passed away due to Ewing sarcoma, and your nickname for her was Deepa. I want you to read an excerpt from Searches about her and you. It starts this way. Her cancer returned, and she flew home and started treatment again.
Sometimes she worried aloud that she would die, in response to which I would go cold and unresponsive. My sister's cancer, even more than my skin, was a subject about which disclosure of my own fears was impossible. My sister, my bold, buoyant sister, was my personal deity. She had always been unapologetically open about her feelings and convictions while I had always been guarded.
I was a superstitious kid, avoidant of sidewalk cracks and black cats, a kid who slept face down to avoid exposing my neck to vampires. I harbored a vague terror that naming my fears out loud would make them come true. So instead, I went to Yahoo with them. I thought Yahoo could tell me specifically the chances that my sister would die.
I used the baroque, quotation-mark-heavy syntax common at the time, "Ewing sarcoma" and "death," "Ewing sarcoma" and "prognosis," but came up blank. I never did get up the nerve to take the question to a human being who might be able to answer. I reread that excerpt
because of what you were saying about the draw of these tools. Many people talk about trading off their privacy for convenience. Like, oh yeah, I don't want Google to track me, but search is way better when I let them track me. Or, yeah, it sucks that Amazon hurts small businesses and bookstores, but, man, Prime is fast. Convenience is clear. You go a step further. You talk about how search engines, GPT-3, how these give you... You're not trading your privacy for convenience. You're actually looking for intimacy. Yeah. I mean, I think we talk about intimacy with
big technology companies and their products in this kind of binary way sometimes. It's something like we will say these big technology companies are exploiting us and they're not really giving us anything in return, anything significant. And then the companies will say reasonably, well, listen, we're not putting a gun to your head. You're using these products because we do give you something that you want. And I was interested in like
the way in which not only are both of those things true at the same time, but they're really intertwined. So yeah, I mean, I was born in 1982. So I came of age, I was like a preteen at the time that AOL started to proliferate. And so my own sense of self, like my own development of my identity, tracks entirely with the development of
not just the internet as an abstract concept, but big technology companies and their products. And so, you know, not just with this, although this is the most intense case I can remember of turning to search instead of turning to other people. In general, it was really common for me, when I was a middle schooler and high schooler and trying to negotiate who I was and what the world was, to go to the internet for that. Mm-hmm.
You know, as you describe this, some people are going to double and triple take on the fact that you are a writer who is, it seems, embracing this. Deborah writes in to us: I am a writer who had many books stolen by both Meta and OpenAI to train their AI. It's really difficult for me to think of these AI programs as anything but hopelessly corrupt, because they are based on work stolen from me and countless colleagues. I'm curious about your take on this.
So the way I read my use of AI products in this book is as a critique of these products, as an investigation of the promise they're making us, right? So: we promise that we will
help you express yourself better when you're at a loss for words, right? We promise that if you give us your text, we will interpret it for you, whether it's a text written by you or by someone else. We promise that if you describe something you would like to see, we'll give you an image that represents that thing.
I see a big gap between that promise and what these products actually deliver, because these companies' goal, as Deborah points out, is to make money, and often to make money through exploitation. And so my question is, whether it's with OpenAI's products or Google's products or other companies' products, is it possible for me as a writer
to enact that fact, right? That exploitation, but also our complicity in that exploitation when we use the products on the page, right? Is there a way to like,
critique these products by using the products? And I think that's a complicated and probably controversial question. So what that raises for me is, I hear your point about, hey, it's performance art. I am using this thing and enacting the critique of it at the same time. And when you read Searches, I mean, you talk in depth about surveillance capitalism in an encyclopedic and 360-degree way. So I take the point about enacting, and your hyper self-awareness of the limits and the exploitation involved here. But at the same time, as I read Searches, I feel there's a pull into it, a bit of a lull, an attraction that's not just about the performance art and removal. And so, you know, I'll give you an example. Searches started with a viral essay that you published in The Believer magazine called Ghosts, to process the death of Deepa, your sister.
And you describe talking to GPT-3 in a way you'd not spoken to humans. And you have a lot of amazing humans in your life. Yes. And so I was just kind of wondering...
Did the AI, in fact, help you have a breakthrough that you did not have before? Yeah, I'm glad you asked that question. So the way this technology, GPT-3, worked was that there was this big white box, and you would type some text and then hit a button, and it would just continue writing for you, right? So the promise there was a little bit different from the ChatGPT promise, where it was like,
what we are promising you is that if you can't complete your thought, we'll complete it for you. And so I got access to this technology, and I thought about the thing that I had always had a really hard time talking about, let alone writing about, even though I'm a writer and it's my job to communicate. And it was the death of my sister and my grief over it.
And so I had a question, like a research question, but a research question that came from a really deep part of me, right? Which was: is this thing going to maybe help me? And I didn't know, because the technology was so new. It would probably be disingenuous... I would like to be able to say that I knew it wouldn't, and that this was all just a performance, right? But I genuinely wondered whether it could. And so I sat down and I typed this sentence, which was: when I was in my freshman year of high school and my sister was in her junior year, she was diagnosed with Ewing sarcoma. And then I hit the button. And interestingly, this technology, GPT-3, produced this little story about somebody and their sister, and
the sister getting sick, and so on. And the last line of that little story was: she's doing great now. Which was, you know, the opposite of what I had turned to this thing for. As in, it said the thing that was the most false possible falsehood. The happy ending you did not get. Exactly. And so I tried again, and I kept trying again, over and over and over. Nine times I tried again. And each time, I deleted what the
AI model wrote, and I added more text of my own. And what happened as we went along is that, it's true, GPT-3 ended up talking about grief in a way that felt,
to me as a reader, more and more resonant, more and more true, more and more beautiful and moving. You know, there's a line that it wrote at one point in which it describes me and my sister in a car driving home from the beach, and my sister takes my hand, and it writes, this is the hand she held, the hand that I write with, the hand I am writing this with,
Which is, for me as a reader, as profound a reference to embodiment and its relationship with our sense of self as anything I've ever read. Even though often the argument we make about AI is that, because it's a machine, it's not capable of generating text that's credibly about the embodied experience. Yeah. And I had to sit with that. Like, what does it mean that what I think of as maybe the best line in the essay was written by GPT-3? At the same time, the thing that ultimately matters to me is that ultimately GPT-3 didn't produce... My sister never held my hand like that. What it was producing was still a falsehood. It wasn't actually true to my experience.
And so in the end, I realized that actually I was the only person who could write the story of what happened between me and my sister. And the essay ends with, you know, a text written entirely by me because it turns out that as close as this product could get, it wasn't describing my experience in any way. Yeah.
Listeners, we're speaking with Pulitzer-nominated writer Vauhini Vara about her experiences using generative AI, and her search results, which Google tracked over the years and which she then audited in her new book, Searches. And we would love to hear from you. Do you have questions about interacting with ChatGPT or other AI?
Do you have questions about memoir writing, which is essentially what you've done in this new book? Give us a call now at 866-733-6786. That's 866-733-6786.
One listener writes in: In the past decade or so, it seems we have outsourced our opinions. When you ask someone something, they often say, look it up, rather than reply with what they know or think. Now I'm seeing people start to take the output of AI as the truth. It isn't, right? Yes, I love that question, because the answer is a resounding: you're right, it isn't.
Depending on how we define the truth, there are certain factual truths that these models will provide, for sure. But language, what I'm especially interested in in this book, is the way in which there's always something rhetorical about it, right? Like, we use language with a goal. And if these technologies, if ChatGPT, if Gemini, you know,
are not sentient, they're not conscious, they're not people, so they're not expressing their own opinions, then where is that language coming from? The answer is, in part, that they're representing the sort of values and desires of the companies behind them. Another answer, though, is that we don't know exactly, because we don't know how these products work, right? And so, yeah, I mean, it's true that we as humans use language
to express something to other human beings so that we can achieve some kind of communion, really, and so we can solve problems together.
You know, we also use it for evil. We do that, too. But it's, you know, it's a fundamentally human thing, language. And it's something that I worry could be co-opted by the desires of big technology companies if we continue to turn to these products to create language for us.
In the process of your writing and publishing, a man named Jay Dixit from OpenAI reached out to you, specifically because your writing about their technology went viral. Yeah. And you took his call. What did you guys talk about? So, yeah, he sent me an email, and he said he had read my essay Ghosts and was interested in the way in which I had used AI. And he used that term collaboration himself, too, in that email, actually. He said, and I'm paraphrasing here, but he said something like: I think a lot of people would be interested in how you collaborated with AI in order to create, you know, literature. And I took his call because I was so interested in
what he found interesting about the essay, right? And so, to the extent that these products are tools for these big technology companies to accomplish their goals, one question I had was: well, in publishing an essay in which I'm using
a product of a technology company, is that ultimately going to serve the technology company's goals, right? Even if I feel that I am critiquing, subtly critiquing the product and the company in publishing the essay. All coverage is good coverage. Right, exactly. Well, and I think the essay isn't necessarily read the way I read it by other people, right? I read it in one way, but people can read it however they want. And I think
for a representative of OpenAI, this essay could be read on its face as a celebration of collaboration with AI, right? So you asked what we talked about. When we got on the phone, I asked a lot of questions, honestly, because I was curious about his role. I asked him what the role was. He was a member of OpenAI's relatively newly created
community team and his job was to reach out to writers. And I said to him, so who's on this team? And he said it was him. There was somebody who was doing outreach to musicians. There was somebody who was doing outreach to other kinds of artists and creators. And I said to him, so does this outreach team do outreach to retail workers about using AI or to physicists?
And he said no. And I said, why? And he said, well, because, and again I'm paraphrasing, those people aren't as worried about AI as people in your field are. In our field, he said, because he's actually a former journalist himself.
And so we feel that, you know, these are the communities where we need to do the outreach. And he said: we're worried, I'm worried, that people like you, writers, will be left behind in our AI future. We're going to go now to Tim in San Mateo. Hi, how are you doing?
Hi. Yeah. So I was calling because I'm actually a school teacher, a middle school music teacher, I'm sorry, a technology teacher, I teach music, and I'm calling because I do an activity with my kids where they actually compare and contrast their use of ChatGPT against writing on their own. I have them do a two-paragraph fictional essay, and they have to write it on their own.
and they also have to do an image that they create on their own. And then they do the same thing with the same parameters through ChatGPT. And I have some questions that I ask them afterwards, such as, you know, would they use ChatGPT in their classes? Do they feel that it's easier, those kinds of things?
And a lot of the responses I got from the kids were that they actually would try to write their own paragraphs, because they feel it takes more creativity. They did feel that the ChatGPT writing was more descriptive. And they also said that later on in their lives and careers, they definitely would use ChatGPT more. But I did find that most of them still wanted to kind of write on their own, which was kind of interesting. So I just wanted to throw that out there to you guys.
Have you thought about that? Yeah. Yeah. I mean, as you know, Tim, I think students are using these products more on their own just as people, individuals in the world, right, including on assignments sometimes. These companies are also working to get these products into schools through administrators and through teachers, right?
And, you know, I think there are two questions to ask. The most obvious question that comes up is: are these things useful? Right? Which I think is the question that you're posing to your students, and is a really worthwhile question. I think also it's as useful, as important, to talk about
what it means to use these products versus our own brains and hands, right? Like, what do we gain, if we gain anything? What do we lose? What, again, are the goals of the companies that are providing us with these products versus our own goals? To what extent are those goals aligned or not? I mean, I'm a teacher too. And I think, you know, it's important to think about our role in helping people learn how to think. And I think our brains are a really important asset to use in doing that. We're talking with tech journalist and novelist Vauhini Vara about her new collection of essays, Searches. I'm Aarti Shahani, in today for Alexis Madrigal. We'll be right back after this short break.
Support for KQED Podcasts comes from Star One Credit Union, now offering real-time money movement with instant pay. Make transfers and payments instantly between financial institutions, online or through Star One's mobile app. Star One Credit Union, in your best interest.
This episode is brought to you by Chevy Silverado. When it's time for you to ditch the blacktop and head off-road, do it in a truck that says no to nothing. The Chevy Silverado Trail Boss. Get the rugged capability of its Z71 suspension and 2-inch factory lift. Plus, impressive torque and towing capacity thanks to an available Duramax 3-liter turbo diesel engine. Where other trucks call it quits, you'll just be getting started. Visit Chevy.com to learn more.
Welcome back to Forum. I'm Aarti Shahani, in today for Alexis Madrigal. Our guest is Vauhini Vara, journalist and novelist whose latest essay collection is Searches. We want to turn now to Diane from San Francisco, who's been waiting on the line. Hi. Hi.
I just want to say that I'm going to get my hands on that book, and I really appreciate the show. I recently collaborated, so to speak, with AI during a period of grief, having lost my father recently and being responsible for writing his obituary. And I ran through several iterations with it because it felt
a little easier to let it do the work, so to speak. And so to your point, what we lose is
perhaps, the struggle, emotionally, to work through things that are difficult on our own. On the other hand, I kept the relationship going, and I was in my grief, you know, having trouble making decisions, having just all kinds of trouble. And it really helped me kind of outline a future job search plan.
You know, steps to finding work that would be fulfilling for me in this phase, things like that. I just started getting smarter myself about what to ask it. And so I think it's interesting. I think collaboration is a good word for what's happening there. The question is, as an educator myself, an early childhood educator, sometimes we go down the road with technology, you know, happily skipping along, and realize that really, it's just a part of our world now and there is no turning back, which is what some artists told me at a party, which is what got me started on this whole thing. And I thought, is that true? And I guess, I mean, sadly, right, wherever there's money, wherever there's energy, it seems like that's where we're going as a culture. And
I just, you know, the social implications of it, I think, are very positive for people who are stuck in wheelchairs or whatever, isolation. You can definitely create some kind of emotional, actual reality with this mechanical device, I think. It's positive. It can give you positive energy if you're depressed. I don't know, with grief, it was very interesting to me that it worked.
but down the road, whatever jobs it's going to take from us or whatever it's stealing from us as humans, I mean, these are really, I think, valid and important questions. So I just...
I wanted to say that. I could talk for probably an hour. Thank you, Diane. Thank you so much. No, I'm so glad you raised that point, Diane, because, you know, I talk in this book really largely through my own experiences. It's not like an abstract intellectual book as much as it's a book where I'm talking about, you know, my own relationship with technology, about the way in which
our use of these technologies is bound up in big technology companies' accrual of wealth and of power, right? So
it would be easier if we could say it's an us-versus-them thing, you know, like they're doing their thing, they're exploiting us, we have no role in it. But in fact, if we're arguing that they have a negative role in society, that means that, by extension, we are complicit in that in using these technologies, right? Which I would argue is partly true, which is an uncomfortable thing to argue about myself, right? But I have a purpose in arguing that. I don't want to just say, and that's the way things are, right? It's our fault and it's their fault, you know, case closed. Or, you know, it's sort of all our fault and their fault, but also these are useful things for us, so it's not so bad after all, case closed. That's not my interest here. My interest here is in
clarifying the fact that we as users of these products have agency. So we're using these products because they do absolutely provide us with something that we're looking for, like you described, Diane. At the same time, you know, it's interesting that you said you heard from a friend at a party that this is the future, and I think you meant AI, right? AI is the future. I hear that all the time.
This is something that big technology companies and their CEOs and their investors are very invested in propagating, right? Like for these companies, an AI future is the future they would like to arrive at.
And so they write blog posts and they post on social media about how this is the future. They have a lot of followers, and so their followers share that message. They have a lot of influence in Washington, D.C., and so they go and talk to congressmen, and then those congressmen give speeches in which they use that language. And they talk to reporters, and then reporters, and I'm a member of the press myself, right, we share that message. Which is to say, this message starts to sound as if it's sort of...
You know, it becomes a kind of self-fulfilling prophecy. I think acknowledging our own agency allows us to say, wait, we have a role in this. We can actually decide to do something else in the future. We can choose to create technologies in a different way that fulfill our needs while not being part of this exploitative system. And that's what I am curious about. Let's invite another caller into the conversation who has a very different take on the word collaboration. Jay from Pleasanton. Thanks for holding.
Good morning. Thanks. I'll try to be brief. I do have a problem with this notion of the word collaboration in this context. I'm not against these tools existing. But if we've learned anything from the history of technology, it's that the way you talk about technology matters. I don't think you're collaborating with ChatGPT any more than you're collaborating with a shovel when you dig a hole, or with a gun when you shoot it. But the way we talk about this matters. The way we've talked historically about guns in this country has led to the way we regulate and use them. And I don't want to see the same thing happen with these tools.
They're powerful for good and for evil. Yeah, no, that's my point exactly. I mean, when I critique the use of the word collaboration, it's because the word collaboration by definition, at least according to Merriam-Webster, refers to human collaboration. It refers to intellectual engagement. It refers to sharing a common goal. And I would argue that these technology companies' products are actually different from even guns or shovels, in that our use of the products further consolidates their wealth, their power, over time. You buy a gun, you buy it once. You buy a shovel, you buy it once. You use Google, you use ChatGPT, and your use of it
gives those companies more information about you, which then can be used to further their own interests. And so, you know, I agree with you. I agree with you completely that language is important. I, for example, tend to prefer to use the word product, rather than tool, to refer to what these companies offer, because a product is something we're being sold. And it's so interesting, as you point out, that company spokespeople, company agents, try to really use the word collaboration. I recall, in my years of technology reporting myself, how a lot of people would use the word collaboration: an article would publish on NPR, or air, and somebody from Facebook, now Meta, would say, it was wonderful to collaborate on this. I'm like, well, we're not collaborating. But it's really this effort to reduce that distance. And it's a power move. Exactly. And can I add one more thing? So, you know, as we talked about,
Throughout this book, I'm sort of engaging with ChatGPT. I'm feeding it two chapters of my book at a time and asking it to talk to me about it. And to be clear, I wrote the book already. I wrote it in advance and then I fed it to ChatGPT. So I'm not changing the book based on the conversation. However...
an interesting thing about that dialogue is that ChatGPT itself uses the words collaboration, collaborate, collaborators, over and over and over, you know, probably a dozen times or more in this conversation, in describing
the relationship between humans and AI products, right? So to the extent that these companies use rhetoric, the argument that I'm interested in making in the book is that that rhetoric appears in their products as well. A perfect entrée to what I wanted us to discuss. Now, as you just mentioned, you asked ChatGPT for feedback on your chapters. We want to play a little clip of your interaction from the audiobook.
The storytelling is both poignant and thought-provoking, effectively using the protagonist's experiences to highlight broader societal and existential issues. Thank you for saying that. I really appreciate it. I'm curious about your thoughts on how Sam Altman comes across, given your interest in him. Any suggestions for revision? Sam Altman comes across as a complex figure in your novel.
He is depicted as someone who is deeply embedded in the tech world and driven by both idealism and pragmatism.
That's my first time hearing that, actually. Oh, yes. I mean, it's a really weird listen. Yeah, I love it. I love it. Great, great sales pitch for the book right there, because I love it. And the audiobook. Yeah, exactly. Exactly. Can I talk about that? Do you want me to? That's exactly why we played it. Yes. Okay. Okay, so...
you know, I love that little snippet because ChatGPT is describing my depiction of Sam Altman, the CEO of OpenAI, the company that creates the product ChatGPT, in a way that is very different from how I actually write about Sam Altman in the book, in my reading of it. So...
you know, Sam Altman is a complicated person. He typically donates to Democratic political figures. He has also increasingly been more open toward Republicans, including Donald Trump. He is somebody who is very invested in his company's future. I mean, his goal is to make OpenAI a very successful company. He wants AI to be a dominant technology.
And that's what he's interested in, in my understanding of him. And that's what I'm interested in writing about in the book. The fact that ChatGPT is taking my words and, when asked to summarize them, summarizing them in a way that subtly changes the description of my work is problematic in lots of different kinds of ways. One way that comes to mind for me is that I can imagine a world in which somebody is assigned this book in college or in high school. They have too much to read. They don't want to read the book. They ask ChatGPT to summarize it instead. Is it possible that ChatGPT is going to summarize the book along lines similar to what, you know, it does in the context of my conversation with it here? And if it does, to what extent
does that corrupt that person's, that reader's, understanding of my book while advancing the goals of OpenAI, right? And leading the reader to think that they're correct about your book. Exactly, exactly. And to be clear, when ChatGPT says things like this in my book, I do not know, I cannot know, whether that is because it is biased in favor of OpenAI because of the way it's designed, right? It may come across that way in the book, and I play with that idea in the book, but I don't know that for a fact, so I want to be clear about that. You know, when I was hearing this excerpt and the feedback you were getting, what came to mind was: this feels like the linguistic equivalent of
Instagram filters for photographs. Like, it's kind of resembling what was actually said,
But then it's got some weird tropey spin that is like, you know, here's how you can make Sam Altman the hero of a Hollywood blockbuster. That kind of thing going on. We have a writer, Kim, who says, I asked ChatGPT for recommendations for historical romance novels. And at first it gave me some great recommendations. But then it gave me a recommendation by an author known to me. And when I looked up the summary on Goodreads, it was wrong.
I told GPT it was the wrong book and it apologized and said, oh, yes, I'm sorry. That storyline and characters are actually in this other book. I looked it up again and again. Wrong. If it is capable of lying, misrepresenting things that are readily available online, what else is it lying about? A lawyer friend of mine said it will describe court cases if prompted that are completely false and it presents them as true.
Yeah, I mean, that's true. It's interesting, because these products will use the language, the rhetoric, of authority, right? The way in which their sentences are constructed is associated with authority, with rightness, with accuracy, right? Because it's the kind of language that is used in corporate contexts, in white-collar settings, in settings associated with dominant racial and gender and class categories. And because of that, it can be easy to read what it produces and think: oh, this uses the language of authority, so it must be right.
But it's not. You know, I asked ChatGPT at one point in my conversation with it to describe the history of global art from the beginning of time. And it gave me a list of all these significant art movements and artists from the beginning of time. They were almost all, or all, white, European and American movements and artists. And then I went back at some point and I said, can you tell me about some artists of color, you know, female and non-binary artists? And it tried, but it actually got people's genders wrong, invented some people who didn't exist, right? So there's the issue of inaccuracy here.
But that issue of inaccuracy is often bound up with bias as well, you know. In terms of the... here's what I want to be careful about in our conversation, right?
You are pinpointing so many failures of the system that you've dug into deeply. And at the same time, if we believe their numbers, half a billion active users a week for one platform, OpenAI's ChatGPT. Clearly, it's compelling at a very deep level for many people, and rapidly growing. Yes.
You have been covering technology since you were a college student at Stanford. I mean, your coming-of-age was with these companies, and you've been following the movement. Do you feel newsrooms are sufficiently or effectively capturing what these technologies are doing and what they mean in our lives? I think they are capturing many dimensions of them well. I mean, we often see the power dynamics covered, I think, fairly well in the press: the fact that these companies have amassed huge amounts of power, huge amounts of wealth. We're all reading in the past day or two about the FTC's lawsuits involving these companies, as well as the U.S. Justice Department's.
And that's getting covered. We also read, I think, in sort of more news-you-can-use spaces, about how you can use these products in ways that are going to help you. I'm interested in that intersection between here's-how-these-products-are-useful and here's-how-we're-being-exploited, which is fascinating. The way in which those things are intertwined, I think that's really hard to write about in a kind of traditionally journalistic setting, which is why,
in this book, I'm looking at other, non-traditionally journalistic ways of measuring that space. And that's why I asked you about it. It seemed like you were experimenting deeply there. Cheshire writes: ChatGPT, "my sister held my hand," you mentioned that. But, he says, ChatGPT didn't write that. It stole that concept from some other writer or writers and regurgitated it. Please don't give ChatGPT any credit for anything good.
Yeah, I mean, that's the thing. I'm not the AI police, right? Like, I'm not telling other people how they should or shouldn't use these products. I'm very interested in the critique. And so in saying that that is a line that was moving to me, I'm not making a value judgment about... I'm reading the line on its own terms. I think it's very fair to say: but listen, that language was created on the backs of other artists, and so that's a non-starter. I think that's a very fair critique. Mm-hmm.
We've been talking with technology journalist and novelist Vauhini Vara about her new collection of essays, Searches: Selfhood in the Digital Age. I want to thank you so much for joining us and writing in such a nuanced way about this incredibly powerful tech. I'm Aarti Shahani, in today for Alexis Madrigal. Thank you for listening. Stay tuned for another hour of Forum ahead with Marisa Lagos.
Funds for the production of Forum are provided by the John S. and James L. Knight Foundation, the Generosity Foundation, and the Corporation for Public Broadcasting. Support for KQED Podcasts comes from Landmark College, holding their annual Summer Institute for Educators from June 24th through 26th. More information at landmark.edu slash LCSI. Support for KQED Podcasts comes from Berkeley Rep, presenting Aves, an intriguing new play about memory, forgiveness, and unexpected transformation. Playing May 2nd through June 8th. More info at berkeleyrep.org. Hey!
I'm Jorge Andrés Olivares and I'm hosting a new show, Hyphenación. Like many other hyphenated Latinos in the U.S., our cultures and our communities inform our choices, like with money. We had that pressure to be the breadwinner. Religion. I just think Jesus was what we would now define as Christ.
And family. We're not physically close and we're not like that emotionally close either. So join me and some amigas as we have easy conversations about hard things. Catch Hyphenación from KQED Studios wherever you get your podcasts and on YouTube.