
340: Critical Thinking over Code: Tess Posner, AI4ALL CEO, on Raising Responsible AI Leaders

2025/6/16

AI and the Future of Work

People
Tess Posner
Topics
Tess Posner: As CEO of AI for All, I firmly believe the future of AI needs more diverse voices and perspectives. We are cultivating the next generation of AI leaders, who need not only technical skills but also a sense of ethical and social responsibility. I've seen firsthand how AI can change people's lives, and also the risks it can bring. That's why we must intervene early to ensure AI develops in line with humanity's shared interests. I encourage young people to get involved in AI; whether or not they become technologists, they can shape AI's future by asking questions, thinking critically, and innovating. I believe that only with a diverse, inclusive AI community can we truly realize AI's potential and make it serve everyone. Through AI for All's work, I hope to spark more young people's enthusiasm for AI and cultivate their creativity and leadership, so that together we can build a better future.


Chapters
This chapter emphasizes the growing importance of AI literacy for everyone, highlighting AI's impact on various fields and daily life. It also touches upon the limitations of AI and the need for human sensibilities in AI applications.
  • AI is becoming a significant force impacting all fields and aspects of life.
  • AI literacy is essential for both casual users and future developers.
  • The future of work involves humans working alongside AI agents, leveraging AI's capabilities while maintaining human understanding and sensibilities.

Shownotes Transcript


These fields need your voice and they need your perspectives because ultimately we are shaping this world together. Nobody has the answers for what will happen in the future. People might say they do, but they don't. And, you know, the young people of today, anyone listening who is

growing up and graduating in this world. AI is going to be one of the most important forces that will impact every field and every part of our lives. And it needs people like you to help shape it. And maybe that doesn't mean you become an AI technologist. Good morning, good afternoon, or good evening, depending on where you're listening.

Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. Our community is growing thanks to you, our loyal listeners. If you're not yet subscribed to our newsletter, do it. Join us. Each week we share tips and tricks and fun facts that don't always make it into the weekly show, and you get a chance to hear additional questions and comments from the mailbag.

We will share a link to subscribe to that newsletter in the show notes. If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode like this one from Pamela in Fort Wayne, Indiana, who's a programmer for a defense contractor.

Pamela listens while gardening. Her favorite episode is the great conversation from last year with venture capitalist Allison Baum Gates from SemperVirens about her scrappy path to investing and how anyone can break into venture capital. We will also share that link in the show notes. We learn from AI thought leaders weekly on the show. And of course, the added bonus, you get one AI fun fact.

Today's fun fact, Lavanya Gupta writes in VentureBeat that swapping LLMs isn't plug-and-play.

Lavanya describes why it's impossible to avoid LLM lock-in when building high-quality, resilient apps, despite the widespread belief that LLMs are a commodity. In the article, she describes in good detail how to understand model differences such as tokenization variations, context window differences, instruction following, formatting preferences, and model response structure.

She then provides a code-level case study on migrating from OpenAI to Anthropic. She concludes by noting that companies like Google, Microsoft, and AWS are investing in tools to support model migration. This is clearly an unsolved problem.
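The article's case study isn't reproduced here, but one well-documented structural difference between the two APIs can be sketched: OpenAI's Chat Completions API accepts a `system` role inside the messages list, while Anthropic's Messages API expects the system prompt as a separate top-level parameter and requires `max_tokens`. The helper below is a hypothetical adapter illustrating just that shape change; it is not the article's code, and real migrations also have to account for tokenization, context windows, and response formats.

```python
# Hypothetical adapter: reshape an OpenAI-style message list into the
# keyword arguments an Anthropic Messages API call would expect.
# Assumption: this covers only the system-prompt and max_tokens
# differences, not model names, streaming, or response parsing.

def openai_to_anthropic(messages, max_tokens=1024):
    """Convert OpenAI-style chat messages into Anthropic-style kwargs."""
    # Anthropic takes the system prompt as a top-level field, not a message.
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]

    request = {
        "max_tokens": max_tokens,  # required by Anthropic, optional for OpenAI
        "messages": chat,
    }
    if system_parts:
        request["system"] = "\n".join(system_parts)
    return request


# Example: an OpenAI-style conversation with an in-band system message.
oa_messages = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Summarize LLM lock-in."},
]
print(openai_to_anthropic(oa_messages))
```

Even this toy sketch shows why swaps aren't plug-and-play: the request shape itself changes, before any behavioral differences between the models come into play.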

Of course, we'll link to that full article in the show notes. My commentary, all of us in the AI community know LLMs and specifically AI agent frameworks are pre-configured to do no tasks well. User expectations are extremely high. Every agentic application where AI is doing some task on its own is compared to how a human would do the same task.

When evaluating how to automate any use case, think first about how a human would do the thing. We should never let enthusiasm about bot capabilities distract us from the human experience. Bots are prediction engines, but always remember they lack our innate human sensibilities. When you go to prototype your own AI app, you'll quickly understand why the real future of work is humans

with a little bit of prompting help from AI agents. Now shifting to this week's conversation, Tess Posner is the CEO of AI for All, the nonprofit that's finding and empowering the next generation of AI changemakers. She's also on a very short list of repeat guests. Her first appearance, believe it or not, was 320 episodes ago. Tess joined us in May of 2020 on episode 21.

In 2015, Dr. Fei-Fei Li, Dr. Olga Russakovsky, and Dr. Rick Sommer founded a summer outreach program at Stanford to familiarize high school students, specifically girls, with AI. This approach sparked the founding of AI for All, an Oakland-based nonprofit building an inclusive next generation of AI leaders. In 2017, Tess joined AI for All as its founding CEO. In addition to her day job,

Tess is an accomplished musician whose first EP was released in 2018. Her 2023 release, Alchemy, has over 600,000 streams on Spotify and 2 million views on TikTok. She was named a 2020 Brilliant Woman in AI Ethics Hall of Fame honoree. Tess received her undergrad from St. John's, go Red Storm, and her master's from Columbia.

And without further ado, Tess, it's my pleasure to welcome you back to AI and the Future of Work. Let's get started by having you share a bit more about that background and most notably, how you've been in the last five years. Hi, Dan. It's great to be back. Thank you so much for having me back on. I really appreciate it. I remember our last conversation and how much I enjoyed it. And it's really wild to think back five years ago because I think we were still in the pandemic.

And it was just like, when I think about how five years feels, it's a pretty weird one, I'd say. Like the time both feels infinite and short simultaneously. So anyway, I've been good. I think all of us coming out of the pandemic, you know, there's a lot to unpack there. And still a lot of wild things happening in the world, especially in the world of AI, you know, it's been crazy.

a really amazing wild ride for AI since then. So yeah, I've been good. And just to share a little bit more about my background. So before AI for All, I've been working for over a decade on initiatives related to technology and access.

I think that, you know, in that time and to some extent even before then, technology went from a specialized skill to something that everyone has to know to even participate in our society, in our economy today.

So, for example, in 2012, I started a program that focused on teaching digital skills, really thinking about the future of work back then and how technology was going to be intertwined. We were actually teaching people how to succeed in remote work.

We had no idea that in a decade everybody would be doing remote work, and who would have guessed how that actually went down. But we were focusing on Americans most left behind, the digital divide, and really thinking about how that excludes people from participating in the economy.

And then after that, I worked on an initiative out of the Obama administration called TechHire. And we were working on skilling up Americans that were taking non-traditional pathways into the technology space. So mothers returning to work, veterans. We had a program for former coal miners who had been displaced and young people who were sort of

graduating from college and high school, and thinking about how coding boot camps and these ways of skilling up could support these populations when all of a sudden everyone needs tech workers, everyone needs engineers. So how can that be an opportunity for people that would otherwise be left behind? I think we were doing that work in 70 cities, states, and rural areas across the country and seeing

just all of these different examples of how much amazing untapped talent there is, number one. But number two, like what kinds of skills and changes were happening in the technology space? So that's when I started to see the early seeds of the AI field starting to grow and kind of crop up. So when I met, as you mentioned, Dr. Fei-Fei Li and Olga and Rick and their vision for AI for All, it just totally spoke to me because I knew that

AI was going to be the future. And so it seems like whoever shaped AI shaped the future. And so we really wanted to intervene early. So that's what really brought me to AI for All. And I'm just excited about what we've done in our next phase. So thank you for having me again. You bet. So...

Unbelievably, five years ago, we last talked, we were like two and a half years from the commercial availability of kind of the first high profile LLM. It wasn't the first LLM, but the first high profile LLM from OpenAI, ChatGPT. What does it mean now to create AI change makers versus what it meant then? Like I said, the founding vision for AI for all goes back to like 2017, right?

It's a fundamentally different world. How does that shape the vision that you have for AI for All? There really wasn't this widespread awareness of AI. And now everyone has it in their pocket. Like, actually, last week, for the first time, my parents started using AI. My parents are in their 70s and, you know, pretty tech savvy, but they hadn't been using it at all. And to see them use it and start to like,

realize what it could do, like my dad using it for asking help questions. And I'm thinking, oh my gosh, that's really exciting because now they have this resource, but also all of the things that we are all concerned about with AI. Like, how do I know that it's giving my dad the right answers? How do I know that it's not going to lead him in the wrong direction?

How do I educate him about making sure that, you know, double check certain things on Google or how do you verify information? And what does a hallucination mean? So I think to your point, it's a very different world than when AI for All started. But in a way, it's just

showing how the risks are very real and very personal now. But I think the mission of AI for All is more urgent than ever as society really tries to grapple with how to adapt to these changes and add, you know, AI is now in our pockets. So what does that mean for society? The mission that we're working on is just incredibly, incredibly urgent. And we're kind of

also having to prepare our students and, you know, thinking about the next generation: what is the world that they're going to walk into? And how is that going to be different? I don't think anybody can really predict that, but that is one thing that we're grappling with for sure. I love how you describe it. It is essential to being a citizen of the world today to be grounded, right?

in what AI can and can't do, but also what it should and shouldn't do. And I love the example of your parents. So whether it's Tess teaching her parents or AI for all, you know, teaching maybe high school students, there's always trade-offs we need to make between kind of going on the offensive and talking about

the ways maybe, you know, your parents could get reminded when they need to take their medication or, you know, track their dose, you know, various things that call out on the offensive, right? Embracing AI versus like you said, well, going on the defensive, kind of being aware of what AI can do if used inappropriately. When you're kind of developing curriculum or even, you know,

Chatting with mom and dad, like how do you balance the two when it comes to what it means to be a global citizen and know that AI is going to be ubiquitous? Yeah, it's a great question. So from the very beginning, we really wanted to bake into our curriculum this idea of ethics and responsibility. We're really focused on,

you know, developing leaders and preparing the next builders of AI. So there's sort of this AI literacy piece that I think is incredibly critical, but AI for All is more focusing on building the people that are going to build AI systems. And so we really focus on, okay, what are the implications of this? It's not just about having the technical skills

To be able to build a system, it's having the skills to understand, well, what does this mean? What should I be building? What are the implications? What are the risks? How do we think about that in a holistic way, not just from a technical side of things when something goes wrong, but like what are the unintended consequences? What communities might this affect?

So that's embedded into every part of our program. We also have students work on a hands-on project. And that's always been a core part of how we think that the most effective way to kind of teach people how to think about AI is actually getting that hands-on experience. So, for example, we run a program called AI for All Ignite.

which is a virtual accelerator for undergraduate students and prepares them to enter the AI field. So we have mentorship for them. We build this community with the other students. They have this peer community, but the core part of it is really this project.

So some of the recent projects are things like brain tumor identification, developing a tool for tracking eye movements for more accessible scrolling, predicting startup success, and credit card fraud detection, and students work closely with a mentor in the field to complete these projects. That's a really great way for them to not only get the technical hands-on skills that prepare them for internships and their next step in the field, but also really understand the implications.

And I think to some extent, you know, so that's our program that we're focusing on more on the technical side and preparing the next generation of AI leaders and builders. But even on the literacy side, you know, there's this awareness piece of like, like you said, what should AI do? What should it not do? What are the limitations?

And I think if you can just pick it up and so easily ask it questions and it just spits out these answers, it's really hard to know that intuitively because you don't know what the boundaries are. So there is that importance of the awareness and the literacy that I think everyone needs to have in the future, in addition to kind of preparing the people that are actually going to build these systems to build them more ethically and responsibly. Yeah, I love that term literacy when it comes to AI education.

How do you introduce the concept of the ethics of AI into Ignite or other coursework? Yeah, I mean, it's embedded in our intro to AI lectures, things like that. I mean, I think we focus on it in a couple of different ways, from the technical side of things and then also

introducing concepts like bias. What does that mean in AI? What does that mean from a technical standpoint? What are the implications of it? How could bias deepen existing inequities in the system? So we're giving students this lens to kind of think through the implications of AI by making that a core part of

every piece of the curriculum. Also in terms of the projects like that they work on, that's a key piece of evaluating the project as well. And I think it's funny because I think people are aware of the risk, you know, when they come into our program, like, you know, that a lot of young people are

concerned about technology and the impact that it has in their lives. And I think that's something that they're walking into the program with. They're also walking into the program with their unique perspective. So we're focusing on recruiting diverse students from all over the country, from all different backgrounds,

And so they're coming in with their life experience and what problems they care about solving in the world. And, you know, that time in your life is so rich for thinking about where you want to fit in in the future and what you're passionate about. And so what we're really trying to do is connect AI to that. The students' passions, their unique experiences, their unique lens on what the most urgent and pressing problems are in the world.

And knowing that AI can either be an accelerator to solve our problems and amplify human capabilities to solve problems, or it can kind of have these risks or these consequences that intersect with what students care about. So I think that's really important because to me, that's what we need to have in the AI field is this

diversity of perspectives and backgrounds to bring that lens to how we're thinking about not just what problems that we're trying to solve with AI and how we're trying to accelerate human capabilities, but also like how these things are going to affect the different communities that the builders may not come from and therefore understand from a deep level.

So those are really the two pieces. You know, one of our students, one of my favorite stories is Maya. She did our program years ago and she's actually come back, which I absolutely love as a student mentor for some of our programs now. But she was really passionate about

you know, going after combating inequities in the communities that she cares about. And so she learned about bias in AI when she first took the AI for All program and how AI's growth could deepen existing inequalities and inequities. And she said that was a turning point for her. So she understood that she was already on the technology and engineering path, but she was like, oh my gosh, this ethics piece.

is really, really important. And that kind of sparked her passion to go into the field. So she actually was a responsible AI researcher at Apple after the program. And now she's researching human-centered AI systems that ensure fair work opportunities for Latina gig workers. She's at Northeastern University for undergrad, and she's also taking PhD-level AI courses.

So I think that's a great example, someone like Maya who really sees how her passion for technology connects to both some of the biggest opportunities and challenges in AI. And that really spurred her to take this path. And we see that happening all the time. So we're really excited by students like Maya, and thinking about what impact they're going to bring to the AI field in the future.

So, we're both in the Bay Area. And one of the things that bothers me is that when you drive up and down 101, kind of like the main freeway in Silicon Valley, you're just inundated with these billboards with pictures of creepy bots, these weird humanoid bots. And you know what, we need to replace those with a picture of Maya. Like, Maya needs to be the face of the future of humans.

And of course the future of humans involves AI, but that's what we should be celebrating: Maya's story. I love that. It's so interesting how the media still puts out these scary robots. And I'm like, why is this the thing that we're celebrating? 'Cause I think it creates a lot of fear with people. They associate it with the Terminator and these kind of old-school ideas of what a robot apocalypse could look like. In reality, I think,

And this is something that you represent here in this podcast. How do we make humans at the center

and think about how AI can support human flourishing, not replace us or create this scary antagonistic perspective. I don't know if you saw the other day, I read some article, I don't even know how exactly true it is or how they calculated it, but that when you say please and thank you to ChatGPT, it costs OpenAI millions of dollars. And then it said, well, why do people say please and thank you to ChatGPT? And

And one of the leading reasons was because they're afraid of the robot uprising, and that if they weren't polite, ChatGPT, you know, is going to remember and kind of target them. And I just think that's fascinating for how we're relating to these systems, that there's an underlying fear embedded in it. And we're sort of trying to take steps to prevent something bad from happening in these bizarre ways. Yeah.

Yeah, that's a great example. They're tools that augment what makes us human, like fire or the wheel, but we should never confuse them with humans. I think we do a disservice to humanity when we measure the technical progress we're making by things like the Turing test, which essentially measures how close we are to getting a bot to confuse a human into thinking that it's a human. Since when was that a goal, right? Why should that be something we aspire to? Right.

To everyone listening, I want you to roll back the tape a few minutes and listen to Tess, an expert in defining these curricula related to AI education. In Tess's articulation of what they teach, none of it involved models or prompts or coding assistants, right? And when she described Maya's success, it was all about what Maya learned

through these skills that made her better able to ask the right questions. Because increasingly, the skill that we're all going to need is the ability to tell stories and ask better questions, because part of the benefit of having AI as a companion or a thought partner is that things we maybe can't recall, or skills the brain isn't great at, we'll be able to kind of offload to a bot.

Tess, I just want to tease that out because, you know, it went by quickly in the course of the conversation. How intentional is it when you build these curricula that you're really focusing on inspiring your students? Just the four or five use cases you've given already are brilliant. That's what it means to be AI literate, to me. Is that intentional? Absolutely. I think, you know, to your point,

As AI becomes embedded into, I just read this study, I think it was a couple of days ago that like 40% of US adults are using generative AI. So let's say that's just the beginning, that it's going to be more ubiquitous in the future. What does that really mean for us? Like you said, we're offloading more of our brain power and it can be this incredible companion to our thinking, which is amazing.

But then the other point is, we don't know how quickly that's going to change things. Like, for example, what skills are needed for an entry-level job? Like if you can do web research using ChatGPT research in 10 minutes,

Like, how is that going to really change things that an entry-level intern would have done before? We don't really know. I think that there's a lot of predictions that it's like, oh, every job is going to change so quickly and then other people are more conservative because we understand that, you know, there's a change management piece to how quickly this can get embedded into workflows and habits and to take advantage of the full benefits. But we do know that some disruption is going to happen, even though we can't predict exactly.

And I think that the skills that are going to be needed in that system are things like adaptability, things like problem solving. Like you said, what questions do we need to ask? And how do we think about humans and technology? What is the right relationship there? How do we understand what problems to focus on? How do we understand the risks? All of these pieces to me fall under the bucket of critical thinking, like

problem solving, you know, empathy, like really listening and understanding how things will affect communities. I think those are the skills of the future. And that's absolutely critical.

a core of what we're embedding into AI for All because we believe that the builders of these systems in the future will absolutely need to have those skills because of how uncertain the world is that they're going into, how much technology is going to change things. Other challenges that are facing the world, whether it's like political divide, environmental instability and climate change, like there are big challenges ahead. And I think none of us have a crystal ball

But what we do know is this: people who can ask the right questions, think about how humanity and the planet are impacted, and carefully consider solutions using those lenses are what we will need in the future. That's very much a part of our vision at AI for All and why we are called AI for All, because we believe that if our students, who come from a

diverse set of backgrounds and life experiences, are trained with that type of lens, then the results of the technology, what is built and how it will impact society, will be better because of that. That's kind of the core of what we're here to do. I've got to get your perspective on something that I consider fascinating,

a really important dialogue for the next decade. And that's the complicated relationship between how academia is embracing AI and how it's being embraced in the world of work, the workforce. We recently had a great guest on the show, the CEO of a company called Turnitin, which helps teachers detect when students are plagiarizing other people's work using AI. And so in one sense, you could say academia is discouraging

the use of AI. It can frequently be seen as cheating

to have your work augmented by AI. And then when you graduate, you're expected to do great work. And oftentimes, you know, if you're using AI to edit or brainstorm or do other tasks, you're rewarded because the output will tend to be better. What's your coaching to those in academia who are kind of wrestling with this? Like, do we teach AI as a skill, or do we penalize students and pretend that AI

you know, is a crutch that accelerates cheating? Right. Yeah, it's a great question. I find it interesting. There was another headline that supports that question, which is that it was like teachers use AI, but expect students not to or something. And that's really interesting because obviously there's a contradiction there. But I think what teachers are wrestling with and where I empathize with them is like,

How do we get students to engage meaningfully with the material and to think for themselves? And that's a challenge, you know, when you can just spit out an essay in two seconds with ChatGPT and maybe cover your tracks. Like, is spitting out an essay with ChatGPT the new writing? Like, you know, like you said, if we were to do that

tomorrow to prepare for an email, to write an email, to write a summary of something, like it would be fine. But then students, I think there's this question of like when things can be outputted so quickly, how do we still create a learning environment where students can engage with the material? Like

I know for me, if I use ChatGPT to create, like if I give it an article about generative AI, for example, and say summarize this and put it into a report, the amount that I would learn from that is less because I'm not actually having to do the hard, slow work of digging through it and trying to figure it out. ChatGPT just spits it out, and there's something that is lost in that speed.

I actually wrote about this the other day in terms of the speed question of like, how does that impact our brain or our thinking? So I think that, you know, there's a resistance to change. Like, I definitely see that in the example you gave. And I think teachers and all of us are wrestling with like, what does this mean? If we can spit this out so quickly, like,

What do we do with that? You know, and there's this resistance of wanting to be like, okay, let's just like penalize it and not allow it to be there. But the truth is that it is there and it's going to continue to proliferate and it's going to continue to be second nature to the young people who have kind of grown up with it. So I think we need to reimagine what learning means in that context. I think about, you know, some of the discussion-based classes that I had in college and how I

Even if ChatGPT could help you prepare some talking points, like engaging in real life, real time with other people, like that's something that you can't replace. And that's where a lot of learning can happen. Fishbowls, like when you get in the center of the class and you are demonstrating something or you're presenting something, like what are the things that create learning and advance a student's

engagement with a topic that don't rely on just writing and outputting something. And quite frankly, I think education has always needed that shift because just like rote assignments and kind of standardized ways of measuring skills like are not the best way to get people to learn anyway and aren't really individualized to the student.

So this is a ripe moment for reimagining what that looks like. And I feel for the teachers. I understand because it's like, what do you do with that? It's not clear what the answer is, but I do think we can't hide from the reality that this is the future. And so let's figure out together how we can reimagine these systems and actually make them better.

and take advantage of the things that AI can help with that. Obviously, I'm super empathetic to teachers and faculty who have huge challenges ahead. I'm just trying to think through what some of the new ways of thinking could be

and opening to the reality that we're in. Yeah, the optimist in me is so enthusiastic about it creating this renaissance in how we teach and learn and how we assess competency. And I want to see us go back, very similar to what you said, to the Socratic method, where we're debating, you know, we're engaging in conversation. And of course, if AI accelerates your learning, great. But the way we assess competency should be based on

demonstrating critical thinking and the ability to synthesize ideas from different sources, because those are the lifetime skills. I believe, you know, at any stage of your life, that's what learning means: to cultivate those.

Absolutely.

And so a lot of, I think, the kinds of communities that you target at AI for All are ones, even maybe like my daughters, growing up in the cradle of Silicon Valley, but still, they might be likely to say that education is for someone else. It's for people who don't look like me, or they're dot, dot, dot.

And they're the ones who, unfortunately, you know, AI could be weaponized against if we don't take the opportunity to embrace it and make them feel like it is for all. What do you say to maybe girls or certainly underrepresented minorities who feel like, you know, this AI revolution is something that they don't get to participate in? Absolutely. Yeah. I mean, it's such a good point and I just want to put some numbers to it. So,

You look at CS for All data, only 60% of high schools in the US actually teach computer science. And in California, yeah, you and I are right in Silicon Valley, but in California, the birthplace of Silicon Valley and so much of this technology, only 50% of the schools offer it. And in a lot of ways, that foundational computer science is where students would get

exposure to this and really understand what it takes to get into the field and the opportunities there, and start to build some of the math and skill sets that are important and foundational for going into this path in the future. So the access piece is huge. I also think, yeah, we can see that even though 60% of public high schools offer computer science, only 32% of students who took these classes are young women.

So even within that, there are gaps for specific populations within who even has access, right? So I think that's a huge problem. But what I would say to those young people who are thinking about this is these systems, these fields need your voice.

And they need your perspectives because ultimately we are shaping this world together. Nobody has the answers for what will happen in the future. People might say they do, but they don't. And, you know, the young people of today, anyone listening who is kind of growing up and graduating in this world, AI is going to be one of the most important forces that will impact every field and every part of our lives.

And it needs people like you to help shape it. And maybe that doesn't mean you become an AI technologist, but it might mean that if you're going into healthcare, how does that work? Like what pieces of the healthcare system are going to be outsourced to AI, and how can you ask the right questions and think critically about that? So I think that it's up to everybody to ask these questions, because it affects all of us and it will affect future generations as well.

And the people that are represented in the media as the heroes in AI are not the only people going into AI and shaping the field. So there's a lot of efforts out there to create more of a spotlight on the contributions of underestimated people in AI: women in AI,

people of color in AI. They're making incredible contributions to the field that aren't always as visible in media. But you can find those examples out there. And people like Dr. Fei-Fei Li and Dr. Olga Russakovsky, who are, you know,

pioneering the computer vision field, for example, and the work that they are doing inspires me. And they're out there. They may be more hidden, but don't be discouraged by what you see the media sharing, because we all know the media doesn't share what's truly out there. It's a distorted picture. So just remember that, and don't forget that your voice matters, and

these fields, like AI and other technologies that are going to shape the future, need you to be part of them. That's brilliant. That's your TED Talk. All right, we'll submit this to TED and make sure they get you on stage. That was brilliant. Oh, thank you. Yeah, no, I really enjoyed hearing that. And thank you for sharing the data as well.

Tess, we're way over time, but I'm not letting you off the hot seat without answering one last important question for me. And this one's kind of unfair because it's really like the topic for a whole conversation. But I need to get your take on it. So as Tess, the artist and the musician, I want to know what your perspective is on

where AI is creating content. And just maybe to make it very real, you know, Tess Posner, the artist, you know, let's say your unique individual work is getting hoovered up by LLMs and someone comes along and produces, you know, Tess Posner 2.

And it's an AI generated version, you know, based on some original ideas from you. There are lots of different directions to go with that, but I just want to get your perspective as an artist on what it means to be in a world where creatives are collaborating with, but also competing with AI. Absolutely. You know, it's,

The first time that I heard an AI-generated song, I mean, I've heard earlier ones where I was like, oh, whatever, that's nowhere close. But recently I heard one of them in a listening session with a bunch of other artists and musicians. And it was after the fires in LA. And we heard this absolutely beautiful song that moved most of the group to tears. And we were like, oh my gosh, who's singing this song? It's absolutely amazing. And it was an AI voice.

And I'll admit, I'm in the AI field, I know these things are happening, but I was pretty shocked because I thought, okay, the human voice is one area where that's not getting replicated.

But it kind of shook me because I was like, okay, I believe that the core of art and creativity and all these pieces, the human soul, is what makes it good and what makes it connective. And at the same time, we're wrestling with these pieces. Like, will you be able to tell if something is an AI versus a human? What does that mean?

Obviously, there's all these questions about copyright and what data things are trained on and how to actually recognize the artists whose work went into training the LLMs. And that's a whole can of worms of a question that I don't have the answer to, but I think it's critical that we wrestle with it. But to your other point of, okay, what would happen if this is replicating me? I think it's a question facing all of us: what is going to be our relationship to this?

In my most fun moments of creating music, there's a lot of tools that I use. You know, a lot of tools that back in the day, people would have said, my gosh, that's like cheating, right? A lot of the plugins that I use in my production process. But I think that ultimately, every technology can be a tool to enhance creativity.

So it may seem like, oh my gosh, it's just replacing us now. But in my heart, I believe that it will be this tool that will help expand our creativity. I just saw a music video, Snoop Dogg,

came out with a new music video that was a partnership between him and his team and AI. And I have to admit, it was super creative and super fascinating. I was like, wow, there's so many possibilities of what we can do and how we can create new things using that, as long as people are at the center. And I think we can find a way to recognize the artists and the work that went into training these systems to even create that.

So there's kind of those two pieces of it. But overall, I think it's something that I see as a tool for creativity, and I'm excited to explore. I've tried to use it as a songwriting partner, unsuccessfully. It's mainly like a thesaurus at this point, finding other words for things. But I really tried. I'm pushing the limit of how I can collaborate with this. And I think it'll get better and better and better, to the point where

it's very interesting to think about what that actually means. So we're in for a wild ride. That's all I can say. And I'm hopeful that we can use it as another tool, something that enhances how and what we create in the future. Very enlightened way to think about it. And I think certainly, as the listening audience, we benefit from just a proliferation of ideas. And that's always been the case throughout history.

It might have been people innovating with new art forms and that sort of thing. And maybe the pace of innovation accelerates thanks to AI. But in no way do I think it diminishes the human contribution. I would expect you to say something like that, but I'm glad that's your perspective. So you've got to listen to this recent episode we did with this Hollywood producer named Marcus Bell.

who I think went to the Berklee College of Music, very accomplished, and he's done great things. He produced for Beyonce, et cetera. But he generated the first AI pop star, called Ravenlight.

And I'll send you a link. But Ravenlight, and I think "it" is the proper pronoun, has become quite popular. And it's very well acknowledged that it's AI-generated, but a lot of people like the music. And I think it's not deceptive in any way. It's just out there for the community to judge. If that's something that we like, great,

there'll be an appetite for creating more like that. But, you know, I celebrate that we live in a world where Ravenlight can exist and Tess Posner can exist, and I can choose, you know, whose art I love. But yeah, exciting times. Can't wait to listen to that. I know. It is. It really is.

Tess, this has been brilliant. I was really looking forward to this and just enjoyed the conversation. Before we let you go, where can the audience learn more about AI for All? And most important, how can I and the audience help you?

Oh, well, thank you so much. Well, number one, we have a program for college students, and we're about to actually run this one in the summer. But if you have students that might be interested, we're going to open applications soon for the fall. So you can find us if you search AI for All; you can go to our website and there will be a link to applications there.

And if you work in the AI field, if your company is using AI, we would love to work with you because we have ways for people to get involved, mentors, volunteers, things like that. So please reach out to us. We're always looking for that collaboration. And also, if you just want to learn more, we're also happy to connect because we believe this is an ecosystem. This is a community effort and it's all about the people. So we look forward to connecting with some of the listeners.

We will share all of those links in the show notes. Okay. It's not going to be another five years. Okay. This was so much fun. I loved hanging out. Thanks for coming by.

Thank you. This was so much fun. I appreciate it. And yeah, not five years. Let's do it sooner. You bet. Gosh, well, that's more than all the time we have for this week on AI and the future of work. Thanks again to Tess Posner from AI for All. And as always, I'm your host, Dan Turchin from PeopleRain. And we'll be back next week with another fascinating guest. ♪