EarSketch is a platform developed by Dr. Brian Magerko that teaches coding through music creation. It allows users to manipulate samples, effects, and beats using JavaScript and Python. Adopted across all 50 U.S. states, it has reached over a million users, primarily at the high school level, and serves as a tool for learning programming while fostering creativity.
Intentionality is a key factor that differentiates human-generated art from AI-generated art. Human art carries the artist's intent, emotions, and cultural context, which resonate with audiences on a deeper level. AI-generated art lacks this intentionality, often resulting in a hollow or less meaningful experience for consumers, even if the output is technically impressive.
Dr. Magerko emphasizes the importance of ethical considerations in AI-generated art, particularly around intentionality and cultural impact. He highlights concerns about AI models being trained on human-created content without proper compensation or credit, drawing parallels to historical issues like Led Zeppelin's uncredited use of blues music. He believes society must navigate these ethical challenges as AI becomes more integrated into creative industries.
Integrating AI tools like ChatGPT into education raises concerns about students relying too heavily on these technologies, potentially undermining fundamental skills. However, Dr. Magerko suggests that, similar to calculators and word processors, AI tools will become a standard part of learning. The challenge lies in balancing their use with teaching critical thinking, creativity, and ethical considerations.
Dr. Magerko envisions a future where human-computer interaction moves beyond traditional screens and keyboards. He advocates for embodied computing, where technology interacts with humans in more natural, physical ways, such as through dance or interactive installations like beach ball-controlled music. He believes this approach will create richer, more engaging experiences that align with how humans naturally interact with the world.
Improvisation plays a central role in Dr. Magerko's research, stemming from his background in jazz and improv theater. His early work involved studying the cognition behind jazz improvisation and even building an improv robot comedy troupe. He views improvisation as a unique aspect of human creativity that can inform the development of AI systems capable of dynamic, real-time interaction.
But there's a lot that comes down to the idea of intentionality. I believe that intentionality distinguishes human-generated versus AI-generated art. I have a hard time seeing that kind of public excitement or interest or thought or discourse about a computer-generated artwork, if we know it's computer generated.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 317. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service.
Our community is growing thanks to all of you. We get asked all the time, how can you engage with other listeners? As you may know, we launched a newsletter recently out on Beehiiv. Join it for additional facts and tips that sometimes get left on the cutting room floor. Of course, we'll share a link to that one in the show notes. If you like what we do,
Of course, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode like this one from Thomas, who hails from Ventura, California.
Thomas is an accountant and listens on the treadmill. Thomas's favorite episode is that amazing one with Linda Rottenberg, CEO of Endeavor, from way back in 2021 about Linda's entrepreneurial journey and the future of entrepreneurship in emerging economies. Linda is a national treasure. Go listen to that one if you haven't yet.
and we will share a link in the show notes. We learn from AI thought leaders weekly on this show. Of course, the added bonus, you get one AI fun fact. Today's fun fact, Yan Wu writes in the Washington Post online about why musicians are smart to embrace AI because it's more of an inspiration than a threat. In June, the Recording Industry Association of America announced
that Universal Music Group, Sony Music Entertainment, and Warner Music Group teamed up to sue the popular AI music apps Suno and Udio, accusing them of copyright infringement.
Despite the fears, AI could present more opportunities than challenges for musicians. One example, a group of musicians and scientists trained AI on Beethoven's music, then used it to complete his unfinished 10th symphony. The AI model generated multiple possibilities, and the musicians chose the contributions that made the most sense.
My commentary: let AI augment your creative pursuits. In the process, use AI responsibly, knowing it's a tool and the content it generates may be owned by someone who deserves credit for your derivative work. As a lover of music and art, I celebrate what's ahead,
which is a perfect segue to today's conversation. Of course, we'll link to that article in the show notes. Dr. Brian Magerko is a professor of digital media, director of graduate studies in digital media, and head of the Expressive Machinery Lab at Georgia Tech. He received his BS in cognitive science from Carnegie Mellon and his master's and PhD in computer science and engineering
from the University of Michigan, Go Big Blue. He has been published in more than 100 peer-reviewed publications and has received more than $20 million in federal grant support. He developed EarSketch, which makes music through code and has been adopted across all 50 US states by over a million users.
Dr. Magerko's research focuses on how human and machine cognition can inform the creation of new immersive experiences. His work has been featured in the New Yorker, USA Today, CNN, Yahoo Finance, and NPR, among others.
And without further ado, Dr. Magerko, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into this space. Oh, sure. And thanks for having me. Yeah. So you said I got my cognitive science degree from Carnegie Mellon. That was in the late 90s.
And while I was there, I was really involved in diving into learning about improvisation. So I was a jazz minor, I did improv theater. Even in high school, I thought improvisation was really great. When I did debate team, I did the improv speaking stuff rather than the prepared stuff. I've always been interested in that element of human creativity.
And the first research project that I was engaged with at CMU was about studying the cognition behind jazz experts as they perform and improvise solos.
I did that with Herb Simon, which was really just an amazing experience to have. And that shifted really quickly into working with robots and trying to get robots to improvise. So my first paper as an undergrad was a collaborative work with others where we built the world's first improv robot comedy troupe.
And it performed. I wouldn't put it in public now, but I think it was pretty entertaining. These little robots, they improvised a different scene every time you ran them, centered around the idea of one of them wanting to leave the room that they're in and the other one trying to convince it to stay.
And so I thought that was great. I was like, wow, this is really neat stuff. I didn't know you could kind of put this interest in cognition and creativity and performance and technology all together. And so I applied to one grad school because that seemed like a good idea.
Luckily I got in and I talked to the robotics professor there. I was like, I wanna do improv performance robotics and the interest was slight. In 1999, the idea of doing creativity and artificial intelligence was a little fringe.
Now you can do it on your phone with an app from the Play Store or something. But at the time it was a little fringe. So I had a little bit of a hard time finding my footing, until I found a new advisor, John Laird, who said, hey, well, I've got some funding from this very interesting place, the Institute for Creative Technologies. Would you like to work on an AI dungeon master kind of story director thing?
And I said, yes, that sounds perfect, thank you. And ever since then, I've kind of realized that this idea of working in creativity and computer science was a viable career choice. I was lucky enough to get my first NSF grant very, very early in my career, and it was very much about studying improvisational cognition and building computational models of it. So the work that I had done as an undergrad and as a graduate student fed right into
being able to ask these very unique questions that other people weren't asking, questions I was weirdly uniquely qualified to ask. I mentioned in the fun fact, music copyright owners suing- Oh, that's what I was writing down, yeah. AI music apps, and I want to hear your perspective on it. Take the side of the creatives. You know, if-
I'm not a huge classical person, but if there was an unearthed Beethoven symphony, right? And it's as good as anything he's ever done. People would go to those performances, people would buy those albums, I'm assuming, in classical music. This is different, right? To say we have an unfinished Bach piece,
and we had an AI predict the most likely thing that Bach would have done. The great thing about Bach is that while he made and sort of codified these rules for how to make chorales and other kinds of music, the art is in breaking them.
The art is in going beyond just the rigid structure of the formulaic and making that decision about when and where to be creative and interesting and outside the box. So if you go and see this Bach performance with AI completing it, you're going to get the average, middle-of-the-road Bach. You're not going to get the actual genius Bach. So to some extent, it's a little hollow to me, because we're making filler. This isn't actual human-initiated creative output. That's one view. And this all comes down to, and the last podcast of yours that I listened to was about consciousness, and I didn't hear very much about this, but there's a lot that comes down to the idea of intentionality. So when we have an AI
filling in the lines for us like this. Or even taking AI out of it: when we have John Lennon projected on a screen singing along with Paul McCartney on stage, or Tupac Shakur as a hologram on stage, right? There's some lack of intentionality behind the artist that leaves it hollow, right? John Lennon, when he was interviewed, when he was alive, he was like, we're not going to be singing these songs when we're in our
40s, in our old age, like that was old age to him. But here he is, against his will, his song and his likeness up on the stage. It's no different than mining Bach chorales and generating new ones, right? There's no intention behind it from the artist, from Bach, from Lennon, whoever. And as consumers, as people,
We see that, it's lacking to us. Now, if we don't know, if we're given a really cool piece of music and we're like, wow, that's pretty jamming, that's really nice. And we're told, surprise, it was an AI. It would be like, oh, that's disappointing, but also kind of cool and interesting.
It's a different experience than: here is a thing from a human, from inside their head, that they've mediated through an instrument or through a laptop or through their own voice or whatever, and it has gone into us, and we are hearing and feeling what is inside that person. That's what music and culture is. At the end of the day, we have things in here that we get outside for other people to absorb into their own minds.
So I have a thought, I have an idea, it's very neural based, whatever. I managed to get it out into words. Those words managed to get into your ears and get back down into neural signals in your head. That's an artistic communication.
What AI produces fuzzies that, especially when you don't know it's AI. I'm just going to keep talking: there's an amazing ad that came out like two weeks ago. I just saw it the other day. I don't know if you've seen this. It's a deepfake ad about the election. Just a singular one? No, I was pausing because everything I see is a deepfake ad about the election. No, no, no, this is a PSA. This is a PSA.
I see. This is if you just Google like Kirk Douglas, Chris Rock, election ad or something like that. Yeah, okay. But it's these celebrities just sitting for like three minutes just talking about
the dangers of deepfakes during the election cycle, how it's going to be ramping up in the next few weeks, and here's things to look out for: be skeptical and be wary if something feels off. And then at the very end, they show that most of the people weren't even the celebrities to begin with. They were deepfakes of those celebrities, and they say they had permission from Chris Rock and everybody, but
it was super effective, just the most stunning thing. I work in AI literacy, and I saw that and I was like, I gotta watch this whole commercial. This is fantastic. I'll share a link to that one in the show notes. Yeah, yeah, yeah. There's an article about it and whatnot, but I don't remember why I brought that up. We're talking about intentionality. I wanted to maybe challenge or just explore that one a little bit more. So
you believe that intentionality distinguishes human-generated versus AI-generated art. But if we're thinking about it as consumers of art, we vote with our wallets, we vote with our ears, we vote with our likes, our playlists, etc. And in that kind of open marketplace for what we choose to consume, does it matter? Does intentionality matter if we choose, for whatever reason, the AI-generated content? I would say yes, I would say absolutely. I mean, I know one person who buys music now, I guess, digitally. But let's say people who buy records and CDs. You pay for Spotify. Sure, yeah, but when you listen to Kendrick Lamar's new tune, you're listening to it not only because of
the content of the audio file. You're also listening to it because of the broader social context it exists in, being from Kendrick Lamar, who he's talking about in his music, what they're doing in response, and what TikToks you're seeing. There's a whole thing about that song that exists within the culture of being human. I have a hard time seeing that kind of public excitement or interest or thought or discourse about a computer-generated artwork, if we know it's computer generated. And that's the rub, right? We thought Milli Vanilli was pretty great, but as it turns out, right? Or we thought the Monkees played their instruments really well, or whatever, right? Eventually they did, but not at the beginning. So some of this is about perception. If somebody puts out
AI-generated music and pretends it's from a person and presents it as a person, yeah, maybe that would pass muster. But that's not what we're about with culture. It's not just about the artifact. It's about the communication of the artifact and who it's from. How would you feel about Spotify not putting AI Kendrick Lamar in my feed and giving preference to the real Kendrick Lamar? I think,
The people who would argue that that's a bad idea are a vast minority, right? We want our art from people, in general.
We definitely want to consume stuff that's just good. That's what pop culture is all about, to some extent. And maybe that's where AI fits a niche: we can generate the next formulaic rock movie, or we can generate the next formulaic pop music. That's what the industry does already. They apply algorithms and formulas; it's just people doing it instead of computers. So in some sense, I don't know how much the quality of the average popular-culture artifact would change, you know.
But for people who want arts, people who can't wait for the next Scorsese film, some of that comes from the fact that it's from Martin Scorsese as a person.
I'm in that vast minority. And the reason is because as long as it's labeled as having been AI generated, just like it's labeled as having been generated by the real Kendrick Lamar, I'd like to be able to decide. Yeah.
That perception, like I'm saying, perception is important. It's one thing if somebody's trying to trick us; it's another if it gets labeled as AI: hey, here's the thousandth tune today from the AI DJ robot or whatever. Awesome. Some people will absolutely consume that; people do already. People go and look at the automated, was it the Infinite Friends story, right? It's not very interesting, but it's compelling and weird. And hey, I'll check that out. I guarantee you can generate dubstep music endlessly. And John Lennon and Kendrick Lamar were influenced by other greats, just like AI is, quote, influenced
by learning from other musicians. So who's to say that creative influence, or whether or not there was intentionality, should dictate whether I'm allowed to listen to it, right? Yeah, there's definitely... Well, I mean...
There's influence and then there's ethical and credited influence, right? Led Zeppelin's a great example. When I realized how much Led Zeppelin stole from 50s and earlier blues artists,
And like a lot of other people- Like the Beatles. But didn't credit them, right? The Beatles didn't pretend to write those early blues tunes: when they covered Chuck Berry, they covered Chuck Berry. Led Zeppelin said, hey, here's this Howlin' Wolf tune, written by us. And that's the weird line where it feels icky to listen to them, even though they're one of my favorite bands. 100%, 100%. And so when you listen to- Yeah, we shouldn't be misled.
Sure, but when you listen to generated music or consume generated images or video, there's some of that same question there. Where did these come from? Because backpropagation has existed for decades. What's changed, aside from transformers, yeah, is fundamentally the amount of computing power we have access to and
the gall to just take from human culture and sell it back to us. Which is a weird way for us to think about consuming these models. When I use ChatGPT, it was trained on lots and lots and lots of people's hard work that did not get compensated at all and likely never will be.
So we have that same sort of challenge here that we have with Led Zeppelin: is it okay, given where this came from? It's really good music, it's a really convenient tool. And as a society, whether or not we consume AI-generated music or what have you, this is all gonna come down to some large societal forces, and it's not gonna be uniform. We are not going to be okay with AI-generated Hollywood movies, we're not. We're not gonna be okay with them as consumers. As consumers, we just won't buy them, we'll revolt against them. I just don't think that'll happen, at least not in the next 50 years, maybe in the far future. There are just some lines it feels like we're not gonna be okay crossing. But AI used in movies, I mean, I saw Carrie Fisher in a movie and she looked great, and she was both dead and not nearly as young as she appeared in it, right? We're okay with that to some extent, though there was pushback against it.
So are we gonna be okay? Where the line is for us as consumers is a really big open question. And if you look at technology 200 years ago, I'm not exactly sure when the phonograph became widely popular, but at some point, if you wanted music at a party, if you wanted music to listen to at your home, if you wanted music at all, you needed a musician.
You needed a piano in your home to have your niece play, or you needed to hire a string quartet, whatever. You needed live musicians to experience music. And the second that we said, yo, we can record this, that fundamentally changed our relationship with musicians, right? And now we're all the way to Spotify, yada, yada, yada. It's not even clear how musicians make money anymore.
But as a society, we were okay with it. We said, yo, this convenience is way, way great. Musicians still exist, maybe not as many of them. And they're maybe not making as much money on average. There's some that are making a lot of money, but it's working for me as a consumer. I'm okay with this technology. So,
This is still an open question for us with AI in the cultural industries: what are we okay with? Let's apply this line of argument to your work. So whether it's silhouettes improvising to, let's say, jazz music, or let's take EarSketch, which I mentioned in your bio, as examples of
AI augmenting creative work or generating creative content. What's your thought about how your intended audience should perceive that work that was AI augmented? Oh, how should people view my work?
Well, I mean, we take where the data comes from into consideration. So we have an AI dancer that learns how to dance by dancing with people. The models that we've used to train it were built in collaboration with a dance professor at Kennesaw State University and her students, and everybody's paid. We didn't go online and grab all of Michael Jackson's moves from his videos and
process those or something. So publicly usable data sets, data sets you generate on your own: fair game, great. Now, EarSketch is an online environment for learning about programming through making music. Kids manipulate samples, effects, and beats through JavaScript and Python. It's mainly for the high school level, but it's used all over the place.
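To give a flavor of that beats-through-code idea, here is a small illustrative sketch, not EarSketch's actual implementation: EarSketch scripts can describe a measure of drums as a sixteenth-note pattern string, where a digit triggers a sample, "+" sustains the previous hit, and "-" is a rest. The parser below, with simplified semantics assumed for illustration, turns such a string into timed events:

```python
# Toy sketch of parsing an EarSketch-style beat string.
# A 16-character pattern covers one measure of 16th notes:
# a digit starts a sample, '+' sustains the previous hit, '-' is silence.
# (Semantics simplified; this is not EarSketch's code.)

def parse_beat(pattern: str, slot: float = 0.0625):
    """Return (sample_index, start, duration) events, times in measures."""
    events = []
    for i, ch in enumerate(pattern):
        if ch.isdigit():
            events.append([int(ch), i * slot, slot])   # new hit
        elif ch == "+" and events:
            events[-1][2] += slot                      # sustain last hit
        # '-' is a rest: no event
    return [tuple(e) for e in events]

# A sparse pattern: four hits per measure, each sustained one extra slot.
print(parse_beat("0+--0+--0+--0+--"))
```

Each resulting tuple could then be handed to whatever playback layer schedules the actual audio samples.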
We've experimented building an AI companion for that to help students sort of like just scaffold their learning experience, both in making music and in programming, which was a really interesting problem.
But the- How do you source the data to train it? Yeah, right. So the initial one we built from scratch, and we started in 2017. Let me tell you, bad time to start chatbot research.
We just started wrapping up our work and then LLMs were everywhere and- Just before the transformer paper. Yeah, and students' expectations were really about the accessibility of these technologies to students. So the second students saw ChatGPT, their evaluation of our work just plummeted, because
their expectations grossly changed about how you interact with these things. Anyway, we built the thing ourselves, and now we're looking at sort of a 2.0 version of that on a smaller scale. And of course, we're looking at large language model technologies. And one of the things that we have to consider: we can't just grab ChatGPT. If anything, it's
not secure and private. So it's not even just where these large language models come from, but what are we getting out of it? That's a private company that is taking all of the interactions with it and using them for their own devices to make more money. That's not a great technology to use for high school students or college students, anywhere where FERPA and the privacy of student data is really important.
So we're going to have to use either an academically made one or Llama, which is Meta's open-source model, which at least they gave back to everybody. Gosh, I can't believe Zuckerberg is at the ethical front end of these folks. But out of the models that are out there, that's sort of the best one, from
how it was made to how accessible it is and it's everybody's model kind of thing. So we can run that on our own computer, nobody gets our data and it makes a usable technology that makes the world better, great.
Quite honestly, on reflection, if these companies would just release their models for free, that might alleviate a lot of the theft kind of questions. Because if it's giving back to everyone and enabling all of us as a society to be more efficient, to be more creative, to do better things, fantastic. But when they're taking from us at the same time, it becomes a little more dodgy.
So I want to know, as a professor at the intersection of creativity and computer science, let's say we're recording this conversation and it's 2034. But we are recording, right? We are recording. All right, just checking. And it's going to be the actual Dr. Magerko and the actual Dan Turchin, not our avatars, which will be widely available in 2034. But the actual humans are recording a version of this conversation. How has the
prevalence of AI-generated art and music influenced the kind of students that you're teaching and the kind of courses that you're teaching? Does it change the curriculum? Yeah, it really depends on how teachers, gosh, we're in a very undiscovered country right now, where we're trying to figure out, as instructors, how to best navigate this tool
and try to guide students to both accept that these tools are here, but not to misuse them, and to use them effectively, safely, ethically. There's definitely a huge concern that kids aren't going to have the same kinds of skills moving forward, that they'll just use ChatGPT instead.
I'm not an STS expert, but I feel like that's probably a common cry with a lot of technologies: when the calculator came, we weren't gonna be able to do math anymore; when the word processor came, we weren't gonna be able to write cursive anymore. Some of these are true. But at the end of the day, it's a question of: what's the goal?
So here the goal is to provide educational experiences that enable people to have fulfilled and rich lives. Maybe prepare them for the workplace.
And it really seems like, for better or worse, these technologies are going to be a part of our lives for the foreseeable future. I do not think that the negatives or the headaches that large language models cause, from hallucination and whatever, are necessarily gonna go away, but I think they're gonna be mitigated over time. Like GPT-4o now can search the web.
It didn't know anything beyond 2022, or whenever its last training data ended. Suddenly it has access to the rest of the world and knows the present. Claude can do math now.
Like, this was one of my big things: remember, these things can't do math. Well, damn it, some of them definitely can now, ones that are easy to go find and use. There's a moving window here for us, where it feels like technology is catching up fast enough to outpace our disdain for the negatives. So I think this stuff is around for a while. And for students that are engaging and learning with these tools,
it's going to be the status quo, just like a calculator is, or a spreadsheet or a word processor. And it's going to be on us to really reflect on and learn about how we learn and how we think with human brains, what this is doing and what it's replacing, and how we can double down on the stuff that it's not.
And be aware of the stuff that maybe that we're missing out on and maybe should even just avoid. You don't get to use a calculator in first grade, second grade, third grade. You get to use it in 10th grade, right? So at some point,
Even though that technology exists, we know that there's fundamental skills that you need to build up first. We're going to have to identify and figure out what that means for us with these technologies that exist now. That can do summarization really well and brainstorming and ideation really well and
Now, I mean, honestly, spell checking is fine. We're okay with that already. Nobody can spell anymore. I got to get you off the hot seat. But in light of that last answer, you're not going anywhere without answering one last important question for me. You mentioned Claude and math. One of the other kind of emerging capabilities of a lot of these models is being able to record your mouse clicks and use AI to kind of predict or replicate what you do.
in front of your machine. That's a new one. I haven't seen that. Yeah, work with me here. It's actually, it's kind of a thing these days in the foundation model community. My question is, I know you've studied human-computer interaction, HCI. I'm going to get on a soapbox a little bit. I feel like that's reinventing the past. And I prefer that we prepare
kids, students, etc. for a world that may mean humans interacting with technology in a different way rather than trying to capture mouse clicks or things that really failed a decade ago when we called them Excel macros or robotic process automation. Without baiting the witness too much, what's your perspective on the future of how humans will engage technology? I've never found that automating stuff on my computer is easier than just doing it myself.
And you get it right every time when you do it. Yeah, pretty much. Yeah. I was never one to learn, what was it, AppleScript?
Or whatever it was that people would hack on their Apple, like their Mac laptops, to make stuff. I talk a lot about the fusion of humans and machines. Oh, about different interactions. There are certain things machines do better than humans, and there's a lot of stuff that humans do better than machines. And I feel like we should come up with a mental model that marries the two. So one of the reasons I've been so interested in studying dance and AI is not because I'm a dancer. Actually, I don't dance at all.
Traumatic eighth grade experience. But dance is just an incredibly human way to interact with the world, and for the computer to interact with us in that way is very foreign. And I feel like it sort of pushes us toward, and points to, just
different ways to think about how we embody computing. So it doesn't have to be with a mouse and a keyboard. And there's many, many, many different other ways. We had a computer game from my lab a few years ago. The controller was an enormous car-sized bowler hat.
And it was suspended from the ceiling and fit over two people. And the idea was that you're controlling an alien with two heads, so you had to coordinate and collaborate with your buddy, steering the alien around by tilting and moving the bowler hat. A different way of interacting with the computer than I've ever experienced. And I love that project and the dance work and other stuff that I could talk about, but
we often feel so limited by writing software for the screen, when the really interesting place is the real world. We have an exhibit that's made out of beach balls, and a webcam senses where the beach balls are. As people throw them around and move them around, sound and music generate because of it. It's a real simple computer vision problem, but it creates this very innate way of interacting with the world. A beach ball, everybody knows what to do with a beach ball. We just lay them on the ground in a public space, and people come up and mess with them, and they're like, music happens. And then they play. It's like the keyboard in Big, where they come across it and suddenly have this playful, fun public experience.
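The beach-ball exhibit he describes is, at its core, a mapping from a detected object position to sound. As a rough illustration only (the lab's actual pipeline isn't described here), the mapping step might look like the sketch below, which assumes some vision stage has already produced a ball centroid in camera coordinates; the frame width and note choices are made-up values:

```python
# Hypothetical sketch: map a detected beach-ball centroid to a pentatonic
# MIDI note. This only illustrates the "position in camera frame -> music"
# idea; it is not the exhibit's actual code.

FRAME_WIDTH = 640                   # assumed webcam frame width in pixels
PENTATONIC = [60, 62, 64, 67, 69]   # C major pentatonic, as MIDI note numbers

def note_for_ball(x: float) -> int:
    """Map a ball's horizontal position (pixels) to a MIDI note number."""
    x = min(max(x, 0), FRAME_WIDTH - 1)            # clamp into the frame
    step = int(x / FRAME_WIDTH * len(PENTATONIC))  # which slice of the frame?
    return PENTATONIC[min(step, len(PENTATONIC) - 1)]

# A ball on the far left plays the lowest note, far right the highest.
print(note_for_ball(0))    # leftmost  -> 60 (middle C)
print(note_for_ball(639))  # rightmost -> 69
```

Restricting the output to a pentatonic scale is one common trick in installations like this: whatever the crowd does, the notes can't clash badly.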
But engaging with people in the real world, with their bodies, and meeting us where we are as creatures, I find to be a much bigger, open-ended space that's far more compelling, with plenty and plenty of work to do, compared to working on the screen. Though, EarSketch, I mean, some things belong on the screen, obviously. But in terms of your question about human-computer interaction in the future, I feel like the gross majority of it is going to be about how we design computing into our space, rather than how we get ourselves into the computer per se.
Dr. Magerko, where can the listeners learn more about your work or see some of this work in action? Oh, if you just go to my website, expressivemachinery.gatech, like gatech.edu. There are also not that many Brian Magerkos on the internet, so if you happen to search for my name, I'm the one that's not a wrestler. That's good to know. I can attest, having this conversation on video, that you don't appear to be a wrestler.
I am going to ask, there are so many important topics and we are just getting started: when we have you back for the sequel to this, can I ask you to improv that traumatic eighth grade dance experience for us? It's pretty burned into my memory; I think I can remember. All right, there you go. We're going to pick up there. And I can, in fact, confirm that you're good at improv, because
nothing we talked about today was anything that we actually prepared in the topic list, which made it all that much more interesting. Hey, Dr. Magerko, absolute pleasure having you on. And really, please take me up on the offer. Come hang out. Absolutely. I would love to be back. This was a pleasure. Thanks a lot, Dan. Excellent. All right. Well,
That is all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.