Hey everyone, and welcome to Generative Now. I'm Michael Mignano, a partner at Lightspeed, and today on the podcast we have Mikey Shulman, the co-founder and CEO of Suno. For those of you who aren't familiar with Suno, Suno is the company that is building the future of music with AI. In this episode, I threw a bunch of questions at Mikey that I had about his company and where they've been over the past year.
But we also turned to the X audience and went to the timeline and asked a bunch of people what their questions were. And so we flipped through about 20 or 30 of those. I thought this was an awesome conversation and I think you will like it as well. So check out this conversation with Mikey Shulman.
Hey, Mikey. Hey. How's it going? Great. It's great to be here. Good to see you. Thanks for coming back. Yeah, my pleasure. Yeah. When we first did this podcast together, I barely knew you. I think I just met you. And now, obviously, Lightspeed is invested in the company. You and I talk all the time. Live stream.
A lot has happened with Suno, so I'm excited to have you back on the pod and for you to update people on what's going on in the world of Suno. We have a fun little thing we're gonna do, but before we do that, like, what is new in the world of Suno since we last spoke on camera many, many, many months ago? - Yeah, it's great to be back. I think it's been about a year. A lot of growth, touching a lot more people, which is kind of the best part of my job.
just bringing the joys of music to more and more people. We released a couple of models since then, so better quality music, more fun to use, better control, longer songs, also kind of new capabilities. I think my favorite is still probably covers. So you can take a Suno song and you can kind of reimagine it in a new style.
This is like extremely addicting for me. Another kind of platform shift, we've launched a mobile app. And so lots of people use Suno from their phones. It's with them wherever they go. When the moment of inspiration strikes, they can grab it. They can kind of express themselves quickly, a lot more quick hit dopamine, kind of the way music should be, which is kind of with you wherever you go. Yeah, the models are pretty crazy. I think when we talked on here a year ago or whatever it was, I think the model was 2.0, I think.
Since then you've done 3.0, obviously, and now there's 4.0, and 4.0 is insane. 2.0 kind of showed people, I think, what the future could look like, but it wasn't like a song you could hear on the radio or something. 4.0 basically sounds human-made. Is that fair? A lot of the time, yeah. Yeah, it's a lot of fun. Not always, though. You know, sometimes there will be artifacts or mistakes, not unlike when you make music by other means. Sometimes when I
play the piano or play the bass, I make mistakes. And sometimes the model will make mistakes. But yeah, largely, I would say it's kind of radio quality music. It's crazy. And then the mobile app you mentioned, it's like a sort of new platform, new, you know, new interaction model.
That feels like a pretty big deal, right? I mean, music before Suno was really hard to make. You had to make it in the studio with instruments and expensive equipment. Then Suno makes it so you can do it right from your desktop. And then the app, it's like you said, you can do it on your phone anywhere you are.
I don't know, just feels like that is like a paradigm shift for music, if it works out. - Yeah, and I think it is a really exciting one. We talk a lot about the future of music, just people are doing music a lot more, and that means kind of wherever they are, and inherently that means your phone, but your phone is also an incredible piece of technology.
it's not just like a smaller version of your computer. It's got a camera, it's got a touchscreen, it's got a microphone, and kind of using all of these to our advantage to let people express themselves in new ways, in fun ways, to make music, to share music. And so I think we're still pretty early in that journey. We're not really using the touchscreen, for example, to the full extent that we can, but I'd look out for updates soon. What are people doing as far as input and creation that's
different on mobile than it is on the desktop? Like I know you take photo input, you can sing into it. Is that driving a difference in the creativity from people with it? Definitely. You know, I think while you can use photos or
audio on your desktop, it's just much more natural from your phone. You can grab your phone quickly, turn on the microphone, sing something, hum something, tap a beat, and use that as inspiration. And by the time you would, for example, find your computer, get it out, connect to the internet,
the moment is gone. And so we see a lot more of this sort of multimodal input, and not just kind of a text description of the song that you want to see come out. Yeah, it's super, super cool. So Suno, I would say, is probably one of the most talked-about AI companies there is, definitely in the world of consumer. Like, it feels like there are very few consumer products that have truly broken out and made themselves known in the general sort of
mainstream culture. Suno is one of them. And a lot of people talk about Suno. And so I obviously know a lot about this company. We're investors in it. You and I talked beforehand and we decided to take a little inspiration from some other podcasts that have been dabbling with a similar format. There's one called Technology Brothers that have been doing this, where they hit the timeline and they pull
questions and posts from the timeline. Given how much people talk about this company, let's find out what they actually want to know. Let's poll the audience and get some questions. And so if you're up for it, let's go through this stack. Let's do it. I'm excited. I'm going to read a few here. This first one is from Octavia Grout, who asks:
Do they have plans to release an API? - Can we just take a moment to talk about your printer here? - Yeah, oh, sorry. - Your black and white streaky printer. As a proud Lightspeed portfolio company, I'm glad you're not spending any money on your printers. - Listen, listen, as a former startup founder, actually current startup founder, I wanna be frugal with how I spend my money, and so I printed this at home, all right, with my home printer. - Color me impressed. - Clearly we're having some ink streaks here. I gotta get that checked out.
But in the meantime, we can read the text. OK, that's fine. Yeah. All right. Do you have plans to release an API? No plans right now. I get this question a lot. It is a really good question. The way I think about this is don't think of us as a model supplier company. I think of us as trying to deliver pleasurable music experiences to the end user. And an API isn't really kind of getting us further to that goal. We're trying to build beautiful experiences for people. And so
it seems like a bit of a side quest, you know, and to the extent that startups can focus, that's how you're going to win. And so we're not really looking at that business right now. Are you implying basically here that, yes, you could have an API, and that would maybe further the mission of letting more people make music through these other products, but these other products might not meet
sort of like the quality or the threshold of creativity that you're trying to supply people, and you could only really do that from your own experience? I think it's a lot of that. It's not just that there's a quality threshold. There's also, you know, a future of music that we are trying to build, and there are futures of music that we are not trying to build, and we don't necessarily want to let someone else do that. And so as an example, somebody just, you know, using our API to make
endless streams of music that are somewhat antisocial. You could probably figure out how to do that with our API. I wouldn't want to enable that. I'm trying to enable kind of a more engaged, more social version of the future. Like you could probably never really see Instagram doing this or TikTok, right? Like they probably really value their tool and the creativity that it yields. And so
why would you have that happen somewhere else rather than like the best possible experience for that format? - I think that's a great analogy. - Yeah. - Okay, cool. So Octavia, hopefully that answers your question. We were just talking about models a little while ago. We've got a question from Cody Baker.
Cody is asking, what will the next models be like? Can we expect to see an increase in the realism of instruments and voices? What is the limit of musical control? There is still some room to go in how realistic certain instruments are, how realistic certain voices are on the consistency there and kind of raising the floor. And you'll definitely see those things increase.
We are nowhere near the limit of control, though. And I think about this. Different people will be able to describe music differently. You and I may listen to the same song. We may have different opinions of it, and we may also describe it differently. And we have so much to go in how we take the kind of hazy notions that are in your head and iterate until we get to the final product.
And so maybe we're one or two models away from the ceiling in our ability to perceive vocal quality, but we're maybe 10 models away in our quest to get actually good control over it. And so I think you can see in the future, we're going to focus a ton more on control, certainly compared to, let's say, sound quality. One of the analogies I've often made in my head for these models, not just music models, but, you know, text, video: remember when we were kids, and
it seemed like at least the era that you and I grew up in, there was always a leap in graphics quality of video game machines, right? It was like 8-bit, then 16-bit, then 32-bit. And then at some point, it just like stopped mattering. Like it just like peaked and asymptoted. Is that kind of what's
happening now for music or is that kind of what you're implying? I think we're sort of getting to that point. That's a really good analogy. And, you know, getting a faster frame rate only helps to a certain point. And then it's kind of a drain in power and resources. But that doesn't mean that there isn't a tremendous amount in game innovation, whether it is the scenery or the actual game or the controls that you have available to you. Again, yeah, I would just see like
we are so far away from the limits here. And I think that is like a little bit of a false analogy to say that we're trying to hit the kind of the human threshold here. What we're trying to do is actually push forward where the human threshold is right now and push forward the quality of music that exists out there on kind of every axis. What comes to mind for me when you say that is when I listen to Suno music, especially with 4.0 model, I hear these genres of music, these mashups of genres that I almost feel like
are brand new. Like I've never heard them before. No humans have ever put these two things together, like salsa and death metal. Or I don't know, I just made that up. But maybe that's a little bit what you're talking about. Like that's where the AI can actually push things a little bit further than what we've done before it. I think more than a little bit further. I think a lot further. I think that when you make it easier for people to conceptualize things, they can go through concepts more quickly. They can iterate more quickly. This is how music will evolve more quickly and evolve to what I think is a cooler place than where it is now.
All right, next question from Mr. Ronnie Brucknap, who asks about Mikey's vision for AI music composition and how it might impact independent artists. Has automation helped or hurt creativity? I think this is a really good question. This is kind of a long-arc question.
I would say in making music, you know, technology has been part of making music for hundreds of years. In general, all of the technological advances that happen make it easier to make music, make more people able to make music, make more music out there and are actually quite beneficial for music in general. You know, I can tell you that I know one person who is a songwriter who had a lull in creativity and after finding Suno went from maybe making
50 songs a year to making 500 songs a year. And most of these songs maybe don't see the light of day, but it's an unlock in terms of creativity. So I think, on the whole, all of these technologies are quite obviously good things. It's about how you use them. I'm a firm believer that, in general, AI and most technologies are neutral. There are good uses and bad uses. And so we focus on building the good ones. That's fascinating. The 50 to 500 kind of reminds me of how
a lot of writers are getting value from LLMs as thought partners, right? For, you know, fleshing out a report or a thesis or, you know, a creative piece of writing. It's like it's somebody you can bounce an idea off of, right? And have a conversation with.
It's in every domain, and it's a little cliche to say co-pilot, but you see this in everything from code, you know, engineers writing code. Now you can have a co-pilot that can help you do it, and these co-pilots can make code for you. They can check code for you. When you are writing an email, you can have something that writes the email or rephrases the email or checks the email somehow. If you are writing a script for a movie, you can have help there either in checking something or rewriting something, or, hell, I just come and I'm blocked and I just need to see some weird ideas and...
I think in general, just increasing the amount of weird content that is out there is actually just really good for human creativity. And these things are amazing at that. OK, we got one from Lorenzo Bartolini. Lorenzo wants to know: curious about copyright applicability for AI and music. I believe what he's getting at here is, is this stuff copyrightable? Like when you make a song with AI, or you make a video with AI, or a piece of writing:
Can you copyright it? This is a great question. Are you a lawyer? Let me preface it with I am definitely not a lawyer. So I'll give you my current understanding of it, which is evolving. Actually, just a couple of weeks ago, the U.S. Copyright Office gave more guidance, which is to say if you just type in a prompt and out pops a song, that song won't bear copyright and that you need more human input in order for the thing to bear copyright.
So, for example, maybe you had to bring your own lyrics. Maybe you had to do something else, bring your own audio to it, et cetera, et cetera. I would look out for this to evolve very quickly. The technologies evolve very quickly, and I'm almost glad that it's not been enshrined in law, because law will change much more slowly. I don't know where the line will be of how much human input it will take to bear copyright. But I do know that I don't want to be the arbiter of that, and I'm actually really glad that there are other thought partners out there in the world. I wonder
if this is actually, like you say, maybe it's a good thing that it's not getting etched in stone yet because we're probably just going to see so much evolution in how people make music, especially with AI, right? It feels like AI has the potential to create this sort of explosion of remixing and sort of reimagining of music, right? You make something and then I reimagine and then somebody reimagines that and
Yeah, just I actually wonder if copyright could sort of get in the way of that level of creativity. So maybe we do want to see like how things flesh out a little bit. I think certainly before things get enshrined in law, you want to you want to see how things flesh out. I'll just remind you, though, that copyright law was originally enacted to enhance and promote creativity and not to stifle it. And this has been like a guiding through line through all of copyright law.
To drive incentives, like give you an incentive to make something? Yeah, to incentivize people to be creative and not to actually stop people from being creative. And so, for example, the previous loose guidance was just that AI content can't bear copyright. I'm glad that's been nuanced a little bit, because that actually does not incentivize people to be creative with these new tools. All right, going to Ansh,
who's asking a question that a lot of people are asking in AI these days. I hear this word like 10 times a day. That is the word: moat.
He is asking, what's their moat? Oh, what's Suno's moat? Moat is a four-letter word, right? I don't know if we can talk about that. It is a four-letter word. Yeah, you know, I'd be super curious what you think about this. This is closer to your day job, I suppose, than mine, but I'm not sure that Suno has a new type of moat. I'm not sure that AI presents new types of moats. I think that moats in this business will be like the old ones. That is, we have to build a superior product that is stickier than the competition and that has some sort of data and/or network
effect or we will not have a moat. Yeah, I think that's right. I actually was just having a conversation about this, not about Suno, but about sort of like defensibility and the application layer with a number of my partners the other day. And yeah, the conclusion we came to is that the moat's
that exist in the sort of AI era are probably very similar to the most that existed beforehand. It's not in the technology. I'm actually a firm believer that like technology is not really defensible. It's just code. Code can be copied. Code can be rewritten. We've seen this many times over the years where, especially in consumer, somebody builds some great new format or interface and then boom, guess what? Meta copies it and they leverage their distribution and you're done.
So I don't think technology is a moat. I think the things that are moats are the things like you're saying network effects may be one of the biggest, some sort of data moat. And I don't necessarily mean data to feed an AI model. I mean more like if you use the product often and the product has all of your preferences and your data and your playlists and your this and your that, like you're going to think twice before stopping using that product because you're going to lose all that stuff.
AI has made it so easy for people to build products that there is more competition now. And more competition means that these things need to be even stronger, right? Like your network effects need to be even stronger. Your early mover advantage and your distribution needs to be even stronger. And so it's a much more competitive market. But yeah, I think the moats are kind of the same. I'll say two things to that. You know, one is certainly for us.
We don't have these mega models that cost hundreds of millions of dollars to make. So like that one goes right out the window immediately. You know, hearing you say that makes me actually think that when I say data or network effect, it's actually the same thing. Like the data advantages are just like you need something where the value of the product you are building scales with how many users are using it. And data is a means to that end, basically. Yeah, I think that's totally right. Okay. Okay.
Adam McIsaac, really interesting question from Adam about the Grammys. What does Mikey see as the prerequisites before we see an AI composition at the Grammys? And how is Suno contributing to this future? This is really interesting. I feel like we saw this play out with streaming a few years ago, right? Like streaming movies weren't allowed at the Oscars. I forget when this flipped, but yeah, what's going to have to happen before we see this in the music world? I think it's a great question. You know, I think I don't want to
just be that guy who disagrees with the premise. But I know that there are producers using Suno, and they end up... little bits of Suno will end up in hit songs. And so if you just want to ask when little bits of Suno will end up in a song that wins a Grammy, maybe it already happened. Maybe it's very, very soon. Did it happen? I don't know. Weren't the Grammys like...
The Grammys were recently, but I don't know exactly which songs have it. I just know that it's out there. Yeah. We got to find out. We got to find out. I think that slowly kind of these things, the general attitude will shift. And I think in five years from now, we're not really going to be talking about AI music or non-AI music. And all music is going to have bits of AI in it. And just like all music is kind of digitally produced today. Or has samples in it. Or has samples in it. Again, I don't mean to just be the
jerk who disagrees with the premise of the question. But I think that will happen. And before that happens, this will stop being a salient question of, like, when will a Suno song end up as a Grammy winner. I do wonder if the copyright thing needs to get figured out, though, right? Because, I mean, if somebody just made a song in Suno right now that was 100% AI generated, according to the question from a little while ago, they can't copyright it. If they can't copyright it, I don't know. Can it win a Grammy? I don't know what their rules are as far as
attributing ownership to songwriters or, you know, rights to the masters. I don't know. I don't know all the rules, but I do know that until there's clarity, if somebody did that, they would never admit that Suno was involved in the process. Right. That's got to be challenging for you, right? The fact that
Because of this copyright thing, people maybe aren't willing to admit that they use the tool. That is definitely challenging. Definitely loses us, let's say, some legitimacy points because we can't talk about the megastars who we know are using Suno. On the other hand, I think, you know, I'm really optimistic about the fact that the vast majority of creatives that I meet will admit behind closed doors that they use Suno and that they like Suno. And so I think it's a matter of time before these tides tend to shift. I agree.
All right, to the next one. Ondu asks if he's looking at both sides of the market or just the creators. And if not, why not? Isn't distribution just as important? How do you think about the other side of, you know, not just creation, but consumption and distribution? Great question. Distribution is definitely just as important. We think about it a lot. You know, I think about this as what can we enable?
that does not exist, that would be a meaningful part of the future of music. And so we've done a lot around creation that didn't exist so easily, let's say, a couple of years ago. But I think there's a tremendous amount on the consumption side that does not exist today, and there are much more engaging consumption experiences around music that we can help enable. And then kind of the flip side of that is you want to give creators a means to express themselves and to be proud of what they do and to be able to
point to these are all of the Suno songs that I made in a nice coherent place. And so I think you'll start to see a lot more around meaningful consumption experiences. Last week we released comments. So this is really big. You can now comment on songs that you like or don't like. And
I'll say a couple things there. One is, actually, it's been incredibly positive. Like, there's been almost no need for moderation, which is amazing, given what your prior would probably be coming in about all the content on the internet. But this is a way for people to actually get external validation on what they make. And it feels amazing to get a notification that somebody's commented on your song and you see, you know, they just did a lot of fire emojis or something like this, right? And I think that it is a basic human desire to feel fulfilled
by the creative things that you produce. And so I think you'll see us pour a lot more effort into that in the next year. Yeah. The other thing is you've done a really good job of enabling people to share this stuff on existing distribution channels, other social networks, X, TikTok, Instagram. You have like these beautiful share assets. And obviously those platforms have feedback loops and feedback mechanics as well. And so
Feels like you have actually done a lot on this side of distribution. And clearly there's more to do with things like comments on your own platform. I mean, I think you see this also. There are songs with hundreds of thousands or millions of plays on Suno. And so, you know, that says that there's some amount of distribution happening. Yeah, totally.
All right. This one I would file more as a bug report, or like a live bug report on the podcast. This comes from Federico, who says: I like Suno and I've been using it for quite a while, but the problem of poor audio quality, a.k.a. shimmering, is really becoming a problem since V4.
What do you have to say about shimmering? I don't know what shimmering is. Can you clarify? V4: huge bump in audio quality. I guess we've now kind of uncovered this artifact that I can only describe as, like, you'll hear some shimmery sounds in the background. It's been fixed and it's getting fixed. We've pushed a couple of fixes that greatly reduce it, and the last little bit that we know about, you should see get fixed pretty quick. What does it sound like? And, like, why does it happen? There's like shh.
It's not like a hiss. I don't know. I can't shimmer for you in real life, I suppose. Technically under the hood, like what is creating that sound? You're a drummer. Imagine it's like somebody put rivets in all your cymbals. It's like that. So it's just like this kind of long-lived shimmer effect where you didn't necessarily want it. You wanted it super dry. But I guess what I'm saying is what about the model or the technology creates that sound? Like what's going on? So, you know, I think I would liken this to
if you've ever heard a WAV file and then the same file passed through MP3 compression, you'll hear these little compression artifacts. And the way we model sound, there is a codec, some compression algorithm, to take this really, really big signal and kind of smoosh it down into something more manageable. The way we built this algorithm, sometimes it will introduce this shimmer effect. So for MP3, you know, there's that kind of top-end compression that you'll hear,
and for the Suno codec, you'll hear a little bit of shimmer sometimes. Interesting. Well, it sounds like it's getting dealt with. We're working on it. Yeah. Thank you for the bug report, Federico. Yeah. I mean, I'm going to try to work that into all my podcast episodes. Seth Miller. I believe Seth is the CEO of another AI music company, Rap Chat. He says,
Actually, this first question that he asks we've already addressed. It's about the API. But the second one he asks is what Mikey thinks about the DeepSeek-like new models that are going open source, like YuE. Yeah, YuE. During that crazy weekend when DeepSeek came out, a different Chinese company released this YuE model that is effectively an
open source Suno, if you want to call it that. So it's like you can type in lyrics, you can type in styles, and out will come music. Maybe it sounds like V1 or V2 of Suno. I would expect it to get better very quickly, honestly. And I think about this, it's impressive work. Like all of these things, it will get bigger, it will get better. It is expensive to stay at the cutting edge and to keep innovating. You know, I think about it like this, though. When we think about
AI as a neutral technology and there are good uses and there are bad uses. And this just democratizes it and makes it easier for people to build some of the good uses, but also some of the bad uses. And the things that I worry about with open source projects like this are not that company in particular, but somebody leveraging their open source model, for example, to build
artist cloning apps where you can just make endless songs from your favorite artist without their permission or to build some of those kind of antisocial futures of music that just kind of suck attention away from where it should go. And so the technology itself is useful. And I, you know, so I'm optimistic that it gets used for good, but it's not a given thing at this point.
Like DeepSeek, you know, there were all these rumors that DeepSeek was distilling other models. I mean, do you have any reason to believe that this model distilled Suno or anything like that? It's quite possible. Honestly, we have better things to do than to try to figure that out right now. But we know that people try to scrape our content. Yeah. And so it wouldn't surprise me one way or the other. Yeah, I guess on this notion that this stuff could be used for good and could be used for bad,
the thing that I'm thinking of is, like, the cat's out of the bag, the horse is out of the barn. You can't put the genie back in the bottle. AI music is out there. It's only going to get bigger and bigger and bigger. And hopefully there are players and good actors out there that are doing really good stuff with it, and that will obviously benefit the ecosystem.
And then there will be some players that are doing some not-so-good stuff with it. And so, yeah, hopefully the good ones can be empowered to do that. I hope so. I think there's a lesson to learn from the last wave of music technology, which is that you don't need to completely eradicate the bad solutions, but you need to have good solutions that are easy and accessible for people to use. And then they will go and use them. Right. And if you, for example,
would have shut down Spotify and Apple Music in the early days, people would have found another Napster. - I'm sure you've listened to the stories about Spotify and heard some of the Daniel Ek interviews, but you know, the whole original point of Spotify was to make what Napster and these companies were doing so good that people didn't mind paying for it, right? Like Napster was great because it was free, but it was illegal.
And so you had to have a solution that was better than Napster for people to be willing to pay for it. And that's what Spotify was. It was literally that good. And so they built an amazing product. The only reason I know what a VPN is, is originally I wanted to try Spotify before it was available in the United States. Somebody told me, oh, there's this technology called a VPN, and then you can go try it out. And so I think, yeah, having amazing alternatives is the right way to go. Yeah. All right. Pouya asks, what is Mikey's favorite AI
track? You know, there's still a soft spot in my heart for Stone by Oliver McCann. Great song. Great song. It was at the top of the Suno charts for a while. Oliver is a great creator. There's a few things that I've made with my kids that really resonate very strongly with me. So if you let me choose two, that's what I'll choose. Okay, cool.
You talked a little bit about this, but maybe you could expand. Hadrian Labs asks when he can expect the next version, Suno 4.5 or 5, and how he thinks it will be better. Great question. Hard at work on it. Initial progress, very good.
Very exciting. Better song quality. You will see better sound quality. What I would be excited for is on kind of the control side: the ability to be far more expressive with the types of descriptions. And so you could add 10 or 12 or 20 descriptors to your music and it will kind of listen to all of them at once. And that sounds like a nice-to-have, but it's actually much more than a nice-to-have, because what it means is that you are able to be far more experimental with how you express yourself, and you can kind of push the music forward even more.
When you say the word controls, you've said that a few times throughout our conversation. What exactly do you mean by that? Like when you say that, I think of knobs and sliders. Is that what I should be imagining? Or do you just need more descriptors? I think control is all of it. It's like, yeah, did I give you knobs that made it easier to get to the sound that you wanted? But also, did I just listen to the descriptors that you gave me? You know, if you asked for a tambourine and a saxophone and
a xylophone, did all of those instruments come out? And now what if I added a few kind of emotional keywords? You know, I could say emotional, I could say vibey, I could say sad, I could say happy, I could say uptempo, I could say groovy, all of these things. What if I just give you all of those? There should be some interpretation of the music that actually listens to that whole thing. So I think of control extremely broadly as how do I let people
iterate to the sounds that are in their head as quickly as possible through any means possible. Could be text, could be audio, could be a picture, could be anything else. One of the things I have found with Suno is when I put in too many descriptors, you know, hey, I want to do this genre mashup of, you know, reggae and funk and death metal, whatever, like eventually it just gets
too meshed. It's almost like when you try to mix too many colors, it just turns brown, right? Do you see that? Is there a point at which the controls yield diminishing returns? Maybe, but we are not near it. And so what I hope you see in the next model is that it will do a good job of picking little bits from each of those genres and making something that is interesting and coherent, instead of either becoming kind of a total mess or just ignoring half of the things that you said. Neither one of those is fun.
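The "mixing too many colors" intuition has a nice toy illustration. This sketch assumes nothing about Suno's actual models: it just shows that if style descriptors were naively pooled by averaging embedding vectors (a purely hypothetical setup), blending many unrelated genres would collapse toward an indistinct centroid, the embedding-space version of paint turning brown.

```python
import math
import random

random.seed(0)

def random_unit_vector(dim: int) -> list[float]:
    """A random direction standing in for a (hypothetical) genre embedding."""
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def blend_norm(num_genres: int, dim: int = 64) -> float:
    """Norm of the average of several genre 'embeddings': how distinct the blend stays."""
    vecs = [random_unit_vector(dim) for _ in range(num_genres)]
    mean = [sum(col) / num_genres for col in zip(*vecs)]
    return math.sqrt(sum(x * x for x in mean))

# The more unrelated genres you average, the closer the blend gets to the
# indistinct centroid: one genre keeps full strength, twenty wash out.
for k in (1, 2, 5, 20):
    print(k, round(blend_norm(k), 3))
```

A model that "picks little bits from each genre," as Mikey describes, would have to do something smarter than this kind of pooling, which is exactly why the next-model behavior he describes is more than a nice-to-have.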
All right. Wayne has a question. How's Mikey thinking about evolving Suno's product? For example, user vocals, instrument swapping, genre blending, et cetera. The short answer is all of those. I think, yeah,
On some level, the music has to get better, but the creation flows have to get more intuitive and easier and more fun to do than they are right now. And so I think lots of those things mentioned you will see show up. And then I am most bullish, most excited about more of the multiplayer stuff, more of the social stuff about letting people create together, letting people have these collaborative experiences, whether that's
at the same time you and I are at the computer together doing different things or asynchronously where, for example, I can make a song and you can remix it and we can be collaborating like that. Last night, we actually had our live stream. So we did this contest with Timbaland where fans could remix one of his songs. And to me, this is like,
an incredible part of the future of music where this is the highest form, in my opinion, of engagement between a fan and an artist. And it is the highest form of flattery. Like I feel like I can make music with my favorite artist. I feel like part of the creation process. And so we did a live stream last night where we played the top five winners of that contest. It was really, really cool to hear like, why did Tim like this song or that song? So he's sitting there and he's like reacting to these songs. He had already listened to all of them. There were tons of them.
And then he had already selected the top five. We would have been there for a year if we had to listen to all of them synchronously together. That's cool when it's your favorite artist, but that's also just as cool when it's your friend. And so I think that is a huge part of the future of music and you'll see us do a lot more of it. Yeah. I can imagine something where like,
there's almost like a messaging platform or like a mail thing where like I create something, you get a notification and it's like, it's almost like a turn-based game, right? Like I take my turn, then you take your turn. Then I take my turn, then your turn. And then let's see where we end up like 10 turns from now. Maybe 50 turns from now, you know, we did a turn a week and we have a 50 song album of the iteration of this song. It's like a game of broken telephone and it starts in
one place and it ends up in a totally different, interesting place. And it is a musical journey that only makes sense to two people. And it's beautiful. That's really cool. I would love to see you guys build something like that. This next one is from
Shelts32TT, any thoughts on letting people upload their voices and then AI can polish them up and make them sing a created song? We are working on that at some point. I think very important to put the right controls in place to prevent people from cloning other people's voices. But again, I think this is like a- Sorry, what do you mean by that? Cloning other people's voices?
So I don't want to let you make a song in somebody else's voice. I only want to let you make a song in your own voice. And that, again, somebody else could mean somebody that you know that isn't you or could be a celebrity. And we kind of want to not let that be a thing on our platform in either case. Again, this kind of basic human instinct to want to make something beautiful and show it to people. And if technology can be a means to that end of making a song that sounds beautiful, that sounds like you, even if you don't have the best voice right now, like why not?
Why wouldn't we let people do that? That would be really cool. I'm a terrible singer. So if I could sing into Suno and it could make me sound like, I don't know, somebody with a good voice. No, you with a good voice. That's the point. Right, right. OK, I see what you're saying. I mean, that's
I mean, is that just auto-tune? It's not only auto-tune. You know, auto-tune will get you some of the way, but it's not only auto-tune. And the other thing is, you know, if you want to use auto-tune, you have to sing the whole thing by yourself. And empirically, people don't want to do that. And there's an element of shame here that, you know, I think about this a lot, having young kids who actually don't have
you know, that element of shame. Like kids will just sing for no reason. And I think it's beautiful. And somehow culture beats this out of us, that it's okay to sing wherever you are, and we can bring that back. Along those lines of letting people just sing and then it just does it.
Somebody had asked, when are you going to create video on behalf of people's songs, right? Like, is there a music video, you know, your kid or somebody just sings their heart out, it makes them sound great, and then boom, there's like a music video to go along with it. Yeah, at some point, not immediately is the answer. I think we think a lot about this from the product side, which is that...
The whole point here is to elevate music. And so it's not, we don't want to get to a point where we are having a Suno song be the background accompaniment to some video, but instead having a video experience that elevates the song. And so, you know, this is effectively the music videos of the 90s and 2000s that I'm sure you watched a lot of, and I certainly did, and you loved them. And
We kind of lost that a little bit. It doesn't, they don't have nearly as much cachet as they used to, but I think we can bring that back. It's not obvious to me that it's long form. It might be short form. You know, I think riffs on songs are a good way to do that as well. Like it or hate it, TikTok has made some really interesting strides in that direction of like, you can speed up a song and put a dance to it. And this actually, in many ways, is able to elevate the song slightly. So I think there's a lot cooking in the background. Nothing to share just yet. Okay. Stephen Huang says,
it'd be interesting to know what percentage of active Suno users are professional musicians versus total beginners. It's a little hard to actually pin down the numbers. There's a huge spectrum because there's also a ton in between of, you know, Grammy winning producer and total novice. But the vast majority of our users are people who are finding music making for the first time, who were otherwise creative in another domain and who found this and it's an amazing outlet for them. Here is a question from ProSense.
When can artists jam live together with Suno? Oh, man, I really, really want this. Unfortunately, ProSense, no time soon. The human ear is so sensitive to small amounts of latency. And even just going over the network, an Internet connection is going to be too slow. This is a dream of mine. We will figure this out at some point. I just can't promise it. Hold on. This is not something that even exists on other platforms. Like there's no live connection,
sort of real time jamming? It would have to be running on your computer in order to be fast enough that you can jam along with it. Unless you want to do something like cut a backing track with Suno and then, you know, solo over it or something like that, which tons of people are doing on social media, in fact. But a truly interactive one, a co-pilot that is the
same way you and I might play and listen to each other and adapt what we're doing. That is out of the realm of what we can do just yet. I guess maybe the next best thing is what we talked about earlier, this turn-based idea where it's asynchronous: I make a move, you make a move. In fact, I wonder if you could work in almost like challenges. Like I make a move and I challenge you to add in vocals, right? And it's like, yeah, you challenge me to put a new top line on. I don't know. I just think that's the future. All right. This next one comes from InsightMent.
When will they expand to more modal input? I assume that means like not just text, also audio and video and pictures. And we have that now. It's a little rudimentary. I think you'll see a lot of new functionality around that, especially in audio. You know, all of these things, what does it really mean to, quote unquote, prompt a model with a picture or, quote unquote, prompt a model with a little audio? Is it to extend it? Is it to somehow incorporate it
into the song in another way? Like, is it, I'm gonna kinda hum you the hook and then make a whole song around it? So I think you'll see us start to play a lot more with the types of things you're able to do with, for example, a little clip of audio that you hum into your phone, and then you can really adapt it and turn it into a whole song. - How has that been working out? You have photo input, you have voice input. How's it working? - Yeah, for photos, we have a good initial version out there right now. If I'm being honest, the true potential of that thing comes when it's able to catch
moments much better than we do right now. And I think that there's some product changes that you'll see us make in the near future that will really nail it. You know, just as an example, I think that the inspiration to capture a photo
probably already happened, and that you should be going, for example, to your photo reel and not opening your camera. Yeah. And you should be sharing something that you otherwise would have shared, and you can use music to elevate that experience, instead of kind of trying to force something into being shared that you otherwise wouldn't have shared. I wonder if you could almost do that automatically, right? If you give Suno access to your photo library, then at the end of the day Suno looks at all the photos you've taken that day and it's like, hey,
Here's like the soundtrack to your day. 100%. You know, Apple will do this, for example. Yeah, exactly. And just imagine how much harder it will hit when the song is relevant and not some kind of like cheesy piece of music. Yeah. And maybe the lyrics like tie into, you know, your day at the beach. You know, your phone knows so much about
you. You know, and it's about what access the user feels comfortable giving to really, you know, make that truly personal piece of music. And I think that this is a type of personalization that people are actually sleeping on a little bit. And it's not just like the sounds were exactly what I wanted, the genre was exactly what I wanted, but it's actually like, it's about me. It's about what happened to me today. I wonder what other pieces of context from the device you could put into the model for that level of personalization. Like
I don't know, lat long and GPS coordinates, or, you know, the accelerometer. Are you driving? Like, does it know you're driving, right? Or am I working out? Workout music. Yeah, exactly. Yeah, totally. Maybe you have your Apple Watch and then we know your heart rate. Exactly. That would be really cool. I know. I think Spotify a long time ago did like a running feature where you run and they'd only pick songs that match your heart rate or your pace or whatever.
Using music as a tool to elevate things and to be far more engaged than the average music listening experience is right now. Yeah. This question is: is Suno considering the integration of on-chain technologies like NFTs into their platform?
This goes back to what we were talking about before, where you're going to be riffing a lot on other people's music. You are going to be extending it, covering it, remixing it, et cetera. Music will be far more social. You know, another word for social or some of the social dynamics would be peer to peer, for example. And so I think that there's just going to be a lot of progress that gets made here. Blockchain actually is a natural way to try to keep track of
of a lot of these edits. And I think that while nothing is about to happen, I think this is a huge part of what the future of music is going to look like. Why do it on-chain? Just for, like, accountability and, like, trust? Imagine I'm an artist and I want to put something out there, and I want to be able to have proof that I can take some amount of royalties on all of the things that get remixed. So go and make as many remixes as you like and
I want some kind of royalty for that. Like, this sounds like a smart contract to me and otherwise might be actually very, very hard to do. So either it's the honor system, or sometimes you're going to want things that are verifiable. Like, I was the first person to make this piece of content. I was the first person who made all of the content upon which all of this other stuff was derived. So I think blockchain is a natural set of technologies for that. Very, very early here. And we did get a flavor of this with the whole NFT movement.
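The royalty idea Mikey sketches can be made concrete with a toy model. This is plain Python, not an actual smart contract, and everything in it (the names, the 10% share, the chain-walking rule) is made up for illustration: each remix keeps a pointer to its parent, and a payment walks up the remix chain with each ancestor taking a cut of what reaches it.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A piece of content, with an optional parent it was remixed from."""
    track_id: str
    creator: str
    parent: "Track | None" = None
    royalty_share: float = 0.10  # cut each ancestor takes of what flows past it

def split_payment(track: Track, amount: float) -> dict[str, float]:
    """Walk up the remix chain, paying each ancestor a cut of the remainder."""
    payouts: dict[str, float] = {}
    node, remaining = track, amount
    while node.parent is not None:
        cut = remaining * node.parent.royalty_share
        payouts[node.parent.creator] = payouts.get(node.parent.creator, 0.0) + cut
        remaining -= cut
        node = node.parent
    # Whatever is left stays with the creator of the remix that was purchased.
    payouts[track.creator] = payouts.get(track.creator, 0.0) + remaining
    return payouts

original = Track("t0", "alice")
remix = Track("t1", "bob", parent=original)
remix2 = Track("t2", "carol", parent=remix)
print(split_payment(remix2, 100.0))  # carol keeps most; bob and alice get cuts
```

The point of doing something like this on-chain, as the conversation notes, is that the chain of derivation and the split rule become verifiable by everyone rather than relying on the honor system.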
Right. I mean, there was a lot to dislike about that whole movement. But like, you know, there were these projects, people created these NFTs, and then every time the thing got traded anywhere down the line, the original creator got a royalty. Right. It just happened. That's right. So, yeah, maybe there's something there. I think there's a lot there. And open source, you know, Justin Blau just open sourced a piece of music, which is really interesting. He made a GitHub project,
and he put the project file from Ableton into that GitHub repo and he said, "This is free to remix and if you wanna submit a pull request with the remix, I will accept it." - That's pretty cool. - I think people are not thinking forward enough about the new ways that people can interact through music. - Yeah.
And you could probably enable a lot of that directly on the platform, right? Again, going back to this turn-based thing, you know, maybe a total stranger takes my song and does some sort of remix and then...
you know, pushes it up to my repo or whatever. And maybe we both share in the incentives of that. A hundred percent. A hundred percent. I think about this from a product perspective. We want to make this stuff really easy for people. And so much of it has to happen on platform while at the same time, we don't want to prevent you from doing something off platform that you may want to do otherwise. Yeah. Yeah. Because you want to foster like a true sense of ownership and creativity and music is for everyone. Yeah.
Okay. Camille Rosinski has a couple of interesting questions. So maybe I'll tick through a few of them. How do you develop AI's sense of taste, especially in creating music? This word taste, I feel like keeps coming up in the discussion about AI. If anyone can create anything, like how do you value stuff? Oh, well, you got to have taste, right? Yeah. Now, obviously, like everyone's saying that they have taste, even though they probably don't. So yeah, how does the AI develop a sense of
taste? This is a fantastic question. It is far understudied, in my opinion. You know, just to contrast it with lots of other things happening in AI, where there will be objective answers that are somehow verifiable. And so in lots of the reasoning based things, lots of question answering things, lots of stuff that OpenAI and Anthropic and others are working on, there are verifiable things. And so it is much easier to say yes or no, the model is doing a good job or not doing a good job. And music is completely subjective. Music is
I think there exists good music and bad music, but what counts as good and bad differs from person to person. You and I will listen to the same song and disagree. You may think it's really good. I may think it's really bad. And so it's totally understudied how to align these models to human tastes. We have some techniques. It is totally not obvious that the techniques that you use to align reasoning models should be the same techniques that you use to align taste models. We have some tricks,
you know, without giving them away. I think that you will see massive improvements here in the future as progress gets made. You know, again, I would just think like audio in general is a year or two behind text. And so think about what alignment of models looked like in text one or two years ago. It was kind of really in its infancy. And so look out for
models to have way better taste. Look for models to be able to really understand you and personalize things to you. Look for models to be able to make something that, let's say, two people are going to like and other people won't. So tons of low hanging fruit there.
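One standard way to turn "this listener preferred song A over song B" data into a taste model, offered here only as a generic illustration of the kind of preference-data technique the conversation gestures at (Suno's actual methods aren't disclosed), is a Bradley-Terry style pairwise fit:

```python
import math

def train_scores(pairs, num_items, lr=0.1, epochs=200):
    """Fit a scalar 'taste score' per song from pairwise wins (Bradley-Terry)."""
    scores = [0.0] * num_items
    for _ in range(epochs):
        for winner, loser in pairs:
            # Model's probability that the winner beats the loser
            p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Gradient step on the log-likelihood of the observed choice
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    return scores

# Toy data: listeners consistently prefer song 0 over 1, and 1 over 2.
prefs = [(0, 1), (1, 2), (0, 2)] * 10
scores = train_scores(prefs, num_items=3)
print([round(s, 2) for s in scores])
```

The appeal of this family of methods for subjective domains is exactly the point Mikey makes: you never need a verifiable "right answer," only which of two outputs a person preferred, and the same machinery underlies much of the reward modeling used to align text models.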
You mentioned reasoning. Is there an equivalent of a reasoning model for music, something with multi-step, you know, chain-of-thought reasoning? Have you guys thought about that yet? We've thought a lot about it. You know, so again, the fundamental problem here is that when you don't have a formally verifiable thing, all agentic type behavior gets very, very difficult.
And so if you're going to have something that can do multiple steps and try to correct itself based on some other model without a human saying yes or no, you need that thing to be objectively true or not true. You know, it's like you made the right decision or you did not make the right decision. This does not exist in music, right? So, you know, a person could sit there and say, you made the right decision for me or you didn't make the right decision for me. And then you'd have a fairly non-scalable way of aligning
a model to some human taste. But I think that a lot more is possible there. Yeah. I wonder if, could you have models that are maybe...
perfectly trained on maybe a genre of music or a certain vocal style. And maybe that's like another model that's sort of providing that reasoning layer and saying you did it or you didn't do it. There's lots of games to play there. And then how do you use, let's say, the vast trove of either explicit or implicit preference data that your users have given you. So again, we are really, really early. I'm really glad that we've been collecting kind of preference data on stuff
for a while now. Since the beginning, right? Yeah, since the beginning. So this is a huge asset for us. Yeah. All right. Another one from Camille. Camille says, how can we make music more social with Suno? Music to my ears. I think there's nothing wrong with just putting music on yourself and listening to it in the background, but much more is possible. And I think that encouraging people to make music for other people and share it, finding songs that
particular smaller sets of people may find interesting and not other people, kind of like a Facebook group type of a thing. All of these things, again, are completely underexplored, and it's just about doing them. And I think that the first time that you make a song and you send it to somebody that you think has good taste and they like it, you will get this validation and you will be hooked. And you'll be like, oh, this is amazing. I want to keep doing it. I want to keep making music. I want to keep sharing music. I want music to be a bigger part of my life. And so in some sense,
making music more social is like the most important thing for the mission of the company because that social aspect is what, in my opinion, is missing to make music much more valuable than it is today. Yeah. He also asks,
Any prompting tricks? Oh, I'm not the best person for that. That's a great question. I am not the best person. I think, you know, you mentioned before, don't use too many descriptors. There's kind of an optimal length. You know, give it a couple of genres, maybe three genres and three other descriptors, something like that. I think the best prompting trick that I can give you is to listen to a lot of music on the platform and look
at how people prompt it, because you are going to learn far better from a large community of people than you will from me. One thing I've seen in the prompts, which kind of blows my mind, and I've never asked you about it, is sometimes in the lyrics, they put these weird characters in there. They put in like hyphens and swirlies and things like that.
Does that influence the model in any way? Sometimes, yes. Sometimes, no. Sometimes that's actually edited after the fact to try to be more eye-catching. So it's kind of users hacking your platform. I think that characters that we don't know will kind of get ignored. So without actually looking at it, it's hard for me to say one way or the other. But again, look for us to make big changes in the near future about...
how easy it is to describe music in kind of a long and convoluted way and get something out that matches your description. - How are lyrics going? I know a lot of the early knocks on these large language models is that they were not creative. Like you tried to push them to be a certain type of creative and they always just came up with this cringey, corny stuff.
How have you dealt with that? Are the lyrics getting better? If so, how? With V4, we made a big leap in terms of lyric quality. And this was just a bunch of hard work. Incidentally, DeepSeek is actually more creative than the big US providers. You are asking the wrong person as to why, just empirically. It is...
It seems less stilted is what I want to say. It seems like maybe they didn't do as good a job of aligning it to, you know, what OpenAI and Anthropic are trying to align their models to, which is like very factual, very matter of fact. It's like a boring lawyer, you know, no offense to the lawyers out there. I think, again, this is a huge area that there's just a ton of low hanging fruit of making these models actually make sense.
better lyrics. And so we've done a few tricks, both with prompts and chain of thought and using the right models, et cetera. And I would look for us not just to have the lyrics come out perfect every single time, but look for much more co-pilot or co-writing experiences, so a human can feel much more ownership over it, because it will always come out better when you can say, I like that, I don't like that. Help me get over this little bit of writer's block, et cetera, et cetera. Yeah, maybe it comes back to that co-pilot analogy.
Exactly. You've got this songwriting assistant that's sitting with you, and you're workshopping it together and it's teaching you, right? And, you know, a lot of this is like anything else. It's stuff that you get better at just by practicing and doing it more and more and having a teacher. You know, much of the benefit of having a teacher is just the commitment to just keep doing it and doing it and getting better and getting better and refining your taste and refining your taste. Obviously, a teacher can do more than that.
But like at the very least, that will actually get you improving a ton. Yeah, actually, on the teaching note, another question was submitted to me, not through tweets, from Anthony Delia, who was asking, would you ever take the music that's coming out of Suno and enable features that then teach people to do the human version of it? Right. Like spitting out guitar tabs.
Or spitting out the sheet music, right? Is that an opportunity to have almost like this educational angle to Suno? Freaking love that. Yeah. Right? You know, this harkens back to how composers used to write etudes, which are pieces that are meant to be practiced because they teach you some particular skill with your hands, some particular hard motion. And so it's like,
yeah, we can do this, and then we can give you the tabs or we can give you the sheet music or whatever. I love that. No, no, that's the first I'm hearing of it. It's a great idea. Well, I mean, I could see it. You mentioned that a lot of producers use this, and we know that producers often use Suno to get inspiration and then take that music and go recreate it in the studio, or hire an orchestra to play the string part. Maybe Suno just spits out the entire sheet music for the orchestra and then they take it right into the studio. I think that would be amazing. You know, another way to say that is don't sleep on
the other side of AI, the non-generative side of AI, you know, there's still like a ton of good work to do there. - Yeah. All right, last one from Camille. What's the 10 year vision for Suno? What's 2035, what are we doing? - We are doing a lot more music. Music is a way bigger part of your day. It is way more social than it is today. You are creating a lot, you are sharing a lot, you are editing a lot, and music is way more valuable to you than it is today.
Maybe not you; you are a musician, you are a former Spotify employee, you are a former founder in the audio space. But for most people, music is not valuable enough to them. And I think that if we're successful here, the product is a platform where people can come to have a whole bunch of way more engaging and pleasurable musical experiences. I think in 2035, there's probably a whole section dedicated to kids.
Music is a huge part of development and education in kids. So I think the sky's kind of the limit here. And I think that it is kind of a failure of imagination on a lot of people's parts to just think that the only way music should progress is that we should just keep making however many...
pop songs there are and however many popular songs there are every year, and that the interaction patterns are just going to stay the same. But when you say more valuable, does that mean like I pay more for it? Yes, you should be willing to pay more for it. You know, if you want to be cold and hard and calculating about it, the amount that you are willing to pay for something is a measure of its value. And if we do things right, people will be willing to pay a lot more for music than they are today. And will my, not me specifically, but a person like me, will my listening habits shift from the places I'm listening to music now and start shifting more to Suno? Like, how does my day look different? You know, when you say it's more social, what does that mean? I think you are making and sharing way more music than you do today. Maybe you don't make and share music at all today, but I think in the future you will. I think you will listen to
music more intently, probably because there is more short form music out there, but not only. I think music will progress a lot faster. It will evolve a lot faster than it does today. And so just keeping up with all of the interesting sounds and songs out there will be really
engaging, but it will also just require more time. Having music evolve more quickly will take more time to actually keep up with, but it will also be more engaging to keep up with. And so there will be just like a tremendous amount of interesting and, I actually hope, weird content that people start to enjoy. I want to hear weird stuff. Yeah. I also always just want to hear weird stuff. It can be hard to find.
A couple of quick questions for me around building an AI startup in 2025. Again, last time we talked, it was much earlier days for Suno, much smaller team. Now the team's quite large. You know, you're generating lots of revenue. I mean, what's that trajectory been like? Like,
I don't know, just give us like a quick recap of what it's like to be a CEO of an AI company over the past 18 months. It is a lot of fun. Yeah. I have a really cool job. I get to work shoulder to shoulder with great people. I get to do music.
as my day job, which is really fun. Every day is different, and that's also really exciting. You know, I wonder how I will look back on it in another year. But if I'm being honest, the last year feels like a total blur. And that's great. You know, I think I've probably interviewed, I don't know, 200 people since we spoke last, which I love. I also do my best to
keep my ear tuned to the correct extent to all of the hype out there, which is to say probably less than the average AI person. And I think that it can be quite distracting. There's a lot of hype. And for example, the weekend when everything went bonkers with DeepSeek, I mostly didn't pay attention to it. I can get it digested for me
you know, two days later, and nothing bad will happen in that case. Something like that. What else is overhyped right now? AI is definitely overhyped. Am I allowed to say that on this podcast? AI in general? I think AI in general. Like, when you say AI, you mean this notion of AGI or ASI, or like the future? I think AI is a tremendous part of the future. I'm not even saying the number of dollars getting pumped into it is incorrect, but I think that there's so much hype and urgency around it that doesn't need to be there. Yeah.
If you were thinking from a capital allocation perspective, I don't think anything is necessarily out of whack there. If you think about it from a minutes-spent-on-Twitter perspective, I think probably there are things that are off. What's underhyped right now? Taste in general. I know you said everyone likes to think they have good taste, but actually studying taste and having good taste. You know, judgment is another word for taste. And I think it's underappreciated how important that is. And I think that will get even more true.
If you weren't building Suno, is there some other AI startup you would want to be building right now? Like an existing one or a different one? Like, if you had thought of some opportunity that you could pursue with AI, would you be like, oh, that would be a great startup that I'd love to pursue? Maybe this is too spicy, but I think that
I used to be a physicist studying quantum computing, and I think there are probably still more things that people think they can do with a quantum computer that you could do much more easily with AI, at least to an approximate degree. And that's an interesting thing to go study. Give us an example. Maybe a lot of these simulation type things. Actually factoring numbers, I don't think you can do, but basically everything else you could probably figure out how to do. That's pretty fascinating. Yeah. But I like my job. Don't worry. No, I'm not trying to tempt you. Don't worry. What's it been like,
you know, building a team? You've never had a team this big, this quickly. What's that been like? And how do you orient the team around a future that's somewhat unknown? These models are getting better all the time, so you don't really know what you can build with them until they exist. How have you managed that with the team? I don't have a lot of strengths, but one of them is that I actually like interviewing people, and that makes my work seem much less tedious given how many interviews I do. And the best part of the whole endeavor is the team. That's why we built an in-person company: working shoulder to shoulder with people you actually like working with. How do you get people aligned? That actually starts before they even get hired. A lot of it is just hiring people who are actually excited about
the future of music and who are thoughtful about the future of music. The buzzword here would be mission-oriented, and I don't think I really understood what that means until having to construct a team and making it a really important hiring principle: we are hiring primarily people who have thought about what the future of music should be and how technology can enable a better future, and who are not agnostic or music decels, if you want to use a weird term. But what about this model thing? Like,
How do you build a product roadmap and a product strategy if you don't know what the model will ultimately enable? You kind of have to pretend that you can do anything and then, you know, hire amazing people who can build anything. And to take this from first principles, this is not how it operated in the early days. Early days, you're just kind of
willing it into being, and your model is your entire product. And then you realize, actually, we can build anything, so let's be very thoughtful and very intentional about what we build. That's a total cultural shift for a company. It goes from everything just being, I'll use a bad buzzword, a wrapper around your model, to the correct way to do it once you have it working: let's think about what the amazing experience is that we can deliver to users that they will not want to put down. And then we have to
have our amazing machine learning team figure out how to actually make that a reality. - On the topic of DeepSeek, and also we talked earlier about defensibility in AI startups, how valuable is a model at this point? How core is the model to the value of the company? And is it decreasing in value over time? - It's hard to know. At first thought you might say it's going to decrease because the moats are going to be things other than the model. - Right, network effects. - But the other way to think about it is that the model is the means to an end right now. And so if there were a way to deliver people
the pleasurable experiences that we do without a model, then the model would not be worth anything, except there isn't. And I don't know of any possible way you can think about doing this except to have an amazingly powerful machine learning model that is able to generate essentially arbitrary music. Yeah, a little bit of a chicken or the egg. It's a little bit of a chicken-and-egg problem. And, you know, in some sense, that is not what we spend our time thinking about. We spend our time thinking about just
shipping product that people are gonna love. - Outside of music, what AI products or models or tools do you use in your day-to-day? - I use a bunch of... - A ChatGPT guy or a Claude guy? Or can I say both? - I use both. - I can imagine that in the future you're probably using a bunch of these things, you know, and paying for all of them. You have like 10 different AI subscriptions? - No, I have both. I have subscriptions to both of those. You know what, that might be it.
- Really? - You know, I don't get to code as much as I want to, so I don't use Cursor, but tons of people at the company use Cursor. Granola. - You use Granola. - Granola's good, yeah. - Nice. - That's an AI product. - Shout out Granola. - But that's a product. - Yeah, yeah, yeah. - That's not a model. - Right. - That's a product. - Okay, so what other products do you use?
I mean, AI is kind of permeating everything, right? It gets into Figma, it gets into Slack, it gets into Gmail. And I think the correct way to think about this is not "what AI products do users like" — users almost shouldn't know that it's there, right? It should just be helping me. Yeah. Awesome. Mikey, this has been great. Thanks so much for doing it. So much fun to be here. Thanks for having me. Yeah. Until next time. Thanks.
Thanks so much for listening to Generative Now. If you liked what you heard, please rate and review the show on Spotify, Apple Podcasts, and YouTube. And of course, subscribe. All of that really does help. And if you want to learn more, follow Lightspeed at LightspeedVP on X, YouTube, or LinkedIn. Generative Now is produced by Lightspeed in partnership with Pod People. I am Michael Mignano, and we will be back next week. See you then.