Welcome to Practical AI, the podcast that makes artificial intelligence practical, productive, and accessible to all. If you like this show, you will love The Changelog. It's news on Mondays, deep technical interviews on Wednesdays, and on Fridays, an awesome talk show for your weekend enjoyment. Find us by searching for The Changelog wherever you get your podcasts.
Thanks to our partners at Fly.io. Launch your AI apps in five minutes or less. Learn how at Fly.io. ♪
Welcome to another episode of the Practical AI Podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my co-host, Chris Benson, who is a Principal AI Research Engineer at Lockheed Martin. How are you doing, Chris? Doing very well. Looking forward to talking some fun stuff on this beautiful spring day. So, yeah.
Yes, yes. Well, I've always hoped that AI could make me a superhuman. So really, really excited to hear about maybe something in that realm today from Loïc Houssier, who is head of engineering at Superhuman. How are you doing, Loïc? I'm good.
I'm doing great. I'm super excited to chat with you guys. And you've had a pretty humbling set of guests in the past, so I'm super happy to have this opportunity and discuss at length. Yeah, that's awesome. Well, I know this is kind of interesting because I know Superhuman...
I think was one of, you know, maybe the sort of first, you know, really integrated AI first kind of engineering tools that I remember seeing. And of course, the AI space has advanced a lot in that time. Maybe could you give us a little bit of a kind of
state of AI in email, or productivity more broadly, if you want to think about that. But really, I mean, obviously we're going to talk a lot about email and messaging. So could you give us a little bit of a sense of what that landscape looks like right now, and kind of how Superhuman fits into that? Yeah, totally. And it's an incredible time that we're living in right now.
Of course, everyone was shocked when we had the first version of those LLMs doing some crazy, crazy stuff: analyzing text, summarizing, doing all sorts of magic. And of course, email is all text-based, for the most part.
And so it was like a really nice test bed to try out all the cool stuff that you can do. And interestingly, that's also helped this category, like the email client category.
to thrive for quite some time. Superhuman was almost the only one supercharging Gmail and Outlook. We were the only one in the space making people faster going through their emails and all of that. And with the rise of LLMs and agents and everything, now there's a bunch of people that are like, oh, damn, this is a great, great, great environment to play around in and to make things better.
And right now, this is what we see. We see a bunch of, I would say, other tools trying to do stuff with LLMs and to create a better experience for email. And this is indeed...
an interesting time for us, because this is the proof that that category needs to exist. It was existing before, but we were the only ones there, and now more and more people are getting there, showing that there's deep interest in it. And it's challenging, and it's super interesting. We'll probably talk about it, but it will also
help us understand what makes a good product: is just the LLM and AI sufficient, or do you need some sort of secret sauce on top of it? Happy to discuss it. I'm kind of curious, following up on what Daniel was saying: with you guys being so early into the space...
And obviously, you know, not only LLMs, but just AI in general has been going at light speed, you know, increasing steadily over that time period.
How has the space changed for you guys, from being kind of the early, only player in the space, to where there's others, you know, it's becoming somewhat congested across not just the space you're in, but just, like, everything? How has that changed the world for you guys in terms of staying differentiated and all that? Yeah. So it's very interesting, because there's multiple dimensions that we can talk about.
The first one is, the rise of those AI features and capabilities is bringing a new set of features that you can implement that you couldn't do in the past. In the past, AI was mostly classification, like adding labels and stuff like this. And that was kind of the limit of what AI could do, really, for everything that is text-based. So typical classifiers, typical...
models like this. And more and more now, you can do some intelligent stuff. So we moved from a place where
we were making things faster for our users, compared to Outlook, compared to Gmail. But now there's more that we can do, so we can make things smarter, which is probably a paradigm shift in terms of the value that we're creating for our users. The other dimension is that this is raising expectations for users.
Like for a long time, they were like, damn, this is so fast. And I'm winning like four hours a week to go through my emails. But everyone is used to ChatGPT. Everyone is used to Perplexity. Everyone is crafting images, or even movies with Sora and all of that. So the level of awareness and the level of understanding of what the technology can do has risen dramatically. So for our users, the level of expectation is like, hey, Superhuman,
I expect this now. The other dimension is, from an engineering standpoint and a building standpoint, our tool set is totally different. The tools have changed, and for engineers that were working a certain way three years ago, even two years ago, even six months ago, right now the tool set and your flow and all your setup to work has dramatically changed. And maybe
the last dimension that I think is really tricky to apprehend is the perceived quality. Superhuman was seen and built on one single dimension: it's high quality. We were in charge of the quality, because we master everything. So you can have a zero-bug policy. You can take the time to deliver the value, but it needs to be perfect. And now with LLMs,
a bunch of the perceived quality depends on your prompt. So you have users that are prompting with different levels of skill, and the outcome of that prompt may be perceived as low quality, but that's something that is really hard to control. And it's creating something that is
sort of mind-blowing from an engineering standpoint. I mean, we've all been working in tech, and for the craft, the bugs and everything, there are some processes to limit the number of bugs. But now, quality is not only bugs. It's also this perceived quality based on the user. And that's an interesting thing to tackle. And I'm curious, as you...
You kind of mentioned the fact that with some of the prompts, you have different users, skill levels and stuff like that. Could you talk a little bit about how you tackle that? This is one of those interesting things from my standpoint to hear about, where there's all these little gotchas in this world
that a typical person isn't ever going to have thought about ahead of time. And it's one of those things where prompting itself is fairly diverse in terms of the skill set. Can you talk a little bit about, like, how do you deal with that when you're trying to put together a product and focusing on the quality issues and stuff like that? Because I'll be honest with you,
that would not have occurred to me to have to think about addressing that kind of issue. Could you talk a little bit about that? I will tell you about one specific feature that we released in Q1. So we have those auto labels, automatic labels that will basically flag
your emails, and based on the label, you can decide to skip your inbox altogether. Typical stuff: random pitches from a company that wants to get in touch with you to sell their product. I receive probably like 30 of them every day. Do I want to take a look at those 30 and answer all of that?
Probably not. So I'd love them to basically be skipped altogether. So for those, we build classifiers that do not rely on user prompts so that we control the quality, precision, recall, like the typical stuff. But we also allow our users to create and craft their own labels.
Let's say you want to have, like, oh, I want all my podcast invitations to have the same label. But you cannot just have a deterministic rule to say it, because I don't know all the podcast people and everything. So you cannot just do the filter like Gmail would do, where you say if this, then that. So you have to prompt it, and you have to basically allow the user to craft a prompt that will surface all of those.
But then that prompt is tricky, because if you have someone that is putting in just a one-liner, you start having some issues, because the precision and recall based on the one-line prompt is not great. And we know, as I guess your audience has been working with ChatGPT or prompts in general,
the more structured and extensive they are, the better the result. And there's a bunch of hallucinations that can happen if it's just a one-liner, because of the lack of context and all of that. So of course, you do some system prompt to surround this user prompt, to try to avoid
too many issues. But there's also a part of education that you need to have. And we're working on this now, which is like, huh, your prompt seems interesting, but probably you want to structure it this way. So there's some stuff like this that we will be working on. Also sharing prompts, like libraries of prompts, is something that we're thinking about more and more, because
not everyone is able to craft a nice prompt, and maybe someone in your team will have done a prompt that you would happily use if you get access to it. So it's sort of, I mean, it's very product-centric, not AI-centric, and you need to work around this new problem.
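For listeners who want a concrete picture of that guardrail pattern, here is a minimal sketch of wrapping a user's one-line label prompt inside a structured system prompt, assuming an OpenAI-style chat completions API. The prompt wording, model name, and MATCH/NO_MATCH scheme are illustrative assumptions, not Superhuman's actual implementation:

```python
# Minimal sketch: wrap a user's one-line label prompt in a structured
# system prompt so a vague description still classifies with decent
# precision. Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_TEMPLATE = """You are an email labeling assistant.
The user wants a label named "{label}" applied when an email matches
this description: "{user_prompt}"

Rules:
- Answer with exactly one word: MATCH or NO_MATCH.
- If the description is ambiguous, prefer NO_MATCH (favor precision).
- Judge only from the email content below; do not invent context."""

def matches_label(label: str, user_prompt: str, email_text: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_TEMPLATE.format(
                label=label, user_prompt=user_prompt)},
            {"role": "user", "content": email_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip() == "MATCH"

# A vague one-liner like "podcast stuff" now runs inside guardrails
# instead of being the entire prompt.
if matches_label("Podcast invites", "podcast stuff",
                 "Hi! We'd love to have you on our show next month."):
    print("apply label: Podcast invites")
```

Biasing toward NO_MATCH fits the skip-the-inbox use case: a missed label is a minor annoyance, while a wrongly skipped email is a real failure.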
I wish we had a silver bullet, an answer to that problem, but I think we are learning as we walk. But it's super interesting. I'm wondering... I'm always intrigued by... I read a book by Richard Hamming, and one of the things that he talks about is how, if you rethink a process that was very human and manual before, often the way that you would make that an augmented or machine-driven process is very different
from what the original human process would look like. And I think in the email client,
we all sort of expect a certain process, a look and feel to the email client, that's developed over time. What have you found in terms of presenting an email client to a user that is drastically different? What sort of needs to be preserved? What's kind of up for grabs in that experience? What should stretch the user? You know, how do you think about that?
That's a really interesting point, because we are at that moment where the user interaction with a computer, with the system, is dramatically changing.
Like people don't expect to click in different windows anymore. The expectation is different. With ChatGPT, or I would say the other clones from different providers, you basically have a chat box and you ask everything there. Even if you're working on a document, you ask
in the chat box: oh, what do you find in my document? Rewrite my exec summary. Oh, make my tone a bit more like X and Y and Z. You don't expect to have a button, like Word would have, like Microsoft Word back in the day. And we are only at the beginning of this shift. So I think that, and it's kind of coming back to
competition and all of that: the barrier to entry to pretty much any SaaS application, or even a consumer application, is very low now, because it's very easy to at least build a POC. I wouldn't go
further than that. And what will make the difference is the product taste, and how well you understand your users and their interactions. And this is where I feel pretty proud to work at Superhuman, because our CEO is a freak in terms of user interaction and vision, and is already thinking about that and how the future of interaction will be. And
It will change. It will be different. So what will stay? What will be slightly different? I'm pretty sure that the conversational aspect will be a strong paradigm. Like right now, you don't talk much.
whether it is through your keyboard or through a mic, you don't really talk to your system. You don't talk to the application. Maybe you start talking with ChatGPT because they have this nice voice interaction. Maybe you use Wispr Flow or these types of tools to basically write your email or write your messages in Slack. But you're not exactly commanding the device to do things as you talk just yet. But more and more people are doing so.
I probably talk to my computer now more than I type, interestingly. So there's a change. And everything that we've done in the past was mostly click and click and click. Superhuman started with the command key and keyboard-centric access to things, for people that really wanted productivity, because switching with a mouse
is pretty slow. And now more and more people are starting to engage with the voice. So all of that will change the way you think, the way you surface the data, the way you interact with the data, the way you bring the focus. So this is an interesting area. One thing that I do believe will stay though, to your point, Daniel,
and I'm talking about email especially. The concept of an inbox, the concept of having some sort of timeline of things that you need to go through and get rid of the stuff that is top of mind, some sort of task list to some extent, will stay. Now, how it will be surfaced, how you will go through it, will dramatically change over time. And we're already seeing this.
Okay, friends, build the future of multi-agent software with Agency, A-G-N-T-C-Y. The Agency is an open source collective building the Internet of Agents. It is a collaboration layer where AI agents can discover, connect, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose, integrate,
and scale multi-agent workflows. Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services. No strings attached. You can now build with other engineers who care about high-quality multi-agent software. Visit agntcy.org and add your support. That's
agntcy.org.
So as we were going into the break, we were talking about, you know, the notion of rethinks. And I'm kind of curious, as you're thinking about not only the rethinks, but also having to respond to the evolution of the technology itself that's available for your teams to implement stuff.
And one of the things that we've seen over time is kind of, you know, it's not this smooth increase. You may have evolutionary increase in the model capabilities for a bit, but you also have these jumps that'll occur along the way.
And with your product teams, as you're looking at what the future of your products is going to be, and you hit these moments where it goes from predictable improvement in the models to one of these jumps, how does that affect the product development cycle that you have internally? As we were talking about rethinks, do you have moments where you kind of go,
Maybe it's time for kind of a deliberate rethink, because something just happened in terms of the technology capability that we weren't expecting last week, and we're going to do that. How do you guys handle being in that kind of an industry? No, it's very interesting. So, Daniel, you were mentioning a book, but one book that comes to mind as you're asking this question is Zone to Win by Geoffrey Moore. And it talks about continuous innovation and disruptive innovation.
And this is probably what we're talking about. Like we continuously innovate and we continuously add more features and new stuff into the product. And sometimes you have this opportunity to provide something that is disruptive, whether it is like the underlying technology that is disruptive or because you have like some sort of a
wow moment, and you have someone, I would say, with a vision that is like, this is the direction to take, and we need to either pivot or do something drastically different. What we've seen, especially with AI, is the rate of those disruptive innovations.
I would say before AI, to some extent, with technical innovation, maybe once a year, once every two years, you'd have something that is brand new, and like, holy shit, I need to use this, and pardon my French. But what is interesting with LLMs is, every two weeks or three weeks, if you're not on Twitter, if you're not on Hacker News, you can miss the new big stuff.
LLMs, multimodal models, reasoning, MCP: all of that came in six months.
And all of that is coming with a new set of capabilities that you can decide to implement in your product. So to come back to your question, what is the impact on product development? How do you handle this? One, you better be agile, meaning true agile. You better be able to stop what you're doing
and say, wow, we need to sit down for a moment, because this is coming. What do we do about it? And that's why I love small companies to some extent: it's very easy to get everyone together. Listen, there's this new thing. We need to do something about it. Let's change the roadmap right now.
When you're in a bigger company, it's way harder to do it because you have your yearly planning that is like coming into quarterly planning and you have all those OKRs that you need to report on and everything. So like you need basically like almost a six months business plan to explain why you want to pivot and do something else, which is obviously not the case when you're a company that is of a small size.
Superhuman engineering and product and design is probably, I don't know, I don't have the exact number, but like 40,
maybe 50, but that's about it. That's the size where you can be super agile. You can stop everyone doing something because something is coming up and we need to focus on it. Of course, we could do better. If my engineers are listening to this podcast, they would say, Loïc, maybe you're caricaturing a bit. So probably I'm caricaturing a bit. And of course they're right. And of course they're listening to it. No, so it's
having this understanding that everything is changing right now. So you need to reassess your priorities almost every two weeks. MCP is coming. People are standardizing on it right now.
What do we do with it? Should we invest like crazy? Should we stop everything that we're doing? Or, I would say, do we still believe in the vision, and it's providing more value? You need to make those decisions every two weeks, if not every week. So being, I would say, a close-knit team
that is talking basically on a daily basis, to make sure that you're making the right decision, is key. And by the way, just for listeners, you may have heard MCP in there. We did an episode explaining what MCP is. So anyone who's not familiar with it, you should jump back a few episodes and check that out. It'll give you some context around that.
Yeah, thanks, Chris. And I'm sorry if I use, I would say, some jargon. Jargon's fine. We always try to jump in and point people to it. So this is perfect. And I think, kind of looking forward, one of the things
that I'm really curious about is, well, we've kind of tackled some of the bigger issues of AI and email, but I'm kind of curious, you know, if we dive down into specific functionality at Superhuman,
how do you see maybe the most useful AI email functions that you're currently either releasing or thinking about going forward? You know, when you get kind of granular on the product, how are you starting to think about that? I would just mention the feature that all our users are basically talking about, because they just love it; it's a feature called Auto Draft.
You receive an email as part of a thread, someone is asking you some questions; or you send an email basically saying, hey, can we meet next week or whatever, and after two days you don't have an answer, and you usually want to bump that back into their inbox. We built this feature where we create those drafts for you, ready to be sent.
It's not mind-blowing in terms of usage of LLMs: you provide the context, you use the tone you have with that person and everything, and you craft a draft that could sound like a good way to reply. And the results are just mind-blowing. The users find it so addictive because it's
relatively accurate, and it saves them a lot of time. It's just about saving time. Our users are mostly CEOs, CXOs, on the sales side as well, some consultancy firms. They live basically day in and day out in their emails. So every
10 seconds that you can save them in their day is a huge win for them, given the amount of email that they have. So this is one of those features that is super effective, even if it sounds simple.
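As a rough picture of what an auto-draft call might involve, here is a minimal sketch assuming an OpenAI-style chat completions API; the prompt wording, model name, and data shapes are illustrative assumptions rather than Superhuman's actual implementation:

```python
# Sketch of an auto-draft call: thread context plus the tone you use
# with this contact go into one prompt, and the draft comes back ready
# to edit and send. All names here are illustrative.
from openai import OpenAI

client = OpenAI()

def auto_draft(thread: list[dict], my_past_messages: list[str]) -> str:
    """thread: [{"sender": ..., "body": ...}, ...], oldest first."""
    tone_examples = "\n---\n".join(my_past_messages[-3:])  # recent tone sample
    thread_text = "\n\n".join(
        f"From {m['sender']}:\n{m['body']}" for m in thread)
    prompt = (
        "Draft a reply to the last message in this thread. Match the tone "
        "of my previous messages to this person, answer the questions "
        "asked, and keep it short.\n\n"
        f"My previous messages (tone reference):\n{tone_examples}\n\n"
        f"Thread:\n{thread_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[{"role": "user", "content": prompt}],
    )
    # Surfaced as an editable draft, never auto-sent.
    return response.choices[0].message.content
```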
So Loïc, even with what you're just describing there, creating an auto draft per email, maybe an LLM call; doing classifications, auto labeling, maybe other calls; I don't know how many calls or chains of LLM calls are happening per email, but that could potentially be a lot. And if you do that for one email, that's fine. You do that for all my emails, that's more. If you do that for all the emails of thousands or hundreds of thousands of people, that's a lot of gen AI workload. How does Superhuman, as more of an
AI application company, think about that in terms of optimizing infrastructure, or gen AI use, consumption, hosting your own models, fine-tuning your own models, using smaller models? How do you think through some of that?
No, so this is a great question, and this is a real challenge to some extent, if not a problem sometimes, indeed. I guess my engineers are very much into the finance of it. They understand the cost of inference, the cost of the input, the cost of the output. They understand the difference between the different models. So we had to put in some sort of high-level principles to keep moving fast, so that they know
how to default, and only escalate if they have some questions. I will give you an example. If it's a new feature, we don't know if it will work or whatever, we're still testing, and we want it to be great, so we take the most expensive model. Then it's working and we have traction, and
great, good problem to have. And now this is the moment where you start thinking about optimizing the cost. Maybe you will switch to a cheaper model, maybe a more fine-tuned one. Maybe you would switch to a different type of model altogether. So for example, for the classification that we discussed, LLMs are okay with classification, but you can get the same quality way cheaper with a BERT type of model,
and the inference cost is a fraction of it. A fraction. So long story short, this is the way we provide the value to our end users: we try with the best model working, and we do optimization after the fact. Does that answer your question?
No, more generally, I think this is always the right approach: don't care about the cost right now if it's not becoming a problem, because you always want to provide the best experience. And if you don't have traction, too bad. Because the risk, if you try to start small because you're afraid of the cost, is that you will use a cheaper model, and the feedback from the users will be meh,
and they won't use your feature. And then you don't know if it's because the feature is, I would say, not well targeted, or if it's because of the model. So targeting the best, you have better answers and better insights.
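For a concrete picture of the "swap the LLM for a small model" step, here is a minimal sketch using the Hugging Face transformers pipeline; the checkpoint name is a hypothetical placeholder, since in practice you would fine-tune a BERT-family model on your own labeled emails first:

```python
# Minimal sketch: once a label has traction, serve it with a small
# fine-tuned classifier instead of an LLM call. The checkpoint name is
# a hypothetical placeholder, not a real published model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/email-labels-distilbert",  # hypothetical fine-tuned checkpoint
)

def cheap_label(email_text: str, threshold: float = 0.8) -> str | None:
    # Truncate long emails; BERT-family models have short context windows.
    result = classifier(email_text[:2000])[0]
    # Only trust confident predictions; below the threshold, apply no label.
    return result["label"] if result["score"] >= threshold else None

print(cheap_label("Hi! Quick question: do you have 15 minutes for a demo?"))
```

The trade described here is exactly that: a fixed label set served by a single cheap forward pass, at comparable quality to an LLM for plain classification.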
That was a really interesting answer from my standpoint. You explicitly called out getting to the feature by going with the best, the most expensive thing, and then pulling it back to whatever the efficient choice will be.
And once again, one of the things that we often call out on the show is the fact of software engineering discipline being applied, and the analogies to that on the AI side. So I really wanted to call that out, because I thought that was a great insight that you made there. And it has impact. I'm sorry to cut you off, Chris, but it has a significant impact on the way you build your application, because you want to be able to switch models, to switch
the heuristics associated with the output that you want to have. So you have to invest some time to have a way to do this switch relatively easily, and potentially do A/B testing with different populations to measure the difference in perception. Because again,
not everything is black or white. There are nuances of gray now in terms of perceived quality. So you need to have more of a statistical approach to understanding the impact of one model versus the others. And of course, we have internal evals and all of that to do our
own testing with our golden data set. But the reality is, we have a diverse set of customers and everyone is different. So we need to have a broader perspective than just relying on our own data set.
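A minimal sketch of what that broader testing might look like: an offline precision/recall eval on a golden set, plus stable A/B bucketing so different user populations can see different model backends. The backend functions and examples are illustrative assumptions:

```python
# Sketch: offline eval on a golden set, plus stable A/B assignment for
# online comparison. Backends and examples here are illustrative.
import hashlib
from sklearn.metrics import precision_score, recall_score

golden_set = [  # (email_text, expected_label) pairs; tiny illustrative sample
    ("We'd love 15 minutes to demo our sales tool...", "cold_pitch"),
    ("Attached are the board minutes from Tuesday.", "other"),
]

def evaluate(backend_name: str, classify) -> None:
    """classify: any callable text -> label (LLM-backed or BERT-backed)."""
    y_true = [label for _, label in golden_set]
    y_pred = [classify(text) for text, _ in golden_set]
    p = precision_score(y_true, y_pred, pos_label="cold_pitch", zero_division=0)
    r = recall_score(y_true, y_pred, pos_label="cold_pitch", zero_division=0)
    print(f"{backend_name}: precision={p:.2f} recall={r:.2f}")

def ab_bucket(user_id: str, treatment_share: float = 0.1) -> str:
    """Stable hash-based assignment: the same user always lands in the
    same bucket, so perceived-quality feedback can be compared."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "treatment" if h < treatment_share * 10_000 else "control"

# evaluate("expensive-llm", llm_classify)   # hypothetical backend functions
# evaluate("distilbert", cheap_label)
print(ab_bucket("user-1234"))  # "control" or "treatment", stable per user
```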
Yeah, Loïc, I appreciate you getting into the technical side of things a bit and talking through some of those optimizations and how you think about them. Obviously, you're leading the technical efforts at Superhuman. And I'm wondering if you have any sort of hard lessons learned from doing AI engineering over time. We have a lot of practitioners that listen in. Any kind of general principles or lessons learned that you'd want to impart? Yeah.
That's a good question. Maybe one thing that I've learned: as a CTO, I need to discuss with the rest of my leadership team, and we talk about the success of features and everything. And the typical way to talk about quality is typically in terms of the number of bugs and everything. Now, and I touched on it early on, but the perceived quality...
We're in a world with way more subtleties with LLMs. So setting the right expectation: basically explaining that a feature can be built well and sometimes still fail because the feedback is not great, and that might not be because it's not well implemented. Maybe there's, I would say, more to it. Maybe there's a part of perception. Maybe there's too much latitude offered to the end user. Maybe there's some work on the prompt side. So that's something that hit me
early on, where the perception of the feature was like, oh, this is terrible work. It's not working. People are complaining. Guys, what have you done? And the work was done properly. It was well implemented and everything. But the perceived quality of some of those features can be completely different, based on those new aspects. So maybe my lesson learned
is to be very explicit, when you launch a new feature, about the risk around that perceived quality, and about the source of mistakes being a bit less on the engineering side and maybe a bit more on the user side. And there's a lot of work to be done to control that, in terms of
user education, in-product education. So putting a bit more effort on the product-led growth, typical aspects of the business, will have a tremendous impact on the success of the feature. So that's probably one. The second one is, and it's interesting because I see it every day: we're moving upmarket.
We have a lot of startups that are moving up market. So you start having companies that are part of the Fortune 500 and they want to use your product. And I come from a world where moving to enterprise is pretty heavy. You need a lot of features. You need to have a lot of compliance. You need to have...
Basically, a lot of things that are not directly improving your product, but improving the confidence of those companies that you are the right partner to work with. There's a shift now. There's clearly a shift in those Fortune 500 companies, and by extension all the enterprise markets, where, especially with AI, the risk associated with
lesser compliance, or you're a small company, should we trust you, is completely counterbalanced by the risk of missing out. The opportunity cost is too big. And now we definitely see a push from CXOs on their security teams for those AI tools and productivity tools, basically saying, hey guys, you need to make it work.
You need to make it work because it's improving so much the efficiency of the C-level.
and by extension the rest of the company, that, you know what, we're probably ready to take the risk. Even if it's a Series A, Series B, Series C company, and it's not fully established, and yes, they are processing our emails, which is a core data set of our business and we need to be straight about it, maybe they're more okay with it. Of course, we need to do the work.
You need to prove that you're the right partner. But the first approach is changing, and the dynamic is changing. So it's basically a bias toward let's make it work, compared to two years before, where it was more, prove to us that you are a reliable partner and then we'll see if we do this POC. It's completely the reverse right now. So yeah, that's an interesting dynamic that is helpful in the way we build the product right now.
I'm curious. And, you know, we get to talk about all these really cool things happening in the AI space and how they're affecting products and services. And, you know, LLMs can do so much now. And, you know, we're kind of moving into the agentic age, you know, of AI and that's increasing.
But, you know, there's still a human being in the workflow. And what are the critical factors that the human is still bringing to the workflow, as opposed to all this amazing technology that we're able to utilize? How do you see the human in the workflow going forward, given the fact that you have so much capability from technology playing all around them?
That's an interesting question. And I guess the answer is almost in the question. It's the human part that is hard to replicate. I mean, creativity, the ability to define things, to detect patterns and stuff. So I think that the rise of LLMs is helping us get rid of everything that is mundane. I will give you one example.
I do a lot of interviews because I hire engineers. And as part of every interview process, you write up a debrief for the team to consume. And writing a debrief, a thoughtful debrief, takes time. I was probably spending between 20 and 30 minutes after each interview to basically put down the pros, the cons, and the
question marks, the areas to dive into. Now we are pretty much all using meeting minutes that are, you know, using the transcript, formatting it the way you want. And you just have to add your quick thoughts here and there and underline things. So now, from 20 to 30 minutes, this is taking me three minutes, and boom, this is uploaded into the ATS, tools like FormHR. That's an example.
Meeting minutes with my people: I do one-on-one meetings with my people, and I want to keep track of everything that we said. I used to take notes. I'm still, to some extent, taking some notes, but the transcript itself is so good now that I don't have to take notes on everything. So I just note the two key highlights that I want to keep somewhat private. The rest is already shared.
And now it's building like a database for me, like information on my desktop that I can query anytime to find information. So this is replacing all the mundane work that I was doing. And I can just focus on like brain power to some extent.
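As a sketch of that transcript-to-debrief step, assuming an OpenAI-style chat completions API; the prompt wording and model name are assumptions, and nothing here reflects a specific meeting-notes product:

```python
# Sketch: one LLM call that turns a raw interview transcript into the
# pros / cons / areas-to-probe debrief described above. Prompt wording
# and model name are assumptions.
from openai import OpenAI

client = OpenAI()

DEBRIEF_PROMPT = """From the interview transcript below, write a hiring
debrief with three sections: Pros, Cons, and Areas to dive into in the
next round. Quote the candidate where it supports a point. Do not
invent anything that is not in the transcript.

Transcript:
{transcript}"""

def draft_debrief(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption
        messages=[{"role": "user", "content": DEBRIEF_PROMPT.format(
            transcript=transcript)}],
    )
    return response.choices[0].message.content

# The interviewer then edits for about three minutes instead of
# writing from scratch for thirty.
```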
And that's definitely changing. Same for my engineers. My engineers, they've lived, I would say, paradigm shift to paradigm shift, changing the way they build software over time. They keep increasing their velocity because of those new tools. They also have to think differently. But it's still stupid to some extent, all this tooling. It's basically an intern.
It's an intern. So you need to review. You need to spend the time reviewing what has been output, the output of
your new IDE, be it Cursor, be it Cline, be it whatever. With those tools, you need to review everything, because sometimes it will make some crazy mistakes that a regular engineer won't make. But I think that it's saving a ton of time for our engineers, so that they can just focus on the core of their job, which is understanding the user, understanding what needs to happen, and what is the smartest way to make it happen.
LLMs are just a nice helper to go faster. But so far, that's about it. And it's changing every day. It's changing every day.
Yeah, Loïc, you mentioned coding, and vibe coding, you know, comes to mind. And I almost wonder, you know, there's going to be a new reality for email with all of these AI features coming in. And I know when I'm using vibe coding tools, I have to sort of learn a new technique,
a new way of working, and there's different types of mental load that I have to manage: a lot of context switching, you know, guiding the model in different ways. It's a different kind of mental load, a different kind of skill. Do you see a similar thing developing in terms of, you know, my interaction with email, and, you know, learning a kind of different way
of working through those things, you know, in good ways, but also in challenging ways, to sort of retool my mind or retrain my mind on how to work in this kind of vibe emailing way? No, no, this is a good question. And yeah,
we are talking about the user interaction and how this is evolving. And our work is to make that transition, if there's any transition, the smoothest possible. We need to take the user where they are, to bring them where they will eventually be with this vibe
emailing, if that's even a thing. I'm not sure what would be behind it, but clearly there's a change that we are facing. And interestingly, I was talking about this lately: right now, startups typically over-index on seniority for engineers, because you need people able to manage the noise, manage the shit. It's always changing. You need people with tough skin to be able to manage that.
That said, and we see it, it's harder right now for new grads to get into this market. But they have one asset that makes them probably different: brain plasticity. The new grads of this year, for the last three years, they've seen so many different technologies coming, like every six months. They had to readapt, they had to relearn. So their brain is used to this mental shift, right?
Like, every six months: oh damn, this is the new way to code. Oh damn, this is the new new way to code. In my days, the biggest shift was moving from SVN to Git. That was about it. Or you have a new framework, or you have a new language, but it's same old, same old, a different flavor of the same thing. So I do think that
the people that are just born with it, the way we were born with the internet, they are born with LLMs, with AI. And they have this brain plasticity. And I think this will probably be the challenge for practitioners, like engineers globally: how to adapt to that. Because I'm 45 now.
I'm not sure that my brain plasticity is still there. So I need to keep up. I need to still try new stuff and challenge myself every day, compared to even five years ago, where I was just tuning my own ways and making it
slightly better over time. This is a paradigm shift, and if I don't take the wagon, I'm probably lost. And the same applies for engineers. So it's definitely an interesting time. Definitely an interesting time. I gotta say, if you hadn't dated yourself intentionally,
revealing your age, I was going to say the SVN to Git switch would have done that for you there. I don't think anyone out there under 30 is going to know what SVN is anyway. Yeah, I'm sorry for that. It's kind of like my gray hair that we're talking about. No, definitely, brain plasticity is on my mind as well. I'm older than you are, even. So I'm curious, you know, as we wind up here,
there's so much ground getting covered right now. And, you know, you've talked about the evolution of the product, and, you know, new technologies kind of slamming into your current plans and having to adjust and stuff.
If you kind of take a step back, you know, you're kind of done for the day and you're thinking about the future on a little bit longer timeframe than what we've been talking about: where can email and messaging and stuff, where can it go with these technologies, on a longer timeframe, when you kind of just let your mind wander and dream about what could be?
What are your thoughts around the future there, you know, in the large? What should we be thinking about that's not necessarily science fiction, but, you know, day-to-day life, given where things are generally headed? No, I wish I knew. I wish I knew. But if I have to do a bit of science fiction: clearly, I see that communication globally, communication between people, is so fragmented.
So fragmented. With my family, I use WhatsApp. At work, with my partners and all of that, we use email. Internally, we use Slack. But we also discuss in Google Docs threads, in comments and all of that. So communication is so spread out and in so many different places that it's really hard to make sure that you have
everything that belongs to the same topic in some sort of unified inbox. So if I have
to guess where we will be in, I don't know, I would say 10 years, but with AI maybe it's more like six months, I would say that there's probably a need for a unified and central way to communicate, which is your preferred interface, regardless of where things land. And doing so in a way that brings focus. When I want to work on a specific partnership, like
in AI with all those big providers and everything, I want to focus only on this. But I don't care if the information is in my email, in a Google Doc, in Notion, in WhatsApp or whatever. I want it consolidated, so that I know everything that is happening in one place. So I think there will be a lot of work around this. The other aspect that is really interesting is where the LLM sits. What is the entry point?
We see ChatGPT being one entry point, but all the tools have an embedded ChatGPT equivalent. Whether you use Confluence or Notion, whether you use Salesforce, whether you use any kind of B2B application, they have their own specific chatbot. And then you have actors like Glean, for example,
and some others, that try to unify everything. Where is this going? That's something that I'm really curious about. Do we want to be where people work, or do you want to have some sort of unified experience, regardless of the vertical people are working in?
I'm curious. I have more questions than answers. What's for sure is that it will evolve. And I do believe that Superhuman is doing that in a nice way, and people tend to love it. So building on that experience and that empathy with users, I believe we'll be well placed for that race, basically.
Very interesting. I appreciate the insights. And thank you so much for coming on the show today, and sharing not only where Superhuman is at, but how you're tackling the challenges and thinking about the future. A lot of insight there. I really appreciate it. Thanks, Chris. I appreciated the time with you and Daniel. Thank you.
All right, that is our show for this week. If you haven't checked out our Changelog newsletter, head to changelog.com slash news. There you'll find 29 reasons, yes, 29 reasons why you should subscribe.
I'll tell you reason number 17, you might actually start looking forward to Mondays. Sounds like somebody's got a case of the Mondays. 28 more reasons are waiting for you at changelog.com slash news. Thanks again to our partners at Fly.io, to Breakmaster Cylinder for the beats, and to you for listening. That is all for now, but we'll talk to you again next time.