People
Ashu Garg
Ethan Mollick
Josh Miller
Kevin Hou
Lisa Su
Roblo
Sam Altman
Leading OpenAI to achieve AGI and superintelligence, redefining the path of AI development, and driving the commercialization and application of AI technology.
Sebastian Siemiatkowski
Sudhij Lapagari
Varun Mohan
Zach Kukoff
Topics
Sebastian Siemiatkowski: I think offering human customer service will always be a VIP offering. We can use AI to automate the boring, manual work, while also promising our customers a human connection. It really combines the best of both worlds, and I believe this model will be widely adopted in other fields as well. I've also noticed the rise of business people inside the company who can do basic coding; they're better able to understand and communicate requirements, and so will become more valuable.

Deep Dive

Klarna's AI transformation and its shift toward a hybrid model of AI and human customer service are discussed. The company's CEO believes human customer service will remain a VIP offering, while AI handles routine tasks. The increasing role of business people with basic coding skills is also highlighted.

Chapters
  • Klarna's hybrid approach to customer service, combining AI and human interaction
  • CEO Sebastian Siemiatkowski's view on human customer service as a VIP offering
  • The rise of business people with basic coding skills who can better communicate needs

Shownotes Transcript


Today on the AI Daily Brief, is OpenAI going to kill your company, even if by accident? Before then in the headlines, is human customer service a VIP thing in the future? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI.

All right, friends, quick announcement section. First of all, thank you to today's sponsors, KPMG, Blitzy, Vanta, and Super Intelligent. And if you are looking for an ad-free version of the show, go to patreon.com slash AI Daily Brief. One other note before we dive in, believe it or not, even though it's only June, we are quickly selling sponsor slots for the fall. If you are interested in sponsoring the AI Daily Brief, shoot me a note, nlw at breakdown.network with the word sponsor in the subject. For now, though, let's get into the headlines.

Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. We kick off today with the latest from Klarna.

Quick TLDR on their AI transformation if you haven't been following along. A couple of years ago, the company set out to rip out their SaaS services and use AI coding to replace them. Then the company also laid off around 700 customer service workers to replace them with AI chatbots and voice agents. Recently, however, it seemed like they had been going back to a more hybrid structure, where there would be a combination of AI service and human customer service, and CEO Sebastian Siemiatkowski seems to be thinking along those lines.

At the London edition of the South by Southwest conference, he said: Two things can be true at the same time. We think offering human customer service is always going to be a VIP thing. We can use AI to automatically take away boring jobs, things that are manual work, but we are also going to promise our customers to have a human connection. Basically, the plan is to combine the best of both worlds, which seems to me to be exactly the pattern that we're likely to see in other areas. Now, Siemiatkowski also noted that the company's engineering positions haven't shrunk as much as other departments, even though they're all using AI to increase their productivity.

He did note that, quote, what I'm seeing internally is a new rise of business people who are coding themselves. I think that category of people will become even more valuable going forward. Going a little bit deeper, Siemiatkowski was not arguing that business people are suddenly going to replace the coders, but that by being able to code even in a very basic manner, they're better able to understand and communicate the specs of what they need built. This would mirror the pattern we're seeing, certainly at Superintelligent and lots of other startups, where feature discussions are now entirely had with prototypes thanks to things like Lovable and Bolt.

So for those keeping score at home, we are still very much in the midst of this transformation, but Klarna continues to be an interesting case study for those who want to see how this all might shake out. Moving over to Redmond, Washington, Microsoft has reshuffled their executive lineup for a big push in enterprise agents. Interestingly, Ryan Roslansky, the CEO of the LinkedIn division, has been appointed to lead the teams in charge of the Office productivity suite.

Roslansky has been at the head of LinkedIn since 2020, leading a big growth push. Within Office, he will be tasked with speeding up the deployment of AI tools and driving enterprise adoption. His new role will report into Rajesh Jha, one of the company's top engineering executives, who was given responsibility for consolidating AI tools and platform groups in January.

Charles Lamanna, who runs the Dynamics 365 line of sales and business planning software, will also be transferred from the cloud division to Jha's team. It sounds like Microsoft is bringing everything related to enterprise agents under Jha, while appointing a proven leader to shepherd the agentic iteration of the Office suite.

One question that's not clear is where Mustafa Suleyman fits in all of this shuffle. Suleyman was, of course, the big-ticket acquisition in March of last year and was appointed the CEO of Microsoft AI. His work now seems primarily focused on consumer applications of AI, with Suleyman envisioning a personality-filled AI companion.

It's worth noting that we are dealing with wildly divergent trends with AI right now. On the one hand, it is obviously incredibly potent and powerful for the enterprise, and that's where a lot of our attention certainly is. But consumers are using these tools in totally different ways. Life coaching, relationship support, lightweight therapy. These use cases are growing as fast as anything in the enterprise, which can be kind of head spinning for a company that's trying to deal with all of that at once.

Moving over to the hardware side of the business, AMD has NVIDIA in its sights with a new acquisition. The chipmaker has acquired an AI software optimization startup called Brium for an undisclosed amount. The company was acquired while it was still in stealth mode, but according to their bare-bones website, they're working on, quote, "enabling ML applications on a diverse set of architectures and unlocking the hardware capabilities through engineering choices made at every level of the stack, from model inference systems through runtime systems and ML frameworks to compilers."

If your brain melted with all of that jargon: they appear to be creating software that allows AI models to run on a variety of different hardware. In a press release, AMD said that the acquisition will help fulfill its commitment to, quote, "building a high-performance open AI software ecosystem that empowers developers and drives innovation."

Open is certainly the key word for the second-ranked AI chip manufacturer. One of the biggest roadblocks for AMD hasn't just been about matching the performance of NVIDIA's chips, but rather overcoming compatibility issues. Most of the world's LLMs are built on NVIDIA's CUDA platform and optimized to run on their hardware and software.
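
To make that compatibility point concrete, here is a minimal sketch of what hardware-portable inference looks like from the application side, assuming a PyTorch build for either CUDA or ROCm. The key detail is that AMD's ROCm stack intentionally surfaces through the same torch.cuda API as NVIDIA hardware, so the lock-in lives below this layer, in kernels, compilers, and performance tuning.

```python
import torch

def pick_device() -> torch.device:
    # ROCm builds of PyTorch reuse the torch.cuda namespace, so this same
    # check selects an AMD Instinct GPU on ROCm or an NVIDIA GPU on CUDA.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
model = torch.nn.Linear(1024, 1024).to(device)   # stand-in for a real model
x = torch.randn(8, 1024, device=device)
with torch.no_grad():
    y = model(x)
print(f"ran inference on {device}: output shape {tuple(y.shape)}")
```

The application code is the easy part; the challenge described below is making everything underneath it, the kernels, runtimes, and compiler toolchains, perform as well on non-NVIDIA silicon as the CUDA-tuned originals.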

In that regard, Brium feels like a natural fit to solve AMD's problem. In the sole blog post on their website, published back in November, they specifically referenced the chipmaker, writing: In recent years, the hardware industry has made strides towards providing viable alternatives to NVIDIA hardware for server-side inference. Solutions such as AMD's Instinct GPUs offer strong performance characteristics, but it remains a challenge to harness that performance in practice, as workloads are typically tuned extensively with NVIDIA GPUs in mind.

The issue is so prominent for AMD that CEO Lisa Su drilled the point home during a recent hearing in Congress. She said that for the U.S. to remain a leader in AI, there needs to be a commitment to open ecosystems that allow, quote, hardware, software, and models from different vendors to work together. This accelerates innovation, reduces barriers to entry, strengthens security through transparency, and creates healthier, more competitive markets.

So will this acquisition make a difference? Only time will tell, but for now, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value. But here's the key. You don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up.

KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients at www.kpmg.us slash AI. Again, that's www.kpmg.us slash AI.

This episode is brought to you by Blitzy. Now, I talk to a lot of technical and business leaders who are eager to implement cutting-edge AI, but instead of building competitive moats, their best engineers are stuck modernizing ancient codebases or updating frameworks just to keep the lights on. These projects, like migrating Java 17 to Java 21, often mean staffing a team for a year or more. And sure, co-pilots help, but we all know they hit context limits fast, especially on large legacy systems. Blitzy flips the script. Instead of engineers doing 80% of the work, Blitzy's autonomous platform handles the heavy lifting, processing millions of lines of code and making 80% of the required changes automatically. One major financial firm used Blitzy to modernize a 20 million line Java code base in just three and a half months, cutting 30,000 engineering hours and accelerating their entire roadmap.

Email jack at blitzy.com with modernize in the subject line for prioritized onboarding. Visit blitzy.com today before your competitors do. Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security, they have to prove it. Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.

The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easy and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.

The proof is in the numbers. More than 10,000 global companies trust Vanta. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. Today's episode is brought to you by Superintelligent, specifically agent readiness audits. Everyone is trying to figure out what agent use cases are going to be most impactful for their business, and the agent readiness audit is the fastest and best way to do that.

We use voice agents to interview your leadership and team and process all of that information to provide an agent readiness score, a set of insights around that score, and a set of highly actionable recommendations on both organizational gaps and high-value agent use cases that you should pursue. Once you've figured out the right use cases, you can use our marketplace to find the right vendors and partners. And what it all adds up to is a faster, better agent strategy.

Check it out at bsuper.ai or email agents at bsuper.ai to learn more. Welcome back to the AI Daily Brief. One of the more persistent memes throughout the recent history of Gen AI, basically the post-ChatGPT period, has been this idea of OpenAI killing all startups.

This was even the subject of a Y Combinator podcast episode back in 2023 called Will OpenAI Kill All Startups? Now, initially, the context was that in the wake of ChatGPT being released, there were a ton of companies that were either A, very, very thin wrappers on top of ChatGPT, or B, trying to fill in very specific gaps in the ChatGPT product.

One really notable example of this was the talk with your docs type apps, of which there were a bajillion before ChatGPT could interact with PDFs. Now, obviously, that was going to be a feature that was somewhere on the roadmap. And that even ultimately led to these statements from Sam Altman.

Fundamentally, there are two strategies to build on AI right now. There's one strategy, which is assume the model is not going to get better, and then you kind of like build all these little things on top of it. There's another strategy, which is build assuming that OpenAI is going to stay on the same trajectory and the models are going to keep getting better at the same pace. It would seem to me that 95% of the world should be betting on the latter category, but a lot of the startups have been built in the former category. And then when we just do our fundamental job, which is make the model and its tooling better with every crank, then you get the OpenAI killed my startup meme.

Now, this meme came up big time again in the context of yesterday's product announcements.

This is what I had featured in the headline section of the show, but the announcement had only just happened, so I hadn't had much of a chance to digest. And as people did dig a little bit deeper into this, this meme of OpenAI killing startups came back. Sudhij Lapagari from Battery Ventures writes: Great set of product announcements from OpenAI today. Enterprise search (Glean), meeting notetaker (Granola), IDE (Windsurf). What's next? Calendar, spreadsheet, email?

This validates that LLM is a commodity and the real money and moat lie in the application layer. So let's talk briefly about a couple of the features that were announced yesterday and the startups that people pointed to as potentially threatened because of this. The new connectors feature allows ChatGPT to interact with other data sources.

This is only available inside business accounts first, and it basically gives that sort of chat-with-your-docs experience that people have been interested in going all the way back to those wrapper companies. More recently, though, the idea of enterprise search as a use case for AI has been a huge priority for a lot of enterprise AI focused companies, notably Glean, who was mentioned in that tweet. ChatGPT building that sort of functionality natively into their core enterprise experience does bring up the question of whether you're going to want or need an additional search experience outside of that.

Professor Ethan Mollick wrote: So OpenAI deep research can connect directly to Dropbox, SharePoint, etc. In my experiments, it feels like what every TalkToOurDocuments RAG system has been aiming for, but with O3 smarts and easy use. I haven't done robust testing yet, but impressive so far. When it quotes a document, that link actually takes me to the document. I think it's going to be a shock to the market, since TalkToOurDocuments is one of the most popular implementations of AI in large organizations, and this version seems to work quite well and costs very little.
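
For anyone who hasn't built one of these, the talk-to-your-documents pattern Mollick describes reduces to a retrieve-then-generate loop. Here's a toy, self-contained sketch: crude lexical overlap stands in for embedding similarity, and the final LLM call is left as a prompt-building stub. The function names are illustrative, not from ChatGPT, Glean, or any product mentioned here.

```python
from collections import Counter

def similarity(query: str, chunk: str) -> int:
    # Crude lexical overlap as a stand-in for embedding similarity.
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values())

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank every chunk against the query and keep the top k.
    return sorted(chunks, key=lambda ch: similarity(query, ch), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    # A real system sends this to an LLM; as Mollick notes, the good ones
    # also link each quoted passage back to its source document.
    context = "\n---\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Q3 revenue grew 12% on services.",
        "Headcount was flat in Q3.",
        "The board approved a share buyback."]
print(build_prompt("What happened to revenue in Q3?", docs))
```

Connectors collapse the hard part of that loop, getting the documents out of Dropbox or SharePoint in the first place, which is why a native version is such a threat to standalone wrappers.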

Now, of course, Glean is not just a talk-to-your-documents company. It is an all-in-one work AI platform that ranges from an assistant to agents and more. But it is certainly the case that the more the core products like ChatGPT start to nibble at the edges of these offerings, the more confusing it's going to be for some percentage of enterprise buyers who think to themselves, well, let's just stick with the company that's offering it alongside the core models. If anything, the note-taker announcement seemed to get a lot more chatter.

This is, I think, because people absolutely love Granola. Granola advertises itself as the AI notepad for teams in back-to-back meetings. And even in a world of a million native meeting recorders with things like Otter and Fireflies and Fathom, Granola has started to carve itself out a nice little niche.

If you go search around Twitter slash X, you can find lots of people talking about what they love about Granola. One of the benefits is that it doesn't place a bot inside your calls. It just captures audio directly. And so yesterday, people definitely took note when OpenAI announced ChatGPT record mode. Remember the tweet was, we're rolling out ChatGPT record mode to team users on macOS. Capture any meeting, brainstorm, or voice note. ChatGPT will transcribe it, pull out the key points, and turn it into follow-ups, plans, or even code.

Roblo writes: in other news, OpenAI is trying to kill Granola and every other AI meeting notes app. Zach Kukoff writes: Granola getting Sherlocked by OpenAI. At some point, model providers are going to need to decide if they want to be stable platforms or compete for every vertical. Platform risk has never been higher.

Now, Zach also mentioned another thing in this same domain which has been going on lately. He says this comes on the heels of Anthropic throttling Windsurf's access to Claude 4. A couple of days ago, Varun Mohan, the CEO of Windsurf, tweeted: With less than five days of notice, Anthropic decided to cut off nearly all of our first-party capacity to all Claude 3.x models. Given the short notice, we may see some short-term Claude 3.x model availability issues.

A day later, Windsurf's head of product engineering, Kevin Hou, weighed in with some backstory, writing that the company has always prided itself on providing access to all models. Kevin goes on to say that they're working with other third-party providers to try to bring the Claude models to their paying users. He also writes, quote: We have significantly improved our agentic harness around Gemini 2.5 Pro and GPT-4.1. By the way, Google AI Studio lead Logan Kilpatrick had responded to the CEO's post with a "Gemini 🤝 Windsurf" reply.

Kevin concludes: Ultimately, as any user can attest, the magic of Windsurf has always been in the product. It's important to power our product with great models, but the real magic is in the deep contextual understanding of existing knowledge, thoughtful UX, tool integrations like previews and deploys, customizations like workflows and memories, enterprise readiness, JetBrains support, and the list goes on and on.
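
Since "agentic harness" is doing a lot of work in that quote, here is a toy sketch of what a harness is: the scaffolding that loops a model, executes whatever tool it requests, and feeds the result back until the task is done. The model and tools below are stand-in callables, not Windsurf's actual implementation.

```python
def run_agent(model, tools: dict, task: str, max_steps: int = 10) -> str:
    # The harness: call the model, run any tool it asks for, loop.
    history = [("user", task)]
    for _ in range(max_steps):
        action = model(history)  # ("final", text) or ("tool", name, args)
        if action[0] == "final":
            return action[1]
        _, name, args = action
        result = tools[name](**args)          # execute the requested tool
        history.append(("tool", f"{name} -> {result}"))
    return "stopped: step limit reached"

# Stand-in model: asks to read a file once, then wraps up.
def fake_model(history):
    if any(role == "tool" for role, _ in history):
        return ("final", f"done, saw: {history[-1][1]}")
    return ("tool", "read_file", {"path": "README.md"})

tools = {"read_file": lambda path: f"<contents of {path}>"}
print(run_agent(fake_model, tools, "summarize the README"))
```

Much of what differentiates one coding agent from another lives in that loop: which tools are exposed, how history gets compacted into the model's context window, and how results are validated before the next call. That's why swapping Claude for Gemini 2.5 Pro is harness work, not just a model switch.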

And this is exactly the question. In the new world that we operate in, what are the moats? Going back to Zach Kukoff's tweet again, remember he wrote: at some point, model providers are going to need to decide if they want to be stable platforms or compete for every vertical.

Battery Ventures' Sudhij writes: This validates that LLM is a commodity and the real money and moat lie in the application layer. This certainly seems to be the pattern that the frontier labs, at least the startup versions, Anthropic and OpenAI, are embracing. Yes, obviously, they continue to compete for model dominance. Anthropic, for example, has really leaned into the fact that it has the preferred coding model. But these companies are also releasing actual applications. They are not just playing the role of platforms.

OpenAI has slowly but surely been releasing a set of what are effectively consumer applications that live inside ChatGPT. One might consider image generation a version of this, but certainly Deep Research, Operator, and now Codex. These are OpenAI's first forays into owning the application layer, not just the model layer.

Similarly, Anthropic is not just interested in being the model provider. With Claude Code, they are directly competing with some combination of the latter-day IDEs and the vibe coding platforms. Again, it's pretty clear that they value owning some part of the application layer and the relationship with customers. There are really big implications for what the frontier labs decide to do vis-a-vis agents. The single most dominant theme in venture investing right now is vertical AI agents, verticalized based on a specific sector or a specific function.

The question is, how many of those are the frontier labs and hyperscalers going to go after? And what, if anything, can actually differentiate and allow those companies to become integrated in a way that they're not just eventually punched out by those bigger players? There was an interesting discussion from about a year and a half ago on Hacker News around what a 2024-to-2030 moat for AI looks like. One of the most popular answers said the moats are network effects, switching costs, economies of scale, low-cost producer, and brand. And what you'll notice is missing from that list, and this has become kind of conventional wisdom at this point, is unique or differentiated technology. Basically, there is a sense that technology itself is getting commoditized, and so it will be other things that allow companies to compete. I also saw this post from enterprise VC Ashu Garg, who writes:

I had lunch with a founder last week who pitched me on their AI for Operations platform. I stopped them three slides in. General purpose AI isn't cutting it anymore. DeepSeek's January breakthrough told us something important. Efficiency and performance can coexist a lot earlier than most people thought. Startups are now excelling not by scale, but by focus. They're building vertical AI that deeply understands the messy high-stakes workflows in sectors like healthcare, finance, and defense.

Specialization is the new competitive advantage. Three patterns I'm tracking across successful vertical AI startups: First, they pick massive but high-friction and high-value workflows. AI for sales or AI for operations is too broad. What's effective is focusing on urgent, complex processes. Second, they build more than model wrappers. They create proprietary feedback loops and data assets that compound over time. This instrumentation is what turns a one-off tool into a durable, defensible product. Third, they expand from beachheads of earned trust.

They wedge into multi-billion dollar industries by solving problems in the hardest, least glamorous corners. From there, they earn the right to expand and unlock bigger TAM over time. I don't know if that's the exact answer or the only answer, but I do know that whatever the answer is to this, it's going to shape how the industry evolves over the next several years. Browser Company CEO Josh Miller writes: Weird convergence in tech. Notion adds AI research, meeting notes, enterprise search. So do Atlassian, Grammarly, Coda, Glean, and Granola.

OpenAI buys Windsurf and Codex; GitHub and Google follow. Browsers are next. Is the future this obvious? Everyone's converging. He continued in another tweet: It feels like everyone is bundling into a handful of AI super apps of sorts. Coding (IDE, agent, etc.). Work (docs, enterprise search, meeting notes). Assistant (AI chat, search, browser, etc.).

The point is, things are going to get more and less messy. Companies are going to find themselves in competition in ways that they didn't anticipate. And we are just now figuring out what the post-technology moat world looks like. If there is any good news for startups, it's that these moments of chaos and transition tend to benefit the nimble more than the big and lumbering. And so who knows? The changes in moats may be exactly to some of these new startups' tastes, if they can just figure out what the new moats are going to be.

I think it's too early to say that OpenAI is going to kill all the startups, even that they are now competing with by virtue of the announcements yesterday. But things certainly just got even more interesting. For now, that is going to do it for today's AI Daily Brief. Thanks for listening or watching, as always. And until next time, peace.