Today on the AI Daily Brief, OpenAI takes a page from Palantir, doubles down on consulting services. Before then in the headlines, Zuckerberg's superintelligence team was officially announced. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Hello friends, quick announcements. First of all, thank you to our sponsors for today's show, KPMG, Blitzy, and Vanta. To get an ad-free version of the show, you can go to patreon.com slash AI Daily Brief.
Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. Nothing like having a daily show, right? Where the second you press stop on a recording, there is news that makes your last episode out of date.
If you are our daily listener, you'll know that yesterday we talked all about the latest in the talent war between OpenAI and Mark Zuckerberg. And literally just an hour or something after I pressed send on that thing, Zuckerberg's superintelligence team was officially announced. So here we are doing the catch up on that. In a memo to staff on Monday, Zuckerberg introduced his new AI hires and revealed the official name of the new division. He wrote...
As the pace of AI progress accelerates, developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity and I am fully committed to doing what it takes for Meta to lead the way. We're going to call our overall organization Meta Superintelligence Labs, or MSL. This includes all of our foundations, product, and FAIR teams, as well as a new lab focused on developing the next generation of our models.
Now, as we thought, former Scale CEO Alexandr Wang will be leading these efforts. He will lead MSL and has also been named chief AI officer for Meta as a whole. Interestingly, when it comes to the labs, he is being joined by Nat Friedman. Nat, you'll remember, previously ran GitHub at Microsoft, and his investment firm with Daniel Gross is one of the most active and successful AI investors.
Alongside the note from Zuckerberg, we also got a list of 11 names, some of whom we haven't seen before, who had joined this new effort. In addition to some ex-OpenAI folks who weren't in the initial reporting, we also had a couple others from Anthropic and from Google DeepMind, although the concentration is definitely from OpenAI. Now, in terms of the group's mandate,
It's not exactly clear what the balance is between fundamental research and product advancement, but it does appear that MSL has a mandate to pursue both. Now, The Information, for its part, is fairly skeptical. They write,
The end result is a team that, without being too cynical, feels highly combustible. Don't be surprised if at least one high-profile departure occurs within a few months. Any group with a lot of big egos working under intense pressures from a controlling chief executive is going to have trouble staying together. They also noted that Alexandr Wang at the helm has never produced a foundation model and is better known for his political savvy than his AI skills. They suggested that he will be more of an advisor to Zuckerberg than a hands-on research lead.
Touching on Friedman, they pointed out that he turned down the leading role and suggested Wang instead. Concluding, they added, "Add to these wrinkles the fact that Meta has hired a bunch of highly paid scientists from OpenAI who will join existing staffers who are likely feeling a little disgruntled at how things have come about. Meta's AI team has undergone repeated upheavals over the past couple of years. Meta could be described as a permanent revolution of AI, and that likely won't stop now."
Now, in some ways, the whole process looks a little bit more like assembling a sports super team than a traditional tech hiring plan. From sky-high salaries to plucking top talent from across the sector, Sarah Guo from Conviction wrote, there are now folks helping researchers negotiate their comp packages and taking a fee, like agents for athletes.
Look, I think it's completely reasonable to be skeptical of this. It is totally understandable to know where the very real possibilities for breakdown are. But at the same time, while dream teams can break down because of big egos, they can also create their own sense of momentum.
You have to think that a lot of these folks joined not just because they were getting huge paydays, although that's part of it. They joined because they thought that if they all joined at the same time, there was a real chance that they could be first to this coveted goal. That creates excitement and, like I said, momentum that I don't think all these media reports are quite giving enough credence to.
It's now in the public eye, and we'll see whether the spending spree has stopped or whether there's a stumble still to come, but Meta's superintelligence lab is here, and it is a new force to be reckoned with in the space. Speaking of powers to be reckoned with in the space, Apple seems to be giving up entirely and considering handing Siri over to OpenAI or Anthropic.
According to Bloomberg's Mark Gurman, Apple has met with both AI companies to discuss using their models to power the next iteration of Siri. Gurman framed this as a "potentially blockbuster move aimed at turning around its flailing AI effort." Until now, Apple used their own in-house foundation models to drive Siri and had been planning to continue on the same course for the new version due next year. The exploration of outsourcing the model is still in its early stages.
But the labs have been asked to train a special version of their model that can run on Apple's cloud infrastructure. Apple uses their own silicon rather than industry-standard Nvidia chips, so some conversion is necessary. The internal project, dubbed LLM-Siri, remains ongoing.
The shift in thinking was reportedly the result of Vision Pro lead Mike Rockwell taking over the project earlier this year. One of the first orders of business was to test Siri using third-party technology from OpenAI, Anthropic, and Google. Rockwell and other executives concluded that Anthropic's models had the best performance, leading them to open discussions with the company about using Claude. Gurman reports that plans are still murky. Apple has approved a multi-billion dollar budget for running their own models via the cloud, but beyond that, nothing is set in stone.
Still, executives are reportedly on board with outsourcing the model, with Rockwell and others seeing little reason to stick with their own technology. At the same time, morale in the ranks is beginning to sour, with Gurman writing: "Some members have signaled internally that they're unhappy that the company is considering technology from a third party, creating the perception that they are to blame at least partially for the company's AI shortcomings. They've said that they could leave for multi-million dollar packages being floated by Meta Platforms and OpenAI."
Signal captured a part of the zeitgeist on this, writing: "Absolutely astonishing. Apple used to own the full stack, silicon to software to services. Now they're outsourcing the one layer that will define the next decade of computing. It's a metaphysical betrayal of their own DNA. OpenAI and Anthropic don't need Apple, but Apple desperately needs one of them. This puts Apple under the models it's integrating. Wild reversal. Whoever they choose, they now owe existential dependency to."
And finally, if consumers realize that Siri does not equal Apple anymore, that it's powered by OpenAI or Anthropic, then what exactly is Apple's IP? A thin shell over someone else's mind? That kills the aura of vertical magic.
Given how frequently Signal shows up as a quote on the show, I often think that their perspective is very valuable. On this one, though, I have to disagree entirely. My strategic sense is that even if all this is true, it doesn't matter. Apple has to do something big. They are behind, falling more behind, and they are not catching up with their own models. Period. Full stop. End of story. Think about the first story we just talked about: Zuckerberg's incredible spending spree.
That's what it takes to compete for talent right now, and Apple's not doing it and gives no indication that they're going to. So they are left with a set of solutions that don't rely on having that access to talent. That means this sort of partnership or acquisition, like the Perplexity acquisition we talked about last week, is their path forward. Yes, it is the case that Apple used to own the full stack, but that is not a strategy that is available to them now. The longer that they cling vaingloriously to what they once were, the more likely it is that they will never be that again.
I also think that this perspective understates the value that Apple still brings and overstates consumer recognition. On the latter point, all that the average consumer wants is for Siri to work.
If it works, they're not going to care or ask questions about how it works. I think that the brand risk from having it powered by OpenAI or Anthropic is much lower than it might appear to those of us who are watching this like baseball stats. And when it comes to the idea that OpenAI and Anthropic don't need Apple, Apple still has an incredible number of installed devices, billions around the world. Getting access to that distribution at a time when models are highly commoditized and getting more so,
is nothing to sneeze at. Now, OpenAI has ambitions to actually go compete with Apple on its home territory of devices and usher in the post-iPhone era. But Anthropic doesn't, and they don't have the resources to even consider that. So in my estimation, Apple should do something like this, and they should do it as fast as humanly possible.
Lastly today, a little fun feature update for those Vibe and regular coders out there. Cursor has launched a web app to manage AI coding agents. The AI coding platform continues to expand their interface beyond the IDE. In May, Cursor launched background agents that are able to take instructions and then work independently of the user. The following month, they introduced a Slack integration allowing users to set the agents to task from within their workspace.
And this web app is another natural extension, letting users give instructions via the browser on desktop or mobile. Notably, this is the first time that Cursor has been available on a mobile device without needing to use Slack as a workaround. And at first glance, people love it. Developer Nick Dobos writes, Cursor on mobile is here and it's amazing. Been using it for a few weeks now and I will never not be amazed to be merging PRs while riding Peloton. I'm never touching a laptop again. Just bookmark the website on your home screen and it's basically an iOS app.
I am very excited for that to be a new interface norm going forward. And frankly, it just kind of makes sense. If part of the way that we interact with coding isn't sitting there in front of a screen, but is instead interrogating it and using our voice to tell it what to do, that's something that really can be done from mobile. In any case, that is going to do it for our slightly extended version of the headlines.
Next up, the main episode. Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value. But here's the key. You don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up.
KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients, and
at www.kpmg.us slash AI. Again, that's www.kpmg.us slash AI. This episode is brought to you by Blitzy, the enterprise autonomous software development platform with infinite code context.
Blitzy is used alongside your favorite coding copilot as your batch software development platform for enterprises seeking dramatic development acceleration on large-scale codebases. While traditional copilots help with line-by-line completions, Blitzy works ahead of the IDE by first documenting your entire codebase,
then deploying over 3,000 coordinated AI agents in parallel to batch build millions of lines of high-quality code. The scale difference is staggering. Copilots might give you a few hundred lines of code in seconds, but Blitzy can generate up to 3 million lines of thoroughly vetted code.
If your enterprise is looking to accelerate software development, contact us at blitzy.com to book a custom demo, or press get started to begin using the product right away. Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security, they have to prove it. Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easy and faster by automating compliance across 35-plus frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC white paper found that Vanta customers achieve $535,000 per year in benefits, and the platform pays for itself in just three months.
The proof is in the numbers. More than 10,000 global companies trust Vanta. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. Welcome back to the AI Daily Brief. One of the categories of firms that has done the best so far in the AI boom is, of course, the consultants.
Revenue for consulting-related engagements is way up for all of the biggies like Accenture and McKinsey. In fact, AI is driving a huge portion of their new business. And yet at the same time, it feels very clear that while there is a massive short-term opportunity for consulting (tons and tons of enterprises and businesses need help navigating this transformation), it is equally clear that AI represents a fairly existential threat to their model as such.
To the extent you view consulting as experts with specialized knowledge, being smart about how to gather information, process that information, and turn that into advice, a lot of that certainly sounds like things that AI and LLMs are very good at, right? In fact, some of my episodes about the AI disruption of consulting companies have been my most popular of all time.
And what's more, consultants aren't just facing challenges from LLMs directly. To some extent, one of the big patterns that we're observing is that every technology provider is also becoming a services company at the same time. This is the Palantirification of everything. And according to the latest from The Information, OpenAI is among the companies walking down that road.
Over the weekend, The Information published this piece, "OpenAI Takes a Page from Palantir, Doubles Down on Consulting Services." They write, "OpenAI is adding staff and resources for a consulting-like service in which its engineers guide customers through a process known as fine-tuning." Now you guys are a little bit more sophisticated, so I'm sure you know what fine-tuning is, but basically it's a process of adapting a model using a particular set of data, like the data you have around a particular enterprise or company. The Information continues,
To get the consulting help, OpenAI is typically requiring customers to spend at least $10 million. OpenAI is telling potential customers that it will refine its models such as GPT-4o using their proprietary corporate data so that the model can solve problems specific to their needs. These engineers also develop applications powered by customized models, such as chatbots akin to ChatGPT, according to OpenAI executives and customers of the service.
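To make the fine-tuning idea a bit more concrete: supervised fine-tuning of a chat model usually starts with assembling example conversations drawn from a company's own data. Here's a minimal sketch of what that preparation looks like in the chat-message JSONL format that fine-tuning pipelines like OpenAI's accept. The company name, example records, and the `validate` helper are all made up for illustration; this isn't OpenAI's actual tooling.

```python
import json

# Supervised fine-tuning data: each record is one training example in
# chat format, a list of messages whose final assistant turn the model
# should learn to imitate. These records are invented for illustration.
examples = [
    {"messages": [
        {"role": "system", "content": "You are Acme Corp's internal support assistant."},
        {"role": "user", "content": "How do I reset my VPN token?"},
        {"role": "assistant", "content": "Open the Acme IT portal, choose 'VPN', then 'Reset token'."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are Acme Corp's internal support assistant."},
        {"role": "user", "content": "Where do expense reports go?"},
        {"role": "assistant", "content": "Submit them through the Finance tab in the Acme portal."},
    ]},
]

def to_jsonl(records):
    """Serialize examples to JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

def validate(records):
    """Sanity check: every example needs a user turn and an assistant turn."""
    for r in records:
        roles = [m["role"] for m in r["messages"]]
        assert "user" in roles and "assistant" in roles, "incomplete example"
    return len(records)

print(validate(examples), "examples validated")
print(to_jsonl(examples).count("\n") + 1, "JSONL lines ready for upload")
```

The expensive part of the consulting engagement isn't this serialization step, of course; it's curating thousands of high-quality examples from messy corporate data, which is exactly the work forward-deployed engineers end up doing.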
Now, The Information says that this move puts OpenAI into quasi-competition with software firms like Palantir and consulting firms like Accenture. And they also point out that from the very little that we know, this appears to be part of how ex-OpenAI CTO Mira Murati's startup, which of course raised a $2 billion seed round at a $10 billion valuation, looks to stand out and compete.
Again from The Information: Thinking Machines Lab plans to use reinforcement learning, a common AI development technique that rewards an AI model for accomplishing certain goals and penalizes it for other behaviors. TML plans to customize models on specific business metrics its customers track, aka KPIs, which typically relate to revenue or profit growth. TML may be banking on the idea that customers of AI may be willing to pay a premium for models customized for their industry.
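That description, reward the model for moving business KPIs in the right direction and penalize it otherwise, can be sketched as a toy reward function. Everything below (the KPI names, weights, and numbers) is invented for illustration and says nothing about TML's actual training setup; it just shows the general shape of turning metric deltas into a scalar reward signal.

```python
# Toy KPI-driven reward in the spirit of the reporting: score an action
# by the weighted change in business metrics it produces. Metrics the
# business wants to push down (churn, support cost) get negative weights.
# All names and weights here are hypothetical.
KPI_WEIGHTS = {"revenue_growth": 1.0, "churn_rate": -2.0, "support_cost": -0.5}

def kpi_reward(before: dict, after: dict) -> float:
    """Weighted sum of KPI deltas; increases in penalized metrics
    reduce the reward, increases in rewarded metrics raise it."""
    return sum(w * (after[k] - before[k]) for k, w in KPI_WEIGHTS.items())

before = {"revenue_growth": 0.04, "churn_rate": 0.08, "support_cost": 1.00}
good   = {"revenue_growth": 0.06, "churn_rate": 0.07, "support_cost": 0.90}
bad    = {"revenue_growth": 0.03, "churn_rate": 0.10, "support_cost": 1.10}

print("helpful action reward:", kpi_reward(before, good))   # positive
print("harmful action reward:", kpi_reward(before, bad))    # negative
```

In a real RL setup this scalar would feed an optimizer like PPO rather than a print statement, and the hard part is attributing KPI movement to individual model actions; the sketch only shows the reward-shaping idea itself.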
Now, going back to the OpenAI version, the company has specifically been hiring for a role known as forward-deployed engineer. They say that OpenAI formed an FDE team earlier this year and hired around a dozen people for it, including several who previously worked at Palantir.
Now, forward-deployed engineer is perhaps the hottest job title in Silicon Valley right now. And to understand what it is, go back and check out a post from the Palantir blog from November 2020 called "A Day in the Life of a Palantir Forward-Deployed Software Engineer." It followed a day in the life of Brian, who was at that time focused on delivering data integration solutions to a U.S. Department of Defense customer.
When asked, "Is an FDE similar to a consultant?" Brian said:
No, not really. I think one of the things that differentiates us from consultants is how technically creative we can be while also delivering solutions quickly. In the hands of a forward-deployed engineer, Palantir's products are ready-built playgrounds that empower us to be flexible and efficient in how we solve problems. Unlike consultants, we can pull most of the pieces together out of the box, meaning we don't need to reinvent the wheel for each customer and spend years creating a patchwork solution. Instead, we can focus on composing the right architecture of features or whipping up a new secret sauce to supercharge users.
This way, I'm always creating software that makes my customers more uniquely able to do their jobs. Now, as much as Brian and Palantir said that forward-deployed engineers are not like consultants, they are absolutely, undeniably a new category of consultant. What makes the category interesting is that they are specifically focused on a particular software platform and on making it work inside a company at a rapid clip.
And so going back to OpenAI once again, the idea is basically that there are some advanced technical things you can do with these models that might make them more performant for a variety of enterprise use cases, things that, frankly, most enterprises just aren't going to have the technical capability to do, or at least do well. And so by embedding engineers directly inside their biggest customers, they make their solutions more usable.
Now, this is a major trend right now across the AI industry. When Palantir first started doing this, there was skepticism from Silicon Valley, which has a near-visceral reaction against anything that hints of services rather than software margins. And there was even skepticism in the public market. However, now that Palantir is trading as one of the most expensive, if not the most expensive, major stocks in the market, things have shifted dramatically. What looked like smaller margins,
than pure-play software companies has instead turned into a significant advantage and a deep entrenchment that owns the customer relationship.
This trend has become prevalent enough that at the beginning of June, Andreessen Horowitz dropped a research post called "Trading Margin for Moat: Why the Forward-Deployed Engineer Is the Hottest Job in Startups." The post reads, For the better part of the last decade, it's been broadly assumed that product-led growth, or PLG, is superior to implementation-heavy enterprise software. The allure is obvious. PLG promises greater scalability and higher margins. The obsession has been driven by success stories like Atlassian, Slack, Figma, Notion, and Dropbox, and more recently, ChatGPT and Cursor.
All of these products offer simple, single-player modes, are easy to adopt without needing a sales call, and can be purchased directly with a credit card. No lengthy scoping or enterprise contracts required. During platform shifts, however, companies have room to experiment and build more intricate products that don't follow the standardized formula. Salesforce, ServiceNow, and Workday did this during the transition from on-premise to cloud platforms. Each of these companies sells an enterprise platform requiring significant implementation, services, and support, which is the antithesis of Bottoms-Up PLG.
However, in nailing complex implementations, these companies achieved dominance with impressive market capitalizations. Their combined value dwarfs that of the top PLG companies, and it's not even close. He continues later on: "Category-defining companies like Salesforce and ServiceNow became indispensable largely because of their ability to integrate with a company's internal systems and context. The customization effort initially results in lower gross margins and higher burn rates.
At IPO, for example, ServiceNow's gross margin was 63.2% and Workday's was 54.1%, far below the ideal around 80% for software. Even Salesforce, generally considered the gold standard, reportedly burned over $52 million to generate $22 million in revenue before developing its partner ecosystem. These complex businesses are easiest to build early in a platform shift, when workflows are still taking shape and the payoff for replacing an entire system of record is highest.
The AI platform shift is different from, and in some ways more exciting than, the previous transitions to cloud or mobile, because the implementation work required to build agentic experiences can itself be streamlined and automated by AI. Historical integration work might require outreach and collaboration with partners, mapping data fields, navigating data transfer between different coding languages, and understanding various internal guidelines. This is the kind of work that can now be done more efficiently, and in some cases entirely, with AI. Once those workflows and behaviors are established, these companies possess moats that allow them to increase prices and build implementation ecosystems. Now he goes on with a bunch of different observations and best practices, but the point is, this is an instantiation of a trend that absolutely everyone is seeing: all of the big players have some version of this approach to FDEs and are actually building on these new platforms for their enterprise clients. But what does this mean in terms of OpenAI strategy?
The line that I don't totally agree with is the idea that it puts OpenAI into quasi-competition with Palantir and Accenture. Although that is nominally true, it's pretty clear to me, as a fairly close observer, that OpenAI, one, is going to do whatever it takes to continue to grow adoption of their tools, and two, has a strong sense that owning the customer relationship is really going to matter.
All the indications we see from OpenAI with things like building their own agents indicates that they are not comfortable betting exclusively on model superiority and want to actually own the relationship with the end customer. Now, when it comes to consumers in ChatGPT, they have that in spades. When it comes to the enterprise, that's still more up for grabs, even though they are in the lead.
But at the same time, it's very clear to me that OpenAI is not doing this all on their own. For example, over the last couple of months they have announced a number of partnerships with dev shops and implementation labs like Tribe AI. In May they announced the Tribe partnership, and just a week ago they announced a similar relationship with Fractional. The idea is pretty simple: OpenAI brings the models, Fractional or Tribe or other partners bring, as Chris Taylor, the CEO of Fractional, put it, the end-to-end support from idea to production.
And what's more, OpenAI isn't just partnering with the up-and-comers. They have relationships like this to some extent with basically all of the big GSIs or systems integrators, including for example this one with PwC.
So when it comes to OpenAI, I see this more as them doing everything it takes, and as a validation of forward-deployed engineers as a key part of the playbook, than as them having some radical new strategy. But it still does have the impact of putting new pressure on the existing consulting partners. And we're starting to see that manifest.
Bloomberg recently reported that PwC's AI head had said that the firm had started to cut prices because tech was saving their staff time. Said Chief AI Officer Dan Priest in an interview with Bloomberg, "...clients would hear us talking about using AI and say we want our fair share of those efficiencies. We certainly, as appropriate, give our clients the pricing benefit of the efficiencies we're achieving."
In other words, hold aside the details: there is downward price pressure that AI is creating for these companies that are selling AI services. Again, to bring in our own personal example of this, one may still absolutely prefer the comprehensiveness and human touch that you get working with a PwC or an Accenture or a McKinsey or whomever. But what we do with our agent readiness audit, interviewing dozens or hundreds or even thousands of people over the course of a couple of days and turning that all into actionable insight around which agent use cases are best suited for your firm based on the hundreds of hours of interviews we just got, would have been completely impossible in the pre-AI era. And even the closest approximation of it would have cost hundreds of thousands of dollars and taken months. We're offering it for less than a tenth of that, in days. And if we're taking out the discovery portion of what consultants have historically done, other companies are nibbling at all the other parts as well.
Now, as an aside, in our experience, as much as they have to charge for discovery work, it's not the work that consultants want to do. Consultants and professional services firms want to do the high-value stuff that actually produces results and gets them rehired, not the long, laborious discovery. And so it's actually a good fit and a win-win for everyone. But the point is, AI is absolutely coming for a lot of what is on the books as revenue right now. The din of conversation about the disruption to consulting is only getting louder.
Last week, The Economist ran a piece called "Who Needs Accenture in the Age of AI?" They write: "Between the start of 2015 and the end of 2024, Accenture, which split off from its accounting sibling in 2000 and went public a year later, generated a total return of around 370%, handily outdoing not just the S&P 500 index but also Goldman Sachs and Morgan Stanley. As America's stock market climbed to an all-time high in February, the firm was worth $250 billion, more than either investment bank."
Since then, however, investors have wiped out some $60 billion from its market value. They pointed out that new bookings for both one-off consulting projects and managed services were down, and that while some of it was a temporary setback — think trade war, real war, and all the other macro problems — as The Economist puts it, the firm's problems run deeper. Having made a fortune telling others how to adapt to newfangled tech, it now faces the selfsame predicament in the age of general artificial intelligence. As semi-autonomous Gen AI agents sweep the world, who needs consultants?
Now, obviously, I think that there is a lot of room for evolution and adaptation. But as in almost every industry, the reality is that consulting and professional services will not look the same in a year, let alone in five or ten, as they do now.
The companies that are able to nimbly adapt and change could build incredible, enduring legacies. But they're going to have to do it with competitors coming in from all sides: software companies, neo-consulting companies, product companies. Everyone, it seems, is now in the business of technology and services all at once. And that could be a challenge. For now, it's very interesting to see OpenAI moving into this forward-deployed engineer space, and we will continue to keep an eye on the trend.
Thanks as always for listening or watching. Until next time, peace.