Databricks raised the $10 billion to provide liquidity to employee shareholders, make acquisitions, and expand overseas. The round was initially targeted at $3 to $4 billion but was increased after overwhelming investor interest that totaled $19 billion in demand.
Databricks is considered a top candidate for an IPO due to its strong financial performance, including positive free cash flow for the first time this quarter and $3 billion in annualized revenue with 60% year-over-year growth. CEO Ali Ghodsi has indicated that the company might go public by mid-next year.
The Databricks funding round, being the largest in venture history, suggests a strong investor appetite for AI companies. This round, which is nearly twice the size of OpenAI's latest round, indicates that late-stage AI startups are finding significant funding opportunities, potentially leading to an AI IPO boom.
The battery life of Meta's Ray-Ban AI glasses is a critical issue because the glasses can only operate with full voice and video for half an hour. This limitation hinders their practicality as an ever-present AI assistant, despite their advanced features like real-time translation and Shazam integration.
Adam Mosseri warns that AI content labeling alone may not be sufficient to ensure AI safety. He emphasizes that while labeling helps identify AI-generated content, it doesn't address the broader issue of misinformation and the need for context about who is sharing the content.
OpenAI released a phone number for ChatGPT to make the AI more accessible to users, especially those who might prefer a voice interface. The service is free and available via a 1-800 number in the US and through WhatsApp globally, allowing users to interact with ChatGPT via familiar channels.
The integration of ChatGPT Search with Advanced Voice Mode is significant because it enhances the user experience by allowing voice queries and verbal responses. This integration, especially with visual context, could transform how users interact with AI, making it more intuitive and accessible.
Perplexity's acquisition of Carbon is important because it enables the company to connect AI systems to external data sources like Notion, Docs, and Slack. This move positions Perplexity to compete more effectively in the enterprise search market, a space where companies like Glean and OpenAI are also vying for dominance.
NVIDIA's new Jetson Orin Nano AI dev kit is significant for hobbyists and makers because it is available at half the price of its predecessor and offers a 70% boost to neural processing and a 50% increase in memory bandwidth. This makes it more affordable and powerful for AI and robotics projects.
NVIDIA's cloud business is poised to challenge AWS due to its rapid growth and the potential to generate $150 billion in revenue from software and cloud services. This growth is driven by high demand for AI compute, and NVIDIA's dominant position in AI hardware and software.
Hello, friends. Quick note before I dive in. When I started recording the headlines, as you'll hear, I wasn't sure that it was going to be a full extended edition, but 20 minutes later it was clear that that made sense for this episode. It's a little chaotic here as we try to close out the year. I'm doing a bunch of pre-fills for the next couple of weeks so you guys don't miss episodes. But for today, we are definitely going to do an extended headlines edition, and then I anticipate Friday we will be back with a normal episode. Currently, the plan is for Monday to be the last normal episode and then have pre-fills from there.
We've got a ton of great countdown and end-of-year reflection episodes, so I'm very excited for those. But for now, let's catch up on the last couple of days of headlines. Apologies for yesterday. As you can probably tell, I'm fighting something off, and yesterday I was just completely knocked out. But we will catch up today, and we're going to kick it off with Databricks' $10 billion funding round valuing the company at $62 billion. It's kind of an IPO, even though it's not an IPO. So let's get into it.
This is officially the largest venture round in history. It's almost twice the size of OpenAI's latest round in October. The Series J round marks the company up by 30% from their last round, which was raised in 2023. The 11-year-old startup will use the funds to provide liquidity to employee shareholders, as well as to make acquisitions and expand overseas. In the same way that an IPO allows early employees to get liquidity on their shares, Databricks is in part using this round for exactly that.
Databricks, for those of you who aren't familiar, provides data analytics and AI platforms to enterprise customers, and it is right at the top of the list of companies expected to IPO next year. In November, CEO Ali Ghodsi said, if we were going to go, the earliest would be, let's say, mid-next year or something like that. He also noted that there aren't really many options for late-stage investors with large funds. Still, Ghodsi says that this deal is a much better way to cash out his employees.
Initially, Databricks was only looking for $3 to $4 billion from this round. However, the insane demand made him rethink. He said, I saw this Excel sheet where they keep a tally of all the people that want to invest. It was $19 billion of interest, and I almost fell off the chair. And we hadn't even talked to everybody. I was like, oh my God, that's a huge amount of numbers. And then we actually moved the price up.
Ghodsi himself is on record suggesting that we are at a peak AI bubble. He said, It doesn't take a genius to know that a company with five people which has no product, no innovation, no IP, just recent grads, is not worth hundreds of millions, sometimes billions. You get billion-dollar valuations in these startups that have nothing. That's a bubble. Databricks, though, is in a good position. It expects to generate positive free cash flow for the first time this quarter, at $3 billion in annualized revenue. Their last financials, published in October, claimed 60% year-over-year revenue growth.
Most of the conversation on Twitter slash X is around how companies are continuing to not want to wade into the IPO market. Investor Matt Turck writes, Series J is the new IPO.
Investor Tanay Jaipuria put some context around it, saying, To put the size of the round into context, there have only been eight companies in U.S. history that raised more money in their IPO round. Gergely Orosz explained why this route might be better for employees. He said, for reference, Uber raised around $8 billion during its IPO. Employees had shares sold to cover taxes but could not sell the rest for another six months due to the lockup.
Next up, an update for Meta's Ray-Bans. Now, next week, you're going to hear from me around what I thought were the top 15 AI products of the year. And one of the things that I do not count among them is AI wearables. And yet, very quietly, Meta's Ray-Bans are an AI wearable that people really seem to love. And with this new update, the product becomes much closer to an ever-present AI assistant. The biggest new feature is called Live AI. You can now talk to the glasses like you would any other voice-enabled assistant, receiving voice responses from the AI.
The glasses also stream video as an input, meaning the assistant will be aware of the things you're looking at. Meta is sticking with the example of getting the AI to tell you which ingredients you should pick out at the grocery store, a dagger to my not-believing-that's-actually-a-use-case heart, but obviously the implications of the technology seem much bigger than pasta sauce. Right now, the big catch is battery life. The Ray-Bans are only able to operate with full voice and video for half an hour. And yet this update really does start to paint a picture of the future.
One of the other game-changing features is real-time translation. The glasses can now translate between English and either Spanish, French, or Italian. The translation can either be piped through the glasses' embedded speakers or shown as transcripts on your phone. And because Shazam is the app that just will not go away, which is great, I love them, the glasses have also received a Shazam integration for identifying songs while out and about. Meta does warn that all of these features are still experimental, saying, "...as we test, these AI features may not always get it right. We're continuing to learn what works best and improving the experience for everyone."
Still, this is a very cool slate of updates and increasingly pushes these glasses from a device for enthusiasts toward something that just about everyone is going to want. The update is now live, but only available to those in the early access program. And if you're wondering how well the product is selling, Ray-Ban is boasting that the Meta glasses are now their top-selling product in 60% of stores across Europe, the Middle East, and Africa.
Staying in MetaLand for a minute, but going in a different direction, Instagram head Adam Mosseri has warned that AI content labeling may not be enough to ensure AI safety. The topic of AI-generated misinformation has evolved rapidly over the last year. Heading into the election, it was presumed that influence campaigns would leverage deepfakes in a hugely impactful way. One of the really interesting things about this election cycle is that that just didn't happen.
I had kind of wondered if that would be the case, mostly on the basis of people having their guard up in a bigger way, but there also might have been a capabilities issue. Image and video generation have rapidly improved over the past few months, especially on the video side, making deepfakes a more credible concern than ever.
So far, AI safety around deepfakes is largely about two things. One, labs refusing prompts that could imply nefarious use, or two, platforms supporting an AI labeling system in order to make sure that it's clear what is AI and what isn't. Still, Mosseri warned that we may be thinking too simplistically about AI safety and that the solution won't come from technology. He wrote, "...generative AI is clearly producing content that is difficult to discern from recordings of reality and improving rapidly."
He continued: A friend, investor Sam Lessin, pushed me maybe 10 years ago on the idea that any claim should be assessed not just on its content, but on the credibility of the person or institution making that claim. Maybe this happened years ago, but it feels like now is when we are collectively appreciating that it has become more important to consider who is saying a thing than what they are saying when assessing a statement's validity.
Our role as internet platforms is to label content generated as AI as best we can, but some content will inevitably slip through the cracks and not all misrepresentations will be generated with AI. So we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content. It's going to be increasingly critical that the viewer or reader brings a discerning mind when they consume content purporting to be an account or a recording of reality. My advice is to always consider who it is that is speaking.
And this is basically what I've thought for a while. I think that the reason that I am somewhat less concerned around the deepfakes than many people are is that I just think that humans are going to adapt incredibly quickly to a new set of norms. In fact, I think that we're going to adapt so quickly that the real challenge will not be about deepfakes tricking people, although there will be some amount of that, and I think it will break along generational lines. I think instead, it'll be dealing with the sociological implications of a world where no one trusts anything.
We're already dealing with challenges in our political and social environment based on the idea of fractured realities and different truths. And I do think that not trusting anything that anyone says because of the concern of AI deepfakes could add a whole new dimension to that. Still, it's interesting to see someone in such a key position discussing this topic.
Today's episode is brought to you by Rocket Money. We are coming up on the beginning of the new year, and that is a perfect time to get organized, set goals, prioritize what matters most, which for many of us is going to be financial wellness.
Thanks to Rocket Money, those goals, especially around money, feel achievable. Rocket Money shows you all of your subscriptions right in one place, helping you easily cancel those that you've maybe forgotten that you're actually paying for. Rocket Money also pulls together all of your spending across your different accounts so that you can clearly track spending habits and see where you can cut back.
Rocket Money is a personal finance app that helps find and cancel unwanted subscriptions, monitors your spending, and helps lower your bills so you can grow your savings. Their dashboard gives you a clear view of your expenses across all of your accounts. You can easily create a personalized budget with custom categories.
You can see your monthly spending trends in each category to know exactly where your money is going. Rocket Money will even try to negotiate lower bills for you. They automatically scan your bills to find opportunities to save, and then you can ask them to negotiate. They'll deal with customer service so that you don't have to.
Rocket Money has over 5 million users and has saved a total of $500 million in cancelled subscriptions, saving members up to $740 a year when using all of the app's premium features. Cancel your unwanted subscriptions and reach your financial goals faster with Rocket Money. Go to rocketmoney.com slash AI breakdown today. That's rocketmoney.com slash AI breakdown.
Today's episode is brought to you by Plumb. Want to use AI to automate your work but don't know where to start? Plumb lets you create AI workflows by simply describing what you want. No coding or API keys required. Imagine typing out, AI, analyze my Zoom meetings and send me your insights in Notion, and watching it come to life before your eyes.
Whether you're an operations leader, marketer, or even a non-technical founder, Plumb gives you the power of AI without the technical hassle. Get instant access to top models like GPT-4o, Claude 3.5 Sonnet, AssemblyAI, and many more. Don't let technology hold you back. Check out UsePlumb, that's Plumb with a b, for early access to the future of workflow automation. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.
Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw. Today's episode is brought to you by Superintelligent. Every single business workflow and function is being remade and reimagined with artificial intelligence.
There is a huge challenge, however, of going from the potential of AI to actually capturing that value. And that gap is what Superintelligent is dedicated to filling. Superintelligent accelerates AI adoption and engagement to help teams actually use AI to increase productivity and drive business value. An interactive AI use case registry gives your company full visibility into how people are using artificial intelligence right now.
Pair that with capabilities building content in the form of tutorials, learning paths, and a use case library, and Superintelligent helps people inside your company show how they're getting value out of AI while providing resources for people to put that inspiration into action.
The next three teams that sign up with 100 or more seats are going to get free embedded consulting. That's a process by which our Superintelligent team sits with your organization, figures out the specific use cases that matter most to you, and helps actually ensure support for adoption of those use cases to drive real value. Go to besuper.ai to learn more about this AI enablement network. And now back to the show.
Next up, let's catch up on OpenAI's 12 days of shipmas. Nothing huge, but a bunch of fun and/or small but meaningful updates. One that's certainly more in the fun than meaningful category: if you're looking for chatbot answers, you can now dial 1-800-CHATGPT.
Yes, this is actually a thing. OpenAI has hooked their voice-enabled LLM up to a phone line. Users can call from the US and message via WhatsApp globally. The phone integration uses OpenAI's Realtime API, while the WhatsApp bot is powered by GPT-4o mini. Calls are limited to 15 minutes per month, but available for free. OpenAI positions this product as purely about expanding their introductory range, stating that it offers a, quote, low-cost way to try it out through familiar channels. They suggest the slimmed-down service probably doesn't bring anything to the table for existing users.
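If you're curious what hooking a voice-enabled LLM up to another channel looks like under the hood, here is a minimal sketch of streaming a response from OpenAI's Realtime API over a WebSocket. To be clear, this is not how OpenAI actually wired up the phone line; the endpoint, headers, and event names are assumptions based on the public beta docs around this time, so treat it as a sketch rather than a reference implementation.

```python
# Minimal sketch: stream a response from OpenAI's Realtime API over WebSocket.
# Assumptions: endpoint URL, headers, and event names follow the public beta
# docs circa late 2024; this is not how 1-800-CHATGPT is actually implemented.
import asyncio
import json
import os

import websockets  # pip install websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}

async def main() -> None:
    # Depending on your websockets version, the header kwarg may be
    # extra_headers (older releases) or additional_headers (newer ones).
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Ask for a text-only response to keep the example simple;
        # a phone integration would request audio instead.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Greet a first-time caller in one sentence.",
            },
        }))
        # Print streamed deltas until the response completes.
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event.get("type") == "response.done":
                print()
                break

asyncio.run(main())
```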
OpenAI Chief Product Officer Kevin Weil said, this came out of a Hack Week project. The team built this just a few weeks ago and we loved it. And they hustled really hard to ship it and it's awesome to see it here. We're just getting started making ChatGPT more accessible to all of you. And I think that's what is interesting about this. It is not insane to me to think, especially as we were just talking about generational divides, that maybe some people might have their first experience with AI by calling 1-800-CHATGPT like it's the Butterball hotline.
A slightly bigger update from earlier in the week, although one that was very much drowned out by the announcement of Veo 2: OpenAI has released their ChatGPT Search feature to all users. Even free-tier users will now be able to access the AI-enhanced search. The rollout comes alongside a few major updates to the feature as well.
OpenAI has improved their citation engine, featuring links to source material more prominently and ahead of AI-generated responses. Presumably/hopefully, citations are also more accurate, which has been a key gripe of ChatGPT Search so far. More impressive is the ability to use ChatGPT Search in combination with Advanced Voice Mode for paid tiers. Users can now query the AI search engine using voice prompts and receive a verbal response from OpenAI's voice assistant. That also means that Advanced Voice Mode now has access to up-to-date information from the web.
The search experience is now integrated into Google and Apple Maps, allowing for improved location-based search as well. Finally, users can now set ChatGPT Search as their default search engine, making it accessible from the address bar in the browser.
Couple reasons why this is maybe more significant than it seems. The first is that it's very clear that OpenAI is inching towards making Advanced Voice Mode a fully integrated UX for most tasks. I think that we are still barely scratching the surface on what Advanced Voice Mode, especially Advanced Voice Mode with vision, really does for our interaction with AI. You see a little bit of it with the new integration with the Meta Ray-Bans, but it really will be a very different world when the way we interact with AI is talking to it while it has the visual context of everything we're experiencing around us, versus having to type things in.
Maybe the bigger thing, though, is that it is so clear that the great battle right now is for the default starting experience of the internet. For 20 years, Google search has been the way that people start their interactions with the web. One could argue that the only challenge to that paradigm is social networks, where when people aren't looking for things specifically, they browse and discover.
But when it comes to actually trying to get things done, Google search has been king. And even before ChatGPT Search, it was clear that OpenAI was trying to compete for that same slot, where you go to ChatGPT before you go to Google to try to get answers to your questions. One little note from a business model perspective: the fact that they're rolling this out to free users means a huge number of casual users are going to try AI-assisted search for the first time. And that probably implies that advertising-supported AI search is going to be a big part of the business model going forward.
The obvious comparison here is, of course, to Perplexity. Daedalus tweeted, in my opinion, ChatGPT's search won't be great until it's at least on par with Perplexity. Anything less from OpenAI, even if decent, just won't cut it, to be honest. And interestingly, we got some news out of Perplexity as well. First, reports are that they have actually closed their latest funding round at a $9 billion valuation.
According to Bloomberg, Perplexity raised $500 million in a round that triples their valuation since June. Sources claim that the round was completed last month, led by Institutional Venture Partners. What is more confirmed is that Perplexity has acquired Carbon, a small startup specializing in connecting AI systems to external data. Perplexity CEO Aravind Srinivas wrote, Starting early next year, Perplexity will not just let you search over the web. It will connect to Notion, Docs, Slack, and more. I'm excited about the Carbon team joining us to work on this.
This is a massive area of enterprise competition. Think about a company that has 20,000, 30,000, 100,000 employees, or even more. There is an incredible, incredible volume of data and information that is generated. And the ability to search and access that information has been a perpetual challenge.
Now, there are a ton of people competing in this space. Glean, for example, has recently broken out as one of the frontrunners. But Google and OpenAI are also going to be in this game, to the extent that they're not already. But I think it's a really smart move for Perplexity.
This is definitely the type of thing that will bring another look at the service from a lot of enterprise users, and it makes this area of enterprise search a really interesting competition for the year to come. Divyansh Agarwal writes, perplexity going all in on enterprise search with their acquisition of Carbon. This is going to be an interesting battle between perplexity and Glean and maybe OpenAI. Before their acquisition of Carbon, I would have given the edge to Glean, but now it will be a lot tighter.
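Before we move on, to make the connector idea concrete, here is a hypothetical sketch of the pattern Carbon-style tooling enables: pull documents out of workplace sources like Notion or Slack through a common interface and search across them in one place. Every class and function name here is illustrative, none of this reflects Carbon's or Perplexity's actual APIs, and a real system would use embeddings and ranking rather than keyword matching.

```python
# Hypothetical sketch of the connector pattern behind enterprise AI search:
# each external source exposes a fetch() method, and one index ingests them all.
# Names are illustrative only, not Carbon's or Perplexity's actual interfaces.
from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class Document:
    source: str   # e.g. "notion", "slack"
    doc_id: str
    text: str


class Connector(Protocol):
    def fetch(self) -> Iterable[Document]:
        """Yield documents pulled from one external source."""
        ...


class SlackConnector:
    """Toy stand-in for a real Slack connector (no API calls here)."""

    def fetch(self) -> Iterable[Document]:
        yield Document("slack", "msg-1", "Q3 planning doc is in Notion")


class SearchIndex:
    def __init__(self) -> None:
        self.docs: list[Document] = []

    def ingest(self, connectors: Iterable[Connector]) -> None:
        # Pull everything from every connected source into one index.
        for connector in connectors:
            self.docs.extend(connector.fetch())

    def search(self, query: str) -> list[Document]:
        # Naive keyword match; a production system would use embeddings and ranking.
        terms = query.lower().split()
        return [d for d in self.docs if all(t in d.text.lower() for t in terms)]


if __name__ == "__main__":
    index = SearchIndex()
    index.ingest([SlackConnector()])
    print(index.search("planning notion"))
```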
Finally today, a couple of stories coming out of NVIDIA. The first is that they've released an update to their Jetson Orin Nano AI dev kit. The single-board computer is similar to a Raspberry Pi with a Tensor Core GPU attached. The new model is available at half the price, $249. It has a 70% boost to neural processing and a 50% increase in memory bandwidth. NVIDIA has aimed this device at hobbyists and makers who need something small to power AI and robotics projects.
The device is also suitable for lightweight industrial automation or other applications for small companies that want on-premise AI compute. Deepu Talla, NVIDIA's vice president of robotics and edge computing, thinks this device will actually be a big unlock, saying this is the time finally when generative AI capability is coming to the edge.
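If you're one of those hobbyists wondering whether a board like this is ready for your project, a common first step is just confirming the GPU is visible to your framework. A rough sketch, assuming a JetPack-compatible PyTorch build is installed (exact packages and versions vary by board):

```python
# Quick sanity check on a Jetson-class board: is the GPU visible to PyTorch?
# Assumes a JetPack-compatible PyTorch build is installed; details vary by board.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", torch.cuda.get_device_name(0))
    print(f"Memory: {props.total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device visible; check your JetPack / PyTorch install.")
```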
Lastly, NVIDIA has quietly grown a cloud business that could take on AWS. Two years ago, the company began renting out servers filled with its AI chips directly to businesses. Tucked away in the fine print in a recent investor disclosure, the company claimed the division could generate $150 billion in revenue from software and cloud services. That would be more than AWS generates annually at the moment, and in fact, more than NVIDIA's entire revenue for 2024. While it's a big claim, NVIDIA is in the business of delivering incredible forecasts and then beating them.
More interesting is what this implies for the AI cloud industry as a whole. NVIDIA presumably wouldn't be able to hit this target unless overall demand grows much, much larger.
The other thing interesting to me about this is just the question of what the future of NVIDIA holds. This is a company that is the dominant leader in its space right now that has inroads into so many other areas, but also antitrust breathing down its neck. It is going to be fascinating to see what happens as a new administration comes to bear. I think one thing that's for sure is that NVIDIA in five years is very unlikely to look exactly like NVIDIA of today.
For now, that is going to do it for our catch-up extended edition of the headlines. Appreciate you listening as always. And until next time, peace.