
Rabbit CEO Jesse Lyu isn't thinking too far ahead

2024/10/7

Decoder with Nilay Patel

People
Jesse Lyu
Nilay Patel
Editor-in-chief of The Verge, known for sharp commentary and analysis of big tech companies and political figures.
Topics
Nilay Patel: When the Rabbit R1 launched, its core feature, the Large Action Model (LAM), was missing, which made for a poor user experience, and the company faces fierce competition from giants like Microsoft, Apple, and Google. Rabbit's business model also carries legal risk, for example the unauthorized use of other companies' services. Nilay Patel questions whether Rabbit's business model is sustainable and uses Aereo as an example of the risk of a technical solution getting ahead of laws and regulations. He argues that if Rabbit's LAM works as intended, companies such as Spotify and DoorDash may move to block it from using their services, which would trigger legal disputes. He also presses on the technical details of the LAM, such as how Rabbit accesses services like Spotify and whether those services know about and consent to what Rabbit is doing. He points out that Rabbit accesses these services by running virtual machines in the cloud and simulating user actions, which carries both technical and legal risk. Finally, he expresses concern about Rabbit's market prospects, arguing that the company must overcome technical challenges, legal risk, and competition to succeed.

Jesse Lyu: Rabbit traces its origins to RavenTech, a company he founded in 2013 with the goal of building a Jarvis-like AI assistant. After RavenTech was acquired, advances in transformer technology convinced him the time was right to try again. Rabbit focuses on building AI agents that help users get things done rather than merely understanding their speech, and its goal is a general-purpose agent that can handle a wide range of tasks rather than one tuned to specific scenarios. He acknowledges Rabbit cannot compete with giants like OpenAI on funding, but believes it can stay competitive by shipping products quickly and partnering with those companies, integrating their technology into its own products. He says the R1 hardware carries a margin of over 40% and that cloud costs will not exceed the hardware profit in the near term; the launch strategy was aimed at product innovation rather than short-term profit. He explains how the LAM Playground works and how it differs from the first generation of LAM: the Playground is a generic, cross-platform agent system that can carry out tasks on websites. He responds to rumors that the LAM never existed, explaining that Rabbit initially wanted to use APIs but had to take other approaches because those services did not offer suitable ones, and he stresses that Rabbit does not impersonate users or take their credentials; it automates tasks by helping users click buttons. He says Rabbit will address the cost of heavy usage by launching next-generation devices and software services, and plans an App Store-like business model that earns revenue from user-created agents. He believes the core technology is viable while acknowledging that it will keep changing and the company must keep adapting; he says Rabbit does not currently have the resources to fight legal action from other companies but considers that scenario unlikely. He is confident that within three years Rabbit will deliver a general-purpose AI agent that meets user needs, and says the company will keep focusing on product innovation and user experience while working through technical and competitive challenges.

Deep Dive

Key Insights

Why did Rabbit launch the R1 with its core feature, the Large Action Model (LAM), not fully functional?

Rabbit launched the R1 with a basic version of LAM, which could handle specific apps and services. They aimed to iterate and improve it over time, rather than wait for a fully developed version.

Why does Rabbit use a virtual machine to interact with web services instead of APIs?

Rabbit uses a virtual machine because many services like Spotify and DoorDash do not provide APIs for voice-activated applications. This method allows Rabbit to simulate human interactions on the web.

Why is Rabbit confident that web services won't block their agent users?

Rabbit believes that web services will eventually see the value in agent users and will either adapt their terms or negotiate deals, similar to how the music industry adapted to smart speakers.

Why did Rabbit choose a handheld device form factor for the R1 instead of a smartphone or smart glasses?

Rabbit chose a handheld device to avoid the complexity and competition of smartphones and to create a more intuitive and conservative form factor that users can easily understand and use.

Why does Jesse Lyu, CEO of Rabbit, focus on making decisions based on current facts rather than predicting future scenarios?

Jesse Lyu prefers to make decisions based on current facts and immediate needs, rather than spending time on hypothetical scenarios. This approach helps the team stay focused and agile.

Chapters
Jesse Lyu, CEO of Rabbit, discusses the company's origins, starting with his AI venture, RavenTech, in 2013. He explains how advancements in transformer technology and large language models led to the development of Rabbit and the R1, a handheld device designed to interact with Rabbit's AI agent.
  • Rabbit's CEO, Jesse Lyu, started an AI company called RavenTech in 2013.
  • Advancements in transformer technology and large language models inspired the creation of Rabbit and the R1 AI gadget.

Shownotes Transcript


Support for Decoder comes from NYU's Stern Executive MBA program. For a lot of professionals, the idea of earning a higher degree can feel like a tug of war between furthering your career and stalling your business development. Well, with the NYU Stern Executive MBA program, you can fine-tune your emotional intelligence, enabling you to make smarter, more human-centered decisions that can drive your business forward. And you don't need to put your life on hold to earn your MBA.

Thank you.

Support for Decoder comes from Janus Henderson Investors. If you have questions about investing without paying for expensive advice, you might want to check out Janus Henderson Investors.

You can get access to over 90 years of world-class investment expertise. They offer free, personalized investment advice to help you evaluate your investment strategy. All you need to do is ask. To find out more, visit janushenderson.com slash advice. Spelled J-A-N-U-S-H-E-N-D-E-R-S-O-N dot com slash advice. Janus Henderson, investing in a brighter future together. There are no additional costs for advice beyond the underlying fund expenses.

It's time to review the highlights. I'm joined by my co-anchor, Snoop. Hey, what up, dawg? Snoop, number one has to be getting the new iPhone 16 Pro with Apple Intelligence at T-Mobile. Yeah, you should hustle down at T-Mobile like a dog chasing a squirrel, chasing a nut. That's a nice analogy, Snoop. On to highlight number two: T-Mobile families can save 20% every month versus the other big guys. Very impressive. Take it away, Snoop. Head to T-Mobile.com and get the new iPhone 16 Pro with Apple Intelligence on them. Now drop that jingle. ♪

See how you can save versus the other big guys at T-Mobile.com slash switch. Apple Intelligence coming fall 2024. Hello and welcome to Decoder. I'm Nilay Patel, editor-in-chief of The Verge, and Decoder is my show about big ideas and other problems. Today, I'm talking to Jesse Lyu, the founder and CEO of Rabbit.

It's a startup company that makes the adorable R1 AI gadget. It's a little handheld designed by superstar design firm Teenage Engineering. It's meant to be how you talk to Rabbit's AI agent, which then goes off onto the internet and does things for you, from playing music on Spotify to ordering an Uber and even buying things from Amazon. Rabbit launched with a lot of hype at CES and then a big party in New York, but early reviews of the R1 device itself were universally bad. Our own David Pierce gave it a 3 out of 10 back in May, saying...

most of the features don't work or don't even exist. And the core feature that didn't seem to exist was the most important of all, Rabbit's Large Action Model, or LAM, which is meant to allow the system to open a web browser in the cloud and browse for you. The LAM is supposed to intelligently understand what it's looking at on those websites and then literally click around to accomplish tasks on your behalf.

There have been a lot of questions over the past 10 months about just how real Rabbit's LAM was, but the day before Jesse and I spoke, the company launched what it calls the "LAM playground," which lets people actually use a bare-bones version of the system. And it does indeed appear to be clicking around on the web, although it is obviously very early and it is very slow.

So I wanted to know how Jesse planned to invest in the LAM and compete with all the other AI agents being announced that also promise to do things for you. For example, Microsoft just announced a new agent-y version of Copilot, and Apple's vision for the next generation of Siri is an AI agent, one that will run on the phone that you already have and have direct access to your phone apps and the data inside them.

It's the same with Google and Android and Gemini, and even Amazon's rumored next generation of Alexa. This is major competition for a startup, and Jesse talked about wanting to get out ahead of it. But what I really wanted to talk about is how Rabbit's system works, and whether or not it's durable. Not just technically, which is challenging, but also from a business and legal perspective.

After all, if Rabbit's idea works and the LAM really does go out and browse websites for you, what's stopping companies like Spotify and DoorDash from blocking it? You might have a strong point of view here, Jesse certainly does, but at some point there's going to be a fight about this, and it is not clear what's going to happen. Here, give me just a second to put this in historical context with an example. About a decade ago, a handful of startups all tried to stream broadcast television without permission or licenses,

by putting a bunch of TV antennas in a single location and then building apps to let people access them. This felt technically legal. What's the difference between all of your customers having their own antennas and putting all those antennas in a single place and letting your customers access them over the internet? And some of these companies were seriously innovative. The most famous was a company called Aereo, which spent a ton of money designing specialized TV antennas the size of a nickel so they could pack as many of them into a data center as possible.

I wrote about Aereo back then. I visited their offices in Brooklyn, I saw the antennas, I interviewed the CEO, the whole thing. Yeah, Aereo got sued by the TV networks, the case went to the Supreme Court in 2014, and you will note that Aereo no longer exists.

I don't know if Rabbit's another Aereo, and I don't know how all these companies will react to having robots browse their websites instead of people. And I certainly don't know how legal systems around the world will handle the inevitable lawsuits to come. I asked Jesse about all this, and you'll hear his answer. He thinks Rabbit will be so successful that all these companies will show up and want to make deals. I gotta say, I don't know about that either. I do know that this is a pretty intense and occasionally contentious interview. Jesse didn't back down, and that means we got pretty deep into it. Let me know what you think.

Okay, Jesse Lyu, the founder and CEO of Rabbit. Here we go. Jesse Lyu, you're the founder and CEO of Rabbit. Welcome to Decoder. Thank you, Nilay. Glad to be here. I'm very excited to talk to you. Rabbit is a fascinating company. The idea for the R1 product is fascinating. I think a lot of people think that something that looks like the R1 is the next evolution of

smartphones or products or something. And then there's the company itself, which is really interesting. And you've got a connection to Teenage Engineering, which is one of our favorite companies here at The Verge. And you've got some news to share about opening up Rabbit's large action model so people can play with it. And it's kind of an early version. I really want to talk about that. But let's start with Rabbit itself. The company has not been around that long. The R1 just started shipping six months ago. What is Rabbit? How did the company start?

Here's a little bit of history of it. So I actually started an AI company back in 2013, which is called RavenTech. We were at YC Winter 15 batch. And it's basically my personal dream to chase this grand vision that, you know, I guess...

Me being this generation grew up, we watched so many sci-fi movies. You know, there's AI stuff here and there. And I guess every geek wants to build their own Jarvis at some point, right? So that's exactly how I started RavenTech 11, 12 years ago. Back then, you know, we had this...

idea. We had this direction, but the technology back then, obviously there wasn't like GPU training, there wasn't transformer and stuff. So we worked really hard on the early days of like voice dictation and NLP and NLU, which is natural language processing and natural language understanding. So the technology wasn't there. We tried our best. We actually built the entire cloud system and the hardware, which is similar to what we have in Rabbit today. But the form factor was more of a smart speaker, as

we all know, back 10 years ago, everyone was chasing that form factor, and ultimately the company got acquired. So it's not a new idea for myself, but it's definitely a new opportunity that when I saw the progress on the research side of Transformer, obviously I got a chance to try like ChatGPT or GPT's API at a very early time. We were really impressed because, you know, we felt the timing is right, because being able to do something like R1 or, you know,

more sci-fi Jarvis stuff, you really need to figure out two parts from the backend. One is that you want to make sure that by talking to the device, the computer or device actually understands what you're talking about, right? Which is the transformer, the large language model part. But

We believe that around 2020, 2021, we believe that the transformer is absolutely the right path that OpenAI and other companies are heading to. We believe that portion has been solved, will be solved. So our focus immediately shift to, you know, after this device can understand you, can it actually help you do things? And

The company I started 10, 11 years ago, RavenTech, we were actually one of the first companies that we designed a cloud API structure that, you know, after the recognition, after the understanding, the query got sent into different, you know, APIs. The system has a detector to understand, oh, maybe you're looking for a restaurant on Yelp.

Maybe you want to play a song from this streaming software. But, you know, I guess 10 years ago, there's a great opportunity of APIs. There's a lot of companies working on APIs. And if you remember, like, 10 years ago in Silicon Valley, like, everyone was talking about maybe in the future the entire operating system will be just HTML5s, right? But that didn't live quite long. So I think now when we're looking after 2020, like, the API business is not really, like,

major business for most of the popular services. So we also want to take an evaluation of whether we can build a generic piece of agent technology, which is really hard because I believe the current AI is all generic. Obviously, there's a lot of people doing vertical stuff, right? You can build an agent for Excel, you can build an agent for legal documentation process. But I think the biggest dream, what really makes us excited is the generic part of it.

It's like, can we build something that, without pre-training, without knowing what people want to do, they just tell it whatever they want and it will be smart enough to handle all the tasks. So that's why we felt the opportunity was right. And we started Rabbit right after COVID. Yeah. The idea that agents are going to be a big part of our life, and in particular, general purpose agents that go take actions for us on the internet. I've heard this idea from all kinds of folks, from startup founders like yourself to the CEOs of the biggest companies in the world. I want to come back to that. Yeah.

That's a big idea, but I just want to stay focused on Rabbit for a second. How many people work at Rabbit today?

At the current moment, we're roughly around 50 people, 50 to 60 people if we plus the interns. But when we started, the company was seven. And by the time we launched at CES, it was 17. So just growing the team within four or five months is quite a challenging job for me. Yeah. So CES was a big launch. We were there. Our own David Pierce was at the party. The Rabbit was introduced. You gave demos in a hotel room, I think. And then you had the launch party here at the TWA Hotel.

at JFK, which is very cool. The thing's been out, but you've been growing. You said you started with 17 people in January at CES and you have 50 now. What are you adding all those people to do?

Most of the part is just engineers. We have a very small group of design slash hardware design or ID that we started from day one. And most of the new folks are working on AI and infrastructure perspective like cloud. Basically, we not only ship the hardware, we build the entire web OS for it. So I think the major work is always going to be in the software part.

Yeah. How is the whole company structured? As you go from 7 to 17 to 50, you obviously have to decide how to structure Rabbit. How is that structured now? How has it changed? We are primarily located in Santa Monica. We have a device team, really great folks in Bay Area. And we have a couple of research engineers here and there. So it's kind of like mostly in-person, but somewhat hybrid system. And the way that we find our people is mostly by internal referring. So we're not like

you know, spending money chasing for like agents, agencies to do the hiring. Most of the good folks, we basically get through an internal recommendation. Yeah. But how are your 50 people you have now, how is that organized inside the company?

It's really flat in a sense. You know, we have different departments, obviously, you know, the hardware, ODM, OEM, that part is in Asia. You know, we have our ID team in collaboration with folks in Stockholm, Teenage Engineering in this case. And we do our own like graphics and marketing, all that in-house. And then for the software part, we have the device team that they need to work with, the ODM, OEM.

And we have the cloud team. We have the AI team. That's basically how many teams we have. And each team, there's obviously crossovers. And we basically work on a project basis. So there is no like really crazy hierarchy going on. I mean, the biggest company I ever led was back in the Raven. I believe by the time we got acquired, we were like 250 people. So this is like still within my comfort zone to manage like 50-ish people. So, yeah.

Teenage Engineering is clearly a big part of the Rabbit story. They designed the R1, and their co-founder Jesper Kouthoofd is your chief design officer. How much more hardware are you designing right now? Are there iterations to come? Do you have a roadmap of new products?

Yeah, so the way we work together: obviously this is not the first time we collaborate. We did the collaboration back in Raven. First of all, Teenage Engineering is my hero company. It's basically a fanboy-dream-come-true story for me. I really appreciate their help over the years. The way that we work together is very intuitive. There are obviously many ways that are considered to be like the proper way of designing a project like this, but I think we're out of the

ordinary way of doing this. I can give you an example like back in the Raven, all we did is that we had probably two meetings in person, a couple of phone calls, no email, no text messages. We set up a secret Instagram account that we just share sketches and we just hit like on that Instagram account. And that's how we designed the previous Raven project. This time is even quicker. I think I shared this publicly. I think we spent like probably 10 minutes

on deciding the R1, how it's going to look like. And, you know, we have quick sketches here and there. And ultimately I pushed Jesper back for using the current color, which is the orange from Rao. We do have maybe like two or three projects in our mind, but I think by the end of this year, our current focus is to really get this LAM pushed to the next level. So yeah, stay tuned.

I think one thing people will realize is that this team does hardware really quick, because when we started sketching the R1, it was last year, back in November.

And we introduce that by January and we start shipping by April. So if we want to launch the next project, it's going to be like roughly, I don't know, six to eight months timeframe, certainly not like a year or two. But that being said, I think I was having my own community voice chat yesterday. I was talking to people about the current R1 because I really don't like the current consumer electronics, like one year.

per generation by default regardless, right? Like we've seen that from the smartphone companies and doing annually release for all this stuff with minor changes. When we started designing the R1,

the entire RabbitOS runs on the cloud. That means that this piece of hardware, even though it's $199 and not the latest chips, is really capable of offloading the future features to this device. So I don't think R1 is like a one-year lifespan device. And so does our community thought. They think they can tweak so many things about it. So in that sense, we're not in a rush to drop another version of it, but we do have different form factors in our mind at the moment.

And is Jesper actively working on those designs or is his chief design officer, is he working on something else? He was literally in our office last Sunday, which is three days ago. Yeah, we are actively working together. Okay. How much money have you raised so far?

That's a good question. I want to be accurate, but it's somewhere around 50 million total in the whole lifespan. The last part was 35 million, led by Sound Ventures, and also Khosla Ventures and the Amazon Alexa Fund and Synergis. So last round was 35. And if you consider all the money together, I think it's around 50.

When I look at the amount of money that other AI companies are going out to raise: OpenAI, right as we're speaking, just raised the biggest round ever in history to go build, obviously, a foundation model, digital god, whatever Sam Altman thinks he's doing. Do you think you can compete at $35 million a round? No, but I think talking about competition, money is one part of it.

I've considered myself a veteran because I've done startup before. I know how it works. Certainly money is very important, probably most important in the early couple of years. But I think when we talk about competition, we ultimately want to ship products to consumers, right? Because the way I look at it is that people are not buying electricity, right?

Electricity is basically controlled, here in California, by Southern California Edison, right? You have an address, you have to pay for it regardless of how much electricity you're using. But I think

People are ultimately buying microwaves, cars, motorcycles, televisions. People are buying products powered by electricity. So research-wise, I can say very clearly, at this moment there's no way that Rabbit can compete with OpenAI and Anthropic and DeepMind and Google. But

How can we play the game? We become partner of everyone, right? So R1 is hosting every single model, the latest model from these guys. And we offer their capabilities combined with our product and addition on the Rabbit OS and all the features offered to our user. So there is no way we can compete over on a research perspective, but we ship product fast, right?

You saw OpenAI just released the Realtime API, as they call it. I was actually invited to the meeting, but I was launching the LAM playground yesterday, so I couldn't be there in person. But they're offering an API for people to build an agent with it. But yesterday, we dropped the LAM playground, which you can go to any website and just do it by voice. So I think...

competition is a different magnitude. I think money is definitely important. We hope that we can raise more money, of course. But I think right now, if you talk about competition, we have to play smart.

They are good on research. We are good on converting all the latest research into a piece of product that user can use today. Let's talk about what that product is today. So right now you have the R1. You can buy it. It's a beautiful piece of hardware. It is orange. It is very striking. It has a screen. It has a scroll dial. And then it has a connection to your service in the cloud, which goes and does stuff for you. That costs $199. Yeah.

Are you making money on the sale of each individual R1 unit right now? Correct. What's the margin? What's your profit on R1? It's a very good margin, even though I cannot tell you the details, but it's over 40%.

To make it over 40% on the hardware margin of the R1. On the hardware margin. We run the calculation. We might have to redo the math because yesterday, literally after dropping the LAM playground, the server crashed like multiple times. So we might need to redo the calculation. But again, first of all, in the beginning, we are making money. Now we have these more powerful features moving forward.

I think I haven't heard of a company that went bankrupt because they got a service that is so popular that they couldn't afford the cloud bills, right? I think if you build a good product, there will be – Well, hold on. I can draw that line for you. So it's $199. You're making over 40%. So that's between $80 and $90, right? It's not 50%, which would be $100. So it's a little less. So between $80 and $90 in margin, right?
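For reference, here is the arithmetic Nilay is doing out loud, written as a small Python sketch; the exact bill-of-materials cost is not disclosed, so the 40-50 percent range from the conversation is the only input.

price = 199                            # R1 retail price in USD
margin_low, margin_high = 0.40, 0.50   # margin range implied in the conversation
print(round(price * margin_low))       # about $80 of gross margin at 40%
print(round(price * margin_high))      # about $100 at 50%; "over 40% but under 50%" is where the $80-$90 estimate comes from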

That margin has to, you do have to pay your cloud bills, right? So is that margin all being fed into your cloud bills? Obviously, we have these dedicated instances with all these cloud competitors, right? I mean, don't get me wrong, we're hosting on AWS, and there's AWS, Google Cloud, Microsoft Azure. On the LLM partnerships, we have Anthropic, OpenAI, and Gemini.

That's a lot of companies that like to make a lot of money. They're not cheap to partner with all those companies. They're not cheap. But what I'm trying to point out is that they are competing so fierce in a way that they have a lot of good benefit for the early startups. I have to shout out for all these companies. Sure. So they really want to figure out a way to help you on board and maybe making your money in the long run. But I think at this current scale, we can totally handle it. Yes. Okay.

So we got great deals from them. So if I buy an R1 from you, you take $90 of margin or $80 of margin. At what point, how much do I have to use my R1 to turn that negative for you? Because everything I do with an AI, that's a token. That token costs money. It costs multiple services. Your bandwidth costs money. It all costs money. How much does a single R1 user have to use their R1 to...

to take up $80 or $90 of margin from you? I think if a moderate user is using it in a non-robotic way or non-malicious way, it's going to be really hard to break that negativity. Is that two years' worth of usage? One year, six months? I think it's definitely over a year and a half. I'm not sure about two years because there are new features going to be implemented into this, including the LAM playground and teach mode. But yeah, so I want to...

kind of like share my understanding to this is that yes, we did the mathematics. We are making money. No problem. We wish we can sell more, which we're hoping that we can sell more. That's going to definitely help. But the

point of this whole launch strategy is not set for making like X amount of money in like the first six months. I think there's other companies that are really greedy about how they want to launch their product. I'm not going to even mention the name, right? So that won't work, right? That won't work. So I think if you look at any company

a new generation of product if the founder and the company and the board decide to set up a strategy that let's squeeze every single penny out of the user

It's not going to work because we know AI is very early and we know that there's going to be a lot of things that go wrong. In fact, I believe that every company, regardless of if you're big or small, if you work on the latest AI stuff, the first two weeks, it's going to be a disaster because you're going to find a lot of the misbehavior by the AI. You're going to find a lot of the edge cases by the model, right? So I think the whole thing is too new. There's no way that we want to like

charge for subscription, that's even like worse. I don't like that strategy in general. So even though this sounds very concerning, that, okay, you can easily twist my, uh, story, or someone might twist my story, be like, oh, Rabbit is doing everything great except they're gonna go bankrupt no matter what, right? I think it's a very stupid way to think in that sense, because

a great innovation, you have to focus on the innovative part first. Then when you figure out the money part, if we start figuring out the money part, none of these making sense. Yeah. None of these making sense. I think,

You know, there's another, you know, people in the industry that they have a great understanding of everything. And then they decided to release a wallpaper app charge for 50 bucks per month, right? Hopefully that works, I guess. Yeah. You can go talk to that guy and you say, hey, there's no way you're going to bankrupt because your money checks, all these equation checks. If you charge for this, you're going to be making money. But...

That's based on the perspective that the whole logic needs to stand up, right? So I think I'm not really wasting a lot of time, of my time at this point on trying to basically fine-tune a little bit about mathematic equations to make this more like 20%, 50%. Obviously, as a startup, we need to survive, right? And I think even though

we have had like a roller coaster ride since launch, but we're growing and we're surviving and we're still pushing the features that none of the other devices can do, which is a very, very good sign. So, yeah. We're going to have to pause for a quick break here. We'll be right back.

Support for Decoder comes from ServiceNow. AI is set to transform the way we do business, but it's early days, and many companies are still finding their footing when it comes to implementing AI. ServiceNow partnered with Oxford Economics to survey more than 4,000 global execs and tech leaders to assess where they are in the process. They found their average maturity score is only 44 out of 100.

But a few pacesetters came out on top, and the data shows they have some things in common. The most important one? Strategic leadership. They're operating with a clear AI vision that scales the entire organization, which is how ServiceNow transforms business with AI. Their platform has AI woven into every workflow with domain-specific models that are built with your company's unique use cases in mind. You,

your data, your needs. And most importantly, it's ready now, and early customers are already seeing results. But you don't need to take our word for it. You can check out the research for yourself and learn why an end-to-end approach to AI is the best way to supercharge your company's productivity. Visit servicenow.com slash AI maturity index to learn more.

Support for Decoder comes from Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001 and more, saving you time and money while helping you build customer trust.

Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by Vanta AI. Over 8,000 companies like Atlassian, Flow Health, and Quora use Vanta to manage risk and prove security in real time.

Get $1,000 off when you go to vanta.com slash decoder. That's right. That's vanta.com slash decoder for $1,000 off.

They're not writers, but they help their clients shape their businesses' financial stories. They're not an airline, but their network connects global businesses in nearly 180 local markets. They're not detectives, but they work across businesses to uncover new financial opportunities for their clients. They're not just any bank. They are Citi. Learn more at Citi.com slash WeAreCiti. That's C-I-T-I dot com slash WeAreCiti.

Welcome back. I'm talking to Rabbit founder and CEO Jesse Lyu. Right before the break, I was asking him how many AI tokens a user would have to use in order to cost Rabbit more money than the device brought in. His response was that Rabbit had done the math and it was fine. Then he started calling out his competitors and others in the space. Let's get back into it.

So one, I don't think anybody has ever linked criticism of Humane to criticism of Marques's wallpaper app on our show before. Well done. I think Marques has a very different view of where his expertise is and what went wrong with that app. And maybe one day we'll talk to him about it. But my question for you when you talk about growth and you talk about the unit economics of the Rabbit is on some curve –

it becomes unprofitable for you. Just me having a Rabbit for longer than 18 months becomes unprofitable for you. That's the moment that you would charge a subscription. You would say, to continue using this thing,

it can't be negative for our company. And that's the thing that I'm pushing on here. I think there are multiple solutions to that question. One is that, obviously, if every user uses an R1 for more than 18 months, there's a couple of solutions. One is that we're going to launch the next generation device. And maybe multiple devices, still profitable from the hardware. Two,

I think we have had this prepared since day one. From last week, we rolled out teach mode to a very select group of testers. I would love to give you access. So please reach out to us later on. We'll see if we can help you set it up.

But we rolled it out to a very small group of alpha testers, roughly around 20, 25 people, to be honest. And then over the last 72 hours, I saw more than probably 200, more than 200 lessons or agents have been created through teach mode. And if you look at the current Apple ecosystem or Android ecosystem, I think the hardware is not going to be the number one money contributor. It's really hard to make money

on top of the margin of the hardware anyway. So at some point, you want to convert that into services and software, right? That doesn't mean that you're going to charge subscription for the device. What I think is very promising is that we're going to slowly roll out the teach mode to beta testers and hopefully by the end of this year, we can like

grand-open the teach mode as we promised on day one. So all these lessons created, or rabbits or agents created by each independent user or developer, they can be considered as a new generation of App Store. On that we can make big money.

Using the App Store economics of taking 30% of the transaction. I don't want to invent any. Exactly. I think it's very – I think I'm not trying to invent any new business model. I think as a startup, it's very risky to invent your own business model. But there is a very great business model out there, which is App Store. And that's contributing like, what, 70%.

of the annual income, right? So I'm just curious, just as I've played with R1s and looked at the device, I've always wondered how on earth are you making money at $199? So that makes sense to me.

When you think about what the Rabbit is actually doing, I asked it a query. It shows me a beautiful animation on the screen, which is adorable. And it goes off into the web and uses a bunch of APIs and now the new large action model, which is the news, right? Yesterday, you announced the large action model playground. People can watch it work. I've seen the LAM click around on The Verge website just to read headlines, which is neat.

Is that the back end of this? I ask the rabbit to do something, and in the cloud, it goes and clicks around on the web for me? We have to separate two different systems here, or maybe three different systems here. Let's talk before yesterday, because yesterday is really a great milestone. Before yesterday, what happens is that you talk to the R1. We have an intention triage system, which basically, we convert this audio

to a text, we send that text to our LLM providers, and then we have an intention triage system from there. Like after the LLM understands the intention, we send to different APIs or different features. There are a lot of features which is on device, right? Like set a smart timer or something like that. Or there's like a simple question, but we think that there's other services or model probably answers better than the default LLM. So sometimes we send a

particular query to Perplexity. Sometimes we send a particular query to Wolfram Alpha. So you can understand it as, you know, the intention triage system dispatching this to different destinations. And then the relative features will trigger. But after yesterday, which we have this playground

And that's a first stepping stone towards what we really want to create, which is a generic cross-platform agent system. It has to be generic, which in this case, it is generic. It is not cross-platform yet because it handles only websites. It will be cross-platform very soon. But with this generic website agent system, essentially you can just talk to Rabbit and be like, hey, go to ABC website or go to somewhere and then help me do this.

So that's exactly how we wish to design a product. And I think everyone in the industry is heading towards this direction, which is you say something, we understand you and we help you do it. And what happens, as we put a window on the Rabbithole that you can see, is that the agent will break down different steps. I'm going to Google first. I'm searching for The Verge. I'm clicking to The Verge

home website. I'm trying to find this title as you requested. I'm clicking the button to share this. And in theory, you can chain multiple steps, infinite steps, follow-up queries to the system. So I gave you an example. I tried. I think I showed this to another reporter is that, hey, go to Reddit first, right? And search for what are people's recommending for the 2024 best TV, 4K HDR, get that model, then go to Best Buy.

add that to my cart. If Best Buy is out of stock, then search on Amazon. If they both are out of stock, get me the second recommended model. So you can actually chain different queries and you can pause it, you can add, you can tweak it, you can fine-tune it. So it's really just like a playground. You can freely explore the system and the system is fairly good enough to do daily tasks. And people, obviously, developers and our hackers and enthusiasts,

white-hat hackers, of course, are giving us impressive showcases. There are people using the LAM playground to create an app just by talking to R1, because there are third-party AI destinations where you can just use a prompt and create an app and download the code and stuff like that. So it's really amazing to see all these great showcases just within, actually, precisely 24 hours.
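To make the chained, conditional request Jesse describes concrete (Reddit, then Best Buy, then Amazon as a fallback), here is a minimal Python sketch of how such a plan could be written down. The step names and structure are hypothetical illustrations, not Rabbit's actual interface.

# Hypothetical outline of a chained agent request; illustrative only, not Rabbit's API.
plan = [
    {"action": "search", "site": "reddit.com",
     "goal": "find the most recommended 2024 4K HDR TV model"},
    {"action": "add_to_cart", "site": "bestbuy.com",
     "goal": "add that model to the cart",
     "fallback_site": "amazon.com"},   # try Amazon if Best Buy is out of stock
    {"action": "retry_with", "goal": "the second recommended model"},  # if both are out of stock
]

for step in plan:
    print(step["action"], "->", step["goal"])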

Yeah. So I want to make the marker between yesterday and the day before it, right? You announced the Rabbit at CES in January with the LAM, but it wasn't there. Why announce it without its fundamental enabling feature? It is not accurate. I want to take this opportunity to address that. If you go to the connections, now we have seven apps. By day one, we had four apps.

Those are the first iteration of LAM, which is not a generic technology. We never claimed at CES that you could go to Amazon and order something, right? We said we were working towards this piece. And at day one there were four apps that you can connect. We're going to add more services. And over the past couple of months, we did add three more services. So as of today there are seven services in total,

Then we keep working on the current LAM playground. And when the time is right, we swap it. So there's a lot of debate saying LAM wasn't there. That is not true.

I can trace back to where this rumor starts. It's where there are people hacking to the R1. They saw R1 is fundamentally powered by an Android system on the local device. And obviously, that should be the case. It would be more sketchy if it's non-Android. So at the bottom of it is Android system. And they dump the code, which you can do that. In fact, every good piece of hardware in history has been hacked. Someone goes into this and...

jailbreak the R1, which I guess every piece of hardware is jailbreakable at some point. And obviously, that's flattering to us. If you build software and no one even bothers to jailbreak it, it's probably not a good form factor anyway. So people jailbreak it, find out the Android code. They dump the Android code to another media. And they say, hey, there's nothing about AI here. There's nothing about LAM here. Of course, because all the stuff is in AWS.

So that's where the rumor starts. And then there's a lot of media, and they just take that piece and reiterate that. The apps you started with, Spotify, DoorDash, there are a few others. Those are APIs, right? You were using their APIs. You were actually opening Spotify on the web in Chrome and clicking on it. Yes, yes. Why?

What do you mean why? There is no API. That's the most brittle way to use Spotify I can think of. There is no API. There is no API. You made a smart speaker. Spotify can run on smart speakers and other kinds of devices. That's a partnership. That's a partnership. Go to Spotify, read their documentations. There is a specific line is that you cannot use API to build a voice-activated application.

Literally. So Spotify right now on the R1, when I ask to play a song, it goes and opens Spotify on the web somewhere. Goes to the window. Yes. And then you're restreaming the audio to my device through your service? Correct. Correct. Does Spotify know that you're doing this? Yes. And they're okay with that? We have a conversation. They realize this is agent behavior. And we said, look,

We ask users to log in on your website, and they're 100% legitimate users, and they're paid users. And when we do the trick, we help them click the button. I've always been very curious about this. I've been dying to ask you these questions. So I ask my R1 to play a song. Somewhere in AWS, a virtual machine fires up, opens a web browser, opens Spotify, logs into my Spotify account using my credentials, clicks around on Spotify,

pushes a button to play a song, and then you capture that audio and restream it to me on my R1? Everything is accurate, except we don't help you log in. You have to log in for yourself, and we don't save your connection. But the part where you are restreaming audio that Spotify is playing to your virtual machine to me,

You're doing that. We basically give everyone a virtual machine, which is a VNC, which is 100% within policy, right? And you have the rights to access that VNC. And on that VNC, we basically work directly on the website, just like today's LAM playground. So we're not getting the audio from a server, from Spotify or somewhere else. We're basically going to the Spotify

website and play and do the things for you and play that song for you. Okay, but where does the song, where do the bits go? The bits come to the virtual machine and then they come from the virtual machine to my rabbit. So you are, you're restreaming the song to me. I am not restreaming the song to you. I'm basically presenting the VNC directly to your R1. How did, wait, explain how that works. Maybe I'm not technical enough to understand how that works. You're presenting the VNC to my R1.

Correct. So it's running locally on my computer? With no UI. Okay, I see what you mean. So I'm logged into a cloud computer. The R1 is the client to a cloud computer. And Spotify is playing on that cloud computer and the R1 is taking out audio. Okay, that raises like a million extra questions, right? Yeah. First of all, I see where you're going. Before you go deeper, I just want to say, first of all,

We're not using API. Second of all, to say LAM is not there, that's a false claim.

Because we have all these services. If you really pay attention to their documentation, there is no API for like DoorDash. There is no API for Uber. Right. But I just want to be clear. That's a choice those companies have made to prevent companies like Rabbit from automating their services and disintermediating their services from the user. Right.
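To make the cloud virtual-machine flow described above concrete, here is a minimal sketch of the general pattern using Playwright, a common browser-automation library. This is not Rabbit's code; the Spotify selector and the login hand-off are assumptions, and the point is only that the automation clicks buttons inside a session the user has signed into themselves.

# Illustrative sketch only: a browser session in a cloud VM where an agent
# clicks on the user's behalf. Not Rabbit's implementation.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)   # no local UI; the session lives on the server
    page = browser.new_page()
    page.goto("https://open.spotify.com")

    # The user signs in themselves in this session; the automation never
    # stores credentials (per Jesse's description of the flow).
    input("Press Enter once you have signed in...")  # stand-in for a real hand-off step

    # The agent then acts like a person would: find a play control and click it.
    # The selector below is made up for illustration.
    page.click("button[aria-label='Play']")

    browser.close()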

So as you think about these agent models going out onto the web, however they're expressed, whether it's the LAM, whether it's whatever you're doing before the LAM playground hit, all of those companies are going to have a point of view on whether agents can use their services in this way. That's pretty unsettled. And I'm curious, you know, you have a few services. They might have just said, OK, let's see how this goes. But over time, you're going to enter into a much more complicated set of negotiations that will actually be

probably determined by the big companies making deals, right? You can see how OpenAI or Microsoft or Amazon would make a deal to have DoorDash accessible by agents. And DoorDash would say, we've made this deal. You can't be accessible. How do you solve that problem? It's not a problem for now. We'll see how this problem evolves. But I remember when Apple was relatively not so big, I mean, not as big as today,

When I read Steve Jobs' book, there's one chapter he said, okay, go talk to Sony from tomorrow, $0.99 per track. Remember that moment? So at some point, this level of negotiation needs to be happening. I'm not sure if we're leading this or someone else is leading this, but this is the working proof that we're not using API. And I don't think the services are...

not building API just because they're trying to prevent people from automating the company, it's just because API to them is not making money. And they for sure will love to set up a negotiation in some phase later when we grow bigger. But I guess...

We tried to reach out to Uber. We did before launch. They're like, who are you? You're too small. That's it. We don't care. And so then when you... you have Uber on the R1 now. That's opening the Uber desktop app? No, the Uber website, which is very janky, which is very useful right now. That's what I'm asking. Sorry, what I meant by desktop app is in the web browser, you're calling an Uber. Yeah. If you're running on Android, why not open an Android virtual machine and use the Android app?

It is a little bit more technical to achieve that, which we are working on the other platforms. I think I showed a very select group of people of a working prototype that LAM is operating on the desktop OS, such as Linux with all that local apps. So we're definitely heading to that direction. Is there a possibility they can detect the fact that these are not human users, but in fact, agent users?

I guess there's always a way that you can detect. But I think the question is, and this is actually a very good topic that we're talking about here, is that, you know, think about CAPTCHAs. Sure. The LAM Playground or any capable AI model now can go there and solve text-based CAPTCHAs. So their old systems to prevent automated systems like this are currently failing.

And this is an industry effort to push everyone in the industry to rethink, now with this AI, now with all these agents, how their business is going to reform, or how all these policies need to be changed. I do agree, this is a very complicated topic. But what I can see is that this is not Rabbit doing some, you know,

really fancy magic here. Like every company is doing this. We have other agent companies, like MultiOn; even the GPTs are doing this, right? So this is like a new wave emerging for all these old services that they have to think about. But I can tell you my personal experience dealing with scenarios like this. Like when we first started building one of the first smart speakers back in like 2013, you know, all these music labels, they don't care.

They don't care until everyone's building smart speakers. They're like, okay,

we have to resell the whole copyrights for this particular form factor. I guess at the end of the day it's about money, right? They want to sell the same copyrights to as many form factors as they want if there's a popular one. So we're okay to have this kind of negotiations, but certainly, like you said, there are bigger companies that are doing similar things or even more advanced things that need to be addressed. I'll give you another example, like Siri and Microsoft.

There's a feature called Microsoft Recall, right? Which they put back that feature now, and I think they relaunched it. Yeah. Which is very aggressive. That is taking a screenshot of your local computer. This is what I see happening in AI in the early days. There's going to be a lot of like different takes and tries, and eventually people will reconcile and agree on a single piece of like terms and agreements. But if you compare that with

how we automate the website through their interface, the most important part is we don't create fake users. We don't create spam users. We don't log in on your behalf. And you are you. The way I help you do things is by helping you click the buttons and the mouse. It's the equivalent of if I want my buddy to help me. I'll give you an example. So if I'm busy, I'm about to head into a meeting.

I want my buddy to help me order a burger from DoorDash. All I need to do is I unlock my phone, I pass my phone to my guy, and my guy help me click that. And in this process,

I'm not sharing my credentials with my buddy, right? I'm not telling him my phone password. I'm not telling him my DoorDash password. I'm not even sharing my credit card info. All he does is just add to the cart and click confirm. That's it. So this guy is the equivalent of the first generation of LAM, which, unfortunately, we don't like. So that's why we work so hard. Now we have the Playground, which is more generic technology.

Well, let me ask you about that difference between the first generation of LAM and the Playground. The Playground sounds like the thing you've always wanted to build, right? You actually have an agent that can look at web pages, understand them, take action on them. The first one, it might have been a LAM in the broader definition, but...

But as technology, it was expressed as testing software that was moving in an automated way through these interfaces, right? You weren't actually understanding the interfaces. You were able to just navigate them. Well, yeah. Because that's like pretty normal robotic process automation stuff. Were you just building on that kind of technology while the LAM came into existence? No. No. Okay. We're working on neurosymbolic, right? But even in the first versions? Yeah. Yeah.

But you could only understand... So, for example, the question I've always had is, what happens if Spotify, before the LAM exists? Because I understand now the claim is that this version can understand every website. But if Spotify changes its interface, or DoorDash changes its interface, Rabbit was kind of getting tripped up, right? I'll tell you, Spotify changes its interface all the time. Right. And I think in the past...

six months, five months since the first LAM added the Spotify connection at launch, I think we probably put Spotify under maintenance maybe two times, one hour in total. That's a very hard proof. Yeah. But that's a hard proof. But just taking it for what it's worth, I think that means it's not good enough.

Right? The Spotify app on my phone never goes down for maintenance. And if the claim is the agent can go take actions for me, I have to rely on that at 100%. And so I think the question for me that I have, this whole thing is the delta between what you want to do, which is have agents go and crawl the web for me, and the reality of what we can do now is,

Actually, the middle ground is APIs, right? The middle ground is not so brittle. That makes more sense to me. The agent would, instead of using an interface designed for my eyes, use an interface designed for computers. I really want to laugh hard. Okay. Really. Two things. I disagree that Spotify is not working well. Spotify has been working amazing. Sure. In five months, maybe two

times we put it under maintenance, and the total amount of time we put it under maintenance is probably under one hour. You can ask any R1 user. That's not through API, which is impressive. That's through an agent. That's through an agent to handle... I get that it's impressive for an agent. I'm just saying... You said it's not good enough. I said it's not good enough. It is not good enough. Where's the curve where it's 100%?

Because API is 100 percent. That's my second part. Yes, API is 100 percent. But you're relying on they give you the API that's stable, that works. I'm the user, I don't care. That's what I'm getting at. As the user, why should I care?

Users don't need to care. We need to care. We need to care. And we need to care because we checked what are the good APIs we can use. Don't get me wrong. Perplexity's API has been great. Sure. OpenAI's API, you know, breaks every...

Every day or two, they say, we observed an issue. You can follow the ChatGPT downtime. There's like a very detailed record of how many, you know, how many breaks per day. It's, you know, more than, I guess, more than 10 on average that the ChatGPT API breaks or is unstable, whatever it takes. We have a notifier. So API, first of all, API is not stable. It is not stable. Sure. And

You have to chase for the services people want. We want to offer this music feature, and we think Spotify is the best experience overall. And we want to chase for this partnership, and we're still chasing for this partnership. But to talk from a technical perspective, why I said I don't like APIs is because think about Alexa. Alexa speakers are all using APIs.

And you literally have to go there and negotiate because, like I said, today, not everyone is opening APIs. A lot of the traditional services don't have API. And then startups, for startups, it's impossible. When you go talk to them, they think you're too small, right? We did that. We just did that to everyone. They think we're too small. They don't care. So we can't get API, right?

Does that mean that we're not going to figure out an alternative way to make it work? No, hell no. We're going to make it work. And this is exactly how we make it work. So we care about users to use this feature. We don't care about how to do it. In fact, because we know that you don't care how this has been done, I don't want to spend six months, eight months, suiting up to talk to Spotify people and Uber people and one by one, let's do that, right? Sure.

Well, the promise here is you're going to eventually have a general purpose LAM that is just using the web for you, right? So you hand your phone to a buddy, which is why you can make the rabbit device and just talk to it. And it goes off and does stuff in the general case. The enormous –

Death Star that everyone sees is that Apple has announced substantially the same feature for Siri on the iPhone. And Apple can get the deals. And Apple can pull developers into an API relationship locally on the phone with Siri. And Apple honestly can just burn money until it chooses not to build a car or whatever it wants to do.

And getting people to buy another device that doesn't just fall back to the Spotify app on iOS when it breaks seems very challenging. How do you overcome that? Because if the technology isn't 100% better 100% of the time, that feels like a challenging sale. Yeah, this is a fun part of the game, really. How do you win the game? First of all, speaking for myself, I've sold my company before when I was 25. I don't want to build another app.

I should chase my same dream because I really think that the grand vision that I have and our team was working on is actually the current direction everyone's chasing. And it just feels so bad if you don't chase the same dream, no matter how hard it is, really. And in reality, we feel blessed and happy to see the exact situation, because

we don't have any serious competitors from startups, to be honest. Well, there's one, and they seem like a pretty spectacular failure, right? Humane launched with a lot of money and a big T-Mobile partnership and a subscription fee and Time magazine and all that stuff. And it doesn't seem like that has gone very well. Yes. So I said, as of right now, I don't think we have serious competitors from startups. And then...

When we talk about competitors, obviously there's Apple, there's every big company out there, including OpenAI. First of all, I think this is good for us because it validates our direction. It's absolutely correct. And I'm also curious about

what is going to be the definitive route for generic agent technology. Because different people in the industry might have different ideas, right? It's still a debatable state. There is no eval for agent systems yet. There's no like very good eval yet. And there, you can see a lot of different research houses and companies trying different routes. Obviously there's the API route, like GPTs, which didn't really take off. There's the pure neurosymbolic route. There's the hybrid route. There's like,

all this multimodality. So we're still in the phase of everyone trying their own recipe, and hopefully that can become a definitive recipe, including Apple. I think the benefit for Apple in doing that is that, yes, they understand the user better, much, much better than any company out there. And they have infinite money, theoretically infinite money, and they have the very closed ecosystem. The way that they're rolling this out is that they have this SDK called App Intents, right? So

different companies or app developers need to choose to enroll or not enroll with that to have the new Siri control stuff. I guess my

relative advantage as a small group, as Rabbit, is that we move fast. We move fast and we keep growing. I think if we put all the cards on the table: we had a spectacular launch, we are the most sold dedicated hardware yet, and we have made a good profit. We fixed all the day one problems and the company actually quadrupled in size. So we're growing, we're moving fast. And now we drop this. I think,

like you said, I put a marker between today and yesterday. I think today I can say there are a lot of things you can do on R1 that you cannot do on iPhone, right? I believe eventually everyone will come to the same solution, and all the devices will do similar stuff.

But I firmly believe that at least for this remaining half a year, the Q4 of 2024 and probably Q1 2025, it is still a game of you have something that they don't have, versus you all have similar stuff, who's done it better?

So I think relatively, we have a good six to eight months' head start. Like we have our little room here. But obviously, I also believe when a big company wants to kill a startup, they have a million ways to kill you. That's just the reality. I think people keep talking to me and asking questions. What happens if the risk is too high? What happens if the company dies?

I really don't think all these questions matter, because we're on this course. We're going to see the end, whether it's a good end or a bad end. And I don't think any answer to these questions will change our course, to be honest. I can go here and tell you and be a crybaby, like, this is super hard. This is impossible. Everyone in the industry can kill us easily.

Or a YouTube reviewer can kill us by posting a video. It doesn't change the course, because we are doing things. We're launching, we're shipping things, right? We're moving forward. So it'll be interesting to see what Apple comes out with. I was on the Apple iPhone upgrade program, so I automatically get a new iPhone every year by paying the same monthly fee. But I really don't find any reason to upgrade it. People are talking about Rabbit being launched too early.

But now you have a company like Apple. If you go to, what is that called? Sunset Boulevard in Los Angeles, which is close to here, or I guess Mission Street in San Francisco. You go to any major city, you see these gigantic posters, billboards that Apple puts there, right? iPhone 16, iPhone 16 Pro. What's the other line underneath? It says Apple Intelligence. Is it ready? Is it out? No.

Yeah. Let me talk about growth for a second. You mentioned you quadrupled. I'm guessing you mean in employee size. Yeah. You told Fast Company last month the R1 is only being used daily by 5,000 people. Is that higher or lower than you expected? First of all, you saw that article from, I guess, Verge. No, it's Fast Company. That's what it says. I'm reading. I'm looking at it. No, but there's a Verge article that says the R1 only has 5,000 users daily, which is from...

Which is from Rick Burr. That's a quote from you. What I said there can be misinterpreted. What I said is that if you go look at the data doc right now, you probably will find 5,000 people using R1. At least 5,000 people. I'm just going to quote you Fast Company: Liu said right now around 5,000 people use the R1 daily. I said it can be misinterpreted. Okay. Yeah. First of all,

I think we saw very steady growth of all the people interacting with R1. And each time there are new features, there are going to be more people using it. I have some numbers that I want to throw at you, and maybe I can share very detailed usage sometime in the future. First of all, less than 5% of the people that have their R1 are not happy and return it. Less than 5% is a very good number. And again,

I think the top features that people are using are asking questions and vision and all that. And we really are hoping for people to discover more use cases, but unfortunately we have, like, four, seven apps on the connections, right? That's one of the bottlenecks. So if you check the total queries, in most of the cases you ask a question and you forget about it, right? So it's not about how many times you ask R1, it's about...

what kind of task you ask R1, and whether R1 is actually going to help you. So

I guess, yeah, very unfortunate. It seems that that's a misinterpretation. So what's the number? What's the daily active number? We'll issue the correction tomorrow. What is it? I will go back and get you a very accurate number. But I can tell you yesterday our server actually crashed. So I think... Is it double? Is it 10,000? Is it 25,000? Oh, yesterday our cloud cost actually... Actually, let me check right here.

Because I can check right here. This is why I like founders on the show. This is why I love having a founder on the show. Okay, so the past one day is 33.76 thousand. Okay. So almost 34K yesterday. 34,000 active users yesterday. Okay. What percentage of your sales is that? Yesterday? Yeah, 33,760 people yesterday.

What percentage of your total sales is that? I think we delivered more than 100,000 units, and that should be around 33%, 34%. Sure, that makes sense. And that, I'm assuming, is because yesterday was the launch of the LAM Playground, so this is a big spike. Yes. What were the days before that? So the past two days, 52.06. So if you minus 33, that's another...

20,000. Wait, I'm sorry. I don't think I followed. You said numbers, but I don't think I followed them. Past two days, say it again. So the past two days, 52.06 thousand. That's the total of two days. Correct. Okay. And one of those days is the LAM Playground launch, you're saying. Okay. Correct. So you're saying it's 5,000 active users at any time, not daily. Correct.

Okay. And then you're getting about 20,000 users daily, and then we'll see if that goes up because of the LAM Playground. Correct. Then there's an article by The Verge that just used that number, 5,000, which is wrong. I can tell you that's wrong. That's very wrong. That's misinformation right there. Well, you tell Fast Company, and then we will update it. But we ran off of your quote in the magazine, so we feel good about that. He wasn't there, and he or she...

That journalist wasn't there. And that's not what I said in the quote. Okay. We're going to have to take another short break for a minute here. We'll be right back.


That company is called Arm. Arm designs compute platforms for the biggest companies in the world so they can create silicon and solutions to power global technology. Arm is proudly NASDAQ-listed and became a NASDAQ 100 company within a year of its IPO. Arm touches nearly 100% of the globally connected population. 99% of smartphones are built on Arm. Major clouds run on Arm, as well as all major mobile and PC apps. Next up is Arm.

Support for the show comes from Toyota.

For many of us, driving is just what you need to do to get from point A to point B. But why not think of it as a reward instead? Make it an experience that captivates the senses by driving a Toyota Crown.

The Toyota Crown family comes with the quality and reliability that Toyota is known for, along with bold and elegant exterior styles. The Toyota Crown sedan has an available hybrid max powertrain with up to 340 horsepower and comes with an available bi-tone exterior finish to help you stand out on the road.

And the Toyota Signia gives you the space you'd expect from an SUV with a stylish design unlike any other. Whether you're a daily commuter or weekend road warrior, you can make any drive a thing of beauty with the Toyota Crown. You can learn more at toyota.com slash toyotacrownfamily. Toyota, let's go places.


It's about real people providing real defense. When threats arise or issues occur, their team of seasoned cyber experts is ready 24 hours a day, 365 days a year for support. They provide real-time protection for endpoints, identities, and employees, all from a single dashboard.

Their cutting-edge solutions are backed by experts who monitor, investigate, and respond to threats with unmatched precision. Now you can bring enterprise-level expertise without needing a massive IT department. Huntress can empower your business as they have done for over 125,000 other businesses. Let them handle the hackers so you can focus on what you do best. Visit huntress.com slash decoder to start a free trial or learn more.

Welcome back. So you heard all that back and forth about Rabbit's daily active users and CEO Jesse Liu saying he would get back to us with a better number. We asked the company to clear it up, and it turns out what Jesse actually said to Fast Company was that at any given time, Rabbit has 5,000 users. The Fast Company article has been corrected, we'll correct ours as well, and we'll use Jesse's number of between 20,000 and 34,000 daily active users, which is still substantially less than the 100,000 R1 units sold. That's where we left off.

Let's jump back in. Now that we have the number, we'll run it. But my question to you is, you've got to sell more R1s. You've got to get more people who've already bought them to continue using it. And, in fact, whether or not Apple Intelligence has arrived yet, it will arrive in some fashion in the coming weeks. There was a report just a week or so ago that Jony Ive is working with Sam Altman at OpenAI on a hardware device.

Something will happen with Humane. Something will happen with Google. Something will happen with Samsung. As that universe of competitors expands, it feels like the core technology you're betting on is being able to

automate a VM with a large action model, right? You're going to open up user sessions for people in the cloud, and then your LAM is going to go click around on the web for them. And that will get you out of the challenges of needing to strike API deals with various companies, or other kinds of deals, copyright deals, whatever you might need. Is that durable, right? The idea that this will keep Rabbit away from needing all of the deals the big companies will just go pay for and get.
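To make that mechanism concrete, here is a minimal sketch of what this kind of cloud-hosted, click-the-buttons web automation can look like, assuming a Playwright-driven headless browser; the music-service URL, the CSS selectors, and the saved-session file below are hypothetical placeholders for illustration, not Rabbit's actual implementation.

```python
# A minimal sketch of agent-style web automation, not Rabbit's actual code.
# Assumes Playwright is installed: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def play_song_for_user(song: str, storage_state_path: str) -> None:
    """Drive a logged-in browser session the way a human would:
    search for a song on a (hypothetical) web player and press play."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # Reuse the user's saved session cookies instead of storing a password.
        context = browser.new_context(storage_state=storage_state_path)
        page = context.new_page()
        page.goto("https://music.example.com")                # hypothetical service
        page.fill("input[data-testid='search-input']", song)  # hypothetical selector
        page.keyboard.press("Enter")
        page.wait_for_selector("button[data-testid='play-button']")
        page.click("button[data-testid='play-button']")       # the "click for you" step
        browser.close()

if __name__ == "__main__":
    # storage_state.json would hold cookies from a session the user logged into themselves.
    play_song_for_user("Bohemian Rhapsody", "storage_state.json")
```

The storage_state pattern here is one way an agent can act inside a session the user opened, rather than holding the user's password; whether a given service's terms permit that kind of automated use is exactly the question raised here.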

Because that's the thing that I think about the most. I can think of 10 companies that came up with a technical solution to a legal problem. And even if the technical solution was amazing, the legal problem eventually caught up with them. Yeah. We're confident.

that this technology route is the current route that will work. And I have yet to see another approach that actually makes a generic agent system work in any other manner. That doesn't mean that we're locked into one technical path. If you talk to any company, it's probably not a smart idea to say, hey, we'll just bet on this for the next 10 years.

The technology changes so fast, you have to adapt. But right now, I think we're off to a good start. We launched the concept with the Playground, free of charge, so that you can explore it and so that we understand how this system can be improved. In fact, I believe the speed can be improved very fast,

but we're not here to say, hey, we're stuck on this. We do have patents around this, but we're not saying, hey, we think this is the only correct path to go. I don't think anyone in the AI industry can give you a very definitive answer and be like, hey, if you just do this, here's the structure, this is going to guarantee you

the best result in the long run. I think that's not a good way to think of it. But yeah, I agree. Everyone in the industry is experimenting with something new, and a lot of companies that we saw are

going to, like you said, run into some sort of legal problem. You know, there are music generation platforms. I mean, this feels like the story of the AI industry, probably, right? There are questions like whether a YouTube video can be used for training by this or that company. You know, there are all sorts of things like this. Yeah. But I think it's not just the builders adapting. The industry is going to adapt to the builders too. At some point,

there's going to be a conclusion that, okay, this is a new policy, these are new kinds of terms that we need to follow. Are you building to that goal? I think, again, this is just the kind of big question I'm thinking about with all of these things. Basically every AI product is a technical solution that is ahead of wherever the legal system is or wherever the business deals are.

At some point, Spotify might show up on your doorstep and say, you know, we're not going to allow agents. It has to be a human user, and we're going to change our terms of service to say it has to be a human user. DoorDash might say it. Whoever might say it. Are you ready for that outcome? Do you have the budget socked away to go lawyer up and fight that fight? No. At the moment, we don't have the resources to fight that fight. And at the moment, that's not a real threat to us because, like they said, we're too small. Fair enough. When do you think the turn hits?

I don't think that it's a dead end for us, right? No, I'm just saying, when do you think the turn comes? When do you think that becomes a conversation about whether you can have agent users or human users? Yeah, that's exactly what I'm talking about. I don't think that they are unwilling to change their terms. And I think it's unlikely they're going to put in terms like it has to be a human user.

It cannot be. There are a lot of automation tools out there already, right? So, like, there's no turning back. I think what they would like to do in working with any company, including us, is that when they see popular demand for this new kind of agent technology, they want to charge for it. They'll ask our users, and us, to pay them.

And that's a business deal. That's more like money terms. That's what I can see. But yeah, for now, we're not breaking any of their terms and agreements, right? And if they change the terms and agreements tomorrow, we'll take a look and we'll see how we adapt. But the agents are out there already. There are a lot of agents running already. So I think there's no turning back, and it's very unlikely they'll say, hey, we're going to stop agents using our services. That's not a...

That's not going to happen. Let me end with two kind of big thinky questions, and I forgot to ask you the Decoder questions. Think on the longest timeline you can. Let's assume everything works out and it's all solved. How much time and money is it going to take before the general purpose agent you're trying to build is 100% reliable and can just do all the things we all imagine it being able to do? I might have a different opinion here. Okay. I think...

Foundation models like OpenAI's, obviously they're raising a crazy amount of money. I think we benefit from what they've been working on, right? Because their primary service is selling their models as APIs, which saves us a lot of money. We don't want to reinvent the wheel by retraining an LLM. It might not be as scary as a lot of people think.

I think there's a huge gap between converting the latest technology into a piece of product versus pushing for more advanced technology. Obviously, I would love to do high-end research. We want to have a research house here set up at the same scale as OpenAI and DeepMind, even though we're already far, far behind. But I think what we're trying to do right now at this current scale,

because here's the money we have, right? We don't have $1 billion. We don't have $2 billion. We have this very limited budget. It is: how can we convert the latest technology and research and build a product that we can ship early and collect feedback and learn from it? So a lot of people have different definitions of AGI. I don't really talk about this term because I think so many people have so many different definitions for it. But I do think that

AI understands what you say and can help you do things. And maybe, you know, here we're talking about virtually helping you click buttons and stuff. There are a lot of companies doing humanoid androids where they're actually giving a hand and a leg to the AI to do things. I think it is an effort for all of humanity, and a lot of the resources can be shared,

instead of each company having to go raise that amount of money and take that amount of time to achieve the same goal. So it's really hard to say, but we know we need more money and resources, that's for sure. But I think you've seen how efficiently this team has been performing. From seven people, to 17 people, till today, we raised obviously much less than Humane or any big company out there. I think it's actually one of our advantages,

that we can do things in a relatively cost-efficient way, and fast. Yeah. Timeline-wise, though, again, assuming everything goes your way, is it a year from now that you can build on all the foundation models and all the other investment, and this thing just sort of

does whatever I ask on the web? Is it five years? What do you think? I think the AI models will get very smart very fast. But I think we're talking about a generational shift. I think obviously we don't want a 2024 piece of technology operating on eBay's website, which was basically designed back in the 1990s, right? So I think a lot of the infra needs to be refreshed. And the biggest gap, as I can see here, is productionization. So

I think in our roadmap, we think that it's very likely that we can get all these separate pieces of technology we have, like the LAM Playground, teach mode, and RabbitOS, at some point, maybe next year, merged into a new RabbitOS 2.0. And that will actually push a huge step forward towards this generic goal. But my general take is that the AI model is smart enough, but the action part

is a lot of infrastructure. There's a huge gap between research and production. So that's what we learned. So I would say that I'm very optimistic on a three-year term, but I think, like I said, right now and the start of next year, everyone is trying different approaches and we'll see which one works. But

I think we're confident in the approach that we're taking right now. Yeah. And then I just want to end and ask about form factors. Obviously, the Rabbit is a very distinctive piece of hardware. People really like the design. We've seen just a lot of interesting glasses lately. The idea that we're all going to wear cameras on our face and someone's going to build the display. Do you think that's correct? I was wearing the Meta Ray-Bans yesterday. I was like, why would I wear these all the time? I'd rather have a thing. Yeah. I'm not against any form factors. In fact, I really think that

there will be a lot of form factors. But when we were trying to design the R1,

the reason is that we knew it wasn't going to be a smartphone, because we know people are going to do a lot of other things on a smartphone which the current AI cannot do. So we deliberately avoided the smartphone form factor. Talking about pins with lasers and glasses, I have different comments for each form factor, because there are no universal rules here. Because let's talk about pins, right? So I think my general pushback on making it a pin with a laser like Humane is,

first of all, I think it's really cool. But I think it's too risky. You are trying to offer a new way of utilizing your technology to have users use software, and that's already new to them. And you don't want to just introduce a sci-fi type of gear on top of that. So two new things stacked together, that's too risky. So if you look at R1, it's a very conservative

design. You know there's a button you're going to push. You know a wheel probably can scroll. There's a screen you can look at things on. So the R1 form factor is very conservative,

in the sense that it derives from the software. It's just like how people hadn't figured out how to interact in a virtual world, and all of a sudden, back in 2016, there were like 200 different companies making goggles, right? And they all failed. So I think I'm very, very conservative on the hardware form factor. Talking about glasses, that's a different story. I think your skull actually grew to fit the frame, not the other way around.

Because I used to wear prescription frames. I know the pain. Your skull is growing to fit a glasses frame, not the other way around. So I think there's really no generic fit for a glasses frame. I was having fun with my design team, joking, I'm like, maybe if we do the glass, we'll probably do the Dragon Ball style, like the power reader or whatever that is. The old Google Glass form factor? But I'm really like...

I can't wrap my head around having to put on, like, a frame, you know, that doesn't fit. Yeah, we'll see. I think even the current smartphone, I think, is perfect. I really like the state of the glass, or screen, form factor. But the real problem here is not about the form factor. The problem is about the apps, right? Because now we see all this agent technology, AI stuff.

They're doing things that apps are doing, and they're doing things that apps can't do. So I think the problem is with apps. I forgot to ask you the main fucking question. This is my fault. You've had a number of startups. You've done a number of things. You have a big idea here. How do you make decisions? What's your framework for making decisions? I'm a very intuitive person. I like to trust my intuition on big directions, like, you know, what's going to happen, you know,

in the long run. But meanwhile, I'm quite conservative in that I hate to predict things. So I think when people replay this episode, they will probably hear that I got really tricked by some of your questions. It's just that my brain couldn't work like that,

because I don't like to make predictions. Like, what happens if this happens, if that happens, what do you think? I think when I manage my team, I tell people we make decisions based on current facts and we find the best solutions. If you spend too much time, at least if I spend too much time, thinking about what if Apple knocks on your door, what are you going to do? And what if A happened, then B happened, then C happened, what are you going to do?

Most likely you're going to get a different strategy, right? Because if you think B is a solution to A, when A happens, you just do B, right?

But there are other types of people, who are like, hold on, have you ever thought of when A happens, then D happens, then E happens, then F happens, are you still going to do B? If you think in that way, probably not, right? So I just choose not to predict a lot of what-ifs, and I make short, clear, concise decisions based on current facts.

And in fact, if you do the recap of what we launched back at CES, it was probably the best timing. The price was probably just right. The color was probably just right, right? And the decision not to spend six months negotiating with T-Mobile was probably just right. You know, like...

I make decisions in the moment, and that's my style. And I talk to people; everyone talks to me. I told my team, you know, everyone in my team, they can find me anytime, talk to me anytime. I spend a lot of time talking to my people. And it's just, we are in general just a very real team, like, down to earth. And I really don't like some of the other

type of startup where they kind of spend too much time enjoying the feeling, if you understand what I'm indicating. There are a lot of people who say, oh, I'm a founder, I'm cool. You know, like, no, I've grown enough to get rid of that. I was probably that way when I was 21, 22, but now I'm 34. Startups are really tough. It's a war. It's about survival, right? It's really, really tough. And

it doesn't really matter what others want to do, whatever. You have to survive, and just surviving on your own is tough in any sense. So that's why, you know, a lot of people ask me, I get asked a lot, like, okay, what if they do this? What if they do that? Well, at the end of the day, there's nothing you can do. You have to do your thing and they will react to it. I think it's fair to say that with Rabbit, and other startups like us,

the biggest companies like Apple, they react to us. They react to us in a very hasty way, right? A very unusual way, where they have this new phone, but all of the things are still not there. Well, we're making a very small dent, right? But even that doesn't matter. I think for us, we care about our customers. One thing I want to say is that, yes, there is a lot of misinformation, there is hate, there is all that feedback, criticism. If you talk to R1 users, they're happy. That's what I care about.

That's what I care about. Otherwise, you know, there would be a lot of returns. There would be a lot of refunds. We have less than 5% returns. Like, put that number against any consumer electronics device in any consumer market, it's a good benchmark. And

we're going to keep releasing all this stuff. And in fact, we pushed 17 OTAs within five months. The other companies pushed, what, two, three, four, five OTAs. So I really hope people can see us as, you know, a bunch of underdogs. Yeah. Our solution isn't perfect, but it is David versus Goliath from day one.

That's the reality, and don't expect perfect stuff from us, because we are not perfect, right? We raised a very small amount of money and we're a small team, but we move fast. What we can guarantee is that when Rabbit shows you something, you probably can't even find it somewhere else. Just like the hardware, just like the Playground, or even the very janky day one version of LAM: we're the first company that has Apple Music streaming to our device.

Yeah. Does Apple? Because you're opening it on the web. Yeah. Yeah. I mean, I don't get legal documents at my door. Maybe I will get one, but maybe they think we're too small. But yeah, we do things in our way. I guess that's what I want to say. We're a really down-to-the-ground team. Like, that's my style. Jesse, thank you so much for coming on Decoder and being so game to answer these questions. I really appreciate it. Yeah. Thank you so much. My pleasure. Thank you.

I'd like to thank Rabbit CEO Jesse Liu for taking the time to speak with me, and thank you for listening to Decoder. I hope you enjoyed it. If you'd like to let us know what you thought about this episode, or really anything else at all, drop us a line. You can email us at decoder@theverge.com. We really do read all the emails. You can also hit me up directly on Threads. I'm @Reckless1280, and we have a TikTok. Check it out. It's @DecoderPod. It's a lot of fun. If you like Decoder, please share it with your friends and subscribe wherever you get podcasts.

If you really like the show, hit us with that five-star review. Decoder is a production of The Verge and part of the Vox Media Podcast Network. Our producers are Kate Cox and Nick Statt. Our editor is Kelly Wright. Our supervising producer is Liam James. The Decoder music is by Breakmaster Cylinder. We'll see you next time.

Support for Decoder comes from ServiceNow. AI is set to transform the way we do business, but it's early days, and many companies are still finding their footing when it comes to implementing AI. ServiceNow partnered with Oxford Economics to survey more than 4,000 global execs and tech leaders to assess where they are in the process. They found their average maturity score is only 44 out of 100.

But a few pacesetters came out on top, and the data shows they have some things in common. The most important one? Strategic leadership. They're operating with a clear AI vision that scales the entire organization, which is how ServiceNow transforms business with AI. Their platform has AI woven into every workflow with domain-specific models that are built with your company's unique use cases in mind.

your data, your needs. And most importantly, it's ready now and early customers are already seeing results. But you don't need to take our word for it. You can check out the research for yourself and learn why an end-to-end approach to AI is the best way to supercharge your company's productivity. Visit servicenow.com slash AI maturity index to learn more.

Do you want to be a more empowered citizen but don't know where to start? It's time to sharpen your civic vision and ignite the spark for a brighter future. I'm Mila Atmos, and on my weekly podcast, Future Hindsight, I bring you conversations to translate today's most urgent issues into clear, actionable ways to make impact. With so much at stake in our democracy, join us at futurehindsight.com or wherever you listen to podcasts.