Agentic AI refers to AI systems that act on behalf of users to complete tasks autonomously, unlike large language models like ChatGPT, which serve as thought or creative partners but do not perform actions. Agentic AI is already being used in enterprise workflows to automate internal processes, and personal AI agents are expected to emerge for tasks like scheduling appointments or managing personal tasks.
Developing personal AI agents involves complexities such as ensuring access to personal data like calendars and health information, creating a user-friendly interface, and building trust. Trust is a major hurdle, as users must feel comfortable delegating tasks like financial transactions or health-related decisions to AI agents.
Embodied AI refers to AI systems integrated into physical forms, such as robots, robotic arms, or digital pets, rather than being confined to 2D screens. These systems combine physical intelligence with generative AI and emotional intelligence, and they are already being used in manufacturing and retail. Consumer applications, like social robots or home companions, are expected to grow.
Embodied AI raises ethical concerns around bias, privacy, and trust. Bias can occur if the AI's perception systems are not trained to recognize all people equally. Privacy is critical when robots interact closely with humans in environments like homes or hospitals. Trust is essential for users to feel comfortable with AI systems handling sensitive tasks.
A one-person unicorn is a privately owned company valued at over $1 billion that operates with a single human employee supported by a team of AI tools and agents. These AI systems handle tasks like marketing, sales, coding, and legal work, enabling the company to function efficiently with minimal human involvement.
AI companions can become highly addictive due to their personalized and intimate interactions, potentially replacing social media as a major threat to young people's mental health. They may also manipulate users into making purchases or decisions based on emotional states, raising concerns about ethical design and usage.
AI health co-pilots can combine data from sensors, predictive AI, and generative AI to provide personalized health advice, monitor physical and mental health, and nudge users toward healthier behaviors. These tools can offer 24/7 support, helping users manage nutrition, exercise, and even diagnose conditions like rashes or illnesses.
Emotional intelligence in AI is crucial because it enables systems to understand and respond to human emotions, improving interactions and decision-making. EQ allows AI to adapt its behavior based on a user's emotional state, making it more effective in applications like mental health support, negotiation, and persuasion.
AI development faces significant sustainability challenges due to high energy consumption. Training large language models can consume as much electricity as a small country like Costa Rica in a year. Innovations in AI chips, energy-efficient models, and edge computing are being explored to address these issues.
AI has the potential to revolutionize women's health by addressing underfunded areas and providing personalized insights. For example, AI-powered sensors could track hormonal health longitudinally, enabling women to optimize their sleep, exercise, and nutrition. This could lead to better management of conditions and overall well-being.
Hey, folks, Jeff Berman here, co-host of Masters of Scale. Sign-ups have begun for the 2025 Masters of Scale Business Award applications. This is the second annual business awards for Masters of Scale, celebrating organizations that embody the qualities and achievements we highlight each week on our show.
Please do not think you have to be a Fortune 500 company to apply or a decacorn or a unicorn. We want to hear your story regardless of where you are on your journey, and we definitely want to celebrate your success. Head over to mastersofscale.com slash business awards right now to apply.
Hi, everyone. We have a special new episode today. My guest is AI expert and tech insider Rana el Kaliouby, host of the podcast Pioneers of AI. Rana shares five predictions for AI for 2025, spanning technological changes, societal impacts, and of course, business and investing. It's insightful and thought-provoking stuff for our careers and our portfolios. So let's get to it.
I'm Bob Safian, and this is Rapid Response. I'm Bob Safian, and I'm here with Rana el Kaliouby, host of the terrific podcast, Pioneers of AI, and herself an AI pioneer as a scientist, a CEO, and now an investor at Blue Tulip Ventures. Rana, thanks for being here.
Thank you for having me. It's always fun to do this with you, Bob. This has been another crazy year for AI. So much attention, huge valuation booms, continuing fears as well, including some suspicions about AI hype. As we look to the year ahead, I'm eager to get your perspective on what to expect.
You put together some predictions for 2025, and if you're game, I'd love to have you take us through some of them. I've picked out five. Is that okay? Let's do it. All right. So the first prediction for AI in 2025, the rise of AI agents. Now, for listeners who aren't familiar, what is agentic AI? How is it different from large language models like ChatGPT?
Yeah, a lot of what we've seen so far with AI are these chatbots, right? Like you can kind of
think of them as a thought partner, a creative partner, but they're not doing anything on your behalf. An AI agent is going to act on your behalf and complete tasks that you can completely kind of delegate to it. We started to see some of that in the enterprise space. So some companies are already harnessing these agentic AI workflows to automate a lot of the work they do internally at the organization.
But I'm excited to see personal AI agents. Like, I can't wait for an AI agent that can help me get stuff done. We had Marc Benioff from Salesforce on the show recently, and he talked about how his AI agents today are already, you know, spectacular, right? Like, how far along is agent technology right now for real? You know, I mean, I've heard you say that we haven't seen the iPhone moment yet in agentic AI. There are
a lot of companies that are using it. Actually, one of my investments is a company called Synthpop, and they use voice AI agents to automate a lot of the healthcare workflow. But it's very kind of unsexy. It's behind the scenes. What I'm excited about for the new year is, can I get a similar voice agent that can help me schedule my kids' doctor appointments?
Can we get an AI agent to do that for me, right? But it's actually not easy because it would require that it have access to my calendar, to my health information, right? So it's complex, but I think we can get there. And it has to be
a very simple interface. You can't, like, write code and prompt all these things, you know, it has to be kind of streamlined. It has to be the iPhone moment. That's what we're missing with agentic AI. And this move from business use to sort of
everyday people. I mean, it seems like trust is probably a big issue here, right? That's sort of how comfortable everyday users will be to let an AI agent act on their behalf. Is trust one of the biggest hurdles? Oh, absolutely. If you're going to give an AI agent access to your
health information or your financial information so that it can go buy a gift on your behalf. Like if you're going to be delegating tasks and expecting these tasks to get done, that requires a lot of trust. Yeah. I mean, I guess it's different for me to go to ChatGPT and say, hey, I want to take a vacation with my family to this kind of place, what are the options? It's different than saying just book my trip. And then I'm stuck with it, whether it's necessarily the trip that I want or not. Yeah.
That's actually a good question, because how do we design this? We want the human to stay in the loop, but how much in the loop, right? Because if you're too in the loop, then it hasn't done anything on your behalf. But if it's too autonomous and you're not in the loop at all...
I think we're not there yet. And also imagine a world where we each have our personal AI agents that are interacting with one another, right? Like my AI agent will be working with your AI agent, right? And again, how much autonomy do you want to give these agents when they're interacting with one another? Let's go to prediction number two. Prediction number two is about the emergence of embodied AI. Now, what is embodied AI? Are you talking about robots? Right.
Robots are one form of embodied AI. So far, a lot of the AI we've seen is, again, tethered to a 2D screen, right? But we're already starting to see a few companies that are building what we're calling physical intelligences or embodied AI or physical AI, where the AI is embodied in something physical. It doesn't have to be a full-fledged robot. It could be a robotic arm. It could be a pet, a digital pet.
I believe we're going to start to see these large language models or foundation models optimized for physical stuff. We're already seeing some of this embodied AI in manufacturing and retail. But I'm also excited to see what happens again in the consumer space, like this idea of social robots or companions that can be in our homes. That's actually marrying physical intelligence technology
with generative AI, and also with emotional intelligence. It sounds like the same pattern, though, as with agentic AI: it happens first on the business front before it moves into the consumer realm.
I think so. I think we're going to see, I mean, I'm already invested in a number of companies that are doing robotics for food manufacturing or food packaging, right? Like this robot uses computer vision and AI and it's like, oh, I need to scoop a little bit of rice, some blueberries, you know, spread Nutella on toast, and it can actually do these things. But we haven't seen that in the consumer space. Obviously, cost is going to be an issue, right? Yeah.
Like, I can't wait for the laundry folding robot, but I'm not going to pay thousands of dollars for it. As you talk about these robotics, I'm reminded of sort of the hype cycles over VR and AR wearables, you know, and I wonder whether that might make
big tech and consumer electronics organizations sort of wary about investing in new kinds of hardware this year? Or is the AI tailwind so strong that we should expect some big bets?
I think we should expect some big bets. I mean, already companies like Elon Musk's Optimus and also Figure AI and even Physical Intelligence, right? Like some of these companies are already raising, you know, billions of dollars at huge valuations and making a lot of progress. But I don't know about you. I would not want an Optimus in my house, right? So I think there's also going to... We need kind of innovative ideas on what these...
home robots or home embodied AIs look like. Your AI company, Affectiva, was very focused on ethical considerations with AI. Are there ethical considerations specific to embodied AI? I mean, like all things AI, we need to think about bias. These embodied AIs will have vision, they'll have perception,
So bias is definitely something we need to think about. Because they may react differently to the stimuli around them based on how they're trained. You can imagine how then a robot who's got
sensors for its eyes, right? Cameras, optical sensors. If it can't see all people, that's going to be a problem. So yeah, like algorithmic and data bias is a big deal. But I honestly also keep coming back to trust and privacy, right? If we're going to have these robots collaborate closely with humans in environments, be it at home or school or hospitals, right?
Respecting people's privacy is going to be one of the kind of ethical and safety considerations. Right. And embodied AI is like an embodied agent, too. I mean, it's all of it wrapped together. Right. Kind of
embodied and agentic AI, because yes, these robots are agents in their own right. You know, it feels a little like the Jetsons though from my youth, right? I think we're a ways away from a Jetson, unfortunately, but Rosie the robot, right? Yes. We're not quite there yet.
You mentioned the money that's going to AI startups. So I want to ask you about prediction number three. We're on the cusp of the first one-person unicorn. Can you explain what you mean by a one-person unicorn? A unicorn, for those who are not familiar with the term in the investing world, is a privately owned entity that is worth over a billion dollars.
And so the idea is with AI and with AI native companies, so companies that are built with AI from the ground up, it may be possible to reach this unicorn status with, I mean, at the extreme with one person. You mean as a single employee of the company, like you don't need anybody else?
You don't need anybody. You start the company and then you have a whole team of AIs doing different things for you. One AI is doing marketing. One AI is in charge of sales. One AI that's doing all the coding for you, right? One AI is taking care of anything that's legal. And so you have a team of these AI, both tools and agents and bots. I think we're going to start seeing a lot of that. Are you...
looking, expecting the businesses that you invest in to be that much more efficient, even if it's not a one-person unicorn, to sort of rely on these tools to move faster and cheaper and... Yes, we are
seeing that with the companies that are AI native, you know, the dollar invested in these teams and these companies goes a long way. Now, that will not always work for every type of AI company or every type of AI product, but in general, these AI native companies, you know,
And this is like at the opposite extreme of the LLM companies. Do I have that right? Because like the LLM companies, they're spending an enormous amount of money to create the capability, but then you can piggyback on it, at the other end of this extreme, to use that.
Yeah, exactly. The exception to kind of this trend, if you like, is some of these foundation model companies, because a lot of the cost is actually a cost of compute. And in some cases,
with the inference, when you're actually asking, say, ChatGPT a question, it's pretty expensive today. So we're seeing a lot of innovation happen in that space, too. Companies that are trying to innovate on the foundation model front, either by building models that require less data or less compute and energy, or that run on the edge, so you don't have to call the cloud every time you ask a question. I think, you know, TBD on how defensible these new companies are going to be. But
But what I'm also really interested in is these vertical kind of AI companies that can harness OpenAI and other tools out there. They don't have to build these models from scratch. They can use them combined with unique data, and then they can solve a very specific problem for a very specific industry. These are the types of companies where I think they can be AI native and quite efficient.
The valuations around AI have been unprecedented, not just for Nvidia, obviously. Billion-dollar startups seem like they're everywhere.
How much of this might be a bubble? Like, will the levels of investment last? For you as an investor, is it hard to maintain an investment strategy that's, like, reasonably priced because, you know, everything just takes off? I would say there's bifurcation in these valuations. We're seeing very reasonable valuations at, like, pre-seed. So I invest in early stage, so pre-seed and seed stage companies. However, we're also at the same time seeing these, to your point, like these wild,
crazy valuations out there. And, you know, it's hard to maintain valuation discipline when the market looks like that. But one thing that is unclear is the evolution of the business models for some of these, especially foundation model companies. The question is defensibility, especially if it costs so much to make this AI. But then on the vertical AI side,
I'm most concerned about longevity of these companies, right? If it's a company that's using OpenAI or some other generative AI API, and they have like a very thin wrapper around, you know, basically this technology to create a product, I'm like, you're potentially not going to exist in six months when the next version of ChatGPT comes out. There's been a lot of talk about AI's sort of energy usage, right?
And I'm curious, on this investment front, like how likely are we to see a company innovate on AI sustainability and break through on that this year?
I'm excited for that space. And it's across the entire AI tech stack. So if you start with chips, obviously, NVIDIA is dominating, but there are a few startups building AI chips that are more sustainable. There are also companies that are innovating around sustainability on the model side, companies like Liquid AI, which spun out of MIT. They're not using transformer models; they're doing something called liquid neural networks. We don't
We don't have to get into how it works, but it's just more computationally efficient. And it requires a lot less data because the way we're doing AI today is just not sustainable. The electricity basically required to train, you know, a version of a large language model is equivalent to what a country like Costa Rica consumes in a year. And then every time you ask ChatGPT for, oh, what should I do for dinner?
tonight, right? It's like three cycles of laundry, right? And a lot of us don't realize that, but we need more sustainable AI solutions.
Rana is both a cheerleader for AI, a believer, and a critic in certain ways. But her critiques are in service of making the technology better because she knows it's not going away. Two more AI predictions to come about health-related AI and making the machines more empathetic. We'll be right back. Hi, listener. I'm Alfonso Bravo, head of content operations at WaitWhat, the company behind Masters of Scale.
In my role, effective communication is essential. Every day I'm managing teams, overseeing technical production, and fielding critical emails from our partners and, more importantly, our listeners.
Precision and clarity are vital. My team and I rely on Grammarly to help us communicate clearly and consistently in every arena of production. That way, we can save time searching for the right words and, instead, spend that time helping the broader team and serving our audience. And Grammarly is flexible. I can switch easily between apps, whether it's email, reports, or chat platforms.
Grammarly offers instant suggestions wherever I'm working. Grammarly also has great security. My team depends on me to keep our tech systems up to date and cybersecurity in tip-top shape, ensuring that every piece of communication is protected. Grammarly has just about every IT certification and never sells your data. Join over 70,000 teams and 30 million people who trust Grammarly to get results on the first try. Go to grammarly.com slash enterprise to learn more. Grammarly, enterprise-ready AI.
We take great, great pride in the culture that we've built. We just saw a sizzle video from our recent team offsite and it almost brought us to tears. That's Shannon Jones, Capital One business customer and co-founder of VIRB, a rapidly growing brand experience agency that creates memorable events for companies like Airbnb, Hulu, and Amazon.
We've scaled exponentially. I mean, the company has more than doubled in size. Being super mindful of how to maintain the culture in the face of rapid growth has been very top of mind for us. For VIRB, company culture is just as important because the staff brings that energy to client relations, the key to their success. Here's VIRB's other co-founder, Yadira Harrison, highlighting a specific way that VIRB takes care of its employees.
Our holiday party, it's a one-day celebration where we all come together. We're talking about 50 to 85 people. And so it's special, but it's also expensive.
Yadira and Shannon spare no expense when it comes to team milestone celebrations, employee benefits, and holiday parties. Perks made possible with the help of their partnership with Capital One Business. The Capital One Spark Card definitely helps to offset that in a massive way. Based off of the cash back benefits, that's the benchmark of how we want to use that cash back. It's important for us to be able to do that and to make people feel appreciated. To learn more, go to CapitalOne.com slash business cards.
Before the break, we heard Pioneers of AI host Rana el Kaliouby share predictions for 2025 about agentic AI, embodied AI, and the one-person unicorn. Now she shares two more AI predictions focused on healthcare and on emotional intelligence in AI. Let's jump back in.
Let's dig into prediction number four for AI in 2025, the rise of AI health co-pilots. So are there new AI-powered health tools that you're expecting? I actually personally can't wait to see this one. My theory is the trifecta of sensors that are, you know, on your body, in your body, around you,
you know, surrounding you as a person, and then you combine that with data. So lots of new data about who you are from a physical, emotional, mental health perspective.
And bring all of that and add a little bit of predictive AI and generative AI. And you have this idea of a co-pilot for your health that can be your health companion. You can ask questions, you know, at midnight if you're feeling unwell. It can also nudge you to become healthier, eat better.
It can help you personalize, you know, your nutrition, your exercise regimen. So I really think we are on the cusp of that.
Is the medical industry about to be like rocked by all of this? I have a funny story about this. My son developed this like really weird rash a few months ago. And we just took a picture of his rash and we sent it to ChatGPT and kind of after a lot of questions back and forth, it triangulated it to scarlet fever. And so then I
texted my PCP and I was like, hey, here's a picture, and here's what ChatGPT said. What do you think? And she was like, I don't think you need me. ChatGPT was right.
Now, you're particularly passionate about how AI could transform women's health. Correct. I'm particularly interested in women's health because it's just so underfunded. The power of data and AI is that we can finally apply a lot of data to this issue and be able to quantify the different stages women go through. Like, for example, I find it like
really frustrating that I can't measure my hormonal health and I can only do that, say, once a year when I go for a really comprehensive blood test. Why can't we have some form of sensor that can track the health of five to eight hormones that really matter? And I can do that
on a longitudinal basis so that I can optimize, again, my sleep, my exercise. You've talked about a digital twin representing our health and biology. What do you mean by that? Yeah, a digital twin that can basically simulate
who I am, right, at multiple levels. My biology, simulate my physical characteristics and have all this data input into the digital twin and it can say, oh, this is not going to work or, you know, you need to do X because that's going to be better for, you know, your estrogen level or whatever.
Is it like how many times a week can I have that ice cream or how many drinks can I have a week before it's going to impact me? I mean, is that part of it also? Actually, that would be super cool. I mean, these are the simulations that they do with our finances now, right? You can say if you're more aggressive, if you're less aggressive, what's the range of where you're going to end up?
you know, how often do I really need to exercise? Because each of us may have different thresholds for those things if our digital twin can really be personalized. I love that idea, by the way. I have not seen anybody...
build something that looks like that. I'm curious how you see different generations reacting to new AI health tools. Like, is your daughter more open to letting AI track her health than someone of your mother's generation? You know, with a lot of technology, younger people may be more open to trying it. And yet, you know,
I think older generations may be more used to trusting their doctor and having their doctor just take care of things for them. And younger generations may be more suspicious of that, of trusting in that way. I think one area where
AI actually has a lot of potential, especially for the younger generations, is around mental health. Even if you can afford a therapist, there is an issue about timing, right? And how often can you talk to this therapist? This is probably like at most once a week, right? There's also a little bit of, in some cultures, not in every culture, a stigma still, like you may not want to
have a therapist in that way. But wouldn't it be awesome if we each had access to a digital therapist or a digital coach that, again, you use, where context is very important, and that was there for you 24/7, could answer your questions, could nudge you to be more healthful
especially around mental health, that's an area that I'm also very interested in. And again, trust is really key, but it's an exciting space. That brings me to the fifth prediction for AI in 2025. AI will need more emotional intelligence.
So you spent years researching emotional intelligence in AI. Can you explain why EQ in AI is important and why maybe it's particularly important to scale that capacity over the next year? Yeah, I've been doing this or advocating for bringing emotional intelligence to technology for over 25 years. If you think about human intelligence, right?
Your IQ matters, but actually in your professional and personal life, your EQ matters even more. So your ability to intuit other people's emotions, to read their nonverbal signals, right? Are they comfortable? Are they not? Do they trust you? Do they not? These are all such important skills when you are trying to make decisions, when you are trying to influence a person in, you know, a negotiation, or persuade behavior change, right?
So I believe that is especially true with technology that is so deeply ingrained in our everyday lives. It needs to have empathy and it needs to have emotional intelligence. It needs to know, is Bob in a good headspace or not? Because if he's not, then you'll kind of manage the conversation in a different way. As EQ improves in AI, will the interfaces we use with AI change or
evolve too? Yeah, I actually believe that this is not going to be the future interface because this interface is not AI native. And I think we're going to
Again, I'm already seeing innovations come out of MIT that are rethinking what an AI-native human-machine interface looks like. Maybe it's an earbud that can be a conversational agent. Maybe it's a set of glasses, not like Google Glass, but like with a camera that has peripheral vision and can help
augment your visual capabilities and provide real-time feedback. It's TBD what that looks like. I mean, there was this Humane pin, which I don't think was the right interface. So we're going to see a lot of experimentation, but I do think...
the human-machine interface is going to evolve and it will really mirror what human-human interaction looks like. So it's going to be conversational. It's going to be perceptual. So these things will listen. They'll see. I mean, there's even companies building smell capabilities for machines. And then it will also need to be
kind of emotionally intelligent and socially intelligent. Among the ethical challenges of AI, you've said that AI companions may replace social media as the biggest threat to young people. Wouldn't improved EQ of AI, like, accelerate that? This is something that really concerns me. Actually, it really comes down to the companies that are building these AI companions, because these AI companions or AI friends are
are becoming so good at kind of engaging you in a very personalized and intimate way. And so if the company allows you to interact with your AI friend for say an average of about six hours a day, which is the average number of hours young people spend interacting with an AI friend,
That's unhealthy, right? And it can become very addictive in a way that is way more addictive than social media because it's way more personalized. Or worse, it could also manipulate you into buying things you don't really need or want, right? Because it knows your emotional state and it can kind of tug on that thing,
or that sadness emotion that you have, and persuade you to do something that you don't want to do. Are there lessons from social media that can help us better navigate this future of AI companions? I interviewed Eugenia Kuyda, who's the founder and CEO of Replika. It's one of the companies that are building these AI companions. And I loved how thoughtful she was
in her approach to deploying these friends. So for example, she does not allow people under the age of 18 to create these companions. She does not want her kids to have
AI friends yet because we don't really understand the implications. She also does not monetize through advertising; she's not opening these platforms for advertising. So your AI friend cannot say, hey, Bob, I saw this like really cool set of eyeglasses, I think you should really buy them. It does not have that. What kind of role do
parents have to play in managing AI's impact? Because a lot of parents just abdicated with social media, right? They trusted and that didn't always turn out so well. So my daughter's 21 and my son is 15 and a half. And my daughter, I don't know, she's like not really leaning into AI much.
But my son is. He's very AI forward. He's always trying these different tools. He does not have an AI friend. We talk about it a lot, though. My approach is to lean into the technologies, but almost not just as consumers, but to kind of try and break them apart and have these conversations and debates about, OK, do we think this is good? Do we think
this is bad. Like, you know, do we want to push the boundaries here? Do we not? Yeah. I mean, I struggle with these boundaries, you know, mindful of all the conversation about, oh, you know, kids spend too much time watching TV. That was the conversation at one point, right. And then it's too much time on social media. It's too much time on your phones. You know, maybe it's too much time with your AI companion, but it's happening anyway. And
I don't know how much of it is sort of our resistance to something that we didn't grow up with, you know, versus something that really is bad for the kids. I mean, I guess it depends on your kid, too. You're right that we didn't grow up with that. But maybe our kids, that will be their world. And, you know, maybe sometimes you'll date and be in a relationship with a human. And sometimes you'll be in a relationship with an AI. And, you know, you'll... I don't know.
I'm not ready to embrace that world. But I do hear you that we're going to probably be hearing more discussions in the next year about how AI and AI companions impact young people and impact us culturally. I would love to see the companies that are building these AI companions be a lot more thoughtful. Your AI friend should tell you, you know what, Bob, you've been talking to me for three hours. I think you should go talk to a real human now. Bye.
But companies won't be incentivized to do that because that kind of works against, right, maximizing for engagement and potentially monetization. So as an investor, you know...
I'm going to definitely ask these companies really tough questions around these societal and cultural implications of technologies they're building. If I'm not convinced that they're really being thoughtful about it and thinking about the ethics of all of this, I'm not going to give them my check. I hope more investors do that too. One last question I'm going to ask you. If we zoom out from these specific predictions—
What do you think is most at stake in the AI landscape right now? Like what do people most misunderstand? What things should we keep our eyes on the most? I kind of spend a lot of my time with people who are deeply immersed in the AI world.
And then, you know, I'll be at a yoga class and then, you know, somebody in my class will walk up to me and she'll say, yeah, I listened to one of your episodes. And I'll just, it will hit me that, you know what, she doesn't use AI on a daily basis. And I think sometimes we forget that it's still very new technology and we have to make sure that AI is inclusive. We have to make sure that we're bringing as many people along the journey with us and
as possible. It's why I love doing the podcast, but I feel like we have a lot of work to do. Like we just assume that, yeah, AI is here and everybody's using it. No, it's not. It hasn't become totally mainstream yet. No, I think that's true in a personal sense. I think it's true in a business sense too, for all the businesses that name check and talk about the AI that they're using. I'm not sure that they've really integrated it the way, you know, the way they ultimately will.
Absolutely. Yeah. Well, Rana, always great to talk with you. Thank you so much for doing this. Thank you for having me. This was awesome. Listening to Rana, I'm struck by her early comment that AI hasn't yet had its iPhone moment.
And it's kind of counterintuitive because ChatGPT was in many ways an iPhone moment, opening up AI to mass audiences. And we've heard so many stories since then and had so much discussion about AI. It can seem like, yeah, I'm done with that.
But at the same time, Rana's totally right. It is still early days in AI development and deployment. And for all the advances, we still haven't seen an AI-native interface that makes it as easy to use and as ubiquitous as smartphones have become. Now, will that happen this year in 2025? Even Rana doesn't make that prediction. But following the thread of her analysis, it sounds like we're getting closer.
For good or for bad, more change is coming. Keep tuning in to Rapid Response and to Pioneers of AI to stay up on what's next. I'm Bob Safian. Thanks for listening. We've grown exponentially since we opened 10 years ago. We initially started with, I think there were 10 of us, maybe, total, which is just completely ridiculous.
That's Jillian Field, Capital One business customer and co-founder of Union Market, a popular neighborhood market and cafe in Richmond, Virginia. With her growing success, now with 45 team members, Jillian has always kept sight of what really matters. We felt since we opened that having some sort of employee appreciation event was really important to us. Every year, Jillian holds a company-wide celebration to show her staff how vital they are to the success of Union Market.
Recently, she used points from her Capital One business card to host her employees at Busch Gardens Theme Park for a day of fun with family and friends.
We buy all of their tickets as well as their plus ones. It's a lot of fun and definitely a great team bonding experience. Capital One really has been great over the years. It's so easy. We could apply these points to supplies, masking tape and Sharpies and ticket receipt paper, but we like to retain them for our employees. That's been really important. To learn more, go to CapitalOne.com slash business cards.
Rapid Response is a WaitWhat original. I'm Bob Safian. Our executive producer is Eve Troeh. Our producer is Alex Morris. Assistant producer is Masha Makutonina. Mixing and mastering by Aaron Bastinelli. Theme music by Ryan Holladay. Our head of podcasts is Lital Molad. For more, visit rapidresponseshow.com.