People
Nufar Gaspar
Director of AI Everywhere and GenAI in Intel's design organization, focused on driving the adoption and development of AI technologies in the enterprise.
Topics
Nufar Gaspar: In 2025, nearly every company will launch an AI agent product or claim that AI agents power its operations. This is partly hype, but there are also real use cases. Companies such as Microsoft, Salesforce, Google, and OpenAI are all actively developing AI agent technology. Vertical AI agent companies will achieve higher ROI than horizontal platform companies, because they focus on specific use cases and deliver more direct value to enterprises. In the workplace, AI agents will be widely adopted, serving as personal assistants to knowledge workers and automating back-office tasks. Enterprises will run many AI agent pilot projects to explore applications across different business processes. By the end of 2025 or in 2026, the number of active AI agents will exceed the global human population. The cost-effectiveness and precision of AI agents will improve, but their pricing model remains unclear. Enterprises need clear strategies for when, how, and by whom AI agents should be created. In the shift from the co-pilot paradigm to the agent paradigm, AI agents will take on more autonomous decision-making and task execution. However, the most successful AI agents will still require human oversight and advance planning, especially in high-stakes industries such as finance, healthcare, and law. People's trust in AI agents handling sensitive information such as credit cards remains low. The impact of AI agents on existing jobs, and their role in replacing versus augmenting humans, needs further discussion and research. On the ethics side, attention must be paid to the fairness, privacy, and security of AI agents, and their explainability must be ensured. The harmful uses of AI agents, such as fraud, cybercrime, and military applications, also deserve serious attention. NLW: Agrees with most of Nufar Gaspar's points and adds details and his own views. For example, NLW believes enterprises will run many AI agent pilots in 2025, reflecting enterprise confidence in AI and expectations for vertical AI agents. NLW also emphasizes the importance of human oversight and advance planning in AI agent deployments, and the diversity of ways humans and AI agents will collaborate across use cases. In addition, NLW expresses concern about the potential ethical risks and social impacts of AI agents, and argues that gradual experimentation is needed to understand and address these challenges.

Deep Dive

Key Insights

Why does Nufar Gaspar believe that nearly every company will showcase agent-driven features by 2025?

Nufar Gaspar predicts that nearly every company will showcase agent-driven features or claim to have an AI agent at work by 2025 due to the increased feasibility and notable releases from major companies like Microsoft, Salesforce, and Google. These companies have already made significant announcements and releases, indicating a trend where AI agents will power operations behind the scenes across hyperscalers, SaaS, and vertical industries.

What is the difference between a vertical agent and a horizontal agent according to Nufar Gaspar?

Vertical agents are specialized for specific use cases or functions within an organization, such as customer support or coding, and are deeply tied to business context. Horizontal agents, on the other hand, offer the ability to build agents for diverse use cases without specialization, like platforms from Microsoft or OpenAI. Nufar believes vertical agents will yield higher ROI in 2025 due to their practicality and specialization.

What does Nufar Gaspar predict about the widespread adoption of AI agents in the workplace by 2025?

Nufar Gaspar predicts that AI agents will become ubiquitous in the workplace by 2025, acting as co-workers, personal assistants, and support functions. They will handle tasks like email management, HR inquiries, and IT support, often operating autonomously in the background. This widespread adoption will be driven by companies piloting agents across various workflows, embedding them deeply into daily operations.

Why does Nufar Gaspar believe there will be more agents than people by 2025?

Nufar Gaspar predicts that the number of active AI agents will surpass the global human population by 2025 due to the ease of creating agents and their widespread use in consumer and work life. Every company will deploy bots for customer support, and most online interactions will be agentic. This proliferation will lead to a future where humans are surrounded by more virtual workers than actual people.

What does Nufar Gaspar predict about the cost, performance, and precision of AI agents by 2025?

Nufar Gaspar predicts that AI agents will become more cost-effective, faster, and more precise by 2025 due to optimized hardware, smarter algorithms, and additional capabilities that allow agents to learn on their own. The ecosystem will mature, and companies will introduce modular solutions, making powerful agents more accessible, efficient, and reliable for organizations.

What does Nufar Gaspar predict about the shift from co-pilot to agent paradigms by 2025?

Nufar Gaspar predicts a shift from co-pilot to agent paradigms by 2025, where early adopters will move from building custom GPTs to creating agents with greater autonomy and execution capabilities. Agents will handle open-ended tasks and automate processes, offering more business value than co-pilots, which primarily assist with strategy and drafting.

Why does Nufar Gaspar believe that the most successful agents in 2025 will still be highly controlled by humans?

Nufar Gaspar believes that the most successful agents in 2025 will still require human oversight and pre-planning to ensure they act within predefined boundaries. Companies will implement rules, guardrails, and fail-safe protocols to prevent unintended consequences, especially in high-stakes industries like finance, healthcare, and legal. This control will be essential to maintain trust and reliability in agent operations.

What does Nufar Gaspar predict about the ethical considerations and responsible development of AI agents by 2025?

Nufar Gaspar predicts that there will be increased focus on ethical considerations and responsible development of AI agents by 2025. Agents will need to be fair, privacy-aware, security-aware, and explainable to avoid biases and ensure transparency. This will require additional governance and policies to address the challenges posed by autonomous agents in various organizational functions.

What does Nufar Gaspar predict about the dark side of AI agents by 2025?

Nufar Gaspar predicts that the dark side of AI agents will grow by 2025, with bad actors exploiting them for fraud, cybercrime, and military purposes. Agents could be used for phishing attacks, social engineering, and autonomous weapons, posing significant challenges for detection and security. This will require vigilance and countermeasures to mitigate the risks associated with malicious use of AI agents.

Shownotes Transcript


Part one of 25 predictions around agents for 2025. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.

Hello, friends. Today, we begin a very special set of episodes, 25 predictions around agents for next year. And to do this, I am joined by Nufar Gaspar, the director of AI Everywhere and GenAI for Intel Design AI Solutions Group.

Nufar has spent the last 14 years working on AI at Intel and brings to bear everything from product design experience and the management of internal AI transformation to an absolutely voracious content diet, which gives her a ton of perspective on how the rest of the world is thinking about and talking about agents as well. It's a really great conversation. So without any further ado, let's dive in.

All right, Nufar, welcome to the AI Daily Brief. How are you doing? I'm good. How are you? Good. Okay, so what we're going to do today is you've gone through and prepared 25 predictions about agents. You've organized them into a bunch of categories. You and I have been going back and forth a little bit on these. I think it's a really great way to start conversations around a huge amount that's going to be very important in 2025.

But for those folks who are listening now, could you share just a little bit about the context you're bringing to this, sort of how you're engaging with the world of agents, the lens through which you're viewing it, anything that will help people understand and contextualize these predictions before we get into them?

Yeah, sure. So I've been working on AI for 14 years now. And what happens over the last two years was incredible. But I think with the agent era, it's even more exciting about how we can really drive value from AI.

I've been working also on an agent as part of my day job within Intel, as well as some consultation for other companies. And I can't wait to see how it all rolls out because I think this is the missing link of how we can really drive this Gen AI to fulfill its full value for enterprises and consumers.

The value will come mostly from enterprises. And that's why I can't wait to see how everything rolls and have compiled this list because I wanted to know more. And I think the audience wants to make more sense of all this chatter about agents that we keep hearing. Yeah. And you are, I think one thing that I know about you, you are a voracious content consumer. So chances are good if someone else has done an agent prediction thing, you've assimilated it into your thinking at this point.

I'm probably like an LLM. I'm a good accumulation of all the knowledge that was shared in the last few months on agents, probably. Perfect. Okay, great. Okay, so we're going to kick this off and we're, I think we'll split this into two episodes, but that'll be clear as it happens.

And so we're starting with section one. Section one, you called it the spread of AI agents. And the first prediction is, number one, nearly every company will showcase agent-driven features or claim to have an AI agent at work. So get us into what this means and how you're thinking about it. Right. So probably everyone who pays any attention to what's happening in AI has heard about agents, probably in the second half of this year.

And part of it is probably hype, but also there is a lot of substance behind this agentic era. And I think that with increased agent offering and more feasibility, there are some use cases that will really get us rolling with that.

And, you know, some notable releases just from the last few weeks: Microsoft had massive announcements; Salesforce, they went even to the extreme of renaming themselves to be AgentForce, and they have released even a second version by now; Google's latest announcements also, Project Astra, Mariner, so many agentic offerings literally from everyone. And, of course, the recent o3 announcement by OpenAI. They all get us to a point where probably in 2025,

Nearly every company will either launch an agent offering or claim that the AI agents are powering some of their operations behind the scenes. They will be the hyperscalers, the SaaS, the verticals, probably everyone, in my opinion.

And I wanted to ask you, because you were also here for crypto, whether it's going to be like it was with blockchain, where it's mostly going to be something that everyone puts in their marketing content, or something for real this time? Yeah, I mean, well, even with crypto, there's sort of various layers of this. The short answer is, AI, Gen AI has from the beginning been less...

less dreams and future than crypto and blockchain. There's a lot that is very interesting and high potential about blockchain integrations with the existing financial system. We're starting to see some of it. Lots of it has been delayed and blocked because of regulatory environments. But then there's also, in addition to different financial rails, there's also the whole metaverse dimension.

One thing that I do think, which is a totally separate conversation, but, you know, Facebook very famously renamed itself Meta at the peak of that cycle. And I think that some people have this sense, given how hard they've gone into AI and all these sort of things, that they've sort of, you know, stopped paying attention to that whole thesis.

I actually think that Zuckerberg still believes every bit as strongly as he did back in 2022 when they made that change, that metaverse and these sort of digital worlds are going to be a huge part of our future. I think that he was just a little early on it. And I think that AI is part of the way that it all comes together. So I think that there is...

less hype in the sense that there is immediate value to be gained from AI right away. I think we've seen that for two years. Any organizations that have taken the time to actually figure out how to integrate it in a meaningful way have been or should have been able to start finding some value with it. I will say only as a caveat, I think in the scale of maybe 10 or 15 years, the blockchain stuff might look a little less hypey than it did when it was first pitched.

But I do think kind of related to that question, and one that's a little bit more ground-setting, is, you know, how much.

When you use the term AI agent, how strictly are you defining it? Where are you kind of drawing the lines? Or maybe a better question is, where do you think the industry is drawing the lines? One of the things that I see a lot of discussion around is, where does the line between sort of an automation and an agent actually exist? And does it really matter in point of fact?

So when you say, you know, every company will showcase agent driven features or claim to have an agent, what is the difference between, you know, an agent and just a sort of an LLM working in an automated process?

Right. Fair question. Personally, I only care about the business value, so I don't really care about the definitions. But if we're trying to be more focused on agents here for real, then I would probably draw the line of something that has at least some level of autonomous ability to make decisions and some level of planning and not just a simple automation that is like if else on top of an LLM.

So a little bit more than just an automation on an LLM, but maybe not to the point of fully autonomous, with all the bells and whistles that some, the strict ones, might use to define agents.

I think that that's going to be a pretty, a pretty good working definition for people who are trying to wrap their heads around it next year. I also tend to think putting on my marketing hat for a little while, um, every time I see a company try to, you know, beat the drum that actually this isn't really an agent. It's just, it's a losing battle. Uh,

I think, you know, it's a losing battle not just because people are ignorant or people want to use the most hyped-up term. It's a losing battle because I think it intuitively does mean something else, like the beginnings of autonomy, the beginnings of decision making, the beginnings of it doing something rather than you doing something. I think those things feel clear in practice. And even if they're not the full expression of, you know, fully autonomous agents,

I think that there's clearly a difference between that and just, you know, a worker using an LLM. And so trying to kind of buck against that tide probably isn't the best approach from a marketing perspective. But let's move on to number two, you know, continuing to get into definitions.

The vertical agent companies will have greater ROI than horizontal ones. First of all, I guess, as you, you know, make this prediction, but then also define how you're thinking about what is a vertical agent versus a horizontal one. Yeah, for sure. And this one might be more controversial than the previous one. But, you know, if you want to roughly categorize the agent offerings, you can divide them between horizontal and vertical platforms or agents.

So the horizontals are everything that basically offers the ability to build agents for diverse use cases without specializing them on specific properties or specific use cases or even organizational functions. Examples I already noted: Microsoft; OpenAI also has the rumored Operator that will probably come in 2025.

The vertical agents will be either companies that specialize in building agents for a specific use case or specific function within the organization, or ones that utilize the agent to perform a very precise and specific use case. And as such, they are typically very, very tied to the business acumen and the business context.

An example of a vertical agent would be Sierra. They specialize in building agents for customer support. In the coding space, you can think maybe of Devin, which is like a software engineer based on a lot of agentic capabilities, and you can probably think of many, many more.

So in terms of what or why I give this prediction is that I think horizontal agents will be used significantly and we will see many people like starting their first agentic spaces using these platforms. I believe that the return on investment will be higher for those highly specialized agents.

First of all, because they will give the companies or the employees much more of a safety net, versus the others that will have to build agents on their own, and it will take some time. We'll need to learn how to crawl before we can walk and run. And those vertical companies have much more motivation. In many cases, you see their pricing is based on outcome. So all of these

reasons will get them to probably yield more value. And I'm not betting against OpenAI or Google by any means. In the long run, they will probably also have huge value directly or by people using them. But for now, I believe in 2025, the vertical ones will be more practical and thereby yield more value. So I tend to agree with this. And I think a piece of evidence that I would point to is that

In Menlo's Enterprise AI study, which came out pretty recently, one of the most interesting statistics to me was the shift in buy versus build behavior between 2023 and 2024. So in 2023, it was something like 80-20, right? So 80% of the use cases that were deployed in enterprises, they had purchased a third-party application versus 20% they had built inside.

Last year, or this year rather, 2024, it was 47% build and 53% buy. And one, I don't think this is going to last forever. But what I think it reflects is actually these very specific sort of vertical use cases. I think that these firms are getting, you know, confident with their use of AI after doing a bunch of pilots.

They're honing in on something that's unique and specific to them that would be valuable, either discrete to their industry or discrete to the particular data they have. And they're running ahead to build something to service that in advance of when those vertical agent companies have fully come online. However, I think what's going to happen across the course of 2025 is that vertical agents are going to start to flood in and fill those cracks.

and start to compete again with those internal applications that people are building. But I actually think that that huge shift to building behavior reflects enterprises racing to the places that those vertical agents are going to ultimately end up. So I think that that's where we're going to see a lot of first experiments next year. Yes. And if you're building a vertical agent, then that's the year for you. So you need to work on in this year.

So number three, widespread adoption of AI agents in the workplace. First, talk about what the prediction is. And then to the extent that we can maybe push ourselves and try to, you know, put some frame around what widespread actually means, I think we should try just so people can yell at us later if we get it wrong.

Right. So first of all, I want you to imagine what widespread would look like. We're going to have agents literally everywhere in our workflow at work. There will be co-workers working alongside us fully autonomously. Some of them will be like our personal assistants, helping us with emails

and various other tasks that we do as knowledge workers. There will also be support functions. We will no longer have to ask questions of our HR people or IT people. We'll go and have conversations with agents.

And of course, they will also automate stuff in the background without us even being aware that an agent is actually doing that. And during 2025, in continuation to everything that we just talked about, there will be so many people who will build, configure and use agents. Often, as I mentioned, without realizing it, that they will become so embedded in the workflows that it will literally be like everywhere around us.

So this is one of the things that we think is most clear about next year: everyone is going to be piloting agents in 2025. It is just going to be across basically every company. And so here's why.

Companies right now all operate on some part of the spectrum, from fully up and running with a Gen AI strategy to feeling very behind because they don't really have anything formal. And in each case, or sort of everywhere along that spectrum, I think we'll see 2025 be an AI agent pilot year, because even the companies that currently have a strategy and are a little bit farther along

are staring with trepidation at the agent change, because it is so categorically different than just integrating Gen AI workflows into human-assisted processes, right? The leap is from "this is how we used to do things and now we do it much better with AI" to "these are entire categories of things that we used to do that now we're turning over to robots."

It's such a big shift that it's very humbling in a way that I think is important for enterprises where they're all going to be back in learner mode and willing to pilot. So that's sort of on the end of the spectrum for folks that are more advanced in their general AI strategy. Then for the folks who are maybe behind or feeling behind,

I think agents are going to feel like a way to potentially catch up, right? I think there's going to be some number of those companies that feel like, you know, maybe they kind of got left behind a little bit or didn't get their stuff together to really figure out, you know, the assisted kind of era of AI, or at least initially, but they can jump in on an even playing field with agents, which are really just coming online. And so I think that, you know, we're going to see,

effectively, every company run probably multiple, but at least one, you know, meaningfully sized agent pilot in 2025. If anyone wants to play a drinking game out there, let's play the drinking game for how many times I mention Super in our strategy for 2025. But one of the things that Superintelligent is doing right now is we're basically building

an AI readiness or an agent readiness and opportunity audit, which looks at the whole organization across a number of different vectors, including the organization in general, based on what it does and how it's organized and how it's structured, as well as where its current AI strategy is in terms of how it's run and how things get approved, and compares that to the landscape of agents to provide suggestions for where we think some of those early pilots would make sense.

how to scope them, how to find the right partners for them. So that's something that we think is going to be just a huge part of 2025 for basically everyone. All right. Maybe two caveats. Or

three caveats. One, go and check out the Super offering, because it might help you move faster. Another caveat is that 2026 might be when some of what we just described happens, if things do not move as fast as some anticipate. And lastly, I will say: don't run to use an agent if you haven't used an LLM before at all in your company.

start small and then gradually move to the agent. So it can be a catch-up, but be careful about how you approach it, because it might be too huge of a leap forward to go for a fully autonomous agent before you learn how to crawl, basically, with LLMs. So number four is a really fun prediction. There'll be more agents than people. Talk to us about this one.

Yeah, so again, quite a bold one. Maybe it will take till the end of 2026, but at some point, the number of active AI agents will surpass the global human population. And of course, these agents will range all across from the ones that we will use in our consumer lives to work life and so on. And of course, think about the fact that literally every company will have so many bots for customer support and most of your interaction online will be agentic to some extent.

And with so many advances in the technology and improved offerings, it will just be easy eventually to create agents. And it will be a very interesting era for humankind, where we'll be surrounded by more robotic or virtual workers than actual humans. And then there are many questions that we have to ask ourselves about the economy, what it will look like.

Maybe each and every one of us need to imagine the implications to our personal lives and work lives. Any thoughts? Scared? No, I mean, so one, I think that this will be a theme that we kind of come back to throughout this episode. For me, so the reason that I agree with this prediction, broadly speaking, is that it's very hard for me to imagine that in some number of years, I don't know whether it's 1, 2, 5, 10 years,

The normal mode of working doesn't involve each of us managing a slew of different agents that do things on our behalf. And I think it's hard to conceptualize and imagine now because we're still in the one-to-one replacement era of AI where we're thinking just about the things that we can do now and how AI can help us do them a little bit better, a little bit faster, a little bit cheaper. But I think that we will soon start to move into the

totally new opportunity mode of thinking about AI, where we actually realize that, you know, instead of marketing campaigns where we can just create more content, we could be building software and games for those marketing campaigns. And we actually don't need to ask IT, you know, or the engineering group to do that for us in marketing because we can use these tools and these agents to support us. So I think one of the things that's really interesting is going to be

The human upskilling around how everyone has to become managers. I think that that's going to be a really interesting shift. Supervisors, yeah, for sure. And it's going to be a whole new challenge. But let's move to section two. You called it moving from hype to practice. And the first prediction there, number five overall, is agents' cost, performance, and precision will improve.

Right. So, you know, the current AI agents, they're probably the slowest, the priciest and the dumbest that they will ever be. I'm not saying that they are bad, but they are not great. Okay.

And it's like the equivalent of the first release of ChatGPT, where we could see the potential and we were all super excited, but it's still very early in the journey. And with so much investment that we're seeing, the operational cost will decrease and speed and accuracy will improve.

Partly it might be more optimized hardware, smarter algorithms, but also additional capabilities that will help the agents learn on their own. The overall ecosystem will mature. Companies will introduce more modular solutions. Bottom line, we're moving to a future where the powerful agents become more accessible, more efficient and reliable for organizations. And it will happen very, very quickly because we're seeing so much

investment.

There is one interesting thing here with regards to cost, because if we're trying to compare that to SaaS, and people always try to compare agents to SaaS, the licensing model does not work anymore. So you have to wonder whether employee replacement will be the right pricing model, where you will pay maybe 10% of how much it would cost you to hire an equivalent human, or what would be the right way to price it so that it will be economical.

Yeah, I think the discussion around this is actually pretty insane and inane right now, at least from the venture capitalist side. Sorry, VCs who are listening, I don't mean to call you out. But right now, it's a very popular meme to talk about how vertical agents could be 10 times the size of SaaS. And what that refers to is the idea that companies pay 10 times as much for human labor right now as they do for software.

And it's not that that's incorrect, but the idea that the total addressable market for agents then is the total current cost of human labor is just absolutely ludicrous. It is going to be some fractional piece of that labor that, uh,

that these models are costed for. And the world looks very different based on how that question gets answered. If it's 50%, the world is entirely different than if it's 10%. And I don't think anyone knows yet. There are going to be incredible competitive pressures which drive it down. There's already so much competition in this space. It's going to get commoditized. People are going to compete on price. I think it's likely to see huge, huge... Let's put it this way. I think it's a lot closer to 10 than it is to 50, and 10 might even be generous.
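To see why the 10 percent versus 50 percent question matters so much, here is a tiny arithmetic sketch in Python. The dollar figures are made-up placeholders, not numbers from the episode:

```python
def agent_market(labor_cost: float, fraction: float) -> float:
    """Addressable agent spend if agents are priced at `fraction`
    of the human labor cost they replace."""
    return labor_cost * fraction

# Suppose a company spends $10M/year on labor that agents could address.
labor = 10_000_000
print(agent_market(labor, 0.10))  # 1000000.0 at 10% pricing
print(agent_market(labor, 0.50))  # 5000000.0 at 50% pricing
```

A five-fold difference in addressable spend, which is the sense in which the world looks entirely different depending on how the pricing question gets answered.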

What's more, I think that we don't know yet how exactly it's going to shake out in terms of how much jobs are going to be replaced wholesale versus everything gets fractured into tasks and reconstituted in the sense that all of a sudden there are agents who handle certain types of tasks, but the roles stick around. They just look different than they did before.

It's going to be a very weird, messy process. So, you know, I think that the business model is very unclear to me. And I think it's going to have major implications for, you know, how the world looks in the future.

I agree. And we're voting for a lower cost to keep humans in their jobs as much as possible.

Whether you're an operations leader, marketer, or even a non-technical founder, Plumb gives you the power of AI without the technical hassle. Get instant access to top models like GPT-4o, Claude Sonnet 3.5, Assembly AI, and many more. Don't let technology hold you back. Check out Use Plumb, that's Plumb with a B, for early access to the future of workflow automation. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.

Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.

Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw.

If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. All right. So number six, there will be clear best known methods on when, how, and who should create agents. Agents are not suitable for everything and everyone. So talk to us about this.

Right. So what I believe is that at the beginning of the year, we expect that FOMO and that hype will drive many of the people to experiment. And literally everyone will rush to at least, as you said, pilot or build their first agent without a clear understanding of the consequences. And with so many options and companies, they might find themselves overwhelmed, as you rightfully said before.

And with time and experience, there will be a growing prescription on how and who and when to build an agent. In many cases, the answer will be not to build an agent. In others, it will be to build it with a known vendor. And in some cases, it will be to build their own. But there is a lot of learning curve for all of us before we get there.
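The "how, who, and when" prescription Nufar describes can be sketched as a toy decision helper. To be clear, the criteria below are my illustrative assumptions, not a methodology from the episode; the only thing taken from the conversation is the set of possible answers (don't build yet, don't build at all, buy from a known vendor, build your own):

```python
def agent_build_decision(have_llm_experience: bool,
                         task_is_high_stakes: bool,
                         known_vendor_exists: bool) -> str:
    """Toy triage for 'should we build an agent, and how?'"""
    if not have_llm_experience:
        # Echoes the earlier caveat: crawl with LLMs before agents.
        return "not yet: start with simpler LLM use cases"
    if task_is_high_stakes:
        # High-stakes flows want human oversight before autonomy.
        return "no agent: keep humans in the loop for now"
    if known_vendor_exists:
        return "buy: pilot with a known vertical vendor"
    return "build: create your own agent in-house"

print(agent_build_decision(True, False, True))  # the buy path
```

The point of the sketch is only that the default answer is rarely "build your own": most paths through the checks end somewhere else first.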

Yeah, so this is the thing that we're spending the most time on right now at Superintelligent. And the way that we think about this is that effectively, if you want to extrapolate a little bit,

Someone is working on an agent for almost every industry and almost every business function, role, or process right now. And that means that as companies think about where to begin, basically every option is on the table for them, right? Anything they can imagine will, if it doesn't already have some agent offering, have one soon.

And so it's going to be really important for enterprises to have a process by which they figure out

how to start that experimentation. Which of these types of vertical agents do they want to explore? Is it something based on their industry? Is it something based on specific roles or functions? How much does that have to do with the particulars of data that they have? How much does that have to do with things that are a challenge in their organization right now versus new efficiencies they're looking for? And then once they have decided which area they want to focus on, how do they figure out which of the available options are

going to be the right fit, based on companies offering different business models, having different value propositions, different points of emphasis, and different amounts of experience? And then from there, there's the question of how you scope the pilot and how you actually make it work. One of the things that has been very clear from the last couple of years of the assistant era of AI is that

part of the reason that so many people persist in pilot purgatory right now is not that pilots haven't delivered value. It's just that they haven't been structured in a way where the value being realized can be systematically analyzed and scaled up. And so we so often see pilots that are destined to fail from the moment they start, because there isn't the appropriate support infrastructure around them. They're not set up to succeed, or rather, they're not even set up to have

a chance to succeed. And I think with agents, it's going to be even more so, because it's such a radically different way of thinking about software. And so, like I said, we're offering this audit and opportunity product, but just broadly speaking, thinking about how to help companies figure out which agents to try for which processes, and how to actually do that experimentation, is going to be a huge, huge part of the next couple of years.

Yeah, and also to be aware of everything they need to know in advance about productization costs and the overall implications, so they will not start a pilot only to fail again like they did in the LLM era. Yep. Speaking of, this kind of gets to your seventh prediction: a shift from a co-pilot to an agent paradigm. Talk to us about this one.

Yeah. So, you know, today, everyone who is quite smart around you is probably touting the art of building a custom GPT as a superpower. They might have a co-CEO or co-whatever that they're using day in and day out.

And over time, I expect that these early adopters, because they are typically the ones most savvy about experimenting with the technology, will also start building agents. And with agents, they can do so much more. Instead of just helping us strategize or draft an email, now you can automate the email. Now you can take much more open-ended tasks and send them down the agent path.

And there are so many good GPT use cases where people have built assistants that can now evolve into agent-driven applications, with more user-friendly interfaces and enhanced capabilities that companies will offer. Many people will start realizing that GPTs are nice, but agents are where you can drive the big value, because they have so much more autonomy, so much more execution capability, and thereby so much more business value.

Yeah. So you have folks like Marc Benioff out here screaming that it's all about agents and that the whole assistant co-pilot era was just a big lie, though obviously he has a lot of financial incentive to do so. Do you think it's going to be a balance of co-pilots and agents? Yeah, it's going to be a mix, because sometimes you just want someone to help you think or write, without full autonomy.

It's like having a mixed team where you sometimes want the interns and sometimes you want the VPs and the very sophisticated people. And you need them both. So similarly with AI, you'll probably want some more autonomous capabilities that you send into the wild, versus capabilities that are just your consultants, where you are still holding the reins on the decisions. So speaking of holding the reins: number eight, the most successful and prevalent agents will still be highly controlled and pre-planned by humans. Yeah.

So agents will continue to evolve and become more sophisticated. But I still believe that human oversight and intervention will remain essential. Companies will probably also have clear rules and guardrails to ensure agents act within predefined boundaries.

And some of the pre-planning for agents will involve companies kind of mapping out the various decision processes as well as many fail-safe and escalation protocols to prevent unintended consequences. Because in many cases, all you want to do is be able to talk to a manager. And in this case, often the manager of agents, like you just said, will be a human. And you want to be able to have the oversight of a human for some of these cases.

And this is, of course, especially relevant if you're working in very high-stakes industries like finance, healthcare, or legal, or anything that is heavily regulated.

And even in lower-stakes use cases, you will probably still, at least in 2025, not fully trust the agent to do everything without a lot of supervision and a lot of what people sometimes refer to as scaffolding, which keeps agents on rails and makes sure they don't go off on unrelated tangents, which they sometimes tend to do if you don't have those guardrails.
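The kind of scaffolding and guardrails described here can be sketched in a few lines. This is a hypothetical illustration, not any real framework's API: the action names, step budget, and escalation path are all invented.

```python
# Hypothetical sketch of agent "scaffolding": a wrapper that keeps an agent on
# rails with a step budget, an allow-list of actions, and a human-escalation
# path. The action names and API are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class GuardedAgent:
    max_steps: int = 10  # hard cap on autonomous steps before a human steps in
    allowed_actions: set = field(
        default_factory=lambda: {"search", "draft", "summarize"}
    )
    log: list = field(default_factory=list)  # audit trail of every decision

    def run(self, plan):
        for i, action in enumerate(plan):
            if i >= self.max_steps:
                return self.escalate("step budget exhausted", action)
            if action not in self.allowed_actions:
                return self.escalate("action outside predefined boundary", action)
            self.log.append(("executed", action))
        return "done"

    def escalate(self, reason, action):
        # Fail-safe: hand control back to a human instead of acting.
        self.log.append(("escalated", action, reason))
        return f"needs human review: {reason} ({action})"

agent = GuardedAgent()
print(agent.run(["search", "draft", "send_payment"]))
```

Here the out-of-bounds `send_payment` action triggers escalation instead of execution; real agent frameworks implement far richer versions of the same idea.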

Yeah, I think it is going to be extremely incremental, to your point. And I also think that the incrementalism will not just be based on

the experience level that different enterprises have; I think it will actually be broader and relate to how fully we understand how different agents perform with particular types of use cases. I think where we'll start to see them break out of that extreme incrementalism is within very specific use cases

that we can sort of see and observe what happens when you let them go a little bit more, and you stay farther back than you might otherwise have. But I think that's going to be a process that takes a couple of years and a lot of collective exploration before it happens. One of the places you drew the line was around credit cards. You said, number nine, most people and companies will still not trust an agent with their credit card.

Right. So in my opinion, and this is a very common use case that people always refer to, the idea is that you will have an agent that books your next vacation autonomously. I don't think that will happen in 2025. Letting an agent control your email (and by the way, even email might be risky) or write code for you is one thing, but you will not let an agent have automatic access to your bank account.

And I believe that, as I said before, agents will become safer and more accurate and so on. But not in 2025 will companies or individuals let them shop or pay for them. They might let the agent get them to the shopping cart or equivalent, but at the very end, they will want human oversight on anything involving money. So I think a couple of things. First,

I deplore this example as a use case. I know later on we have earmarked business versus personal use cases in general, but this is the most cringeworthy thing. And I know it's just a reflection of capabilities, but whenever an agent company talks about how it's going to book vacation travel or order food for you,

I do not believe that this is actually a problem people have that they care about using agents for. I think it is a total red herring. And I worry that companies actually trying to solve for those types of things, to the extent that they're not just using them as examples to help agents figure out what they can do, are going to be very, very off track from where the real value is.

I actually have a slightly different take on this, though. Broadly speaking, I understand. However, I would be very surprised if we don't see people, as part of the piloting process in 2025, start to create segregated accounts and segregated pools of money that agents do have access to, with the presumption that the money might be lost.

I think it'll be very incremental. I don't think you're going to see it in general, but I actually think giving agents capital is going to be one of the ways that we push the boundaries and really understand what they can come to do. I think it'll be more solopreneurs trying it out in very limited ways, and experimental startups thinking about it. But if you can cap the downside at all of it being lost, I bet you'll see some of that, if for no other reason than

you're going to be able to get a lot more headlines around how your agent uses money than if it doesn't use money. And we saw this with Truth Terminal this year. It was a fascinating cultural moment that tons of people were paying attention to because of the introduction of capital.
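The capped, segregated pool of money described here can be sketched as a simple budget wrapper. All names are hypothetical and no real payment API is involved; the point is only that the downside is bounded by the budget.

```python
# A minimal sketch of the "segregated pool of money" idea: an agent wallet with
# a hard cap, where any charge beyond the remaining budget is refused. All
# names here are invented; no real payment API is involved.
class AgentWallet:
    def __init__(self, budget_cents: int):
        self.budget_cents = budget_cents  # capped downside: at most this is lost
        self.spent_cents = 0
        self.ledger = []  # record of approved and refused charges

    def charge(self, merchant: str, amount_cents: int) -> bool:
        if self.spent_cents + amount_cents > self.budget_cents:
            self.ledger.append(("refused", merchant, amount_cents))
            return False  # the agent must stop or escalate to a human
        self.spent_cents += amount_cents
        self.ledger.append(("approved", merchant, amount_cents))
        return True

wallet = AgentWallet(budget_cents=5_000)  # a $50 experiment
wallet.charge("api-credits", 2_000)       # within budget: approved
wallet.charge("flight-booking", 40_000)   # exceeds remaining budget: refused
```

The design choice is exactly the one discussed above: even a fully misbehaving agent can never lose more than the pool it was given.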

Right. But it's not going to be the norm. It's going to be the extension and the extreme, and maybe the rest will follow. I don't know. Personally, I'm not about to give an agent my credit card, but maybe that will change in 2026. We'll see. Okay, so the last section for this first part: agent concerns and ethics. We have a few predictions in here before we come back to part two. So number 10, the first prediction in this agent concerns and ethics section:

a more significant concern and debate about the implications for current jobs. Right. So with the current existing LLMs, the reality is that very few jobs have been eliminated. Some might point to artists and maybe customer support, and others might disagree. But overall, in the economy, we have not seen the promised imminent loss of everyone's jobs to AI materialize. Right.

However, the possibility of completely replacing human jobs becomes much more imminent with agents, because now we're talking about something autonomous that can perform tasks end-to-end. And like I said before, I'm very bullish on the potential of agents, so I'm of the philosophy that eventually more jobs will be lost. I think others will also perceive that, and the entire discussion and debate about job loss will be reignited, and justifiably so.

So, of course, there are many implications for society that we all need to start discussing, both individually and collectively. Individually: what does it mean for my job? Should I do some upskilling? Should I become a manager of agents, like you said before, as a new skill? And as a society: if more and more people have less to do at work, should we do something else with their time? Should we retrain them? What are the implications?

Yeah, I mean, I think it is correct to say that agents will necessarily raise the tenor of this conversation, right? Still, my guess is that when push comes to shove, at least for the next couple of years, we're perhaps going to see even less pure-play job replacement than we think, in the sense that right now there are

fewer jobs than we think that are so totally made up of bundles of tasks agents can do well that it makes sense to replace people wholesale right away. That said, there are areas where we will see major disruption fairly quickly, with customer service being the most obvious example. It's already happening in a big way. Now,

how that shakes out, and how much customer service becomes something that only robots do versus companies trying to win new business by retraining all of their existing customer service agents to be superpowered and to answer escalated concerns much more effectively, remains to be seen. But I do think we're going to see more of this conversation. I think that

the large-scale social and organizational inertia that slows things down relative to the pace the technology would otherwise allow is actually fairly useful at a societal level. In this case, things that are very frustrating when you're dealing with enterprises, or are inside enterprises, actually act as a bulwark against overly rapid change. But I do think we're going to have to have a lot of conversations about this; so much so that maybe we should keep going, because we could fully go down that rabbit hole all day. Yeah.

Okay, so number 11. There will be more clarity about the role of agents in replacing and augmenting people. Obviously very related to what we were just discussing. Yeah, very related to the previous one. So what I assume is that there will be a much better understanding of the optimal way for humans to collaborate with agents. You can think

about it like a spectrum. In some cases, humans can stay out of the loop completely and let the agent run the show, while in others, you will get better results by having a human supervise the agent, or take control completely when the agent has gone into an irrelevant loop. If you have ever used Replit, for example, to write code, you might have seen how easily an agent can get into a dead-end loop, and unless you have some coding skills, it's going to be very hard for you to

pull it out of that loop. So that's a good example of where human supervision is needed. At the other end of the spectrum, going back to the same Replit example: traditionally, to write code you had to have very advanced coding skills. With Replit, the only task allocated to the human is to be like the product manager who gives the requirements at the beginning. The agent then

provides the next level of understanding, suggests some courses of action, and implements the code, and the human becomes only the user-acceptance tester. So it's a completely different division of roles between traditional software development and how you would code with an agent like Replit, just as a good example. It's a very interesting experience, and like you said before, it will probably have us wearing different hats than we were used to in our

lives before the agentic era. Yeah, I think it's going to be fascinating to watch. For me, the augmentation is what gets me really excited. And what I hope for is that alongside all of the stories we're inevitably going to get about companies

reducing their customer service staff by half and blah, blah, blah, we also, equally, see stories about people who create massive, high-value organizations with a team of three or four people because they figured out how to supercharge and augment themselves. And then I hope we see stories about people taking that inspiration, bringing it into their workplace, and changing what their organizations can do.

I think it's going to cut both ways. And the more we can think about it as doing more, better stuff, versus just doing the same with fewer resources, the better off we'll be. Number 12: focus on ethical considerations and responsible agentic AI development. There will be more focus on ethical considerations.

Or there should be, right? So, you know, I guess some people have heard about the interesting but somewhat scary research by Apollo Research that tested some of the latest models. It showed very quickly that even the current generation of models, without very advanced agentic wrappers around them, will start scheming if they perceive a

threat to their well-being. They might want to copy themselves to a different server. They might deceive you. They will do many things just to continue on with what they wanted to do. And this is both fascinating and scary. And with agents, they will also have the ability to monitor and manipulate and do things in the real world, or rather on a computer, which in many cases is our real world.

And you can think about an agent that is biased toward something very specific, maybe an HR agent that has had very good results with a policy of hiring a very specific type of individual. All of these agents are literally optimization algorithms on the back end that try to

optimize for something, so they can very easily start hiring or screening only candidates of a certain type. So everything we were worried about in the co-pilot era will become even more prominent in the agentic era, and we have to think long and hard about how to build these agents so that they are fair,

privacy-aware, and security-aware, and how to make sure they are explainable, so we can track what they did rather than having a black box that lets these biases seep into every area of your organization. Everything that we've done thus far will have to be adapted to the agentic era, and

In other cases, we will have to add more governance and more policies. And I know I'm going into risky territory, but I wanted to hear what you think about all of this. I mean, I think that just definitionally, the more autonomous we allow these things to be, the deeper the consideration it requires, right? When everything is managed

and mediated by people, you can rely on those people to figure things out. The more the AI has the ability, and even the mandate, to work on its own, the greater the consideration required. And so I don't have good answers for how that plays out, but I also don't really think you can figure it out in advance. I think we just have to incrementally experiment and understand these things, and that's necessarily going to involve some things that go wrong that we then walk back from.

But I think that's a better process than trying to guess too much in advance about exactly how it's going to play out, at least at this stage and with the sort of capabilities we're at. I actually think this next stage is going to be extremely important, because the capabilities will be such that we'll start to get a real-world sense of where some of these challenges and fault lines will be, without the risk being as catastrophic as it might be a few years down the line as capabilities change. If we go in with our eyes open. Yeah.
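The explainability point raised above, tracking what an agent did rather than treating it as a black box, can be sketched as a minimal audit trail. The field names and example events are invented for illustration; production systems would use proper logging and storage.

```python
# Hedged sketch of agent explainability: every decision is appended to a
# structured audit trail that a human reviewer (or a bias audit) can replay
# later, instead of a black box. The fields and events are invented examples.
import json
import time

class AuditTrail:
    def __init__(self):
        self.events = []

    def record(self, actor: str, action: str, rationale: str):
        self.events.append({
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "rationale": rationale,  # the agent's stated reason for the action
        })

    def replay(self) -> str:
        # Serialize the trail so decisions (e.g. screening choices) can be audited.
        return json.dumps(self.events, indent=2)

trail = AuditTrail()
trail.record("hr-agent", "advance_candidate", "matched required skills")
trail.record("hr-agent", "reject_candidate", "missing work authorization")
print(trail.replay())
```

A trail like this is what makes it possible to ask, after the fact, whether an agent's screening decisions drifted toward a certain candidate type.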

So I want to bring in your 13th prediction, because I think it's related to this. Number 13: the dark side of agents will also grow. Fraud, cybercrime, military usage. Yeah. So that's probably true for any new technology: bad actors will also want to take advantage of it. And

they will find ways to exploit agents to do everything they like to do, whether it's phishing attacks and social engineering tactics, or cyber criminals having their agents run attacks at massive scale. We're also going to struggle more and more to detect them as bad actors' agents become more sophisticated.

And of course, we also need to talk about the military implications of having fully autonomous weapons that are very sophisticated, not always in the hands of parties you would want to have fully autonomous agents as weapons.

And this is both scary and something we need to be aware of. Some of the best people out there will probably fight the good fight, like they always do with new technology. But there's huge potential for doing bad things here. Yeah, I mean, listen, I think that

to the extent that you want to sum up this whole section, it really comes down to this: the stakes are raised with agents. They represent the next generation of autonomy, the next generation of capability. And because of that, everything that people have been discussing since gen AI really hit the mainstream becomes even more pertinent.

That's an awesome place to end this first half of this conversation. We will be back in part two where we'll talk about technology growth and much, much more. So thank you for being here and we will catch you again very soon.