People
CMO of Finastra
Podcast host
Podcast host, focused on English learning and discussions of financial topics; has organized an English learning camp and explored the relationship between Bitcoin and the US dollar in depth.
Article author
Topics
Article author: This piece examines AI's impact on employment, noting that companies are beginning to discuss AI's potential to replace human labor more directly, and analyzes that impact from two angles: cost cutting and innovation. Companies treat AI as an efficiency technology for lowering costs, but they also need to pay attention to the innovation and expanded capabilities AI brings, ultimately achieving growth, competitiveness, and prosperity.
Jared Spataro: Enterprise executives are focused on how AI reduces budget spend and want to see real cost savings from it.
CMO of Finastra: AI can help companies do marketing work at lower cost, for example by replacing external marketing agencies and improving efficiency.
Podcast host: AI's impact on jobs will play out in three phases. Phase one is cost reduction, where companies treat AI as an efficiency technology. Phase two is innovation and expanded capability, where companies treat AI as an opportunity-creating technology. Phase three is growth, competition, and flourishing, where the companies that fully leverage AI to create opportunities will see outsized success. Business leaders need to lay out a vision that convinces employees and investors that AI can create opportunities, not just cut costs, in order to succeed in the AI era.

Deep Dive

Key Insights

What is the primary focus of the initial phase of AI adoption in enterprises?

The initial phase of AI adoption in enterprises will focus on cost reduction, where companies aim to do the same tasks more cheaply to improve efficiency.

How does the speaker differentiate between task replacement and role replacement in AI adoption?

Task replacement involves replacing specific tasks within a job with AI, while role replacement implies replacing entire job roles. The speaker believes that most jobs will see a redefinition of roles rather than complete replacement.

What are the two main ways companies can improve their performance according to the speaker?

Companies can improve their performance by either growing and innovating to create new lines of business or by doing their current operations more cheaply through efficiency gains.

What is the potential impact of AI on marketing costs according to the case study of Finastra?

Finastra used AI to generate ad copy from transcribed interviews, saving roughly $60,000 it would otherwise have spent on an external agency and several months of work.

What is the speaker's view on the long-term impact of AI on jobs?

The speaker believes that while some jobs may be redefined or replaced, AI will ultimately lead to job creation and innovation, especially in roles that focus on leveraging AI for new opportunities.

What role does leadership play in AI adoption according to the speaker?

Leadership is crucial in setting a vision for how AI can be used to innovate and position the company for the future, rather than just focusing on cost reduction.

How does Klarna use AI to reduce costs?

Klarna uses AI to power its customer service chatbot, which can handle the work of 700 humans, and has also reduced marketing costs by cutting out external firms.

What is the speaker's prediction for the transition from phase one to phase two of AI adoption?

The speaker predicts that companies will start to innovate and expand their capabilities during phase two, moving beyond just cost reduction to create new products and services.

What is the significance of Meta's Llama 3.3 model in terms of cost efficiency?

Meta's Llama 3.3 model is roughly 25 times cheaper to run than OpenAI's GPT-4o, making it more accessible to the open-source community and reducing the cost of running large models.

What is the potential market impact of companies fully embracing AI as an opportunity creation technology?

Companies that fully embrace AI as an opportunity creation technology will likely outperform their competitors, leading to significant market share gains and long-term success.

Shownotes Transcript


Today on the AI Daily Brief, how AI job replacement will actually play out. Before that in the headlines, Sora appears on its way. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes. By the time you're watching or listening to this, it could already be out of date.

It's day three of the 12 days of OpenAI or Shipmas, and it looks likely that Sora is coming today. Now, over the weekend, we started getting some preview videos. Ruud van der Linden, the co-founder and CEO of LaunchVideo, tweeted, OpenAI's Chad Nelson showed this at the C21 Media keynote in London, and he said that we'll see it very soon as Sam Altman has foreshadowed.

The video includes this epic Viking battle scene, followed by something like an alien snow planet war scene. And alongside the video, we got some details, including one-minute video outputs and the fact that it supports text-to-video as well as text-plus-image-to-video and text-plus-video-to-video.

The response can be pretty well summed up by Alex Volkov of Thursday AI, who said, I take back everything I said about other video models catching up to Sora even remotely. Leaked video of Sora version 2, multiple scenes and just incredible character consistency. P.S. If this is coming as part of the $200 per month pro tier, OpenAI is about to see a lot of new subs.

Now, pricing is one big question. I'm not sure that even at the $200 per month, this can be economical for OpenAI, but we'll just have to wait and see. Sika Chen writes, I thought this kind of quality was maybe a year away, but here we are. Ethan Mollick writes, if this is real, it's a big leap in video generation over even the Chinese models. I guess we will see.

Sam Altman, meanwhile, on Saturday tweeted, I am so, so excited for what we have to launch on day three. Monday feels so far away. But of course, Elon wasn't about to let Sam Altman have all the fun. And his team responded over the weekend by announcing XAI's new image generator, Aurora. For a brief few hours on Friday night, XAI users got to play with this new image model.

Elon Musk explained, this is our internal image generation system. Still in beta, but it will improve fast. Based Beff Jezos writes, XAI casually releasing one of the best image models on a Saturday at 2 a.m. Y'all are built different for real. Alex Volkov again says, so this new Grok image generation model called Aurora just shipped on a Saturday. What do we think, folks? Looks like trained by them, no evals or details, just here you go, use the thing. Seems focused on photorealism. So what do we think? Well, TechCrunch had a few gripes.

They wrote, "X users posted Aurora-generated images showing objects blending unnaturally together and people without fingers." Although when I searched around, I actually couldn't find any examples of that. Stylistically, the model goes a little heavy on a soft-focus bokeh background, but that's not really a complaint.

What there are are tons and tons of photorealistic generated images, ranging from Tesla vehicles to stylistic landscapes. Lots of people point out that the model excels at celebrity portraits. We have Bill Murray dressed up as Abe Lincoln, Adam Sandler and Ray Romano palling it up on the set for a show they never actually did together, Captain Jean-Luc Picard in a Santa hat, even Sam Altman pining after Scarlett Johansson.

The model even does a great job rendering the particular unique characteristics of faces, with a portrait of Ilya Sutskever complete with his characteristic moles and jowls.

What we didn't get is an explanation of why the model was pulled down after such a short time. It may have been intended as a brief demo, or the team could have found some unforeseen issues with its release. Like the existing Flux model that drives Grok's image generation, Aurora seems to have basically no guardrails in what you could create in terms of real people or copyrighted material. Nudes were banned, but that was basically the only prompt Aurora would refuse. For example, it had zero qualms producing Luigi boxing with Mickey Mouse.

Solopreneur Peter Levels was impressed, posting, "From a quick glance, Grok's new image model, Aurora, looks higher in detail than Flux for generating photos of people. What's crazy is how they've been able to create an entirely new image model so fast. Or is it a partnership with Black Forest Labs, who makes Flux?" And that is the other lingering question. Are Black Forest Labs and other auxiliary model providers still viable if XAI is capable of spinning up this kind of model in-house, or are they actually the secret to this model?

As a small addendum, XAI has also released their Grok chatbot for free with a limit of 20 prompts and 10 images every two hours. In case you had any doubt, the team is clearly intent on taking OpenAI on in the new year. Chris Park, an XAI developer, took a potshot, responding to Sam Altman's post about Monday's reveal, saying, "'XAI doesn't need to wait until Monday. This team is too cracked and stays shipping.'"

Congrats, XAI, for releasing a brand new image generation model, Aurora. Grok 2 plus Aurora is now available within your X app in the model selector. Oh, by the way, Grok 3 is coming.

Lastly, one more model announcement. Meta has released a new version of their 70B model, calling the release Llama 3.3. Ahmad Al-Dahle, Meta's VP of GenAI, crossed over to rival social media platform X to make the announcement, tweeting, Introducing Llama 3.3, a new 70B model that delivers the performance of our 405B model but is easier and more cost-efficient to run.

By leveraging the latest advancements in post-training techniques including online preference optimization, this model improves core performance at a significantly lower cost, making it even more accessible to the entire open source community.

The benchmarks they shared are impressive, a close comparison to Gemini 1.5 Pro and GPT-4o. The model is also in line with Amazon's new Nova Pro, which was positioned as the lowest-cost frontier model. Nova Pro was priced at a third of the cost of GPT-4o, while Llama 3.3 is priced at an eighth of the cost of Nova Pro, or roughly one-twenty-fifth the cost of OpenAI's offering.
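As a quick sanity check of those ratios, here is a trivial back-of-the-napkin calculation. The normalization of GPT-4o's cost to 1.0 is purely illustrative; only the one-third and one-eighth multipliers come from the episode.

```python
# Back-of-the-napkin check of the relative pricing quoted above.
# Only the one-third and one-eighth ratios come from the episode;
# the baseline of 1.0 is just a normalization.

gpt_4o_cost = 1.0                  # treat GPT-4o's per-token cost as the baseline
nova_pro_cost = gpt_4o_cost / 3    # "a third of the cost of GPT-4o"
llama_33_cost = nova_pro_cost / 8  # "an eighth of the cost of Nova Pro"

ratio = llama_33_cost / gpt_4o_cost
print(f"Llama 3.3 relative to GPT-4o: {ratio:.4f} (~1/{1 / ratio:.0f})")
# -> 0.0417 (~1/24), which rounds to the "one-twenty-fifth" / 25x-cheaper
#    figure quoted in the episode.
```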

The model comes with a 128K context window, which is the same as GPT-4o, and equivalent to around 400 pages of text. It's text-only for now, and available as open source from Hugging Face. To give a sense of how important this size reduction is, VentureBeat did some back-of-the-napkin math. They suggested a 24-fold reduction in GPU load compared to other frontier models. This also brings the power of a frontier model down to a size that could feasibly be run on consumer hardware.

Awni Hannun, a machine learning researcher at Apple, wrote, Llama 3.3 70B 4-bit runs nicely on a 64GB M3 Max, would be even faster on an M4 Mac. Yesterday's server-only 405B is today's laptop 70B. The model certainly seems to add evidence for the theory of model distillation, the idea that the performance of frontier models can be jammed into a smaller size with some clever post-training.
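For a rough sense of why that quote holds, here is the approximate memory math. The numbers are illustrative estimates, not measurements from any particular runtime.

```python
# Rough memory math behind "a 70B model at 4-bit fits on a 64GB laptop".
# These are approximations for illustration only.

params = 70e9        # Llama 3.3 parameter count
bits_per_weight = 4  # 4-bit quantization, as in the quote above

weights_gb = params * bits_per_weight / 8 / 1e9
print(f"Quantized weights: ~{weights_gb:.0f} GB")  # ~35 GB

# Leave headroom for the KV cache, activations, and the OS, and the total
# still sits comfortably inside 64 GB of unified memory, whereas the 405B
# model needs a multiple of that even when aggressively quantized.
```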

Ultimately, the battle right now is not only for the state of the art, but for cost effectiveness. AI educator Paul Couvert writes, Meta has just released Llama 3.3 70B, which is more powerful than GPT-4o and 25x cheaper. Open source is really winning at every level.

Lots of exciting stuff out there. Like I said, I'm afraid that by the time this goes live, it will already be out of date. But here we are. That is AI. But for now, that's going to do it for today's headlines. Up next, the main episode. Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.

Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and NIST AI risk management framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center all powered by Vanta AI.

Over 8,000 global companies like Langchain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw. Today's episode is brought to you by Superintelligent. Every single business workflow and function is being remade and reimagined with artificial intelligence.

There is a huge challenge, however, of going from the potential of AI to actually capturing that value. And that gap is what Superintelligent is dedicated to filling. Superintelligent accelerates AI adoption and engagement to help teams actually use AI to increase productivity and drive business value. An interactive AI use case registry gives your company full visibility into how people are using artificial intelligence right now.

Pair that with capabilities building content in the form of tutorials, learning paths, and a use case library, and Superintelligent helps people inside your company show how they're getting value out of AI while providing resources for people to put that inspiration into action.

The next three teams that sign up with 100 or more seats are going to get free embedded consulting. That's a process by which our Superintelligent team sits with your organization, figures out the specific use cases that matter most to you, and helps actually ensure support for adoption of those use cases to drive real value. Go to besuper.ai to learn more about this AI enablement network. And now back to the show. Welcome back to the AI Daily Brief.

We are coming up on the end of the year, and that is a context to have a lot of big conversations, to think about where we've been, what we've learned over the last year, and where we think we're headed.

One of the great big blinking questions when it comes to generative AI is what the impact is actually going to be on jobs, both in the short term and in the long term. The great fear that so many people have is, of course, that robots are going to replace us all. With the speed with which they are increasing in capabilities, it's understandable that people think that the jobs they have now are very unlikely to be the jobs they have in the future. And for some, it's not hard to make the leap to the idea that there just won't be jobs in the future.

Still, I think it's going to be a lot more nuanced than that, and that's what we're going to discuss today. The way that we're going to kick it off in the initial context, though, is a report from The Information called Microsoft's New Sales Pitch for AI: Spend Less Money on Humans. They write, the potential of AI to replace human workers is an old idea, but one most companies have avoided bringing up explicitly for fear of suffering reputational harm and political attacks. But as tech companies try to overcome customer uncertainty about the value of AI, they're becoming more direct about that possible benefit.

Jared Spataro, the CMO for Microsoft 365 Copilot, said, what CFOs are rightly asking for is show me what you took out of our budget by using AI. We're hearing that loud and clear. And basically what this article is about is the fact that we're transitioning from pure innovation-style spending to actually trying to capture other budgets. And to do so, companies are looking for ways they can reduce spending in other areas.

The article also includes some other case studies. Finastra, a British fintech with $2 billion in annual revenue, recently used 365 Copilot to do a bunch of marketing work that it would have previously hired an external agency to do.

Basically, Finastra's in-house marketing staff interviewed 50 financial sector executives around what they found useful about its product. They used Copilot to transcribe the interviews, and then Copilot generated ad copy from those 4,000 pages of transcribed interviews. The CMO of Finastra said that this might have typically cost them something like $60,000 and taken several months, whereas they were able to do it for their employees' time plus the cost of Copilot for each of those employees.
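The episode doesn't go into Finastra's tooling beyond saying Copilot handled both steps, but the general transcribe-then-generate pattern looks something like the sketch below. The model names, folder path, and prompt are illustrative assumptions, not details from the episode.

```python
# A minimal sketch of the transcribe-then-draft pattern, using the OpenAI SDK
# as a stand-in. The episode only says Finastra used Microsoft 365 Copilot;
# the models, paths, and prompt below are illustrative assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: transcribe the recorded interviews (hypothetical folder of audio files).
transcripts = []
for audio_path in sorted(Path("interviews").glob("*.mp3")):
    with audio_path.open("rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    transcripts.append(result.text)

# Step 2: draft ad copy grounded in what the interviews say. In practice,
# thousands of pages would need to be chunked or summarized before being
# passed to the model; this sketch skips that step.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You write concise B2B ad copy."},
        {
            "role": "user",
            "content": "Draft three ad variants based on what these customer "
            "interviews say is most useful about the product:\n\n"
            + "\n\n---\n\n".join(transcripts),
        },
    ],
)
print(response.choices[0].message.content)
```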

For that CMO, however, where he draws the line is at his own internal marketing staff. This is about cutting agency spend, not about reducing his 60-person marketing team. He said, I think we're going to produce more stuff with the people we have. Another area that has seen lots of disruption is around support.

The CEO of telecom firm Bell Canada recently told analysts that Google's AI tools helped contribute to $20 million in labor cost savings. Interestingly, when Google reposted this, they removed the phrase labor cost, so it just said $20 million in savings. And this is hardly the only example of this phenomenon. The Klarna team has been raving about how they've been using AI to save money. And although I've seen a fair bit of skepticism around the magnitude of some of these changes, they still seem pretty dramatic.

Over the first nine months of the year, Klarna sales and marketing expenses fell 16%, while customer service and operations expenses were down 14%. Those cost savings came while generating 23% higher revenue. And Klarna points to AI for most of that cost saving. One specific example, they said that their customer service chatbot, which was launched back in January and powered by OpenAI, they say can do the work of 700 humans. They also, like that firm we were just discussing, have slashed marketing costs by cutting out outside firms.

Then, of course, there's Salesforce, who have gone all in on the idea of Agentforce, which is an in-production agent system that theoretically could be replacing salespeople, even though for now, Salesforce says it's augmenting them and allowing them to do more.

And of course, this agent discussion is swirling around hugely. There's an idea that you might have seen floating around that vertical AI agents could be 10 times bigger than SaaS. And basically where those numbers are coming from is the notion that vertical AI agents are replacing labor, with the cost of labor being dramatically higher than the cost of software.

For the sake of this conversation, at least, we're not going to get into all of the assumptions that go into how those things are going to be priced and how weird and wild that might get. The point is that there is just definitely more comfort right now with the idea that AI is actually going to be in some way human replacing. So where does this all leave us?

Over the last couple of years, doing this podcast and running Superintelligent, which helps big enterprises with their AI transformation, I've started to have a pretty clear mental framework for how I think this is going to play out. Now, at this point, this is entirely speculation. This is just my take. I could be wrong about all of it, but it's certainly not informed by nothing.

So where I think we've been for the last couple of years, let's call phase zero. This is where enterprises are exploring the possibilities of AI. They're doing pilots. They're inviting their teams to try things out. They're starting to find use cases which can be repeated over and over again. They're dabbling, experimenting, trying to figure out what works, and thinking about, in some cases, how they might begin to scale new processes. One thing that's important to note is that in this phase zero, absolutely everything is up for grabs.

Part of what makes the AI disruption so different is that it literally implicates everything. It is every business process, every workflow, and that's a scale, depth, and breadth of transformation unlike any we've seen before. But where are we headed? Well, I think where we're headed is, let's call it phase one, which is all about cost reduction.

I've spoken about this before. I think it is inevitable that there will be an initial phase where enterprises view AI largely, if not exclusively, as an efficiency technology, where they think about it just as a way to reduce costs. In other words, a way to do the same, but while spending less. This makes sense. There are two ways for big companies to improve their performance.

One of them is growing, innovating, creating new lines of business. The other is doing the current stuff just more cheaply. And unfortunately, in a lot of ways, that one tends to be easier for companies to conceptualize.

I think that during this reduced cost phase, companies that are able to do this will be rewarded by markets. Short-term market investors love efficiency gains. They love reduced costs. That leaves more money for other things like stock buybacks. So TLDR, I think it is inevitable that we're going to be in a reduced cost type of phase. And I think this is going to be a lot of the story of 2025.

One important caveat to this, though, as we think about reducing costs: task replacement is not necessarily, or not always, the same as role replacement. I think sophisticated organizations are going to be able to make this distinction. Like I said before, it is every task and every activity and every workflow that is likely to be shifted in some way. It'll either be done by a human with an AI assistant in some more efficient or better-performing way, or it'll be replaced by an agent.

However, if we view jobs as collections of tasks, there aren't very many where the entirety of that job and the entirety of those tasks are likely to be replaced all at once. Sure, there are some. This is why I think we're seeing the biggest disruption in customer support and customer service areas to start. When it comes to the complexity of sales or marketing or operations, it seems much more likely to me that even in this reduced cost phase, it's going to be about job redefinition and role redesign.
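To make the task-versus-role distinction concrete, here is a toy sketch; the roles, tasks, and automation flags are invented for illustration and aren't data from the episode.

```python
# Toy illustration of the task-vs-role distinction, not a real methodology.
# The roles, tasks, and flags below are made up for the example.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    ai_assisted: bool        # done faster by a human with an AI assistant
    fully_automatable: bool  # could plausibly be handed to an agent end to end

support_rep = [
    Task("answer routine tickets", True, True),
    Task("escalate edge cases", True, False),
    Task("update the knowledge base", True, True),
]

marketer = [
    Task("draft campaign copy", True, False),
    Task("negotiate agency contracts", False, False),
    Task("analyze campaign performance", True, False),
    Task("set positioning strategy", False, False),
]

def automatable_share(tasks):
    return sum(t.fully_automatable for t in tasks) / len(tasks)

for name, tasks in [("support rep", support_rep), ("marketer", marketer)]:
    print(f"{name}: {automatable_share(tasks):.0%} of tasks fully automatable")
# Most roles land somewhere in between: plenty of task replacement,
# far less wholesale role replacement.
```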

This won't exclusively be the case, of course. There will be many boardrooms that just want headcount slashed, and that's something that we're going to have to deal with. But as you can see, I'm trying to draw a distinction between the organizations that are going to just do the low-hanging fruit obvious things, and perhaps even have some short-term success with it, versus the companies that are really going to dig in and engage with these issues and position themselves for success in the long term.

Now, speaking of that, if phase one is reducing costs, phase two is all about innovating and expanding capabilities. Like I said, there are two ways for a company to improve its performance. The first is it can do all the same stuff it does now, just more cheaply. The second is it can start to think about how it can do more. Could you, for example, with the same number of inputs in marketing, in terms of cost, time, etc., produce 10 times the amount of content and collateral?

Would having 10 times the amount of content and collateral make a meaningful dent in increasing performance? Would it grow sales?

I think where AI starts to get really interesting and starts to move out of this realm of pure efficiency is as organizations start to think about how they get out ahead of their competitors to build the future, to offer products that weren't available before, to create support around their products that's totally distinguished from what's available now, to be better than anyone ever has been at getting your products or services into the hands of the right customers, and generally just becoming better companies, not just more efficient companies.

So how does phase two take over from phase one? Two things. First of all, I think a lot of great organizations are going to be thinking about phase two even as they go through some amount of phase one. In other words, they're going to be thinking about reducing costs in the context of ultimately wanting to get to innovating and expanding their capabilities. These are the companies, by the way, who are likely to get down on the granular task replacement level rather than just thinking in terms of role replacement.

My point is that it's going to be completely possible for organizations to take advantage of the cost reduction capabilities of AI while positioning themselves for the future. That's about something more innovative. Ultimately, this is going to lead to phase three, grow, out-compete, and flourish.

At some point, it will not just be companies who are jockeying roughly in some combination of phase one and phase two. I think we will very quickly start to see companies that wildly outperform their peers and competitors because they move more quickly into a full embrace of AI as an opportunity creation technology, not just as an efficiency technology.

And I think it won't take long before the press picks up on those signals, before markets pick up on those signals. And all of a sudden, all those early short-term market rewards for just reducing costs and thinking about this as an efficiency technology look dumb because you're behind because your competitors are racing ahead, offering things that you aren't able to build, a level of service that you aren't able to offer. I think there could be dramatic inflection points where companies start to scoop big chunks of market share that weren't theirs previously because of the outperformance that a full AI embrace has enabled.

So what are you supposed to do with this if you're a company? Ultimately, it all comes back to leadership. Leadership is not just about which tools to use and which new structures to set up, although that's a part of it as well. There are going to be lots of great resources to help with those questions. I'm building one in Superintelligent. And by the way, if anyone is looking for a new infrastructure for integrating new business processes and workflows, we got you covered.

But all that won't amount to much if leadership isn't able to effectively lay out a vision for how they want to harness AI and how they want to build their own future for their companies. You have to get a ton of stakeholders on board. You have to get your employees not terrified that they're going to lose their jobs to robots tomorrow. You have to convince investors that AI shouldn't just be about reducing headcount, but has to be about positioning for the future.

There is a lot of work to be done. And although these technologies are going to be available to everyone, the best leaders I predict will dramatically outperform because of the tone they set in how all of that tool usage and all of these new processes are adding up to something more. So as you start to see more of these conversations talking about spending less money on humans, take note of them, keep track of them, but do not despair.

And perhaps this is just my optimistic side. It is completely inevitable that the organizations that treat AI as an opportunity creation technology are the ones who are going to win. And it'll take less time than you think for markets and everyone else to figure that out. That's going to do it for today's AI Daily Brief. Appreciate you listening or watching, as always. Till next time. Peace.