
849: 2025 AI and Data Science Predictions, with Sadie St. Lawrence

2024/12/31

Super Data Science: ML & AI Podcast with Jon Krohn

People
Jon Krohn
Sadie St. Lawrence
Topics
Sadie St. Lawrence: Demand for GPUs and other AI hardware accelerators in 2024 would far exceed anything seen before, opening the door for new competitors to challenge established giants like NVIDIA. NVIDIA's stock price rose sharply, while many new competitors emerged, such as AWS's Trainium and Inferentia chips. The prediction that large language models would become a new operating system was only partially accurate; people's habits take longer to change. LLM capabilities will advance beyond simple scaling by mimicking the human "slow thinking" process to strengthen logical reasoning and mathematics. Future LLM scaling will go beyond sheer size toward domain-specific model development, analogous to how animals in biology evolve distinct kinds of intelligence suited to their environments. Although LLM function-calling APIs integrate various tools, enterprise tool consolidation still has a way to go; Microsoft Copilot fares better because of its deep integration with the Microsoft suite. Sophisticated code interpreters (such as ChatGPT's Advanced Data Analysis) let business users perform data analysis on their own, redefining the traditional analyst role and causing workplace upheaval. AI's impact on tech jobs has been enormous, leading to layoffs at many high-tech companies and a sharp decline in software development job postings. AI affects junior technical staff the most; they need to upskill, especially in business skills, to stay competitive. Open-source projects, particularly Meta's Llama models, and ultimately consumers, were the biggest winners of 2024. In 2025, agentic AI will be the dominant trend, moving beyond single applications to create specialized networks that can autonomously handle complex tasks, though security and permissions across platforms remain a challenge. In 2025, AI will be further integrated into everyday devices such as augmented reality glasses and personal computers, though not all of these integrations will prove valuable. Data science roles will keep developing and evolving, requiring practitioners to continually learn and accumulate new skills such as AI engineering, similar to IT's shift toward cloud computing around 2014.

Jon Krohn: In 2024, large language models would become a new kind of operating system, changing how people interact with machines and reducing reliance on keyboards and screens. Innovative LLM approaches would work to replicate the human "slow thinking" process, significantly enhancing capabilities in logic, reasoning, and mathematical tasks, potentially requiring less training data; OpenAI's O1 model exemplifies this trend. LLM function-calling APIs would integrate diverse systems and applications, streamlining workflows into a consolidated enterprise toolset, which played out in 2024 through agentic AI frameworks. Sophisticated code interpreters (such as ChatGPT's Advanced Data Analysis) let business users perform data analysis on their own, redefining the traditional analyst role and causing workplace upheaval. AI's impact on tech jobs has been enormous, leading to layoffs at many high-tech companies and a sharp decline in software development job postings. Google's strong return in 2024 was the comeback of the year: with products like Gemini 2.0, it performed well across benchmarks and returned to the competitive frontier of AI. OpenAI's 2024 was disappointing, falling short of expectations. Apple Intelligence disappointed as well, perhaps because of Apple's emphasis on secure, reliable products. OpenAI's O1 model was the most stunning moment of 2024. Tesla's full self-driving capability was impressive, fulfilling a childhood dream. Waymo's driverless car experience was impressive; Waymo's mission to build "the world's best driver" illustrates the potential of machine intelligence at scale. Anthropic and its Claude models were the biggest winners of 2024; Claude excels at tasks like debugging, summarization, and transcription, and its friendly user interface is also impressive. In 2025, monetization of enterprise AI will be critical, as companies need returns on their massive hardware investments. In 2025, demand for AI engineering skills will exceed demand for traditional data science skills, but this represents an evolution of the role rather than a replacement: practitioners should build new AI engineering capabilities on their existing technical foundations.

Deep Dive

Key Insights

What was the 'comeback of the year' in AI for 2024 according to Sadie St. Lawrence and Jon Krohn?

Google was named the 'comeback of the year' for 2024 due to its significant advancements in AI, including the release of Gemini 2.0, Willow, AI Studio, and NotebookLM, which helped it regain a competitive position in the AI landscape.

Why was OpenAI considered the 'disappointment of the year' in 2024?

OpenAI was considered the 'disappointment of the year' because it failed to meet the high expectations set by its previous innovations, such as GPT-3 and GPT-4. Despite releasing the O1 model, the anticipated breakthroughs in scaling and capabilities did not materialize as expected.

What was the 'wow moment of the year' in AI for 2024?

The 'wow moment of the year' was OpenAI's O1 model, which introduced slow thinking capabilities, allowing AI to break down complex tasks into intermediate steps and validate each step, significantly enhancing logic, reasoning, and mathematical tasks.

What is the primary focus of Sadie St. Lawrence's prediction for AI in 2025?

Sadie St. Lawrence predicts that agentic AI will be the dominant trend in 2025, moving beyond single applications to create specialized networks that can autonomously handle complex tasks, though challenges around security and permissions between platforms remain.

How does Sadie St. Lawrence see AI integration into everyday devices evolving in 2025?

Sadie predicts that AI integration into everyday devices will accelerate in 2025, with advancements like augmented reality glasses offering real-time translation and more sophisticated personal computing experiences, though not all integrations will prove valuable.

What impact does Sadie St. Lawrence expect AI to have on scientific research in 2025?

Sadie expects AI-driven scientific research to expand significantly in 2025, building on current successes where AI-assisted researchers achieved 44% more new material discoveries and 39% more patents than researchers who weren't AI-assisted.

What is the key challenge in enterprise AI monetization as discussed by Sadie St. Lawrence?

The key challenge in enterprise AI monetization is ensuring profitability while addressing security and privacy concerns. Many AI projects get stuck in proof-of-concept purgatory due to difficulties in making them cost-effective and secure for production environments.

What skills does Sadie St. Lawrence predict will be in high demand in 2025?

Sadie predicts that demand for AI engineering skills will surpass traditional data science skills in 2025, though this represents an evolution of the role rather than a replacement, requiring practitioners to build on existing technical foundations with new AI engineering capabilities.

Chapters
This chapter recaps Sadie St. Lawrence's predictions for 2024, assessing their accuracy. Topics include the demand for GPUs, LLMs as a new operating system, advancements in LLM capabilities, tool consolidation via LLM APIs, and workplace upheaval.
  • Sadie's predictions for 2024 included increased demand for GPUs, LLMs transforming human-machine interaction, advancements in LLM capabilities beyond scaling, tool consolidation via LLM APIs, and workplace upheaval due to AI.
  • Three out of five predictions were deemed highly accurate.
  • The prediction about LLMs as a new operating system was considered partially accurate, with more development needed.
  • The prediction regarding workplace upheaval highlighted the impact of AI on tech jobs and the need for skill adaptation.

Transcript


This is episode number 849 on Data Science Trends for 2025 with Sadie St. Lawrence.

Welcome to the Super Data Science Podcast, the most listened to podcast in the data science industry. Each week, we bring you fun and inspiring people and ideas exploring the cutting edge of machine learning, AI, and related technologies that are transforming our world for the better. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple.

Happy New Year and welcome back to the Super Data Science Podcast. I hope you had a wonderful 2024. To start you off on the right foot in 2025, for today's episode, we've got our annual data science trend prediction special for you again this year.

In today's episode, which will appeal to technical and non-technical listeners alike, we cover how Sadie's predictions for 2024, which she made a year ago on this show, panned out. We award our wow moment of 2024, our comeback of the year, our disappointment of the year, and our overall winner of 2024. And then, of course, we speculate on what 2025 will bring us, including agentic AI,

AI in everyday edge devices, the remarkable way AI is transforming scientific discovery with numbers that will surprise you, what massive GPU investments by tech giants tell us about AI monetization in 2025, and the unexpected shift coming.

to data science careers. As with our 2022, 2023, and 2024 predictions episodes, our special guest again this year is the clairvoyant Sadie St. Lawrence. She's a data science and machine learning instructor whose content has been enjoyed by over 600,000 students. She's the founder and CEO of the Human Machine Collaboration Institute, as well as being founder and chair of Women in Data, a community of over 60,000 women across 55 countries.

On top of all that, Sadie serves on multiple startup boards and is host of the Data Bytes podcast. All right, you ready to join Sadie and me on this visionary episode? Let's go. Sadie, welcome back to the Super Data Science podcast for another year of forecasting. This is your fourth consecutive year where you are leading us into the future, into the exciting future.

Thank you, Jon. I cannot believe it's been four years that we've been doing this.

As everyone says, it's so cliche at the end of the year, where does the time go? But now I really feel like where did the time go because we've been doing this for four years. And so eventually maybe five years we have to do a full five-year recap or something. How many were we off? How many were we on? Yeah, that's a really good idea because even thinking about, even just as I was looking back to see how many times you've done this just now before we started recording, I was thinking, wow, 2021 predictions, what would those have been like?

Do we even want to look? Not in this, not this year, not this year. Save it for next year. You have to listen to the podcast for a whole nother year to see what we said in 2021. Yeah, yeah, yeah. But yeah, in 2021, we made predictions for 2022. You've always been pretty on the money. We can, I'll quickly recap your predictions for 2024 before we find out what you've been up to over the last year. So for 2024,

your number one prediction was that there would be much, much more demand than ever before for GPUs and other kinds of AI hardware accelerators, which would open the door for new players to compete with established giants like NVIDIA and

Absolutely. I mean, NVIDIA has become... I mean, it's crazy to think from last year, it already seemed like it had a crazy stock price, and we've seen it increase so much more over 2024. So you're absolutely right there. But simultaneously, we have also seen lots of players come in. Tensor processing units have evolved a lot. AWS's Trainium and Inferentia chips are doing exciting things. For example, this...

giant new cluster that's been announced that Amazon is involved with, with Anthropic, with hundreds of thousands of Trainium chips. So I think you were spot on with number one there. Yeah, I'm really excited that there is more diversification in the market, although we may need to start predicting these trends a little further out, because I read today that

NVIDIA employees, I think like 75%, are millionaires just because of how much the stock has risen. And so I'm like, well, if only we had predicted this out a few years ago, you know, maybe I would be in a little bit better financial position today. But regardless, let's start looking at some of those new players coming into the market. And I think it's just going to be great for everyone involved, particularly startups and getting access to the compute that they need.

As a super data science listener, you're probably interested not only in data powered capabilities like ML and AI models, but also interested in the underlying data themselves. If so, check out Data Citizens Dialogues, a forward thinking podcast brought to you by the folks over at Colibra, a leading data intelligence platform.

On that show, you'll hear firsthand from industry titans, innovators, and executives from some of the world's largest companies such as Databricks, Adobe, and Deloitte as they dive into the hottest topics in data. You'll get insight into broad topics like data governance and data sharing, as well as answers to specific nuanced questions like how do we ensure data readability at a global scale?

For folks interested in data quality, data governance, and data intelligence, I found Data Citizens Dialogues to be a solid complement to this podcast because those aren't topics I tend to dig into on this show. So while data may be shaping our world, Data Citizens Dialogues is shaping the conversation. Follow Data Citizens Dialogues on Apple, Spotify, YouTube, or wherever you get your podcasts.

For sure. Your second point for 2024 was that we'd have LLMs as a new operating system, what you called an LLM OS: that large language models would transform the way that people interact with machines, meaning that you don't need to be using a keyboard or a phone screen quite as much, that you can have face and voice recognition and voice

You've been absolutely spot on with that. In my opinion, I think this one is a little bit of a miss personally. And the reason is that I think our habits take much longer to change as humans, right? I don't know how much, I haven't seen the stats on how much people are using voice mode, but we're so used to interacting with individual applications. I know there's computer use from Anthropic.

I don't see it fully integrating into a full operating system yet. I think we're starting to see the tip of the iceberg, but I think there's still a lot more to come in this space. And so I'm going to be a little bit more mean on myself in this one and say this was only half accurate, but more to come.

Okay, well, you made up for that with number three, which could not have been more on the money. So number three, you discussed how we will make advances in LLM capabilities beyond just

scaling the size of the network or scaling data. You said that we would have a slow thinking model: innovative approaches would aim to replicate human slow thinking processes as described by the famous psychologist Daniel Kahneman, who passed away in 2024.

And yeah, and you said this could lead to significantly enhanced capabilities in logic, reasoning, and mathematical tasks, potentially requiring less training data. And O1 from OpenAI exemplifies this trend, which I'm sure we're going to see other huge hyperscaler companies scaling along this inference-scaling lever that they now have available to them.

And it's interesting, this is, for now, it's relatively constrained to problems that you can easily break down into subparts. So like math problems, computer science problems, these can be broken down into intermediate steps where you can use reinforcement learning to say, okay, at each, you know, at this first intermediate step, we've done a good job. We've verified that we did it correctly. Let's move on to the next intermediate step. And so it'll be interesting to see

maybe in 2025, how the hyperscalers can take innovations like O1 and extend them beyond these relatively narrow tasks that are easy to break down into intermediate steps, which could be a big challenge. I think where we're going to see it move to is more scaling in terms of specific domains. And a lot of where I'm pulling this from is from psychology and biology. And

If we look at brain development, if we pulled scaling laws into brain development, we would just think, you know, the bigger the animal or the mammal, the larger the brain, the more intelligence that they have. That is not always the case. What we find within biology is

Animals in particular develop a unique type of intelligence based on their environment and the environment that they decide to thrive in. And I think we're actually going to see that happen a lot this next year where, okay, we've realized that slowing down thinking works better.

But that's maybe for really complex problems, not for all types of problems. So I think we're going to move beyond scaling in some really interesting ways this next year. And we're just again seeing the tip of the iceberg of what that looks like with models like O1. Yeah, yeah, yeah. Exciting times. As we'll talk about more later, O1 is a really exciting innovation for me.

Making spoilers on things that are to come. So that was number three. Number four for 2024 was tool consolidation via LLM APIs. So you said that function calling APIs of LLMs will unify and integrate diverse systems and applications, leading to streamlined workflows and a consolidated enterprise tool set. So this has ended up coming to life with this agentic AI term.

where you have agentic AI frameworks that have blossomed over 2024 and have allowed LLMs to make use of lots of tools, online capabilities, interacting directly with computers, like kind of all of the functionality of the computer, as opposed to just text in, text out. And so, I mean, this one, you probably won't be able to disagree when I tell you that you were on the money for number four.

Yeah, I'll try and disagree with you a little bit. I was hoping to see some more tool consolidation in terms of, let's say, people using ChatGPT, for example, and it being their go-to place for analysis, for coding, for presentations. I'm still seeing a lot of specialized tools. I think Copilot's actually done a little bit better job of it, just because it integrates so heavily within the Microsoft suite. The GitHub Copilot?

No, Microsoft Copilot. Or Microsoft Copilot, yeah. Yeah, just from an enterprise level, we still rely so heavily on Microsoft Office products. Do you use Microsoft Office products? Okay, so I have a heavy opinion on this. I use it in one company now. In my previous company, we were all on the startup tools, which were G Suite, Slack, etc.

Which do I prefer? Definitely the G Suite Slack, but we work with more enterprise clients and so that's why we switched over to the Microsoft Suite. Oh boy. That'll make for an interesting 2025 for you.

It's going to be rough. I don't know what's going on with search and Outlook, but I'm sure a lot of people feel my pain on this. But I think in terms of tool consolidation, Microsoft is just set up as a leader in that space just because of how much dominance they have in the enterprise space today.

Nice, yeah. We've gone off piste a bit here. But your fifth and final prediction for 2024, before we move on to our next topic category entirely, was workplace upheaval: that sophisticated code interpreters like ChatGPT's Advanced Data Analysis

empower business users to perform data analytics on their own. And so this, you know, redefines traditional analyst roles. And I understand that you might have some interesting stats for us on how the 2024 job market changed that kind of bear out what you described last year with your expectation of workplace upheaval. Yeah, unfortunately, we're seeing this heavily impact tech, and people who aren't in tech, I think, are taking the wrong conclusion from this, which is

tech is not a good place to be in and the job market is difficult. But particularly in high-tech companies, we're seeing a lot of layoffs because they know and understand how to implement these tools. So there's a chart from FRED showing software development job postings on Indeed in the United States. We kind of reached the peak of that in 2022, and it has been on a sharp decline since then,

and it's now at one of its lowest rates since 2020. And I think what's interesting about this is that that peak coincides with when ChatGPT came out. We saw this new AI boom. And so you can see a direct correlation between technical jobs being affected by AI, particularly because that's one of the easiest places to implement it.

But also technical people are the ones who are aware of what this technology can do and how to implement it. Yeah, I read just the other day that Klarna, they're kind of like a payments company; they allow you to break your payments up. You know, you buy a stereo, you can do four quarterly payments at the same price as if you were just buying the thing outright immediately. And so Klarna, K-L-A-R-N-A,

There was recently a report that they haven't done any hiring, basically since that peak when ChatGPT was released. And they've just been using normal attrition to reduce their workforce size. And they're replacing call center people, and I presume software developers, though I don't know that for sure, with AI. Yeah, I think where this is going to get really hard is for...

recent college graduates or less experienced individuals, right? That is where AI really shines. I talk about using AI tools a lot, like having a really excited intern, right? They have deeper knowledge than me and probably fresher knowledge than me in particular areas, but having that context is what they're missing. And so it's,

If you are in that position of being a recent college graduate or early on in your career, this is going to be a really key time to expand your skills, but particularly your business skills, because a lot of the basic tasks are what leaders are going to look for AI to replace. Yeah, good shout there. And we will have more later in this episode on what we think for 2025 and how careers and skills will be impacted later.

by the AI tools that are coming and becoming richer as we go.

So that gives a recap of our predictions, really your predictions for 2024, which certainly you couldn't even argue with me that you got three of the five spot on. So the demand for compute, the slow thinking model, workplace upheaval, tool consolidation maybe could have been, I guess you weren't expecting it to kind of be the agentic way that it ended up proliferating.

But we are going to be talking about that a lot more in today's episode. And then the LLM OS, you felt like that wasn't quite right. But...

Still lots of ripe capability there. I think as these kinds of tools, things like Apple Intelligence, become more useful, more widespread, we'll see LLMs be more of the, you know, the voice, the face, the direct interface with the code behind the scenes, with the backend, as opposed to needing to rely on typing, like we still have mostly been doing this year. All right.

So quickly, Sadie, before we get to, we're going to do an interesting section next that we've never done before, which is we're going to, and this is your idea. I love it. Is that we'll pick an overall winner for the year. We'll pick a comeback of the year, a wow moment and a disappointment of the year. But before we get to that,

Tell us a bit about what your year has been like, something in particular that interests me and is quite timely because this episode is coming out just before New Year's. And so this is definitely the time that people are thinking about how they can be restructuring their life to be more successful, more happy in the coming year. And you have your first physical product. So after a

a decade of creating popular digital products, you now have a physical product that people can order or pick up at their bookstore and could transform their 2025.

Yes, this has been such a passion project of mine because I, for the past 10 years, being a true data person, have been tracking what I've been doing in half an hour increments for the last 10 years. And I have gained so much from this experience that I decided I wanted my own planner. And a lot of people are like, wait, it's tracking, but it's a planner. How does this combine? I'll explain how it all works together. But I've been doing this for a long time.

to be able to do it in a more effective manner. And so I created a planner called The Observer. And the whole name behind The Observer is to actually observe what you're doing with your time, because I'm a true believer that actions speak louder than words. And we often set goals for ourselves, but then rarely reflect on how we

spend our time to align with those goals. And so the whole idea is that you just fill in daily what you did in your day and just that whole process of taking a second to be like, where did I spend my time and what did I...

go to really triggers something in your brain to question, am I aligning my time with my goals? And so there's some pre-setup in terms of imagining the life that you want, putting those into increments for quarters and months and weeks. And then the most important part is that reflection part on a daily basis where you track your time. So it's

It's been a super fun project. Again, it was just a passion project of mine that then people would ask me how I do it. So I created a Shopify account on a weekend and then next thing we know now we're in stores. So it's, it's been really a pleasure to create. Nice. Yeah. And we'll be sure to include a link to that, which is the observer.store, right? And we'll have observer.store in the show notes so that you can pick up your own observer. See how you are spending your time and,

imagine how you could be spending your time differently and maybe have a more fulfilling or more productive life. Cool. All right. So let's move on to now the section that we promised just before we talked about your planner, which is picking our... Should we do overall winner last? Yeah, let's do winner last. Okay. So then should we do comeback of the year first? Yes. Okay, cool. So for our comeback of the year...

I'll go first. Jon and I have not discussed these, so I'm also really excited to hear what his are. And we may have the same ones. I'll let you go first. I have mine written down. I promise I won't change it once I hear yours. But yeah, what was your comeback of the year? Google. They had so much good stuff that they saved for last. I mean, even in these last few days, it's been incredible. I think it was today Veo 2 came out.

Willow, AI Studio, NotebookLM. I mean... Yeah, Gemini 2.0. Across benchmarks, across the board, with Gemini 2.0, you know, they're now competing at the forefront again of

like you would have expected Google to be all along with any kind of AI. You know, back in 2021, when we were making our predictions for 2022, you would have anticipated that Google would be

the frontrunner still in 2025. And while they are not clearly the frontrunner like they were back then, there are absolutely other competitors out there, some of which we will probably be talking about when we make our predictions about overall winner and so on. So I don't want to try to spoil it too much, but it's not a unipolar AI world anymore. It's a multipolar world.

AI world, which is great. It's great to have lots of smart people working in lots of different labs with their own takes on how things should be done safely and how the envelope can be pushed effectively. Yeah, I'm curious, since Google is both of our comeback for the year,

Do you think that they're going to gain traction in the market? Or, because OpenAI has such a brand presence, when you think of AI and gen AI, I'm guessing most people think of OpenAI right away. Do you think Google's going to have a hard time coming back from that, even though technically you and I both see them as the comeback of the year?

Yeah, it's an interesting question. It probably depends on where you are. I know that I recently came back from a trip in San Francisco and lots of people there seem to be... It's interesting. So I went to an event, this Gen AI event, where on your name tag, you wrote your name as well as your favorite AI tool.

And a lot of people had ChatGPT written on their name tag, but there were also a lot of people verbally expressing disappointment with OpenAI's year in 2024. So that is, yeah, so it's interesting. I think OpenAI absolutely had pole position in late 2022 with the release of ChatGPT, and they maintained that in 2023 with the release of GPT-4.

But I think people were expecting, and who knows, maybe by the time this episode goes out, there will be like a GPT 4.5 release or something like that. But yeah, I think people with the delta between GPT 3 and 4, I think there was a lot of expectation that scaling alone, scaling data, scaling your number of weights in your model, that that would continue to yield the crazy advances that we saw between GPT 3 and 4.

And that hasn't borne out. So we are seeing OpenAI still absolutely competing at the frontier with things like their O1 model, which I absolutely love. We already talked about that earlier in the episode. Scaling at inference time has a lot of potential, especially with tasks that can be broken into intermediate steps and validated at each of those intermediate steps. But yeah, it'll be interesting to see

where OpenAI goes and how they are able to monetize or not. Like, that's something that Google still enjoys. Like, they still enjoy this monopoly over search. They're so dominant in advertising. They can afford to make some missteps or be a bit slower and still catch up over time. They have huge amounts of compute. They have, I think, probably still the strongest, largest concentration

of AI talent. And so, I don't know, it's interesting to see where they'll go. OpenAI is potentially more vulnerable, not so much to another existing giant like Google, but to other upstarts like an xAI or an Anthropic.

Yeah, I think it's shaping up to be a really exciting 2025 because now that we're about two years into this Gen AI movement,

it seems like everybody is at the starting line again. And I think that that's just going to make a really interesting year for us next year. But I would agree with you. Should we do the disappointment in terms of who the, can we jump to that now? Yeah, let's do it. Let's do disappointment of the year. What's yours? Yeah, so I will say it is OpenAI. And I think it was really hard just because they set expectations so high.

Yeah, yeah. I wrote mine down and scratched it out for something that I thought of that's even more disappointing to me. What's more disappointing? Apple Intelligence. Oh, that didn't even cross my radar because I haven't even used it. Yeah, that would be the biggest disappointment because that was just, what a flash in the pan. Does anyone use it? That's my question. Who uses it? I don't know. No one's talked to me about it, but there's certainly been a lot of hype around its potential.

And, yeah, I mean, Apple still hasn't, at the time of us recording this at least, they haven't figured out how to embed. I think for them, a really tricky thing for Apple is that security is so important to them.

And a reliable product experience is so important to them. And LLMs are fundamentally neither of those things, especially if you're having to send off your user requests to OpenAI's GPT models for processing, because Apple doesn't have its own in-house LLMs that are capable of the range of tasks that consumers expect today.

So yeah, they're in an interesting pickle, but, same as with Google, Apple has huge amounts of revenue, amazing margins, and they have time to figure this out. And I would not be surprised if, another five years from now,

Google and Apple are still dominant players in technology. See, and I was going to say that I hope that next year our comeback of the year will be Apple and Apple Intelligence. I mean, yes, they want things to be right, but I mean, who's used the Photos app? That was not right. I think we all can agree that the new release of the Photos app...

was a total miss as well. So, you know, maybe Apple Intelligence, and they're tied with OpenAI, as the backend behind a lot of that as well. So, you know, they can tie together in both of our disappointments, but I really hope to see both of these back in the game as our comeback for next year, because there's so much potential from phone integration with AI that I would really hope to see this next year. For sure. Yeah.

All right. So, yeah. So our comeback of the year, we both agreed on Google. Our disappointment of the year was

Yeah, your top pick was my second top pick with OpenAI. And it sounds like Apple Intelligence did disappoint both of us. Let's move on now to our wow moment of the year. Maybe I'll go first on this one since you got the last one. And so for me, I've already alluded to this. It's O1 from OpenAI. It's interesting that expectations were so high for them that they could simultaneously be the disappointment of the year and the provider of our biggest wow moment.

Yeah, I think that just shows how high expectations were, are, and they continue to be within AI. I think that all of us in AI now...

are almost TikTokified. I don't even know if that's a word, but in terms of wanting that quick dopamine hit of like, if something isn't happening this week or something that's not wowing us or blowing us away, we kind of just write it off. And so it's interesting that you have them as your wow moment when it's also my disappointment, because I think it really just ties into

expectations are high, and we are looking for that next dopamine hit in AI every single week, if not every day. What's your wow moment? So my wow moment is not necessarily about overall usefulness, but just on a human level. When I listened to it, and this should give you a clue into what it is, I was just truly impressed. And that was NotebookLM. That's my number two. That's my number two.

And the reason why it just was like so human to me, and that's why it wowed me is their expressions, the way that they talked. It felt like you and I talking on a podcast. And so just from a human level, is it going to change the world? I don't know, but I just thought it was cool. And so that was my wow moment.

Absolutely. I almost had that as my number one as well. And we did an episode of this podcast, number 822, which came out in late September. And in that episode, I expressed how blown away I was by NotebookLM.

And in it, I aired, in its entirety, a 12-minute podcast episode about my PhD dissertation, which is so boring. But these fake podcast hosts did manage to make it seem exciting. And so I included it in full in the episode, and people were blown away. That must be one of my most-commented posts of the year, with a large number of people reaching out and saying, wow.

I hadn't heard of this or I hadn't used this and now I have used it and it blew me away. Here's what I tried. And so, yeah, that was really cool. I think it was a wow moment for a lot of people. Yeah, I'll add one sub-wow moment in there which may not get talked about as well. Sub-wow.

I hope we have a sound effect for that too. Sub-wow. Or maybe it's its own sound effect. But I recently got a Tesla, and the full self-driving on that is incredible. I was just blown away, because as a kid, you know, my mom was like, hey, you really need to learn to drive and do all these things.

And I told her, one day I will have somebody who drives me around. I did not think it would be a robot and full self-driving, but here we are today. And so to have childhood memories of saying something and then to be living it today is truly incredible. That also reminds me, I've got to add my sub-wow moment, which is Waymo. I had my first Waymo experience this Northern Hemisphere summer.

And that was really cool. Because I think that's another level of autonomy beyond Tesla's full self-driving, right? With Tesla's full self-driving, you need to have somebody sitting behind the wheel. But with the Waymos now in San Francisco, and at the time of recording also in Scottsdale, Arizona, I think, you can just use the Waymo app and a driverless car comes, picks you up, you get in it, and it drops you off.

That's also... you know, I almost want to make that my biggest wow moment of the year. I don't know how I didn't think of it right off the bat, because of that physical presence. I also come back to the Waymo example a lot when people find out I work in AI, and a lot of people

completely outside of AI will say things like, oh, that's contentious. And I'm like, really? I wasn't aware it was so contentious. And they're like, well, yeah, I'm a creative, or I have lots of friends who are creatives. And okay, yeah, I can see why it seems contentious to them. But for me, I guess I'm so often seeing big changes and benefits. The Waymo example is one that I come to frequently to say:

As opposed to something that's happening on your computer screen, this is a physical, very obvious manifestation of AI. When you experience it, when you call a Waymo car, get into it, and it drops you off somewhere, and you see the steering wheel spinning all on its own, making great driving decisions, that makes it clear that

In the future, in the not too distant future, we don't need drivers. We don't need human drivers. And in the United States, in most states, the number one job is truck driver.

And there's tons of related jobs that support the truck driver. You know, people working in cafes along the roadside and that kind of thing. You don't need that. Self-driving cars don't need cafes or motels. And so it's going to make a really big impact. And...

And given the upheaval that will be caused by this, there are things that we need to be doing as a society in terms of retraining people, because this AI shift should end up being just like all other automations in the past: it should provide people with more interesting work than ever before. And I mean, there's talk about this time being different,

but all past increases in automation have led to more employment and lower unemployment. So I don't know. I've touched on a lot of topics there, and I haven't let you speak for a long time. So, Sadie? No, I was really lucky this year to hear one of the co-CEOs of Waymo talk. And one of the things that she said was, we are building the single best driver.

And I found that really interesting, because she talked about how they have a fleet of over 100,000 cars out driving. But she talked about it as one driver. They talk about it as a single brain, a single driver brain, and they're only building one. And that

just resonated with me so much because it really gives us perspective of what intelligence and machine intelligence can do at scale, right? You only need to build one of the best single drivers and you can change a whole industry. And so I think that's something just to think about, like get really specific in the models that you're building and the domain that you're building, because when you do that at scale, it's incredible.

Agreed. That's a really nice way of thinking about it. I hadn't heard it phrased that way before. But now that's a great kind of meme to be thinking about industry by industry. How can I create a brain for some specific task that can be outstanding at that specific task? And then because it's software, you can just replicate it as much as you like. Software update to the fleet of 100,000 cars, transform the industry.

Yeah. Cool. All right. So then that leaves us just with overall winner still to select. And do you want to go first or do you want me to go first? I feel like this was all your idea. And so, you know, maybe you should go first on this one. Okay. So my overall winner for 2024 is open source and particularly...

I think just us as individuals. But, okay, I could choose Meta then, and Meta's Llama. Oh, wow. Particularly if you want me to get specific. But I really think it's us as consumers. I think we are going to be the winners of AI because of open source. I'm seeing more and more startups entering the market, and just the capabilities of the models that are available. The costs are getting cheaper and cheaper, and

so it's the open source community. We saw that also from a legal perspective, with some of the bills that were passed in California and got vetoed, which was really great for the open source community. So that's why I say open source in general; from a company perspective, what Meta's done with Llama; and from an individual perspective, I think it's the consumer who's really going to win from all of this.

Wow. Nicely done. Nicely done. You went very lowercase m meta with your answer. I just picked a company. Okay. Which is Anthropic.

I had a feeling this was going to be yours. Please tell me the love for Anthropic and why it's your overall winner. Yeah. Anyone who knows me knows that I'm a big Claude fan. I use it the most. I subscribe to Gemini. I subscribe to ChatGPT Plus. I subscribe to You.com. But I end up, at least 80% of the time,

using Claude as my weapon of choice. I don't end up needing the kind of O1 capabilities on a lot of the tasks that I'm doing. Probably that's related to the tasks I'm doing being

relatively common kinds of coding tasks related to neural networks. And on that kind of thing, Claude 3.5 Sonnet does an outstanding job debugging things. I don't need all the power of O1. I just get an answer quickly from Claude. Same kind of thing with things related to the podcast: summarizing episodes for me, transcribing things. Claude just does an amazing job. It does the best job of any of the

private LLMs that I'm regularly using. There are still places where I use Gemini. If I have very large files, I think Gemini is great. I haven't used Gemini 2.0 that much. Maybe if I'd been using it a lot more in recent days, I'd say, wow, this can replace Claude for me. But I also love...

the UI of Claude. I think they've done a really good job of it. It has this kind of friendliness and warmth to me that I don't get from any of the other tools that I subscribe to. All the other ones seem like they're designed to look really futuristic, but it creates this kind of coldness, whereas somehow...

Yeah, Claude manages to give me the warm fuzzies. That's the first time I've heard a comparison of what tool makes you feel better. I like that. We can maybe add that to one of the winning awards next year. I'm curious, though, because you didn't mention Perplexity. Do you use Perplexity? Is that in your repertoire? I don't use Perplexity much. That's something that I have been using a lot. I love Perplexity. Oh, yeah?

Maybe try that, and tell me if it gives you any more fuzzies; I'm curious. It will be hard to replace Claude. But yeah, Perplexity is something that, in the past two months, I've really started to dive into a lot more. I love the references, and I've pretty much replaced search with Perplexity now. Oh, there you go.

Christmas shopping. I wasn't able to find a product that I needed to buy for someone; they were all sold out. Thanks to Perplexity, I was able to find it, snag it, and have a very happy Christmas. Cool. Yeah, I think I've ended up using You.com for some of those same kinds of agentic tasks. But I've only done a few searches in Perplexity; I don't know why that ends up happening. It's definitely a blind spot that I've felt on other podcast episodes in the past.

So I'll have to spend some more time with that. All right. So we have been...

This episode is all about forecasting the future, but like your planner, so far we've been looking back at 2024. I think that sets the stage for the predictions we are making for 2025. And it also gives you a sense of how reliable we are as crystal-ball readers for the forthcoming year. And largely good, largely good. Largely, you can trust us based on our performance in 2024 and in previous years, which you can go back and listen to if you want to.

So for 2025, the big number one topic that you highlighted, Sadie, before we started recording, and I agree with 100%, is

agentic AI. There's no question. I could speak for a long time about this, but you're the guest, so I'm going to just let you go first, please. Yes. I mean, this has started, in what I call the fall conference season, to boil up as a key topic. I think it's going to take over: all conferences will be about agentic AI next year. But more importantly, look at the number of startups in this space. I think there's over 600

as of today. And who knows if that's even an accurate count. But it's really the next step, and it's what I think all of us as consumers are also looking for:

We're looking for more of those autonomous agents, where we don't want to be copying and pasting between different applications. We want them to just go on our behalf. And there was, I don't know if it was a meme or an actual article from this year, where a woman said, you know, we got AI, and

it creates things, but I was hoping it would fold my laundry or do kind of basic tasks, right? And that to me is really where the need for agentic AI comes in. Folding your laundry is more of a robotic task, but in particular, we want AI now to leave the confines

of an application and go do actual tasks for us autonomously, on our behalf. I think trust in AI has built up to the point that we as humans are ready to maybe unleash it to that next step. We see a little bit of that with things like computer use, but more importantly,

I think we're going to see a lot of companies who, just like Waymo, are building a specialized brain for a particular application, one that will be the best ever social media marketer, one that will be the best financial analyst. And so we'll have these specialized models who will be able to truly be agents on our behalf and take autonomous steps. Yeah, it's going to be absolutely transformative. I think to go into a bit more detail on that,

on that viral post that you were mentioning there. The woman who wrote that post was saying, you know, I wanted automation to take away the mundane tasks from me, like you said, folding laundry, and leave the creative tasks, like video production and artistic endeavors, to me. But instead, AI has taken those away and I'm left folding the laundry.

But agentic AI systems are a step on the way to having, even in the real world, more Waymo-like physical embodiments making changes and being able to fold your laundry for you.

you know, that is coming. And it seems like, now, in our lifetimes for sure, we will have machines that can do this kind of stuff. And hopefully most people, if not everyone, will be able to afford machines like that. These should be widely available, not something that's just for some small percentage of the planet.

And I recently came back, at the time of recording, from NeurIPS, Neural Information Processing Systems. This is the 38th year it's been running, and it's, I think, safe to say the most prestigious academic AI conference out there. And at this conference...

The agentic AI trainings and workshops: there was so much interest in some of them that there was a crowd of people outside the room, unable to squeeze into the standing room. And that gives you a sense of how popular this topic is. Fei-Fei Li gave one of the keynotes at NeurIPS this year, and she talked a lot about how her company, World Labs,

is creating sophisticated datasets that will allow agents to explore 3D visual worlds. Right now, visual agents are becoming pretty good at recognizing two-dimensional images, but that doesn't necessarily make them great at exploring the real world and being able to fold your laundry. So, yeah, really exciting things are happening in agentic AI.

I'm completely smitten with agentic AI, and we've been doing podcast episodes on it a fair bit recently. We have many more planned for early 2025 with agentic AI experts. So look out for those.

Yeah, and I'm going to be creating a half-day or full-day agentic AI hands-on training for ODSC East, the Open Data Science Conference East, in Boston this spring. I think it's a fast-moving space, but it's also unquestionably where these things are moving. And it's a testament to

how far along we've come with LLMs and their ability to be accurate. Because if you're going to have AI agents going off and doing tasks autonomously that you have assigned them, or a kind of master AI agent that's spinning up a bunch of slaves, that's a really rough word to be using. There's got to be a better one. But in computer science, that is often the word that's used. But sub-processes, sub-agents,

Sub-wow agents. Sub-wow, there we go. In order for any of that to work effectively, you need to have accuracy. If your LLM is hallucinating 10% of the time, that's an extremely large amount if you're going to have a large number of processes running.
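Jon's point about error rates compounding across a chain of autonomous steps is easy to quantify. A minimal sketch, assuming each step fails independently; the 10% and 1% figures come from the conversation, while the 20-step workflow size is a hypothetical example:

```python
# Probability that a multi-step agentic workflow completes with no errors,
# assuming each step hallucinates/fails independently with probability p.
def chain_success_rate(p: float, steps: int) -> float:
    return (1 - p) ** steps

for p in (0.10, 0.01, 0.001):
    rate = chain_success_rate(p, steps=20)
    print(f"per-step error {p:.1%}: 20-step workflow succeeds {rate:.1%} of the time")
```

At a 10% per-step error rate, a 20-step workflow completes cleanly only about 12% of the time; at 1% it's about 82%, and at 0.1% about 98%, which is why accuracy gains in LLMs matter so much for agentic systems.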

But, you know, if you get to a 1% or a fraction of 1% error rate, and with things like O1 from OpenAI being able to check their work step by step,

Yeah, this evolution in LLMs, the way that they've advanced, allows us now to be in this agentic AI moment that we're in. I'll just add one bit of caution on this one, where I can see it not manifesting in the way that we may hope for and fully imagine, which is

I think we have a lot to figure out in permissions and access. Just because your agent can go and do something, and do it accurately: we switch quite frequently between applications on our computers, and giving an agent that automatic permission may not be so much an issue from a human perspective as an issue of companies playing nicely with each other. And so I think that will

be really interesting to figure out. Okay, will Apple give Microsoft access? They've done it with having Outlook as one of your mail applications, but what does that look like once different agents are released onto that platform? So I'm curious to see how that will all work out this next year. Nicely said. Yeah. I mean, this is kind of the wall that we run into with LLMs or agentic AI frameworks

being able to be effective is around data security and privacy. You know, it would be ideal to be able to take a whole giant enterprise and say, okay, agentic AI system, here's all the information. But of course, if you do that, you're opening it up to abuse. Because then, all of a sudden, someone in the software engineering department can write an agent that's like, provide me with the pay packages of everyone in the company. Yeah.

Exactly. Or who's to say some other agent can't come in and fool your current agent into thinking it's a helper? I mean, there are so many ways this could go wrong. But I think the difficulty of the task we're facing right now is really on the security and privacy side of things. Nice. All right. So that's number one: agentic AI is our big prediction for 2025. I feel very safe in this prediction.

Seems like a layup. Number two from you is AI integration into everyday devices. So this is a higher-risk one from you, I guess, similar to your LLM OS prediction for 2024. Mm-hmm.

But yeah, tell us about your idea here with these everyday devices. Give us some examples. Yes, I think this is a playoff of the LLM operating system, and maybe more of a stepping stone to get there. But where I'm seeing this go is, recently I got the Meta glasses. I particularly just like buying different

tools and applications just to test things out, and I was really impressed with them overall. I'm going to Mexico next week, and I really want to test out the real-time translation. So I think we're going to start to see AI streamlined into more and more of our devices.

We talked about our disappointment with Apple Intelligence, but again, I think that they have so much cash flow. They have access to so much talent. We could see them make a comeback next year. And then Microsoft this year released what they called their AI computer. I know they took it off the market for a little bit because people were concerned about the privacy of it, but it was essentially taking snapshots of your screen, right?

throughout your workday, giving you recommendations, and starting to train itself to be your assistant. Again, back to the privacy and security concerns, they took it off the market, but I think next year we'll have something figured out, and we'll see AI woven more seamlessly into everyday devices than we have today.

On the last one there, I will say I do think some of this weaving into products will not always be smart. When I went to Costco this year and saw AI on a toothbrush, an electric toothbrush, that's how you knew it had gone too far. So I didn't say it will all be positive, but we will see it continue to pop up in our everyday devices. Yeah, where I was staying in Vancouver, I would walk from the Airbnb that I was in, in Kitsilano, a beautiful area of Vancouver,

and walk from my Airbnb to F45, which I'd never done before. Have you ever worked out at F45? No, I've walked by one, but I've never walked in one. It was pretty cool. You know, Sadie and I both do CrossFit, and F45 doesn't have barbells; you don't spend a lot of time on technique. But in 45 minutes, you do get a good mix of cardio and strength work.

Yeah, it was an interesting experience for a week. But anyway, when I was walking between my Airbnb and F45, every day I would walk past this golf pro shop, and all of the posters in this golf pro shop were for drivers that had AI in the name. Things like Ai Smoke. And there isn't like a... Wait, golf drivers with AI in them?

Honestly, I didn't look into this. My assumption was that there isn't AI in the club itself. I mean, I don't understand how that could be.

But my assumption is that somehow AI is involved in the design of the club in some way. That's got to be it. Maybe there's some tracking on it, with an app that you upload data to. Maybe that is what it is. I don't know. As listeners can hear, neither of us sound like we're into golf enough to help us out here. Some of the audience will have to give us some help on this one.

Yeah, please leave us comments on social media and let us know whether there are computer chips in golf clubs now. It would not be shocking if there were. Like you say, they're in toothbrushes. They will be increasingly all around. They'll be in your brain soon. Well, yeah, that is happening more too. So agentic AI, number one. AI integration into everyday devices, number two. Number three, I love this one.

You see AI-driven scientific research and innovation becoming a much bigger thing. I absolutely agree. I'm going to quickly preempt what you're going to say with a couple of recent episodes that I released on this topic. So number 812 was on this Japanese company that's full of Google DeepMind alums called Sakana.

S-A-K-A-N-A, they released this AI Scientist paper, which at that time was able to draft papers specifically on machine learning. And they were planning on spreading to other industries as well. It was able to write papers: come up with ideas for papers, run the experiments,

get the results, and write up the results independently, and it did a pretty good job. And this kind of thing will only get better, especially in the kind of environment where the AI system can actually be doing experiments. And I think we will see more. I don't know if we'll see big examples of this prominently in 2025, but it is definitely the future:

big pharmaceutical companies or big energy companies will allow AI systems to run physical labs. And so for episode 812, for this AI Scientist, the reason they stuck with machine learning problems is that those experiments could be run in silico; they could be run on compute hardware. But in the not-too-distant future, as I'm sure you're about to say, Sadie,

these AI systems will control physical labs that can also do physical experiments. And so science will accelerate because of that; the machines don't need to sleep. And as talked about in episode number 835 with You.com's co-founder and CTO, Bryan McCann, he talked a lot in the episode about how

scientific discovery will be... AI systems will be able to do kinds of scientific discovery that a human never could, because AI systems are trained on all knowledge, and no human scientist is an expert across all domains. Anyway, hopefully I didn't take too much air out of your AI scientist balloon, Sadie. No, I think it just shows how much excitement we both have for this area. And we saw just...

Yeah.

And being a doctor, yes, you go through lengthy years of medical school, but keeping up with all of the new drug discoveries, all of the new scientific discoveries, there's just not enough time. And so I'm really excited for this from the medical side, in terms of not only new drug developments and new diagnoses, but also that support for doctors in the healthcare system.

And then just in terms of new inventions that haven't even been created yet. And so there was a paper that came out this fall from MIT, about artificial intelligence, scientific discovery, and product innovation. And what it found was that AI-assisted researchers discovered 44% more new materials, filed 39% more patents, and had a 17% rise in new product prototypes. Wow.

So just really, truly incredible results. And I think this adds to the utopian future that I know you often talk about, and that I hope to actively build with AI, particularly in the realms of science, medicine, and innovation. Yeah, but speaking of the dystopian aspects of these kinds of things, I think that same study...

showed that use of tools like ChatGPT to accelerate research also led to increases in job dissatisfaction for the scientists. Yeah, okay, so this is so fascinating to me, because job satisfaction definitely depends on the job type. Looking at knowledge workers, there were two studies, one from Slack and then one from Microsoft.

Knowledge workers love using AI for their day-to-day jobs, whereas scientists and, I would say, more specialized individuals feel that it's taking away the joy of what they were doing. And I find this so fascinating, because my hypothesis is that

As a scientist or as a doctor, you spend many years going to school to have a particular knowledge, and then you feel like you're being replaced. Whereas my hypothesis for knowledge workers is,

you're just trying to get a job. Maybe it's not what you majored in. You don't really like your boss. You're just trying to get by. If AI comes in and does part of that job for you, you're happier. And so that's my hypothesis behind it. But if someone tests this hypothesis in a study, please let me know. That sounds reasonable to me.

All right. So we've done three of our five predictions for 2025: agentic AI, AI integration into everyday devices, and, just now, number three, AI-driven scientific research. Number four is enterprise AI monetization. Sadie, tell us about that one.

Yeah, so this year we talked about new competitors entering the market with compute. We also, I think, saw some investment in nuclear and in energy. I think we're going to continue to see more of that next year. But I think we'll start this year to see...

What is that investment turning into? And so for me, what I'm really watching keenly next year is xAI, Elon Musk's AI company, kind of a subsidiary of X, and the supercomputer that they built, where they broke the coherence bottleneck. And I think this is going to be really our first test of

the scaling laws fully: whether we're actually going to break through them, be constrained by them, or have another major breakthrough here. And so this year we saw a ton of investment. I think Mark Zuckerberg talked about how he had bought something like

600,000 GPUs from NVIDIA. You'll have to correct me if I'm wrong on the exact number, but I mean, people were buying so much. I'm ready to see the return on this investment. And I think we're going to continue to see more investment in infrastructure this next year, particularly in energy and in nuclear.

Fact-checking you. Meta plans to have nearly 600,000 NVIDIA H100 GPUs by the end of 2024. Says the AI overview provided automatically to me by Google Gemini. As of March 2024, Meta had 350,000 H100s. Again, yeah, so that is...

Yeah, it's interesting. These quotes are all coming from the beginning of the year. It was a January 2024 article where Mark was saying that, by the end of 2024, he expects them to be running 600,000 H100 GPU equivalents, which is interesting because it doesn't necessarily mean NVIDIA H100s. It could be something like an AWS Trainium 2 chip, that kind of thing. But something of that nature,

roughly that capability. Yeah, because it's my understanding that xAI's 100,000-GPU AI cluster is the largest that has been built to date. I think it could be that Meta's are distributed across different programs and different geographies, whereas the xAI cluster is one single cluster training one LLM. Yes.

Nice. It is interesting, this enterprise AI monetization topic, Sadie, because obviously as we spend these huge amounts of money on hardware, on talent, it means that the impacts on the bottom line need to be substantial for the project to ultimately be profitable. And this is something that we've talked a lot about on this podcast recently with guests. So we had a great one

episode number 843 with the CEO of a company called Protopia, Eiman Ebrahimi. And he talked a lot about these trade-offs between profitability and security, and how so many AI projects end up getting stuck in proof-of-concept purgatory. Because you can quickly spin up, using

Claude 3.5 Sonnet or OpenAI O1, some cutting-edge LLM capability, a Jupyter notebook or some kind of simple Gradio application, within a few hours or over a weekend, that does amazing things.

But then you bring that to your manager, to a VP at your company, and you say, look at this amazing tool I built. And then you start to get into the nitty-gritty of, okay, but how will we make this performant and cost-effective in production while also ensuring data security? Enterprise AI monetization is a tricky point. And it also means there's a lot of opportunity for people out there who are listening

to specialize in thinking about these kinds of problems because as it becomes easier and easier to use an LLM to spin up the code that you're going to use or just use an existing API to provide all of the intricate AI functionality of your platform,

It's these kinds of questions, like how can I do this profitably for me or my business, that will allow you to really get ahead in data science. Yes. As much as data scientists are numbers people, I unfortunately feel the financial numbers are the ones that often get forgotten when they build these models. And not just the cost to actually

put the model into production and continue to maintain it, but what is the return on investment? And so if anybody wants to get ahead in their career, just include a small financial model with your proposal, and I think you'll get through the red tape a lot faster. For sure. And if you're looking for an episode we did this year

that digs a lot into that idea of profitable AI projects, it's number 781 with Sol Rashidi. Sadie, do you know Sol? Yes, I do. She's been on the Data Bytes podcast, and I've met her at a couple of conferences before. She's great. She is really great. I hope to have her on the show again soon. That episode ended way too quickly, and I had so much more to talk to her about. Really well reviewed. So yeah, Sol Rashidi.

Maybe we'll have her back on the show in 2025. All right. But speaking of episode length, we are reaching the final, the fifth and final prediction for 2025. So this one is maybe controversial. We'll end up seeing what happens here. I wanted to touch on with one of these predictions how...
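The small financial model Sadie recommends attaching to a proposal can be just a few lines. A hypothetical sketch, where every figure is an invented placeholder to be replaced with your own estimates:

```python
# Back-of-the-envelope ROI model for a proposed LLM-powered feature.
# All numbers below are hypothetical placeholders, not real benchmarks.
monthly_requests = 50_000        # expected usage volume
cost_per_request = 0.02          # inference + infrastructure, in dollars
hours_saved_per_request = 0.05   # analyst time the feature replaces
loaded_hourly_rate = 60.0        # fully loaded cost of an analyst hour
maintenance_per_month = 3_000    # monitoring, evals, prompt upkeep

monthly_cost = monthly_requests * cost_per_request + maintenance_per_month
monthly_value = monthly_requests * hours_saved_per_request * loaded_hourly_rate
roi = (monthly_value - monthly_cost) / monthly_cost
print(f"cost ${monthly_cost:,.0f}/mo, value ${monthly_value:,.0f}/mo, ROI {roi:.0%}")
```

Even a rough calculation like this, presented alongside a demo, answers the return-on-investment question before a VP has to ask it.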

scientists or people working in AI are going to be impacted by all these innovations in 2025. And we touched on something there. We already talked,

just at the end of number four on enterprise AI monetization, about how helpful it can be, like you said, to include a financial model as part of your proposal; that is probably going to cut through red tape a lot more quickly. So that's great advice. So financial savviness can be one thing. But in terms of the quote-unquote hard skills of data science or AI, I expect that in 2025, the demand

from the market, from employers, for AI engineering skills will surpass the demand for data science skills. And when I say data science skills, I mean...

I guess everything else a data scientist has to do other than AI engineering. AI engineering includes things like using existing third-party APIs, building prototypes, linking a bunch of APIs together, maybe downloading open-source model weights and fine-tuning them, or even fine-tuning a closed-source model, which is something that you can do. I think that the demand for all those kinds of AI engineering skills will surpass the demand for

all other data science skills put together. Now, that doesn't mean that all those other data science skills aren't valuable. You know, everything from data visualization to data communication, which are relatively soft skills, and even harder skills like statistics and other kinds of machine learning approaches, regression models: none of that stuff is going away. But in terms of what the market is asking for,

I think we're going to see a surplus of demand for AI engineering skills specifically. And so that seems clearly to me to be an area to be leaning into as a listener to this show: add those skills as a complement to whatever existing data science skills you have.

You could actually even listen to our most recent Tuesday episode, which was number 847 with Ed Donner, in which he talks about how you can become an AI engineer. And you could jump right to that without necessarily developing other data science skills first.

But I think the most powerful way of approaching this is to have the AI engineering skills that Ed talked about in detail in episode 847 and to use those to complement whatever existing skill set you have. Yes, I mean, to add on to what you said, I would say I think it's

really a consolidation of roles, with additional skills added on. If you signed up for any career in technology, you signed up for a lifelong learning approach, and so it's just going to continue to expand. Just as you mentioned, it's not that your data visualization skills are going to go away. Not at all. Your Python, your SQL, none of that's going to go away. It's just going to be a continual evolution and building onto it.

Something that came to mind as you were sharing this: I entered the corporate world in 2014, and at that time, moving to the cloud was a big deal. I remember meeting people on the IT teams who were like, yes, I used to manage these servers and these products, and they were nervous about moving to the cloud because they thought there was going to be less to manage.

But what I came to find out is that just the shape of their job changed, right? Instead of physically managing servers, you were managing permissions within the cloud. And I see it somewhat similarly for the role of data science as well. Maybe you're not

training a model from scratch anymore, but fine-tuning a model and building onto it, really looking at how it integrates into your existing products and into your pipeline. And so while it may seem like your job is changing or evolving, you're probably actually going to end up doing more, taking on more roles and a higher-level approach to orchestrating how it's done. Nicely articulated, Sadie. Yeah, spot on.

I think that rounds out where we are. Oh, I have one last thing to add to this number five; I just consulted my notes here quickly. Something that's really annoying for me has happened as a result of people integrating LLMs into everything they're doing and being able to do more with less thanks to AI: on LinkedIn now, on every post I make,

I get several completely banal comments obviously written by generative AI. And if I believed there was anything to what was written there, okay; let's say you have some thought that you want to express about some post on LinkedIn or some social media platform,

and you wanted to clean up the language around it a bit. Great. But so often with these comments, there's just no meaningful opinion in there. There's no soul to it. There's no soul to it; I completely agree. I don't even know how people do it. Maybe there's some extra premium subscription that I'm not on, because if you're taking the time to copy and paste, it's...

But yes, the soul of it is missing. I mean, my whole thing is that misspellings are the new CAPTCHAs, right? Like actually, you know, throw in some misspellings, throw in some weird punctuation. Just make it real and raw. And particularly if you're commenting on Jon's posts, make sure that it has some soul in it, and no more AI comments on that side. Mm-hmm.

Yeah, so that is my conclusion for 2024: if you're out there doing that, please reconsider. You're just filling us humans with noise. Well, Sadie, thanks so much again, for the fourth year running, for making these predictions with us. I love how we spent more time on the retrospective stuff this year. I think it colored our predictions a bit more and was maybe a bit of fun for listeners in terms of what they think about as

the overall winners this year, the comeback of the year, the wow moment, the disappointment of the year. I would love to hear from all of you in the comments, with your human-typed comments, or even maybe Gen AI-augmented ones, but with your real opinion at the soul

of that comment on this episode to hear what you think about us doing the format this way and, you know, what your predictions are for 2025. Sadie and I would love to hear those and I'm sure we'll both comment in reply using our fingers on a keyboard. Or some voice to text because that always gets real messy and fun, you know. I can't 100% guarantee it will be fingers on a keyboard, but it will at least be from my soul to your soul. Nice.

All right. So yeah, Sadie, anything else you'd like to add before we wrap today's episode? Well, I'm not going to ask you for a book recommendation, because we do this every year, unless you have one this year, you know, a book that people need to know about, something that touched you in 2024. Ooh, books that touched. You know, I wouldn't say there was a book that touched me this year. I think my only comment and thought to wrap this up is,

one, just how lucky we all are to live in this time and to work in this space. And maybe you're trying to get a job in this space, but the fact that you're even listening to this podcast shows that you have a deeper interest, a deeper knowledge, and that's a super lucky position to be in. And I hope that, with all of the awesome things that we talked about in this show, people go

take advantage of them, use these tools, get inspired, and really create that utopian future that you and I hope for, dream of, and can see. Because to me, it's just such an exciting time to be alive. I'm so excited for this year ahead of us. I think it's going to be a really transformative year, and I can't wait to connect with everybody and see what we got right, see what we got wrong, but most importantly, make adjustments along the way.

For sure. Nicely said, Sadie. And so for people who want to listen to you between this predictions episode and the one coming in about 365 days, Sadie, how should people follow you or connect with you? Yes. You know, social media is great. I do love LinkedIn, X, Instagram, all of the things; I'm a big fan. I do have a Substack as well, where I share out kind of what happened on a monthly basis. And I'm

happy to connect in all the common ways. Nice. I think when we did our very first predictions episode, or one of my first episodes with you on this podcast, your Instagram had been hacked or something like that, but now you're back on track. I'm back and better than ever. Yes. I had to start from scratch again. I had a viral post last year that gave me lots of followers, because it had a typo in it that said I married my dog.

So there you go. This is why you still want to write your posts yourself, because those typos can turn into your biggest win. What had you really done? Carried your dog? No, I think I had added something; I don't even remember now. It was like, I got married, and something about having a cute dog. Anyways, my marriage and my dog got intertwined, and through some editing, unfortunately, they came together. That's funny.

People are still in the comments arguing, did she marry her dog or not? So you can join in on the debate. All right. Sadie, thank you so much for taking the time again this year. I know how valuable your time is and I'm always appreciative of it. Thank you so much and we'll catch you again next year. Sounds great. Thanks, everybody.

What an exciting year we have ahead in 2025 indeed. In today's episode, Sadie predicted first that agentic AI will be the dominant trend moving beyond single applications to create specialized networks that can autonomously handle complex tasks, though security and permissions between platforms remain a challenge.

Her second prediction was that AI integration into everyday devices will accelerate, from augmented reality glasses with real-time translation to more sophisticated personal computing experiences, though not all integrations will prove valuable.

Her third was that AI-driven scientific research will expand significantly, building on current successes where AI-assisted researchers achieved 44% more new-material discoveries and 39% more patents than researchers who weren't AI-assisted. And Sadie's fourth is that enterprise AI monetization will be crucial as companies seek returns on massive hardware investments.

And finally, our fifth prediction for 2025 is that demand for AI engineering skills will surpass traditional data science skills, though this represents an evolution of the role rather than a replacement, requiring practitioners to build on existing technical foundations with new AI engineering capabilities.

As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Sadie's social media profiles, as well as my own at superdatascience.com slash 849.

Thanks, of course, to everyone on the Super Data Science podcast team for their work all year long. That includes our podcast manager, Sonja Brajovic, our media editor, Mario Pombo, our partnerships manager, Natalie Ziajski, our researcher, Serg Masís, our writers, Dr. Zara Karschay and Silvia Ogweng, and our founder, Kirill Eremenko. Thanks to all of them for

producing another visionary episode for us today to wind down the year. All right, if you enjoyed this episode, please share it with people who you think might like it. Review the episode, of course, on your favorite podcasting app or on YouTube. Subscribe if you're not a subscriber. But most importantly, I just hope you'll keep on tuning in. Thanks for listening in 2024, and I hope to see you a bunch more in 2025. I'm so grateful to have you listening.

And I hope I can continue to make episodes you love for years and years more to come. Until next time, keep on rocking it out there. And I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.