EP 438: AI News That Matters - January 13th, 2025

2025/1/13

Everyday AI Podcast – An AI and ChatGPT Podcast

People
Jordan Wilson
An experienced digital strategist and host of the Everyday AI podcast, focused on helping everyday people advance their careers with AI.
Topics
Jordan Wilson: I'll discuss the impressive AI products NVIDIA announced at CES, including the new RTX 50 series GPUs, the Project Digits personal AI supercomputer, and the Cosmos and Nemotron model families. These products will dramatically change the way we work, making powerful AI compute available locally and advancing enterprise-grade AI solutions. I'll also analyze how Microsoft and Google are adjusting their AI strategies: Microsoft released the open source Phi-4 model while planning layoffs, reflecting a shift in how tech companies approach AI investment and talent competition, while Google DeepMind is assembling a new AI research team focused on developing world models, aiming for a breakthrough toward AGI. Finally, the news that OpenAI is rebuilding its robotics division is worth watching, marking a new stage for AI applications in robotics. OpenAI also released a document stressing that the U.S. needs investment and regulation to maintain its lead in AI.

Transcript

This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.

Is Microsoft laying off thousands of people because of AI? How did one small box from NVIDIA change the future of work? What are Google's big AI shakeups that they just announced? And why is OpenAI getting into humanoid robots? Y'all, so many AI questions that might be swirling around in your head right now. We've got your AI answers. So,

Every week, every day, you could spend hours trying to answer these questions, right? And more importantly, how they impact your company, your career, your growth. Don't do that. Don't spend hours a day trying to keep up, trying to worry about it. Join us. This is what we do. Well, we do it every day, but on Mondays, we bring you the AI news that matters.

What's going on, y'all? My name is Jordan Wilson, and welcome to Everyday AI. This is your daily podcast live stream and free daily newsletter helping everyday people like you and me not just keep up with AI, but how we can be the smartest person in AI at your company and grow your company and your career. So if that's kind of what you're doing, right, trying to keep up, trying to get ahead, trying to be the smart person in AI at your company, welcome. This is your new home.

All right. So if you haven't already, please go to youreverydayai.com. There you can sign up for our free daily newsletter. In that daily newsletter, we not only recap each and every podcast slash live stream every single Monday through Friday, but we also break down everything that you need to know that's going on in the world of AI in that newsletter. Also on our website, if you didn't already know, there's like 430 some podcasts.

back episodes that you can listen to from some of the smartest people in AI, all sorted by category. So if you care about legal, tech, enterprise, marketing, whatever, go click that category and you can go listen to the world's leading experts dish it all out for free on our website. All right. So I'm excited to announce this though, before we get into the AI news that matters. Next week. All right. So starting January 20th, all week,

We got something special for you. So we have our 2025 AI Predictions and Roadmap series. So five straight days, five banger episodes. Y'all, I can't tell you how excited I am for this. So I did this last year. I did it in December, actually, with kind of our 2024 AI Predictions.

And I was still getting people reaching out in October, November, nine to 10 months after the episodes saying how helpful it was. So what we decided to do was go even more in depth, double down on not just predictions, right? That's not necessarily what this is about, but really the roadmap and how you can plan and make use of all of these predictions. So we're going to be doing 25,

of these, just five a day, quick shows. We're going to keep them short. They're not going to be an hour long, for those of you that, you know, are out walking your dog or on the treadmill listening to this show. So they should be fast and action-packed. I cannot wait. So mark your calendars right now. If you're a listener of the podcast, make sure you listen to every single one. So that's next week, starting Monday, January 25th... or sorry, Monday, January 20th,

through Friday, January 24th. Keep an eye out for that. All right, enough chit-chat. Let's get into the AI news that matters for January 13th for this week. And hey, live stream audience, thank you all for tuning in. Michael joining on YouTube, Brian, Jackie, Christopher, Zofia, Gene, Joe, Michael, big crew today, Tim. Let's go, Michael says. All right, let's do it. So

NVIDIA, NVIDIA, NVIDIA. All right. Going to have a lot of that today. So while OpenAI and Google had a straight-up slugfest in December, that ultimately is going to change the way we work,

NVIDIA said, all right, y'all, hold my AI beer. And NVIDIA straight up went to town during their CEO Jensen Huang's keynote address at CES, the Consumer Electronics Show,

in Las Vegas. So this now seems like old news, but it happened on Monday night after our last AI News That Matters. So we've been covering a lot of this in our newsletter, but we have to bring it to you on our AI News That Matters show because it's a lot of AI news that matters.

Some a little dorky, some not so much. So first, let's talk GPUs, all right? This is, if you didn't know, GPUs are what make everything go round in this AI world, right? So the big companies, right, you know, the ones running your AI chatbot or running whatever AI your company's using, they're using GPUs. And then

People locally, right? Like if you want to run local models, if you want to have the best tech, the best hardware on your computer to take advantage of this AI, you probably need one of NVIDIA's GPUs actually. So NVIDIA unveiled its highly anticipated RTX 50 series GPUs at CES.

So the new lineup includes the RTX 5090, RTX 5080, the 5070, and prices starting at $549 for the base new GPU.

All right, so these GPUs boast pretty impressive speed increases, promising to double the performance of their predecessor thanks to some updated technology like DLSS4. So the series introduces a more compact design, making high-performance GPUs accessible also for smaller PCs.

So NVIDIA is extending its RTX 50 series to laptops. Yes, you don't have to have a big at-home box. And availability for that begins in March. So basically it's this: their new $549 RTX 50 series GPU.

$550. Wild. Because it offers pretty much the same performance as their previous NVIDIA 4090 GPU, which was $1,600. All right. You might be wondering, like, okay, how is that even possible? Well, I don't know, NVIDIA's got,

I don't know, a bunch of wizards running around there. But one thing is they're using generated frames now. So this is something that is going to be a growing trend for GPUs and gaming, video, animation, all of those things: generated frames.

Previously, you might have needed more kind of onboard power to generate more frames, but now we're using generative AI to essentially generate the frames in between or to upscale frames. So that's one of the reasons this is even possible, and that we're able to get these hugely powerful GPUs on smaller devices.
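For the dorks following along, here's a toy sketch of the generated-frames idea. To be clear, this is not DLSS 4 itself (NVIDIA's actual model is a proprietary learned network that uses motion vectors); the naive blending function below is just a stand-in to show the control flow of rendering fewer frames and synthesizing the ones in between:

```python
# Toy sketch of "generated frames": render every other frame for real,
# then synthesize the in-between frames. The blend below is a stand-in
# for the learned model; real frame generation (e.g., DLSS 4) uses a
# proprietary neural network plus motion vectors, not linear blending.
import numpy as np

def render_frame(t: float) -> np.ndarray:
    """Stand-in for an expensive GPU render of the scene at time t."""
    return np.full((720, 1280, 3), fill_value=t, dtype=np.float32)

def generate_between(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stand-in for the AI model that synthesizes the in-between frame."""
    return (a + b) / 2.0

rendered = [render_frame(t) for t in (0.0, 2.0, 4.0)]  # render half the frames
output = []
for a, b in zip(rendered, rendered[1:]):
    output.extend([a, generate_between(a, b)])  # cheap synthesized frame
output.append(rendered[-1])
print(len(output), "frames shown from", len(rendered), "actually rendered")
```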

All right, NVIDIA Digits. I'm freaking excited about this. I don't personally have $3,000 to burn, but I might have to find $3,000 to burn, because that's the price tag for the new NVIDIA Project Digits. So this was announced also at CES. This is essentially NVIDIA's first personal AI supercomputer.

So it starts at $3,000 and it's designed to bring AI supercomputing capabilities to the desktop, making it accessible to developers, data scientists, and smaller organizations. So the new device is powered by NVIDIA's GB10, which is its new Grace Blackwell superchip, combining a powerful Blackwell GPU with a 20-core Grace CPU. All right. So essentially a

bunch of crazy hardware tech all in one box. 128 gigs of memory, if you're a dork like me and care about those things. And it also boasts one petaflop of AI performance. Yes, that's a petaflop. We're in a new era,

a new tier of flops, apparently. We're in the petaflop tier now; that's 10 to the 15th AI operations per second. So this essentially allows users to handle complex AI tasks and large language models locally that were previously limited to larger supercomputers. That's the big thing here. So yeah, you put together, as an example, two of these things, because you can chain them, interconnect them, and you can run, as an example, Llama 405B,

right? A 405-billion-parameter model. In theory, on one of these, you could run GPT-4o. Yeah, let me repeat that. On one of these, in theory, you could run GPT-4o, probably one of the world's top two, top three most powerful large language models. Obviously, you can't download a proprietary model like OpenAI's GPT-4o. But here's what this means:

I don't think large language models, especially open source ones, are getting larger. They're actually getting smaller. So there will be no open large language model that you cannot run on this first version of Digits. And this is going to be the worst this thing is ever going to be. In a year or two, it's going to be cheaper. It's going to be more powerful. But that's what this signifies. This changes how we work.
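And if you want the back-of-the-envelope math on that chaining claim, here it is. The 4-bit quantization below is my assumption (that's the kind of setup these "runs 405B on two boxes" claims generally rely on), and you'd still want headroom for the KV cache and activations:

```python
# Rough memory math: can two chained Digits boxes hold Llama 405B?
# Assumes 4-bit quantized weights (0.5 bytes per parameter); you'd still
# want extra headroom for the KV cache and activations.
params = 405e9                # 405 billion parameters
bytes_per_param = 0.5         # 4-bit quantization
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")          # ~203 GB

digits_memory_gb = 128        # unified memory per Digits box
boxes_needed = int(-(-weights_gb // digits_memory_gb))  # ceiling division
print(f"fits across {boxes_needed} chained Digits boxes")  # 2
```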

Here's something important to know as well, that I think a lot of people are overlooking, because right now, yes, Project Digits can function as a standalone Linux-based workstation. So yeah, you can, you know, plug in a keyboard and a mouse and all that stuff,

but it can also connect to a different primary computer, providing additional AI computing power for your current machine. So you don't have to be running on NVIDIA's kind of Linux-powered operating system. You can hook it up to your computer, your company's computers, whatever, and it can run alongside any Windows or Mac computer.

So if you are running it just as its own computer, it runs on NVIDIA's DGX operating system, which like I said, is Linux based, but it can operate and extend your current Mac or PC system. That's the thing that people are kind of overlooking. But y'all think of this. I talk about AI every day. Yeah, I'm a big NVIDIA fan. All right. I told y'all like two years ago,

When hardly anyone had heard of NVIDIA, I said they're the most important company in the world no one's ever heard of. Right. And they've obviously risen to prominence. And at times, you know, they're usually top three. But at times they were the number one biggest company in the world by market cap. Went from relatively unknown to biggest company in the world. I told you all that that was going to happen. All right. It's because every company, every person needs GPUs. The future of work is generative AI, large language models. And you need GPU

power. You need GPUs. That's why this Digits is extremely impressive, right? Livestream audience, anyone going to go drop three grand on this? Michael says NVIDIA needs to send you one. Yeah. Hey, friends at NVIDIA, I know a couple of you are listening. Yeah. You have my address. Go ahead and send me one. I'll review it on the show. But it's wild. How do I even...

do you understand this, right? Two, three, four years ago, no one would have thought this was possible. In theory, it has the capability to run the world's most powerful large language models locally, so then you don't have to worry as much about data security, sending everything to the cloud, or extended inference times, because you're running everything on your own machine, right?

I think even as edge AI or on-device AI was more of a trending topic a year and a half ago, two years ago, maybe we were talking about small language models, right? We were talking about the mini, mini versions. Now we have the capability, at least with open source models that you can download, right? So when we talk about Meta's Llama or NVIDIA's Nemotron, which we're going to be talking about here, we're talking about running it locally. Wow. All right.

More NVIDIA news: Cosmos. So NVIDIA has unveiled Cosmos. All right. So that is designed to enhance capabilities in humanoids, industrial robots, and self-driving cars. So according to NVIDIA, Cosmos is capable of generating images and 3D models of the physical world, unlike language models that focus more on text generation.

Jensen Huang, in his keynote, which, if you haven't listened to it yet, I highly recommend you do if you want to know where AI is heading, was talking about this, demonstrating Cosmos' use in simulating warehouse activities, showcasing its training on 20 million hours of real-world footage. Let me say that again. Cosmos was trained on 20 million hours

of real-world footage. Wow. All right. So the goal of Cosmos is to enable AI to better understand and interact with the physical world more effectively, rather than just creating content. So companies like Agility, Figure AI, Uber, Wayve, and all these others are already employing Cosmos to advance their robotics and autonomous vehicle technology.

NVIDIA also announced some enhancements and updates to its Isaac robot simulation platform, which is essentially a gym or a workout place for AI robots. They go into NVIDIA's cloud and go work out and learn. So that will now help robots better learn tasks more efficiently by generating synthetic data

from limited examples. So the introduction of Cosmos and the updates to Isaac are expected to appeal to businesses aiming to develop and deploy humanoid robots in various settings. Yeah, we're going to be hearing a lot about humanoid robots in the very near future.
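To make that "synthetic data from limited examples" idea concrete, here's a toy sketch. This is the general technique (perturbing a handful of real demonstrations into thousands of randomized training variants), not Isaac's actual API:

```python
# Toy sketch of synthetic data generation from limited examples: jitter
# a few recorded robot trajectories into many plausible training variants.
# This shows the general idea, not NVIDIA Isaac's actual API.
import random

def augment(demo: list[float], n_variants: int = 1000) -> list[list[float]]:
    """Perturb one recorded trajectory into n noisy synthetic copies."""
    variants = []
    for _ in range(n_variants):
        noise = random.uniform(0.01, 0.05)  # randomized conditions per copy
        variants.append([x + random.gauss(0.0, noise) for x in demo])
    return variants

real_demos = [[0.0, 0.1, 0.25, 0.4], [0.0, 0.15, 0.3, 0.5]]  # two real demos
synthetic = [v for demo in real_demos for v in augment(demo)]
print(f"{len(real_demos)} demos -> {len(synthetic)} synthetic trajectories")
```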

Hey, this is Jordan, the host of Everyday AI. I've spent more than a thousand hours inside ChatGPT and I'm sharing all of my secrets in our free Prime Prompt Polish ChatGPT course that's only available to loyal listeners like you. Check out what Mike, a freelance marketer, said about the PPP course. I just got out of Jordan's webinar.

It was incredible, huge value. It's live, so you get your questions answered. I'm pretty stoked on it. It's an incredible resource. Pretty much everything's free. I would gladly pay for a lot of the stuff that Jordan's putting out. So if you're wondering whether you should join the webinar, just make the time to do it. It's totally worth it. Everyone's prompting wrong, and the PPP course fixes that.

If you want access, go to podpp.com. Again, that's podpp.com. Sign up for the free course and start putting ChatGPT to work for you. One last, one last NVIDIA news story, y'all. NVIDIA also launched Nemotron Models.

So they unveiled their new Nemotron family of large language models. All right. So previously, NVIDIA essentially sent out a blog post when they kind of announced or unveiled their one variation of Nemotron, but now it got super official at CES. It's a whole family now. So,

Nemotron, which you're going to be hearing a lot more about, is built on Meta's Llama foundation models. So these large language models are designed to power robust enterprise-grade AI solutions. So Jensen Huang highlighted the transformative potential of these new models in software coding, predicting that AI assistance will become essentially the same thing as coding. So

I'll leave... well, actually, I have another news story on that later, so I won't give too much of my hot take on that now. So the Llama Nemotron models are available in small, medium, and large sizes, suitable for deployment on PCs, edge devices, and cloud environments. So why is NVIDIA doing this? Why are they essentially forking

Meta's Llama models? Well, they're really trying to get even more of a foothold in enterprise AI, right? You know, they had their Chat with RTX platform, which I think was good, but you had to have a specific GPU on your computer to run it, and it was somewhat limited. So NVIDIA is essentially saying they're going all in on forking Meta's Llama models.

And I think it's smart, because they can now more quickly enter the enterprise AI space with powerful tools customized for businesses' needs. And if you follow things like, I don't know, model rankings, benchmarks, right? Actually, NVIDIA's Nemotron, originally their first variation of it, kind of ranked or scored higher than the model it was based off.

Right. So all the smart engineers over there in the San Jose area in California somehow whipped up a better version of Llama themselves.

Also, this boosts demand for their hardware. So these AI models, especially the medium and large ones, require significant computational power to run effectively. So businesses using Llama Nemotron will likely need NVIDIA's GPUs, right? Or, as an example, their new Digits, right? So

smart here by NVIDIA, really doubling down on their investment in this open... it's not truly open source, but open-source-esque... variation of Llama. Once we get benchmarks, I do expect some pretty impressive benchmarks, and I do expect some enterprise companies to see this and be like, yeah, we're going to be

at least putting this in our LLM stack almost immediately. So it does make sense for NVIDIA to try to improve existing open source software that's out there, because if companies latch onto it, they're probably going to need even more GPUs.
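And if your team wants to kick the tires before self-hosting anything, NVIDIA hosts models behind an OpenAI-compatible API. A minimal sketch, with the caveat that the endpoint and model ID below are my assumptions, so check build.nvidia.com for the current catalog:

```python
# Minimal sketch of calling a hosted Nemotron model through NVIDIA's
# OpenAI-compatible API. The endpoint URL and model ID are assumptions;
# check build.nvidia.com for the current catalog before using.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key="YOUR_NVIDIA_API_KEY",                   # placeholder key
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed model ID
    messages=[{"role": "user", "content": "Summarize our Q3 sales notes."}],
    temperature=0.2,
)
print(response.choices[0].message.content)
```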

All right. We got away from the NVIDIA news, y'all. And live stream audience, podcast audience too, let me know what you think. Were you impressed with what NVIDIA announced at CES? The thing I'm more shocked by, if I'm being honest: NVIDIA is going to be doing this all again in like three months at their GTC conference, which I attended last year. I may or may not attend this year; we'll see how things go with my schedule. But the fact that they're probably going to be announcing a slew of new updates in like three months is...

I don't comprehend how they can innovate this quickly. All right. Speaking of innovating, Google DeepMind is creating a new team of AI researchers focused on developing world models to simulate physical environments, according to a recent announcement from Tim Brooks. So Tim Brooks, who we talked about on the show last week, or maybe it was the week before,

previously was a co-lead for OpenAI's Sora, their AI video tool. This is why it's important. So he joined DeepMind in October and is leading this new initiative, with DeepMind doubling down even more on world models, essentially trying to advance Google's AI capabilities in simulating real-world scenarios.

So world models are a cutting-edge development in AI, trying to revolutionize, yes, things like video games, movies, and realistic training environments for robots. But even more so than that: for all of these companies that are pushing toward artificial general intelligence, these world models fill a big

gap in what they need, right? Essentially, all these large language models have been trained on the entirety of the internet, scraped, copyrighted content, everything, but not on what actually happens out in the real world, right? So essentially, large language models have never gone out and touched grass, right? I'm sure there are some people that would tell me that internally. They're like, Jordan, you should probably

Go outside and touch grass. You're in front of the computer too much, talking about AI too much, right? So this is the equivalent, right, of large language models going outside and touching grass. It's world models, right? So real world physics, the relationships between things in the real world, right? This is what these world models are for.

So this initiative, Google's kind of new forming team, is part of their broader strategy to achieve artificial general intelligence, AGI, before its competitors, emphasizing the importance of scaling pre-training on video and multimodal data, not just text only. So the project is in competition with other AI advancements, like we talked about, Sora, NVIDIA's Cosmos platform, like we talked about, and World Labs.

So DeepMind's new team will collaborate with existing Google AI projects, including the Gemini AI models, Veo, their video generator, and Genie, which simulates 3D environments in real time.

So according to job descriptions, DeepMind is seeking research engineers and scientists to tackle challenges related to training at massive scale and integrating world models with multimodal language models. It was also announced that the teams that develop Google AI Studio and the Gemini developer API will be moving under Google DeepMind. A little footnote there, but extremely important.

All right. More large language model news. Microsoft has released its open source Phi-4 model.

So Microsoft has released its Phi-4 model as a fully open source project on Hugging Face. So Phi-4... and, you know, make sure to go check it out in the newsletter, but if you want to look it up, it's P-H-I-4. All right. It is a 14-billion-parameter model offering powerful reasoning capabilities while still being efficient in resource management.

So according to Microsoft, this update from Phi-3 makes the model accessible to a wider audience, including for commercial applications under the popular, permissive MIT license. So Phi-4 excels in benchmarks, scoring over 80% on tests like MATH and MGSM, outperforming larger models such as Google's Gemini Pro. So,

You know, you're not going to see smaller models like Phi-4 outperforming the world-leading models, like Google's Gemini Pro and Ultra or OpenAI's GPT-4o, on all benchmarks. But it is pretty impressive that a much smaller model is already competing with the world-class models on certain metrics, in an open model nonetheless.

Microsoft's decision to open source the model aligns with an ongoing trend of fostering innovation and transparency in AI development. So the move could impact the AI landscape, right? By making advanced AI capabilities more accessible to organizations with more limited resources. Joe says, can't wait to get my hands on Phi-4 to test. It is available now; we'll be leaving the link in our newsletter. You have to have a pretty powerful

setup right now to run it locally. Actually, no, it shouldn't be that powerful. If you have a pretty new computer with 16 or 32 gigs of RAM, you should be able to do it. I don't think any of my computers will do it. Well, maybe my new Copilot Plus PC. I'll have to see if that can run it.
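For the folks who want to try it, here's roughly what loading it looks like with Hugging Face's transformers library. A minimal sketch, assuming the checkpoint ships under a model ID like microsoft/phi-4 (check the newsletter link for the exact repo) and that your machine has the memory for a 14B model in half precision:

```python
# Minimal local-inference sketch for Phi-4. The model ID is an assumption
# based on the announcement; check the official Hugging Face repo.
# Requires: pip install transformers accelerate torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision: a 14B model is ~28 GB
    device_map="auto",           # spreads across GPU/CPU as available
)

prompt = "Explain why small open models matter for enterprises."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```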

All right. Next: Microsoft, according to reports, is reportedly planning to lay off about 1% of its workforce, which could result in thousands of jobs lost. So the layoffs are said to be performance-based and span the security division and other departments.

So despite these layoffs, Microsoft tends to backfill roles vacated for performance reasons, which means the overall employee count at Microsoft might remain stable. So as of last report, which I believe was in June, Microsoft had approximately 228,000 full-time employees, so 1% of that works out to roughly 2,280 people.

So the move follows previous workforce reductions at Microsoft, including a notable cut of 10,000 employees in January 2023, which was less than 5% of their workforce at the time. So the layoffs come as Microsoft continues to invest heavily in the emerging artificial intelligence market, aiming to maintain its leadership position.

Microsoft is also reportedly focusing on retaining key talent, particularly in its AI initiatives, by offering retention bonuses in the form of stock or cash rewards. So an internal document, according to reports, shows that Microsoft managers are prompted to evaluate the potential harm of losing specific employees, especially those critical to AI efforts.

So a question about an employee's contribution to AI projects has reportedly been added to performance evaluations targeting a specific large group within the company. The focus on AI talent retention highlights Microsoft's strategic shift toward AI with some employees being moved from the Teams chat app to AI projects like Copilot.

We're going to see this all across the board, y'all. I'm not going to be the AI doom-and-gloom guy, but I don't know, I literally said this back in early 2023: quarter four of 2024 and thereafter, I said, that's when the layoffs are going to start to hit. Because what happens is, first,

big tech companies like Microsoft, like Google, like Amazon, like Meta, they're going to stop hiring new employees as the technology grows, right? I think they're going to not replace people that leave, and they might hire some. But I think at the big tech companies, we're going to see overall headcount

kind of go down, right? Yes, a lot of them were a little overinflated thanks to post-pandemic or mid-pandemic overhiring. But I do think that we are going to see a lot of layoffs. So 2024 was a record-breaking year for tech layoffs, and I think we are going to see that trend continue even more so in 2025.

And unfortunately, it is a lagging effect. So the rest of the business enterprise world, especially here in the US, is going to follow suit later in the year. Speaking of that...

Yeah. So Meta CEO Mark Zuckerberg announced that Meta plans to automate the work of mid-level software engineers using AI, potentially changing the entire landscape of the tech industry.

Zuckerberg has gone, like, full 180, it seems, in the last month or two, right? So Meta is also doing some things that some people are looking at as controversial, right? Kind of de-emphasizing some initiatives internally that were previously emphasized, getting rid of human fact-checkers and instead using a community notes system. But Zuckerberg straight up said, yeah,

we're going to start to automate a lot of the work that our mid-level software engineers were doing. And we're going to use AI to do that. What's interesting here is,

is that he emphasized mid-level engineers, not entry-level, right? So that shows the people building the technology know what large language models are capable of. This isn't always just the low-hanging, quote-unquote entry-level, easy jobs. It's sometimes mid-level jobs. So,

According to Zuckerberg, Meta and other tech giants, that's the key there, are aiming to have AI that functions as mid-level engineers capable of writing code effectively by 2025. So the transition to AI-driven coding might initially be costly, but Zuckerberg believes it will eventually lead to all coding in Meta's apps being done by AI.

Currently, mid-level software engineers at Meta earn close to mid six figures, not close to six figures, close to mid six figures. That means most of them are earning about a half million a year. So interesting, right? And I do think as we see, it's just a wind of change that we've seen in the coding space.

Right. I actually think it was NVIDIA CEO Jensen Huang that kind of started this conversation last year, saying, like, hey, yeah, I wouldn't teach kids how to code. Right. And everyone's like, whoa, what do you mean? Right? Shouldn't everyone learn how to code? Isn't coding becoming more accessible? Right. And it's like, yeah, but it's also becoming natural language. Right. You can literally, quote-unquote, code an app without knowing anything about code.

You can just use something like Microsoft's GitHub Copilot, something like Cursor, something like Lovable, Windsurf, et cetera, and just be like, yo, go code me this, or, hey, here's a screenshot of something I use, here's what it does, go build me this, right? And if you know what you're doing, you can one-shot an entire app, right? You could clone something that you need. That's the future, right? And

I know I've been saying it for a while. Some of the tech leaders have been saying it for a while. But I think we have to actually pay attention here, when it gets official and leaders at these companies are literally saying, yeah, AI is going to be doing most of our coding, even at a kind of mid-engineer level, right? Not just entry-level stuff.

Michael's saying, I've used Replit and it's incredibly simple and intuitive. Dennis says, as Gartner predicted in recent trend reports, you will begin to see knowledge workers unionize. Yeah, I do think that's coming, Dennis. And I think we're going to have an AI researcher from Gartner on the show here in the coming weeks, FYI. All right.

OpenAI, going back all in on robotics. So we talked about initial rumors on the show a couple of weeks ago, but now OpenAI has made those rumors official and has announced the revival of its robotics department,

aiming to develop a general-purpose, adaptive, and versatile robotic fleet. So according to a social media post by OpenAI's hardware director, the company plans to create robots with a custom sensor suite and AI models developed internally. So job listings reveal that OpenAI's robotics team will focus on integrating cutting-edge hardware and software to explore various robotic form factors.

The company is looking to employ contract workers to test robotic prototypes, suggesting the potential inclusion of robots with limbs. Yay? I don't know. I'm weird. I don't really want a bunch of AI humanoid robots, but it doesn't matter if any of us want them or not. They're clearly coming.

So OpenAI has explored building humanoid robots, with aspirations for full-scale production in the future. The robotics sector has obviously been booming recently and has seen some significant investment, raising over $6.4 billion from VC firms last year, highlighting the growing interest in robotics technology. So companies like 1X and Figure, backed

by OpenAI, NVIDIA, and just about everyone else are working on creating humanoid robots, though the challenges remain significant. And AI, not a new sector. Robotics, not a new sector. Autonomous humanoids, not a new sector.

Why does it matter now? It's the intersection of all of the work that has been done in the prior decades, but obviously with the emergence and the ever-growing capabilities of large language models, right? So large language models are no longer text-only, right? So previously, like I said, this is not new.

There have been great autonomous robots used in factories since before ChatGPT was even a thing. But this is how it changes: it brings the cost down, capabilities go up, so the possibilities expand, right? Because these large language models in 2025 and beyond are multimodal by default, right?

Whereas before, as an example, let's just look at GPT-4, right? Technically, it was three different models kind of working together under the hood. So, you know, for things like processing a photo or producing an audio output, essentially, the way it worked is three different models would kind of pass information to each other. And that created more latency. It sometimes created lag,

errors, or an increased hallucination rate. So now models are multimodal by default. What that means is they are much faster, much more capable, and less error-prone when dealing in multimodal facets. All right. So if you're wondering, like, okay, what's all this about robotics? Robotics aren't new. Well, robotics plus multimodal gen AI is extremely new, right? And that's why the sector is exploding.
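Here's a tiny sketch of that argument with made-up numbers: chaining three separate models stacks up latency and compounds per-stage errors, which is exactly what a natively multimodal model avoids:

```python
# Conceptual sketch (made-up numbers, not OpenAI's actual internals):
# a pipeline of separate models adds latency per stage and compounds
# per-stage error; one natively multimodal model pays a single stage.
stages = [
    ("speech-to-text", 0.30, 0.97),  # (name, latency in s, accuracy)
    ("language model", 0.50, 0.95),
    ("text-to-speech", 0.25, 0.98),
]

total_latency = sum(latency for _, latency, _ in stages)
end_to_end_accuracy = 1.0
for _, _, accuracy in stages:
    end_to_end_accuracy *= accuracy  # errors compound across handoffs

print(f"pipeline: {total_latency:.2f}s latency, "
      f"{end_to_end_accuracy:.1%} end-to-end accuracy")
# A natively multimodal model goes audio-in to audio-out in one pass,
# with no cross-model handoffs to add latency or compound errors.
```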

All right. Our last piece of AI news for the day. OpenAI has released a 15-page document titled Economic Blueprint, emphasizing the need for the U.S. to secure investments and supportive regulations to maintain its lead in AI over China.

So the document highlights the significance of chips, data, and energy as crucial elements for winning the AI race, urging immediate action to establish nationwide rules. This announcement comes just before President-elect Donald Trump takes office here in the U.S. with expectations of a tech-friendly administration.

OpenAI CEO Sam Altman did contribute approximately $1 million, as a lot of the leading tech guys out there in Silicon Valley did, to Trump's inaugural fund, indicating a strategic move to strengthen ties with the new administration. So the blueprint, which we will be linking to in our free daily newsletter, warns that an estimated $175 billion in global funds

are ready for AI investments, which could potentially flow to China if not attracted by the U.S., therefore increasing China's global influence. OpenAI proposes export controls on AI models to prevent access by adversary nations that might misuse the technology. An event is planned in Washington, D.C. later this month by OpenAI to discuss these proposals further. All right, all I'm going to say is

that a lot of today's news stories align perfectly with the series that we're launching next week. All right. But thank you for tuning in, y'all. I'm going to quickly recap

everything in the AI news that matters. So, NVIDIA with a lot at CES and their keynote from Jensen Huang. Four big stories we covered today: their new 50 series GPUs; their new Project Digits, a personal AI supercomputer; Cosmos, what they hope to be a new era for AI robotics and automation; and their Llama Nemotron models, small, medium, and large.

Then we saw DeepMind assembling a team to develop AI world models, as well as a shakeup in their structure there. Microsoft released their open source Phi-4 model. Microsoft is also reportedly planning to lay off about 1% of its workforce, potentially thousands of employees, amid AI investments and a kind of refocus on AI talent internally.

Meta's Mark Zuckerberg said there's a push toward AI automation in their software engineering. And then OpenAI is reviving their robotics efforts and just unveiled their strategic blueprint to maintain U.S. AI leadership. All right. I hope this was helpful. If it was, I'm just gonna say this.

Mark your frigging calendars, all right? If one of your big goals in 2025 is to better understand the AI landscape, maybe you want a raise at your company. Maybe you're trying to land a new AI job. Maybe you just feel overwhelmed, right? And you want to secure your career. You want to lead your department. You're a small business owner. You're trying to make sense of everything. I don't know if anyone knows this, but

This is what I do every day. All right. And I've been doing this for, I don't know, a long time... almost two years now, right? So I've talked to, on this show, hundreds of AI leaders from big tech companies like Google, Microsoft, IBM, OpenAI, right? And then small startups, and hundreds of everyday enterprise business leaders. I've learned a lot from them. And sometimes,

It's not until later when I can start connecting all these dots, right? It's like I get these little breadcrumbs from people. And then eventually, as days become weeks, as weeks become months, as months become quarters, I start to piece these things together and pick up on trends that maybe no one else is picking up on, if I'm being honest, right? I'm not trying to toot my own horn, but go back and look at my 2024 predictions from December 2023.

They were eerily spot on, right? I gave 24 of them, and depending on how you look at it, about 21 of them technically came true or didn't go false. Maybe I was slightly off on one. I said we would have more AI agents in 2024 than humans; although we got a lot of reports that even single people had tens of thousands of AI agents, we didn't see, like, an official number. But for the most part, we

did a very good job of predicting what 2024 would look like. All right. And I literally had people reaching out in September, October, November, who had just listened to that episode and said, this changes my perspective on everything, and all of a sudden I am understanding things at a much higher level, and telling me the impact that has had on their career or their company. So

2025, I'm changing it. Five episodes. You need to tune in. I don't say that a lot, right? More or less, I literally tell people all the time. People will reach out and they're like, oh, Jordan, I listen to the show every day. I'm like, why? Right? I made this. Maybe you tune in two or three times a week, maybe two or three times a month, and that's okay. I never say you need to listen to every single episode. No, listen when you want. Listen when something catches your eye. All of you, if you're listening to this right now,

Do not miss a single episode, period. I'm going to say that. All right. If you want to get ahead in 2025, literally we are laying out the blueprint for you, the roadmap for you based on

hundreds of conversations, thousands of hours of research. We're laying it all out for you. So I hope this was helpful. If so, tell someone about it. If you're listening on the podcast, make sure to subscribe so you don't miss our new series launching January 20th. Please leave us a rating if you find it helpful. If you're on social media, tag someone that needs to hear this. Repost this to your network. We'd super appreciate it. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.