Hey everyone, welcome back for another deep dive. Today we're going to be looking at AI as it was on February 3rd, 2025. Sounds exciting. It really is. We've got a copy of the Daily Chronicle of AI Innovations from that day. Oh wow. And yeah, it's pretty amazing stuff. And we're going to be kind of looking through this and seeing what kind of picture of the future of AI we can get from this one day. I like it.
So, yeah, we've got research that could totally change how we learn, AI designing computer chips that are too complex for humans to even understand, the EU setting a global precedent with its AI Act. Oh, wow. We'll be hearing from OpenAI, DeepSeek, Google, Meta. Oh. So, yeah, a lot of big names. Looking forward to it.
All right. So let's jump right in. I guess let's start with OpenAI. They unveiled a brand new AI tool that can actually conduct research online. Okay. So, you know, when you're working on a project and you have to go through research papers, articles, websites, all that stuff, this can do it for you. It can analyze, you know, tons of different data formats, pull out the key findings and summarize everything in a way that's easy to understand. So it's like having a research assistant. Yeah, like a super powered research assistant. That's really cool.
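As a rough illustration, and not OpenAI's actual tool or API, here's a minimal sketch of the kind of summarize-then-synthesize loop a research assistant like this automates. It assumes the standard OpenAI Python SDK (openai 1.x), a model name like "gpt-4o-mini", and a couple of hypothetical local source files; swap in whatever client, model, and documents you actually use.

```python
# Toy "research assistant" loop: pull key findings from each source,
# then synthesize the notes into one short brief. This is only a sketch
# of the general pattern, not OpenAI's deep-research product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def research_brief(sources: dict[str, str], question: str) -> str:
    """Extract per-source findings, then synthesize a plain-language summary."""
    notes = []
    for name, text in sources.items():
        notes.append(f"{name}:\n" + ask(
            f"Question: {question}\n\nSource text:\n{text}\n\n"
            "List the three findings most relevant to the question."
        ))
    return ask(
        f"Question: {question}\n\nNotes from each source:\n\n"
        + "\n\n".join(notes)
        + "\n\nWrite a short, plain-language summary that answers the question "
        "and notes which source each claim came from."
    )

if __name__ == "__main__":
    sources = {  # hypothetical local files for the example
        "paper_a.txt": open("paper_a.txt").read(),
        "article_b.txt": open("article_b.txt").read(),
    }
    print(research_brief(sources, "How do AI tutors affect learning outcomes?"))
```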
But how do we know it's actually good at what it does? Well, they put it to the test with Humanity's Last Exam, which is this really tough benchmark of expert-level questions across tons of subjects. Kind of like a massive IQ test for AI. And this new tool scored 26.6%, which is significantly higher than any other AI model out there. Oh, wow.
So even though it can't replace human researchers yet, it could really save a ton of time. Right. I'd be curious to see how this changes the way we approach research in the future. Yeah, for sure. And actually, speaking of things that could change how we approach things, let's talk about AI designing computer chips. Okay. And this is the crazy part. These chips are so complex that we humans can't even fully grasp how they work. Wait, AI is designing chips that are beyond human comprehension?
It's true. We're like entering this era where AI has surpassed human capabilities in certain areas like chip design. It can create these designs that are optimized for performance in ways that we might not even be able to think of. So like what are the benefits of that? I mean, potentially a whole new level of computing power. But there are also a lot of questions. Like what happens if something goes wrong? Yeah, exactly. How do we even begin to troubleshoot these chips if we don't understand how they work?
There are concerns about transparency, security, the potential for like unforeseen failures. It's kind of scary. Yeah, a little bit. Okay. Well, let's move on to something a bit less scary. Google's X Lab has spun out a company called Heritable Agriculture. And they're using AI to tackle some of the big challenges facing the agricultural industry. Interesting. Yeah. So they're focused on optimizing crop production. Okay.
They're using AI to analyze everything from soil conditions and weather patterns to plant genetics and pest behavior. So like making crops resistant to pests? Yeah, but it's more than that. It's about using AI to create like a completely optimized farming system. Yeah. The goal is to boost crop yields
while minimizing the environmental impact. So more food and better for the environment. Exactly. And this could be huge, especially as the world's population grows and climate change continues to impact food production. Yeah, that's a big one. Definitely. Okay, so we've talked about food now. Let's talk about education. All right. NVIDIA CEO Jensen Huang believes that everyone should have access to an AI tutor. Like imagine personalized learning experiences that are tailored to each student's individual needs
And learning style. That would be amazing. Right? It's like having a personal tutor available 24-7. Yeah. You know? You can learn at your own pace, focus on the areas where you need the most help, and get instant feedback. And AI is getting so advanced now, these tutors could be really sophisticated. Yeah, exactly. Yeah. Okay, so from the future of education, let's look at something happening right now.
The EU's AI Act just came into force on February 3rd, 2025. It's kind of a big deal because it sets a global precedent for AI regulation. Wow. Yeah. So the EU is taking this like proactive approach to AI.
They want to balance fostering innovation with protecting people's rights. Right. So the AI Act outlines these rules and guidelines for AI development. Okay. And it focuses a lot on what they call high-risk systems. High-risk systems. Yeah. So these are systems that could be a threat to people's safety or their basic rights. Oh, I see. So, for example, the Act bans social scoring systems. Okay. Yeah.
And it bans AI systems that exploit people or manipulate them. Like subliminal messaging and stuff. Exactly. So basically the EU is saying AI development needs to be responsible and ethical. It's about time. Right. And the act also has stuff about transparency and accountability.
It requires developers to provide clear information about how their AI systems work. Okay. And there are big fines if companies don't follow the rules. So they're serious about this. Yeah, they are. It's going to be interesting to see how this impacts AI development all over the world. Okay, now let's talk about DeepSeek. They've been making some big claims about their R1 AI model.
But there seem to be some mysteries surrounding their whole operation. Oh, really? Yeah. So DeepSeek has been saying that they developed R1 with very limited resources. Okay. But a new report from SemiAnalysis says that DeepSeek has actually invested tons of money in infrastructure and computing power. Oh, wow. Yeah. So their spending is actually on par with some of the biggest tech giants out there. That doesn't exactly sound like limited resources. Right. So is DeepSeek being misleading or what's going on?
It's hard to say. Maybe they're underestimating their costs or maybe they're trying to downplay their spending to look good. Hmm. It makes you wonder what else they might not be telling us. Transparency is so important with AI development. Right. Especially if we're going to trust these systems to, you know, make important decisions. Exactly. OK, speaking of big spending, let's talk about Meta. What are they up to? They've been pouring money into smart glasses.
Like billions of dollars. Really? Yeah. Reportedly close to $100 billion. Wow. What are they planning? Well, it seems like they're betting big on wearable AI as the next big thing. You know, they want to move beyond smartphones and create a new computing platform where augmented reality is part of our everyday lives. So like a world where you can access information, connect with people, and experience all sorts of digital content all through a pair of glasses. Exactly. It sounds pretty futuristic. Yeah, it does.
But with all their resources and their experience in tech, they might actually pull it off. It'll be interesting to see what happens for them. Okay, now let's talk about a real-world AI showdown that's been getting a lot of attention.
ChatGPT, Qwen, and DeepSeek went head-to-head to see which one is the best. Oh, wow. Who won? Well, it's not that simple. Each model had its own strengths and weaknesses. I see. For example, ChatGPT was super fast. It could generate text and code really quickly, but sometimes it wasn't very accurate. So it sacrificed accuracy for speed? Exactly. DeepSeek, on the other hand, was slower, but it was more accurate and efficient. Interesting.
And what about Qwen? Qwen struggled a bit with some of the more creative tasks. So each model kind of has its own specialty. Like having a team of different AI specialists. Exactly. And as AI continues to develop, we're probably going to see even more specialized models. Yeah, that makes sense. All right. So to wrap up this first part of our deep dive, let's touch on a few other AI developments from February 3rd, 2025. First, we have a report from David Sacks, the U.S. AI czar. Okay.
He's questioning DeepSeek's spending.
He says their reported training costs are really misleading compared to their actual spending on infrastructure. So like the transparency issue again? Yeah, exactly. Okay, next up we have Microsoft forming a new research unit that's going to study the societal impact of AI. That's a good idea. Yeah, they're bringing in experts from all sorts of fields to look at how AI is going to affect our lives. It's important to think about those things. For sure. Okay, and then over at MIT, researchers unveiled a new AI model called ChromoGen.
What does that one do? It can predict 3D genome structures way faster than previous methods. Wow. Yeah. It's like a game changer for DNA analysis and could revolutionize healthcare. That's amazing. It is. And lastly, we have a security alert. Researchers discovered an exposed DeepSeek database with over a million user prompts and API keys. Oh, no.
That's not good. Not good at all. It shows how important security is in AI development. Absolutely. We need to protect people's data. Yeah, it really is a wake-up call for the whole industry. It really is. But before we get too caught up in the security concerns, I want to circle back to that AI face-off we were talking about, ChatGPT versus Qwen versus DeepSeek. Oh, yeah. That was fascinating. What were some of the standout moments from that? Well...
One that really comes to mind is that coding challenge. Okay. ChatGPT just, like, blazed through it, generating code in seconds. Oh, wow. But it was really messy and inefficient. DeepSeek, on the other hand, took its time,
analyzed the problem, and then produced a solution that was not only correct, but also, like, super elegant and efficient. So like ChatGPT is quantity over quality, DeepSeek is quality over quantity. Yeah, something like that. And it was the same in other tasks too, like that physics simulation. Oh yeah, with the rotating ball bouncing around. Yeah, ChatGPT's ball was practically glitching through the walls. Yeah, I remember that. But DeepSeek's simulation was so smooth and realistic, it was almost hypnotic. It really was.
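For anyone curious what that bouncing-ball task actually involves, here's a minimal sketch, simplified to a ball under gravity in a fixed square box rather than the exact prompt the hosts describe, and not taken from any of the models' outputs. The detail it illustrates is the collision handling: you both reflect the velocity and clamp the position back inside the box, and skipping that clamp is exactly how a ball ends up "glitching through the walls."

```python
# Minimal bouncing-ball physics sketch (toy version of the benchmark task).
DT, GRAVITY, BOUNCE = 1 / 60, -9.8, 0.9   # timestep (s), acceleration, restitution
BOX = 10.0                                 # the box spans 0..BOX in x and y

def step(pos, vel):
    """Advance one frame: integrate gravity, then resolve wall collisions."""
    x, y = pos
    vx, vy = vel
    vy += GRAVITY * DT
    x += vx * DT
    y += vy * DT
    if x < 0 or x > BOX:                   # hit a side wall
        vx = -vx * BOUNCE
        x = min(max(x, 0.0), BOX)          # clamp back inside so the ball never tunnels out
    if y < 0 or y > BOX:                   # hit the floor or ceiling
        vy = -vy * BOUNCE
        y = min(max(y, 0.0), BOX)
    return (x, y), (vx, vy)

pos, vel = (5.0, 8.0), (3.0, 0.0)
for frame in range(300):                   # roughly five seconds of simulation
    pos, vel = step(pos, vel)
print(f"final position: ({pos[0]:.2f}, {pos[1]:.2f})")
```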
And Qwen, I remember Qwen kind of struggled with some of the creative tasks. Yeah, that's right. Qwen was competent in a lot of areas, but when it came to things like creative writing or generating original content, it just didn't really measure up. Hmm.
So it's like each model has its own personality. I guess you could say that. Like if DeepSeek is the meticulous scientist, then ChatGPT is like the quick-witted writer. And Qwen is kind of like the reliable assistant. Yeah, that's a good way to put it. So basically, the best tool for the job really depends on what you need to do. Exactly. Okay, now I want to go back to that report from David Sacks about DeepSeek's spending. Oh, right. The one where he basically accused them of hiding their massive spending. Yeah.
Yeah. That definitely got people talking about transparency and accountability in AI development. For sure. I mean, if a company says they achieved amazing results with limited resources, but then it turns out they've been spending billions, it just makes you question everything they say. Right. It's about trust. We need to be able to trust the companies that are developing these powerful AI systems. Absolutely.
Okay, now let's move on to something more positive. We talked about Microsoft's new research unit that's going to study the societal impact of AI. Yeah, that's a good step. Yeah, they're bringing in all sorts of experts to really dig deep into how AI is going to change our lives. Oh, I think that's really important. You know, we need to be thinking about the bigger picture, not just the technology itself. For sure.
And speaking of amazing technology, we can't forget about MIT's ChromoGen. Oh, yeah, the AI model that can predict 3D genome structures in minutes. It's incredible. It could really revolutionize health care. It really could. But of course, we also have to remember that security breach with DeepSeek. Right, that was a big one. It just goes to show how important security is. Absolutely. We need to make sure that these AI systems are protected and that people's data is safe. It's a huge responsibility. It is. And it's something that the whole industry needs to take seriously.
Definitely. Okay, so we've covered a lot of ground in this deep dive. We have. We've seen some incredible breakthroughs and some serious challenges. It seems like with AI, there's always something new to talk about. There really is. And it makes you wonder what the future holds. What do you think? I think the future of AI is full of potential, but it's also full of uncertainty. You know, it's up to us to shape that future, to make sure that AI is used for good. Ah, I agree. It's a powerful tool and we need to use it wisely. Exactly.
All right, so let's take a moment to reflect on everything we've discussed before we jump into the final part of our deep dive. Sounds good. Okay, so we're back for the final part of our AI deep dive. It's been quite a journey through the world of AI as it was on February 3rd, 2025. It really has. And you know, it's amazing how much progress shows up in just a single day of news. It is. We've seen everything from, you know, those incredible AI research tools to those crazy chip designs. Right, and the EU stepping up with its AI Act.
Yeah, trying to set some ground rules for AI development. It's definitely a step in the right direction. For sure. But with all this progress, it makes you wonder, what does it all mean? Like, what does the future hold for AI and for us?
It's the big question, isn't it? It is. Are we creating a future where AI helps us or a future where AI becomes a problem? It's hard to say. There's so much potential, but also so much uncertainty. Exactly. And it feels like the choices we make now are going to have a huge impact on that future. I think you're right. We need to be careful and thoughtful about how we develop and use AI. Absolutely. But we also need to be optimistic.
AI has the potential to do so much good. Oh, absolutely. Think about healthcare, for example. AI could help us diagnose diseases earlier, develop new treatments, and maybe even cure diseases that we thought were incurable. And what about education?
Right. AI tutors could personalize learning for every student, help them reach their full potential. And then there's sustainability. Yeah. AI could be a game changer for developing clean energy, reducing waste, and managing our resources. It's pretty mind-blowing when you think about it. It really is. But at the end of the day, AI is a tool. That's a good point. And it's up to us to decide how we use it. We have the power to shape the future of AI. Exactly. Okay. So to our listeners out there, I want to leave you with this question.
What role do you want AI to play in your life, in your community, in the world? And what can you do to make sure that AI is used for good? It's something we all need to think about because the future of AI is in our hands. And that's what makes it so exciting. And a little bit scary. Maybe, but mostly exciting. I agree.
Well, that brings us to the end of our deep dive into the world of AI on February 3rd, 2025. It's been a fascinating journey. It really has. Thank you all for joining us. And remember, the future is what we make it. So let's make it a good one. Absolutely.