
AI Daily News March 28 2025: 💰OpenAI Nears $40 Billion Funding Round 🧠Anthropic Reveals How Claude ‘Thinks’ 🔍Microsoft Adds Deep Research to AI Code Editors 👁️Alibaba Releases Qwen’s QVQ-Max Visual Reasoning Model 💥OpenAI: GPUs are melting

2025/3/28

AI Unraveled: Latest AI News & Trends, GPT, ChatGPT, Gemini, Generative AI, LLMs, Prompting

Transcript

This is a new episode of the podcast AI Unraveled, created and produced by Etienne Newman, a senior software engineer and passionate soccer dad from Canada.

Welcome to another deep dive. If you're enjoying these explorations into the rapidly changing world of AI, please consider liking and subscribing to the podcast on Apple. We really appreciate your support. It's certainly been a whirlwind in the AI space these past few months. Absolutely. Trying to stay on top of everything can feel overwhelming. That's exactly why we do these deep dives to try and make sense of it all. Specifically, we're looking at news from around March 28th, 2025.

and we'll try to distill out the key insights for you. Sounds like a plan. Okay, so we've got a bunch of exciting updates on AI innovations that really paint a picture of where things are headed. What's on the menu for today? Well, first up, we'll look at some big funding news, particularly surrounding OpenAI. Then we'll dig into some fascinating work by Anthropic. That gives us a glimpse into how AI models actually think. We'll also discuss new AI tools being released for software developers.

And advances in visual understanding from companies like Microsoft and Alibaba. Then we'll tackle the challenge of keeping up with the demand for these popular AI services. And finally, we'll examine some major business moves happening in the tech industry, all related to AI, of course. Sounds like a comprehensive deep dive. Let's jump in. All right.

Let's start with this big news about OpenAI. They're reportedly on the verge of securing $40 billion. Wow, $40 billion. Yeah, $40 billion in funding. What kind of impact does that sort of investment have? Well, obviously, their valuation goes up significantly.

But more importantly, it shows a huge amount of confidence from investors. They believe in OpenAI's vision and the potential of AI to be truly transformative. Right. And it gives them a massive amount of money to pursue those plans. It's not just the size of the investment that's impressive; the financial projections are pretty incredible, too.

The report suggests their revenue will triple to $12.7 billion in 2025. And they're aiming for over $125 billion by 2029. That's quite a jump. Yeah, and they're hoping to become cash flow positive along the way. What are your thoughts on those numbers? They're definitely ambitious, but I don't think they're unrealistic. When you look at how quickly AI is being adopted across different industries,

That kind of growth seems possible, but we should also consider that the same report mentioned they lost about $5 billion in 2024.

despite earning $3.7 billion in revenue. So still a lot of investment going on. Exactly. Those losses are mostly due to the cost of building and running the infrastructure that these advanced AI models require, plus the training process itself. It takes a lot of computing power and specialized expertise too. Absolutely. And this new funding isn't just to cover those costs.

A big chunk of it will go towards new initiatives like Stargate, their massive $300 billion joint venture with SoftBank and Oracle to build next generation AI infrastructure. Wow. Stargate, $300 billion. What's the significance of that, strategically speaking? I think it shows a commitment to pushing AI to its limits. It needs a level of infrastructure that very few companies could build on their own. Okay. So big picture, what's the main takeaway here?

I think this funding will cement OpenAI's position as the leader in AI globally. It gives them the resources to invest in everything they need, the infrastructure to handle growing demand, the talent to drive innovation,

And the freedom to pursue groundbreaking research like Stargate. It's going to be very difficult for competitors to catch up. All right. Let's move on from big money to something a bit more cerebral. Anthropic, another big player in AI, has been doing some really interesting work trying to understand how their Claude model thinks. How do they even begin to do that? They're using something called interpretability techniques. Interpretability techniques? Yeah, I know it sounds a bit technical, but essentially,

These techniques allow researchers to see what's happening inside these large language models instead of just treating them as black boxes. It's like debugging a complex piece of software.

But applied to the intricate network of connections in a neural network. I see. One really interesting finding is that Claude seems to use a universal language of thought. A language of thought? Yeah, and it spans different human languages; they've observed it in English, French, and Chinese. That's fascinating. It suggests that Claude is developing a deeper level of understanding than just simple translation. It's more about grasping underlying concepts. What could this mean for the future of AI?

Well, it could lead to more robust and adaptable AI systems that can handle multilingual tasks, maybe even learn new languages more easily. That would be amazing. Now, this detail really caught my eye. When Claude is generating

Poetry? Yeah, poetry. It seems to plan ahead. Plan ahead? Yeah. It identifies potential rhymes before it writes the lines that lead to those rhymes. That's incredibly sophisticated. How is that even possible? It suggests a level of strategic planning that goes beyond just predicting the next word in a sequence. It implies that it has an internal model.

One that can consider longer-range dependencies, artistic constraints even, things we usually associate with human creativity. It really makes you wonder about the nature of intelligence. Absolutely. And they've also found what seems to be a built-in safety mechanism in Claude. A safety mechanism? What do you mean? It's a default that prevents speculation unless the model is very confident in its answer. They think this explains why Claude is so good at avoiding hallucinations.

What are hallucinations in this context? Ah, right. An AI hallucination is basically when the model generates information that's factually wrong. It just makes things up and presents them as true. So this mechanism helps prevent that. Exactly. By being cautious about speculation, Claude is less likely to make confident but false statements.
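The cautious-by-default behavior described above can be caricatured in a few lines of Python. This is a hypothetical sketch, not Anthropic's actual mechanism; the candidate answers, the scores, and the 0.8 threshold are all invented for illustration.

```python
# Hypothetical sketch: decline to answer unless the best candidate
# clears a confidence threshold. All values here are made up.

def answer_or_decline(candidates, threshold=0.8):
    """candidates: list of (answer, confidence) pairs."""
    best_answer, best_score = max(candidates, key=lambda c: c[1])
    if best_score >= threshold:
        return best_answer
    return "I don't know"  # decline rather than speculate

print(answer_or_decline([("Paris", 0.97), ("Lyon", 0.02)]))  # Paris
print(answer_or_decline([("Paris", 0.40), ("Lyon", 0.35)]))  # I don't know
```

The design choice is simply that a refusal is preferable to a confidently wrong answer, which is the trade-off the hosts describe.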

That's important for building trust in AI systems. Absolutely. To get these insights, Anthropic developed a tool they call an AI microscope. An AI microscope. Yeah, it's a visual and interactive tool that allows researchers to really examine the reasoning processes happening inside these large language models. So they can actually see the model thinking. In a way, yes, they can see the model's internal activations and decision-making steps.

as it processes information. This gives them a much more detailed understanding of how the model works. And what have they learned from this? Well, they've seen how Claude performs multi-step reasoning, you know, by activating specific sequences of representations within its neural network.

They've confirmed this planning ahead behavior in poetry generation and even observed parallel processing pathways when Claude tackles math problems. Wow, that's incredible. So for our listeners, why is this research into Claude's thinking important? I think it's fascinating for anyone curious about intelligence, artificial or human. It gives us unprecedented glimpses into how these complex systems work.
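To make the microscope idea concrete, here is a toy sketch under a big simplifying assumption: the "model" is a tiny hand-rolled feed-forward network, not a large language model. The point is only that every layer's intermediate representation is recorded during the forward pass so it can be examined afterwards, instead of being discarded as in a black box.

```python
# Toy sketch of activation inspection (not Anthropic's tooling): record
# every layer's output during the forward pass for later examination.

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

class TinyNet:
    def __init__(self, layers):
        self.layers = layers       # list of weight matrices
        self.activations = []      # recorded per-layer representations

    def forward(self, x):
        self.activations = [list(x)]    # record the input too
        for W in self.layers:
            x = relu(matvec(W, x))
            self.activations.append(x)  # "microscope": keep every layer
        return x

net = TinyNet([[[1.0, -1.0], [0.5, 0.5]],   # layer 1: 2x2 weights
               [[1.0, 1.0]]])               # layer 2: 1x2 weights
out = net.forward([2.0, 1.0])
# net.activations now holds every intermediate representation.
```

Real interpretability work operates on billions of such activations, but the principle is the same: look inside rather than only at the final output.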

We're moving beyond just using AI as a tool and starting to understand the mechanisms that drive its capabilities. This transparency is crucial for building more reliable and trustworthy AI systems. Now let's look at some practical applications of AI and the new tools being developed.

Microsoft has added a feature called Deep Research to its AI code editors. Deep Research? What's that all about? Well, it's designed to make life easier for software developers. It integrates research papers, documentation, and code examples right into their coding environment so they don't have to constantly switch between different applications. That sounds convenient. Yeah. And it saves them from wasting time searching for resources. So it's all about streamlining the development process and boosting productivity. Exactly.

By providing all the information they need right at their fingertips, Microsoft hopes to help developers write better code faster. And reduce errors caused by missing information. Makes sense. Meanwhile, over in China, Alibaba's Qwen team has unveiled a new visual reasoning model.

called QVQ-Max. What can this model do? QVQ-Max is designed to improve AI's ability to understand and reason about visual information, going beyond just recognizing objects in an image. It has an interesting feature, an adjustable thinking mechanism. Adjustable thinking? Yeah, essentially, by allowing the model to spend more time processing an image, its accuracy on visual reasoning tasks goes up. So it's like giving the AI more time to think about what it's seeing. Exactly. They've shown some impressive applications of this.

QVQ-Max can analyze complex blueprints, understand spatial relationships, solve geometry problems just by looking at diagrams, and even provide feedback on sketches. So it's not just seeing, it's understanding. Precisely. It's a big leap beyond basic image recognition. Where do you see this technology having an impact in the real world? Well, the potential is huge. Think about autonomous vehicles. This could help them better understand complex road scenarios.
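QVQ-Max's internals aren't public, but the general "more thinking time yields a better answer" pattern can be caricatured with any iterative refinement loop. Here Newton's method for a square root stands in for the model's visual reasoning, with the step count playing the role of the adjustable thinking budget; this is an analogy, not Alibaba's algorithm.

```python
import math

# Caricature of an adjustable "thinking budget": more refinement steps
# produce a more accurate answer. Newton's method is a stand-in only.

def refine(target, steps):
    estimate = 1.0
    for _ in range(steps):
        estimate = 0.5 * (estimate + target / estimate)  # Newton step
    return estimate

quick = abs(refine(2.0, 1) - math.sqrt(2))  # small budget, larger error
slow = abs(refine(2.0, 6) - math.sqrt(2))   # larger budget, smaller error
```

The trade-off is the same one the hosts describe: spending more compute per query buys accuracy at the cost of latency.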

In fields like visual analytics, AI could extract deeper insights from images like medical scans or satellite imagery. And looking even further ahead, it's a step towards creating truly intelligent AI agents that can understand and respond to the visual world around them, maybe even operate devices or assist in complex tasks. That's a fascinating vision of the future. All these advancements are exciting, but they also highlight the need for massive computing power.

OpenAI, for example, recently announced temporary usage limits on ChatGPT. They said the demand has been so high it's actually causing their GPUs to melt. GPUs to melt? Yeah, it's a way of describing the strain on their computing infrastructure.

GPUs, or graphics processing units, are specialized processors. They were originally designed for graphics, but their parallel processing capabilities make them perfect for AI. I see. They can do many calculations at the same time, which speeds up

the demanding tasks involved in running AI models. So the popularity of these services is pushing the hardware to its limits. Exactly. They'd already delayed the wider release of their image generation feature, and now they're having to implement usage limits across the board.
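As a loose illustration of why parallel hardware matters (a sketch, not how GPUs are actually programmed): each output element of a matrix-vector product, the core operation inside neural networks, depends only on its own row, so the rows can all be computed concurrently. Here a Python thread pool stands in for the thousands of GPU cores.

```python
from concurrent.futures import ThreadPoolExecutor

def matvec_parallel(W, x):
    # Each row's dot product is independent of the others, so the rows
    # can run at once; a GPU dispatches such work to thousands of cores.
    def dot(row):
        return sum(w * v for w, v in zip(row, x))
    with ThreadPoolExecutor() as pool:
        return list(pool.map(dot, W))

print(matvec_parallel([[1, 2], [3, 4], [5, 6]], [10, 1]))  # [12, 34, 56]
```

At the scale of a large language model, these independent multiply-adds number in the billions per token, which is why demand spikes strain GPU capacity so quickly.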

It's a challenge for sure. Yeah, it seems the demand has just outstripped their current capacity. Sam Altman has said they're working on improving efficiency and expanding their infrastructure, with the aim of eventually offering image generation to free users with a daily limit. It's a constant balancing act. Absolutely. They're trying to innovate on the software side.

while also scaling up the hardware to meet the growing demand. And the viral success of ChatGPT's image generation probably didn't help. I think you're right. It just shows how computationally intensive these AI models are. Now let's look at some other interesting ways AI is being used. There's a Harvard professor who's created an AI replica of himself. An AI replica? Yeah, for personalized tutoring. That's an interesting concept. What are the pros and cons of that?

Well, on the one hand, it could make expert tutoring accessible to so many more people. Regardless of location or time constraints, it could offer really personalized learning experiences. But there are concerns too, right? Of course, there are always ethical considerations, like the role of human interaction in education, the potential for bias from the professor's own views, and even the implications of creating digital replicas of people. It's a complex issue for sure.

On a more concerning note, there are reports that North Korea's new drones might be using AI for autonomous strikes. Autonomous strikes? Yeah, potentially targeting South Korean and U.S. forces. That's worrying. It's a significant escalation in autonomous warfare capabilities and raises serious questions about international security. The potential for unintended escalation is a big concern.

As is the lack of human control over lethal force decisions. And it's difficult to establish accountability if something goes wrong. It's a slippery slope. In a different arena, open source developers are pushing back. Against what? Against AI web crawlers that are scraping their code repositories. Why are they doing that? They're worried about their intellectual property. They don't want their work being used to train commercial AI models without

proper attribution or compensation, they're trying to set boundaries and protect their rights. It highlights the tension between AI development and digital ownership. It's a debate that's not going away anytime soon. And in some major business news, Elon Musk's XAI has acquired the social media platform X.

X? You mean Twitter? Yeah, formerly Twitter, for $45 billion in an all-stock deal. That's a huge acquisition. It is. It values XAI at $80 billion and X at $33 billion net of debt. What's the strategy behind this move? Musk wants to integrate XAI's AI capabilities, specifically their Grok chatbot,

with X's massive user base and existing infrastructure. So he's combining his AI company with the social media platform. Exactly. He wants to use X's data, AI models, computing resources, and talent pool to accelerate XAI's development. He's been talking about making X into an everything app.

like WeChat. Yeah, and integrating XAI is a key part of that vision. He wants to create a more engaging, personalized, and useful experience for users by offering AI-powered services directly within the platform. So we can expect to see Grok and other XAI technologies embedded in X. That seems to be the plan. It positions Musk to create a unique AI-driven ecosystem, which could have a big impact on the future of social media. It's certainly a bold move. There was a lot of other AI news around March 28th as well.

OpenAI released an updated GPT-4o. A Chinese AI startup called Butterfly Effect is looking for more funding.

AI infrastructure provider CoreWeave adjusted its IPO target, but secured an investment from NVIDIA. Archetype AI introduced lenses for their Newton model. PwC announced an agent OS for business. And Lockheed Martin is partnering with Google Public Sector on AI for national security. Wow. Seems like AI is everywhere these days. It really does. Every corner of tech and business is being touched by it.

And finally, in a bit of a surprise move, WhatsApp has become the default calling and messaging app for iPhones globally. Really? I thought that was just for EU users. Nope. It's a global rollout with iOS 18.2. That's significant, replacing iMessage and FaceTime. It could have a big impact on mobile communication. How so? Well, for one thing, it might affect user privacy. Different apps have different data policies. That's true. It could also change how other apps integrate with the iPhone's core functions,

and it might even intensify competition between messaging platforms. It's a fascinating development. We'll have to see how it plays out. Definitely. It just shows how quickly things can change in the tech world.

Speaking of change, with this ever-evolving AI landscape, having the right skills is crucial. Absolutely. And that's where Etienne's AI-powered Djamgatech app comes in. What can this app do? It can help anyone master over 50 in-demand certifications in fields like cloud, finance, cybersecurity, healthcare, business, and more. That sounds incredibly useful. If you're looking to upskill and stay competitive, be sure to check it out. The links are in the show notes. I'll be sure to take a look.

So to sum up our deep dive into the AI news of late March 2025, we've covered OpenAI's massive funding round, the incredible insights into how AI models like Claude reason, the advancements in AI tools from companies like Microsoft and Alibaba, the infrastructure challenges facing popular AI services, and those big strategic moves in the tech industry.

Like XAI buying X and WhatsApp becoming the default on iPhones. It's been a whirlwind tour of a very active period in the AI world. It really has. And that brings us to our final thought for you, our listener. Given all these advancements, especially in AI reasoning and the ongoing infrastructure hurdles, what do you think? What fundamental changes will we see in how we interact with and rely on AI in the near future?

It's a question worth pondering for sure. Absolutely. It's a fascinating time to be following this technology. And for more insightful deep dives into the world of AI, remember to like and subscribe to AI Unraveled on Apple. And once again, if you're looking to upskill in any of those key areas, cloud, finance, cybersecurity, you name it, don't forget to check out Etienne's AI-powered Djamgatech app.

Links are in the show notes. It can help you master over 50 in-demand certifications. That's a fantastic resource. Thanks for joining us for this deep dive. We look forward to exploring more with you next time.