Welcome to a new deep dive from AI Unraveled. This podcast is created and produced by Etienne Newman, senior software engineer and passionate soccer dad from Canada. That's right.
If you're enjoying these journeys into the world of AI, we'd really appreciate it if you'd hit that like button and subscribe to the podcast, especially on Apple Podcasts. Yeah, that helps us out a lot. So today we're diving into a fascinating collection of AI news from March 27th, 2025. A very busy day in the AI world. It seems like almost every single day is a busy day in the AI world now, though. That's true. We've got everything from like
unbelievable image generation. They're getting really good. To AI actually co-piloting our cars, which is a little scary, but also super exciting. And then we've got these really bold predictions about how we'll work in the future. Yeah, some big changes coming. Some really big changes coming. Yeah. And we are going to talk about those. So think of this deep dive as your
cheat sheet for understanding what's actually going on in AI right now. Yeah. Cut through the noise. Yeah. We'll do it for you. Our mission for you is to really unpack these developments. What's important? What's interesting? What does it all mean for us? Hopefully you'll walk away feeling more informed and maybe even a little surprised. Yeah. Hopefully some aha moments.
That's what we aim for. So let's dive in. Yeah, let's jump right in. Maybe we can start with something that's really captured a lot of imaginations lately, the incredible advancements in AI image generation, particularly with a very specific art style. Yes, exactly. We've got to talk about this.
The Studio Ghibli moment. Ah, yes. Sparked by OpenAI's latest image tool, right? I mean, it feels like everyone's creating these beautiful, like, nostalgic scenes that look like they're straight out of, like, a classic animated film. I'm seeing them everywhere. Yeah, I've got to say, it's pretty impressive how fast this has taken off. And the quality is...
Quite amazing. You probably see them popping up in your feeds, your pet as a Ghibli character, your breakfast as a Ghibli scene, your living room with that whimsical Ghibli touch. Yeah, there's definitely a sense of wonder and playfulness around it.
But I think it also brings up some really, really interesting questions, especially when we think about copyright. Yeah. I mean, it's hard not to think about Hayao Miyazaki's views on AI art when you see these incredibly accurate stylistic replications. I mean, he's the co-founder of Studio Ghibli, and he's been pretty outspoken about his concerns. Right. Right. It's a bit of a touchy subject. Yeah. But that's what makes this so fascinating is that the intersection of technology, creativity, and the law.
Right. And I think that's the heart of the issue here. While you can't copyright a general artistic style, the legal debate is centered on whether training these AI models on a massive amount of copyrighted material, like, you know, decades of Studio Ghibli films, is that fair use or is that crossing the line into infringement? It's tricky. It is. It is. It's a very complex legal landscape and there's not a lot of precedent to go on. What's interesting is that from what I've seen,
OpenAI's tool seems particularly good at capturing that specific Ghibli aesthetic. Even better than some of the other platforms out there. Now, OpenAI has said that they're okay with people imitating broader studio styles, but they want to avoid directly copying the work of individual living artists. It's a subtle distinction.
But it's an important one. Yeah, it is. It is. So if we kind of zoom out here, what does this all mean? It seems like AI image generation isn't just about creating any image anymore. It's about mastering and replicating these specific visual identities. Right. It's about style and almost like a visual signature. And that raises some pretty interesting questions for all of us about ownership and creativity in this digital age. Could this lead to like
totally new legal frameworks to address AI-driven stylistic replications? Will there be more demand for transparency about the data that's used to train these models? Yeah, I think we're going to see some very interesting legal battles play out in the coming years. It's inevitable. Absolutely. Technology always seems to be a few steps ahead of the law. Yeah, always playing catch up. All right, so let's switch gears now. Let's move from the world of art to the world of...
Big money. Let's talk about OpenAI's financial projections. Oh, yeah. They're making some serious bank. Their projected growth is pretty mind-blowing. They're looking at $12.7 billion in revenue for 2025. That's more than three times what they earned last year. Wow. Those are numbers you usually only see with, like,
The tech giants. What's driving this huge surge in revenue? Well, a few things. The continued popularity of ChatGPT Pro subscriptions is a big factor. People are really finding value in those advanced features. And then there's the increasing adoption of their API. You know, that's the interface that lets other companies plug OpenAI's AI models into their own products and services. Right, right. It's like renting out their AI brains. Exactly. And then you've got their enterprise tools, which are becoming increasingly popular with businesses.
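For listeners who want to see what that API access actually looks like from a developer's side, here's a minimal sketch using OpenAI's official Python SDK. The model name and prompts are just placeholders for illustration, not anything specific to the products mentioned in this episode.

```python
# A minimal sketch of calling OpenAI's API from another product,
# using the official openai Python SDK (pip install openai).
# The model name and prompts below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful in-app assistant."},
        {"role": "user", "content": "Summarize today's AI news in two sentences."},
    ],
)

# The generated text comes back in the first choice of the response.
print(response.choices[0].message.content)
```

That handful of lines is essentially what "renting out their AI brains" means in practice: the other company's product sends a request, OpenAI's model does the thinking, and the response comes back over the API.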
And their team plans are also bringing in a lot of revenue. So it's a pretty diverse revenue stream. Oh, yeah. They've got a lot of irons in the fire. But here's the interesting part. Even with all this money pouring in, billions of dollars, they're still expecting to be cash flow negative until 2029. Really?
That's a pretty long runway of high revenue, but also high spending. What's the thinking there? Well, it really shows you how much investment is needed to be at the absolute forefront of AI development. They're spending enormous amounts on things like specialized chips, the hardware that's needed to run these incredibly complex models, the massive data sets that they use to train these AIs.
and just the fundamental infrastructure to support all of it. Right. It's like building a whole new digital universe. Exactly. They're not just building a product. They're building a platform. They're building an ecosystem. And their future bets are even bigger. Their projections for 2026 and beyond are really, really eye-opening. $29.4 billion in 2026 and potentially over $125 billion by 2029. And that's when they expect to finally turn cash flow positive.
Yeah, big ambitions. They're really playing the long game here. It's pretty remarkable. It's also interesting to note that around 75% of their current revenue comes from individual paid subscriptions. Oh, that's a huge number. It really highlights the power of a direct-to-consumer AI product like ChatGPT that people are actually willing to pay for. Yeah, people are finding real value in these tools and they're willing to put their money where their mouth is.
So what's the big takeaway for you here? Well, it seems like OpenAI is operating on a scale that we usually only see with the biggest tech companies in the world, this kind of hyperscaler growth trajectory. But it also really highlights the immense financial commitment that's needed to push the boundaries of AI. Right. It's a high stakes, high reward game.
And they're clearly all in. Now let's move back to image generation. We've seen some really exciting developments from another player in this space, Ideogram. Yes, Ideogram. Very interesting company. Their new release, Ideogram 3.0, is making waves. The buzz suggests it's a pretty major upgrade. What makes it stand out? Well, from what I've seen, it's the combination of realism and creative flexibility.
It's said to be really good at generating photorealistic images but also coming up with inventive designs. And, crucially, it can maintain a consistent artistic style across multiple images. So if you want to create a series of images with a specific look and feel, it can do that. That sounds incredibly useful for designers and artists. Oh, absolutely.
It's also much faster than previous versions, which is always a welcome improvement. And you can access it through their website or their iOS app. So it's pretty accessible. Yeah. They've done a good job of making it user-friendly. And it's not just about creating pretty pictures. It sounds like they focused on some very practical applications, too. Yes, exactly. They've added new text rendering and graphic design capabilities so it can create complex layouts, logos, typography, that kind of thing. Really? So it's like a...
a multi-tool for visual content creation. That's a good way to put it. It's for designers, marketers, anyone who needs to combine text and images in creative ways. And how does it stack up against the competition? Well, in testing, it apparently outperforms some of the leading models out there, like Google's Imagen 3,
Flux Pro 1.1, and Recraft V3. So that's pretty impressive. It seems like they've made some significant breakthroughs. Definitely sounds like it. Tell me more about this style references feature. Oh, yeah, that's a really cool feature. You can upload up to three images and the AI will analyze their visual style and use that as a guide for generating new content. So if you want to create images in the style of your favorite artist or photographer, you can do that.
Wow, that's really intuitive. Yeah, it's very user friendly. They also have a huge library of preset styles, like billions of them. So you can explore all sorts of different aesthetics. And all of these features are available to free users. Yeah, that's the really impressive part. They're really trying to make this technology accessible to everyone. So what's the big picture here with Ideogram 3.0? Well, I think they're really raising the bar for text to image generation.
They're offering a very powerful but also very accessible tool for a wide range of creative endeavors. So we've seen how AI is changing the way we create images, but now let's look at how it's becoming more integrated into our everyday lives. Yeah, like in our cars. Exactly. The partnership between BMW and Alibaba is generating a lot of buzz. It's a very interesting collaboration. They're working together to embed some pretty advanced AI into BMW's upcoming vehicles, specifically in China initially.
So they're developing this bespoke AI engine to enhance the BMW Intelligent Personal Assistant. That's the IPA. Right. The IPA. And the goal is to achieve much more sophisticated voice recognition and offer those super personalized in-car services. And this isn't some far off concept, is it? Nope.
This is going to be a key feature in their Neue Klasse models produced in China starting in 2026. That's right around the corner. Very soon. Yeah. And the AI that will power this enhanced IPA is Alibaba's Qwen AI model.
The idea is to allow the in-car assistant to understand natural language with greater nuance and provide information that's actually relevant to what's happening during your journey. So what kind of things will this AI assistant be able to do? Well, imagine you're driving and you say, hey, I'm feeling hungry. Find me a good sushi restaurant nearby with lots of parking. Okay, yeah. The AI assistant will understand that request.
factor in your location, your preferences, maybe even the time of day and the traffic conditions, and then give you a list of recommendations. That's pretty cool. Or imagine you're driving in a crowded city and you need to find a parking spot. You just ask the AI assistant and it'll scan for available spaces and guide you to the nearest one. It's like having a personal concierge in your car. Exactly. And BMW wants to take this even further.
They're planning to introduce two dedicated AI agents, CarGenius and Travel Companion. Oh, yeah. I read about that. So CarGenius will be all about vehicle diagnostics, like if there's a problem with your car. Right. It'll help you understand what's going on and what you need to do. And then Travel Companion will be all about personalized recommendations for places to visit, things to do, and it'll help you plan your trips. So they're really building out a whole suite of AI-powered features for the car? Yeah, it's pretty ambitious.
And they're not just relying on voice commands either. They're incorporating multimodal inputs. That means the system will understand information from your gestures, your eye movements, even your body posture. Wow. So they're really trying to create a very intuitive and responsive driving experience. Exactly. They want the car to understand you, not just your commands.
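To make that multimodal idea a bit more concrete, here's a purely hypothetical sketch of the kind of context such a system might fuse before responding. None of these field names come from BMW or Alibaba; they only illustrate how combining speech with gaze, gesture, and driving context can disambiguate a vague request.

```python
# A purely hypothetical sketch of fusing multimodal cabin signals.
# None of these names come from BMW or Alibaba; they only illustrate
# how speech plus gaze, gesture, and driving context could be combined.
from dataclasses import dataclass

@dataclass
class CabinContext:
    utterance: str    # what the driver said
    gaze_target: str  # what the driver is looking at
    gesture: str      # detected hand gesture, if any
    location: str     # current position (simplified)
    traffic: str      # live traffic condition

def interpret(ctx: CabinContext) -> str:
    # A real system would feed all of this into a multimodal model;
    # here we just show how extra signals resolve an ambiguous request.
    if "park" in ctx.utterance and ctx.gaze_target == "curbside":
        return f"Scanning for parking near {ctx.location} (traffic: {ctx.traffic})."
    return f"Handling request: {ctx.utterance!r}"

print(interpret(CabinContext(
    utterance="Can I park over there?",
    gaze_target="curbside",
    gesture="point_left",
    location="downtown block 4",
    traffic="heavy",
)))
```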
So what does this all mean for the future of driving? Well, I think it points toward a future where our cars are much more than just vehicles. They're intelligent assistants, travel companions, and maybe even friends. I'm not sure about friends, but definitely more integrated into our lives. Definitely. Now, let's talk about another interesting development from Alibaba.
They've unveiled a new multisensory AI model specifically for mobile devices. Right. I read about that. It's called Qwen 2.5 Omni 7B. That's the one. Yeah. And it's been optimized to run super efficiently on smartphones and laptops. The really impressive thing is that it can process text, images, audio, and video all very quickly. So it's a very versatile model. Yeah. Incredibly versatile.
And there are rumors that some big players are interested in adopting this technology. Oh, really? Who? Well, apparently Apple is planning to integrate Alibaba's AI models into some new iPhone features, specifically for the Chinese market. Wow, that's a big endorsement. Yeah, it is. And of course, BMW is also planning to use it in their vehicles, as we just discussed.
For developers who want to explore this technology further, Alibaba has made Qwen 2.5 Omni 7B open source. It's available on Hugging Face and GitHub. So anyone can download it and start experimenting with it.
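For anyone who wants to actually try that, here's a minimal sketch of pulling the open weights from Hugging Face with the huggingface_hub library. The repo id is an assumption based on the model name mentioned in this episode, so check the real model card before running it.

```python
# A minimal sketch of downloading open-source weights from Hugging Face.
# The repo id "Qwen/Qwen2.5-Omni-7B" is an assumption based on the model
# name mentioned in this episode; confirm it on the actual model card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Qwen/Qwen2.5-Omni-7B")
print(f"Weights downloaded to: {local_dir}")

# Running the multimodal model itself (text, image, audio, video in;
# text or speech out) uses the model-specific classes documented on the
# card, which we don't reproduce here since the exact API may vary.
```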
Yeah, they're really encouraging innovation. So how does this model work? What's this thinker talker system I read about? Yeah, it's a really clever design. It allows the model to process information from different modalities like text, audio, images and video all at the same time. So it's like a multi-sensory AI. Exactly. And it can generate responses in either text or speech.
This simultaneous processing is what makes the interactions so smooth and natural. It's like it's thinking and talking at the same time. Exactly. Hence the name Thinker Talker. And it's particularly good at understanding and generating speech. In fact, it outperforms some AI models
that are specifically designed for audio tasks. Wow, that's impressive. Yeah, it's a really powerful model. And because it's designed to run efficiently on everyday devices, it has a lot of practical applications. Right, like real-time translation, image recognition, voice control. Yeah, all sorts of things. Alibaba has even talked about using it to provide real-time audio descriptions for people with visual impairments. So it's not just about cool tech.
It's about accessibility, too. Exactly. By making it open source, Alibaba is basically saying, here's a powerful tool. Go build amazing things with it. So they're really trying to democratize AI development. Absolutely. And I think that's a good thing. The more people who have access to these tools, the more innovation we'll see. Now, let's shift gears a little bit and talk about a pretty bold prediction about the future of work.
Bill Gates recently said that he thinks AI will replace many doctors and teachers within the next decade. Wow, that's a big statement. Yeah, it is. But his reasoning is that AI will eventually be better than humans at handling complex tasks like medical diagnosis and personalized instruction. So he's saying that AI will be able to make better decisions than doctors and teach better than teachers? Well, not necessarily better, but certainly more efficient and maybe even more effective in some cases.
And this could have huge implications for the workforce. Millions of people could potentially be out of jobs. It's a little scary to think about. Yeah, it is.
But it's something we need to start thinking about now. If this prediction comes true, we need to figure out how to adapt. Right. We need to start preparing for a future where AI plays a much bigger role in our lives. Exactly. And that means thinking about our education systems, our social safety nets, and our economic models. We need to make sure that everyone benefits from the rise of AI, not just a select few. Absolutely. Now let's circle back to OpenAI for a moment. They've just announced a
Pretty big upgrade to ChatGPT. Oh yeah, they've integrated the advanced image generation model from GPT-4o directly into ChatGPT. So now you can generate really vivid, detailed, and realistic images right within your ChatGPT conversations. Yep, that's right. So it's like a one-stop shop for content creation. Exactly. You can write text, generate images, all in one place.
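As a rough programmatic analogue, here's a minimal sketch of image generation through the Images endpoint of OpenAI's Python SDK. The new GPT-4o-based generator discussed here was announced inside ChatGPT itself; this sketch uses the older "dall-e-3" model name, which the API already supports, purely to illustrate what the same kind of request looks like in code.

```python
# A minimal sketch of image generation through OpenAI's API.
# The GPT-4o-based generator discussed here was announced inside ChatGPT;
# the "dall-e-3" model name below is used only to illustrate the
# programmatic equivalent via the existing Images endpoint.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

result = client.images.generate(
    model="dall-e-3",
    prompt="A cozy cabin in a snowy forest at dusk, warm light in the windows",
    size="1024x1024",
    n=1,
)

# The response contains a URL pointing to the generated image.
print(result.data[0].url)
```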
And the quality of the images is really good from what I've seen. It's on par with some of the dedicated image generation tools out there. So ChatGPT is becoming more and more like a
A creative Swiss army knife. Yeah, that's a good analogy. It's a very powerful and versatile tool. Now, for a slightly more concerning development, North Korea has unveiled some new AI-powered military technology. Oh, yeah, I read about that. It's a little unsettling. Yeah, it's definitely something to keep an eye on. Kim Jong-un recently showed off a new spy drone and, more worryingly, some AI-equipped drones.
Suicide drones. Suicide drones. Yeah, they're basically autonomous weapons that are designed to seek out and destroy targets. And they're using AI to do that.
Yeah, the AI helps them to identify targets, navigate to them, and then detonate their explosives. This is a significant development. It shows that North Korea is serious about integrating AI into its military capabilities. And that could have big implications for regional stability. Definitely something to watch closely. Absolutely. Now, let's talk about Alibaba's Qwen model again. We talked about it earlier in the context of mobile devices and cars.
But there's another aspect to its release that's worth highlighting. Oh, right. The fact that it's open source. Exactly. By making it open source, Alibaba is essentially giving away its AI technology to the world. But why would they do that? Well, they say they want to democratize access to AI and accelerate innovation. But there are also some strategic reasons. Like what? Well, by making their model open source, they're creating a community of developers who are familiar with their technology.
And that could give them an advantage in the long run. It's like building an ecosystem around their AI. So it's a bit like giving away the razor to sell the blades. Yeah, something like that. But in this case, the blades are the applications and services that are built on top of their AI model. So it's a clever strategy. Very clever. And I think it's going to pay off for them in the long run. We covered a lot of ground today. But there were also some other notable AI developments on March 27th. Yeah, it was a very busy day in the AI world.
Just to give you a quick rundown, OpenAI announced that they're adopting Anthropic's Model Context Protocol. That will allow their models to better integrate with external data and software. Microsoft unveiled some new AI-powered agents for their Microsoft 365 Copilot suite. These agents are designed to help with research and data analysis. Okay, that's interesting. A federal judge rejected a request from the Universal Music Group to block Anthropic from using song lyrics to train their Claude model.
The judge said that UMG hadn't shown that they would suffer irreparable harm.
So that's a win for AI developers who want to use copyrighted material for training. Hmm. That's interesting. I wonder if that'll set a precedent. Yeah. It could be a landmark case. We'll have to see how it plays out. xAI integrated their Grok chatbot into Telegram. That's the messaging app. Okay. So Telegram premium users now have access to this AI chatbot. Amazon launched a new AI-powered shopping feature called Interests. Oh, yeah. I heard about that.
It's basically a way to get personalized product recommendations based on what you're interested in. You just tell Amazon what you're looking for in natural language. So no more endless scrolling through product listings. Yeah, hopefully not. And finally, the U.S. government added over 50 Chinese tech companies to an export blacklist. They're targeting companies that are developing advanced AI. So the AI arms race is heating up. Definitely heating up. It's a very interesting time to be following this field. You know, with all this talk about competition,
cutting-edge technology and AI changing the world. If you're feeling inspired to level up your own skills, you might want to check out Etienne's Djamgatech app. Oh yeah, that's a great resource. It's designed to help you master and ace over 50 in-demand certifications in all sorts of fields, like cloud computing, finance, cybersecurity, healthcare, even business.
So whether you're looking to boost your current career or explore something new, Djamgatech can help you get there. Yeah, it's a really valuable tool, especially in today's rapidly changing job market. You can find the app links in the show notes. So we covered a lot of ground today. We did. We talked about AI image generation, OpenAI's incredible growth, Ideogram's new model, AI in cars, AI on our phones, Bill Gates' prediction about the future of work.
ChatGPT's new features, North Korea's AI weapons, and the global AI landscape. It's a lot to take in. So what stands out to you? What are you most curious about? Yeah, what are your takeaways? What surprised you? What made you think? We want to make sure you walk away from this deep dive feeling informed and inspired. And maybe even a little bit challenged. So here's a final thought for you to chew on.
We've seen how much AI has advanced in just one day. So what aspects of your life do you think will be most profoundly changed by AI in the next few years? Yeah, how do you think AI will shape your future, your work, your relationships, your hobbies, even your sense of self?
It's a big question, but it's one worth asking. And remember, if you're looking to upskill and stay ahead of the curve in this rapidly changing world, check out Etienne's Djamgatech app. You can find the links in the show notes. Thanks for joining us for this deep dive.