Welcome to the Deep Dive, the show where we take a stack of fresh sources, crack them open, and really pull out the most important nuggets of knowledge and insight just for you. And just so you know, this is a brand new episode of the podcast AI Unraveled. It's created and produced by Etienne Newman. He's a senior engineer and, get this, a passionate soccer dad from Canada. But before we really jump in, quick reminder, please do like and subscribe over at Apple Podcasts. It really helps and you won't miss any of our Deep Dives.
Okay, so today we're diving headfirst into the AI Unraveled, July 3rd, 2025 Daily Innovations Chronicle. Our mission, simple, give you that shortcut to being genuinely well-informed about the absolute latest in AI. We'll hit you with some surprising facts and, you know, hopefully keep it entertaining. And it's quite the chronicle, isn't it? I mean, the pace of change in AI right now is just...
Well, it's breathtaking. A daily snapshot like this, it's essential. It's not just about knowing what happened yesterday. It's about really understanding why it matters in the bigger picture. We're here to help you connect those dots, kind of grasp the implications. Exactly. Okay, let's unpack this first one. Our first big theme today, it really highlights this sort of dual nature of AI's impact, especially on content creation and information. It's fascinating stuff, but yeah, it also raises some pretty big questions. Let's start with something frankly alarming. There are these
offensive deepfake videos, AI-generated, and they're spreading like wildfire on TikTok right now. We're not talking about harmless stuff. These are short clips using really blatant racist and anti-Semitic tropes, targeting different groups with awful stereotypes. What's really concerning is there's actually a Veo watermark on these little eight-second clips. That confirms they came from Google's Veo 3 model, their cutting-edge video generation AI.
And even though TikTok's own rules clearly ban hate speech, these hateful things are just spreading. The comment sections often just echo the same harmful caricatures. I agree. Yeah.
That is incredibly troubling. And it goes beyond just, you know, putting pressure on TikTok or other platforms. What this incident really exposes, I think, is a critical vulnerability in the algorithms themselves. They're designed for engagement above all else. Right. So maybe the solution isn't just better moderation after the fact. Maybe it requires a fundamental rethink of AI's role in spreading content, building ethical
filters right at the source, at the generation stage, because the speed this stuff spreads, plus how hard it is to trace, it's a unique challenge. Absolutely. A really tough one. Okay. Let's shift gears a bit, but stick with AI content, music.
There's this AI-powered band that went viral, hit over half a million monthly listeners on streaming platforms. What's interesting is how they just sort of appeared zero digital footprint, as the source says. That naturally led to a ton of skepticism online, especially from Reddit users and musicians trying to figure out if they were real. Get this. Deezer, the music platform, actually flagged potential AI usage. But Spotify? Apparently no disclosure requirements there.
The "band" initially just brushed off the AI claims, called them lazy and baseless. But then an adjunct member, this guy Andrew Freelon, admitted they did use Suno, the AI music generator, for at least some tracks. He even mentioned using Suno's persona feature to keep the vocal style consistent.
So they were actively trying to make it sound cohesive. Right. And what stands out here isn't just that AI music can get popular. That's maybe not so surprising anymore. It's the whole debate it kicks off around transparency. You know, sure, this band eventually fessed up, but it really makes you wonder how we'll even know in the future. And maybe more profoundly, if
the content's good, does it even matter if a human made it? This whole thing is a pretty clear signal. AI music is here. It's hitting the mainstream. And it's already blurring those lines around originality and authorship. We're going to have to rethink what creator even means. Definitely forces a shift in perspective. And look, if hearing about this stuff, these creative uses of AI, whether it's music or video, gets you thinking, gets you inspired. Maybe you want to understand how to build your own AI innovations or maybe get certified in this rapidly growing field. Well,
Etienne Newman, who created this show, AI Unraveled, has some fantastic resources. Seriously, check out his AI cert prep books, things like the Azure AI Engineer Associate Guide or the Google Cloud Generative AI Leader Certification Prep. They're designed to help anyone really get certified in AI and give their career a boost. You can find them all at djamgatech.com.
Okay, let's stay on content for a moment longer. Let's talk VTubers, virtual YouTubers, but fully AI-generated ones. This is wild. We're talking about Bloo, a virtual YouTuber with, wait for it, 2.5 million subscribers and over 700 million views. Bloo was created by Jordi van den Bussche, maybe you know him as Kwebbelkop, a longtime human YouTuber. He apparently built Bloo to deal with the insane demands of daily content creation after hitting burnout himself. Smart.
Maybe. So Bloo uses a whole suite of AI tools, ChatGPT, Gemini, ElevenLabs for voiceovers, thumbnail creation, even translating content for a global audience.
Kwebbelkop has even tried fully AI-generated episodes. He admits they're not quite as good as the human-guided ones. No? Yet. Yet is doing a lot of work there, I think. Oh, absolutely. And the really compelling part here, from my perspective, is how AI just fundamentally changes the economics of making content. It lets you scale these digital personalities, you know, blow past the usual production bottlenecks. That's why we're seeing this VTuber explosion. Bloo, for example, apparently generated...
seven figures in revenue without a single human on camera. That's a game changer. This whole trend could seriously redefine entertainment, and it pushes all these ethical, creative, and frankly labor questions right to the front. Think about it: we're already seeing tools like Hedra's Character-3 animating AI characters in real time, platforms like TubeChef offering tools to make faceless AI videos for like
18 bucks a month. It's a completely new landscape emerging. It really is. OK. Shifting focus slightly within platforms. Elon Musk's X is rolling out an AI-driven fact-checking tool. The idea is it will automatically analyze posts and flag potentially misleading content for their community notes system. Though, crucially, the AI-created notes only go public if they get reviewed and approved by human contributors from different viewpoints. So there's a human check.
Right. A human check is key. And while, you know, this might help curb the tide of misinformation, which is a huge problem, there's a flip side, isn't there? Could this just fuel new debates about censorship? Could it intensify these already heated controversies around AI moderation and who gets to decide what's true? It's a tricky balance. Very tricky.
Okay, one more on the information content front before we move on. Cloudflare. They've created a new marketplace, basically. It lets website owners charge AI companies every single time their bots crawl the site for data. This is a pretty big reversal of, like, decades of open web policy where crawling was generally free. Big publishers like Condé Nast, Time, and The Atlantic are already jumping on board. They're citing traffic losses, basically saying AI companies are benefiting from their content without giving back. Yeah, and if you connect
this to the bigger picture, it makes sense, right? AI training needs massive amounts of data. So creators and publishers are, I think quite rightly, starting to demand compensation. This could set a real precedent for a fairer internet economy, one driven more by content licensing, especially when you hear things like OpenAI's web crawler apparently hitting sites 1,700 times for every visitor it refers back, compared to Google's 14-to-1 crawl-to-referral ratio. That's a huge difference in data harvesting intensity. Wow, 1,700 times. OK.
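For the technically curious, the mechanics of a pay-per-crawl gate are simple to sketch. To be clear, this is a hypothetical illustration, not Cloudflare's actual implementation; the crawler names and prices below are invented, though HTTP 402 "Payment Required" is the natural status code for this kind of response.

```python
# Hypothetical sketch of a pay-per-crawl gate. Crawler names, prices, and
# the billing logic are invented for illustration.

PRICE_PER_CRAWL_USD = {           # publisher-set per-crawl prices (made up)
    "example-ai-bot": 0.002,
    "another-crawler": 0.001,
}
PAID_UP = {"example-ai-bot"}      # crawlers with an active payment agreement

def handle_crawl(user_agent: str) -> tuple[int, str]:
    """Return (HTTP status, message) for an incoming bot request."""
    if user_agent not in PRICE_PER_CRAWL_USD:
        return 403, "unknown crawler: blocked"
    if user_agent in PAID_UP:
        return 200, "content served; crawl billed"
    price = PRICE_PER_CRAWL_USD[user_agent]
    # 402 Payment Required: quote a per-crawl price instead of serving content
    return 402, f"payment required: ${price:.3f} per crawl"

print(handle_crawl("example-ai-bot"))    # paid-up crawler gets the content
print(handle_crawl("another-crawler"))   # known but unpaid: gets a price quote
print(handle_crawl("mystery-bot"))       # unknown crawler: blocked outright
```

The point of the sketch is the reversal it encodes: instead of crawling being free by default, access becomes a priced transaction the publisher controls.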
So that's the content and information side. What about AI's core abilities? What breakthroughs are we seeing there? Because beyond content, there's some incredible advancements happening, even for social good. Let's talk about Sakana AI out of Japan. They developed a technique that actually teaches multiple AI models to collaborate, to team up, almost like a human team works together. They built a system combining models like ChatGPT, Gemini, and DeepSeek. And get this.
This multi-model team solved 30% of these complex ARC-AGI-2 reasoning puzzles.
The best solo models only hit about 23%. So teamwork makes the dream work, apparently even for AI. They call the framework TreeQuest, and it uses something called AB-MCTS. Fancy acronym, but it basically dynamically assigns tasks to the models based on their strengths, and the models learn from each other's mistakes, building on partial solutions. Right. And what's fascinating here is how perfectly this aligns with other trends we're seeing, like the rise of swarms of AI agents.
It suggests the biggest breakthroughs in the future might not come from one single giant supermodel, but from teams of specialized AIs working together, this kind of swarm intelligence could unlock more scalable, more adaptable AI systems. Think about logistics, complex planning, even defense applications. It's a really interesting direction. That is cool. For you listening, if you want to dive deeper into how these complex AI systems, these teams of models are actually built, Etienne Newman's AI Unraveled Builders Toolkit is an invaluable resource.
Seriously, it includes a series of AI tutorials, PDFs, audio, video, plus AI and machine learning certification guides. Everything you need to actually start building with AI. You can find the links in the show notes, again at djamgatech.com.
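Before we move on, if you're wondering what "dynamically assigning tasks to the models based on their strengths" can look like in code, here's a toy sketch. This is emphatically not Sakana's TreeQuest or AB-MCTS implementation; it's a simple Thompson-sampling bandit over three stand-in "models" with invented success rates, just to show how a router can learn to favor whichever solver is actually working.

```python
# Toy sketch of adaptively routing attempts to the best-performing model.
# NOT Sakana's TreeQuest/AB-MCTS code; the "models" are stand-in functions
# with made-up, hidden success rates.
import random

random.seed(0)

MODELS = {
    "model_a": lambda: random.random() < 0.30,  # strongest solver
    "model_b": lambda: random.random() < 0.20,
    "model_c": lambda: random.random() < 0.10,  # weakest solver
}

def route_attempts(n_attempts: int) -> dict[str, int]:
    """Thompson-sampling routing: explore all models, then favor winners."""
    wins = {m: 1 for m in MODELS}    # Beta(1, 1) priors
    losses = {m: 1 for m in MODELS}
    for _ in range(n_attempts):
        # sample a plausible success rate per model, send work to the best draw
        pick = max(MODELS, key=lambda m: random.betavariate(wins[m], losses[m]))
        if MODELS[pick]():
            wins[pick] += 1
        else:
            losses[pick] += 1
    return {m: wins[m] + losses[m] - 2 for m in MODELS}  # attempts per model

usage = route_attempts(300)
print(usage)  # the strongest model should end up with the most attempts
```

The design point carries over to the real systems: no single model has to be best at everything, because the router learns where each one pays off.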
Okay, now for something potentially even more profound. Scientists have built an AI that can, and I quote, think like humans. They're calling it the Centaur model. This isn't just another LLM. It's a whole new cognitive architecture designed to let AI simulate human-like thought processes, abstract reasoning, planning ahead, even that quirky human thing called mental time travel, thinking about the past and the future the way we do.
Well, researchers took Meta's Llama model, one of their big language models, and fine-tuned it. But not on web text. They used data from 60,000 human participants across 160 different psychology experiments. They basically taught the AI to replicate the human decision patterns observed in those experiments. And the resulting Centaur model? It accurately predicts human choices and behaviors in tasks it's never even seen before.
It apparently outperformed 14 traditional cognitive models on 31 out of 32 different tasks, things like gambling simulations, memory tests, problem-solving scenarios.
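For anyone wondering what "accurately predicts human choices" means concretely, behavioral models are typically scored by the likelihood they assign to people's observed decisions: lower negative log-likelihood on held-out choices means a better model of behavior. Here's a minimal sketch of that scoring with invented toy numbers, not the paper's actual data or code.

```python
# Minimal sketch of how choice-prediction models are compared: the model
# assigning higher probability to the observed human choices wins.
# All probabilities and choices here are invented toy data.
import math

human_choices = [1, 0, 1, 1, 0]          # 1 = chose option A, 0 = option B

# each model outputs P(choose A) for the same five trials
model_tuned    = [0.8, 0.3, 0.7, 0.9, 0.2]  # tracks human behavior
model_baseline = [0.5, 0.5, 0.5, 0.5, 0.5]  # coin-flip baseline

def neg_log_likelihood(p_choose_a, choices):
    """Sum of -log P(observed choice) across trials; lower is better."""
    total = 0.0
    for p, c in zip(p_choose_a, choices):
        total += -math.log(p if c == 1 else 1.0 - p)
    return total

nll_tuned = neg_log_likelihood(model_tuned, human_choices)
nll_base = neg_log_likelihood(model_baseline, human_choices)
print(f"tuned: {nll_tuned:.3f}  baseline: {nll_base:.3f}")
assert nll_tuned < nll_base  # the model that tracks human choices scores better
```

Scale that comparison up across thousands of participants and dozens of tasks and you have the shape of the evaluation behind those 31-out-of-32 results.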
The researchers hope to use it as a kind of virtual laboratory for studying human cognition and mental health. Yeah, if we really connect this to the bigger picture, Centaur's success is more than just a technical achievement. It kind of suggests that human cognition, our decision making might be way more predictable than we often like to think, which, if true, could mean future advanced AI, maybe ASI level models could simulate complex human scenarios with, well, frankly, scary accuracy.
But on the other hand, it's also this potentially massive research tool, right? Imagine running behavioral studies without needing huge budgets or spending years recruiting participants. This could genuinely bridge the gap between today's neural nets and something closer to artificial general intelligence. But yeah, it absolutely raises fresh ethical and safety questions about simulating humans that well. We need to think about those implications. Profound stuff indeed. Okay, let's end this segment on a definitely positive note, though.
AI for good. Scientists are using AI to create a new kind of white paint. Sounds simple, but this paint has ultra-high reflectivity, and it drastically reduces indoor temperatures without using any energy.
Think about the climate impact. So researchers from a bunch of universities, UT Austin, Shanghai Jiao Tong, National University of Singapore, Umeå University, they used machine learning algorithms. They fed in data on over 1,500 potential materials to predict the optimal chemical structures and compositions for maximum reflectivity. And it worked. When they tested it on model houses, surfaces coated with this AI-designed paint stayed between 5 and 20 degrees Celsius cooler than surfaces with regular paint,
even after hours in direct sunlight. The researchers estimate this could save something like 15,800 kilowatt-hours per year for an entire apartment building in a hot climate. Just for context, a typical home AC uses maybe 1,500 kilowatt-hours a year. So that's huge savings. It really is huge. And what's compelling here is how AI tackles a major bottleneck in materials science.
Traditionally, finding new materials is this slow, laborious trial-and-error process. AI speeds that up immensely. This kind of innovation could be absolutely key for sustainable cooling strategies, especially when you consider that 17% of all residential electricity in the US goes just to air conditioning. AI-designed materials like this paint offer substantial energy savings.
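To make "AI speeds that up" concrete, here's a toy sketch of the general pattern behind this kind of work, surrogate-model screening: fit a cheap predictive model from composition features to a measured property, then rank untested candidates by prediction so the lab only synthesizes the most promising ones. Every feature, reflectivity value, and candidate name below is invented for illustration.

```python
# Toy sketch of ML-guided materials screening. A k-nearest-neighbor
# surrogate predicts reflectivity from (made-up) composition features,
# and untested candidates are ranked by predicted reflectivity.

# (feature vector, measured solar reflectivity) for a few "tested" recipes
measured = [
    ((0.9, 0.1), 0.88),
    ((0.7, 0.3), 0.84),
    ((0.5, 0.5), 0.79),
    ((0.3, 0.7), 0.72),
]

def predict(features, data, k=2):
    """Average the reflectivity of the k most similar tested recipes."""
    nearest = sorted(
        data,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], features)),
    )
    return sum(refl for _, refl in nearest[:k]) / k

candidates = {                    # untested compositions (also made up)
    "candidate_x": (0.85, 0.15),
    "candidate_y": (0.40, 0.60),
}
ranked = sorted(candidates, key=lambda c: predict(candidates[c], measured),
                reverse=True)
print(ranked[0])  # the composition predicted most reflective gets tested first
```

The real work used far richer chemistry features and better models, but the loop is the same: predict, rank, synthesize only the top picks, and feed the new measurements back in.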
It could be a critical tool for helping cities everywhere adapt to rising global temperatures. Really promising stuff. Definitely promising. Okay, let's switch gears now. Let's talk about the business of AI. Because the stakes here are incredibly high and the landscape is just constantly shifting. It's dizzying sometimes. We have to start with OpenAI's recent moves. First, this
absolutely massive multi-year cloud deal they signed with Oracle. We're talking 4.5 gigawatts of computing power. The deal is valued at roughly $30 billion annually. That's enormous. It clearly signals they're diversifying beyond just Microsoft Azure. And it's tied to OpenAI's huge Stargate initiative, their plan for giant data centers. To meet this demand, Oracle actually has to build new U.S. data centers with capacity equal to maybe a quarter of the entire nation's current supply.
And Oracle's planning to buy 400,000 of Nvidia's latest GB200 chips, spending around $40 billion just to power this stuff. Right. This Oracle deal is a massive strategic play for OpenAI's future scale.
It tells us a few things. One, they're absolutely hedging their cloud strategy, not putting all their eggs in the Microsoft basket. Two, they are securing the sheer, almost unimaginable compute power they're going to need for the even larger AI models they're clearly planning. Plus, their big push into enterprise services. It's laying the foundation for whatever comes next.
And it's not just about the hardware, the compute power. OpenAI is also, somewhat quietly, rolling out a new high-level enterprise consulting arm. They're going directly after Fortune 500 companies now, offering bespoke AI solutions, strategic development, the whole package. They've hired these forward-deployed engineers. Interesting title. Apparently, many came from Palantir, known for that kind of embedded work. And the price tag?
Customers have to commit at least $10 million just for access, with some deals reportedly reaching hundreds of millions over several years. Yeah. What's fascinating here is seeing OpenAI move so decisively beyond just being an API provider or a chatbot maker. They're now offering hands-on strategic support. It really cements its role as both the core AI innovator and a direct partner for big enterprises.
They're basically positioning themselves to compete directly with giants like Accenture or Deloitte in the AI strategy space. And they're landing big deals already, like that reported $200 million defense contract with the Pentagon, plus work with firms like Morgan Stanley and Grab. It's a really significant expansion of their business model. They want to be deeply embedded. Okay. And speaking of competition.
Things are getting spicy between OpenAI and Meta. OpenAI CEO Sam Altman reportedly criticized Meta's aggressive recruiting tactics as distasteful. He apparently told his own team, missionaries will beat mercenaries, suggesting OpenAI has a stronger mission-driven culture compared to Meta just throwing money around. He even claimed Meta failed to poach its top targets, despite offering packages worth up to $300 million over four years. Right, and this really throws the whole
war for AI talent into sharp relief, doesn't it? It's not just about money, though clearly huge sums are involved. Altman's framing it as a clash of cultures, maybe even philosophies. OpenAI's mission versus what he implies is a more fleeting flavor of the week approach at Meta.
These philosophical differences could genuinely shape the future direction of the field, depending on who attracts and retains the top minds. It's a high stakes game. Quickly on OpenAI, again, they had to come out and deny reports of a partnership with Robinhood. Apparently, there were false claims about an integration or the sale of OpenAI tokens representing equity, which OpenAI says simply don't exist.
Yeah. And that's maybe a smaller news item, but it connects to a bigger picture point. As AI becomes so central, so hyped, false affiliations, scams using AI buzzwords and just general AI generated misinformation pose significant reputational and regulatory risks for these big tech firms. They have to constantly police this stuff. Good point. OK, let's zoom out a bit to the broader industry and workforce impact.
Ford CEO Jim Farley dropped a pretty stark warning. He predicted AI could eliminate 40 to 50 percent, half, of white-collar jobs within the auto industry. And he's not alone. Leaders at Anthropic and JPMorgan Chase have echoed similar concerns. Some firms are apparently already using AI agents to replace certain HR functions. But then you have the counterpoint. Executives at NVIDIA and OpenAI, perhaps predictably, are pushing back, claiming there's little evidence for such dramatic job loss.
Their line is that AI will mostly just make existing employees more efficient, augmenting jobs rather than outright replacing them. And this just perfectly
highlights the intense debate, the uncertainty around AI-driven automation and what it means for jobs. We're definitely seeing acceleration, especially in those white-collar areas, design, HR, legal, finance. It's fueling these really urgent conversations about reskilling, upskilling, and how our economy needs to adapt. The core question isn't really if jobs will change, but how many, how fast, and can our workforce, our society, actually adapt in time? It's the multi-trillion dollar question, isn't it?
And for those of you listening, maybe navigating these exact workforce changes, feeling that pressure and looking to stay ahead, this is where Etienne Newman's AI cert prep books are incredibly relevant. Whether it's the AWS Certified AI Practitioner Study Guide or Azure AI Fundamentals or the Google Machine Learning Certification Guide, these resources, available at djamgatech.com, are specifically designed to give you the skills and, crucially, the certifications to not just survive, but actually thrive in this evolving AI-driven economy.
worth checking out. Okay, a couple more industry dynamics. Microsoft seems to be hitting some speed bumps with its custom AI chip ambitions. They've decided to temporarily pause parts of that project, apparently to double down on efficiency and maybe lean more heavily on existing vendors like AMD and Nvidia for now. This comes after they reportedly pushed back the release of their Maia 200 chip from 2025 to maybe 2026,
and they've faced delays with another chip called Braga. Yeah, it's a good reminder that even for big tech, building cutting-edge custom hardware is incredibly difficult and prone to delays. It's not easy competing with Nvidia, so these strategic pivots, like maybe developing an intermediary Maia 280 chip that aims for better performance per watt than even Nvidia's 2027 chips,
These kinds of decisions could really determine who leads the next phase of AI compute infrastructure, especially since Microsoft was Nvidia's single biggest customer last year. The stakes are huge for them. Absolutely. And related to Microsoft's focus, they also announced another round of layoffs affecting 9,000 employees. That's less than 4% of their global workforce, but it follows previous cuts.
The rationale seems to be doubling down on AI and cloud priorities, meaning shifts away from other areas. Right. And again, it's just another clear indicator, unfortunately, that this AI transition is accelerating job displacement, even within major tech companies impacting traditional tech roles. It feeds right back into those ongoing debates about upskilling and economic adaptation we were just talking about.
It does seem to be a recurring theme. Finally, in the AI search space, Perplexity AI just launched a new premium tier. It's a hefty $200 per month plan offering advanced AI research tools, much longer context windows for processing information, and generally enterprise-grade performance. And this just shows how that AI search race is really intensifying, doesn't it? Right. We're moving beyond just free or cheap consumer tools.
Premium tiers like this are now directly targeting researchers, professionals, enterprise teams willing to pay top dollar for advanced capabilities. It's becoming a serious B2B market, too. Wow. OK, what a deep dive that was. Just one day's worth of AI news is kind of staggering when you lay it all out, from the, you know, the creative and sometimes deeply problematic side of generative AI
to these truly mind-bending advances in AI's cognitive abilities and then all the high-stakes corporate maneuvering and the real-world impact on jobs.
It's just clear AI is touching, reshaping pretty much every facet of our world now. Yeah, absolutely. If you connect all these dots, the sheer breadth of innovation just in this single daily snapshot is remarkable. I mean, AI designing paint to cool buildings, AI models that can simulate human thought patterns, entirely new business models popping up around AI content. It really underscores that AI isn't just another tool, is it?
It's becoming this fundamental force that's changing how we create things, how we work, maybe even how we understand intelligence itself. So the big question then, what does this all mean for you listening? As you think about everything we've uncovered today, you know, AI making viral hits, AI potentially thinking like us, here's maybe a provocative thought to leave you with. As AI gets better and better at mimicking, maybe even surpassing human capabilities in more and more areas.
How is that going to force us, collectively and individually, to redefine what it even means to be human in the 21st century? And maybe more practically, how are you going to prepare for that future? Something to chew on. Before you go, just one final reminder that this deep dive, all these insights, were brought to you in partnership with Etienne Newman, Senior Engineer, passionate soccer dad from Canada, and the creator and producer of this show, AI Unraveled.
And seriously, if you're feeling ready to move beyond just consuming AI knowledge, if you want to actually engage with it, build things, advance your career in this space, please do explore Etienne Newman's AI cert prep books. We mentioned them earlier. Azure AI Engineer Associate, Google Cloud Generative AI Leader Certification, AWS Certified AI Practitioner, Azure AI Fundamentals, Google Machine Learning Certification. Truly fantastic resources to help anyone get certified and boost their career. Find them at djamgatech.com.
And one more thing. Don't forget Etienne Newman's toolkit. It's called the AI Unraveled Builders Toolkit. It's absolutely packed with practical stuff. AI tutorials in PDF, audio, and video formats, plus those crucial AI and machine learning certification guides to really help you start building with AI today. All the links for everything are right there in the show notes at djamgatech.com. Okay, that is it for us today. Thank you so much for joining us on this deep dive. Until next time, keep exploring, keep learning, and definitely stay curious.