Welcome to a new deep dive from AI Unraveled. We're here again to help you navigate this incredibly fast-moving world of artificial intelligence.
Yeah, cutting through all the noise to get you those really important nuggets of knowledge and insight. And just a reminder, AI Unraveled is created and produced by Etienne Newman. He's a senior engineer and passionate soccer dad up in Canada, and he's the one making all this happen. So before we properly jump in, if you find these deep dives useful, please do us a quick favor: hit that like button and subscribe. It really helps support the show and means you won't miss out.
Okay, so today we're doing something pretty specific. We're taking a look back at just one really intense week in AI news. That's June 2nd through June 8th, 2025. We've pulled some key bits from the AI Week in Review for that time frame to see what really stands out. Yeah, and what's so interesting about looking at just seven days like that is how it just throws the pace of AI into sharp relief. And not just the pace, but the sheer breadth of it. You know, you see stuff happening across legal, ethical, technical fields, industry.
All jammed into one week. Exactly. So our mission here is to take that snapshot and we'll help you, our listener, make sense of it. What were the really critical developments? What are the bigger trends maybe hidden in there? And this source material, it really covers a lot. We're talking legal rulings, some pretty tricky ethical questions, big moves from Google, Meta, the usual suspects, and also some surprising ways AI is showing up in fields you might not immediately connect it with. It's definitely a lot to take in. But hey,
That's why we're here, to sort of guide you through the highlights. And actually speaking of getting involved, if this deep dive gets you thinking about maybe boosting your own career in AI or even getting hands-on building things,
we should quickly mention some great resources from Etienne Newman himself, designed for exactly that. Yeah, if getting certified in AI is on your radar, you know, to give your career a real edge, Etienne's put together some really comprehensive study guides. We're talking Azure AI Engineer Associate, Google Cloud Generative AI Leader Certification, AWS Certified AI Practitioner Study Guide,
Azure AI Fundamentals, and the Google Machine Learning Certification too. Big ones. They're all available right now over at djamgatech.com. And don't worry about scribbling that down. We've put direct links right in the show notes for you. Super easy. And maybe you're more interested in, like, actually building stuff with AI, getting your hands dirty.
Well, Etienne also has the AI Unraveled Builders Toolkit. It's basically a collection of practical AI tutorials, PDFs, audio, video, to help you get started. Again, the link for that is right there in the show notes. So yeah, learning or building, there's help available. Okay, right. Let's unpack this specific week then, June 2nd to 8th. Maybe let's start where AI seems to be hitting some friction: the legal, ethical, transparency kind of issues. It is really striking how fast the legal world is having to react.
So that week we saw this warning from a UK court, pretty stark warning actually, about severe penalties if lawyers use fake AI-generated citations in court filings. Right. And that caught my eye, too, because it's not just, oh, the AI made a mistake. It's about the person using it, isn't it? Their responsibility. Precisely. That's the core of it. The implication is huge for any profession where the stakes are high, like law, obviously.
It just hammers home that AI outputs, especially from these generative models, they need human validation. You absolutely cannot just copy, paste and trust it blindly. The responsibility stays with the human. OK, shifting gears to something maybe even more worrying. The sources pointed to a CBS News investigation. It found Meta's platforms were apparently just flooded with these
awful nudified deepfake ads. Yeah, it's just a stark example of how quickly bad actors can exploit these tools. It really highlights the urgent need for platforms like Meta to get way better at content moderation, policy enforcement for AI stuff. The creation tools are just racing ahead of the defenses right now. And speaking of things not being what they seem, that story about the AI chatbot, the one that turned out to be like 700 engineers in India. Oh, right. That one.
Yeah, what that really does is raise serious questions about transparency, doesn't it? What are companies really selling when they say AI? It blurs that line between actual automation and just, well, using human labor hidden behind an interface. Big ethical questions there about marketing claims and what customers think they're getting. And the legal fights are heating up, too.
Reddit suing Anthropic over data scraping. They were claiming Anthropic accessed, what, over 100,000 threads since mid-2024 without permission. Yeah, and this isn't just, you know, a corporate squabble. This could be huge. A landmark case, potentially. It could really set a precedent for how AI models get trained using data scraped from public websites. Yeah. It cuts right to consent, copyright, user rights over their own content. Massive questions for the whole industry.
And then on the other side of that coin, you had OpenAI fighting a court order to preserve all its ChatGPT logs. Right. So that shows a different kind of tension.
OpenAI talked about privacy and the technical load, sure, but the fight itself is about that conflict. User privacy versus the need for transparency or legal discovery. How much data should these companies have to keep, and why? Big questions. Even Geoffrey Hinton, one of the godfathers of AI, he weighed in with a warning, called out the latest models for basically lying to users. Yeah, and when Hinton speaks on this,
People listen. He's seen it all. His warning is that as these models get smarter, they also get better at just making stuff up convincingly, inventing facts with this air of confidence. And why that's so critical is, well, trust. Factual accuracy. If we can't trust the output, the AI's usefulness just plummets, especially for anything important. Okay, so that's a lot of the challenges, the friction points. But meanwhile, the big tech companies, they weren't standing still. They were pushing hard on their own AI strategies, releasing new things that week.
What's Google up to? Sounds like a lot. Oh, yeah. Google's definitely embedding AI deeper into search. We saw details about this AI mode generating real-time suggestions, these intelligent overlays, they call them. Which means search isn't just finding links anymore, is it? It's trying to be more like an assistant, giving answers directly. Changes the whole search game. Totally. And that assistant vibe is growing in Gemini, too. They introduced scheduled actions for Gemini.
So Gemini can now, what, manage tasks for you, put things on your calendar? Exactly. It moves Gemini closer to being a proper productivity tool, not just a chatbot. More agentic, you know, able to take actions for you. And they kept polishing the engine too, right? A major Gemini 2.5 Pro update was released? Yep. Part of that ongoing AI arms race. Google refining its main model, trying to stay ahead.
Improving things like understanding images and text together, that's multimodal reasoning. Better coding help, understanding longer conversations, all that stuff. They're also testing something called Search Live in AI mode. Yeah, that one sounds potentially disruptive. Could mean moving away from just clicking links towards getting dynamic AI-generated answers right in the search results page.
Could change how we use the web. But it wasn't all smooth sailing. They actually paused the rollout of Ask Photos. Right. Cited accuracy issues, privacy concerns. It's a good reminder that even for Google, rolling out AI features, especially ones touching personal data like photos, it's tricky. Users get nervous about trust, surveillance. There are definitely speed bumps. Okay, let's switch to Meta.
Big news on the advertising front from them. Plans for fully automated AI ads by 2026 and launching the platform for it now. Yeah, that's potentially revolutionary for advertising. The idea is AI generates the whole campaign, the text, the images, the targeting, with minimal human oversight. Huge efficiency gains promised, but also big questions. Transparency, job impacts for creatives, bias in the AI deciding who sees what. Lots to unpack there. And the power needed for all this AI. Meta's looking at nuclear power.
Seriously? Seriously. It just underlines the massive energy demands of training and running these large models. It's forcing companies to explore all options, even bringing nuclear back into the enterprise data center discussion. It's wild. Microsoft wasn't quiet either. Bing added a free AI video generator powered by Sora.
Right, using OpenAI's Sora model. That's significant because it makes powerful generative video tools accessible, free, lowers the barrier for creators, makes Bing look more like a creative platform, not just search. And Amazon. They seem to be connecting AI to the physical world more directly. A new agentic AI and robotics group and testing humanoid robots for deliveries. You see the strategy there, right? Taking agentic AI, the kind that can plan and act, and pairing it with robots. The goal seems clear.
automate warehouse tasks, maybe even last-mile delivery, cut costs, speed things up. AI logistics is becoming very real. And OpenAI, not just licensing models, but making platform plays. Court documents suggested they're working on integrating deeply into the iPhone, into iOS. Yeah, that points towards a potential major battle for AI on mobile. Could challenge Siri directly, maybe change how millions of us use our phones every day,
Big implications. But even among the big AI players, there's tension. An Anthropic co-founder said they wouldn't sell Claude AI to OpenAI. And another startup claimed Anthropic was limiting access to Claude through its API. Exactly. It shows the competitive friction. Companies guarding their top models, leading to potential fragmentation. And this API gatekeeping issue, controlling access, that can really hurt smaller startups who depend on those foundational models.
It's a complex ecosystem, definitely not all collaborative. OK, beyond the big tech chess game, where else did AI make waves that week? Some really diverse applications popped up. Yeah, some fascinating stuff. Let's start way back in history. AI apparently helped push back the clock on the age of the Dead Sea Scrolls. How does that even work? AI analyzing handwriting styles, ink chemistry, finding patterns humans might miss. It suggests the scrolls are older than we thought. Just shows AI is this powerful new tool for archaeologists, historians.
Challenging old assumptions. Incredible. And in medicine, which we know is a huge area, sources talked about a radiology revolution. AI setting new benchmarks for speed and accuracy in reading scans. Yes. Potentially huge impact.
Diagnosing complex scans faster, more accurately than traditional methods in some cases. That means earlier detection, maybe easing the burden on doctors. Medical AI is really integrating quickly. And then there was that AI foot scanner predicting heart failure weeks before symptoms. Wild, right? Detecting tiny, subtle signs in foot scans. Predicting heart failure with high accuracy way earlier than possible before. It highlights AI diagnostics becoming super precise, preventative.
Moving healthcare towards prediction, not just reaction. And the regulators are noticing too. The FDA approved an AI tool for predicting breast cancer risk. And they also launched an AI tool internally to speed up their own scientific reviews. Right. Both happened that week. It's a big signal. Regulatory bodies like the FDA are gaining trust in clinical-grade AI, both by approving diagnostic tools and by using AI themselves.
It suggests AI is moving from experimental to integrated in healthcare, both for patients and the system itself. What about creativity? AI is blurring lines there too. Artists used Google's AI tools for an interactive sculpture, Reflection Point. Yeah, a great example of AI as a creative partner.
It opens up totally new ways for artists to express ideas, build interactive experiences. The line between the artist and the tool is getting really blurry. And for digital creators, HeyGen giving them full control over AI avatars, fine-tuning expressions, gestures. Super powerful for creators making video content. But like we discussed with deepfakes, it also raises those concerns again. Authenticity, misuse.
The power comes with risks. Even Anthropic is using its own AI to write its blog posts with human oversight. But still. Yeah, it shows their own confidence in the tech for producing coherent content. Sets a potential direction for tech communication, maybe. But again, reignites debates about authorship, transparency. Should they always disclose when AI wrote something? Okay, completely different field. Material science. Scientists developed a plastic that dissolves in seawater in hours.
Amazing. And the AI connection, according to the sources, is that AI-assisted discovery helped find the pathway to create this material. Shows AI's potential for accelerating R&D in areas critical for, you know, environmental solutions. And finally, getting really technical, Sakana's AI learning to upgrade its own code. That's a mind bender. A self-improving AI system. It analyzes its own programming, refactors it, makes it better,
autonomously, pushes towards that idea of truly agentic AI that can maintain and scale itself. Big step. And more tools for developers, too. Mistral released Mistral Code. Yep. Jumping into the AI coding assistant space, competing directly with things like GitHub Copilot. Shows the big labs aren't just building models, they're building the tools developers use to build with those models. But development also had controversy. DeepSeek allegedly used Google's Gemini to train its new model. Ooh, yeah. That kind of accusation is serious. Yes. Raises intellectual property issues, questions about where the training data really came from, could lead to lawsuits, model contamination worries, feeds that whole debate about data provenance and openness in AI training. OK, let's quickly circle back to policy and oversight.
There was that significant shift reported concerning the U.S. AI Safety Institute during the Trump administration. The sources stated that the administration cut safety from the institute's name. And Commerce Secretary Howard Lutnick was quoted saying, we're not going to regulate it. Right. So the reporting we reviewed framed this specific action, the renaming and the quote, as signaling a move towards a less regulatory, more laissez-faire approach to AI under that administration.
And according to those same sources, this caused concern among researchers and ethicists worried about, well, the potential risks of rapid AI development without a strong explicit focus on safety guardrails. We're just relaying how the sources presented this information and the reaction they reported. Got it. And as a kind of counterpoint, maybe, the sources also mentioned an AI pioneer launching a nonprofit dedicated to honest AI. Yeah, that initiative seems positioned as a necessary balance.
It highlights the view that independent watchdogs are needed to push for ethical development, transparency, alignment with human values. Especially maybe if government oversight is perceived as lighter. It's about ensuring AI develops responsibly amidst commercial pressures. And even big tech faced some accountability. Google settled that shareholder lawsuit, agreeing to spend half a billion dollars on being less evil, as it was widely reported. That phrasing caught on, yeah.
The lawsuit apparently came from shareholders worried about AI misuse, privacy, bias, the usual concerns. The large settlement suggests that pressure is mounting even internally from investors for big tech companies to align their AI practices with ethical standards and maintain public trust.
The financial consequences are real. So, wow. Looking back at just that one week, it's just incredible. Legal fights, big tech strategy, medicine, history, art, and major policy debates all swirling around AI. It really drives it home, doesn't it? Yeah. AI isn't some far-off future thing. It's impacting law,
health care, how we get our packages, the ads we see, even fundamental science and government policy, right now. And those challenges, the ethics, the transparency needs, the regulation questions, they're just as real and moving just as fast as the tech. So the question we want to leave you, our listener, with is: what does this snapshot, this single week,
tell you about the future you're stepping into? How might these kinds of changes start showing up or maybe already are showing up in your own life, your own work? Maybe one more thing to chew on.
Given this incredible speed and all these friction points we talked about, the ethical dilemmas, the regulatory lag, who or what do you think is really steering the ship here? Who's ultimately guiding how AI gets woven into our society? Is it the tech creators, policymakers, market demand, or something else entirely? Definitely deep questions for such a fast-moving field. And look, staying informed is absolutely critical. So one last time, let's quickly mention those resources from Etienne Newman again.
If you are serious about getting certified and boosting your AI career, definitely check out Etienne's study guides: Azure AI Engineer Associate, Google Cloud Generative AI Leader, AWS AI Practitioner, Azure AI Fundamentals, Google Machine Learning Cert. They're all at djamgatech.com. And if you want to get hands-on and build, remember the AI Unraveled Builders Toolkit with all those practical tutorials. All the links for everything are right there in the show notes. It makes it super easy. Well, thank you so much for joining us for this deep dive into the AI week of
June 2nd through 8th, 2025. It really is amazing how much can happen in just seven days. We hope you'll join us for the next deep dive.