Welcome to the Deep Dive. This is a new Deep Dive from AI Unraveled, which, as always, is created and produced by Etienne Newman, senior engineer and passionate soccer dad up in Canada. Great to be diving in again. Absolutely. And hey, before we really get started, if you enjoy what we do here, unpacking AI, please do take a second to like and subscribe to AI Unraveled. It genuinely helps us keep bringing these insights to you. Definitely. So today's mission is...
Well, it's a bit different, isn't it? We're not looking at a long-term trend. Right. It's more like cracking open a time capsule. Exactly. Our source material is basically a chronicle, a news feed, if you will, of AI happenings, but all from one single day, June 9th, 2025. One day, just 24 hours. Wow. That sounds kind of intense. So we're zooming right in. That's the plan. Think of it like a snapshot. Not the greatest hits, just
everything that came across the wire that day. We're talking legal stuff, technical limits, totally new uses for AI, even public protests. And big money changing hands, I bet. Oh, yeah. Huge investments, too. It's a real mixed bag. OK, so the goal is to unpack this single day and see what it tells us about the sheer pace and, I guess, the variety of AI's impact.
And pull out the really key stuff. Precisely. We want to grab the most important nuggets and show you, just through this one window, how fast things are moving and how wide the ripples are spreading. Let's see what June 9th, 2025 actually felt like in AI. All right. Where do we kick off? I saw something big hitting the legal news that day. OpenAI facing a major privacy challenge. Yeah, that was a lead item in the Chronicle. OpenAI was apparently fighting a potential court order. And the scope?
Well, it's pretty broad. How broad? They'd potentially have to log and keep all user chats. Free users, Plus, Pro, Team, you name it. Whoa. And I think the source mentioned this even included chats people thought they'd deleted. That's the really sticky part. Manually deleted conversations potentially being retained. It definitely goes against what you'd expect delete to mean. So why? Who is pushing for this? Well, the source pointed towards interest from The New York Times, which has its own ongoing legal tussles with OpenAI.
The concern seemed to be about users possibly infringing copyright in chats and then trying to cover their tracks. Ah, okay. So a potential need for evidence, even if the user hit delete. How did OpenAI, specifically Sam Altman, react? He reportedly called it an inappropriate request, something that sets a bad precedent. Yeah. And he floated this really interesting idea, the concept of AI privilege...
Sort of drawing a parallel to doctor-patient confidentiality, suggesting maybe AI interactions need similar protection from third parties. AI privilege. Hmm. That's definitely putting a new idea on the table. Were any users exempt from this potential order? Yes, the Chronicle noted that Enterprise, Edu, and API customers who already had zero data retention agreements in place were reportedly carved out. Their existing contracts already prevented that kind of logging. Got it.
So for you listening, what's the real takeaway from this legal fight? It's more than just a corporate legal drama. This could really set a precedent for how your data is handled by any AI you use. It raises big questions. Who controls your AI conversations? How long can they be kept, maybe against your wishes? And who gets to look at them? It's that classic tension: data needs versus your expectation of privacy.
Makes you think twice about what you share, doesn't it? It really does. Heavy stuff. Okay, let's switch gears completely. June 9th also apparently brought up something more personal: the human side of interacting with AI. Yeah, this came from a global AI policy expert featured in the source.
They were reportedly warning about, well, the potential for people to form pretty deep emotional attachments to AI systems as they get better and more woven into our lives. Deep emotional bonds with AI. That feels like a leap from just using a tool. What's behind that? The expert mentioned our natural human tendency to anthropomorphize, you know, seeing human traits in non-human things. AI seems especially good at triggering this.
The source noted how AI can seem nonjudgmental, validating, even empathetic. That makes it easy to get attached. So the AI isn't actually feeling, but its responses make us feel understood. And that's the hook. That's the idea. And the Chronicle mentioned OpenAI's angle on this. Reportedly, they're not trying to settle whether AI is conscious.
They're focused on how conscious it seems to users and what that does to people's mental well-being. That's a really important distinction. So how does that affect how they design the AI's personality? They're trying to walk a fine line, according to the source, aiming for warm and helpful, but deliberately not giving the AI a fake backstory or fake feelings
that could trick users into thinking it's actually sentient. Right. And the expert's warning went beyond just individual connections, didn't it? Something about broader society. Exactly. They reportedly said these human-AI relationships aren't just about how we use the tech, but they may shape how people relate to each other. Wow. Okay, so potentially changing human social dynamics. What does that mean for you, the listener? It's an invitation, really, to think about your own interactions. Voice assistants, chatbots,
Are you feeling any kind of connection? How does getting easy validation from an AI compare to, you know, the messy reality of human relationships? It raises some big questions about where our social interactions might be headed. Definitely food for thought. Okay, from future relationships to the ancient past, June 9th was all over the map. AI apparently made waves in archaeology too. Yeah, this one was fascinating. The Chronicle reported on advanced AI models being used to reanalyze the Dead Sea Scrolls.
The Dead Sea Scrolls? Seriously. How can AI even help with something that ancient? Well, the method was pretty clever. The AI model, called Enoch in the source, was reportedly trained by linking known radiocarbon dates of scroll fragments to the specific handwriting styles on those fragments. Okay, wait, so it learned to connect how the writing looked with when it was written. Exactly. It learned to associate visual patterns in the script with specific time periods, like a scribe's fingerprint changing over time. And what did this AI analysis find?
Something pretty significant. It suggested the scrolls were written maybe 100 years earlier than previously thought, based on other methods. 100 years! That's a huge shift in historical terms. What's the impact on biblical studies?
The source noted it could really reshape things. It potentially pushes some texts closer to the time they were traditionally thought to be written, maybe making some fragments as old as 2300 years. It changes the context. And I imagine there's a practical advantage over something like carbon dating. A huge one, especially for preservation.
Carbon dating is destructive. You have to physically take a sample. This AI method just analyzes images. It's completely non-destructive, which is critical for fragile artifacts like the scrolls. That's incredible for protecting history. So seeing AI applied here, what does it show you? It just hammers home how versatile AI is becoming. It's not just for tech problems. It's a tool for discovery and preservation in fields like history and archaeology.
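A quick technical aside before we move on: the source names the model, Enoch, but not its internals, so here is only a minimal, hypothetical sketch of the general pattern it describes, a supervised regression from handwriting features to known radiocarbon dates. Every feature, number, and model choice below is invented for illustration.

```python
# Hypothetical sketch of the supervised setup the source describes:
# fragments with known radiocarbon dates become training pairs
# (handwriting features -> date), and the fitted model then dates
# fragments that were never physically sampled. Nothing here reflects
# Enoch's real architecture; all data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-ins for features extracted from script images
# (stroke width, letter slant, ligature frequency, ...).
n_dated, n_features = 120, 16
X_dated = rng.normal(size=(n_dated, n_features))
y_dates = rng.uniform(-300, -50, size=n_dated)  # years (negative = BCE)

model = RandomForestRegressor(n_estimators=200, random_state=0)
err = -cross_val_score(model, X_dated, y_dates, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated dating error: {err:.0f} years")

model.fit(X_dated, y_dates)

# Dating a new fragment needs only its image features --
# non-destructive, unlike taking a physical carbon sample.
new_fragment = rng.normal(size=(1, n_features))
print(f"estimated date: {model.predict(new_fragment)[0]:.0f}")
```

The non-destructive property mentioned in the segment falls straight out of the setup: once trained, the model only ever sees images.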
Seeing it tackle privacy one minute and ancient texts the next, it really shows the breadth. It really does. And, you know, seeing all these applications, legal, personal, historical, it really drives home why getting a handle on AI is becoming so important. If you're listening and thinking you want to go deeper, maybe get certified or even start building things. Yeah, there are definitely ways to do that. And actually, Etienne Newman, who creates and produces this deep dive,
has put together some really solid resources to help with exactly that. Right. He's developed a set of study guides aimed at helping anyone wanting to get certified in AI. Things like the Azure AI Engineer Associate. The Google Cloud Generative AI Leader Certification. AWS Certified AI Practitioner. Azure AI Fundamentals. And the Google Machine Learning Certification, too. These are designed to help you prep efficiently and really boost your career prospects in AI. You can find all those over at djamgatech.com. It's a great starting point.
And it's not just about passing exams. If you actually want to get your hands dirty, start building with AI. Etienne also created something called the AI Unraveled Builders Toolkit. Yeah, that toolkit bundles together AI tutorials, PDFs, audio, video, along with those certification guides.
It's really designed to help you make that jump from just learning about AI to actively using it. So whether you're aiming for a cert or just want practical guidance to start building, djamgatech.com has you covered. We'll stick all the links in the show notes for you. Definitely check those out. Okay, back to June 9th. It wasn't all progress and breakthroughs. There was also research highlighting some limitations.
Specifically from Apple researchers. Yeah, this caught my eye, too. It's important context, right? Apple's internal research apparently found that even the big leading AI models we use, well, they kind of stumble when it comes to logical reasoning, especially under pressure. Under pressure. What did they mean by that exactly? The researchers seem to suggest the models are almost too focused on nailing benchmark scores rather than truly understanding and solving the problem itself.
They describe this effect as a collapse. A collapse? That doesn't sound good. Right. Their research reportedly showed that on harder, custom-made puzzles, models like OpenAI's o3-mini didn't just do worse. Their accuracy actually dropped, and they used fewer resources, fewer tokens. It's like they just...
gave up when things got tough. That's really counterintuitive. You'd think complexity would demand more effort from the model. Was there an example? The Tower of Hanoi puzzle. Classic logic problem. Apparently, the models couldn't improve on it, even when the researchers basically gave them the step-by-step algorithm to solve it. Wow. Couldn't even follow the recipe, huh? Yeah. That really does underline a limitation. So what's the key thing for you, the listener, to keep in mind here?
I think it's a good reality check against the hype about AI having, you know, true general intelligence right now. These LLMs are amazing language machines, but this research shows their limits with complex step-by-step logic. So if you're using an AI for something that needs deep, novel reasoning, just be aware. It might not be its strong suit yet. Good point. Important context.
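Worth making that "recipe" concrete: the standard recursive Tower of Hanoi procedure fits in a handful of lines, which is part of what makes the reported failure striking. This is our own minimal sketch of the textbook algorithm, not the researchers' actual prompt or evaluation code.

```python
# Classic recursive Tower of Hanoi: move n disks from source to target
# using one spare peg. Shown only to illustrate how short the
# "step-by-step algorithm" is; not the researchers' actual prompt.
def hanoi(n, source, target, spare, moves):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)       # clear n-1 disks off
    moves.append(f"disk {n}: {source} -> {target}")  # move the big disk
    hanoi(n - 1, spare, target, source, moves)       # restack n-1 on top

moves = []
hanoi(4, "A", "C", "B", moves)
print(f"{len(moves)} moves (always 2**n - 1)")  # 15 moves for n = 4
for step in moves:
    print(step)
```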
Okay, speaking of how we interact with this stuff, June 9th also had some buzz about a potential major visual change from Apple. Something about the interface. Yes, Liquid Glass. That was the rumored code name floating around for a potential new mobile UI from Apple. Liquid Glass.
Sounds very fluid and maybe transparent. The description in the Chronicle mentioned things like adaptive fluidity, AI-driven layout changes, and visually, yeah, taking cues from Vision OS. Like sheen, dynamic transparency, sort of glassy effects on iPhones and iPads. Interesting. Like physical glass layers maybe? Where would this show up? The report suggested toolbars, in-app interfaces, controls, probably rolling out with a future OS like iOS 26 maybe.
And Bloomberg's Mark Gurman had a take on this being more than just looks. Yeah, he reportedly suggested this design language wasn't just cosmetic, but was laying the groundwork for future hardware. Specifically mentioned the rumored Glasswing, that potential 20th anniversary iPhone concept that's supposedly all about glass. Ah, so it's potentially a signpost for future hardware and AI influencing the basic look and feel we experience. Exactly. A potential big shift in UI aesthetics, driven partly by AI capabilities.
shows AI creeping into design itself. All right, let's shift from the sleek digital world to something much rougher happening on the actual streets. June 9th apparently saw some serious public pushback against automation. Yeah, this was a pretty stark report in the Chronicle. Violent protests targeting Waymo robotaxis, specifically in parts of San Francisco and Los Angeles. Violent? How bad did it get?
The source described reports out of L.A. where people were apparently intentionally ordering Waymo cars during protests specifically to set them on fire. Whoa, ordering them up as targets. That's extreme. It really is. The LAPD reportedly even asked Waymo to pause service in some areas after several vehicles were attacked and burned. How many cars were involved? There were videos showing multiple Waymo Jaguar I-PACE EVs burning in L.A. Reports suggested at least five were destroyed by arson around that date.
Just wow. That's a very physical, very angry reaction. What does this kind of backlash tell us about rolling out autonomous tech? It's a vivid reminder of the real world friction, the social tension, even hostility that automation can face. It shows the challenges aren't just technical, they're deeply social. Companies, cities, they have to deal with public fear, worries about jobs, and
sometimes, like this, active, destructive resistance. Public acceptance is definitely not a given. A stark reminder indeed. Okay, shifting from street-level action to boardroom strategy, June 9th also had news of a massive investment involving Meta. A huge one, yeah. Meta was reportedly lining up a major investment in a company called Scale AI. How major are we talking? The number being reported was over $10 billion.
If it went through, that would apparently be one of the biggest funding rounds for a private company, like ever. $10 billion for one company. What does Scale AI do that warrants that kind of cash injection? Fundamentally, they're about data.
Data labeling, data curation, preparing massive data sets. It's the essential groundwork needed to train the really powerful AI models. Right. So they provide the high-quality fuel for the AI engines that companies like Meta rely on. Precisely. And the source mentioned Meta wasn't new to them. They were part of Scale AI's previous funding round in 2024 that valued them at $14 billion.
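To make that "groundwork" point concrete for anyone new to the term: labeling just means attaching human-verified tags to raw examples so they can serve as supervised training data. Here's a tiny, generic illustration of the pattern, not Scale AI's actual pipeline or data format.

```python
# Generic illustration of labeled training data: raw inputs paired with
# human-assigned tags. This is the universal pattern, not Scale AI's
# real tooling or schema; both examples are made up.
labeled_batch = [
    {"text": "The ride was smooth and the car arrived on time.", "label": "positive"},
    {"text": "The app crashed twice during checkout.",           "label": "negative"},
]

# A model downstream trains on these (input, label) pairs;
# label quality directly bounds what the model can learn.
for example in labeled_batch:
    print(f"{example['label']:>8}: {example['text']}")
```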
This looks like doubling down big time. So what's the big picture insight for you, the listener, from this kind of mega deal? Well, first, it just shows the insane scale of the AI investment race. But more specifically, it really highlights how crucial data is.
Scale AI's whole business is making data useful for AI. Meta pouring billions into that underlines that having the best data, perfectly labeled, is just as critical as having the best algorithm. It's a key strategic weapon in the fight between Meta, OpenAI, Google, Anthropic, and the rest. Underscores the old saying, data is the new oil, I guess. Okay, let's try to round up the main themes from this snapshot day by looking at how AI was getting embedded in, well,
everyday structures like education and business. Yeah, the Chronicle had a note about Ohio State University. Apparently they announced they're mandating AI integration for all students. All students? Mandatory AI literacy across the university. That's a pretty strong statement. It really is. Seems like a shift in thinking. AI isn't just for computer science anymore. It's becoming a fundamental skill, like reading or writing, for everyone entering the workforce. And it's not just
academia pushing this. Business leaders are already on board. Oh, absolutely. There was a Forbes survey mentioned in the source. It found that something like 75% of billionaires are reportedly already using AI tools actively in their business operations. 75%? Wow.
That's not dipping a toe in. That sounds like core strategy. Exactly. The source suggested it's moved way beyond experimentation for them. It's seen as an essential business strategy, a power amplifier. And this transformation is hitting huge industries, too. For sure. The gaming industry, worth something like $455 billion, was highlighted.
AI is seen as the next big thing there, powering everything from dynamic storylines that change as you play to super personalized gameplay. It's not just making games better. It's opening up totally new ways to make money. Okay, so education, the very top of business, massive entertainment sectors.
What's the overall message about AI adoption here? It's that AI is definitely not a niche tech anymore. It's fast becoming a foundational layer everywhere. It's getting baked into how we learn, how businesses run, how we entertain ourselves. It's pervasive and accelerating. And just to really drive home how much was happening on this one day, June 9th, the source listed a bunch of other headlines too, right?
Quick hits. Yeah, a few more quick ones. Apple researchers, again, noting this weird scaling limitation: reasoning models getting worse as problems get harder. Anthropic added Richard Fontaine to its trust, signaling a bigger focus on national security implications of powerful AI.
OpenAI rolled out an update to its advanced voice mode, aiming for more natural speech, better translation. Anysphere launched Cursor v1.0, an AI coding assistant, with new tools like a background agent and a BugBot for helping developers. Google had a new experiment called Portraits, letting you create interactive AI versions of experts based on their knowledge. Higgsfield AI made it easier to create talking avatars with custom looks. And a company called FutureHouse released Ether,
an open source AI model specifically for chemistry, reportedly beating bigger models on certain tasks. Phew, that really was just one day's worth. Makes you realize trying to follow all this day to day is, well, it's almost impossible unless it's your full time job. No kidding. The pace is just relentless. So let's pull it all together. Looking back at everything reported on June 9th,
What's the overarching takeaway? For me, it's the sheer breadth and the speed. In just 24 hours, we saw AI tangled up in courtrooms over privacy. Sparking debates about human-AI emotional bonds. Being used as a tool to redate ancient history. Hitting roadblocks in research labs. Potentially changing how our phone screens look. Facing violent protests in the streets. Driving absolutely massive corporate investments. And weaving itself into the fabric of education and major industries.
It just shows AI isn't moving forward on one single track. It's exploding outwards, technically, ethically, socially, commercially, scientifically, all at the same time. And that was just one day. It really underlines why trying to stay informed about AI isn't just, you know, interesting for tech geeks. It's becoming essential for understanding the direction the whole world is heading. Couldn't agree more. Every day seems to bring something new that could reshape some part of our lives.
So, as we wrap up this particular deep dive into June 9th, 2025, maybe here is a final thought for you, the listener, to chew on.
Given everything we just saw packed into one day, the privacy fights, the relationships, the history, the UI changes, all of it, where do you think the next really surprising or truly significant impact of AI is going to pop up? It's a great question. And thinking about those possibilities, understanding the tech behind them, that feels key to navigating what comes next. Absolutely. And just a reminder, if you want to move beyond just listening and thinking, if you want to learn more deeply or even start building with AI yourself,
Etienne Newman has put those resources together. Yeah, check out those certification study guides, Azure, Google Cloud, AWS. They're solid tools to help you prep, get certified, and really give your career an edge in the AI space.
And don't forget the AI Unraveled Builders Toolkit. It combines those guides with practical tutorials, PDFs, audio, video to help you actually start creating with AI. You can find both the study guides and the toolkit over at djamgatech.com. And yeah, like we said, all the links will be right there for you in the show notes. Easy to find.
Perfect. Well, thanks for joining us on this whirlwind tour of a single day in the life of AI. Until next time, keep exploring, keep asking questions, and keep thinking about how all this AI is changing our world.