This is a new deep dive from AI Unraveled, created and produced by Etienne Newman, senior engineer and passionate soccer dad from Canada. Remember to like and subscribe. And this is the deep dive.
So we've gathered a stack of sources from yesterday, May 28th, 2025, all buzzing with the latest AI developments. We're looking at big updates to AI assistants, some interesting new startups. Yeah, pushing boundaries. Exactly. Platform integrations, too, and even some, well, some research on how we actually use these things. Right.
Our goal today is, you know, to cut through that noise. We'll unpack these updates, find the really important bits. The nuggets. Yeah, the nuggets. And help you figure out what it all means for, well, for AI's evolution and ultimately for you. Okay, let's dive in. First up, it seems like AI assistants are getting much better at just talking.
Anthropic's made some moves with Claude. They certainly have. A big one is this new voice mode. It's in beta, but rolling out on the mobile apps, iOS and Android. And it's not just reading text back. It's designed for proper spoken conversations, full back and forth. Full conversations. Hands free. That feels like a genuinely useful step, doesn't it? It really does. And I heard something about choosing different voices, like five of them.
That's right. Five options. Initially, it's English only, and it's powered by their Claude Sonnet 4 model. Gotcha. And what's kind of neat is the app displays key points on the screen while you're chatting. Oh, interesting. Yeah. So you can follow along. And then afterwards, you get a transcript and a summary, too, which is pretty handy. Right. So if you're, I don't know, cooking or something and asking it a question, you can glance down and see the main points. Smart.
Is everyone getting this voice feature? Well, free users get it, but with daily limits. So you can definitely try it. Okay. But the real power boost is for paid subscribers. They're getting integrations with Google Calendar and Gmail. Ah, okay.
Imagine just asking Claude to summarize your new emails or, you know, check your schedule just by talking. No tapping needed. Okay. That really starts to sound like something that could change how you manage your day, especially if you're busy. And there's another update too, right? Something for everyone. Yes. And this is a big one. Integrated web search is now free for all users. Ah.
So Claude can actually check the internet in real time now. Exactly. Which makes a huge difference for, you know, current events or anything where information changes quickly: better accuracy, more up-to-date answers. So putting those two together, the voice chat and the free web access. Yeah.
What's the bigger picture insight here? Well, I think the insight is Anthropic isn't just adding random features. They're making a deliberate shift. Okay. Claude is becoming more accessible, more real time, more like a conversational agent. Pairing that slick voice mode with free web search puts them right up against ChatGPT and Gemini. Right. The big players. Yeah. Especially for stuff you need now or things you want to do hands free.
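A quick note for anyone who wants to poke at this from the developer side: the free in-app search is a consumer feature, but Anthropic also exposes web search as a server-side tool in its Messages API. The sketch below shows the general shape of that call; the model identifier and the tool type string are assumptions based on Anthropic's documentation around this time, so check the current docs before relying on them.

```python
# Minimal sketch: asking Claude a current-events question with the API-side
# web search tool enabled. Model name and tool type are assumptions; verify
# them against Anthropic's current documentation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",   # assumed model identifier
    max_tokens=1024,
    tools=[{
        "type": "web_search_20250305",  # assumed server-side tool type
        "name": "web_search",
        "max_uses": 3,                  # cap how many searches one turn may run
    }],
    messages=[{
        "role": "user",
        "content": "What changed in Claude's mobile apps this week? Cite sources.",
    }],
)

# The reply interleaves plain text blocks with search-result content blocks;
# print just the text here.
for block in response.content:
    if getattr(block, "type", None) == "text":
        print(block.text)
```

Either way, the consumer-side point stands.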
It's about embedding Claude more naturally into your life, like a helper you can just talk to. Makes sense. OK, let's switch gears a bit. While AI is getting better at talking, there's also this huge push for AI as a creator, a builder, especially in, like, digital worlds. Tell us about SpAItial. Right, SpAItial. So this is a new startup, just came out of stealth mode. OK.
One of the co-founders is Matthias Niessner. You might know him from Synthesia, the AI video avatar company. Right, Synthesia, yeah. SpAItial's based in Munich, and they launched with a decent chunk of change, $13 million in seed funding. $13 million? And their focus is...
Spatial foundation models. What does that actually mean? Well, it's pretty cutting edge. It hints at where generative AI might go next. Their goal, basically, is to generate interactive, really photorealistic 3D environments from super simple inputs, like just a text prompt or maybe a single image. Whoa. So I could type, like, "create a neon-lit cyberpunk alleyway with rain puddles." Yeah. And it would build it.
In 3D. That's the dream, yeah. They want to build AI that really understands 3D space: geometry, physics, how light works, materials. Not just pasting pictures onto boxes. Yeah, exactly. Creating worlds that feel, you know, real, that you can interact with. If they can actually do that, the applications seem kind of endless.
Totally. I mean, think about gaming, film, architecture, CAD, engineering, even robotics simulation. Anywhere you need realistic 3D stuff. And the team's got people from Google AI, Meta AI, serious researchers. So the insight here, it seems, is they're trying to crack a major bottleneck, right? Yeah.
making 3D world creation way faster and maybe even easier. Precisely. Instead of artists building every single thing by hand. Which takes forever. Right. The AI could generate the foundation, maybe humans refine it. It shifts things from pure manual work to like AI-assisted generation. Could really open up
creating stuff for VR, AR, games. Makes sense. Scaling up 3D content. Exactly. Making these immersive worlds much more accessible. And it's not just these specialized startups, is it? Even web browsers are getting ambitious. Opera announced something. Opera Neon. Yeah, Opera Neon. This is a really bold concept. They're calling it an agentic browser. Agentic meaning it acts. Yeah. Designed with AI agents
built right in, deeply integrated. And these agents aren't just finding info for you. They're meant to actually do stuff: perform tasks, even create content for you inside the browser. Okay, hold on. An agentic browser sounds way beyond just showing websites. What kind of tasks? What creation? Well, the reports mention some pretty ambitious goals, like using AI to code entire websites or simple games just from a text prompt you give it. Wait, the browser codes a website?
Just based on me typing something. That's...
That's wild. It is. It's a totally different idea of what a browser does. It's behind a waitlist now and sounds like it'll be a premium subscription thing. They talk about different modes: Chat, Do, and Make. Make is the creation part, asking the AI to generate websites, code, whatever. And Opera claims it uses cloud workflows, so maybe it can even finish generating stuff after you go offline. That part's a bit fuzzy still. Yeah, the offline bit sounds complex. What about Do mode? Do uses their
Browser Operator AI agent. It's supposed to automate more routine website tasks: filling out forms, maybe helping book trips, things like that, all within the browser. So the insight with Opera Neon. Mm-hmm.
They're trying to fundamentally change what a browser is, aren't they? Absolutely. From just a window to view stuff to an active assistant that creates and automates. Right. If it works, it could totally change how we interact online. Blurring the lines between the browser, the OS, the app.
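Opera hasn't published how Neon's Make mode is implemented, but the pattern it describes, turning a text prompt into a working page, is something you can sketch yourself: send the prompt to a code-capable model and save what comes back. Everything below, including the model name and the use of the OpenAI Python SDK as a stand-in, is an assumption for illustration, not Opera's actual pipeline.

```python
# Hypothetical stand-in for a "make"-style workflow: text prompt in, HTML file out.
# This is NOT Opera Neon's implementation; it only illustrates the general pattern.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "A single-page site for a neighborhood soccer league: schedule table and a signup form."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any code-capable model would do
    messages=[
        {"role": "system",
         "content": "Return one complete, self-contained HTML page. No commentary, no markdown."},
        {"role": "user", "content": prompt},
    ],
)

html = response.choices[0].message.content
Path("generated_site.html").write_text(html, encoding="utf-8")
print("Wrote generated_site.html; open it in a browser to preview.")
```

The interesting part of Neon's pitch isn't that loop itself, it's running it inside the browser with cloud workflows that supposedly keep going after you close the laptop, which is exactly the part that's still fuzzy.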
It's a very ambitious play, making the browser itself a kind of productivity hub. Exactly. Okay, speaking of integration, let's talk about AI embedding itself into platforms we use constantly. News about OpenAI exploring sign in with ChatGPT. Yeah, this is really interesting. It's still early days, just exploratory, they say. But they put out a form basically asking developers, hey, would you want to let users sign into your app using their ChatGPT account?
Oh, so just like you use your Google or Apple ID to log into other services. Exactly that. Positioning ChatGPT as another one of those universal sign-in options. Apparently, they tested this quietly already with a command line tool, Codex CLI. They even gave out API credits to get people to link accounts. And given how many people use ChatGPT,
What is it, like 600 million active users a month? It's something like that. Yeah, it's huge. So this could be a massive move for them. It's definitely a strategic play, leverages that huge user base beyond just the chatbot. Right. The insight here is OpenAI trying to weave itself into the basic digital infrastructure, become an identity provider. Makes the account stickier. Exactly. And gives them a foothold competing against Google, Apple in that authentication space.
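OpenAI hasn't published how "Sign in with ChatGPT" would actually work, but universal sign-in offerings like Google's and Apple's are built on OAuth 2.0 and OpenID Connect, so something similar is a reasonable guess. The sketch below shows the standard authorization-code flow from an app developer's side; every endpoint, scope, and field name is a placeholder assumption, not documented OpenAI behavior.

```python
# Hypothetical "Sign in with ChatGPT" flow, modeled on standard OAuth 2.0 /
# OpenID Connect. Every endpoint and field below is an assumption; OpenAI has
# not published this API.
import secrets
from urllib.parse import urlencode

import requests

AUTH_BASE = "https://auth.openai.example/authorize"   # placeholder endpoint
TOKEN_URL = "https://auth.openai.example/token"        # placeholder endpoint
CLIENT_ID = "your-app-client-id"
REDIRECT_URI = "https://yourapp.example/callback"


def build_login_url() -> tuple[str, str]:
    """Step 1: send the user to the identity provider's consent screen."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked again on callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": state,
    }
    return f"{AUTH_BASE}?{urlencode(params)}", state


def exchange_code(code: str) -> dict:
    """Step 2: trade the one-time code for tokens identifying the user."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()  # would contain an id_token / access_token
```

In practice, the interesting design question is that second step: what ends up in the returned tokens, and what OpenAI learns about which apps a person uses.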
They're apparently looking at maybe 2025 for a release, but obviously lots of details needed on security, data sharing. Sure, sure. That'll be key. And staying on platform integration, this rumored deal between Telegram and xAI, that's been getting a lot of attention. Oh, that's a big one. Reports say they've agreed in principle on a one-year deal. Okay. Worth a reported $300 million.
to integrate Grok, Elon Musk's AI, across Telegram. Wow, $300 million. That's serious money. What's in it for Telegram, apart from just having Grok? Well,
According to Pavel Durov, the Telegram CEO, xAI provides the $300 million as a cash and equity mix. Right. And Telegram gets 50% of the revenue from any xAI subscriptions sold through Telegram. Ah, a revenue share. Okay. And when might this actually happen? When would users see Grok in Telegram?
They're aiming for this summer. The idea is to bring Grok's abilities, summarizing chats, maybe editing text, even moderating groups, to Telegram's massive user base, over a billion users. A billion plus, yeah. So you might see it pop up in chats or maybe the search bar.
Something like that seems likely. Now, Elon Musk did tweet, "No deal has been signed yet." Right, saw that. But Durov followed up saying the formalities are just pending, so it seems like the main agreement is basically done. And the privacy angle, Telegram's big on that. Crucial point.
Durov explicitly stated no Telegram data will be supplied for Grok training. OK, that's important for users. So the core insight from this potential mega deal. Well, for Grok, it's obviously a massive distribution channel, instant access to potentially a billion users on a major platform. Huge reach. And for Telegram, it's a huge push towards being an AI-powered super app.
Not just messaging, but offering useful AI tools integrated right in, plus a potentially significant new revenue stream. Right. So they're not building the AI themselves, but integrating a powerful one to boost their platform. Exactly. Strategic integration to enhance the core service and make money.
OK, before we move to the final section, just a quick heads up for you if you want to actually do stuff with AI. We've just launched AI Unraveled, the Builder's Toolkit. It's basically a collection of practical AI tutorials, comes with PDF guides, videos, audio snippets, and you get lifetime access to all the future updates we add. It's a great way to turn listening into doing. Exactly.
Perfect way to turn what you hear into what you do. And, you know, it helps keep this deep dive running daily. You can find it over at djamgatech.com, that's D-J-A-M-G-A-T-E-C-H dot com, or just grab the link in the show notes. All right, welcome back. So we've talked about AI getting more conversational, becoming a creator. Building worlds, coding websites. Yeah, and getting baked into the platforms you use constantly. But all this integration raises questions about
Well, about us, how we humans interact with these tools. And there's a new study on that, right? From Microsoft Research and Carnegie Mellon. That's right. Presented at a big conference, CHI 2025. They looked specifically at user confidence and how that affects critical thinking when people use generative AI. Okay. Confidence and critical thinking with AI. What did they find? They surveyed 319 knowledge workers
and found something really interesting, almost counterintuitive. You what? Higher self-confidence in your own abilities, like how good you think you are at something, was linked to more critical thinking when using Gen AI. Wait, really? So if you think you're good, you're more critical of the AI? Kind of, yeah. If you're confident in what you know, you're apparently more likely to question the AI's output to scrutinize it. But conversely, higher confidence in the AI tool itself, thinking the AI is really smart or reliable...
was linked to less critical thinking by the user. Ah, okay. That makes sense. If you just blindly trust the tool... Exactly. You're less likely to double-check it, verify its facts, or wonder if its approach is even the right one for your task. That feels, yeah, that resonates. If you assume it's correct, why bother checking? Right. So the key insight from the study isn't really that critical thinking vanishes when we use AI. Okay. It's more that the kind of critical thinking needed changes.
It shifts. Shifts how? Less about generating the initial ideas yourself and more about things like verifying the AI's information, integrating its output sensibly into your work, and just general task stewardship, basically: managing the AI, keeping it on track, making sure it's actually helping you achieve your goal properly. So it really highlights this human element, doesn't it? It's not just the tech. It's our mindset. Totally.
It strongly suggests that to use these tools well, to make sure they augment our thinking, not just replace it, we need to foster our own self-confidence and have a realistic understanding of the AI's limits. Knowing when to trust it...
And when to push back or verify. Precisely. Knowing when not to trust blindly is maybe the most important part. OK, so let's try and pull these threads together from all these May 28th updates. We're seeing AI get way more conversational, easier to talk to with voice. More accessible. And way more powerful as a creator building 3D worlds, maybe even coding inside your browser.
Yeah, pushing creative boundaries. And then becoming deeply woven into the platforms we use every day. Messaging apps, maybe even our logins. Right. Deeper integrations everywhere. And underpinning all that tech progress is the human side. This research reminds us that how we approach these tools, our confidence, our trust really shapes how well we actually use them. So what this really means for you listening is
that as AI keeps embedding itself deeper into your digital life, your phone, your work tools, how you connect with people, understanding what it can and can't do. And how you interact with it. Yeah. Understanding your own approach, being mindful, that's becoming crucial. Staying informed isn't just about knowing the new features. It's about figuring out how to navigate this effectively. It's definitely a dynamic landscape right now. So here's a final thought to leave you with.
With AI potentially becoming our universal sign-in, our coding partner, embedded in our chats and browsing, what's our responsibility here? How do we maintain our own confidence, our own critical judgment to make sure these incredibly powerful tools genuinely augment us rather than just automating away our own engagement and critical thinking?