This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. No, I'm not repeating myself. We are going over Google I/O updates again. That's because the tech giant had so many very impressive updates.
new AI offerings, modes, upgrades, updates to its Gemini model and across the spectrum. So many that we had to do two shows on it. And we're now coming to you with part two of Google's I/O AI updates: 15 new features and how they can grow your business. So if you missed yesterday's show, make sure you go check that out, where we went over numbers 15 through eight in episode 530. And now today we're going to go through number seven through number one, because yeah, Google is straight up cooking in the AI kitchen. And like I said yesterday,
A year and a half ago, they were fighting for the podium, right? I don't even know if you could confidently say Google was top three in the world at AI with OpenAI, Microsoft, and Anthropic probably ahead of them a year and a half ago. But now, and especially after Google I/O, they are undoubtedly at the top of that podium by themselves.
with everyone else fighting now to keep up. I mean, I literally can't believe how many new AI updates they shipped out. If I'm being honest, I'd only planned out one show this week to cover everything that Google announced at I/O, similar to how I had one show for Microsoft Build's AI announcements. But yeah, Google went straight up nuttier than a squirrel on keto. And yeah, we had to do two shows. So we're going to be going over the second half today.
All right, I'm excited for this one. I hope you are too. What's going on, y'all? My name is Jordan Wilson, and welcome to Everyday AI. This is your daily livestream, podcast, and free daily newsletter, helping us all not just learn what's happening in the world of AI, but how we can leverage it to not just keep up, but actually grow our companies and our careers, right?
And I understand it's hard to keep up sometimes, but it starts here by listening to this livestream, the podcast. It's unedited, unscripted. I like to say it's the realest thing in artificial intelligence, but you really get ahead by going to our website. That's where you leverage what you learn, at youreverydayai.com. So we're going to be recapping today's episode, giving you just the highlights as well as everything else happening in the world of AI. But also on our website, there's now more than 530 episodes you can listen to. You can watch the videos, so maybe there's something visual. We've interviewed hundreds of the world's leading experts on AI. It's all available for free, yeah, on youreverydayai.com. All right. Normally, we go over the AI news. There's actually a decent amount today, but we're probably going to be seeing a big drop from Anthropic here in a couple of hours. Could be Claude 4. We saw some huge news out of OpenAI
acquiring Jony Ive's startup, now called io, to work on their hardware. So we've already covered some of that in the newsletter. We're going to be covering today's Claude updates in there as well. So without further ado, let's jump straight into, well, first, what you missed yesterday. All right. So like I said, if you want to know more about some of these AI updates, make sure you go listen to episode 530, just one day back. So let's start with number 15.
So, let's get the screen right there. Okay. So number 15 was Imagen 4, Google's new AI image generator. 14 was the new Chrome integration coming with Gemini soon. 13, personalization in email. If you didn't see that one, my gosh, Google, please get that out soon. I need it.
12 was some NotebookLM updates: customizable length, as well as video coming soon. Gemini Diffusion was our number 11, a new type of large language model. 10, real-time translation in Google Meet, right now only available for paid subscribers and in English and Spanish, but coming to new languages soon. Number nine, Gemini app updates, which we'll probably be doing multiple shows on in the coming weeks because there's literally so many things in there. And then our last one for yesterday was Gemma 3n, Google's new open source small language model, 4 billion parameters, that is literally ranking about the same in Elo score as Claude 3.7 Sonnet, Anthropic's biggest proprietary model.
All right, that was our updates from yesterday. Straight into it. Here's our seven biggest updates out of I/O. So number seven would be Flow, Google's new AI filmmaking tool. Six would be Veo 3, their AI video tool, plus some Veo 2 updates. Five would be AI Mode in Google Search. Four would be Gemini Live with that Project Astra integration, a little crossover there. Three would be the Gemini 2.5 models, their updates to their 2.5 series. Number two is the Google AI Ultra subscription, ultra pricey. And last but not least, number one, Project Mariner, or Agent Mode, which has essentially turned into, now that I've used it a little bit, just Google's version of Operator, their computer-using agent that you can use in the browser. Woof, a lot there.
A lot there. So let me sip my coffee and say good morning to everyone, how rude. What's up, livestream audience? Big Bogey Face joining us on the YouTube machine, as well as Giordi. Michelle, thanks for joining us on LinkedIn. Sarah, thanks for stopping by. Brian, Nathan, Marie, Juliet, McDonald, big crew this morning. Jose, Irish, thank you for joining us.
Let me know what questions you have. I'll either answer them at the end, or if I can't answer them, I'll reach out to some of my friends at Google to make sure to get you the answers that you need. All right, I already have a list of questions from my show yesterday that I'm going to be reaching out about. So yeah, feel free. Or if you're listening on the podcast, FYI, I always put a link to the LinkedIn livestream, so you can come back even after the fact and leave a comment or a question. Or if you want to see something that happened on the screen, even though I'm going to try my best to describe everything today, you can do it that way. All right. So first is Flow. And y'all, if this actually works as it could, I'm going to start with a hot take. It's Thursday, but here's a hot take: this is the future of short video, period. It's so, so good.
And I don't say that lightly. I don't say that lightly, right? I mentioned this briefly yesterday, and I think it's worth mentioning again today, since our first two updates are on the visual side. For about the first half of my professional career, I took a lot of photos with a DSLR camera, and a lot of video as well. So I would say I've got a little more experience than most people when it comes to visuals. I had some freelancing gigs as a photographer right out of grad school, and when I worked at the nonprofit, we essentially just did work with Nike and Jordan Brand; I was shooting photos and videos for Nike and Jordan Brand, right? So I have a lot of experience. And what you can get in our number seven and number six updates, in Flow and Veo 3,
it's bonkers. It doesn't make sense to me. Just so our livestream audience can actually believe it, I will play a sample or two for number six, which is Veo 3. But here's what Flow is. It is essentially a filmmaking tool that brings together the best of Google's creative offerings. So it brings in Veo 3, which is their new updated video generator; Imagen 4, which is their newest updated AI image generator; and Gemini prompting, right? So you essentially get the best of Gemini, Imagen 4 on the photo side, and Veo 3. And this is a tool that helps you seamlessly put together clips in consistent
scenes. That's the big thing, because a lot of third-party tools have worked on consistency over the past three to six months, but a year ago, the quality of these tools was still the story. I think Runway, with their Gen-3, was probably the first AI video tool that was just really good and stellar. And then you had a lot of them from China, like Kling. And then, more recently, Sora, then Veo 2 from Google, and now Veo 3. So you actually have videos now that look extremely real. But the problem, at least when you were trying to piece something together, is that a lot of these tools can only create something between five, eight, or 10 seconds long, which doesn't do a whole lot if you think about how this could grow your business, right? But when you can stitch clips together and get consistency in the characters, that's where these tools actually go from shiny party trick to business utility. And that's what Flow brings. It brings that consistency across scenes and can help you stitch together things that actually make sense.
Right now, Flow is only available to paid subscribers. So that's both in the new $20-a-month plan. Well, I say new because it's a new name. It used to be called Gemini Advanced; now it's just called Google AI Pro, for $20 a month, and you get a hundred generations in Flow. Or if you have Ultra, which is the $250-a-month plan. Yeah, we have a new king of the hill in terms of most expensive AI plan; it overtook OpenAI's $200 Pro plan.
I don't think there are any limits right now on the Ultra plan in Flow. At least they weren't listed. I have the Ultra plan, and I'm going to be giving you guys a rundown on it after I've spent enough hours in every nook and cranny of what's available.
But right now, I mean, the features for this are really, really good, because yes, we're going to get a little bit more into Veo 3 and what makes that special. Obviously, what makes Flow special is being able to piece together multiple scenes with Veo 3, but also being able to work with Imagen, which is the image generator. So it really just saves time, and I think initially this is going to be great for creators, solopreneurs, entrepreneurs, creative storytellers, right, in the beginning. But I think kind of a hot take is all companies are going to have to use something like this, right? You do actually have to tip your cap first
to OpenAI, because their Sora actually had a feature like this built in, where you could piece together multiple scenes. It didn't have its own name; it wasn't its own tool. I think it was called Remix. You could kind of remix different clips or piece them together; it was essentially a storyboard. So Google essentially, and maybe they had this in the works first, who knows, took that idea and made it much better.
So this is natural prompting for scene generation, camera movement controls, tools for editing and extending shots, a scene builder for storyboarding, and also a Flow TV content library. Like I said, right now it can only generate eight-second clips, but you can chain them together. It includes ambient sounds as well as the audio generation features of Veo 3, which I'm going to show you guys here in a second. And also a couple of things to note.
I've talked multiple times about VideoFX on this show. That's Google's previous kind of video suite. Flow is replacing VideoFX, FYI. And I think generation is capped at 1080p, so no 2K or 4K scenes right now. But I mean, the fact that you can go in and stitch a bunch of really good-looking clips together, it's very impressive with Flow. I'm wondering, what are your thoughts on this, livestream audience? I think the majority of you are going to look at something like this and say, okay, this isn't for me, this isn't for my business, but
I would challenge you and say, well, it probably is, right? I'm even thinking of ways that I can use this here at Everyday AI. Even though it's very easy for me to go make a video on something, it's not easy for me to go make something with a high production value, right? That's hard. I do video every single day for this podcast, and I used to shoot and edit video, but I still look at tools like Veo 3 and Flow and I'm like, I'd be stupid to not be using these, even though the initial versions might seem like 10% gimmicky. Which is not a lot, considering before this it would have been like 50% gimmicky, right, if we were just looking at Veo 2. But now that you have the consistency between scenes, character consistency, the audio, and then dialogue as well? Crazy.
So let's talk a little bit about Veo 3, and I'm going to hopefully play a couple examples here.
So, some big updates with Google's Veo 3, and they updated Veo 2. Veo 3 is kind of what's blowing up on social media right now, similar to how ChatGPT's GPT-4o image gen made the viral rounds for a couple of weeks. Yeah, you're going to be seeing a lot of Veo 3 if you pay attention to anything on LinkedIn or Twitter; it's probably just going to be blowing up everywhere.
But at least for now, Veo 3 is only available to customers in the U.S. who are Ultra subscribers, so that $250-a-month plan, and you have to be in the U.S. as well. And it's also available in the Gemini app, which is pretty cool. So right now you use Veo 3 just inside Gemini, so gemini.google.com if you have that Ultra subscription, or on the app as well. So here's what it is: Veo 3 is Google's new state-of-the-art video generation model with significant enhancements over Veo 2. And here's the big thing: it has native audio generation, with environmental sounds and character dialogue. You can have a scene with multiple characters, and
it will do the audio right there. There have been some examples, and I'll probably have to do a Veo 3 dedicated show here in the coming weeks, after I've had a little bit more time to play with it myself. But people have literally done clips where it's like two musicians, and it will do the audio, it'll do the background noise, and I'm like, my gosh. If I'm being honest, I don't understand how this is possible. It is that good, right?
And I don't think I had similar feelings with Sora, or Runway Gen-3 or Gen-4, or even Veo 2. I think when Veo 2 came out, I was like, wow, this is shockingly good, right? But Google didn't roll out access until like two months ago, so they didn't really make the technology easily accessible for most people.
It's available now. You have to pay for Veo 3, but this is something I look at, and it really makes me wonder how much of the video that we consume in the future will be AI generated. And we don't know, right? And if I'm being honest, and this might sound crazy, I could see a scenario very soon, probably not in the next two years, but after that, in the late 2020s, where the overwhelming majority of content that we consume is AI generated.
Even watching things on Netflix, TV shows, etc. I know that might sound crazy, but it is that good, right? And when you talk about the speed to bring something to market? You always see these shows that finished taping but got delayed for two years because they had to reshoot like two scenes. Things aren't going to be like that anymore. It is very, very good. So right now,
it's improved consistency and physics handling in Veo 3. When we talk about the jumps from Veo 2 to Veo 3, it's just improved consistency and physics handling, and then, like we said, the character dialogue, better lip syncing, more realistic scene transitions. So what are the business use cases for this? I mean, a ton, but just creating realistic video
for your company, right? Little promo clips, think of those pre-roll videos that go on YouTube,
I mean, maybe you've thought about running those, but you're like, I don't know where to start. Okay, well, start with a large language model: say what you're doing, share information about your company, and say, hey, give me 10 ideas for little 10-second video scripts for my company. And I've seen people make some of these, and I'm like, it's actually pretty good, right? Because you can have a character say whatever you want them to. It sounds extremely real. It looks extremely real. You can have text pop up on the screen. You don't really need much, at least to get something that's internet quality. I'm not saying this is cinema quality yet, but with enough work, it can be almost cinema quality. All right. Big Bogey here says, let's face it, if you paid $250, you should use everything it has to offer. I agree. Oliver here from YouTube is saying Veo 3 is insane and it will only get better. All right.
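To make that ideation step concrete, here's a minimal sketch of the kind of prompt I'm describing. It's just a plain prompt string you could paste into Gemini, ChatGPT, or any other large language model; the company name, description, and function name below are purely hypothetical placeholders, not anything tied to a specific Google product:

```python
def build_video_ideas_prompt(company: str, description: str,
                             n_ideas: int = 10, clip_seconds: int = 10) -> str:
    """Assemble an ideation prompt for short promo-video scripts.

    This returns a plain prompt string; nothing here calls any API,
    so you can paste the result into whichever LLM you prefer.
    """
    return (
        f"My company is {company}. {description}\n"
        f"Give me {n_ideas} ideas for {clip_seconds}-second promo video scripts.\n"
        "For each idea, include: the scene, what the on-camera character says, "
        "and any text that should pop up on screen."
    )

# Hypothetical example company, purely illustrative
prompt = build_video_ideas_prompt(
    company="Acme Coffee Roasters",
    description="We sell small-batch roasted coffee online.",
)
print(prompt)
```

From there, you'd pick your favorite ideas and feed them into a video tool like Flow or Veo 3 as generation prompts.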
So let's go ahead, livestream audience, do me a favor. I'm going to go ahead and share my screen here, so hopefully everyone will be able to hear the audio as well. Google shared some of their favorite clips that were generated. Okay. So, podcast audience, you should be able to hear this, and livestream audience, let me know if you can hear this as well. I'm going to play a couple of these; most of them are eight seconds. So podcast, everything you hear here was generated with Veo. All right, livestream peeps, let me know if you can hear. "Do you think we are in Veo 3? If you cannot tell, does it matter?" All right. So that's the first one. Very 1920s, 1930s kind of vibe. Two characters, a woman and a man smoking a cigar in a parlor. There's background music, there's ambient sounds, and dialogue between the two. There's a couple other good ones here; I'm scrolling to find them.
All right, let's go ahead. Let's do this singing one. And it looks like this person probably stitched a couple of clips together. But let's listen to this. This is a woman in an opera; there's people playing violin in the background. Let me double-check if everyone can hear. Yeah, audio is good. Okay. All right, here we go. Here is a woman singing opera, it looks like with an orchestra in the background.
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. We're gonna live!
And then it goes into some more music. All right, here we go. "I'm not sure I can go on." All right. So,
okay, let me describe. So the first clip there was a woman singing opera. And you have people in the background; they're not in focus, but I've looked at this clip and paused it, and I'm like, there are no telltale signs of AI right there. I mean, you have a shadow on her arm that's moving accordingly, with people in the background playing violin. I'm even looking at their strokes of the violin in the background, and the music: it all looks synced up, right? So pretty, pretty impressive. If you look at it long enough, there are a couple of artifacts where a trained eye might say, okay, this might be AI generated, but not really. All right, the second clip that you heard was a woman kind of performing
in a small venue concert; there's a drummer. So I'm going to play the audio on this one again, because it's very impressive, and I'm hearing the lip sync is perfect. There's a woman performing on stage, there's people in the crowd. I mean, it's very impressive. I'm going to be watching the drum sync on this one. ♪ I'm going to light up the sky ♪
Great, it works. All right. The third clip, because I think there's four total clips here. This one started with a close-up on a guitar, and then it kind of zoomed up. Very, very good-looking video, too. There's no six fingers, right? Some of those telltale AI signs from yesteryear. I mean, this is someone playing the guitar.
And I don't play any instruments, but what's on the screen and the music coming out seem to match, right? When you look at the pacing, the tempo, the cadence of this close-up of the person playing the guitar, it looks on point. And then we zoom up to the person singing. Again, it looks extremely realistic. And I'm on 720 here. Okay, so I mean, the lip sync and everything. The details of the face, the lip sync; I'm at a loss for words. This is so good. So I don't know, livestream audience, can you believe that's real? Does anyone have a reaction? Danny just says, wow. Yeah.
Yeah, I don't know what else to say. But after looking at a lot of the Veo 3 examples that have come out so far, like I said, I don't see a way that we as humans won't end up consuming more AI-generated content than not.
And I don't know what year it'll be, right? I'll save this for my 2026 hot takes. But if you remember, one of my hot takes was that we will start seeing AI video on the big screen and we won't even know it. And I think Veo 3 is probably that first iteration. There were some Adobe, kind of scene-generation tools that I thought would probably make it first. But in terms of starting something from scratch and seeing it on the big screen and not even knowing it,
I think Veo 3 is probably where it's at. Lisa here on YouTube said: unbelievably impressive, and a little scary because of its potential for misuse.
Absolutely. And I don't think we can gloss over that. That was kind of on my little notes here to end on for this slide: yeah, the potential for disinformation and misinformation with this is through the roof. So obviously Google also announced some updates to their SynthID technology. Essentially, if you create something with Veo, there's, to oversimplify it, an invisible watermark that someone can go in and use to see whether this is AI generated or not. However,
I can already guarantee there are going to be dozens of startups that pop up whose whole value prop is going to be: hey, upload your AI-generated content, whatever it is, and we're going to strip out whatever invisible watermark there is that says this is AI generated. I can already guarantee that, because that was a booming industry for AI text detection, which was a lie, by the way. So I would assume the same even for photos and videos, even though the companies, I think, are doing an adequate or above-adequate job of protecting consumers and letting them know when and if something is AI generated.
But I think in the end, it's not going to be individual clips. In the end, this is what production houses and Hollywood studios are going to be using for expensive scenes, scenes that maybe take too long: essentially, scenes that would take millions of dollars to shoot. Instead, you get a hundred editors, you give them all a couple of hours, and for one-one-thousandth of the cost and in one-one-thousandth of the time, they're going to be able to get something usable, at least out of Veo 3. But imagine when we get Veo 4. Scary. Yeah.
Marie here is saying there should be a disclaimer saying this is AI-generated material. That would be great, but I don't think that's where we're heading, unfortunately. All right, I've got to pick this up. So number five is AI Mode in Google Search. That is a dedicated AI-powered tab in Google Search, available both on the web and on mobile in the app, that handles complex queries and provides AI-generated answers. So this isn't technically new, but it has been greatly improved, and certain features are available for free users as well. Some of the newer,
upgraded features: it can generate custom charts and graphics, and it handles follow-up questions naturally. And there are some new shopping features, including virtual try-on with personal photos. So yeah, if you're shopping for something and you're like, ah, how is this going to fit? You can literally upload a photo of yourself. So if it's for clothes, you probably want to make sure it's a full-body photo, and then you can just virtually try things on and shop. And there's also agentic checkout, so like I said, that's going to be dangerous on the wallet. So the use case here is, well, if I'm being honest, AI Mode for me,
it's interesting. I think this is a way that Google is kind of hedging its own bet on AI search and how it might potentially take away from its cash cow, which is traditional search. Because also in traditional search, Google has their AI Overviews. So that's one way that they're trying to not lose any ground to ChatGPT and Perplexity and Copilot, now that Claude finally has web search. So in
traditional search, they have their AI Overviews. Obviously, their Gemini product can reach the web, but now there's also this dedicated AI Mode, which I would say is more of a dedicated answers engine. So there are three different ways that you can search the web using Google, right? You can do the traditional google.com search, which does integrate those AI Overviews. You can go full-on to the AI-only end, which is using it inside of Google Gemini. And then something in between, which is AI Mode. And this AI Mode did actually get a lot of these great updates, now powered also by the newest model, Gemini 2.5. And there's Deep Search now within AI Mode. I'm not going to say it's an answer looking for a question to solve. Obviously, Google is one of the smartest companies in the world. I think they just need to put their eggs in different baskets, so to speak.
So to me, AI Mode is actually nice to use. It is very much an answers engine, so it feels more like you're using Google Search versus Google Gemini, but it is AI only. In traditional Google Search, even with AI Overviews, you still have the non-AI content there, whereas in AI Mode, it's only AI. But it does feel a little more like traditional search. So we'll probably do a specific show on that one as well. All right. Number four: Gemini Live.
And this also has some Project Astra integration, which of course is our number one thing. So what is Gemini Live? So this is their enhanced real-time assistant that can understand and interact with your surroundings through device cameras. So this is available for Android users. And some of these features have already been live on iOS, but some of them are going to be rolling out to iOS this week.
So what the heck is Gemini Live? This is an assistant on your phone that can see, hear, and understand. You can share your screen; you can do all of these different things with natural voice. That's the biggest thing, right? There are some good demos that I'll share in the newsletter. But essentially, think of it as if you had an assistant that can see anything that you can see, and it can help you with anything that you're trying to do.
It's very impressive, right? Certain features have already been live, but now there's just more integration of some of this Project Astra technology inside of Gemini Live. So like I said,
it has that camera-based visual understanding, screen-sharing capabilities, more natural voice output with improved native audio, better memory, and also computer control. That's the thing that I was really impressed with, right? A lot of people are saying this is Jarvis-esque. So I didn't want to have to show a minute-and-a-half video, but I actually think this one is worth showing. All right, I'm going to go ahead and share my screen here. This is about a minute-and-a-half clip. It essentially shows a young gentleman in a bike shop repairing his bike. So in these instances, you're going to hear his voice, for our podcast audience, and then the Gemini Live voice. And he's essentially pointing his camera
at different things in the bike shop to help him better understand. So this is something I've tried to do all the time, but you'll see what's new here when we talk about agentic capabilities. And this is where the Project Astra capabilities come in. So livestream audience, as we go on here, this one's a minute and a half, so let me know if you can hear the audio. I'd really appreciate it, so I don't waste a minute and a half of everyone's time. "Can you go on YouTube and find a video for how to fix that?" "Of course, I'm opening YouTube now. This looks like a good video. Place a rubber band over the head of the screw." "Can you go through my emails with the bike shop and find which size hex nut I need?" "Sure. According to the email from Bicycle Habitat, you need a 3/8-inch hex nut for your sprocket. I've highlighted the correct bin for you." "It seems like I need a spare tension screw. Can you call the nearest bike shop and see what they have in stock?" "Yep, calling them now. I'll get back to you with what they have in stock." "I think my brake pads are wearing down. Can you check the manual again and see if it talks about them anywhere?" "According to the manual, if you look on page 24..." "Hey, do you want to get lunch?" "Yeah, give me five minutes. Can you finish what you were saying?" "As I was saying, if you look on page 24, section 2, you'll see how to replace the brake pads." "Hey, any updates on that call?" "Yep, I just got off the phone with the bike shop. They confirm they have your tension screw in stock. Would you like me to place a pickup order?" "Could you show me some examples of dog baskets I could put on my bike?" "Sure, I can help you with that. Just give me a moment. Here are some options." "I think Zuka would look really great in these." Okay, wild, right? So...
A lot of the features that you just saw there, and I'm going to quickly recap them. You do need that $250-a-month plan to get essentially the Project Astra integration. But even on the free plan, some of those features that you just heard, or saw if you're on the livestream, are available. So just speaking to the AI assistant, the ability for it to see and identify things in real time: that's all available on the free plan.
If you do have the Ultra plan right now, and I'm gonna put in the newsletter what tiers are available, because that's tricky, right? What you can do on free, what you can do on the $20-a-month plan, now AI Pro, and then what you can do on the $250-a-month Ultra. I'll probably have a dedicated show just on this because I think it is that impactful and can change how we all work. But let me just recap a couple of things that actually happened in this if you're just listening on the podcast. So number one, Gemini Live via Project Astra was able to open up YouTube and
search through YouTube on its own. It was able to pull in context from the person's emails. He pointed his phone at the wall, where there were a bunch of different parts, and it correctly identified the right part based on the manual it looked up and found. It made a phone call to a bike shop, so some agentic abilities there. It literally talked to someone on the phone and got answers for him, placed an order, checked the manual on screen, and did some shopping.
So some of these different things, and they're calling this action intelligence plus Gemini: content retrieval, interface control (being able to control a phone; I believe a lot of those things are only going to be available on Android, not on iOS), agent highlighting, call assistance, knowledge grounding, context-aware dialogue, personalized shopping, and native audio dialogue. Oh my gosh, right? Pretty wild.
Yeah. People are just saying, OMG. Monica saying, everything I want and more. Denny saying, holy cannoli. Juliet saying, need a dedicated show. Yeah, I think we're going to have to do a dedicated show on that.
All right, so let's go to number three. Yeah, there are whole updates to their Gemini 2.5 models. No big deal there, but the world's most powerful models got even a little bit more capable. Specifically, the new Gemini 2.5 Flash is now available; before, this one wasn't as powerful. But now, if I pull up my next screen here,
Gemini 2.5 Flash is the second best model in the world, second only to Gemini 2.5 Pro. So if you don't follow the smaller large language model space: most of these companies have a big, a medium, and a small version of their most capable models. But for the most part, there's a pretty big drop-off between the big boy and the small one. So OpenAI has GPT-4o mini or o4-mini; Google has Flash.
So now Flash, the little one, the little version of the big boy, is the second best model in the world, which has never happened. The small versions of large language models, at their best, usually land somewhere within the top seven or eight, maybe. This is unprecedented,
right? It wasn't unprecedented that Gemini 2.5 Pro, Google's big, big boy, came in and wiped everyone away on the LM Arena leaderboard, where you put in one prompt, you get two answers, you don't know which one is which, and you vote. That's what produces an Elo score. So it wasn't unprecedented that Gemini 2.5
Pro came in and was number one. It was kind of expected; that's usually normal. So if we get a Claude Opus 4, as an example, I wouldn't be surprised if it came in as number one on the Arena leaderboard, although I'm personally not expecting it. But I am legit flabbergasted that Gemini 2.5 Flash, the lightweight version, is
the most powerful model in the world, aside from the big boy. Nuts. Absolutely nuts. So a couple of other things, and we touched on some of these briefly yesterday: Gemini 2.5 Flash will become generally available in early June. We also talked about Gemini 2.5 Pro getting a little more capable, because there's a new DeepThink
mode that is coming to Ultra subscribers. It's not out yet; Google said it should be coming in the coming weeks. That's essentially what Microsoft Copilot has with its Think Deeper button: you're saying, hey, take more time on this. I don't know if that's technically going to be its own dedicated model or just a button in the UI that you click.
But DeepThink should really extend Gemini 2.5 Pro's capabilities, I would guess. Also, like we talked about, Flash is now optimized not just for speed and efficiency, but also for power. And the new Gemini 2.5 Pro with DeepThink gives you more control over an advanced reasoning mode for complex problem solving, plus thought summaries for a little bit of transparency. Although I'm not a fan of the new thought summaries; I liked how Google previously showed more of a raw chain of thought.
So what are the benefits, the business use cases, for this? Well, you should probably be starting your day in one of these models, whether it's Google Gemini 2.5, OpenAI's o3 or GPT-4o, Copilot, or Claude. I don't have to tell you the business use cases, but
regardless of what your AI operating system is, you need to be starting there, because these models, like we just saw with Gemini 2.5 and all these updates, are getting more and more powerful. And in the future, they're only going to get more integrated with each company's other technology.
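Quick aside on that LM Arena leaderboard I mentioned: the rankings come from exactly those blind pairwise votes, scored with the standard Elo update. Here's a minimal sketch in Python; the starting ratings and K-factor are hypothetical, just to show the math, not LM Arena's actual parameters.

```python
# Minimal Elo update, as used by pairwise-vote leaderboards like LM Arena.
# Ratings and K-factor here are hypothetical illustration values.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one head-to-head vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Two models start equal; one vote for model A moves both ratings by K/2.
a, b = elo_update(1000.0, 1000.0, a_won=True)
print(a, b)  # → 1016.0 984.0
```

Each vote nudges the winner up and the loser down, weighted by how surprising the result was, so an upset over a much higher-rated model moves the ratings more than an expected win.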
All right, our last two; we're going to go quickly here. So the Google AI Ultra subscription. This is pretty big news, and not just in a good way. One thing that I've loved about the generative AI movement thus far is the democratization of this technology. For the most part, anyone out there
can scrape up $20 and get access to the most powerful technology in the world. So OpenAI started this kind of luxury boys and girls club when it came to AI with their $200-a-month Pro plan.
So now Google is following suit. And there are pros and cons, right? The pro is, again, breathtaking technology that, for businesses at least, is still generally affordable. For everyone else, though, $250 a month?
Unless your company is paying for it, I don't see most individuals doing this, unless you're the owner of a small business or you are that big of a believer in the technology. Another huge downside here, at least right now: this is only available for personal Gmail, which, come on, Google.
And I have reached out and asked, hey, when is this coming to Workspace accounts? When I do get information on that, I will let everyone know. But I think this is a huge miss right now out of the gate. So
I did sign up for an Ultra subscription, so I'll let you guys know. I'll probably do a couple of dedicated shows; we're going to do a lot more hands-on learning throughout June and July, because there have been so many new features over the last couple of months and I haven't given them due time. So we're going to be doing a lot of these things that are only available on Gemini Ultra or OpenAI's different paid tiers.
But one other reason I don't like this: yeah, Google Workspace accounts don't have access to this right now. There is no option, so you are just kind of, quote unquote, stuck on that lower tier. So many of these features that I would want to take advantage of, I would want to use for work, for my work account. But right now that is not an option if you are on Google Workspace. I'm sure there are reasons,
and this is standard for the industry. As an example, even OpenAI, when they roll something out, usually it'll come to ChatGPT Plus and ChatGPT Pro users first, then later to Teams, and then later to Enterprise.
And I guess I understand how there are a lot of extra hoops you would probably have to jump through in order to provide this level of technology to enterprise organizations. Maybe there are some technical reasons why it can't be done yet. But at least for whatever is possible, I wish
that Google would give access to Workspace. Because even things like personalized email, some of these Gemini Live capabilities, and its ability to use your contacts,
how is that helpful for me if I can't use it for my work account? So literally what I'm going to have to try to do is forward all my work emails to my personal Gmail, create a filter to skip the inbox, and set up some sort of automation where everything that gets created in my work Workspace gets duplicated over to my personal account.
The technology that you get inside this Google AI Ultra plan is freaking revolutionary. But what good is it if you can only have it with a gmail.com address, with your personal Gmail?
Right. If I'm being honest, it's not very helpful. Like I said, I'm sure there are reasons why they can't roll it out now, or maybe they won't ever roll it out. But this has been the case with a lot of Google Gemini offerings since day one: so many of these things are not available if you have a Workspace account. All right, now I'm off my little rant.
Here's what else comes with that plan. You get the highest usage limits for Deep Research, and you get early access to Veo 3; right now, an Ultra plan is the only way to get Veo 3. You get access to Project Mariner and Flow; we've already talked about Flow, and we'll talk about Project Mariner here in a second. And you get access
to Gemini 2.5 Pro with DeepThink; that's the only way you get it. You also get YouTube Premium, so you can skip commercials on YouTube, plus 30 terabytes of storage, et cetera, et cetera. All right, so for our live stream audience here, I have a little
pricing table. For the first three months you get it for $125 a month, then it's $250 a month after that. And for Google AI Pro, which was previously Gemini Advanced, it does look like users get one month free, so you can at least try out a lot of the features that are available on paid plans. But not everything: for AI Ultra, you're at least going to have to pay $125 for one month, which is pretty pricey. And then you also get Veo 3 access. Like I said, you can use Flow for free right now with that free month of Google AI Pro, but you're only getting Veo 2 access. Same thing with Whisk, another creative tool: only Veo 2 on Google AI Pro. You have to have Google AI Ultra to be able to use Veo 3 across those tools. Also, you get higher limits.
In NotebookLM, you get 2.5 Pro with DeepThink and Veo 3 access. But the big one here is Project Mariner. And that brings us to number one: Project Mariner and Agent Mode, which I think is the biggest announcement at Google's I/O conference. So what is this? Well, it's an AI agent that autonomously completes online tasks. The simplest way to put it
is this: think of OpenAI's Operator, but with access to your data. So I've been testing it out.
It's been a little slow so far, but very capable. Also, it kind of gets its own sandbox, which is really cool. So essentially it can control a Chrome browser: it can move the cursor, click buttons, and fill out forms. But the big thing is it can handle up to 10 tasks simultaneously. And the other thing that
I like about this is it has a teach and repeat mode. One thing I'm going to be testing out is a complex process that you do over and over. It will literally record your screen, it will record your voice, and you can teach it a task to do. So let's say, hey, I go in, I check this email, then I go do this research, then I go put this document together, et cetera, et cetera. If you have a multi-step process that you manually do, there's no guarantee it's going to be able to perform it at a high level, or at a human level. But this is something that I haven't seen out of the other agents: a simple
way to, quote unquote, teach and repeat, or to train a computer-using agent to do your job, versus just prompting it in natural language. And that's why this is the number one update: literally just for that teach and repeat. It's something I'm going to be spending a lot of time on. But like I said, the downside is this is only available right now to users on the Google AI Ultra plan, $250 a month after that
introductory price, only US users, and you can't use it on Workspace, which is a huge bummer. There are also broader rollout plans for this summer. So, I mean, this is huge, right? When we talk about business use cases, being able to automate some of those common, repetitive, mundane tasks across multiple platforms, hopefully handling some of that manual work over time, is huge. And obviously, it still works with Gemini. So anything that you would, or in theory could, do with a large language model, it can still do. It's not just, oh, I can only use a computer; it can still process, think, and strategize just like a large language model can, because it is powered by Gemini 2.5. So yeah,
the business use cases on this are honestly limitless. They're only limited by how well the technology works, which is the big asterisk here, and that's what I'm going to be spending a lot of time on in the coming weeks. But just the ability to autonomously complete
those online tasks. Like I said, it also integrates directly with Google's products, and they have some official third-party integrations for things like buying tickets through Ticketmaster and StubHub. So this is huge. And honestly, one thing I'm going to be testing out is what other AI products
can Project Mariner use? Can it go in and use Google Gemini on its own? Can it use OpenAI's ChatGPT? Can it use other Google AI products? Can it go in and use NotebookLM? That's honestly kind of meta, telling an AI to go use other AI. But those are some of the things I'm going to be testing here. So yeah, keep an eye out.
Yeah, keep an eye out for more shows on this in the future. All right, that was a lot. This was a long one. Let me quickly wrap. So here are the top updates, AI modes, and AI upgrades announced at Google I/O 2025. Number seven was Flow, the new AI filmmaking tool. Number six, Veo 3, insanely good AI video, plus some Veo 2 updates. Number five, AI Mode in Google Search. Number four, Gemini Live, which has some Project Astra capabilities.
Number three, the updates to the Gemini 2.5 models. Number two, the new Google AI Ultra tier. Pros and cons: at least we get access to the most powerful tools available. But
it creates a huge divide. And then number one, Project Mariner, or Agent Mode, which can perform tasks for you online. All right, I hope this show was helpful. Like I said, make sure you go check out yesterday's show, episode 530, where we went over the front half; there were so many updates. We went over numbers 15 through eight
of the announcements at Google's I/O. Some other related episodes if you want to know more about Google and Google Gemini: in episode 501 we sat down with Logan Kilpatrick at Google Cloud Next, and in episodes 494 and 495 we did a deeper dive into Google's new Gemini 2.5 Pro models. All right,
please, if you haven't already, go to youreverydayai.com and sign up for the free daily newsletter. I hope this was helpful. You know what, I'm probably going to put a poll in today's newsletter asking, hey, which of these features do you want to know most about? But hey, live stream audience, if you're still here right now, just go ahead and type a number on the screen. Which of these do you want to see more information on? Flow, Veo 3, AI Mode, Gemini Live,
new advancements in Gemini 2.5, the new Ultra subscription and whether it's worth it, Project Mariner. You can even just type in a number, 7 through 1, or type in what you want to see more of. I work for you. You tell me what's going to help you grow your company and your career. All right, I could
pretend I know what you all want to hear, but you just let me know. It's on the screen now. I'm probably going to put a couple of polls in the coming days in our newsletter as well. So make sure you go sign up for that at youreverydayai.com. Thanks for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.