This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
Google went absolutely crazy with AI updates. OpenAI is countersuing Elon Musk. Canva is trying to be an AI player amongst the big boys. And OpenAI could release up to five new AI models this week starting today. Yeah, that's a lot.
As always, the world of AI news and developments has been absolutely nonstop. And you could waste hours every single day trying to keep up with the AI news that matters or
You could just tune in on Mondays and join me. What's going on, y'all? My name is Jordan Wilson, and I'm the host of Everyday AI. We are your daily live stream podcast and free daily newsletter, helping everyday people like you and me not just learn about AI, but how we can actually leverage it and use it to grow our companies and our careers. If that sounds like you and what you are doing, you're in the right place.
It starts here with our daily podcast and live stream, but it continues on our website. So if you haven't already, please go to youreverydayai.com, sign up for the free daily newsletter. So each day we not only recap the podcast and the live stream with usually some exclusive insights, but we also keep you up to date with everything else happening in the AI world. But most Mondays we bring you the AI news that matters. And this week was a doozy and a half
So like I said, I think that Google may have for the first time definitively jumped into the lead in the AI race. Yeah, I was at their Google Cloud Next conference in Las Vegas. We're going to be recapping everything they announced there. And I think from a strictly model perspective, it is no longer a race. I think Google is ahead, but for how long? Because OpenAI is reportedly going to be releasing new models today, and multiple of them this week. It's going to be crazy. I'm excited for this one. Let's get into it and go over the AI news that matters for the week of April 14th.
And hey, live stream audience, love to see you dropping by. You know, if it looks or sounds a little bit different. Yeah. Heading out to the Google Next conference earlier last week, I was on my way out of O'Hare. The TSA line was long. All of a sudden, you know, I'm getting ready to interview someone after I arrived. And I'm like, oh.
Looks like the TSA held on to some of my equipment. So yeah, hopefully this won't sound too bad. But yeah, my mic and some of my normal camera equipment gone with the wind, gone with the TSA wind. So yeah, if the live stream at least sounds or looks a little funky today, bear with me. All right. But you came here for the AI news.
not for the audio-visual quality. So let's first talk about... hey, live stream audience, what's up, guys? I forgot to say hi to you. Big Bogey Face, Kyle, what's going on? Ted joined in on YouTube as well. We got Brian and Dr. Scott, Christopher, Muhammad, Joe, Douglas, everyone else. Fred joined in from Chicago. Thank you for tuning in.
Let's get into it. So Google went absolutely nuts at their Google Cloud Next conference in Las Vegas. So Google has made some significant announcements at its annual Cloud Next conference, focusing on advancements in AI technology and its flagship Gemini models.
All right. So I could and I probably will do just a dedicated episode on everything that Google announced. I think during the course of the multiple keynotes, I wrote down about 27 different updates that I thought would be meaningful for everyday business leaders such as yourself. And if I'm being honest, between the newsletter, the podcast, we've covered like five of
them. All right, so let's bullet point some of the more important ones. So the big one, potentially: Google announced its Gemini 2.5 Pro is available in more places. This is the world's most powerful and highly rated large language model, at least as of this minute. Who knows, OpenAI may be announcing something before we even get this live stream out on the podcast. So Gemini 2.5 Pro, which topped the Chatbot Arena leaderboards, and it's not even close, it's crushing everyone else, is now available in public preview via Vertex AI and in the standalone Gemini app. Even if you have a free Gemini account, you can also use Gemini 2.5 Pro inside Google AI Studio for free. Additionally, new models from Google: they introduced a smaller, more lightweight and affordable version of
Gemini 2.5 with its Gemini 2.5 Flash, an optimized version of its Gemini 2.5 Pro model. So this new workhorse smaller model is designed to be faster and more cost-efficient for companies building on top of Google's API, thanks to its ability to adjust processing power on each task, a technique called test-time compute. So yes, Gemini 2.5 is a hybrid model. It uses reasoning, or thinking, underneath to plan out its response or its next steps, but it still has that kind of old-school speed of a transformer, with the ability to use more compute and think about an issue when it needs to.
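To make that "adjustable thinking" idea concrete, here is a toy sketch of the routing logic: cheap prompts get a fast single-pass answer, hard ones get extra thinking iterations, capped by a per-request budget. Every name and heuristic here is invented for illustration; this is not Google's actual implementation.

```python
# Toy illustration of a hybrid model's adjustable test-time compute:
# easy queries spend zero "thinking" steps, hard ones spend more,
# never exceeding the caller's budget. Purely a conceptual sketch.

def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a learned difficulty estimate."""
    hard_markers = ("prove", "plan", "multi-step", "analyze")
    score = len(prompt.split()) // 20
    score += sum(2 for m in hard_markers if m in prompt.lower())
    return score

def thinking_steps(prompt: str, budget: int) -> int:
    """Spend 0 steps on easy prompts, more on hard ones, capped by budget."""
    return min(estimate_complexity(prompt), budget)

print(thinking_steps("What time is it?", budget=8))  # 0
print(thinking_steps("Analyze and plan a multi-step migration " * 5, budget=4))  # 4
```

The point is just the shape of the trade-off: the same model serves both fast transformer-style replies and deeper reasoning, with compute allocated per task rather than fixed per model.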
All right, let's keep going. A lot of new updates. They're not out yet, but they were announced, rolling out in alpha to Google Workspace, which I think will be pretty impactful for a lot of our audience, including audio creation for Google Docs, automated data analysis in Google Sheets, and Google Workspace Flows. That's one I'm extremely excited about. It's kind of like a slimmed-down version of Zapier or marketing automation using Google Gemini's AI, but just for all of your Google Workspace products. So yeah, Workspace Flows, keep an eye out for that.
Also, Google officially announced the adoption of the Model Context Protocol, or MCP. All right, so that's not all. Let me quickly bullet point the rest of these, and live stream audience, let me know what you want to hear more of, because I think we will do a dedicated Google show because they announced so freaking much. So, new updates to their Google Agentspace platform. They announced a completely new protocol called A2A, or the agent-to-agent protocol, which allows diverse AI agents from different vendors to communicate and collaborate with each other. So yeah, Google,
really just bringing agentic AI together, both with its adoption of Anthropic's MCP protocol and with its own agent-to-agent protocol, making it easier for different AI agents to communicate and collaborate with each other. Pretty cool. They also announced the Agent Development Kit. And then Gemini on GDC, Google Distributed Cloud, which I think is pretty big. That allows enterprise companies to run Google's powerful Gemini models locally, on-prem in your data center, by using Google's distributed cloud with NVIDIA hardware. It should be rolling out in quarter three, so just a couple of months away. A move we haven't really seen from Google so far. Microsoft does have similar offerings, so this is pretty big, allowing companies to run Gemini locally in their data centers. Huge.
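For a sense of what MCP actually standardizes: it's JSON-RPC 2.0 under the hood, so a client asking an MCP server which tools it exposes sends a message shaped like the one below. This is a minimal sketch of the wire format only; a real client also performs an initialize handshake first.

```python
import json

# MCP is built on JSON-RPC 2.0. A client asking an MCP server to list
# its available tools sends a request like this over the transport
# (stdio or HTTP). Sketch of the wire format, not a full client.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

wire = json.dumps(request)
print(wire)
```

The appeal, and why both MCP and A2A matter, is exactly this plainness: any vendor's agent or tool server that speaks the same message shapes can plug into any other.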
All right. Also, Ironwood, which is Google's new TPU chip, so their newest AI chip that delivers a reported 10x performance boost for efficient AI inference, powering their next-gen Gemini models. We briefly talked about the new Help Me Analyze in Sheets, the very popular...
audio overviews, right? So if you use NotebookLM (and if you listen to this show, you know I love NotebookLM and you should be using it as well), the audio overviews feature, where you can generate a podcast based on your information or documents, that's rolling out to Google Docs as well. So that should be a pretty big one. They also announced a text-to-music model. A lot going on in the creative space, which I was actually pretty impressed with. So right now it's kind of like a Suno competitor. It's not as good, at least from the demos that we saw, but you can create custom royalty-free 30-second music tracks from just text prompts using Google's Lyria AI model. And that is available now inside Vertex AI.
So obviously inside Vertex, you are paying by usage, right? You're not paying a subscription fee. And then another one: Veo 2 is now available for almost everyone. So yeah, if you have a paid Gemini account, or if you log into AI Studio. So yeah, this is where it gets a little confusing, but you know,
I get the kind of separation. So when we talk about Google Gemini, you have your kind of front-end Google chatbot, right? So gemini.google.com. Then you have Vertex, which is more for enterprise and development. And then you have Google's AI Studio, which is kind of a sandbox, so to speak. So Google AI Studio is free. However, there's no data protection, so anything you put in there, Google does use to train its models. Keep that in mind. But
they do have the new Veo 2 AI video model, which is by far the best AI video model in the world. It's available. So if you have a paid Workspace account, it might not be in there yet. I logged into my personal Gmail account in AI Studio and I do have access to Veo 2. And it's not just the access and availability of Veo 2; there are also some new things you can do. You can control camera angles, things like that. Really cool. And then last but not least, Google's remaking the Wizard of Oz using its generative AI tools, including Veo 2, and they're debuting it at the Sphere in Las Vegas. So I was there
for that announcement. Pretty impressive, right? I had never been to the Sphere before, but it's really impressive how Google is using their different AI tools to remake the Wizard of Oz. It looks really good. All right.
Yeah, I told y'all that was a lot from Google. So yeah, live stream audience, what more out of all those Google announcements do you want to see or hear more of? We'll maybe, you know, make sure we cover that in whatever we do get time to cover all of those Google announcements.
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. All right, next.
OpenAI has expanded ChatGPT's memory feature to include all past conversations. So OpenAI has rolled out a significant update to ChatGPT's memory feature, allowing it to be more comprehensive and personalized. So this new feature allows the chatbot to reference all previous conversations to deliver tailored responses, marking a step toward AI systems that evolve alongside users over time.
So OpenAI's updated memory feature now enables ChatGPT to not only remember saved preferences, but also insights from all of your past chats, aiming to provide a more relevant and personalized interaction. So according to OpenAI CEO Sam Altman, this development reflects the company's vision of an AI system that grows with users over their lifetime, offering increasingly useful and customized experiences.
So the new updated memory feature works in two different ways. One way is not that impressive or very useful, in my opinion, which is saved memories. That's been out for a very long time. That's just where ChatGPT kind of automatically adds certain things that you talk about to its memory, essentially like a memory bank, and then it recalls those things later. But now it also has
chat history, so it can remember and gather information from all of your previous chats. And you can opt in or opt out. So for our live stream audience, I have this on my screen. If you have a paid ChatGPT account and you want to give this a try, you can go into Settings, Personalization, and then under Memory you can toggle it on or off. It's a new option that says "Reference chat history."
So I think this is going to be one of those divisive features. I think casual, or more casual, ChatGPT users who really use ChatGPT for a specific focus are going to absolutely love this. Myself?
Probably not. I'm probably not going to like this at all. So, you know, even though I do have, I don't know, I lost count, at least seven different ChatGPT accounts, right? Companies hire us to teach them ChatGPT, so I have access to a couple of enterprise accounts. At Everyday AI we have a team account. I have my own ChatGPT Plus and ChatGPT Pro accounts, plus a couple of free accounts, because I'm always needing to try things out for you all, right? But even in those individual accounts, I use ChatGPT
for absolutely everything: multiple lines of business, multiple clients, multiple personal projects, things happening in my personal life as well. So this ability for ChatGPT to automatically remember things from past chats and then reflect that in its responses, at least if you use ChatGPT for a lot of different reasons, you might not like it, right? As an example, sometimes I want ChatGPT to be very short, speaking in bullet points, for maybe things about my own personal health or a vacation I'm wanting to take, right? And then when I'm researching something for Everyday AI or for a client, I don't necessarily want those preferences to be reflected in the response. So 50-50, we'll see how it goes.
Oh gosh, Big Bogey Face, thank you. I forgot to mention Google's Firebase Studio IDE. Yeah, might have to throw that in. All right, our next piece of AI news. This one's not good if it's true, but Elon Musk's DOGE team has been accused of using AI to monitor federal agencies for anti-Trump sentiment. Yeah.
It's not good if true, especially for our viewers in the U.S. So, Elon Musk's Department of Government Efficiency, or DOGE, is reportedly leveraging artificial intelligence to monitor federal agency communications for perceived hostility toward President Donald Trump, raising concerns about transparency, legality, security, and
ethics. So this is according to a report from Reuters. According to this report, Trump administration officials claim DOGE is surveilling communications within at least one federal agency to identify anti-Trump sentiment using AI, marking a controversial use of the technology in government operations. So the Environmental Protection Agency is allegedly one of the agencies targeted, with AI monitoring employee communications for perceived disloyalty to Trump or Musk's agenda. Ugh, that's gross.
I don't know. I don't want to go on an accidental Hot Take Tuesday. This is the AI news that matters, so we'll try to keep it middle of the road here. But DOGE staff are accused of bypassing traditional vetting processes and operating in secrecy, including working collaboratively on Google Docs instead of circulating official drafts, according to sources.
So the Trump administration argues DOGE is exempt from federal freedom of information laws, you know, that thing protecting our democracy, prompting legal challenges from watchdog groups demanding transparency. A federal judge recently ordered DOGE to hand over records to Citizens for Responsibility and Ethics in Washington, but at least so far, the organization has not done that. So
critics, including Democrats and Republicans, argue that DOGE's actions are part of a broader effort to purge nonpartisan public servants and replace them with loyalists aligned with Trump and Musk's agenda. So Musk has previously suggested AI could replace government workers entirely, raising concerns about the ethical implications of automating federal operations.
Yeah, Joe says this is disgusting. Douglas says no Mellow Monday, bring the heat. Fred is saying kind of Big Brother. Yeah, this is not good. The fact that this is reportedly happening in the federal government is disgusting, absolutely bonkers. All right. Put your politics aside, right? Whether you're a Republican, a Democrat, a moderate, it doesn't matter. The fact that we have
reputable reports that the government is using AI to spy on federal workers and to make sure their communications align with a certain organization's agenda. Again, throw out your political bias.
It's absolutely bonkers. And if true, an extremely sad day for democracy here in the U.S. So I don't know. I'll just say gross. Hopefully it's not true, but it probably is, which is extremely troubling.
All right, on to news that isn't indicative of democracy dying. Canva has expanded beyond its normal visual design suite and is going all in on everything AI. So Canva has unveiled its new Visual Suite 2.0, including a lot of new AI updates like Canva Sheets, a visually dynamic spreadsheet tool that integrates AI-powered features like Magic Insights and Magic Formulas. The platform aims to rival traditional spreadsheet tools like Microsoft Excel by focusing on visual storytelling, offering bright and engaging designs that Excel doesn't prioritize. And not just Canva Sheets, there's a new AI-powered
charts app that complements Canva Sheets as well. Also, there is this new Canva AI kind of interface, so you can work with a simple text prompt like you could in ChatGPT. There are a couple of different new features in this dedicated Canva AI section. There's a design assistant. It can generate and render code, right? So it's Canva trying to get into the IDE game as well, like a slimmed-down version of Cursor or something like that. And also, you know, your normal kind of design: creating social media designs and then being able to iterate on a design based on both reference images and follow-up text prompts in natural language.
Also, the Designed For Me feature got a big upgrade and allows users to upload images and receive AI-generated design suggestions, while Canva's AI-powered photo editor can inject objects into AI-generated backgrounds with realistic lighting adjustments. Yeah, that one I did watch online.
Highlights of the keynote, anyway. I didn't get to watch Canva's whole keynote because I was at the Google Cloud Next conference. So again, thanks to Google for partnering with us on that one. But it looks pretty impressive, right? But let me just say, a lot of these features right now aren't out. They're quote unquote coming soon. So who knows when we'll get all of them. When we do, though, this is pretty impressive, right? Who do I not want to be right now? I don't want to be Adobe.
Canva, some power plays, right? And then, I mean, everything that we talked about last week in our AI News That Matters segment with ChatGPT and their new GPT-4o image generation,
extremely impressive, right? For graphic design, iterating on photos, editing photos at a very high level with prompts. The same thing inside Google: you can do the same thing with their new Gemini 2.0 Flash, I believe it is, which is multimodal by default. You can upload your own photos and edit them with text prompts. So between what Google and OpenAI and now Canva have released,
there's a lot of pressure on Adobe, not just to see how they respond, but to really see where their position is going forward.
And oh yeah, apparently, I put this out in the newsletter, I think on Friday, y'all want to see a lot more coverage on Canva. I had no clue. Apparently our audience really wants to know and see everything that Canva has to offer with its new AI suite. Livestream audience, let me know if that's you too. Maybe that's just our newsletter audience, but I'd like to know. Douglas says it's time for a Canva show. All right, so we'll see. All right, more drama.
There's always drama in the AI world. So OpenAI has filed a countersuit against Elon Musk, accusing him of attempting to hinder its business and seize control of AI innovation for his personal gain. So OpenAI, in their just-filed countersuit, alleges that Elon Musk, obviously a former co-founder of OpenAI going back to the early days, has engaged in quote unquote bad faith tactics to slow down OpenAI's progress and gain control of leading AI technologies. The company claims Musk's actions are driven by personal motives rather than a commitment to advance AI for humanity. So Elon Musk, in a series of, I don't know, I've lost track of how many different lawsuits Elon Musk has filed over the last year and a half against OpenAI, but he's previously
sued OpenAI multiple times, and in at least multiple updates he's sued OpenAI CEO Sam Altman to block changes to the company's corporate structure. So Musk was an original co-founder of OpenAI but left the organization in 2018, citing disagreements over its direction. A federal judge has set a trial date for March 2026 in Musk's lawsuit against OpenAI, fast-tracking the legal fight.
So in Musk's original lawsuit, he accused OpenAI of straying from its original mission, which is not illegal, by the way.
So, yeah, I talked about this. Let's just be honest: Musk's original lawsuit is a joke. I don't think anyone believes it has legal ground to stand on. It's more or less a delay tactic, right? And I have to say that because there have actually been organizations I've talked to over the last few months that have been like, oh, I'm not sure if we're going to start using ChatGPT because, you know, Elon Musk is suing them. And it's not a real lawsuit; it is a delay tactic here. Not that I'm trying to choose sides, but I would say anyone that can read can go in and see this is just a delay tactic. And obviously, Elon Musk has a
competing company in xAI and its Grok chatbot. So yeah, and now a position of power in the government. So it could get a little messy. So in short, Musk made a kind of tongue-in-cheek unsolicited $97 billion bid to acquire the company earlier this year. And in response, OpenAI CEO Sam Altman posted a sarcastic offer to buy Twitter instead for $9.74 billion. So Musk's competing AI company, xAI, has struggled to match OpenAI's success, so now Elon Musk has instead resorted to filing what a lot of people are calling frivolous lawsuits that are really just delay tactics and are actually just making it harder for Open
AI to transition from a nonprofit to a for-profit company. So in the countersuit, OpenAI insists that Elon's actions are self-serving and not aligned with its mission to benefit humanity. In a statement, the company said, quote, "Elon's never been about the mission. He's always been about his own agenda." All right. So we will continue to have more juicy AI developments. It's like a soap opera. Grab your popcorn.
All right. Our next piece of AI news: Anthropic has unveiled a new premium subscription tier called Claude Max. So Anthropic's new subscription plan, Claude Max, comes in two tiers: a $100-per-month option offering 5x higher rate limits than the normal Claude paid plan, and a $200-a-month plan with 20x higher limits. So the higher tiers provide priority access to Anthropic's latest AI models and features, catering to power users who require extensive usage.
So the launch follows the success of OpenAI's ChatGPT Pro, similarly priced at $200 a month, which reportedly contributed $300 million in annualized revenue to OpenAI just two months after its release. So Anthropic is reportedly hoping for similar financial gains.
So Anthropic's product lead hinted at the possibility of even pricier subscription plans in the future, although Anthropic currently does not offer unlimited usage like OpenAI does. So the company has reportedly seen significant demand for its new Claude 3.7 Sonnet AI model, which uses advanced reasoning capabilities to deliver more reliable answers to complex questions.
Live stream audience, what do you think about this? I have thoughts. I have thoughts. Let me just say this. When OpenAI announced their $200-a-month Pro plan, with it came exclusive access, at the time, to certain features and modes that were not available elsewhere. As an example, o1 Pro mode, which, even though we now have Gemini 2.5 Pro as the world's most capable model, I still use a ton, especially when dealing with a lot of data or working with extremely complex business tasks that require a lot of reasoning. Also, at the time, OpenAI said, hey, you get unlimited usage with the $200-a-month plan, as well as, at the time, Sora access. So if you wanted to access Sora, OpenAI's AI video model, that was the only way.
So with Claude Max here, Anthropic's $200-a-month subscription, I don't see any of this. I don't see any of this, right? I had an episode. Let me just go ahead and see if I can find what episode this was, in case you want to go listen to it. It was a while ago. It was...
Let's see, where was it? Oh, all right. I'll put it in the newsletter. But I did an episode a couple of months ago, essentially on the three reasons why I thought Anthropic's Claude was failing, right? It was just losing its footing as a top AI lab. And one of the reasons was the rate limits, right? So on the paid Claude plan, and I kid you not, y'all, I'm always testing the boundaries of all the different AI models, I subscribe to almost each and every one if it has a paid plan, especially quote unquote AI chatbots, I have the paid plan. If you looked at Claude the wrong way, it rate limited, right? Whereas even with ChatGPT on the $20-a-month plan, the limits are pretty generous, right? You could have dozens and dozens of messages in an hour and you're not going to hit your limit. With Claude on the $25-a-month paid plan, it's very easy to hit the limit, especially if you're working in a large context window, which is one of Anthropic's reported benefits, right?
I would hit my rate limit in less than 10 minutes, consistently. Yes, I'm a power user. Yes, I'm usually having multiple tabs open, working in long contexts, but I don't hit those limits when using Google Gemini, when using OpenAI's ChatGPT, when using even the $20 version of Microsoft Copilot, the web-based version, or any other AI model. So Claude has been absolutely terrible from the get-go in terms of letting people actually use their product, right? So there are a lot of reasons why I am extremely bearish on Claude and Anthropic in general, but this does not give me any confidence in the long-term stability of the company, right? To me, this shows, well, you probably should have had your rate limits
fixed to begin with. Yes, there are going to be probably thousands of people, maybe more, using this and enjoying it, because for those people who are using Claude on the front end, running into rate limits, and who would gladly pay more, this is good for them. But I don't see this ultimately being something that is going to help Anthropic in the long run. All right. So Fred said maybe it's episode 301. Gotta love it when our live stream audience, thanks, Fred, holding it down for Chicago as well, gotta love it when our live stream audience is better at finding information on Everyday AI than I am. All right. So y'all, if I'm being honest, if I were buying stock in a company right now, and I know Anthropic is not a publicly traded company, I would avoid Anthropic.
I'm just saying. Yes, I use it. Yes, there are still some specific use cases that I think it is the best at. But at least since Gemini 2.5 Pro from Google came out, those advantages for Anthropic are very, very slim.
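For listeners building on these APIs, by the way, the standard way to live with a provider's rate limits on the client side is a token bucket: you get a burst allowance, and tokens refill at a steady rate. Here's a minimal generic sketch of that technique; it is not Anthropic's or OpenAI's actual server-side limiter, just the textbook pattern.

```python
import time

# Minimal client-side token-bucket rate limiter: allow a short burst,
# then throttle until tokens refill over time. Generic sketch of the
# standard technique, not any provider's real implementation.
class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens refilled per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill based on elapsed time, then try to spend one token."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, capacity=2)
print([bucket.allow() for _ in range(4)])  # burst of 2 allowed, then throttled
```

Wrapping your API calls in something like this keeps you from slamming into the provider's limit mid-workflow, which is exactly the experience I keep describing with Claude on the front end.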
All right. Our next piece of AI news: Microsoft is slowing down or pausing some of its ambitious data center construction projects. According to reports, that includes a projected $1 billion facility in Ohio, signaling a recalibration in the tech giant's approach to AI infrastructure expansion. So Microsoft confirmed it is
halting early-stage construction on land in Ohio and will reserve some of the sites for farmland.
So the company has been executing its largest infrastructure expansion in history to meet surging demand for cloud and AI services, but is now reevaluating some projects due to evolving customer needs. And that is according to Microsoft's president of cloud computing operations.
So other paused projects include the later phases of a large data center in Wisconsin, and analysts report that Microsoft is scaling back international expansion as well, canceling U.S. leases for third-party-operated data centers.
So it should be interesting, because Microsoft did announce plans to spend over $80 billion globally on AI infrastructure this fiscal year, having doubled its data center capacity in the past three years. However, it is strategically pacing its investments
to align with some of its bigger business priorities and evolving customer demand. So this is definitely a story to keep an eye on. And a lot of people are going to wonder: is this because you have companies like Google, at least right now, coming in with cheaper prices? Is this because of open source models? Are people using, as an example, Chinese models like DeepSeek?
So this is definitely something to keep an eye on from Microsoft. And another thing to keep an eye on from Microsoft, our next piece of AI news: Microsoft has released a handful of new AI updates in a new operating system update, including the controversial Recall AI feature. And that feature has faced both backlash and multiple delays over privacy concerns.
So Recall AI automatically captures encrypted screenshots locally so users can search through past activities such as apps, websites, images, or documents. And you can find that content by just describing it in natural language. So yeah, when Recall AI was announced, I'm like, this is crazy. For me personally, I don't care about my data privacy. I don't.
Right. I don't care. It's like, okay, I'm on all these websites, I'm working in all these documents, I'm creating these presentations. I don't care if my local computer sees it, because a lot of this runs on-device, at the edge, with encryption. Yes, take it. I don't care. I know not everyone has the same lax approach to data, especially if you are an enterprise company, so I get that, right? I'm a smaller company, Everyday AI is a smaller organization, so I'm not as worried. But I mean, the technology is just extremely impressive and very useful, right? So imagine you're like, oh man, what was that document two weeks ago? I was emailing someone about it. You know, it had that green cover, and it was something about marketing our new product. Yeah.
You can literally just say that. You can go into Microsoft Recall and say, what was that email and that presentation from a couple of weeks ago? It was green and it was about marketing our new product. And because it takes these consistent, ongoing, encrypted screenshots, and it can use Copilot and Copilot's underlying GPT-4o computer vision technology, it can instantly tell you that.
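Conceptually, that "describe it to find it" retrieval works like searching descriptive metadata extracted from each snapshot. Here's a toy, purely illustrative sketch; Recall's real pipeline uses on-device models over encrypted snapshots, and these dictionaries and field names are invented for the example.

```python
# Toy sketch of natural-language search over snapshot metadata.
# Each "snapshot" here is just a dict of text extracted from a screenshot;
# we rank snapshots by how many query words appear in that text.
snapshots = [
    {"app": "Outlook", "when": "2 weeks ago",
     "text": "deck with green cover about marketing our new product"},
    {"app": "Chrome", "when": "yesterday",
     "text": "flight prices for a vacation to Lisbon"},
]

def search(query: str, items: list) -> list:
    """Return snapshots sharing at least one word with the query, best first."""
    words = set(query.lower().split())
    scored = [(len(words & set(s["text"].lower().split())), s) for s in items]
    return [s for score, s in sorted(scored, key=lambda p: -p[0]) if score > 0]

hits = search("green presentation marketing new product", snapshots)
print(hits[0]["app"])  # Outlook
```

The real system replaces this crude word overlap with computer vision and semantic matching, but the user-facing shape is the same: vague description in, the right screenshot back out.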
I think it's extremely impressive, but there have been a lot of issues early on, because earlier versions of Recall faced criticism for storing sensitive information, including passwords, credit card numbers, and private keys, as unsecured text files. That led to widespread backlash from initial users testing it, privacy advocates, and cybersecurity experts.
So in response, Microsoft has added new safeguards to the feature, including requiring users to explicitly opt in. So where it used to be an opt-out feature, it is now an explicit opt-in to saving those snapshots, which are encrypted in local storage. And access to these snapshots is protected by Windows Hello authentication to ensure only the user can view them.
So right now, this new rollout, the tool will be initially available to Windows Insider members and optimized for select languages, including English, Chinese, French, German, Japanese, and Spanish. And it
will reportedly roll out to the European Economic Area later this year. So yeah, it doesn't just work on any PC. You do need a newer PC, such as a Copilot+ PC. Minimum requirements are 16 gigabytes of RAM, 256 gigabytes of storage, and
on-device edge AI. Microsoft also introduced some additional features in the same build, including Click to Do, which simplifies actions like text copying, object erasing, and background removal through intuitive
shortcuts. Users can activate it with the Windows key or search for it in the taskbar. Other improvements just rolled out in this newest update include enhanced Windows search capabilities, a speech recap feature for live transcription and quick access to spoken content, and updates to productivity tools like Paint and Photos. All right, so if that's something you're interested in, be on the lookout for it.
Yeah, Big Bogey Face says it's a security nightmare. I get that. It can be, right? So with this newest release, where Microsoft addressed some of those security concerns and added the new safeguards, we'll see how the public responds as it finally begins its rollout to Windows Insider members in those countries.
All right. Our next piece of AI news: a leaked internal memo from Shopify CEO Tobias Lütke has gone viral because, essentially, he's saying all of his employees need to absolutely use AI, always, right? I'm generalizing here. We covered this in our newsletter, but more or less his memo was like: hey, if you have a problem, you'd better be using AI immediately to solve it before even going to other humans.
So he outlined this in a post that was originally leaked, but then he just said, hey, it seems like people are talking about this, so here it is. He did eventually release the full internal employee memo after it was leaked. In it, he essentially outlined his bold vision for integrating AI into every aspect
of Shopify's operations. The memo signals a major cultural shift in how businesses approach AI adoption. Some of the key takeaways: he said that using AI effectively is no longer optional, regardless of your job role, marking a significant operational and cultural shift for the company. And then there's the part where
I personally love this. A lot of people hate it, and that's why the memo kind of went, quote unquote, viral online: he said employees must justify non-use of AI before requesting additional resources. That part, I think, is one of those things where people were like, wait,
you have to justify not using AI if you're requesting additional resources, whether that's more people on a project, more resources, more budget, et cetera. In those cases, if you did not use AI, you first have to justify why not, right? So maybe you need a researcher on a project. Instead, you have to justify why all the AI research tools could not
do the job. So it's definitely a bold shift, but, let me be honest, this is going to be very common in the coming months,
as more and more enterprise companies go all in on generative AI, whether that's from Google, so if you are a Google team using Google Workspace, or whether you are a Microsoft team. Apple, unfortunately, we might be waiting until like 2032 for usable AI in their operating system, if recent trends are any indication. But I think for
everyone else, right? And even if you are an Apple team, you're probably using Google's suite of products anyway.
There's a good chance you are. This is common. I think this is actually a no brainer, right? If I had a large organization and I had people requesting additional people to work on a project, to hire someone new for a project, for additional ad spend, for additional dollars, I'd be like, have you used every single AI resource available to us yet?
That's what I would ask, right? So I know a lot of people are losing their noodles over this and it's like, oh, you know, employees are having to justify not using AI when requesting additional resources. For me, it's like, duh, right? It's like, have you done your job?
Yeah. Do you know there are these deep research tools that I think perform at a level probably higher than junior consultants? So consultants just getting started at Big Four consulting firms: if you actually know how to use these deep research tools, they're probably just as good, if not better, right? Because they're faster, and when you use them correctly, they can be just as accurate. So
I know people are saying this is controversial. I don't think so. All right, last but not least, as we wrap up, y'all, Google at Google Cloud Next, I think, undoubtedly took the lead from OpenAI. Yes, OpenAI, I think, is better as a single application, like we talked about with Google and its Gemini suite of products,
It's a little disjointed, but that's also because of Google's vast offerings across the ecosystem, right? So yes, there are essentially four different ways to use Gemini. You can use it at gemini.google.com, so the chatbot would be number one. Number two would be AI Studio, which is the sandbox. Number three would be Vertex, which is more for
developers and enterprise users. And number four would be across the different Workspace platforms. So if you're in Google Docs, as an example, you can use Google Gemini. So it's a little
difficult. So that's why I think even though Google is probably in the lead when it comes to AI models, I still think OpenAI is in the lead when it comes to picking an AI operating system, which is something I've talked about in this show as well. Because from the application layer, it's a little simpler, right? And it can be
Pretty difficult, if I'm being honest, for organizations to take advantage of everything that Google just announced. There were so many things that they announced. Even executives that I talked to at the conference were mind-boggled at how much AI they had to offer. So I think it's a little easier to take advantage of it in OpenAI. But yet, we may be getting five new AI models from OpenAI starting today.
Yeah, so even by the time you hear this on the podcast (you know, since I have to improve this audio a little bit, since my mic is, I don't know, in some TSA guy's locker at Chicago O'Hare), we could be getting five new models from OpenAI this week. Here's what we could be expecting. So OpenAI is going to be releasing, according to some leaks and some drips,
five new models this week, and those might include GPT-4.1, 4.1 nano, 4.1 mini, o4-mini, and the full o3. Okay, a little confusing.
I know. I'd say there are probably three big series here. So one would be GPT-4o getting updated to 4.1. We're not sure if it's actually going to be called 4.1o; that would be confusing if they adopted the "o" Omni naming, but I get it, because their reasoning models have the o in front. So it looks like OpenAI may just be dropping the 4o
and going with 4.1, even though you might be saying, wait, there's GPT-4.5. Yes, but it does look like OpenAI is trying to differentiate and have three different tiers of models. So it looks like GPT-4.1 will be kind of the common everyday workhorse, which is your IQ model.
Then 4.5, which is not getting updated in this latest round, according to reports, it's kind of the EQ model, the emotionally intelligent model. And then you have your thinking models or your O-series models. And even there,
It gets so confusing because you have o1, you have o1 Pro. Then you have the o3 models, such as o3-mini and o3-mini-high. Now, according to reports and leaks, we're going to be getting the full o3 as well as o4-mini. So not only are there technically now three different types of OpenAI models, but even on the reasoning side, the o-series, there might be three different
tiers of thinking models within that third type. Confusing, I know. So,
like I said, GPT-4.1 is probably going to be the headliner of this series. I don't know, maybe people will want the full o3 or o4-mini. We'll see how the benchmarks and the Elo scores come out in terms of human preference. But GPT-4.1 is expected to be a successor to the multimodal GPT-4o, designed for enhanced versatility and performance.
Meanwhile, o4-mini would be its latest, in theory, reasoning-focused model, highlighting OpenAI's push into specialized AI applications.
So the updated model art and icons for these new releases were spotted on OpenAI's website. Also, OpenAI CEO Sam Altman said on Sunday on Twitter: we've got a lot of good stuff for you this coming week, kicking it off
tomorrow. And then on OpenAI's actual website we had, you know, kind of leaks, kind of not; some online sleuths found the image art for these new models. Anytime OpenAI releases a new model, they usually have a nice little gradient
image that goes with it. And it was confirmed that, at the least, there are those images for GPT-4.1, 4.1 nano, 4.1 mini, o4-mini, and o3. My gosh, alphabet soup. I will say this: we'll obviously be covering all these releases as they come out. It is a little confusing, I get it. But here's the reality.
GPT-5, it looks like it's just getting kicked even further and further out, which me personally, I'm fine with, right? So previously, CEO Sam Altman said that they would not be releasing more non-reasoning models before GPT-5 came out. And they also said GPT-5 was going to be more of a system, right?
So it wouldn't necessarily be a model you choose, but instead GPT-5 would be a system that you would talk to. And then essentially it would have kind of hybrid models or it would have an intelligent system that depending on your prompt, it would choose for you which model it should use to complete your query or to produce an output for whatever input that you entered.
Personally, I'm not looking forward to that GPT-5 ecosystem. Hopefully, whenever we get GPT-5, we still get the option to override whatever the intelligent system decides. Because, let me be honest, when and if GPT-5 is released, and if it is released as it has been kind of teased or promoted, I think power users
aren't going to like it. I think power users like the option to pull down and choose, right? Last night, I used five different OpenAI models, each for a specific reason. Whether it was o3-mini or o1 Pro or GPT-4o or GPT-4.5, I needed that specific model for the specific query I was putting in there. Would a GPT-5
system be able to get that right? Maybe. But I'm guessing, most of the time, it would either require way more prompting from power users, or having to rerun it because you're like, oh, this is not what I was wanting the model to do. Whereas if you choose the model from a dropdown menu, I think it's a little easier. Oh gosh, I am worn out
from all that AI news. All right, y'all. Let me know also: which one of these should we cover more this week? I think we have an open slot or two. Apparently a lot of you all wanted to hear more about Canva. And I think we are going to do a dedicated Google recap from their Google Cloud Next conference, because it was insane. Like I said, I wrote down more than 27
announcements that I thought would be extremely helpful to the everyday user, and we haven't even covered them all. But let me know, livestream audience, what do you want to see more of? If this was helpful, please let me know. But let me do the very fast recap of the AI news that matters for this week, April 14th. So Google went absolutely nutty and unveiled a dozen major AI updates to both Gemini and its other platforms at Google Cloud Next.
OpenAI released a new ChatGPT memory feature that allows it to pull from all past chats. Next, according to reports, Elon Musk's DOGE team is accused of using AI to monitor federal agencies for anti-Trump sentiment.
Canva announced at its conference that it's expanding beyond visual design and going all in on AI with a lot of different features, including an AI coder. OpenAI is now countersuing Elon Musk over alleged tactics to slow the company's AI progress. Anthropic has unveiled an expensive new $200-
per-month plan and a $100-per-month option that doesn't really do anything too new aside from giving you higher limits. Microsoft has paused a $1 billion AI infrastructure plan in Ohio. At the same time, Microsoft is rolling out and ramping up its controversial Recall AI feature in a new Windows 11 preview build.
Shopify's CEO has declared that AI proficiency is mandatory for all employees in a viral company-wide memo. And then, last but not least, OpenAI is set to launch potentially up to five new AI models, with
probably at least the kickoff starting today. So we may be getting GPT-4.1, GPT-4.1 nano, 4.1 mini, o4-mini, and o3. My gosh, a lot going on. I hope this was helpful. You don't have to waste hours every single day trying to keep up with the AI news and AI developments; you can come and stop by on Mondays for our AI News That Matters series. So I hope this was helpful. If so,
Please share this with someone. You know, click that little repost button; if you're listening on Twitter or LinkedIn, it takes you like five seconds to do that and keep your network, your family, your friends, your coworkers more up to date with AI developments, because it's changing everything about how we work. Our team puts in countless hours giving you up-to-date,
free, unbiased information. So please, if you find this helpful, share this with others. If you're listening on the podcast, I'd appreciate if you click that follow button on Apple or Spotify. If you could leave us a rating as well, then go to youreverydayai.com and sign up for the free daily newsletter. I hope this was helpful. Absolutely crazy week in AI news. I will see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.