This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
There's an AI talent war going on. AI is doing Salesforce's job and one new AI from Google could kind of wipe out a big part of Claude's business. All right. There's a lot of drama, new developments, and some pretty big breaking news happening in the AI world. And if you don't have hours to spend each and every day,
keeping up and trying to figure out what matters and what doesn't. Well, don't. Instead, spend your Mondays with us at Everyday AI.
What's going on, y'all? My name is Jordan Wilson and welcome to Everyday AI. This is your daily live stream podcast and free daily newsletter helping everyday people like you and me not just learn what's happening in the world of AI, but how we can leverage that information to grow our companies and our careers. If that's what you're trying to do, Mondays are a great way to start. Yeah, you don't have to have a case of the Mondays when you can just keep up and get ahead with everything that's happening in the world of AI news. That's our weekly AI news that
Matters show on Mondays. So thank you for tuning in to this unedited, unscripted live stream and podcast. But you can really take it to the next level with our free daily newsletter. So if you haven't already, please go to youreverydayai.com. There, not only can you sign up for that free daily newsletter,
where we recap each day's podcast and give you everything else you need to know happening in the world of AI. But also on our website, there's more than 550 back episodes that you can go listen to, watch, read at any time, all for free. It is a generative AI university. So without any further ado, let's get into it. Live stream audience, good to see you.
Joe, getting sweaty all up in Fort Lauderdale. Thanks for joining us. Brian, joining from Minnesota. Good to see you as always. Marie, we got the trade partners happening here on the YouTube machine. Nathan joining us from Australia. Good day, Nathan. Let's get after it, Dr. Harvey Castro and Michael, shall we? So first piece of AI news, the AI labs,
are winning copyright suits. We never would have thought that, right? So federal judges have allowed AI firms to train on copyrighted books.
So federal judges in California ruled in the past week for two different AI companies. And they said that AI developers, Anthropic and Meta can legally train their large language models on copyrighted books, marking a significant early victory for the AI industry. But.
This isn't the end of either of these two cases. FYI, I would expect both of these to head to higher courts. But these initial rulings from federal judges are some of the first in the U.S. to address how copyright law applies to AI systems, setting some early precedents but leaving the larger legal fight unresolved.
So Judge William Alsup determined that Anthropic's use of millions of copyrighted books to train its chatbot Claude was legal for the books the company paid for, but said that Anthropic must still face claims regarding a pirated library of over 7 million books. I can just see Anthropic now hiring hundreds of interns to go to rummage sales and find as many of those 7 million books as possible to go scan them, because that's reportedly what Anthropic did: they bought the books, they kind of took the covers off and scanned them, and then they were able to prove, at least somewhat, in court that, hey, we don't spit out
verbatim copies of these books. So in this initial ruling for Anthropic,
They're good to go, at least on some of the books. In a separate case, Judge Vince Chhabria sided with Meta in a similar lawsuit that Meta was facing over copyrighted materials, finding that arguments from a group of 12 authors, including Sarah Silverman, were insufficient to establish copyright infringement related to Meta's Llama AI model. So yeah, essentially what the Meta ruling said was, the judge more or less said, yeah, we can't really rule against Meta because the plaintiffs did kind of a poor job of going after Meta for specifically what they were asking for. So intellectual property experts cited by
the Associated Press say that these cases are likely to be appealed and could eventually reach the U.S. Supreme Court, meaning the ultimate outcome for AI training on copyrighted works remains uncertain in the long run, but a pretty big win in the short term for AI companies.
So right now, US copyright law, it's a little messy. And I know we have listeners from all over the world, but here's the thing. The biggest AI companies are in the US. So I do assume that whatever precedents are set here from a legal perspective, even though those may not be applied uniformly across the globe, obviously, that's going to be the
benchmark that other courts in other countries start with, right? Whatever rulings, if any, were made in the US because that is where the biggest AI players are.
So U.S. copyright laws grant creators exclusive rights over reproductions, distributions, and some derivative works, raising questions about whether AI-generated outputs might infringe on those rights. So legal experts warn that while these rulings give AI companies some breathing room, the broader issue of compensating rights holders and limiting AI outputs that resemble original works is far from settled. And I would agree with that. And
This isn't going anywhere anytime soon. And like we've talked about previously on the show, the biggest domino to fall is obviously going to be the New York Times versus OpenAI case that has been open now since late 2023. It was December 2023.
And I do assume that whatever happens there will have a cascading effect on the rest of the industry. And another big one that could fall before that is the Disney and Universal lawsuit against AI image generating company Midjourney that we covered on our Hot Take Tuesday about two weeks ago. So if you care about copyright law or if you're just intrigued, those are some big cases to keep an eye on.
outside of these two that were ruled, at least temporarily, in favor of Meta and Anthropic. So yeah, what do you guys think? What do you guys think? I like this comment here from Michael on YouTube. He said, OMG, AI companies are about to ravage the free little libraries. Yeah, yeah. It's gonna be interesting. Even on this very small,
piece where the judge essentially said, hey, Anthropic bought these books and scanned them, and the authors weren't able to show that these large language models directly reproduce their works. So, yeah, I'm wondering if there's going to be a resurgence of book buying specifically just to train large language models. Yeah.
Clearly, it's already happening. And I would guess with this recent ruling, yeah, that might set a wave of similar moves in motion. All right. More lawsuits. This one with OpenAI. So OpenAI has been hit with a lawsuit from iyO, a startup led by Jason Rugolo, over the name of OpenAI's upcoming hardware venture with Jony Ive, called io, for what they are saying is a hardware device. Not a lot of details known, but more or less, this is a trademark battle. And a lot of people thought that this partnership between OpenAI and famed Apple designer Jony Ive was
kind of halted because it was essentially scrubbed from a lot of places online, including OpenAI's website. So a lot of people were like, oh, this means the project has gone under. It looks like it's just a temporary trademark dispute. So more on this.
The dispute became public after OpenAI CEO Sam Altman posted private email exchanges with Rugolo on Twitter, showing that the latter had pitched OpenAI for a $10 million investment and later sought a partnership.
Altman declined to invest, citing that OpenAI was, quote unquote, working on something competitive, and later referenced Jony Ive, the former Apple designer, as leading OpenAI's new hardware division. The lawsuit, though, claims that OpenAI knew about iyO. I think that's how I'm going to say it out loud here, right? And then there's io, which is the OpenAI and Jony Ive collaboration. So io versus iyO, I-Y-O. All right. So the lawsuit claims that OpenAI knew about iyO's technology and branding through meetings with Altman's investment firm and Jony Ive's design company as early as 2022, according to the lawsuit.
And like I said, OpenAI did recently scrub all io branding and mentions from its website after a court granted a temporary restraining order in iyO's favor on June 22nd, following the trademark lawsuit that was filed on June 9th.
So OpenAI's hardware team testified that its device is not an in-ear or wearable device, which is what iyO's product is, right? It was essentially, I won't say clunky, but it's a large device.
Think of it as like an AirPod, but much larger. It almost fills your ear. And then there are some AI capabilities in there. So essentially, we got some confirmation on what this new project from OpenAI is not. So it is not something that is going to go in your ear. And it's apparently not going to be called io. They may rename it, or they may fight these trademark claims.
So yeah, pretty interesting here how Sam Altman decided to deal with this; he essentially got ahead of it after people assumed the worst. So essentially, it just went quiet, right?
All of this OpenAI and Jony Ive partnership information that OpenAI had posted about on social media and on their website went down. And then, you know, the internet rumors started swirling around, like, oh my gosh, this thing is already, you know, this multi-billion dollar acquisition is already in the tank. And then essentially Sam Altman came out, shared the receipts, shared the screenshots, right, where Rugolo was kind of hounding OpenAI to invest in his iyO technology. They said no. And now we see io, a new hardware division, come out. So yeah, I don't think this lawsuit is going to really
lead to anything, right? It's not going to take down this new hardware company that is working on a product that we said, or that we reported earlier, might not come out until late 2026. And a lot of people are saying it is kind of a third hardware device. It's not supposed to compete with a laptop or a phone. So a lot of people are saying it's kind of a hardware piece that you might slip into a pocket or something like that. And it's going to potentially pick up, you know, information from your surroundings and then pair that with your OpenAI data. That's kind of the best reporting that we've seen so far. But yeah, some slight road bumps on the path so far. Yeah, live stream audience, what do you think? Is this a legitimate complaint? I don't know. It's not the same name. It's not io. It's iyO, I-Y-O. And yeah, there are some receipts going back a couple of years showing that Jason Rugolo was trying to meet with OpenAI and Sam Altman and, you know, getting some conversations going. But, you know, ultimately, it seemed like...
Sam Altman said, yeah, we're not going to do that. We're going to just do something better. So I don't know how far back OpenAI's io naming goes, or if that's something that they came up with recently. I guess we'll find that out through court discovery if this does go that route. But I would assume that this is going to either get squashed pretty quickly, or OpenAI is just going to announce a new or different name. All right.
Next piece of AI news: ElevenLabs has launched 11ai, a voice assistant that executes tasks in real time. So ElevenLabs, which a lot of people know as probably the leader in AI text to speech, has introduced a voice-activated AI assistant that can perform actionable tasks across connected digital tools, marking a pretty big step beyond just their text-to-speech platform.
The alpha version of 11ai is now live at 11.ai, that's the website, allowing users to issue spoken commands such as planning their day, updating project management tools, or researching prospects using integrated services.
So unlike most existing voice assistants that are limited to simple conversation, 11ai is designed to carry out sequential and productive actions by connecting with third-party apps and also internal tools.
The system currently supports integrations with platforms like Perplexity, Linear, Slack, Hacker News, and Google Calendar, with connections planned to roll out weekly. And here's the pretty exciting thing. It's built on the
Model Context Protocol, or MCP, an open standard for connecting AI assistants to external tools and data, which enables integration with services such as Salesforce, HubSpot, Gmail, and Zapier. It also supports custom MCP servers for enterprise workflows.
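For anyone wondering what a custom MCP server actually looks like, here's a minimal sketch using the official Model Context Protocol Python SDK. To be clear, this is just an illustration: the server name, the lookup_prospect tool, and its canned response are all hypothetical, and it doesn't show how 11ai itself discovers or connects to a server like this.

```python
# pip install mcp  -- the official Model Context Protocol Python SDK
from mcp.server.fastmcp import FastMCP

# Name the server; an MCP-aware assistant (11ai, Claude, etc.) sees it as a tool provider
mcp = FastMCP("crm-tools")

@mcp.tool()
def lookup_prospect(company: str) -> str:
    """Return a short note about a prospect (hypothetical stand-in for a real CRM query)."""
    # A real enterprise workflow would query Salesforce, HubSpot, etc. here
    return f"No CRM record found for {company} (demo data)."

if __name__ == "__main__":
    # Serves over stdio by default, so an MCP client can launch and talk to this process
    mcp.run()
```

Once an MCP client is pointed at a server like this, the assistant can decide on its own when a spoken request like "research this prospect" should call that tool.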
I'm excited about this one. Personally, if I'm being honest, I don't use ElevenLabs a ton. It's one of those pieces of software that I've been paying for since it came out, and I don't know why; you know, they have a pretty affordable entry-level plan. And I'll just go in there every once in a while just to kind of
play around with the latest capabilities and see what one of the leaders in the space is doing. But this might change that. I might now actually be using this 11ai a lot if they continue to invest in it. So right now, the assistant uses a permissions model, allowing users to specify what actions it can take within connected applications, which could address some of the privacy and security concerns.
ElevenLabs is positioning 11ai as a direct competitor to new actionable agents, such as the new one from Perplexity, their iOS app, which I think is really good, and Amazon's Alexa Plus, the Alexa that's finally getting smarter. We saw that up to a million people have been invited to try out Alexa Plus; it's been a very, very slow rollout, right? But Alexa, through Amazon's partnership with Anthropic's Claude, is apparently going to be a little less dumb in the near future. So we'll see how this stacks up against the others. And obviously, if you follow the show at all, you've seen that
Apple's smarter Siri has gotten pushed back like, you know, 20 times. It gets pushed more than a kid on a swing. It just keeps getting punted down the road. So who knows if we'll ever actually see a smarter AI Siri that you can talk to conversationally and ask follow-up questions. But hey, I like what Perplexity and now ElevenLabs are doing by competing in a space where there is a huge gap, and ElevenLabs is also taking a pretty unique approach by leveraging the very popular MCP standard, which was developed by Anthropic and has been adopted by just about everyone.
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. All right, some breaking news here.
The U.S. Senate has struck a deal to pause state AI regulation in exchange for federal funding. That is if this big, beautiful bill, as it's actually being called here in the U.S., passes. So.
This is contingent and this is just hours old. So I haven't even seen a lot of mainstream media report on this yet, but here's what's happening and here's why this is important. So U.S. lawmakers have reached a deal to temporarily block states from regulating artificial intelligence for five years if they want access to $500 million in federal AI infrastructure and deployment funds.
according to recent reports. This one is from The Hill. So the compromise was negotiated between Senator Marsha Blackburn and Senate Commerce Committee Chair Ted Cruz, and it reduced the originally proposed moratorium from 10 years down to five. So essentially, this is part of a much larger bill, the kind of thing that happens a lot in the U.S. And if you're wondering, like, what is this? This is essentially a budget bill, and it's been dubbed the Big Beautiful Bill. I don't understand it either, but it has not passed yet. And this kind of AI piece?
Yeah, there's like dozens of things that are just grouped in this bill. You know, a lot of pork, what we call it, you know, here in the U.S. Everyone's, you know, trying to get their certain provisions in or out. So this spending bill is, you know, quite ridiculous if you look at it. And it's always ridiculous. Right. And this isn't a political thing. Right. But just how the U.S. in general and how the government spends money is ridiculous.
Anyways, the original version had this 10-year ban, and there wasn't an opt-out. That provision was apparently going to cause a lot of U.S. senators to vote against the bill, specifically Republicans, who are in the majority. So they needed to get more Republicans on board, and this 10-year AI state ban was apparently and reportedly a pretty big sticking point keeping the larger bill from moving forward. So now there's kind of this compromise: a five-year ban, where states essentially can't come up with new laws regulating AI and instead have to kind of follow the federal government's lead.
But also, now there's at least a choice. It's more or less, hey, if you want your state's piece of this federal funding, billions of dollars in federal funding for AI, broadband, and some other things, you have to kind of stop regulating AI at the state level. So
It seems like, at least right now with this latest compromise, that it's been scaled down. It's a little less strict, and instead of 10 years, it's five. Also, the updated provision now exempts state laws focused on unfair or deceptive practices, children's online safety, child sexual abuse material, and publicity rights.
So Blackburn, a vocal advocate for online child safety, highlighted the need to protect progress made by states in regulating big tech and AI. The deal comes as Congress continues to struggle with passing comprehensive legislation to oversee the digital space and protect consumers from potential AI harms. The new measure survived review by the Senate parliamentarian, Elizabeth MacDonough, who ruled it did not violate the Byrd rule and could be included in the reconciliation bill. So yeah, there's kind of a lot in the weeds here, but essentially, this 10-year ban was included in a bill where you have to vote for the whole thing, right? So this 10-year ban on states regulating AI was apparently causing a rift in the Republican party, which has the majority. All they have to do is get all of their members on board with the rest of the bill to get it sent through, and now this AI compromise could be one of the things that pushes it through.
So some lawmakers, including Senators Ron Johnson and Josh Hawley and Rep.
Marjorie Taylor Greene, have previously opposed the AI moratorium, and it remains unclear if their concerns are fully addressed in this new version. So now the Senate is expected to vote on the broader package as early as today, which includes this new updated AI provision, as they try to meet President Trump's July 4th deadline.
And that's an arbitrary deadline. You know, it's not like if this thing doesn't pass by then, it's never going to happen. But this one's pretty interesting, y'all. And I was kind of shocked to see this 10-year ban included in the original bill. And I said, you know, at least online in some online discussions, I'm like, there's a 0% chance this thing gets passed as is. Number one, it violated the Byrd rule, and there are some questions about the constitutionality of this clause. But now it is more of an opt-in. It's more of like, hey, the federal government has set aside hundreds of millions of dollars in AI funding, plus broader infrastructure money, and they're essentially saying, if you want a piece of that pie, your state essentially has to agree not to come up with any state laws regulating AI's development.
And one of the big reasons, and if you don't pay a lot of attention to politics, you know, I was a political reporter in a former life, so I understand this, right? And this happens all the time, at the federal level, the state level, everywhere, where sometimes
You know, you're voting on a bill and there's, you know, dozens or hundreds of provisions, right? So a lot of times they're trying to balance, you know, what are these things that are causing people to not vote for this whole bill so the Republican majority can pass their spending bill more or less. And this AI, you know, not allowing any states for 10 years to come up with any laws surrounding AI was absolutely bonkers.
So I still think five years is kind of wild. I would have expected maybe a two- or three-year ban, kind of aligned with President Trump's current term. But what they're doing right now is essentially making it an opt-in. Like, yeah, if you do come up with any laws regulating AI, you're just not getting this big piece of the pie.
Joe here saying, I don't have any confidence in the U.S. government to figure out how to regulate AI. States are at least a bit more agile, but I doubt they could figure it out either. You know, Joe brings up a pretty good point. If I'm being honest, I kind of agree with Joe. I've covered, you know, state, local, and a little bit of federal politics in my time previously as a reporter. And even at the federal level, I mean, you have Congress people who literally don't even know what the internet is or how it works. So I don't have any level of confidence in the federal government to legislate or regulate AI development. But I think that's the whole point.
I think right now this current administration doesn't really want any rules or regulations as it comes to AI's development because essentially it is an AI arms race, the US against China. So the federal government doesn't want any states
you know, holding their federal progress back. But, you know, all you really have to look to right now is number one, is this going to be the final compromise that makes its way in this big, beautiful bill that will be voted on any hour now? And what is California going to do? Because California right now is the only state that actually matters because that is where all of the big AI labs are located, right? Google,
OpenAI, Meta, and Anthropic as well. I guess the one big exception to that rule is Microsoft, which is headquartered in Washington. But for the most part, almost every single big AI company is in California. So we'll have to see if this is actually what happens and what California decides to do. Do they care about that federal funding, or might they just pass
some AI regulation since the majority of the world's AI development actually comes out of the state of California. So keep an eye on what California does. Speaking of California, one of the big companies there, Anthropic, just released a pretty big
update to their Claude platform. So Anthropic has rolled out some new features for Claude, its consumer-facing AI app and website, making it easier for non-technical users to build software simply by chatting with Claude. So the new update centers on
Artifacts, one of my favorite AI features, and Anthropic was pretty far ahead of everyone else by rolling this type of integration out into their platform. But the new update to Artifacts, right, and if you don't know Artifacts, if you've heard of ChatGPT's Canvas mode or Google Gemini's Canvas mode, it's essentially that.
Think of it as a side-by-side window: you can dump a bunch of data on the left side, and then Claude Artifacts will build something on the right side. And if you've been following our AI at Work on Wednesdays series that we started about a month ago, we do a lot of work inside Claude Artifacts, ChatGPT Canvas, and Google Gemini Canvas. So this new update to Artifacts essentially means you can build and share
artifacts, which isn't new, but the new thing is you can embed Claude AI technology in any artifacts that you create. So this is pretty cool, right? So essentially, now you can build an app without any code, right?
But even within that web app, right, you can embed AI functionality in there without really being a developer or even knowing how it all works, right? So you could, as an example, upload all your data, create a little web app. And then in that web app that you could then share with people, you can embed AI technology. So the users could go in and essentially use a version of Claude that lives inside of your artifact, right?
All right, so pretty exciting development there from Anthropic. But if I'm being honest, Google's had this, right? And we demoed this, I think, three weeks ago on our AI at Work Wednesday. And it's actually kind of crazy, because it's hard to find inside Google Gemini's Canvas mode, and they didn't really make a big deal out of it when they announced it at their I/O conference in May. It's just this little Google Gemini button, and it's just there and it works. And the thing I like a little bit more about the Google Gemini version of this, embedding Gemini in your Canvas creations:
It's free to use. So right now, the downside with Anthropic's version is that you have to have a Claude account. So if you share an artifact openly and publicly, right, the usage doesn't count against your limits. It counts against everyone else's. So front-end users, if they want to use that embedded Claude functionality within these new Artifacts, they have to be logged in, and it essentially counts against their usage. And as we know, if you've used Claude, or if you've heard me talk about this all the time: yeah, Claude's limits are extremely low.
So like I said, that means that right now, subscribers on the highest tier plan, the $200 a month Claude Max plan will have way more capacity to actually use this and leverage it on the backend. But yeah, right now, if you're a normal paid subscriber,
you know, a $20-a-month Claude subscriber, I don't think this is going to add a ton of utility, especially when Google has it embedded in their Canvas mode and it's free, right? Even on the back end, you don't even have to be logged in to use it. So, you know, I like this advancement from Claude, but right now, Google kind of already beat them to the punch.
All right, let's keep going on to more big tech news. So Salesforce CEO Marc Benioff revealed that AI now handles 30 to 50% of the company's total workload, a major shift that highlights how deeply AI is being integrated into business operations. So Benioff emphasized that AI is now performing tasks at Salesforce
once done by humans, including software engineering and customer service, allowing current Salesforce employees to focus on what they call higher-value work. The company also aims to have 1 billion AI agents by the end of the year, as 65% of companies are experimenting with AI agents, based on an April survey from management consulting firm KPMG.
So Salesforce's AI product, used by some big clients such as Disney, has reportedly reached 93% accuracy for tasks like customer service without human supervision. So Benioff noted that perfect accuracy is unrealistic, but claims Salesforce outpaces competitors because of its more extensive data and metadata.
So this is going to be pretty interesting to see. And this is the first time we've seen a huge company admit that their general work is being done by AI.
Could this be exaggerated a little bit? Absolutely. Could this be a play by Salesforce's CEO to get eyes on Salesforce and their Agentforce platform? Absolutely. I'm not buying it, if I'm being honest. I don't know. I still haven't decided what we're going to do for our Hot Take Tuesday.
Tomorrow, I put out a different poll on LinkedIn and a poll in our newsletter that you can sign up for at youreverydayai.com. And it was split. Some people, I think on LinkedIn, people wanted to hear about Salesforce. And in our newsletter, people wanted to hear about something else. So maybe I'll save some of my spiciest takes on this. I'm just personally not buying it, right? 30 to 50% of your work
No, because if I'm on the board of Salesforce, I'm going, like, WTF. Why isn't our revenue up then? It's hardly up at all. Why has our stock been getting straight-up battered for the past year, especially compared to other tech companies, right? So it seems like Salesforce is really trying to position itself as kind of an AI-first, AI-native, and agent-native company, yet their stock is not making that same rise as the AI companies, and their revenue isn't soaring either. So I don't know. I'm not personally buying it. What do you all think?
Our next piece of AI news: Google is going free. Yeah, they just launched their free Gemini CLI tool to bring AI coding tools directly to the desktop. And this is something that could straight up kill Anthropic's own coding tool, Claude Code.
So Google has released Gemini CLI, and CLI stands for command line interface, if you're not a dork like me. This is a free, essentially desktop coding tool that works through the command line interface, like the Terminal on a Mac, and it gives developers direct access to the Gemini 2.5 Pro model. And it's free and essentially unlimited. Well, it's not truly unlimited, but it's essentially unlimited; I think there are very few people who will be able to hit the limit on the number of requests. So it's essentially unlimited, it is free, and it is open source. And when I saw this news, and I obviously saw it before it was announced, when it was announced, I'm like, yeah.
This is going to crush. This is going to crush not only Claude Code, so Anthropic is going to have to figure out other ways to monetize, but this could also impact some of the desktop AI IDEs out there a little bit as well; not just Claude Code, but potentially Cursor, Windsurf, OpenAI's new Codex, right? So Google is making a huge play here
by making this Gemini CLI essentially a desktop coding agent, free, open source, using the world's most powerful model right now. It is still Gemini 2.5 Pro. So it allows up to 60 model requests per minute and a thousand per day at no cost.
So, like I said, it is open source under the Apache 2.0 license and is available for installation on GitHub. So developers can use Gemini CLI for tasks such as writing and debugging code, generating content, conducting research, and managing tasks; just about anything you would want to do in the software development realm.
The tool is integrated with Gemini Code Assist, which is Google's AI coding assistant, and is accessible as well in Visual Studio Code for users on the free, Standard, and Enterprise plans. So this one's interesting, right? And I'm obviously following a lot of the discussion online, and I don't know, if I'm Anthropic,
I'm not super happy about this, right? Because I think a lot of their revenue obviously is coming from people in software development, coders, engineers, et cetera. So when Google offers this free open source tool that is essentially unlimited,
Yeah, I do assume they're going to get a big chunk of Claude Code users, people that are relying on Claude Code, which is a paid tool and it's not cheap, as well as people using Cursor. So it should be interesting to see what happens here as this continues to gain adoption.
All right. Our last, well, technically a story and a half here. There has been an all-out AI hiring assault over the last week and change, with Meta poaching top researchers and developers from OpenAI. So first, Meta has hired at least eight OpenAI researchers in the last week and a half, according to reports from The Information and The Wall Street Journal.
So, again, Meta has hired at least eight researchers from OpenAI in recent weeks. And this aggressive recruitment follows Meta's April launch of its Llama 4 AI models, which reportedly fell short of CEO Mark Zuckerberg's expectations. So what happened, you know, is
Meta's Llama 4 didn't really make a big enough splash. And ever since, they've been acquiring, right? They spent $14 billion for a 49% stake in Scale AI and are having its leadership team lead Meta's new superintelligence team. So that was a multi-tiered play, because not only does Meta now have a huge financial stake via that 49% equity in Scale AI and have its founder and CEO now running Meta's superintelligence teams, but a lot of their big tech competitors have obviously stopped using Scale AI, which is really big in the AI data labeling and model evaluation space. So not only is that a big investment in terms of Meta's potential top line, but it also hurts their competitors. And now, speaking of hurting their competitors, they're just going after
everyone at OpenAI. So this hiring spree highlights the escalating competition for AI talent, with OpenAI CEO Sam Altman last week claiming that Meta has offered signing bonuses of up to $100 million and annual compensation of more than $100 million, though Meta's CTO Andrew Bosworth says the real offers are not really that big and are much more complex than that.
The rivalry between Meta and OpenAI is intensifying as both companies race to build the most advanced AI systems and secure the top AI researchers in the field. Yeah, and reportedly, these researchers that Meta poached from OpenAI are some of the researchers that helped develop their reasoning models and some of the brightest minds at the company. So pretty big.
And then our last AI news story, which is related to that. On the flip side, OpenAI is reportedly scrambling to get all their ducks in a row and strengthen their team.
So OpenAI is facing an intense talent war after Meta recruited up to eight senior researchers from the company to join its superintelligence lab. So Zuckerberg, like we said, had reportedly offered bonuses and salaries as high as $100 million, which is a figure confirmed by multiple sources, though, like we said, Meta says that's not really true. But here's the most recent news on this ongoing escalation between the two companies. So Mark Chen, OpenAI's chief research officer, reportedly just addressed staff in a forceful memo on Slack, which was reportedly leaked to or obtained by Wired, expressing a sense of violation and promising aggressive action to retain key researchers.
So OpenAI leadership, including Chen and Altman, are working, quote unquote, around the clock to talk to employees with competing offers, recalibrating OpenAI's compensation, and exploring new ways to reward and recognize top talent. So despite the push to keep researchers, Chen emphasized fairness and said that he would not retain employees at the expense of equity among staff.
So the internal memo reportedly included encouragement from other OpenAI research leaders urging staff to reach out if pressured by Meta's exploding offers and highlighted the company's support system. Yeah. So even the headline or the subhead here in Wired was OpenAI leadership responds to Meta offers, quote unquote, it's like someone has broken into our home.
Wow. So the competition for AI researchers is heating up across Silicon Valley, with Meta targeting talent from both OpenAI and Google, while Anthropic is reportedly seen as less of a cultural fit for Meta, so fewer leaders are being poached from there.
Also, OpenAI employees are reportedly working up to 80 hours a week, prompting the company to kind of shut down for the week. But this is something that they've done reportedly almost every single year around the July 4th holiday here in the US. But that's why the timing here is particularly interesting and reportedly why this very forceful memo from OpenAI went out is because OpenAI is essentially going to
have a week off, and Meta is apparently not done poaching. So it's going to be interesting to revisit this in the coming weeks and see what happens, and see if this is something that eventually tilts the scales in Meta's favor. So whether we see a Llama 4.5 or a Llama 5, I would expect
their next version of their large language model to be extremely competitive. Whereas a lot of people, including myself, didn't really expect a lot out of Meta's Llama 4, and I was not personally impressed, I am very much impressed by the recent moves from Meta and what this ultimately means for the long term.
All right. That's it for the AI news, but here's a really quick roundup of what's new and what's next: some recent releases and some rumors about upcoming releases.
xAI's Grok 3.5 is no longer. Elon Musk confirmed that they're just going to skip 3.5 and said that Grok 4 will be out around July 4th. I wouldn't hold my breath. There have been about a couple hundred delays through everything Grok has been, you know, talking about, like releasing the weights and all that stuff. So I wouldn't hold your breath on that if you're a Grok fan.
Google Gemini's scheduled tasks feature is out for paid users. So check your account for that. There's no dedicated UI for it on the front end, FYI, but there is a setting for your scheduled tasks. Google has also rolled out a Gemini model that runs locally on robots, which we covered in the newsletter last week.
A lot of new Google rollouts. Also, their Ask Photos feature, which was previously delayed for a long time, is being slowly rolled out, first to paid users, where you can essentially talk with your history of Google Photos. I'm personally looking forward to that one. OpenAI is also gradually rolling out limited connector features to standard chats. So what that means is, before,
Some of these connectors like Gmail, Google Calendar, Microsoft Teams, Outlook were only available to deep research chats, which a lot of people were like, okay, this is cool. But if I have to wait seven to 15 minutes, is it that useful? So now OpenAI has been rolling out some connectors to normal chats and not just deep research.
ChatGPT has also rolled out a Canva connector, if you're big into the Canva design space, and a Slack connector will reportedly be rolling out next. And ChatGPT, pretty big one here actually, or rather OpenAI, rolled out their deep research to their API, as well as webhooks. So that one's actually really big. So I would now expect
hundreds of new deep-research-style tools to be rolled out, even for specific verticals or types of work. So that's going to be an exciting one, now that OpenAI is opening their deep research functionality up to thousands of other tools. So yeah, keep an eye out for that. All right, that's it. That's a wrap, y'all. Let me do the world's quickest recap
of the AI news that matters for this week. So first, federal judges have allowed, at least temporarily, Meta and Anthropic to train on copyrighted materials, although we're going to see some ongoing lawsuits about that. Speaking of lawsuits, OpenAI is facing a trademark case over their device name, io versus iyO, the other hardware startup. ElevenLabs has launched 11ai, a voice assistant that executes tasks in real time using the Model Context Protocol, or MCP.
The Senate has struck a deal here in the U.S. to pause state AI regulation for five years in exchange for federal funding, if that bill passes. Anthropic has unveiled a new update to its Artifacts feature, allowing users to embed Claude functionality in their Artifacts creations. Salesforce is apparently leveraging AI to do 30 to 50 percent of its company's day-to-day work. Google has launched a free and open-source tool called Gemini CLI, or command line interface, to bring AI coding tools directly to developers. Meta has hired up to eight OpenAI researchers amid an intense AI talent war, and OpenAI, on the back end, is scrambling to retain top talent, according to a recent report.
If this was helpful, please let me know. Share this with your network. We spend so much time cutting through all the fluff, telling it to you just how it is. So if this was helpful, please tell someone about it. If you're listening on the podcast, thank you. Please make sure to follow the Everyday AI Show and subscribe. Leave us a rating or review if you're listening on Apple Podcasts or Spotify. We'd really appreciate that. And I'd appreciate you tuning in tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.