
EP 537: Perplexity goes agentic, Google Gemini updates, NYT/Amazon team up & more AI News That Matters

2025/6/2

Everyday AI Podcast – An AI and ChatGPT Podcast

People
Jordan Wilson
A seasoned digital strategist and host of the Everyday AI podcast, focused on helping everyday people advance their careers with AI.
Topics
Jordan Wilson: Perplexity is pivoting hard into agents, and that's worth looking forward to. The New York Times has entered a partnership with a major AI company, even while it fights big AI. Google Gemini is getting better for Workspace users, and users don't have to lift a finger. If you spend hours every week trying to keep up with what's happening in AI and how it impacts you, your company, and your career, stop doing that. Join us every Monday for our AI News That Matters segment. I'm Jordan Wilson, and welcome to Everyday AI, a daily livestream podcast and free daily newsletter that helps everyday business leaders like you and me not just learn AI, but learn how to leverage it to grow our companies and careers. You'll want to make sure you sign up for the free daily newsletter. We recap the most important points from each day's show and give you everything else you need to know in the newsletter. There are more than 530 episodes on our website. Most Mondays, we run AI News That Matters. If you can only join us once a week, or you always spend too much time trying to read, decipher, and understand AI news, wondering whether it's meaningful or just marketing spin, join us on Mondays. Perplexity is going wild.


Chapters
Perplexity Labs, a new agentic mode from Perplexity, enables users to create complex reports, dashboards, and web apps using AI-driven research. It leverages third-party capabilities and offers features like an app tab for building dashboards and an assets tab for downloading generated content. While not perfect, its slide creation capabilities are impressive.
  • Perplexity Labs offers AI-driven research capabilities for report and app creation.
  • It features an app tab for dashboards and slideshows and an assets tab for downloading generated content.
  • The slide creation feature is particularly noteworthy, surpassing other similar tools.

Transcript


This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.

Perplexity is pivoting hard into agents, and we're here for it. The New York Times, even though it's fighting with big AI, just entered into a partnership with big AI. And Google Gemini is getting a lot better for Workspace users, and you don't even have to lift a finger. There's a lot still going on in the world of AI news after last week,

experiencing our biggest week of AI news ever. A lot of smaller yet meaningful updates happening this week. So if you spend hours every single week trying to keep up with what's happening in the world of AI and how it'll impact you, your company, and your career, stop doing that. Just join us on Mondays for our AI News That Matters segment.

What's going on y'all? My name is Jordan Wilson and welcome to Everyday AI. This is your daily live stream podcast and free daily newsletter helping everyday business leaders like you and me not just learn AI but how we can leverage it to grow our companies and our careers. So if that's what you're trying to do, it starts here with this unscripted

unedited live stream and podcast. But where you're actually going to leverage what we learn is by going to our website at youreverydayai.com. There, first, you need to make sure to sign up for the free daily newsletter. We recap the most important points from each day's show, as well as giving you everything else in that newsletter that you need to understand what's going on. Also on our website, there are now more than 530 episodes. You can go listen,

read, or watch the video. Everything you need is sorted by category to be the smartest person in AI at your company. All right. Like I said, most Mondays we do the AI News That Matters. So if you can only join us once a week, or maybe you're just always spending way too much time trying to read and decipher and understand the AI news, and you're like, is this meaningful or is this marketing spin?

Join us Mondays. We'll get you straight. Livestream audience, it's great to see you.

Jose joining us from Santiago, Chile. Love to see it. Fred joining us from Chicago. Livestream crew on the YouTube machine like Keith. Thanks for tuning in. Christopher from Kentucky. Brian holding it down for Minnesota. Joe in Fort Lauderdale. It's a good day to be agentic, he says. Yeah, a lot of agentic news. All right, let's get straight into it. First, perplexity is going wild.

So Perplexity has introduced a new tool or a new mode called Perplexity Labs that enables users to create complex reports, spreadsheet dashboards, and even web apps, all supported by extensive AI-driven research from Perplexity.

So this new agentic mode works by spending about 10 minutes or more on self-supervised tasks, leveraging third-party capabilities such as deep web browsing, code execution, and generating visuals like charts and images or even slideshows and spreadsheets.

So Perplexity Labs builds on the company's existing AI search products, including their flagship conversational search engine and their deep research mode, which produces in-depth, well-sourced documents after extensive data gathering.

So the AI agents behind labs can organize data, apply formulas, generate charts, create text documents, spreadsheets, dashboards, and even small websites without requiring users to have coding skills or development tools.

So users can access examples and templates through a project gallery showcasing use cases like interactive war maps, stock portfolio dashboards comparing traditional and AI-managed investments, and futuristic social media platform designs. So the new tool, or mode, features an app tab for building simple dashboards, slideshows, and interactive websites, with

all generated assets, images, charts, CSV files, and code accessible for download in the assets tab. So right now, Labs is currently available to subscribers of Perplexity's Pro plan on web, iOS, and Android, with plans to extend availability to Mac and Windows apps soon. Livestream audience, have any of you guys used

Labs yet? So there's parts of it that I'm actually pretty impressed by, and then there's parts of it where I'm like, all right, this isn't that great.

One thing that I really like is what I just read there at the end, how it's all sorted, right? So a lot of times when you're using maybe other similar tools, the most similar things I can compare this to right now are Claude Artifacts, or similarly Google Gemini's Canvas, or OpenAI ChatGPT's Canvas. Very similar in that it

combines kind of this research and answer gathering from the web, but then it can create something new of value that's more than just straight text. So at least in my initial testing, the web app capability was okay.

I love being able to have that assets tab because that's something that kind of the quote unquote competitive tools or modes don't necessarily have. So if it is something that is very multimedia, that assets tab is really nice.

But what I really think it does a pretty good job on is generating slides, right? Kind of a small thing that we don't talk about a lot, but yet we all spend so much time on. They're not going to be fantastic looking slides, right? Think if you open PowerPoint and you're looking at 10 templates and there's two that are bad, there's two that are great. And then there's like six that are like, okay, these are okay, right?

I'd say in your best case scenario, perplexity labs will hit that like, oh, the okay kind of template. So this isn't something that's going to be overly designed, but it actually does a way better job than I thought specifically on slide creation, which is something that the other, those platforms, like I mentioned, aren't great at. And aside from

dedicated slide tools like, you know, Gamma or Beautiful.ai, right? There's not a lot of AI research tools that also create slides yet. Like I said,

That's what so many business knowledge workers, that's what we do. We create slides. Even right now, I have slides on my screen for our podcast audience. I always have kind of a screenshot of a news article, and then I link to it in the newsletter. So you can, well, first to give credit to the company that I kind of read the article, but then you can go back and look to it. But I create slides almost every single day.

So I tried to get something to the level of what I would use. It's not quite there yet, but it actually did a way better job than I thought when I said, hey, here's my 10 news stories that I'm going over today. Go research them, go bullet point them, and create a kind of slideshow. So it's not at the point where I'd want to use it yet necessarily, but it's definitely

passable, right? My use case is different. I like to put up my slides for a lot of people to watch on the live stream, but pretty good. It's pretty good. And I was pretty impressed. Mahan here says perplexity is innovative, will grow big. I actually have thoughts on that.

Because I said back in January, I said perplexity is going to have to pivot or they will get squashed. And what we've seen here is, well, this is a pretty impressive pivot, I'd say, right? And yes, like Dr. Harvey Castro said, great call out here, said Manus AI just added slides. That's big as well. So kind of a very popular version if you aren't following the space very closely, Manus is

an AI agent, similar to OpenAI's Operator, a computer-using, internet-using agent, and they just rolled out a slides mode about a week ago. And the good thing about the Manus AI slides is they're editable as well. Whereas in something like Perplexity, they're not necessarily editable how you would want to edit them without regenerating the whole thing. And if you regenerate the whole thing, it might change 10 things that you didn't want and only change the two things that you want.

But I think so far pretty good. Douglas saying, I have made dashboards side-by-side comparisons with Gemini Canvas. For me, I like the perplexity lab output better. Yeah, for sure. It depends on what you're trying to build, right? A lot of times what,

I'm trying to build inside these, you know, artifacts or canvas, et cetera, is something a little more interactive and visual and not necessarily asset-based or research-based even. It's more of, you know, building little mini websites or something like that, or generating certain types of code. So at least for,

My use cases, I would probably use Perplexity Labs a little bit more for information gathering. So kind of combining kind of deep research with slide formation. That's what I would use it for. And then when you need to be able to download those assets, that's what I would personally do. But I think a lot of good use cases

Also, a great call-out here. I had this in my notes for later, but yes, Perplexity does have their Comet browser that they've been slowly rolling out. They did the first rollouts about a week ago. So, pretty impressive so far.

All right. Our next piece of AI news: Hugging Face has released an open source robot called Reachy Mini, offering affordable AI hardware for developers. So the new open source robot is designed to help developers test and build AI applications. And it is priced at around $250. Yeah, $250. Wow.

Not $250,000. It's a little robot guy. All right. For our podcast audience, it's this, you know, cute little desktop-size robot, but it is significantly more accessible and affordable than other robotics hardware on the market. So Reachy Mini resembles a small WALL-E-style bust that can turn its head and interact with users through speech.

So the product is positioned as a kind of Raspberry Pi for robotics, targeting AI developers by providing affordable, customizable hardware for experimentation. Also, the open source nature of Reachy Mini means developers can modify both its software and hardware, which I think means this is going to be the thing that ultimately pushes humanoids and

robotics into the mainstream, into the home, right? I think what's going to happen is this new Reachy Mini is probably going to be extremely popular, just because it's only $250 to $300 and it is open source. So you have to know the basics. You don't got to be a geek, but you have to be dork-esque to be able to set this up and run it. But then after that, I do see a lot of households probably using this piece of hardware

to do it. And I need to double-check on the price, actually. All right. Yeah, sometimes I'm a little tired when I put all of my notes together, and I wanted to double-check on this price, because my screenshot here from the article says $3,000 and my notes said something else. So yeah, sorry, I made a mistake. It's not $250. All right. Bad human.

I just hallucinated because I get tired sometimes. I got my coffee here though. Sorry, $3,000, not 300, $3,000. Still, when you look at for comparison, like the Tesla Optimus Gen 2, that's expected to cost probably like $25,000 and it hasn't even been released, right? Even though we're like, oh, we're going to get these robots very soon, right? And then you have other

more advanced humanoid robots that are costing hundreds of thousands of dollars. So for an open source robot or humanoid to only cost $3,000? Not bad, not bad.

And I do see that cost going down, maybe not to $250, like I accidentally said, missing a zero there, but maybe to less than $1,000. I could see that happening within a couple of years. So I do see this as a huge boon for not just robotics, but in-home humanoids as well. You know, Marie said, oh, Jordan, I never thought you would hallucinate. Yeah, it's a feature, not a bug. All right, our next piece of AI news.

Well, Google Gemini is getting a little more user friendly. So Google has launched automatic AI powered email summary cards in Gmail, which now appear at the top of emails without requiring users to tap for a summary.

So this update means that Gemini will proactively summarize long email threads and keep the summaries updated as new replies arrive. That's the part that I'm looking forward to. So right now, workplace admins can control whether users have access to these summaries via the admin console, giving organizations some oversight. So if you're wondering, hey, why haven't I seen this? That's probably because your workspace admin hasn't enabled it yet.

So it is turned on by default, which a lot of people are saying from a privacy perspective is problematic, but I'll say this.

It's all right, because you're going to go in and use this feature anyways, whether it's turned on by default or not. So in certain countries, such as the EU, UK, Switzerland, and Japan, it will be opt-in, or turned off by default. In the US, when this is fully rolled out, you will be opted in by default, but you can opt out. So again, once it's enabled by your workspace admin.

So the manual option to generate a summary does still remain available as what Google calls a clickable chip at the top of emails and in the Gemini side panel. All right, another kind of small but still pretty big quality-of-life improvement for Google Drive in Workspace

is a new feature that Google has rolled out in Gemini that lets Workspace users get quick summaries and insights from videos saved in Google Drive. So, same thing: save a video inside Google Drive, and Gemini will now automatically use its AI prowess and summarize that thing instantly, without you having to go in and ask it to summarize. Which is wonderful. It's

really nice having this available inside Google Drive. This is something I spent a lot of time using AI Studio for, for this exact reason.

So Gemini inside Google Drive now supports video files too, extending its previous document and PDF summarization capabilities. So users can interact with a chatbot interface inside of Google Drive to request summaries or specific details, such as listing action items from recorded meetings or highlighting major updates in announcement videos.

This feature, though, does require captions to be enabled on videos, and can be accessed through Google Drive's overview previewer or a new browser tab. It's currently available in English only, for Google Workspace and Google One AI Premium users, plus anyone with Google

Gemini business or enterprise add-ons with a full rollout expected in the coming weeks. So you might not see this yet, but if you are on a paid Google plan, you will be seeing this feature pretty soon. So.

I don't know. Are these Gemini features worth writing home about? Are you going to use them? I know for me personally, I'm going to be using them, because these are features that, if I'm being honest, Gemini hasn't been that great at inside of its default Workspace apps. What's been great is Google AI Studio. So I'm constantly bringing long email threads over there, like super long email threads, because the context window, and being able to use Gemini 2.5 Pro, is nutty,

as well as video. So I've been using Google's AI Studio for a lot of these things now, but it's really good to have these rolling out directly to Workspace apps. Hopefully the integration and the rollout goes well. Sometimes it's super slow, especially anytime this type of utility comes to Workspace users. So unfortunately, sometimes more of the kind of great

AI features don't roll out initially to workspace users. So as an example, I'm on the new, you know, $250 a month Google Gemini AI Ultra plan, but I can't access it via workspace, which absolutely stinks, right? So I can't really use it with my business data. I have to use it with my personal Gmail account. So if I want to take advantage of a lot of the more

powerful AI features, I would have to set up an automation that sends all of my work emails and files over to my personal Gmail account. So at least it's good that Google is rolling these out into workspace. So love to see it. Next.

This is not a good look for the federal government. A government health report, led by Robert F. Kennedy Jr. and released by the Department of Health and Human Services, is causing a lot of controversy, because this paper was found to contain a ton of fake, biased, and

botched scientific citations, raising concerns over the use of generative AI in official policy documents. So this is according to reporting from the Associated Press and the Washington Post. So the White House responded to criticism by updating the report and correcting citation errors, but downplayed the issue as, ah, this is just a minor formatting thing, rather than

a huge mistake. So at least one of these studies, on the overprescribing of oral corticosteroids for children with asthma, does not exist outside of the report. All right. So,

37 of the 522 footnotes were repeated multiple times, according to reports. Also, several URLs in the report included the tag oaicite,

which is a tag linked to OpenAI. So they did not even bother to try to say, ah, this isn't AI generated. They included those citations that clearly show this was generated by a large language model. So, not a great look.

Also, some of the studies that the report talked about were cited incorrectly, or they were misrepresented or inaccurately summarized. So as an example, there was a claim in this study about a 40-fold increase in childhood bipolar and ADHD diagnoses tied to a psych...

I can't even talk, a psychiatric manual that wasn't published until years after the cited period. So, yeah, just a government report having to do with health that was released with just a ton of made up information. So not a good look when this is happening, not just at the federal government level, but

for something extremely important, like health. Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on GenAI. Hey, this is Jordan Wilson, host of this very podcast.

Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use GenAI. So whether you're looking for ChatGPT training for thousands,

or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. Marie says, seems like the administration is hurrying and rushing and not checking data first.

That is what it looks like. All right. Speaking of checking data, you've got to check this story. I was not expecting this, because Amazon and the New York Times have struck a deal. And I'll tell you why that's important after the details. So here's the details.

The New York Times has reached a multi-year agreement with Amazon to allow the tech giant to use its editorial content across Amazon's AI platforms. So the deal enables Amazon to incorporate real-time summaries and short excerpts from the New York Times and its other properties, like New York Times Cooking and The Athletic, the sports publication, into Amazon products such as Alexa.

Will Alexa finally get a little smarter? Hopefully. So Amazon will use the Times content to train its proprietary foundational AI models, helping improve the quality and relevance of its AI-driven services. So terms of the agreement were not disclosed, but the partnership reflects a growing trend of news outlets opting for licensing deals with tech firms rather than pursuing litigation over AI content use.

Here's why this is pretty noteworthy. Well, it's the first big licensing partnership the New York Times has struck. That's because they have one of the most noteworthy and, technically, famous or infamous, depending on how it lands,

lawsuits against Microsoft and OpenAI. So the New York Times filed this lawsuit in December of 2023. It is still in the courts where the New York Times is suing Microsoft and OpenAI for allegedly copying millions of the New York Times articles. So one of the things that the New York Times asked for in its lawsuit was for the GPT technology to be destroyed.

Which, again, I think has a next-to-zero chance of actually happening, because the whole world now runs off of the GPT technology. And even if it were theoretically possible to destroy it, it's kind of too late; it would essentially bring the world's economy to a screeching halt.

Yet this is extremely noteworthy, because this is the first time the New York Times is entering into an agreement with a big tech company and saying, yeah, go ahead, train your model on our data. And they won't be the last media giant to fall in line. But I've been saying this all along. Yo, I was a journalist for seven years, right? So you could say I have bias either way, right? But

There's no other option for news organizations, either. There are three routes when it comes to AI, right? Because a lot of news organizations are like, oh, well, we'll block all these web crawlers so they can't scrape our information. That won't work because, number one, the crawlers don't always listen to your instructions in your robots.txt files. Number two, if you want to show up on Google,

You don't have a choice. You can't opt out of Google AI training but still opt into Google Search. So if you want to be discoverable, you have to opt in. So you either

slowly die because people aren't going to find you anymore. If you want to opt out, you're like, ah, I don't want large language models to scrape my content. I'm going to opt out. Well, number one, good luck. Number two, you're going to die because you're not going to get new users. So you either slowly die, you sue, right? You sue all these companies, which is what a lot of media outlets and news organizations are doing, or you enter partnerships. There's no other way.

There's literally no other way. And even if you are trying to block all these AI scrapers from getting the information from your website and putting it into their training set, even if you do that, there's other scrapers out there that scrape the entire internet. They make these third-party data sets. And then the large language model and AI companies train off those data sets anyways.
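As a side note for the technically curious: the "listening to your robots.txt files" part is purely a voluntary check on the crawler's side. Here's a minimal sketch using Python's standard library; GPTBot is OpenAI's published crawler user agent, while the other crawler name and the rules file are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a news site might publish: block OpenAI's crawler
# while staying open to everything else (including search indexing).
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A well-behaved crawler asks before fetching each URL. Nothing technically
# stops a crawler that simply never performs this check.
print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
print(rp.can_fetch("NewsIndexer", "https://example.com/article"))  # True
```

The entire mechanism is the crawler choosing to call `can_fetch` and honor the answer, which is exactly why blocking via robots.txt alone doesn't hold.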

So there's no way around it. So a pretty interesting and newsworthy announcement there from the New York Times. But can we please, finally, when are we going to get this Alexa that's powered by Claude and has all this information? It was supposed to be rolling out as a paid service. I still don't have it. I will pay for that.

I will pay, right? It's obviously great to be able to talk to, you know, ChatGPT or Gemini Live, right, when I have my phone near me, but I still am calling out to Alexa or Siri a lot of the time, and I could get better answers from talking to a brick. It is mind-numbingly stupid,

right, Alexa, Siri. So please, can we get this smarter version already? I'm not counting on it from Apple, but at least from Amazon and Alexa, can we please just roll it out? All right. Douglas is saying, Jordan saying Alexa right now and setting off thousands of Amazon devices during the tech podcast. Yeah, I've heard that happens quite a bit. Sorry.

I need a code word. How can I talk about these things without saying that? I can spell them out, right? But that's a lot of work. All right, next. And I don't usually talk about a lot of announcements on the creative side, right? So, you know, photo and video tools or audio tools. But I have two this week that I think are pretty big announcements.

or from a quality perspective are worth talking about. But, you know, live stream audience, if you want to see more updates on these visual tools, let me know in the AI news roundups. Generally, I'm focusing on large language models. So generally, I'm focusing on OpenAI, Google, Anthropic,

Microsoft, Meta, right? Some of the big tech trillionaire companies, because I think those are the ones that impact most of us business leaders. So if you want more news on kind of the multimedia or creative side, let me know. But I think this one from Black Forest Labs is big enough to talk about because the quality is,

very impressive. All right. It is already on par with, if you want to say, state-of-the-art AI image generation. It's there. So Black Forest Labs has launched FLUX.1 Kontext. So podcast audience, that's Kontext with a K. All right. A new family of image-generating and image-editing AI models.

So the most advanced model is called FLUX.1 Kontext, and it can generate images from text prompts and optional reference images, delivering results up to eight times faster than leading competitors. That's according to Black Forest Labs. So the suite includes two main models: FLUX.1 Kontext, which allows for multi-step image refinement while preserving style and character, and FLUX.1 Kontext

Max, which emphasizes speed and prompt accuracy. So unlike previous models, these new versions are not available for offline download. All right. So yeah, a lot of people are like, oh, wait, isn't this open source? This one's not. All right. So you can't, you know, download this and use it right now in that way. It's only available right now in a private beta for safety and testing purposes, according to Black Forest Labs.

So they're also launching a model playground, giving users 200 free credits to try the models online on their website. So this release comes as competition heats up in the AI image generation sector with Google and OpenAI recently releasing their own advanced models.

So the company, Black Forest Labs, reportedly sought $100 million in funding at a valuation of at least $1 billion last year. They are based in Germany and were founded by former members of the Stability AI team.

So why the heck am I talking about this? Well, we've talked about, not at great length but many times, how good specifically Google Gemini is at editing images with simple text prompts. And then obviously we covered OpenAI and their GPT-4o image gen, right? Which went viral online many times. But I would say, at least today,

FLUX.1 Kontext from Black Forest Labs is on par with, or even better than, those. And here's why I think it's important enough to talk about on a news show: don't trust anything anymore. You can't view anything you see online without thinking, is this AI generated, or is this real?

Right? You should probably assume so from now on. And this is going to add, which is why they weren't making this available as an open source model, to the deepfake problem, right? To the misinformation and disinformation epidemic that's sweeping the nation, at least here in the U.S.

As these models get better and better, and you can get character consistency, and you can go in and edit photos and you can't even tell, and then those can be the base for videos in a very powerful tool like Google Veo 3, right? It's scary. You should assume

Everything you see online now, right? There's been a lot of recent stories. Maybe I'll do an entire episode on this, but people are launching fraudulent GoFundMes, right? People are obviously using this for blackmail in a lot of bad ways. This is going to be a problem. But for our audience, you should always assume from here on out,

Everything is AI generated. Everything you see, unless you know otherwise, right? Even probably me, I would tell you, right? But you should always assume because that's how good these models are now. And I think it took the general public maybe 20 years or at least 10 to 15 years to come up with this concept of, oh, things could be photoshopped.

Things you see online or something in a magazine, something you're seeing on TV. Oh, something could be Photoshopped. Something could be digitally altered. Now, I think you have to start with that as default. With anything you see, whether it's an ad, these UGC ads, whether it's something you're seeing on TV, a pre-roll message on YouTube,

Something you see from a celebrity that you're like, oh, this is interesting. I wouldn't have thought this person would have this take or even your favorite news anchor. Assume everything is AI generated until you are able to confirm it's not. So when in doubt, assume it's AI generated. And...

Adding to that, our next AI news story: ElevenLabs has launched their Conversational AI 2.0. So this is a pretty significant update to its enterprise voice agent platform, and this is just four months after the original launch of their Conversational AI platform.

So if you don't know ElevenLabs, I would say they've traditionally been the leader, or a leader, probably 1A or 1B, in text-to-speech. Although now you are seeing a lot of competitors, both open source and from the big tech conglomerates in OpenAI and Google. But the new ElevenLabs conversational platform, these new updates in their 2.0 version, feature a state-of-the-art

turn-taking model. And that's great. So, you might be wondering, okay, why does this matter? Well, just like how I said, assume everything that you interact with is AI generated, I would assume probably within a year, anytime you call a call center, you are going to be talking, at first, to an AI voice, right? Or an AI voice tree. Okay. Which isn't necessarily a bad thing,

Right. Versus talking to somebody in a call center in a different country where you can't really hear anything, or talking to one of those computer-generated ones where you're just like, you know, "human, human,

operator," right? And you're just screaming things to get past the robotic prompts. So I don't think it's necessarily a bad thing, and I do think things like 11 Labs' Conversational AI 2.0 are going to be the future of how we interface with the rest of the world, especially on the phone and on websites. So this turn-taking

advancement, I think, is what makes it noteworthy. This new model does a better job of identifying whether the human on the other end is done talking or not, right? They have some demos where someone is talking and it seems like, oh, maybe they're just thinking, taking a pause, or retrieving some information, where normally a voice AI might instantly interrupt, or

there might be a huge latency once you are done. So that's what is pretty impressive here, at least from the demos that I looked at from 11 Labs. Also, it now has integrated language detection that allows for seamless multilingual conversations, and the update

also introduces a built-in retrieval augmented generation, or RAG, system for companies to use, enabling voice agents to instantly access external knowledge bases while maintaining low latency and privacy, which is especially useful for industries like healthcare and customer support.
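To make that RAG idea concrete, here's a minimal sketch of the retrieval step a support-style voice agent performs before answering. This is purely illustrative Python with a toy keyword-overlap retriever and made-up knowledge base entries; 11 Labs' actual system uses vector embeddings and a latency-optimized store, and this is not their API:

```python
# Toy retrieval-augmented generation (RAG) step for a support agent.
# Illustrative only: real systems use embeddings, not keyword overlap,
# and the knowledge base entries here are invented examples.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Enterprise plans include batch outbound calling for surveys and alerts.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query; return the top k."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def build_prompt(user_utterance: str) -> str:
    """Ground the model's answer in retrieved context, not its weights alone."""
    context = "\n".join(retrieve(user_utterance, KNOWLEDGE_BASE))
    return f"Answer using only this context:\n{context}\n\nUser: {user_utterance}"

print(build_prompt("How long do refunds take"))
```

The key design point is that the answer gets grounded in retrieved documents at question time, so a company can update its knowledge base without retraining or redeploying the model.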

Multimodal communication is now also supported, so agents can interact via voice, text, or both. So yes, companies can go build these conversational AI agents and embed them pretty simply on their website, or you can use this as a traditional kind of phone operator, or phone operating system.
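Circling back to the turn-taking piece: the core problem is deciding whether a pause means "I'm finished" or "I'm thinking." Here's a deliberately oversimplified heuristic just to show the tradeoff. The cue words and millisecond thresholds are my own invented examples; the real 11 Labs model is a learned classifier over audio and text, not a hand-written rule like this:

```python
# Simplified end-of-turn detector: the silence threshold is stretched when the
# last words suggest the speaker is mid-thought. Cue words and thresholds are
# invented for illustration; production models learn this from data.
HESITATION_CUES = {"um", "uh", "so", "and", "let", "me", "hold"}

def end_of_turn(last_words: list[str], silence_ms: int) -> bool:
    """Return True if the agent should start responding."""
    threshold = 700  # ms of silence before we normally take the turn
    if last_words and last_words[-1].lower().strip(",.") in HESITATION_CUES:
        threshold = 2000  # speaker sounds mid-thought: wait longer
    return silence_ms >= threshold

# A trailing "um" makes the agent wait instead of interrupting.
print(end_of_turn(["give", "me", "one", "second", "um"], 900))  # False
print(end_of_turn(["that", "is", "all"], 900))                  # True
```

Even this toy version shows why the problem is hard: any fixed threshold either interrupts thinkers or adds latency for everyone, which is why a learned turn-taking model is the noteworthy part of the update.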

So right now, enterprises can automate large-scale outreach with batch outbound calling, allowing multiple calls to be made at once for surveys, alerts, or personalized messages. Subscription prices range from a very limited free tier with limited minutes to a business plan at more than $1,300 a month. So it just depends

on how often or how much you're going to be using it. Those are the base plans. If you're a true enterprise and you want to scale this out across something like a call center, it's going to be way more than $1,300 a month; those are just the base business plans. All right. Our next piece of AI news.

A job apocalypse is coming. Well, at least according to Anthropic CEO Dario Amodei, who warned that AI could eliminate up to 50% of all entry-level white-collar jobs within five years.

So the Anthropic CEO made a short little media tour last week, shortly after their Claude 4 models were announced, and, you know, kind of sounded the alarm on, hey, this powerful AI that we're all building, it's going to take a lot of jobs. So Amodei predicted that US unemployment could spike to 10 to 20% due to AI-driven automation, a dramatic jump from the current

4.2% unemployment rate. So he says the threat to white-collar jobs is not being acknowledged by society or lawmakers, and that the impacts will hit faster than most expect. So

A lot more on this story; actually, I'll just go ahead and tease it for tomorrow. So I won't spend any more time talking about this, because we're going to do our Hot Take Tuesday on exactly this, because I think this one needs a little bit more exploring. And I have some takes on this, particularly the timing of this whole little media tour. All right.

All right. So make sure to tune in tomorrow as we go over that. And last but not least on our AI News That Matters: according to reports from Bloomberg reporter Mark Gurman, Apple's AI conference, or sorry, Apple's WWDC conference, is not really going to be about AI this year. And that's probably for the best. So.

At WWDC 2025, Apple is expected to reveal only minimal advancements in artificial intelligence, a full year after going all in on AI and falling flat on their face. So this is, according to reports, a signal that Apple realizes they are very far behind in AI. And also, they've been facing a lot of class action lawsuits, right? Because of all of these AI features. They even went

so bold, my gosh, as to try to rebrand AI and call it Apple Intelligence last year at their Worldwide Developers Conference. They hyped up all this AI that was going to come out to iPhones, they even had a bunch of marketing commercials, and now they're facing a lot of class action lawsuits because most of what they announced and marketed never came to fruition. So

According to this report, the biggest AI news expected, and this is next week, June 9th. So one week from today is the kickoff of Apple's WWDC conference. So the biggest AI news will be Apple opening its on-device foundation models, which have around 3 billion parameters to third-party developers. So it doesn't look like there's going to be a lot of AI announcements, which is probably for the best because Apple is...

I can't wait for someone to make the movie about how this is probably one of the biggest failures in modern business history, right? Apple's inability to successfully roll out any really usable piece of artificial intelligence when they are multiple years behind their biggest competitors like Google,

Samsung on the device side, Microsoft, right? They are so far behind, it is laughable. So, a couple more things from this report. Apple plans to introduce several smaller AI-related features in their new operating system, which they're also renaming. It'll be called iOS 26 now. So they're aligning

the operating system numbers to the years. Apparently they spent millions of dollars to work with a consultancy, and that's what they got, right? Instead of, I don't know, we're on like iOS 17 or 18 right now, and similarly on the macOS side. So instead they're just going to change it to the year that it's released. Anyway, Apple does plan to introduce several AI-related features in the next operating system, including a new

AI-powered battery management mode, a revamped Translate app integrated with AirPods and Siri, and labeling some app features in Safari and Photos as AI-powered.

So Gurman, the famed Bloomberg reporter who gets this right almost every single time and is the one breaking all the Apple news, has described this as a gap year for Apple, which is hilarious, right? Like, yeah, we're going to go ahead and sit this one out. Compare anything Apple could announce against what was announced two weeks ago

by Microsoft and Google at their respective Build and I/O conferences. If Apple tried to do what they did last year with AI, they would get laughed at and their stock would go in the tank, I would expect. Right? I'm not your financial analyst, but I would expect that Apple's stock is not going to be looking great for the next month or two because of this kind of gap year. So,

I love that this is how it's being described. But Apple is, according to reports, actively developing more advanced AI projects, including a large language model version of Siri, a redesigned Shortcuts app, Project Mulberry focused on health, and a ChatGPT-like competitor with web search.

But for the most part, what we're going to see is this: Apple has their own edge AI models, plus larger variations of their own in-house large language model that they built, and they're essentially going to be opening the on-device models up to third-party developers. So developers can take advantage of the on-device AI in their apps, which in the end might actually be better than Apple trying to do it themselves, because they've failed, and they've failed miserably.
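For a sense of why a roughly 3-billion-parameter model is the size that fits on a phone, here's a quick back-of-the-envelope memory estimate. The precision levels below are my own illustrative assumptions, not Apple's published specs:

```python
# Rough weights-only memory footprint of a 3B-parameter model at common
# precisions. Illustrative estimates only (ignores KV cache and activations);
# these are not Apple's published figures.
PARAMS = 3_000_000_000

def footprint_gb(bits_per_weight: int) -> float:
    """Approximate weights-only memory in gigabytes."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{footprint_gb(bits):.1f} GB")
# 16-bit: ~6.0 GB, 8-bit: ~3.0 GB, 4-bit: ~1.5 GB
```

At aggressive quantization, a model that size lands around a gigabyte or two of weights, which is roughly the budget a flagship phone can spare, while frontier cloud models are orders of magnitude larger.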

That is a wrap. Alison just saying, "Seriously, Apple?" Yeah, seriously. Big Bogey here on YouTube just throwing out "Apple Intelligence" with a bunch of facepalm emojis. That's saying it nicely. All right, that is our recap of what's happened, but we've got a new little segment. Hey, if you're still here in the live stream,

Just drop a yes or no if you want this little segment to be tagged on at the end of our future AI News That Matters, right? So I'm just calling this Rumors and What's Next. All right, so live stream audience takes two seconds. You can even just say a Y or an N. Do you want to hear this? So these are some rumors and what's next? What you should be expecting in large language model developments.

So a lot of these things could happen as soon as this week, a lot of them are expected to happen this week, or they might happen later in June. But here we go. OpenAI's o3

Pro could be announced very soon. Perplexity's Comet browser that we already talked about, which kind of doubles as a computer-use agent, may be getting a wider release; it already started rolling out last week to people who were very early on the waitlist, so it should be rolling out to everyone else soon.

OpenAI's GPTs, which have been pretty much widely ignored for the last year and a half, could finally be updated and get new features, as well as being able to use the o3 model, which I can't emphasize enough: that could be some of the biggest large language model news in like six months if OpenAI actually updates their GPTs. Could be huge.

Grok, sorry, Grok 3.5 could be coming out any day now, but this is also after Elon Musk has been saying it's almost here for multiple weeks; we're seeing some reports that it could be rolled out within the next week or two. Claude may be getting an

Artifacts studio, which is an easier way to save your different Claude Artifacts generations, as well as kind of an inspiration gallery where you can see and learn from other people using the Claude Artifacts feature. And, this has been confirmed by Logan Kilpatrick from Google, we will be seeing a new version of Google Gemini 2.5 Pro within two weeks. I would assume that any benchmarks that

Anthropic was able to achieve with Claude 4 Opus or Claude 4 Sonnet, many of those are going to be absolutely erased with this new version of Gemini 2.5 Pro. Sorry, Anthropic, you're not going to be able to keep up with Google. Not this new Google. Sorry. And then, like we said, next week we will have WWDC happening on June 9th, and this is going to be a nothing burger for AI.

So we won't be able to cover that live next week on the show, because it's actually going to happen about three to four hours after the live stream, but we will be covering it later in the week, even though it's going to be about nothing. All right. So that is a quick recap of what's going on in the world of AI news. So again, we had Perplexity AI launching Labs, their new agentic mode, or tool.

Hugging Face launching their open source robot, which cost $3,000, not $300, but I think it is still going to be extremely big news regardless for robotics and humanoids. Google has rolled out two pretty new, pretty useful features inside Google Workspace, both auto-summarizing long emails in Gmail, as well as being able to summarize videos inside Google Drive. But

Google Workspace admins do have to enable those for paid users. We had a pretty bad look for the federal government, as reports showed they kind of hallucinated, or fabricated, a bunch of things in a major US health report. It was not a good look.

Amazon and the New York Times struck a partnership to bring AI content to Amazon's AI platforms, including Alexa. Please get that going sooner rather than later. We saw some new creative labs and features come out, both from Microsoft and

Black Forest Labs, with their new image generator suite, which I think is on par with or better than the Gemini image editing and the OpenAI image editing. We saw a pretty big update from 11 Labs launching their Conversational AI 2.0, with some major upgrades for enterprise voice agents.

The Anthropic CEO warned that AI could eliminate half of entry-level white-collar jobs within five years, the AI jobs apocalypse. We're going to be talking about that one tomorrow on Hot Take Tuesday, so make sure to join us. And then last but not least, according to reports, Apple is going to release essentially nothing in AI at their WWDC conference next week, and it's going to be an AI gap year. All right.

All right, I hope this was helpful. If so, please go to youreverydayai.com. Sign up for the free daily newsletter where we're going to be recapping these stories as well as keeping you up to date with everything else happening in the world of AI. If this was helpful...

Don't be a jerk. Share this with someone, right? Click that repost if you're listening here on LinkedIn. We'd appreciate that. Or on Twitter, tell someone about this. Even if you think everyday AI is your little secret, you can't keep it to yourself. Share with others. Like the whole point of why I do this, why I keep it free, why I do it every day is because I know artificial intelligence and generative AI is extremely hard to keep up with. And when we talk about job displacement and all of these things,

Everyone needs access to free, unbiased generative AI education. That's what I do. I do it all for you. So please return the favor by sharing this with your friends. If you're listening on the podcast, I'd appreciate it if you could follow or subscribe to the show and leave a review. Thank you for tuning in. Please join us tomorrow and every day for more Everyday AI. Thanks, y'all.

And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.