EP 510: OpenAI's o3 Use Cases - How to use the world’s new most powerful LLM at your company

2025/4/23

Everyday AI Podcast – An AI and ChatGPT Podcast

People
Jordan Wilson
An experienced digital strategist and host of the Everyday AI podcast, focused on helping everyday people use AI to advance their careers.
Topics
Today I'm covering OpenAI's newly released o3 model and how to use what I consider the world's most powerful large language model at your company. Unlike traditional large language models, o3 is agentic: it can think independently and use multiple tools in sequence, which gives it enormous potential. With earlier AI applications we were often limited by the technology itself; o3 breaks through that limitation, and its uses are limited only by our imagination. We can use it to redefine how we work, and that is not an exaggeration. o3 can handle a wide range of tasks, such as PDF transcription, data analysis, market research, and sentiment analysis. It can access information on the web, run Python code, and create interactive dashboards, which lets it efficiently complete many tasks that previously took a great deal of time and labor. I demonstrate o3's capabilities through several real examples: it accurately transcribes complex PDF documents, even ones containing images and non-text content; it retrieves the latest information from the web, summarizes it, and analyzes trends; it processes multiple CSV files, runs data analysis with Python, and, based on the results, offers business growth recommendations and new podcast topic ideas; it builds interactive dashboards using canvas mode; it automatically generates AI news and Fresh Finds content from provided examples; it identifies a restaurant from a photo and provides related information; and it even attempts to create a video, although that last demo did not succeed because of some technical issues. In short, o3 marks an important milestone in the development of large language models. It can think, plan, and execute tasks independently, and that will dramatically change how we work.


Transcript


This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. The world's most powerful large language model is not only here, it's sitting there, waiting for you to go in there and grow your business and grow your career.

And it's agentic. Yeah. A large language model is agentic. I'm going to talk about

what that means. And we're going to go over OpenAI's surprisingly impressive new O3 model and talk about how to use what I think is now the world's most powerful large language model at your company and just talk about some different business use cases. And I think today's episode is actually a really important one because we're going to see, hopefully live, a

what I would call a seismic step forward in terms of what we can use large language models for. Because I think previously, when it comes to AI, specifically working with front end large language models, right? You log into chatgpt.com, you log into, you know, gemini.google.com, claude.ai, right? Using these front end large language models. I think we've always been limited by the technology.

But this is the first time that I've used a large language model where I've felt not limited by the technology, but only limited by my own imagination. And when I say imagination, I'm not talking about, oh, let me go create some cutesy photo to go viral on social media. No, I'm talking about redefining how we all can work. It's not hyperbole.

we're going to be talking about it today on everyday AI. What's going on y'all? My name is Jordan Wilson. I'm the host of everyday AI. This thing is for you. This is your daily live stream podcast and free daily newsletter, helping us all not just learn AI, but how we can leverage it to grow our companies and to grow our careers. So

If that sounds like what you're trying to do, you need to go to our website. So it starts here on this podcast and live stream, and that's where you learn, but where you actually leverage it, that's on our website, youreverydayai.com. So there you can sign up for our free daily newsletter. We're going to be recapping today's conversation and keeping you up to date with everything else that you need to know to not just keep up,

but how to get ahead and how you can be the smartest person in AI at your company. All right. Normally we start off most days by going over the AI news. If you want that, that's going to be in the newsletter. We literally have too much to get to. I don't want this to accidentally be a 90 minute podcast. I'm going to try to go quick, deliver as much value as possible in a very short ish amount of time. All right.

So let's get straight into it. Livestream audience, love this. Man, I love before I hit live at 7.30 a.m. Central Standard Time. I love when people are already kind of in the waiting room leaving comments. So shout out and good morning or good afternoon, depending on where you are, to Sandra and Arvin and Michael, Big Bogey Face, Kyle, everyone joining on the YouTube machine this morning. Brian, Aiden, Nathan, Michelle,

Michelle Hector, too many people to name. Thanks for tuning in. So yeah, if you listen on the podcast, normally we do this thing live. It's fun. You know, so if you have ideas or comments, something you might want to see when we're going over o3 today live, get it in the comments now. So first of all, what the heck is new? What is OpenAI's o3 model? Well, OpenAI last week actually published

dropped, let me count them, six different models, right? Three of them were just for the API. So if you log into chatgpt.com, you're not going to see GPT-4.1, 4.1 mini, and what was the other one? 4.1 nano. But you will see three new thinking models that OpenAI released a couple of days ago. Now it's been about a week. So those are o3,

o4-mini and o4-mini-high. Yes, I know the naming is confusing. Yes, there's technically three different tiers of these thinking models, right? So without going into too much detail,

Probably the model that most of you are used to using is something called GPT-4o. So that's kind of a quote unquote old school transformer model. So now you have this new series from OpenAI called the o-series models. So yeah, unfortunately now it's very confusing because you have an o1 depending on what your paid plan is, right? So yeah, if you have a ChatGPT Plus plan, if you go into your model selector, you'll probably see o1. If you're on a...

$200 a month plan like I am, you'll see o1 pro. But now you also have o3, and some people are calling it o3 full or o3 high, but it's just going to say o3. All right. And then you also have o4-mini and o4-mini-high. So these are essentially the o-series. These are models that kind of use this chain of thought thinking or reasoning under the hood. So you have your old school transformer models like your GPT-4o that are more just, we'll say, super advanced

you know, auto-completes, right? To oversimplify things. The o-series models, they think, right? They reason kind of like a human, and you can look at the chain of thought, or at least the summarized chain of thought, you know, as you give these o-series models a prompt.

Okay. Uh, and that's important to talk about because generally they are going to take a little longer. Uh, so, you know, you also have to think, when should I be using a GPT-4o model versus, you know, when should I be using some of these o-series models? So here's what's new. All right. The biggest thing, and we're going to mainly be talking about o3 today, because I think this is the, uh,

to say groundbreaking might not be doing it justice. I would say this is the category-changing model, uh, right. We've, we've had reasoning models. Uh, we've had, you know, quote unquote, old school transformer models, and we have great hybrid models as well. As an example, Gemini 2.5 Pro, uh, from Google, uh, Claude 3.7 Sonnet, uh, from Anthropic. So you have these good, uh,

hybrid models that are both old school transformers and quote unquote new school reasoners, thinkers, right?

But this one is completely bonkers: o3. Okay, so what's different, what's new? So it's capable of using all tools. So what that means: o3 can use web search, Python, you can upload your files, it can use visual input reasoning, it can generate images, it can use this canvas feature. It just...

The tool usage is nuts, right? Because previous O series models, I talked about this briefly yesterday. Yes, this is part two of our show. We did part one yesterday. So if you want to know more of the specs on the model, you can go check that out.

But the O series models previously did not all have access to all of these tools. Some of the different O series models couldn't even get online. So O3 is the first fully capable model that has every single tool under ChatGPT's kind of tool belt, which when we talk about agentic AI, that's ultimately one of the big steps, right?

a large language model or an AI tool needs to be actually agentic, right? For it to have agency, for it to execute tasks on your behalf, right? So it's not fully, you know, an agent, right? But I will say this is the first agentic model

that I've used, and it's a huge step forward. So it's, it's trained to autonomously decide when and how to use these tools, responding with rich answers, typically in under one minute. Uh, and if you do have a paid plan, uh, to ChatGPT Plus, uh, it's available now. So, um,

It's available immediately as well as on the API. So usage is a little different. So like I said, if you're on the pricier $200 a month Pro plan, you have nearly unlimited usage. So that's a good thing. I don't have to worry, you know, when demonstrating these things, about running out, 'cause I have, you know, pretty much unlimited usage. If you are on the normal $20 a month

ChatGPT Plus, or a Pro, or, sorry, ChatGPT Plus, uh, Teams, Enterprise, etc., uh, you get 50 messages a week of o3. Uh, for o4-mini-high, which is the next best model out of this series, uh, you get 50 a day, right? So, uh, I did some testing. When we're doing these live ones, we're only going to be using o3. Uh, o4-mini-high, like I said, that is the next best and next most impressive model, uh, and it's still

really good. So at least, you know, even if you're only on that $20 a month plan, you might want to think about, you know, saving up those o3 queries, those 50 a week. Uh, but the o4-mini-high, uh, should suffice for many of your use cases. Uh, all right, let me just say this because people are going to be asking, like, hey Jordan, didn't you just tell us two weeks ago that this Google Gemini 2.5 was the best model in the world? Yes. Two weeks ago it was, uh, today it's

I don't think it is. So talked about this yesterday. Best depends on what you need it for. I will say that the Google Gemini 2.5 Pro is probably the most flexible model with potentially the most utility. But when it comes to the most powerful model, and at least for me, that's like the model that's best.

best, it is this new o3. So if you look at third-party benchmarks, which we talked about yesterday, as an example, LiveBench. So, you know, third-party benchmarks that are, you know, unbiased, they take into account a lot of different things. On LiveBench, a good third-party benchmarking methodology, o3 high is pretty far ahead of Gemini 2.5 Pro.

Similarly, on the Artificial Analysis Intelligence Index, so they haven't done o3 yet, but even o4-mini-high is scoring higher on that versus Gemini 2.5 Pro.

So, you know, that goes across seven different evaluations, covering some, you know, very famous and common benchmarks like MMLU-Pro, GPQA Diamond, Humanity's Last Exam, LiveCodeBench, SciCode, AIME, MATH-500, right? So when it comes to your standard AI benchmarks, o3-mini-high and...

Or sorry, o3, o3 full, o3 full, not o3-mini-high. That's gone now. So o3 full is by far the best-benched model, and even o4-mini-high on the Artificial Analysis Intelligence Index. All right. So before we get started, we're going to be doing these all live. Keep in mind, if you're listening on the podcast, please go watch the video. I think it's going to be way more impressive. I'm going to do my best to describe what is going on, going on screen.

Going on on screen. Yeah, that's correct. Words escape me sometimes early in the morning when I'm slightly underslept and before the second cup of Nespresso has smacked me in the face. This is going to be one of those that's best watched. So if you are listening on the podcast, always check your show notes. We leave links. So even on our website, right? Youreverydayai.com. You can go to the episodes page. Click today's episode; it should be up

in like 30 minutes after we're done with this live stream. You can watch the video there. You can also listen to it on the podcast, but you can watch the video. Live demos are extremely glitchy. Keep that in mind, right? So even when we did the Gemini 2.5 Pro demo two weeks ago, we were getting some weird hallucinations. I was asking for, you know, certain things about Chicago. And it's, you know, when you're looking at the reasoning on Gemini 2.5 Pro, it's like, oh, you're asking about Easter weekend. And it's like, no, no, I'm not. So,

Keep in mind, live demos, never good to do, especially with generative AI, considering generative AI is generative. So even if you were to run the exact same prompts I have with the exact same information, you're probably going to get something slightly different each and every time. That's because generative AI is generative. It is not deterministic. And like I said so far, this is one of the most impressive pieces of technology I've ever used, right? I've literally used thousands of pieces of software, uh,

I would say even thousands of pieces of AI software, but at least, you know, a thousand, more than a thousand pieces of AI software. I've used thousands of pieces of software over the last 10 to 20 years. This by far is probably the most impressive, single most impressive, maybe. All right. So let's look live.

Uh, so let's get after it. Uh, live stream audience. If you do have, uh, ideas, suggestions, uh, please get them in now. All right. Let's hope I can get this, uh, get this going correctly. Let me, uh, share my window. And if you could live stream audience, be so kind to let me know like, yeah, Jordan, we can see what's on your screen. Uh, all right. So

Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.

Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,

or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI. All right, hopefully that's up. Let me see if I can get my big fat head out of the way here.

Nope. All right. There we go. Good enough there. All right.

All right, cool. Thank you. Thank you, YouTube crew. You said you can see the screen. All right, so here's what we're going to be doing. We're going to be doing this live. We're going to be doing it fast because I want to get to as many as possible. And some of these are similar prompts that I ran when we did our Gemini 2.5 show. And I want you to also think everything I'm doing here is an example, but think of how you can use this in your business. These are real business use cases. All right, something simple.

that I always like to do, because this is a technology that previously was not very good in all large language models and it's gotten progressively better. All right, so what I'm gonna do now is I'm uploading a PDF. All right, so this is actually, let's see here. Let me get the right one. That's not the right one. All right, this is the right one. All right, so I am uploading a PDF, all right? And I am in o3 and all I'm saying is transcribe this word for word.

All right. And I'm going to zoom in here on my screen. So essentially this is, uh, an advertising deck. A lot of people reach out and they want to advertise, uh, with everyday AI. Uh, most of the times I don't like bringing on advertisers, uh, but sometimes, you know, when I feel they're a good fit for our audience, I'll send them this. So I can, uh,

look, right? So always look at this kind of chain of thought so you can expand it and look so I can see what this is doing. So keep in mind, keep in mind, and I'll go ahead and hopefully share this here on my screen so you can see. This PDF, it's not like plain text, okay? It is multimodal to

to the extreme, right? A lot of this was created like in Canva, right? So it's flat images. It's things that a large language model should not be able to see. And six months ago, no large language model could do this. So like as an example, I have a page here with all these stats, but there's images, there's all these logos here at the bottom, you know, it says like trusted by leaders from. So these are all people that listen to our podcast, read our newsletter, et cetera. So it's a bunch of these logos. So this is,

I mean, when you talk about traditional OCR technology that can, quote unquote, read a PDF, when you talk about computer vision, the technology six months ago would greatly struggle to go through and transcribe this, because a lot of it is not even text boxes. There's screenshots. There's, you know, like...

like JPEGs, there's PNGs. This is like mixed media. This is not something that any tool should be able to read. All right, so it's going a little slow. So that's one thing that I'll say right away: when I did this in Gemini 2.5 Pro, it was extremely fast. All right, so it looks like it's done, uh, so let's see how it did. But think how many times, like, and I've, like, a lot of these are my own, like, uh

personal use cases over the years. It's like, oh, I, you know, created a deck for something or here's an old PDF. I don't know where it's from. I don't know where the original file is and I need to update it. So you might, you know, try to copy and paste everything, reformat, not everything copy and paste over, right? It's something that probably a lot of you

you know, are doing, like, you know, looking at old PDFs, or just maybe trying to consume them faster, trying to have a conversation with files important to your industry. So I go through here, uh, it did a great job. So I can go through here. You can also click on the chain of thought. Always do that. If you want to become better at using large language models, click on the chain of thought. So it says: pulling in details to give the best answer. The user is requesting a word-for-word transcription of a PDF document, but

the document consists of images rather than accessible text. This means I can't simply read the text directly, since file search... So I can see the different tools that o3 is trying to use to accomplish this. It says: file search didn't detect any text in the images. My next best option is to manually transcribe using image reading or perhaps OCR. Still, this might be time-consuming, especially given there are eight pages of content.

right? So you can just really see how it does this. So I go through page one, got everything correct. Page two, there we go. There's our, you know, 180,000, uh, you know, uh, monthly listeners on the podcast. There we go. Here's something super impressive. Uh, the trusted by leaders from, and Gemini got this correct as well. These are just logos, right? These are just

Random, look, not random logos. I mean, they're logos that mostly everyone would know, but o3 got it right. Right. It said, okay: Google, Amazon, NVIDIA, Microsoft. It even got all the logos right. So extremely, extremely impressive job here. So let's just see if it got everything

truly. Page three, we had some testimonials; it got those correct. Even, like, right, there's the five stars, right? So these are actually from our, uh, website, and people go and they leave a star rating and they can leave a review, uh, and it got it correct. Very, very impressive. Um, there's our page four, got everything correct. Daily newsletter, it's kind of our different, um, you know, page four, uh, right here, it's kind of our different, uh, stats across different platforms.

Sponsorship options, there we go. It did a good job. So even, like, right, so in our daily sponsorship option, not everything is included, right? And there's a little X here. And it even went through and assigned

copy-and-paste emojis to these little design elements. So a check mark or an X, and this is all plain text. I can copy and paste this. Extremely impressive. And okay, this is great. I think Gemini 2.5 did this as well. I have a little chart at the end here, just kind of comparing the Everyday AI podcast to some other podcasts and some other newsletters. And it literally recreated a chart that I could copy and paste.
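
For reference, the "old way" being described here, rasterizing an image-only PDF and running plain OCR over it, is roughly the sketch below. It assumes pdf2image and pytesseract (plus the poppler and tesseract binaries) are installed, and the file name is just a placeholder; it gets you raw text at best, with no understanding of logos, charts, or layout.

```python
# A minimal sketch of a plain OCR pass over an image-only PDF (the pre-LLM approach).
# Assumes poppler and tesseract are installed locally; "media_kit.pdf" is a placeholder name.
from pdf2image import convert_from_path
import pytesseract

pages = convert_from_path("media_kit.pdf", dpi=300)   # render each PDF page as an image
for i, page in enumerate(pages, start=1):
    text = pytesseract.image_to_string(page)           # raw OCR: no logos, no layout, no charts
    print(f"--- page {i} ---")
    print(text)
```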

Livestream audience, what do you think on this first use case? Very impressive. So a little slower than Gemini 2.5 Pro, but in the end, I care about for this thing, I care about accuracy and it's a rather complex task. All right, let's go next. And just for our podcast audience, each time I am opening a new chat, so I don't unintentionally work with the same context window.

All right. Here's one that's probably going to be difficult. Uh, and Google Gemini 2.5 struggled with this one a little bit. All right. So here's what I'm doing. Uh, live stream audience, like I said, if you have any ideas, if you have any prompts you want me to run, uh, you know, thank you. Like Joe, as an example, he wrote an o3 test prompt. Uh, right. Uh, go

Go ahead, say something like that so I know and I can kind of run it at the end. Or if not, if we run out of time, I'll do it in our newsletter. All right, so here we go. This next one, I'm saying, find the 20 latest episodes of the Everyday AI podcast by Jordan Wilson and give me a brief summary of each one. Then find five trends between episodes.

Pretty hard task, okay? Think of all the different types of research that you might need to do. And this is, in theory, up-to-date research. And this is something, like, you always need to stress test different large language models, specifically how they connect to the internet, right? Because sometimes they work with cached

pages. So, you know, our website is obviously extremely up to date, right? We update it every single day. So even, you know, six to nine months ago, we did a similar prompt like this across the five major internet connected AI large language models. None of them got it a hundred percent right. Some did better than others, but here we go. I can kind of see, and this is where we get into this agentic, right? Because step-by-step it's going through,

This is very impressive. So first it's breaking it down internally and it says, I need to gather the most recent episode titles of everyday AI podcasts. So then it does a couple of searches on the web. It looks like it goes, it does three different or sorry, two different broad searches. And then it went to about seven different websites. Then it identified where my actual website was after looking at all those websites, then

It navigated to the episode page and it tried to find the most recent episode. Then it took a break and it's starting to reason and logic. And it says, okay, it looks like the previous episodes are all relatively easy to retrieve, but I'll need to target episodes like episode 503 and higher for more complete lists. I'll begin by focusing on the most recent episodes, um,

right? 503, 502, working backwards in search for those specific numbers. Then it goes back to searching the web. Then it goes back to thinking again. It goes back and searches the web three more times, right? So normally when you're using even Gemini 2.5 Pro, it generally goes out in the web in one batch. So it thinks it's like, okay, I need to go to the web. It goes to the web.

It's done. It comes back and it wraps its thinking up. This one goes back and forth between thinking and the web, which might not seem super impressive. But when we talk about, and we're going to have some examples of that, when we talk about using multiple tools, that's when you're going to look at this tool use and you're gonna be like, oh yeah, this is freaking gigantic. This is wild. All right. So this one, let's scroll down to the bottom.

Crushed it. Super impressive. Super impressive. All right. So it got it right. So here's literally yesterday's episode. So that's good. It's not working off, you know, old cached websites. So no matter what you're working on, O3 can go and find literally up to the minute correct information online because it has yesterday's episode, which is 509, which is part one. So today's episode is 510. So part one, here we go. It says it has the title and it has the air date.

which is pretty impressive because, like, I'm thinking here, y'all, I don't even know if I have the air date. Okay. I do. It's not easy to find, right? It's super small text. So it actually did a great job at finding the episode number, the air date, the full title. And then it has a one-sentence summary, including a link right there. Very impressive. Right. And let's see, I'm going to go down to one of these other episodes, make sure it's not making anything up, that it got it right.

All right. So let's look at 499: ChatGPT's new GPT-4o image gen, five best business use cases. It says: demos how GPT-4o's pixel-perfect generation boosts product prototyping, ads and training assets. Those are all three things I covered. No hallucinations. Very impressive. Now let's go see, did it identify five cross-episode trends? All right. So trend one, model release deep dives dominate. Yep.

The last 10 episodes, I've been doing a lot of that. Trend two, Monday AI News That Matters bursts. Yup. It went, sorry. Oh no. Did I ask it for the last 20? What did I ask it for? Okay. I did say 20, which is probably better, right? So that's good. It identified that on Mondays we do an AI News That Matters, right? And it bundles multi-headline rundowns, creating a reliable start-of-the-week

news cadence. So it identified that trend, uh, business-first framing. So it said that there's some business-first framing in the last 20 episodes, uh, guest-driven authority. So it says that there's some recurring guest slots. We had someone from Scrunch AI and video startup founders, uh, Google, uh, et cetera. And then it also said infrastructure and cost focus growing.

Pretty impressive for a prompt that took a grand total of, I think it said a minute, no, two minutes and eight seconds. It would have taken a human hours to do that.

And I don't know if they would have done as good of a job. So what would you use this for? Like anything, right? So you can obviously use this for your own information, but talk about competitive insights. Talk about market research, right? To go find the most up-to-date information, you know, grab specifics from those things and then identify key trends. That's what a large language model is great at. Yet that's what so many of us knowledge workers do on an ongoing basis. All right.

This one, I think, is going to break ChatGPT. All right. And I didn't do this on Gemini 2.5 because I knew it probably couldn't handle it. Although maybe, hey, live stream audience, if you want to see like a head-to-head

maybe next week, just say head to head in the comments. If I need to do like an 03 versus Gemini 2.5 Pro, if that would be helpful, just say head to head. If you don't care, that's fine. I know some people like head to head. Some people don't. It doesn't matter. All right. So I'm going to upload some documents here. So give me a second. And then we're going to go over. I'm going to explain what's happening here. I don't think... Okay. That was so quick. Um...

Okay, so I don't think ChatGPT is going to handle this. I don't think a large language model can handle this. We'll see.

So what I did is I just said, these are my podcast stats. Okay. So I had two different CSV files that I uploaded. So yes, O3 can accept files and browse the web and use Python. And we'll maybe see some of these things happening. So you can already see it's using a ton of Python. I'm going to read the script here or the prompt I use here in a second, but you'll see it's already, uh,

analyzed the images I've uploaded. It's already running its own Python code to start making, right? So you know how as humans, if you had these giant spreadsheets, you would have to go in and probably run all these formulas, really try to manipulate the spreadsheets, spend a ton of time trying to massage the data, right? So it's running Python immediately, all right? And you'll see for our live stream audience, it's going between thinking

Using the internet and Python code async. It's all happening in real time, going back and forth. All right. So while that's loading, all right, I'm actually going to read the prompt. But for our live stream audience, I'll let you guys watch this.

watch the chain of thought a little bit here. All right, so here's my prompt. I said: these are my stats, these are my podcast stats, I just exported everything. All right, I said, give me... actually, you know what, before I tell ChatGPT what I am asking for, let me tell you what's in these two different files, these two different CSV files. So one is, uh, it's every single

podcast episode. So 509, there's an episode ID, a publish date, and then the downloads, all time downloads, last 90 days, last 30 days, last seven days, et cetera. All right. So that has, so 509 columns or 509 rows, eight columns, something like that. So that's

Good chunk, you know, 4,000 pieces of data here. This one's impressive. Stats location reports. All right. So this is our, you know, biggest cities by downloads. Uh, and this is a huge spreadsheet. There's this many cities in the world. Okay. So apparently the everyday AI show, um, we have listens or downloads from 22,898 unique cities. I honestly didn't know there were that many cities. Um,

Also, I didn't know this and I don't know why. So countries in the world,

Um, yeah, there's 195 recognized countries in the world, yet our podcast stats say we have listeners from like 202 countries, and I'm like, how is that even possible? Right. So I don't know. Maybe there's, uh, some, some countries that aren't, uh, globally recognized as countries by the United Nations. I'm not sure, but this other CSV that I uploaded has nearly 23,000 different cities. All right. And then there's quite a few, uh,

you know, rows in here. So it's city, state, country, and then the number of listens and number of downloads per all of those things. All right. It's still working. All right. I broke it. So it said a network error occurred.

So I'm going to click retry. We'll see if it can finish. And this might be something that I just check back on in a little bit. But what I'm asking it for is five obvious trends, five not so obvious trends, five ideas for growth based on my stats, 10 relevant episode topic ideas for new podcasts I haven't covered based on what's performed well and what's trending in April 2025. Y'all, let me just call out that one thing right there.

That's something bigger podcasts do, right? People are always like, oh, Jordan, how big is your team? I'm like, small, right? That's something that bigger podcasts that have budgets, right? Those from The Verge, the New York Times, they're maybe paying consultants like six figures to go do number four, right? Not every month, right? But

They're probably paying consultants to go through, analyze all their data, find trends, go find, you know, so you're not only having to essentially run some complex kind of

queries inside of a spreadsheet to find out what's actually trending, because it's not as easy as, like, sorting by downloads. It's not as easy as that, right? Because sometimes if an, uh, if an episode is, you know, less than 90 days old, those numbers are skewed. So I've done this before. It kind of creates its own algorithm to see, uh, you know, the virality, uh,

of certain episodes that are less than 30 days old and matching up different episodes by categories, et cetera. So although it's breaking, I do assume that if I clicked retry enough or if instead of five things, if I just asked for that one thing, 10 relevant episode topics, I think it would probably do it. So as this is going, I'm going to jump into a new window here.
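
To give a sense of the spreadsheet work being described, here is a rough pandas sketch of that kind of age-normalized "velocity" calculation. The file and column names are placeholders, not the actual stats export, so treat it as an illustration of the idea rather than a drop-in script.

```python
# Rough sketch: normalize episode downloads by age so week-old episodes can be
# compared fairly against two-year-old ones. File and column names are assumptions.
import pandas as pd

eps = pd.read_csv("episodes.csv", parse_dates=["publish_date"])

age_days = (pd.Timestamp.today() - eps["publish_date"]).dt.days.clip(lower=1)
eps["velocity"] = eps["all_time_downloads"] / age_days   # downloads per day since publish

print(eps.sort_values("velocity", ascending=False).head(10)[["title", "velocity"]])

# The location report: total downloads by country, top 10.
cities = pd.read_csv("location_report.csv")
print(cities.groupby("country")["downloads"].sum().nlargest(10))
```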

You know, we're really stress testing it here. All right. So I'm not sure if that'll work or not. Next one is going to be a fun one. All right. So here's what we're going to do. This is a super long one. Copy and paste in this one. I'm going to tell you, I'm going to tell you what's going on here. All right. So for this one, I am going to use canvas mode. All right. And there's a lot going on. So I'm starting it off by saying use canvas for this. Okay.

I'm gonna see if o3 can do my job better than me. All right, it's probably not best for a demo, uh, to be running two very complex, uh, extremely complex queries. Yeah, because I'm gonna get a request timeout, so I might have to either wait

Or, uh, or pause the second one. This is why y'all, oh man, sometimes, sometimes it's not good to do these things live. Uh, cause you know, if you're running too many queries from the same account, you might run into timeouts, but I want to really push the boundaries and, and, and think about, you know, what's, uh, what's possible, uh, what's not. So, all right. Yeah. Okay. Interesting. So it's still getting stuck on that.

I wish there was an option to pause because this the first one, the very complex one is almost done. All right. So stick with me. I'm going to start describing what I want to happen in this next demo. And then we're going to do it. And then we're going to do that one live. OK, so we're running into some issues.

All good. So the next one that I'm going to do, after this very deep dive on the podcast stats either works or fails, I'm going to give ChatGPT three examples of my daily AI newsletter, Everyday AI. I'm going to see if it can do my job better than me. All right. So, and then I'm saying, look at the AI news section and the Fresh Finds section. So if you read our newsletter, all right, let me just go ahead and bring up an example. So we have our kind of newsletter.

We have our news section here, generally, you know, five to seven of the biggest AI news stories. And you'll see they're kind of written in a specific way, about two to three sentences, you know, a headline that's hopefully helpful. And then also we have these Fresh Finds in our newsletter, which are generally just like quicker,

uh, tidbits. You know, sometimes, if it's a heavy AI news day, some of these Fresh Finds might be things that are generally, uh, you know, super, uh, super newsworthy, but on a busy news day, you know, it just gets, uh, you know, a little Fresh Finds...

So essentially, I'm pasting in three examples of my AI newsletter, so o3, the o3 model, can read it, analyze it and understand: here's how the Fresh Finds are written, here's how the AI news is written. Then I give it a Boolean URL, and I'll kind of show you all what that Boolean URL is.

And I talked about this on the Gemini 2.5 Pro episode. All this is, this is how I start my morning, right? So when I read the AI news, I essentially have a dedicated search string inside Google that only brings up news from the past hour for about a dozen or so companies that I care about, but it also has to have the word AI in it. So, you know, OpenAI, Apple, Nvidia, Microsoft, Amazon, et cetera, anything in the last hour, but also includes the word AI

AI and those. So, you know, this is something I care about, right? But think of what you can do. Y'all simple Boolean searches combined with large language models like O3, especially those that can reason and have tool use. Huge, huge hack. So essentially I'm like, yo, here's all the examples of my newsletter. Here's what I want you to do. Here's this Boolean URL. Go write a

Go write the AI news and the fresh finds for today, right? Go out and do my job better than me. All right, let's see. All right.
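
If you want to build a similar "Boolean URL" for your own watch list, here is a minimal sketch. The company list and query shape are just examples, and the tbm=nws and tbs=qdr:h parameters are the standard Google query tricks for news results limited to the past hour.

```python
# Minimal sketch of a Google news search URL: past-hour results mentioning "AI"
# plus any company on a watch list. The companies here are only an example list.
from urllib.parse import quote_plus

companies = ["OpenAI", "Apple", "NVIDIA", "Microsoft", "Amazon", "Google"]
query = "AI (" + " OR ".join(f'"{c}"' for c in companies) + ")"

# tbm=nws -> news results only; tbs=qdr:h -> limit to the past hour
url = f"https://www.google.com/search?q={quote_plus(query)}&tbm=nws&tbs=qdr:h"
print(url)
```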

Thanks for sticking with me, y'all. We're hitting rewind. The very complex query that I didn't think would work, worked. Okay. So now, before we go through and read that, I'm going to start this other one: hey, do my job, o3, go create a newsletter. All right. So now that one's working, and I'm going to jump back and let's see how long it took. I did have to restart this. And again, this is extremely impressive, because we're talking about

uh, I'd have to do the math here, but it's more than a hundred thousand rows of data. All right, so it only thought for a minute 44. I did have to click the retry button a couple of times.

All right, so let's go down. It said five obvious trends, right? So that was the first thing I asked for. So it said monthly downloads keep climbing. That's great, right? So it says our monthly downloads are going up 37, looks like 37% month over month. That's great. It says the AI news that matters episodes dominate. It says eight of the 20 most downloaded shows are our Monday shows.

It says evergreen how-to content crushes. That's cool. I kind of knew that, but it's good to know. It says OpenAI-centric titles outperform the average by 18%. That right there, that

requires a lot of, you know, either knowing how to work with data, knowing how to run different formulas inside Google Sheets, et cetera. And then it said, US accounts for 66% of all plays and Chicago is the single top city. Yeah, holding it down, Chicago. Thank you all. All right, five not so obvious trends. Number one, Friday drops get 9% more plays than other weekdays. Really? Huh.

Interesting. I always thought Friday was a bad day. Interesting. All right. So these are our hidden signals. Number two, Australia is the surprise number two market. So I actually knew that because I look at my stats a lot. Right. But yeah. Hey, shout out to Sydney and Melbourne. Yeah. Like the Everyday AI podcast is sometimes like a top

five tech podcasts in Australia, right? Where in the US we're normally like top 10, top 15. It's been like number five in Australia. So, you know, thank you, people in Australia, for listening. So it says AI agents keywords add a 50% lift.

I had no clue, right? So again, that's something I would have to run some complex formulas inside of Google Sheets or Excel in order to find that out. That's interesting. I didn't know. I should probably do more AI agent shows, right? And AI agents across different mediums, right? So here's another one. Episodes less than seven days old already sit at 80% of the 90-day average velocity. That's impressive. So I didn't know that. So essentially,

80% of the downloads that a podcast gets are going to happen in the first seven days, right? So if I wanted to, I would go back and have a conversation and I would say, hey, what about these more evergreen, right? Sort these evergreen episodes and run it against that 80% velocity, 80%, 90 day velocity, because, okay, well, if I should be focusing on more evergreen, but 80% of the downloads come in the first seven days, should I? But maybe there's an anomaly there with that evergreen content. All right.

It also says Europe's share is up three straight months despite no region-specific content. Okay, that's helpful. Cool. All right. Five growth ideas do...

have an official Friday insights slot. So it says schedule the highest-stakes episodes on Fridays to ride that 9% lift. Uh, it says do an Australia mini series. Okay, I could do that, having some Aussie-friendly, uh, local, uh, local stories. Uh, it says do a spinoff of an agent orchestrator monthly column, refresh and re-promote vintage evergreen hits, geo-personalization email teasers. All right. Uh, here we go. 10

episode topic ideas. All right. Uh, I have them on my screen, live stream audience. Uh, I'm not going to read them all. Let me know which one you want to see live stream audience. Uh, just say, uh, you know, episode two, episode seven. Uh, okay. So some, some good things here. So it actually went on. This is wild. Um,

It got a couple of things wrong. So it said WinServe instead of Windsurf, but it says I should do an OpenAI and Windsurf episode. Hey, it already said I should do a Gemini 2.5 Pro versus GPT-4o, small language models on device, NVIDIA Blackwell launch recap, Supreme Court's 2025 copyright decision. So it did a great job. All right, so enough of that one. We're going to jump into other ones, but looking at this information,

You really have to go through and look at the chain of thought to see how it did this, because it's extremely impressive. It thought internally, it analyzed some data, thought again, started using the web there to look at trends. So it went to pie chart, right? So

extremely impressive. And Oh yeah, it also created a dashboard. Let's see if this dashboard works. Uh, so I, I used the canvas feature in chat GPT. Oh, this is sweet. This is sweet. Um,

Okay. So here's all my different episodes. This one was a weird reporting error, uh, from, uh, Buzzsprout. It said it got 38,000 downloads. It didn't. Uh, but I can go through here and, uh, I, I have an interactive bar chart, which is super cool, right? Uh, just a better way to look at all my data. Here's my top countries, and it brings me an interactive bar chart and I can hover over. This is super sweet, super slick, right? So there's, there's a

uh, United Kingdom, Canada, Australia, right? Here's all my different, um, all my different downloads. Here's monthly trends. This is super sweet. Um,

Also very impressive that it went through and built out monthly trends. And again, so I'm hovering my cursor over this and it's bringing up the month and then the downloads in that month. And it's fairly, fairly accurate. Uh, it looks like not a hundred percent accurate. Um,

because of how some downloads are reported, right? Seven days, 30 days, et cetera. But I think these are just those episodes that launched those weeks. And then topic performance. This is pretty cool. So this is giving me average downloads. So I see, as an example, the average download for an OpenAI episode looks like about 4,800 versus a micro... Okay, so they're actually all pretty similar

versus the AI news is actually a little less. So, okay. Super, super impressive. All right. Now let's go back into my other one and let's see if O3 did my job better. So again, for this one, I gave it examples of my newsletter. I gave it a Boolean URL and I said, go write a newsletter for today. And then I also told it to make an interactive dashboard. So let's see how it did.

So again, are you seeing the agentic nature here in what this can do under the hood? So again, it looks like here is our canvas. I'm going to preview that here in a second. I want to scroll down and see what it did here. All right. Hey, for those of you that read our newsletter, does this look and read like it? This is pretty impressive. It did everything. Wow.

Okay, so I also want to make sure that all of this is up to date, because I said it has to be from the last 24 hours. If it's old, don't put it in. So, you know, let's say, let's see, number two here. So it says, ex-OpenAI staff call regulators to block for-profit flip.

Very good headline. It has the one trailing emoji at the end, which I didn't even tell it to do. Right. So it noticed these trends of how the newsletter is written. It did a good job. It looks like most of these recaps are two to three sentences, which is what we always try to do. So let me just read this one. It says former employees.

petitioned California and Delaware AGs to halt OpenAI's plan to merge its research nonprofit into a C-corp, arguing it betrays the original public benefit charter and concentrates power with investors. The filing amps up governance scrutiny just as OpenAI races to monetize its o-series models, right? And then it has the source there as well, so I can check. So if I click this,

That's correct. You know, I would go through and read this, but it is a new story from today and it looks like it is correct. So it actually did a very impressive job. There's our AI news recaps. Let's see if it got the fresh finds, fresh finds.

Fantastic, right? So here it is, these little shorter tidbits, right? It gave headlines, which is cool. So as an example, it says time plus all business launch and open AI dictionary and daily gen AI brief. And there I can hover over. That's from Yahoo Finance. That's from today. Does a really good job.

All right. So let's see if it actually created an interactive version of this newsletter. All right. So now again, I used canvas mode. I click preview. Okay. So pretty impressive. So my, my, my only gripe. So what I'm seeing here, it looks like an, let's see. Okay. It's actually interactive, which is really cool. There's a toggle for AI news and a toggle for fresh find. And it looks like all of our stories are there. The only thing, and I could go through and, um,

you know, work with this iteratively because it looks like, you know, I would change some of the colors. Like the headline color is a little difficult to read, but there's some nice hover animations, which is pretty cool. Also on the Fresh Finds side, same thing, a little hover animation. It says there's a filter, which I don't know how that would work.

But let's see, I doubt this filter would work. There's like a search bar. Let's see if it works. So I'm going to type in Nvidia because I know there's at least one story here with Nvidia. Wait, that's crazy. It actually worked. Okay. So I typed in Nvidia and only one thing showed up, just the Nvidia one. Oh, wow. Okay. Interesting. So I got rid of that. Everything's gone. Let me just type in OpenAI.

Same thing, just the one open AI story. Let me just type in the word AI. Okay. Just about everything because everything has the word AI in it. My gosh, podcast audience, I'm scratching. I'm scratching my head because very, very impressed. This literally just created essentially an interactive website of today's AI news and today's fresh finds in seconds. If this doesn't make you rethink work.

I don't know what will. I've been, you know, me and my team spend hours daily doing this and we'll still continue to do it the old human way, right? Because like I said on my 500th episode, I think one of the biggest things that we need to focus on as human workers, right? Because as we see these O3 models that are creepy good, like you have to like think of what your agency means now. And I think like, hey, at least for me, my agency is, you know, and this might sound like weird or pompous, like,

I think of myself as a tastemaker, right? Right now, I would hopefully probably have a little bit better taste, you know, going through dozens of AI news stories and then pulling out the five to seven based on what our audience wants. But I could share a bunch more data with ChatGPT and it could know, oh, hey, your audience cares about these 46 topics. So, wow.

Wow. Yeah. Giordi here from YouTube just says nods head approvingly. Michael says dashboard is incredible. Giordi said it's exactly like the newsletter. Jeez, man. Joe said looks like you've hired a new everyday AI podcast research assistant. Denny says the old human way. Yeah. Weird to say, but yeah. All right.

Oh my gosh, y'all. All right. I don't know. Is anyone else very impressed or is it just me? You know, I didn't have this level of being impressed with Gemini 2.5 Pro. Again, extremely capable. It was really, really good. This to me is, gosh, extremely good. All right. Let's do a couple of the other ones that we also did for the Gemini episode. All right. So here, super simple one.

And I'm going to see if I can run a couple of these at once. Again, I might break things, but we're already nearing 50 minutes and I have so many things that I wanted to do that we might not have time for. All right. So let me do a couple of these.

So first one, I said, uh, use canvas and create an HTML clone of Wikipedia, but give it heavy Chicago vibes, make it fully featured, including clickable links and multiple pages that work. Include the most important Chicago things. All right. Uh, so here we go. It's thinking under the hood, uh, presumably aside from writing some code, it is going to be, uh, it is, it's using the web as well. I see it, uh, actually

accessing some certain URLs here. All right, so it's going pretty quickly. And within 15 seconds, it's done. Let's see if it's any good. I'm gonna click preview. Okay, it didn't work. That's fine. Funny enough, I did this yesterday. One shot worked fine. So I'm just gonna say there's an error. Please fix. So I could actually use the built-in features

to fix this code. Maybe I'll do that. So I'm just going to say fix bugs, right? So in canvas, there's a bug. I just clicked fix bug. I'll give it one shot. Again, generative AI is generative. And obviously when I'm doing the live demos, things, things didn't work as well. But I did, I did a Chicago one last night and it worked really, really well. Let's see if I can pull it up just in case. Cause I liked it. It was fun. Let's see here. Yeah.

Chicago, Wikipedia. There we go. All right. So if this one doesn't work here in about 30 seconds, I'll go ahead and share this other one that I think turned out fairly well. So let's see if this, you know, fix code option fixed it. All right. It looks like it was adding some icons that didn't work. Okay. So let's see. Sometimes relaunching

relaunching a canvas inside ChatGPT is a little buggy, at least with o3. Yeah. So that's fine. Let me just go ahead and bring up the one that actually finished in one shot yesterday. Super impressive. This one, right? So here we have our Chicago-pedia. Let's expand this. Yeah. Oh, don't worry. It's fully mobile responsive. So I can click home. Let's see here.

Look at this. It brought in some images as well. So some of them didn't fully load. I think I would have to click allow all, but here we go. I mean, we have a working Chicago Wikipedia, right? Food, sports, landmarks, et cetera. There's also, let's see if it works interlinking. So I can click on this. I'm on the homepage here. I can click on this landmarks and then it takes me to the landmark page.

Pretty impressive. Pretty impressive, y'all. All right, let's keep going. I'm not going to have time to do all of these. So this other one that I did, let's see if it worked. Okay, it worked, but I'm going to have one follow-up prompt, make it better. So this one, I said...

Analyze the sentiment of online mentions of Apple over the past 30 days, identifying five recurring themes based on sentiment analysis. Provide actionable recommendations for Apple's PR team and create an interactive dashboard displaying your findings. Right. So it literally went out there. It found all the news stories over the last 30 days from Apple. And then it went through here. I both have a chart. Right.

So let me X out of this. So I have a chart that it created and it has links to all these stories. So it says the theme. So as an example,

Let's look at theme three, AI ambitions and marketing claims. So it says this is 42% negative, 30% positive and 28% neutral. So it did a sentiment analysis on this certain topic. And then it says watchdogs. So it says what people are saying, and then PR risk opportunity. So it says watchdogs say Apple oversold Apple Intelligence. 100%.

100% true. Timelines and delays fuel skepticism in AI press. And then it says medium risk: momentum is slipping versus the OpenAI and Gemini hype cycle. All right. And then I just followed up with one prompt and I said, make it better. All right. Even though the first one looked super impressive.

So let me open up this Apple sentiment analysis dashboard. My gosh, this thing looks fantastic. I don't know. Livestream audience. Is anyone impressed with this? And if you were around for the Google Gemini show, I did this as well. I think this dashboard is a little better, right? So I have this that shows the overall positive, negative,

neutral sentiment tone over the last 30 days. It's broken down by category and it's all, oh my gosh, it's interactive. So I can hover over what was more positive versus what was more negative. As an example, the most positive sentiment

Apple news story over the last 30 days was the iPhone 17 leak buzz, which was a 49% positive, 13% negative. And then conversely, the most negative thing was some regulatory scrutiny in the EU and the DMA. So that only had a 5% positive and an 80% negative. So this is great.
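
For what it's worth, the aggregation behind a dashboard like this is simple once the mentions are labeled; a toy pandas sketch, with placeholder column names and made-up rows rather than ChatGPT's actual output, looks something like this.

```python
# Toy sketch: share of positive / neutral / negative mentions per theme.
# The rows and column names are placeholders, not real Apple coverage data.
import pandas as pd

mentions = pd.DataFrame({
    "theme":     ["AI ambitions", "AI ambitions", "Regulatory scrutiny", "iPhone 17 leaks"],
    "sentiment": ["negative",     "positive",     "negative",            "positive"],
})

shares = (
    mentions.groupby("theme")["sentiment"]
    .value_counts(normalize=True)   # fraction of each sentiment within a theme
    .mul(100).round(1)
    .unstack(fill_value=0)          # one row per theme, one column per sentiment
)
print(shares)
```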

Talk about redefining how you work, right? Probably some of you business owners are probably paying some PR or brand management company five to six figures a month to do this.

And they're probably not doing this good of a job, right? And this is one prompt. Imagine if you actually fed in your own data, refined it a little bit, told it actually what to create. I just open-ended said, yo, go out, scrape 30 days of Apple news. And then I had one follow-up prompt that said, make it better. Oh my, this is so good. I can see over the last 30 days, the positive and negative results.

sentiment. If I wanted to, I could probably map Apple's stock price on this as well to see how much of an impact some of these negative and positive sentiments had on the stock price.

And then down here, there's an interactive, geez, this is so good. There's an interactive, essentially cards, right? So under the regulatory scrutiny, you know, it says positive 5%, neutral 15, negative 80%. And then it says PR playbook: double down on transparency,

publish a developer-centric compliance walkthrough, position privacy as a competitive edge, and proactively brief tech correspondents before the July 1 DMA enforcement date. What the frick, y'all. If you don't think this is, like, agentic, like, large language models... this is so, so impressive, right? So impressive. Um, all right, you know what, I, I had, I had a couple of more, um

But I, okay. I want to do two other things. I want to do two other things. All right. And I'm sorry, this is a longer episode. If you're still around, I don't know. Tell me, tell me your favorite pancake topping. Do you put anything on your pancakes? Recently I've been making my own pancakes and I told my wife like, why, why did I ever, you know, do pancakes out of the box? It kind of stinks. So recently been big on the blueberry pancakes. All right. All right. So

Let's see some impressive things. So I'm taking a screenshot of something, a screenshot of a random menu I found because I want you to see how impressive this is. So I just had, um, I'm going to say, find the restaurant and where it is. Okay. So again, I just have a random, uh, photo.

of this menu. There's no identifying characteristics. Okay. And not only that, there's no geolocation data, there's no EXIF data that would tell a large language model where this is. Okay. Because I did a screenshot of a photo that I texted myself of a screenshot.
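
If you want to run that same check yourself, a quick Pillow sketch like the one below will do it; the file name is a placeholder, and a screenshot will normally come back with no EXIF at all, while a photo straight off a phone usually carries a GPSInfo tag.

```python
# Quick check for location metadata in an image. "menu_screenshot.png" is a placeholder;
# screenshots typically have no EXIF, while original phone photos often include GPSInfo.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("menu_screenshot.png").getexif()

if not exif:
    print("No EXIF data at all")
else:
    for tag_id, value in exif.items():
        print(TAGS.get(tag_id, tag_id), ":", value)   # GPSInfo is the tag that gives away location
```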

So I checked, there's no identifying data. All we have is there's a fajita platter for, you know, $15.49, beef nachos $11.49, veggie rice bowl $10.49, mixed green salad $10.79, right? There's no other information. So let's see. This is, that's right, I didn't even know where it was. 'Cause I'm like, what's this menu? I had no clue. It found it in 39 seconds.

How? I have no clue. Right. You can go through and read. So it looks like it's breaking down. It's first using computer vision. It's it's identifying the different items, the different prices. And then it's looking at a bunch of different websites to see what single restaurant might have might have that combination. Right.

Oh my gosh, this is so interesting. It also found that, you know, oh, so the correct answer is this was from Disney World's Magic Kingdom Frontierland. And it also said like, oh, you know, those prices aren't exactly accurate because that pricing was only from 2019 to 2022. Y'all, how freaking impressive is that? I mean, number one, it's a little creepy, but think of all the business use cases, right?

I don't know. Let's say you have a field tech out there taking photos of, you know, I don't know, maybe you're a repair business owner and you have a home remodeling company, right? And you have, you know, your people out there in the field, they're taking photos of things, and you have tens, you know, 10 years of this data and all these random photos. And you're like, hey, where was this one from?

Or let's say you're on the commercial side, commercial real estate construction, right? And you have this exterior building photo and you're like, where was this from? What material was used? When was this project completed? Right? And you don't know. You might spend hours, because maybe it's extremely important for a bid that you're working on. You could probably find it out in minutes, especially if you have that EXIF or geolocation data still embedded in the photo.
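And if you want to check whether a given photo still carries that data before you rely on it, a minimal Pillow sketch like this one will tell you; the filename is hypothetical.

```python
# Minimal sketch: check a photo for EXIF capture date and GPS coordinates.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

image = Image.open("site_photo.jpg")  # hypothetical filename
exif = image.getexif()

# Map numeric EXIF tag IDs to readable names for the base tags
readable = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
print("Capture date:", readable.get("DateTime", "none found"))

# GPS info lives in its own sub-IFD (tag 0x8825); decode it if present
gps_ifd = exif.get_ifd(0x8825)
gps = {GPSTAGS.get(tag_id, tag_id): value for tag_id, value in gps_ifd.items()}
print("GPS data:", gps if gps else "none found")
```

If that GPS dictionary comes back empty, the location data was probably stripped somewhere along the way, like it was with my screenshot of a screenshot, and you're back to the menu-style detective work.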

By default, most photos have that data in there, so it could find it out immediately. But, that is creepy. Impressive, creepy, impressive. All right. Last but not least, let's see, I had one example. Where is it? Here we go. I saw people on Twitter trying some of these examples, and I think someone tried this one.

All I'm saying is: use every tool you have at your disposal and figure out how to make a movie of a cute dog at a beach. Okay. This is not Sora, okay? GPT-4o, or sorry, ChatGPT O3, does not have the ability to create a movie, right? When you talk about changing what's possible in your head for your business,

I literally just said, use every tool you have at your disposal and figure out how to create a movie of a cute dog at a beach. It does not have these capabilities. It can't create a movie. It can't, right? It's not Sora. It literally can't do this. I did a test earlier, and I'd hate to give away the ending, right? So let me just tell you what's kind of going on under the hood here. So it says:

First, I'll create a series of frames showing a cute dog running along the beach and playing with a ball. The frames will vary slightly to show movement. Each frame will have a slightly different pose. I'll produce at least 10 frames, then use Python tools to turn it into a GIF. Lastly, I'll give you instructions on how to watch the GIF and provide a download link. So literally what's happening here, and this is why this is fairly agentic:

O3 realized, hey, I can't really do that. I don't have a tool that can do that directly. But let me think for a minute, right? Let me do some research, let me think internally. It is taking a little while, but it looks like it's going to put it together. All right. So.
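While it churns, here's roughly what that frames-to-GIF step it described looks like in plain Python. This is a minimal Pillow sketch under the assumption that you already have a folder of numbered frame images; it's my illustration of the plan O3 laid out, not its actual internal code.

```python
# Minimal sketch: stitch a folder of numbered frames into an animated GIF.
# Assumes hypothetical files frames/frame_00.png, frames/frame_01.png, ...
from pathlib import Path
from PIL import Image

frame_paths = sorted(Path("frames").glob("frame_*.png"))
frames = [Image.open(p) for p in frame_paths]

# save_all + append_images writes every frame into one animated GIF;
# duration is milliseconds per frame, loop=0 means loop forever
frames[0].save(
    "dog_at_beach.gif",
    save_all=True,
    append_images=frames[1:],
    duration=150,
    loop=0,
)
```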

Okay. Here we go, livestream audience. It's the great reveal, right? This is like using the internet in 1993 when something's loading super slow. All right. So it says done. Where did it go? It was loading there. All right, I'm refreshing. It looks like it's still going. Sometimes it does this, right? I'm going to zoom out and try refreshing. I'm sure it's going to pop up later. Obviously it would do this with the last one that I'm trying to do here, y'all. I'm sorry, this has been such a long, rambling episode. Let me see if I can find the test one that I did, if this one doesn't finish in the next minute. Let's see here.

Okay. Obviously now I can't find it. Are you serious? Are you serious? I'm looking in another window here. Y'all, it would be much easier if I didn't use ChatGPT, like, a hundred times a day, which makes it a little hard to find things. All right, wait, here we go. I found my other one. Let's see.

Here we go. We're going to end this thing here, y'all. All right, here is the other one. Let's see. So I said, make me a movie that I can download that involves a puppy at the beach. Right, so same thing. Yeah. Unfortunately, it looks like this one either timed out or... okay. Oh, interesting. So it did it, it just did it a little differently.

I don't even know what happened on this one. It actually brought in, it looks like from Pinterest, real dogs on the beach, and it gave me instructions. Previously when I did it, it actually created it with generative AI, so this is something a little different. So here I can click download. Oh, you've got to be kidding me. It says session expired. Let me tell you, previously it did it. It worked. It created a video of a puppy, man.

How come the last demo I do is the one that doesn't work? My goodness. All right, y'all. I was hoping to end that one on a bang, but let's just wrap this up here.

All right. As I desperately try to rerun it in the background, I saw that there might have been a couple of questions. So real quick, there was a question about a test prompt. Okay, Marie said, who are the decision makers who hire creatives in pharmacy? Okay, that was an example of a prompt. I can run that here later. A lot of you want to see the head-to-head. All right, a lot of you wanted to see the head-to-head between O3

and Gemini 2.5 Pro. Denny is asking, is it better to upload a CSV versus Excel for a spreadsheet in an AI tool? It depends on which AI you're talking about. In my experience, if you're working with O3, CSV and XLS or XLSX work equally well.
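If you'd rather standardize on CSV before uploading anyway, converting an Excel sheet is a couple of lines with pandas. The file and sheet names here are hypothetical, and you'll need openpyxl installed for .xlsx files.

```python
# Minimal sketch: convert an Excel sheet to CSV before uploading it to an AI tool.
import pandas as pd  # reading .xlsx also requires the openpyxl package

df = pd.read_excel("downloads_report.xlsx", sheet_name=0)  # hypothetical file
df.to_csv("downloads_report.csv", index=False)
print(df.shape)  # quick sanity check that all rows and columns came through
```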

Let's see here. I just want to make sure we didn't have any other questions as we wrap this one up. Mark here with a good comment says Bolt and Lovable do websites much better. Yeah, absolutely, there are other AI tools, right? What I really wanted to show you all is how O3 can

use these five different tools all at once, agentically, and go back and forth and flip-flop between them all, which is, I think, one of the things that really separates it from a Gemini 2.5 Pro, and why I truly think O3 is in a class by itself. All right, I ran this one. Let's see. Did it still not do it? Y'all, I tried so hard to end this episode

with a puppy on the beach. I tried so hard. It worked last time. All right, I broke O3. All right. I hope this is helpful, y'all. I know this was a long, rambling episode, but let me tell you this: if you didn't see in these examples how drastically work is changing because of this model,

then I might feel a little bad for you. Yes, this was long. This was rambling. We do it unedited, unscripted. That's why I say we try to bring you the realest thing in artificial intelligence. Stuff breaks sometimes. Sometimes stuff works great the first time. But the fact that we now have a literal, agentic large language model that's available right now, okay? It can think and reason and plan ahead.

It can browse the internet. It can find trends. It can summarize information. It can literally build you dashboards. It can go through hundreds of thousands of rows of data in a minute or two. And it can on its own agentically decide

how to accomplish those tasks, right? These are all very simple use cases. I haven't fine-tuned the prompts to show you something impressive. This is basic human language, right? I could create something far more impressive if I put a ton of work into it. I'm pretty okay at prompting, right? But I wanted to show you how anyone, how the everyday person, can just go in with a natural language prompt, throw in a bunch of data, throw in something you're working on that requires the web, requires thinking, requires reasoning, requires using spreadsheets, requires creating visual dashboards. All of these things used to take multiple rounds of prompting, even with models like O1 Pro, O3 Mini High, the previous versions, Gemini 2.5, right? The fact that O3 can now do all of these agentically on its own, because it can go back and forth and switch between all of these tools without you having to reprompt it, literally changes what's possible in how we work.

All right, I didn't get to everything. If this was helpful, I have 10 more business use cases

that we just didn't have time for. All right. And just please click that repost. If you're listening on the podcast still, my gosh, I should have cut the power at 45 minutes. If you're like, wait, this changes things, click the repost on LinkedIn.

And I have 10 business use cases for O3 that I couldn't even get to. They were maybe a little too complex, but very much like today's episode, you know, even better. I wanted to show everyone a wide range. So if this was helpful, if you want 10 bonus O3 use cases, including the prompts that you can fill in the blank and try out with your own data, just click repost on this. If you're on the podcast, we always leave the link in the show notes to come watch it on LinkedIn. If I don't send it to you, just hit me back, right? Give me a couple of days. If I don't hit you back right away, just click repost and I'll send this to you. I hope this was helpful. If so, go to youreverydayai.com and sign up for the free daily newsletter. Remember, this was part two, so go listen to part one if you're still a little confused, or just leave me questions in the comments. Hope this was helpful. See you back

tomorrow and every day for more Everyday AI. Thanks, y'all. And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.