Today on the AI Daily Brief, three ways that you can get better at AI right now. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Welcome back to the AI Daily Brief, friends. Quick announcements. First of all, thank you to today's sponsors, KPMG, Blitzy, Vanta, and Superintelligent. To get an ad-free version of the show, go to patreon.com slash AI Daily Brief. Ad-free starts at just $3 a month.
And speaking of ads, one more thing. We are starting to fill up the fall inventory of ads. If you are an AI startup, an AI consultancy, basically any company that wants to reach this very professional, highly AI audience, and you are interested in sponsoring the show, reach out to me, nlw at breakdown.network, and I will send you the information. Now for today's episode, we are doing something a little bit different than our normal long reads.
I've done this a few times in the past, but basically instead of a long essay, I'm going to build off something that I wrote or was thinking about earlier this week for a more opinionated rather than newsy episode. And this is one that I wouldn't be surprised resonates with a lot of folks out there. Or if not for you, maybe for someone a little bit earlier in their AI journey that you go ahead and send this to.
The TLDR is that I pretty frequently get asked, as probably many of you do, what's the best way to get better at using AI right now? And for a while, there were some fairly decent answers. Now, I've always had a bit of a critique around the education tools for AI. That's why we originally started there with Superintelligent, even though it's become something different now. But the gap between what I think people need from a learning standpoint and what is available to them has gotten much worse in the last six months.
Basically, so far, AI learning resources haven't caught up with the agentic transformation. You're not seeing Coursera courses yet about AI agent management. You're not seeing Udemy certificates about how to become an agent boss. Moreover, even beyond that, I've always been in the camp that the best way to learn about AI is by actually getting reps in and doing it. Remember, there are no AI experts. There are only people who have practiced more than you. And so if you want to become one of them, you've just got to get the time on task.
Still, with that question coming up so often, I've started to rotate around a pretty clear answer. I think there are three things that anyone could be doing right now that, if they do them, will make them better than 99.9999% of the population at actually getting value out of AI. The first is, for a week, using OpenAI's O3 model specifically as a strategic collaborator.
And yes, the model matters here. Other models are good for different things, but I think that this is the one, if you're coming at this from a business standpoint, that is really going to most move the needle. Number two is to use Lovable or Bolt or SoftGen or Replit or really any of these tools to vibe code something. Number three is to use a tool like Zapier or n8n or Lindy to start building agentic workflows.
Now I have a couple of bonus ones, but let's talk about each of these in sequence and what specific types of activities one might do in each of these contexts to up their AI skills.
Let's start with O3. As I mentioned, different models are good for different things. If I'm looking for writing, especially anything that involves good prose, I still find myself either using GPT-4.5 or using Claude. But when it comes to using ChatGPT as an actual strategic colleague...
Not just as an assistant or intern, but as someone who has substantive ideas and thoughts to share that might shape how I think about things, O3 is for me by far the best model. So the assignment here, if you want to get better at using this tool, is to use O3 and treat it as a strategic colleague for a week. So what are some specific examples of what you might do to treat it as a strategic colleague?
One idea is to give O3 an idea and have it draft a memo or a deck around it. I'll give you an example. Something I've been thinking about lately is that on the one hand, I really tend not to like podcast networks that are traditional. The traditional way that a podcast network works is they handle all of your ad sales, take a cut of the ads, and promise a bunch of stuff around growth that they don't actually deliver.
Now, as an aside, having been through this a lot, the ad network relationships I have, both on this show and with the other one, I've been able to tweak to better fit the learnings that I've had. But still, this is a generally persistent problem. And specifically on that question of growth, I've always thought that there might be a better way to build some sort of collaborative growth alliance directly amongst other podcasters.
The idea might be something involving reserving some portion of inventory to use to run promos for other podcasts in the network, to collaborate more fluidly on feed drops where you actually put other people's podcasts in your feeds to provide a direct intro to your audience for them, and other more creative strategies. And so that's about as far as I've gotten on this idea, but I've had context recently to want to get it a little bit more firmed up and crystallized.
So as I was driving into New York yesterday, I rambled at O3. You can see this very long, not even a prompt, it's just a ramble that was transcribed with OpenAI's Whisper, by the way, not Apple's transcription tool, so it actually got things right. And I basically asked it to take the nut of this idea and turn it into something a little bit bigger, more expansive, and more fully formed.
What O3 sent back was effectively an overview memo of this idea. And this is one of the things that you're going to notice a lot with O3 if you do this. What it's going to give you probably won't just be a long string of words. It'll be a bunch of sections, many of which have embedded tables, that structure the thoughts in a crisp and clean or at least clear way.
So in this case, it spat back a high-level blueprint for a podcast growth alliance, including a core idea, the guiding principles, membership and governance, growth levers and standard commitments, the tooling stack, how to handle money, an example quarterly cadence, some bullets on why this would work, and next steps to tease the concept.
And so what I got out of this, just right off the bat, was a more structured starting point for this idea, which previously was just a ramble. That's valuable all on its own. However, there were a bunch of things about the way that O3 came back that I knew were still way too heavyweight for what I was looking for. For example, its suggestion of having a steering circle to set a quarterly growth agenda.
No one's going to want that. And so again, because I'm treating it like a strategic colleague rather than an AI system that I have to prompt precisely, I gave it the very simple follow-up: this is good, but let's try the even lighter weight version.
It went in the opposite direction and came back with the Handshake Alliance, its version of an ultralight model. Similar sections, a little bit less in depth, but basically still an articulation of the idea. Now, hold aside any of the specifics of this. You get the idea here of how you could give O3 an idea and have it spit back some structured thoughts that might be useful.
One thing to note here, and this is really important with O3: I didn't ask it to provide critical feedback on this idea. And if you don't specifically prompt it to provide critical feedback, O3 is going to start from the standpoint that whatever you're feeding it is a good idea. It's not even going to question that. So another thing that I could have done, for example, is say: provide a steelman argument for why this wouldn't work, as another way to pressure test it.
I'm pretty sure it's a good idea, so I didn't do it in this case. But it is important to note that O3 is not natively going to give you critical feedback, and that's something that a human strategic colleague might do natively without being asked. Another thing that you could do with O3 if you're using it as a strategic colleague is to give it a strategic question and have it give you a bunch of scenarios to consider. And scenarios is a really important and operative word here.
I don't think that O3 is great at deciding things. I don't think it has discernment and taste. A great example of this is there is basically nothing that O3, or really any AI so far, is worse at than coming up with names. It can come up with a million theoretical names for a project or a company, but they're almost universally terrible. And it's because what makes for a good name is so intangible and subtle that it just is a mismatch, at least for the capabilities of these current models. Now, when you bring this to a broader strategic context, I think the way to think about O3 is to basically get a mapping out of some scenarios that you can then go review and give feedback on.
It's almost like asking a colleague to prepare a plan for how it would work if you did do something to help you decide whether you actually want to do that thing. So the example, which I won't go too into depth with, is we have at Superintelligent these agent readiness audits. They're a tool for helping companies understand where they are in their agent journey, what sort of organizational gaps they have, like data readiness, and what use cases they're best suited to.
It's a one-time process that can, yes, turn into other types of engagements. We go deeper on planning individual use cases, and eventually we can connect it to the marketplace where you can find the vendors to help you. But it is not a subscription service. It's a one-time fee. And so one of the things that we are naturally considering is whether we at some point want to turn those one-time audits
into a more ongoing agent planning product suite. So I asked O3: how might we turn Superintelligent's agent readiness audits into an agent planning product suite that's an ongoing software tool, for example, where people are continuously looking for agent opportunities, getting advice, and then finding the right partners? And again, because I didn't ask it to be critical, it assumed that that sort of conversion is a good idea and just ran with it.
What it came back with was a structured vision for how this might work: a North Star vision, four key pillars and key capabilities, a data and architecture sketch, a phased product roadmap, the commercial model, the go-to-market flywheel, execution risks and mitigations, etc., etc. This is the type of thing that it's really good at. It's going to help you lay out scenarios that you're just starting to consider in a way that is much more comprehensive and gives you a better ability to go review them.
Now there is of course another thing that you can do with O3 that is very powerful, maybe the most powerful, and that is use deep research. Deep research is basically a single-purpose agent for research that uses the O3 model, goes out, searches, and researches based on a prompt that you give it, and comes back with the type of report that might in the past have taken someone a week or two to put together. I use deep research for things all the time: industry background, dossiers on events that happened.
As I was preparing this keynote a couple of weeks ago, one of the things that I was comparing was how fast the big tech hyperscalers have come together around agent standards like MCP, as opposed to how long the battles were around previous protocol wars around things like HTML and email standards.
Now, this is something that I find personally really interesting and have read lots of books about, but I wanted to re-up on a crisp understanding of some of the most important protocol battles in the context of the speech that I was giving. And so I asked Deep Research to put together basically a dossier on all of the protocol wars that have defined technology over the last 70 years.
As it always does, Deep Research asked clarifying questions, and then it was off to the races. After a relatively short seven-minute search across 20 sources and 105 searches, Deep Research came back with the dossier cited as it always is, with what was in this case a pretty short report because I wasn't looking for something that was super comprehensive.
This is the type of thing that you can use deep research for. If you give yourself a week to really use O3 as a strategic colleague, you will find lots of use cases that you might not have imagined for it. Today's episode is brought to you by KPMG. In today's fiercely competitive market, unlocking AI's potential could help give you a competitive edge, foster growth, and drive new value.
But here's the key. You don't need an AI strategy. You need to embed AI into your overall business strategy to truly power it up.
KPMG can show you how to integrate AI and AI agents into your business strategy in a way that truly works and is built on trusted AI principles and platforms. Check out real stories from KPMG to hear how AI is driving success with its clients at www.kpmg.us slash AI. Again, that's www.kpmg.us slash AI.
This episode is brought to you by Blitzy. Now, I talk to a lot of technical and business leaders who are eager to implement cutting-edge AI, but instead of building competitive moats, their best engineers are stuck modernizing ancient codebases or updating frameworks just to keep the lights on. These projects, like migrating Java 17 to Java 21, often mean staffing a team for a year or more. And sure, co-pilots help, but we all know they hit context limits fast, especially on large legacy systems. Blitzy flips the script.
Instead of engineers doing 80% of the work, Blitzy's autonomous platform handles the heavy lifting, processing millions of lines of code and making 80% of the required changes automatically. One major financial firm used Blitzy to modernize a 20 million line Java code base in just three and a half months, cutting 30,000 engineering hours and accelerating their entire roadmap.
Email jack at blitzy.com with modernize in the subject line for prioritized onboarding. Visit blitzy.com today before your competitors do. Today's episode is brought to you by Vanta. In today's business landscape, businesses can't just claim security, they have to prove it. Achieving compliance with frameworks like SOC 2, ISO 27001, HIPAA, GDPR, and more is how businesses can demonstrate strong security practices.
The problem is that navigating security and compliance is time-consuming and complicated. It can take months of work and use up valuable time and resources. Vanta makes it easier and faster by automating compliance across 35+ frameworks. It gets you audit-ready in weeks instead of months and saves you up to 85% of associated costs. In fact, a recent IDC whitepaper found that Vanta customers achieved $535,000 per year in benefits, and that the platform pays for itself in just three months.
The proof is in the numbers. More than 10,000 global companies trust Vanta. For a limited time, listeners get $1,000 off at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off. Today's episode is brought to you by Superintelligent, specifically agent readiness audits. Everyone is trying to figure out what agent use cases are going to be most impactful for their business, and the agent readiness audit is the fastest and best way to do that.
We use voice agents to interview your leadership and team and process all of that information to provide an agent readiness score, a set of insights around that score, and a set of highly actionable recommendations on both organizational gaps and high-value agent use cases that you should pursue. Once you've figured out the right use cases, you can use our marketplace to find the right vendors and partners. And what it all adds up to is a faster, better agent strategy.
Check it out at bsuper.ai or email agents at bsuper.ai to learn more. Now, one last note on O3 before we move on to our next way that you can get better at AI right now. O3 Pro came out recently, and people are still just trying to figure out what to use it for. I candidly have not found a regular use case for it, but it seems, from what I've read and what people who have dug deep on this have said,
that where O3 Pro really thrives is when you want to give it a ton of context to help reason through a decision. Indeed, the Latent Space guest article about O3 Pro was called "God is Hungry for Context."
The author writes: My co-founder Alexis and I took the time to assemble a history of all of our past planning meetings at Raindrop, all of our goals, even recorded voice memos, and then asked O3 Pro to come up with a plan. We were blown away. It spit out the exact kind of concrete plan and analysis I've always wanted an LLM to create, complete with target metrics, timelines, what to prioritize, and strict instructions on what to absolutely cut.
But the plan O3 Pro gave us was specific and rooted enough that it actually changed how we are thinking about our future. So remember, I was just talking about how with O3, the goal isn't to get you to the right answer. It's to map and structure thoughts in a way that you can review them more efficiently and quickly to come up with your own answer.
And yet what Ben, the author of this piece for Latent Space, is saying is that O3 Pro takes that to the next level. Like I said, I haven't had a chance to try it yet, but if you have some big decision that you're really trying to think through for your company, maybe try giving O3 Pro a bunch of context and see how it does, maybe even compared to O3. All right. So that is the first thing you can do to get better at AI right now. Use O3 as your strategic colleague for a week.
Next up, number two on our list is vibe code something. Again, I am partial to Lovable, but you could use Bolt, Replit, SoftGen, any of the vibe coding tools, and you'll get a big chunk of this experience. There is a reason that vibe coding is such a breakout use case for AI. It is massively increasing
the capability for the average person to create real, meaningful things, to speak ideas into existence. I believe that interacting with coding tools will be every bit as critical a skill set as interacting with these text-based LLMs, and so you're going to want to start getting your reps in now. Even though the language that they operate in is English, you're still going to want to know how to best use them, get what you want out of them, and interact with the other tools they give you access to.
Indeed, one of my beefs with the old educational slate that's still sticking around is that a lot of it's about prompt engineering in a way that I just don't think is all that useful, at least not compared to what it was 12 months ago.
However, if you were going to learn one type of prompt engineering right now, figuring out how to get the most out of Lovable or Bolt, well, you could do a lot worse. So what are some ideas for things that you could vibe code? The suggestions that Lovable offers include a personal website, a note-taking app, an expense tracker. All those are fine, but here are my three ideas.
The first, and I think most relevant if you are in a startup or a small team with any sort of digital product, is to vibe code feature ideas. Something that we've done at Superintelligent, and that I believe is going to be the norm inside of startups and product teams in general, is that basically people are no longer allowed to share product ideas. They have to prototype them.
You can see when you flip through my Lovable library that a huge number of the things that I've done with it are exactly this. Here's a sample hub, for example, where I proposed connecting the audits that we do directly to the marketplace that we have, so that when a company gets a set of use case recommendations, it can, with a single click, turn them into an RFP. In other words, if Acme Corp wants to implement a customer service AI assistant, it can press a single button to do so.
Now, in this case, this was not ever meant to be production. This was not going to be something that I was turning over to the actual software engineering team to build off of. It was a way to show rather than tell something that I wanted to see built, or at least propose that we talk about. And while it took me a bit longer than it would have to simply write down the idea, because I had to write down the idea as a prompt and then massage and refine it, it did two things.
First, it gave me a chance to actually make sure that I thought the idea was good, or to evolve it to the place where I did want to share it. And second, it made it much easier for the other people on my team to understand exactly what it was that I was talking about, rather than have to try to translate the picture in my brain from the words that I was sharing. Prototyping feature ideas is to me the easiest default use case for these tools, and one that I wouldn't be surprised is useful for you right now in a production sort of way.
The second thing you could do with your vibe coding test is to vibe code a prototype of a side project idea. I'm sure that all of you have had some point where you've said to yourself, you know, it would be really cool if there was something that did X, Y, or Z. Well, now you can just prototype that thing.
An example for me: I was interested in the idea of a vibe coding platform that was inherently social, where the whole idea was that when you built something, you instantly shared it to the community, who could remix it, vote it up, and allow you to potentially earn from it and eventually launch it.
So basically think about a Lovable-Product Hunt hybrid, maybe with a side of a little token ecosystem. This is the modern day equivalent of buying a domain for an idea that you know you're never going to do, but still want to keep an option on. But in terms of your learning, this is another really great way to get a sense of how these vibe coding tools work.
My last suggestion? Try to build a copy of some game you liked as a kid. I am old enough that yes, in my elementary school, we actually had that old black and green version of Oregon Trail, and so I thought it would be fun to build a version of it that was anchored to H.P. Lovecraft instead of to Western expansion. Thus was born Eldritch Trail, where you can start a new expedition, create a character, and decide your occupation, just like in the Oregon Trail game. I'll be an occultist.
You can buy your supplies, my favorite part of those early games, and start on your journey. This one took longer than any of the other projects that I showed you, and that's part of why I think this game design approach is valuable. It turns out there are just so many more details to think about when you're trying to build something like this that actually works, and that will give you an even broader sense of what vibe coding tools can do. Now, if you want to go really advanced,
You can also try to go all the way to actually publishing something, learning how to use tools like Supabase and GitHub. But even if you don't do that, I guarantee that if you spend about a week vibe coding, again, you will feel way ahead of where other people are when it comes to AI.
Number three, the third thing you can do to get better at AI right now, is to go spend some time building agentic workflows with n8n or Zapier or Plum or Lindy or whatever. And the important thing here is that even though a lot of consumer agents are coming online that abstract away this sort of process,
by going and building these automated workflows manually, you're going to have a much better sense of how agents actually work and what it looks like under the hood. And my guess is you'll have a leg up on using even the agents that abstract all of this complexity away. There are a bunch of really common use cases that, even if they're not the most essential thing that you do, are still going to give you a chance to dig in, figure out how to use these tools, and probably get some value out of them.
Think research flow, sales outreach flow, content generation flow. So this is a template from Lindy for lead outreach. And you can see we have here the flow from when it receives the message or instruction, to what the AI agent first does, to the loops that it goes through to satisfy the problem, to the exit back to the agent, to where it delivers the result. So in this case, the prompt
asks the user to provide a list of leads to reach out to, which can be names, emails, or Google Sheets. The agent then interprets, based on whether it is a Google Sheets link or a list of leads, what to do next. You can see here that you can toggle the model. Lindy gives you the choice between default, fastest, balanced, or smartest. And you can connect it to specific actions and external tools.
The loop is the action that's going to be repeated over and over, which in this case is searching Perplexity and then sending an email. As you can see, when you work with something like Lindy, or really any of these tools, they're going to get more powerful when you plug them into other systems, but they're going to give you the ability to determine when and how they actually have access to those systems. For example, in this lead outreach tool, to get all the way through the flow, I would need to authorize it to send email, but you can also toggle on things like asking for confirmation before that action happens.
The exit loop refers to when it has completed one full pass of the action and is ready to move on to the next, in this case with a different lead to focus on. I don't really think you can go wrong with any of the types of tools that I just mentioned. And again, even though consumer grade agents are fast coming online that abstract all this away, I really do think there are big benefits to figuring out how to map these sorts of flows manually, just to understand how these connections all work.
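If you're more code-minded, the shape of that lead-outreach flow can be sketched in a few lines of plain Python. To be clear, this is not Lindy's actual API, and every function name here is hypothetical; it's just a minimal illustration of the receive-loop-exit pattern described above:

```python
# Hypothetical sketch of an agentic lead-outreach loop -- not any real
# product's API. Shows the pattern: receive input, loop one pass per
# lead (research tool, then draft/send), exit when all leads are done.

def research_lead(name):
    # Stand-in for the research tool call (e.g. a web search step).
    return f"background notes on {name}"

def draft_email(name, notes):
    # Stand-in for the LLM drafting step.
    return f"Hi {name}, I came across {notes} and wanted to reach out."

def run_outreach(leads, require_confirmation=True):
    sent = []
    for lead in leads:                    # the "loop": one pass per lead
        notes = research_lead(lead)       # tool call 1: research
        email = draft_email(lead, notes)  # tool call 2: draft the email
        if require_confirmation:
            # Human-in-the-loop gate, like toggling "ask for confirmation"
            # before the send action in a workflow builder.
            approved = True               # assume approval in this sketch
            if not approved:
                continue
        sent.append((lead, email))        # stand-in for actually sending
    return sent                           # "exit loop": all leads processed

results = run_outreach(["Ada", "Grace"])
```

The point isn't the code itself; it's that every no-code workflow builder is assembling exactly this kind of loop for you, which is why building one manually teaches you so much about how agents work.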
Okay, so those are the three things that I recommend to get better at AI right now. Use O3 as your strategic colleague for a week. Vibe code something with one of the vibe coding platforms. Design and interact with an automated or agentic workflow with Lindy or n8n. And here are two more just as bonuses. First of all, as I was just mentioning, we are getting the beginnings of consumer generalist agents, and some of them are growing really fast. Here is entrepreneur Henry Shi talking on LinkedIn about GenSpark,
which he calls the fastest growing lean AI company he's ever seen. The company got to $36 million in ARR in just 45 days with just 24 people. GenSpark is one of these generalist agents that can do a bunch of different things. And while my experience with these tools has in general been a little underwhelming, Manus is another one that has gotten a lot of buzz.
I do think that if you spend the time to figure out different things that they're good for, they probably can add a ton of value. And once again, you'll be ahead of the rest of us who are going to have to catch up to how agent interfaces work as they come online. The second bonus thing that you could do if you really want to go a little bit more advanced is to go learn about and even interact with and try MCP.
MCP stands for Model Context Protocol, and it's basically an API for data that agents can use. So an MCP server is something that gives an agent access to a specific set of data and, once set up,
can be used by multiple agents. So you can see how, in a world where lots of different people are building lots of different agents, that's going to go a lot faster if each individual person doesn't have to connect data sources by hand each time, but can just plug into the relevant MCP server for that data source whenever and however they need to.
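To make that concrete, here's a toy Python sketch of the idea. This is emphatically not the real MCP protocol or SDK; the class and method names are invented for illustration. It just shows the core concept: one server wraps a data source behind named tools, and any number of agents reuse it through a uniform interface:

```python
# Conceptual sketch of the MCP idea -- NOT the real protocol or SDK.
# One "server" wraps a data source behind named tools; any number of
# agents call those tools without wiring up the source themselves.

class ToyMCPServer:
    """Hypothetical stand-in for an MCP server wrapping one data source."""

    def __init__(self, name, data):
        self.name = name
        self._data = data  # e.g. a CRM, a wiki, a ticket system
        # The server advertises a fixed set of named tools.
        self.tools = {"lookup": self._lookup, "list_keys": self._list_keys}

    def _lookup(self, key):
        return self._data.get(key, "not found")

    def _list_keys(self):
        return sorted(self._data)

    def call(self, tool, *args):
        # Agents only ever go through this uniform interface.
        return self.tools[tool](*args)

# Set up once...
crm = ToyMCPServer("crm", {"acme": "customer since 2021"})

# ...then reuse from any agent that knows the interface.
def agent_a(server):
    return server.call("lookup", "acme")

def agent_b(server):
    return server.call("list_keys")
```

The real protocol handles transport, discovery, and permissions on top of this, but the basic value proposition is the same: connect the data once, and every agent gets to use it.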
The excellent Riley Brown, who is RileyBrown underscore AI on Twitter slash X, currently has an extensive MCP tutorial pinned to the top of his profile that can help you figure out how to interact with this. Like I said, this is much more advanced, but you will definitely be ahead of nearly 100% of users if you actually make it that far.
So there you have it. Three ways to get better at AI right now. The one additional benefit of all of these strategies is that they are actually pretty genuinely fun. So I am excited to see what you do with it. But for now, that's going to do it for today's AI Daily Brief. Appreciate you listening or watching as always. And until next time, peace.