AI can generate hundreds of ideas without repetition, allowing brainstorming to shift from being about individual creativity to becoming a curation process.
AI is most effective when paired with expert knowledge to quickly assess its outputs, especially in complicated and exacting work where the expert can identify errors or inaccuracies.
If you're already highly skilled at a task, you're likely better than AI at it, so testing AI on such tasks may not reveal its true potential, which lies in assisting with tasks you're less skilled at or dislike doing.
AI doing tasks better than humans is likely the fastest-growing category, with many of these tasks being automated by AI agents in the coming years.
There is a lower tolerance for AI errors compared to human errors, with customers often accepting human mistakes 5% of the time but expecting AI accuracy to be over 99%.
AI can hallucinate, persuade you it's right, or become sycophantic, so understanding its failure modes is crucial to avoid being misled by incorrect outputs.
Effort and struggle are often necessary for deep understanding and breakthroughs, as shortcuts can prevent reaching vital 'aha' moments that come from sustained engagement with a topic.
AI summaries, like those from NotebookLM, are likely to become the first point of consumption for academic papers, helping learners get over the initial hurdle before diving into full readings.
Today we are talking about 20 times to use AI or not. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes.
Hello, friends. Happy weekend. It is long reads time, and we have not had an Ethan Mollick post on here for a while. And we've got a fun one today. Ethan recently published a piece on his One Useful Thing blog called 15 Times to Use AI and 5 Not To. What we're going to do today is go through and read his arguments. And then where appropriate, I will add either my agreement, my disagreement, or any additional context that I think is interesting. Ethan writes...
There are several types of work where AI can be particularly useful, given the current capabilities and limitations of LLMs. Though this list is based in science, it draws even more from experience. Like any form of wisdom, using AI well requires holding opposing ideas in mind: it can be transformative yet must be approached with skepticism, powerful yet prone to subtle failures, and
essential for some tasks yet actively harmful for others. I also want to caveat that you shouldn't take this list too seriously except as inspiration. You know your own situation best, and local knowledge matters more than any general principles. With all that out of the way, below are several types of tasks where AI can be especially useful, given current capabilities, and some scenarios where you should remain wary.
So first, we are going to read Ethan's 15 times to use AI. Number one, work that requires quantity. For example, the number of ideas you generate determines the quality of the best idea. You want to generate a lot of ideas in any brainstorming session. Most people stop after generating just a few ideas because they become exhausted by it. But the AI can provide hundreds that do not meaningfully repeat. I would actually tweak this one right out of the gate. We're going to add some additional thoughts.
It's not that Ethan is wrong or I disagree. It's just that I think that because of AI, every brainstorming process should involve more quantity than before. So it's not so much that you want to use this when a brainstorming process requires quantity. You just want to totally redefine how you think about brainstorming. This is going to be a big shift in how we think about idea generation.
Instead of it just being what's the best thing our little brains can come up with, we are going to become curators. Our brainstorming process is going to become a process of curation. There's just literally no reason not to spend a bunch of time getting weird with the brainstorming process rather than doing it the way that you used to.
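To make that concrete, here is a minimal sketch of what a quantity-first brainstorming pass might look like in code, assuming the OpenAI Python SDK; the model name, prompt, and topic are illustrative assumptions, not a prescription.

```python
# Quantity-first brainstorming: ask for far more ideas than a human
# session would produce, then curate by hand. Assumes the OpenAI
# Python SDK; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Brainstorm 100 distinct ideas for improving customer "
                "onboarding. Number them, one per line, no meaningful "
                "repeats, and get progressively weirder as you go."
            ),
        }
    ],
)

# The human's job shifts from generation to curation: read the list
# and shortlist the handful worth developing further.
print(response.choices[0].message.content)
```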
Moving on, number two. And remember, the first 15 are all areas where Ethan says that you should use AI. Work where you are an expert and can assess quickly whether AI is good or bad. This can involve complicated and exacting work, but it relies on your expertise to determine whether the AI is providing valuable outputs.
For example, o1, the new AI model from OpenAI, can solve some PhD-level problems, but it can be hard to know whether its answers are useful without being an expert yourself. I think this is a great call-out, although the one caveat or caution that I would have is that a lot of times I've found that people naturally try to use AI to copy a thing that they do already. So for example, a person who's great at social media will test it by seeing how it writes tweets. The problem with that, of course, is that if they're really good at writing tweets, they're still probably better than the AI at writing tweets.
And so oftentimes, by testing it only on the thing that you're already great at, you can come away disappointed. I've found, in fact, that in a lot of cases the best way to use AI is to do things that you are bad at or things that you just don't like doing. Still, Ethan's point stands: knowing a field or a particular area helps you avoid some of the downsides of AI, because you're more quickly able to identify issues with its responses.
Number three, work that involves summarizing large amounts of information, but where the downside of errors is low and you are not expected to have detailed knowledge of the underlying information. AI is good at summarizing novel-length work, but less successful at fact-checking it. I don't have much to add; I think this is a good one. Number four, work that is mere translation between frames or perspectives. For example, you have developed a policy, but now have to turn it into a dozen different training documents for different audiences in your organization. AI is very good at this sort of translation, increasing and decreasing complexity of documents,
so that people can understand them. This is definitely true. And in fact, we're seeing lots and lots of products designed specifically for this. Every's Spiral, for example, is basically this. It takes one piece of content that you've created, be it a podcast transcript or a YouTube video transcript or an essay, and turns it into all the other stuff that you might want to have around it, which could be posts, could be proposals. And Every is far from the only company exploring this. Part of the reason there are so many companies exploring it is that it is very useful right now, and it is hugely time-saving as well.
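To illustrate what that kind of frame-to-frame translation might look like in practice, here is a minimal sketch, assuming the OpenAI Python SDK; the file name, audience list, model choice, and prompt wording are illustrative assumptions, not how Spiral or any other product actually works.

```python
# One source document, rewritten for several different audiences.
# Assumes the OpenAI Python SDK; all names here are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()
policy = Path("policy.txt").read_text()  # hypothetical source document

audiences = ["new hires", "senior engineers", "the legal team"]
for audience in audiences:
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "user",
                "content": (
                    f"Rewrite this policy as a short training document "
                    f"for {audience}, raising or lowering the complexity "
                    f"to suit that audience:\n\n{policy}"
                ),
            }
        ],
    )
    print(f"--- {audience} ---")
    print(reply.choices[0].message.content)
```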
Number five, work that will keep you moving forward. Little things often block our way and a push might be all we need to accomplish it. When writing prior to AI, I might get stuck on a sentence and walk away from writing for an hour. But now I ask AI to give me 30 distinct ways to end the sentence. Frankly, this is a great little insight. It's not a way that I've used AI a lot, but I can see this being incredibly valuable, especially if I was involved in, for example, a creative writing type of pursuit.
Number six, work where you know the AI is better than the best available human that you can access, and where failure modes of AI will not result in worse outcomes if it gets something wrong. Another way to put this, if you try to bring it into a business context,
is that we are all, of course, to some extent, in one way or another, resource-constrained in terms of what we can deploy against any particular problem we're trying to solve in the context of a business. So for example, running a startup like Superintelligent, we only have so many resources, we only have so much time that we can dedicate to, for example, things like creating content for social media. What I think is resonant about Ethan's point here is that best available human doesn't necessarily mean that AI is better than the best humans at a task.
Available could also reflect a constraint like price. If I have $0 to hire someone to write tweets, but I do have ChatGPT, you better believe that AI is better than the best available human. Number seven, work that contains some elements that you understand but need help on the context or details.
Tyler Cowen suggests using the AI as a companion when reading because it allows you to ask infinite questions. This is something that I'm seeing a ton of discussion of recently. In fact, we're almost seeing a request for startups around this. Andrej Karpathy recently tweeted, If Amazon or so built a Kindle AI reader that just works, in my opinion, it would be a huge hit.
For now, it's possible to kind of hack it with a bunch of scripts. Possibly someone already tried to build a very nice AI-native reader app and I missed it. Patrick Collison from Stripe weighed in saying, I find the workflow really annoying today, but as you say, the value is so high that I still schlep through it. Have to buy books from Kobo.com in order to get PDFs that you can upload. Some LLMs don't support PDFs. PDF often doesn't fit in the context window, have to split it. Will be awesome when it's super streamlined.
Y Combinator president Garry Tan retweeted Andrej and said, I want this. I would fund this too. And so I think what I would suggest is that we broaden this out, because one of the things that I think is a big trend for the coming set of years is that a huge amount of our education is going to turn into personalized coaching with a contextually aware AI, be it an agent or an LLM, that's able to interact with you as you're learning or producing work.
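For a sense of the schlep Collison is describing, here is a rough sketch of the manual workaround, splitting a book PDF into context-window-sized chunks; it assumes the pypdf package, and the file name and chunk size are illustrative guesses rather than tuned values.

```python
# Extract a book PDF's text and split it into chunks small enough to
# fit an LLM's context window. Assumes pypdf; sizes are illustrative.
from pypdf import PdfReader

MAX_CHARS = 40_000  # rough stand-in for a model's context budget

reader = PdfReader("book.pdf")  # hypothetical file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Naive fixed-size split; a real reader app would cut on chapter or
# paragraph boundaries and keep some overlap between chunks.
chunks = [text[i : i + MAX_CHARS] for i in range(0, len(text), MAX_CHARS)]

for n, chunk in enumerate(chunks, 1):
    print(f"chunk {n}: {len(chunk):,} characters")
# Each chunk can then be handed to an LLM along with your question.
```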
Number eight, work where you need variance and where you will select the best answer as an editor or curator. Asking for a variety of solutions, give me 15 ways to rewrite this bullet in radically different styles, be creative, allows you to find ideas that might be interesting. Once again, just like I said with number one, it's not so much about finding work that requires quantity; rather, all work, or at least all brainstorming, should now include quantity because of the availability of AI.
I think that that's similar here. I think that we should basically be treating a much broader cross-section of work as something where we act as a manager, as an editor, as a curator. Rather than there being a small handful of types of work that fit this idea of needing variance and selecting the best answer, try running all work through that process. I think that we're going to find a lot of better results. Number nine, work that research shows AI is almost certainly helpful in, many kinds of coding, for example. Basically, just don't fight the trends and know what other people have learned. I guess one thing I will say on know what other people have learned.
This sounds so simple, right? This is why this is one of the only bullets on here that just has a single sentence and doesn't feel like he needs to explain it more. But the idea that we should just copy what other people have found works is literally the entire premise and theory of change when it comes to Superintelligent. The whole idea of Super is that instead of giving people a bunch of courses and certificates and all these old-world learning models, just let them copy the AI use cases that other people have found already work. And I think that's basically the same energy here.
Today's episode is brought to you by Vanta. Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever.
Vanta automates compliance for ISO 27001, SOC 2, GDPR, and leading AI frameworks like ISO 42001 and the NIST AI Risk Management Framework, saving you time and money while helping you build customer trust. Plus, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center, all powered by Vanta AI.
Over 8,000 global companies like LangChain, Leela AI, and Factory AI use Vanta to demonstrate AI trust and prove security in real time. Learn more at vanta.com slash nlw. That's vanta.com slash nlw. Today's episode is brought to you, as always, by Superintelligent.
Have you ever wanted an AI daily brief but totally focused on how AI relates to your company? Is your company struggling with AI adoption, either because you're getting stalled figuring out what use cases will drive value or because the AI transformation that is happening is siloed at individual teams, departments, and employees and not able to change the company as a whole? Superintelligent has developed a new custom internal podcast product that inspires your teams by sharing the best AI use cases from inside and outside your company.
Think of it as an AI daily brief, but just for your company's AI use cases. If you'd like to learn more, go to besuper.ai slash partner and fill out the information request form. I am really excited about this product, so I will personally get right back to you. Again, that's besuper.ai slash partner.
Number 10, work where you need a first-pass view of what a hostile, friendly, or naive recipient might think. Basically, Ethan is suggesting incorporating expectations of feedback into the work. And again, I'm starting to beat a dead horse, but I think what's coming out over and over again is not that you can segment work into a bunch of different varieties, some of which are and aren't useful for AI. You can, but in many cases, the things that are useful with AI are going to impact the rest of the work.
For example, is there really just one type of work or a small handful of types of work where it's useful to have potential feedback from different reviewer perspectives? Or should basically every business communication run through that process? I'm not sure, but my guess is that a far higher percentage of types of work would benefit from that type of potential feedback review. And so I think it's probably worth experimenting a little bit more broadly.
Number 11, work that is entrepreneurial, where you are expected to stretch your expertise widely over many different disciplines and where the alternative to a good-enough partner is not being able to act at all. AI can be a surprisingly competent co-founder, helping give mentorship while also acting to build the documents, demos, and approaches that are otherwise likely to be outside your experience.
This is absolutely true. It has a little bit of the element of what I was talking about with best available human. If you are resource-constrained, best available may be none. But even more broadly beyond that, constraints breed creativity. And even when we are discussing AI use cases with the biggest companies in the world, one of the areas that we focus on and have people look at is the solopreneurs out there.
These are folks who have a structural incentive and a need to push the boundaries of what AI can do. And I believe that a lot of the processes and new workflows and new approaches to AI that will eventually find their way into more traditional enterprises are going to be field-tested in the solopreneur and more broadly entrepreneurial worlds first.
Number 12, work where you need a specific perspective and where a simulated first pass from that perspective can be helpful, like reactions from fictional personas. I think that's pretty similar to the hostile, friendly, or naive feedback. Number 13, work that is mere ritual, long severed from its purpose, like certain standardized reports that no one needs. What, in the words of Bob Sutton and Huggy Rao, scatters your attention and makes you less valuable?
What work serves no useful purpose? In an ideal world, you would remove the work, but you can at least reduce its hold on you by having AI help. Though make sure this is indeed the case. Far too many people automate performance reviews, for example, which are meaningful only when done by a human. This is again a big, obvious one, but still very important to say out loud. Use AI to get rid of the worst part of your work if you possibly can.
Number 14, work where you want a second opinion. Give an AI access to the data and see if it reaches the same conclusion. Once again, I think this is something that might be interesting to apply to a much broader array of processes and types of work than it might seem at first.
And number 15, work that AIs can do better than humans. This is likely to be the fastest-growing category. That, my friends, is going to be a huge amount of what 2025 is about. We're going to see a lot of that instantiated in the form of agents, although there's going to be a ton of testing and discovery and iteration and failing. But yeah, ultimately, how we work is going to be changed by AI in huge ways, even more significant than it feels like now. But let's move now to almost the more interesting side of this list, five times not to use AI.
Ethan says,
He continues,
One interesting thing to add about this: a lot of the companies that we work with have found that there is actually a lower tolerance for errors from AI than there is for humans. And that makes it really hard for them to deploy AI for areas like customer service.
I don't have specific numbers, but they're on the order of something like this: people on average are okay with a human making an error 5% of the time, but expect an AI to make an error less than 1% of the time. This is going to be a really interesting dynamic that we'll have to see how it evolves, because it will dictate what we can and can't use AI for in a mainstream context.
Number three, area where not to use AI: when you do not understand the failure modes of AI. AI doesn't fail exactly like a human. You know it can hallucinate, but that's only one form of error. AIs often try to persuade you that they are right, or they might become sycophantic and agree with your incorrect answer. You need to use AI enough to understand these risks. And that, I think, is the big banner headline point here. A lot of this knowledge is hard-won and cannot be replaced even by great essays and valiant readings and discussions of those essays on podcasts.
Number four, where the effort is the point. In many areas, people need to struggle with the topic to succeed. Writers rewrite the same page, academics revisit a theory many times. By shortcutting that struggle, no matter how frustrating, you may lose the ability to reach the vital aha moment. I think this is going to be enormously, enormously challenging for people as AI fully comes online. There are areas, both in learning and in work, where efficiency is not the point,
where the messiness of inefficient process, of continuing to hammer away at an idea, is the only way to actually figure things out. Part of the new pedagogy that we're going to have to develop for students is going to be around helping them navigate which scenarios fall into the category where efficiency is fine versus where the struggle matters. And it's not going to be easy.
Lastly, he writes,
Okay, so all of those I agree with clearly, but let's go back to number one. Number one time not to use AI from Ethan is when you need to learn and synthesize new ideas or information. Asking for a summary is not the same as reading for yourself. Asking AI to solve a problem for you is not an effective way to learn, even if it feels like it should be. To learn something new, you are going to have to do the reading and thinking yourself, though you still may find an AI helpful for parts of the learning process.
So maybe I oversold how much I disagree, because I don't actually disagree in general. However, I want to call out the specific example of NotebookLM from this year as a counterpoint, one that shows how AI's ability to summarize is actually going to become an integral part of longer future learning processes. It is absolutely the case that just getting a summary from AI is not the same as reading for yourself or doing the work, putting in the time to really fully mentally ingest something.
At the same time, I'm fairly sure that within a couple of years, almost every academic paper will be consumed first through a NotebookLM-style podcast summary, even if someone is then going to dig back in and read it all again for themselves. It is just such a phenomenal way to start and get over the hump of learning. And I would anticipate that, over time, we're going to stumble across a lot of additional learning products like that that transform how effective we can be as learners. The broader point here, which I do agree with, comes back to that number four point that in many cases in learning and in work, effort is the point. It is essential to the process and can't be shortcut. It's just really cool to see that AI is actually helping make that effort more effective as well.
All right, so those are 20 times to use AI or not. Another great thought-provoking post from Ethan. Hope this was a fun discussion for you. Appreciate you guys listening or watching as always. And until next time, peace.