
Reid riffs on Tobi’s memo, AI and play, and the tweet that cost trillions

2025/4/16

Possible

Chapters
Shopify CEO Tobi Lütke's memo on mandatory AI adoption sparks debate. Reid Hoffman supports the memo, advocating for AI integration across all company sizes and roles, emphasizing daily usage and continuous learning. He suggests regular check-ins to assess AI's impact and encourage proactive adoption.
  • Tobi Lütke's memo on mandatory daily AI use at Shopify
  • Reid Hoffman's support for the memo
  • Advocacy for AI integration across all company sizes
  • Emphasis on daily AI usage and continuous learning
  • Regular check-ins to assess AI's impact

Shownotes Transcript

I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. With support from Stripe, we typically ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.

All right, Reid, so lovely to be here with you. So recently, Tobi Lütke, the CEO of Shopify, put out a memo that was covered by everyone because it had to do with internal employees at Shopify and what they were expected to do with AI. It told them that if you're going to request more resources for your team, you'd better check whether AI can do the job better and faster, and you actually don't need that additional headcount.

It also said, like, everyone here should be expected to be using AI every day. And he said, I'm the CEO, I'm no different. We want to grow by 20, 30, 40 percent a year; every employee needs to grow as well. I think some people were shocked by this memo. Other people found it reasonable. What did you think about the contents of the memo, and also Tobi sort of putting out this bold statement for the industry? You know, I also found Tobi's memo to be exactly right.

I thought it's the kind of leadership that Tobi does. And it's also him thinking as the classic technologist, because he's obviously an engineer: how we use tools, AI as amplification intelligence, how it is that we get,

you know, super agency through doing this. And his memo, I think, is exactly the kind of thing that everybody, not just technology companies, should be doing. Every single CEO of anything from a, you know, call it five-to-seven-person company to a tens-of-thousands-of-people company should look at that and say, what's my version of how I should do that and how I should integrate it?

And Tobi obviously has given it enough thought to kind of say, look, here are some key kinds of checkpoints that work within companies, which is: if you're going to ask for resources, make sure you frame

how you're asking for more resources in the context of: here's how I'm already using AI, and here are the reasons I need more resources given how I'm using AI, either AI's limitations or the AI opportunities from doing it. So one of the things that I've been telling, you know, kind of my portfolio companies,

is to actually have kind of weekly or monthly check-ins where everyone has to bring a "here is the new thing I've learned about how to use AI to help me do my job, help us do our job, help us perform better in our mission as a company." Because the

answer is, if you actually haven't found something that was useful to you, useful to your group, useful to your company, you haven't tried hard enough. I think Tobi's memo is the kind of thing

that, in fact, CEOs and all group leaders should be looking at, saying: great, how do I build on that? Thank you for the open-source kind of management technique. What are the things that I should do specifically for our group, for our company, for our mission, for our culture? What is our version of that? And then start iterating in the same way.

I mean, I have to admit, last week on LinkedIn, I saw a marketing agency and they said, you know, we promise our clients that you will never get an image that was started in an image generator using AI. We promise you, you're never going to get a tagline from us where we used ChatGPT to create it. And I literally had to look at the posting date because I thought it was an April Fools' joke, and it wasn't. And like,

I get the nervousness and being scared about your job and about the future, but I just sort of couldn't imagine that this marketing agency was essentially doing the exact opposite of Shopify and sort of banning AI in their workplace. I'm sure it befuddles you as much as it befuddles me. Yeah.

Well, I mean, I think generally speaking, that's the similar idiocy in the education space, saying our students shouldn't use ChatGPT, because the whole answer is you're preparing them for the future. You're preparing them for being citizens, for being workers, for being, you know, people who are navigating life. And here is this fundamental tool. It's kind of like saying,

hey, none of our people can use anything that uses electricity, and that's how they learn. They have to use pencils and paper and no electricity whatsoever in anything. You're like, well, that was idiotic. Well, it's similar with ChatGPT. And so for that marketing agency, the question is really, when is it going to have to shift? Or it's probably going to die or become a very esoteric boutique. Right.

Absolutely. If you want to be the most boutique agency, perhaps that's the way to go. So another concern people have with AI, though, is misinformation, disinformation, all of this synthetic media that was created. And actually last week, and this wasn't created by AI, this was a tweet that

caused $8 trillion worth of market volatility, because someone tweeted that the tariffs were off when they in fact were not. And so if a single tweet can move the market by $8 trillion, what does this mean for the future, when disinformation and misinformation are increasing, and perhaps with algorithmic trading and AI able to do this at sort of greater quantities and greater speeds? Like, how do we protect against that for the future? There's a combination of a...

free market response, which I think is partially correct, and a societal response, which is also partially correct. And so that's the balance that makes this challenging. So the free market response is to simply say, well, if people who are doing trading are going to be idiots and not track false posts and so forth, they're going to lose money and eventually they will be disempowered.

And so what you principally need to do is just make sure that there are validated sources of information that kind of are the anchors, and then increase that validation, accuracy, and availability, and then allow the market to sort it out. And that's a partial answer. And my principal thought there is,

We should not be trying to restrict technology as much as we should be trying to shape technology.

Because the question isn't, like, let's not have algorithmic trading. And it's like, okay, that's kind of foolish. It's: let's have algorithmic trading work in the following way, generating the following reports, making sure it's involving the following kinds of data, and it's only deployable by entities that have a method by which they participate in the market in a way that is healthy for not creating,

you know, crazy volatility swings that damage society. It's a little bit similar to saying, hey, you know, car manufacturers don't want to manufacture seatbelts, drivers don't want to wear seatbelts, but actually, in fact, because the cost to the society and the healthcare system and everything else is so high. Like, you would say, hey, it's a free market, you should decide whether or not you're going to take the risk that you're going to die. It's like, no, no, no. Actually, in fact,

there are so many injuries and so many costs here, and the cost of requiring you to wear a seatbelt is very low, so let's do that. And, like, what are the seatbelt parallels for making the overall system work? That is, I think, an ongoing and kind of thoughtful thing that banks and regulators and intellectuals and economists should think about: what are those minimal, kind of, as it were, shaping-technology additions that

keep the cost of transactions down, and keep the benefits of not having an overly centralized system and of all the free market and broad network working, while navigating the fact that we kind of live in a more volatile space now. Mm-hmm.

On this podcast, we like to focus on what's possible with AI because we know it's the key to the next era of growth. A truth well understood by Stripe, makers of Stripe Billing, the go-to monetization solution for AI companies. Stripe knows that when launching a new product, your revenue model can be just as important as the product itself.

In fact, every single one of the Forbes top 50 AI companies that has a product on the market today uses Stripe to monetize it. See what Stripe can do for your business at stripe.com.

On a lighter note, for any parents out there who are navigating this: I just read the book Mr. Lemoncello's Library with my nine- and seven-year-olds. And a main plot point is a fake Wikipedia post that leads to ruining someone's reputation, and the kids who, like, don't believe it. So anyway, try that out if you're looking to teach your kids about misinformation on the internet.

But actually, moving on to another thing that people think of as childlike, play: one of the fun things about our conversation with Demis Hassabis last week was we talked about games. And it was so clear that Demis grew up playing chess, and games were so important to him, both in terms of his scientific research, but also in the progression of AI, whether it was AlphaGo or IBM's famous Deep Blue

chess competition. And so when you think about the future, as AI is more enmeshed in our daily lives, will that give humans the opportunity to play more? Are we going to be playing with AI? Are we going to be interacting with it solo, or with teams, as a game? How do you see that connection between games and sort of our AI future?

There's a fun book, which, you know, Demis also knows, Homo Ludens, which is like: we're not just sapiens, we're game players. Obviously, you know, I have this version of Homo Techne, because I think part of games is technologies, and the technologies that enable different kinds of gameplay as part of it. But games is a way we think. And as you know, I tend to approach, like, most of my strategic thinking through the lens of games.

So it's like, with a startup, what's your theory of the game? With a project, what's your theory of the game? With creating a book, Superagency, what's your theory of the game? And so, because game playing brings tactics and strategies and transformation, like large language transformers, together, and also has a notion of increasing learning and competence: how well are you playing the game? What are the conceptual tools you're bringing to it?

Et cetera. So games is a way that we operate across, you know, kind of, call it, intelligent experience. Like, it's almost like: is species X intelligent? But

how do they play games is actually, in fact, kind of directly correlated to that. It's one of the reasons why we know that other kinds of mammals and other things have intelligence because we see dolphins playing games. We see chimps playing games. We play games with our dogs and we play games with our cats. And kind of that initiating gameplay and everything else is part of how that

tends to operate. We don't just play games solo, we don't just play solitaire, we don't just play games one-on-one. We play games as teams, you know, sports games and everything else. And that's part of how you model how companies go. And when it gets to this kind of super agency future of saying, well, how is it that we're deploying? It's like, well,

when I deploy now in work, and this is kind of the Tobi Lütke memo, it's like, I should deploy with agents, I should deploy with these tools. And by the way, we as teams should deploy with these tools. We as companies should deploy with these tools. We as individual scientists, as groups of scientists, should deploy with these tools. And that's kind of the pattern that we're on. And that's why the model of games is a good way for us to think about it.

But it's also a good way for thinking about how do we construct these devices, and also how do we interact with them? Part of the original, the very first genius moment that Demis and Shane and Mustafa brought to scalable AI was realizing: here is a way you can apply scalable compute and learning systems to creating amazing cognitive capabilities. As opposed to us programming the AI,

the AI learns, and it learns at scale, because you can use self-play as a way of doing it. And seeing this genius moment by them was part of what got me back into AI from my, you know, kind of my undergraduate days, where I had concluded that the mindsets of programming AI would actually not work because

I hadn't gotten to: well, what are the scalable-compute learning systems? Because back then, by the way, a single computer was super expensive, let alone creating a server farm of 100,000 working in concert and all the rest. By the way, the computers back then were less powerful than the smartphone that's in your pocket.

Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Katie Sanders, Edie Allard, Sarah Schleed, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor.

Special thanks to Surya Yalamanchili, Sayida Sepieva, Thanasi Dilos, Ian Alice, Greg Viato, Parth Patil, and Ben Rellis.