Medium prioritizes human curation to ensure the platform values thoughtful content over popularity signals. Algorithms are still used for matchmaking, but significant qualitative signals from human curators have been added to enhance the recommendation system, focusing on the quality of ideas rather than just engagement metrics.
Medium shifted from an ad-driven model to a subscription-based model, allowing the platform to focus on delivering content readers are happy to pay for. This change incentivizes quality over engagement, as the goal is to provide value that justifies the subscription cost.
Medium combines human curation with algorithms by using human curators to provide qualitative signals that algorithms cannot detect. While algorithms handle matchmaking, human curators ensure the content recommended is thoughtful and valuable, creating a balanced and informed reading experience.
Medium's policy prohibits non-humans from publishing, but AI-generated content is not outright banned. Instead, human curators filter out AI-generated content that lacks value, ensuring only thoughtful, human-created content is recommended to readers. Writers are expected to self-report and self-police their use of AI tools.
Medium takes plagiarism seriously and removes plagiarized content when detected. Accounts found plagiarizing are suspended or banned. While Medium relies on copyright holders to report violations, it also controls distribution and payouts to authors, allowing it to revoke access for plagiarizers.
Tony Stubblebine believes the internet is broken due to its reliance on ad-driven business models that prioritize attention-grabbing content over value. Medium aims to fix this by proving that subscription-based models focused on delivering value can be successful and sustainable, offering an alternative to the dominant ad-driven internet.
Medium's subscription model is more profitable than ad-driven models. While ad-driven businesses often earn less than $10 per thousand views (RPM), Medium's RPMs are $20 and heading toward $30, demonstrating that focusing on delivering value over attention can be a better business strategy.
Medium's human editors are largely self-trained through experience. Many editors come from the community and learn by reading and curating content earnestly. The platform relies on their ability to spot quality and authenticity, which emerges naturally from their engagement with the writing.
Medium ensures quality by relying on subject matter experts who curate content based on deep knowledge of their fields. These curators identify the best versions of topics and guide readers toward thoughtful, well-researched content, ensuring the platform maintains high standards.
Tony Stubblebine advises entrepreneurs that success often takes time and persistence. He emphasizes that it's okay to be a late bloomer, as many entrepreneurs achieve their biggest successes in their 40s, 50s, or 60s. He encourages focusing on gradual growth and impact rather than rushing for quick wins.
We really want human curation because that's better than algorithms. We still have to do matchmaking, right? Like an algorithm is really good for that. And so we haven't replaced the algorithm. What we've done is add really significant qualitative signals into the algorithm that wouldn't otherwise exist. Good morning, good afternoon, or good evening, depending on where you're listening.
Welcome back to AI and the Future of Work, Episode 315. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service. Our community is growing thanks to all of you, our great listeners.
As you probably know, we recently launched a newsletter. Go check it out. We'll share a link to the beehiiv newsletter in the show notes. If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode, like this one from my new buddy Guillermo in Seattle, Washington, who is in marketing at a little company called Amazon.
And Guillermo listens while making dinner. His favorite episode is that great one with Allison Baum Gates, legendary venture capitalist and author from SemperVirens, about where she's investing in AI and the secrets to breaking into a career in venture capital. We will link to that great episode in the show notes. We learn from AI thought leaders weekly on the show. Of course, the added bonus: you get one AI fun fact each week. Today's fun fact.
Christina Pazzanese writes in the Harvard Gazette that ethical concerns mount as AI takes a bigger decision-making role in more industries. She writes that many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide us at scale. Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice.
But we're discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing, replicate and embed the biases that already exist in our society.
At present, three major areas of ethical concern for society exist: privacy and surveillance, bias and discrimination, and, perhaps the deepest and most difficult philosophical question, the role of human judgment. Given the power and expected ubiquity of AI, some argue that its use should be tightly regulated, but there's little consensus about how to regulate it as a society.
Frequent listeners know that's a topic we unpack frequently on this show. I agree with many points made in this article. We'll continue to unpack the topic of ethical AI and responsible use of AI in upcoming episodes. In fact, maybe we'll delve into that in today's conversation. And of course, we'll link to that article in today's show notes. Now shifting to today's conversation.
Tony Stubblebine is the CEO of the wildly popular and very important publishing platform Medium, which recently surpassed the million-subscriber mark and was launched in 2012 by Ev Williams, who also co-founded Twitter.
Tony's a serial entrepreneur who also publishes Better Humans on Medium and previously founded the habit-coaching companies Coach.me and Lift Worldwide. Tony's a vocal advocate for writers everywhere, and he speaks frequently about human creativity and why the bots won't take over. Tony received his BA in computer science from Grinnell College. Go Pioneers!
And without further ado, Coach Tony, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background. Thank you. I'm glad to be here, Dan. And certainly Grinnell College will be happy to get that shout-out. Very small liberal arts college.
When I took over Medium about two years ago, a little bit more than two years ago, my whole career got condensed just into the phrase technology veteran. And when I saw it, I was like, oh, that's actually pretty dead on. I've been almost every role in a tech company, but mostly smaller companies. I think the largest I ever worked for is maybe 150 employees. And Medium is only around 75 employees.
I've been a programmer. I've been a CEO, an entrepreneur, a manager, a writer, a publisher, an editor. And as an entrepreneur, I've done literally every role in the company. I know what QuickBooks sounds like when you're doing bookkeeping. There's a beep that becomes almost addictive. You want to get through
each entry to keep that pace of beeps going. So at this point, I actually really like that as the example. But it's kind of funny for you to bring up Grinnell College, because I think
that the thing that made it work for me is that I knew I wanted to go into computer science, but I knew I wanted a broader education beyond that. And so I got that there, where the humanities is really what taught me how to write. And then fast forward 25 years, here I am running a publishing platform. And I don't think that would have been possible if I had gone a more focused route.
Medium is a household name for so many of us. I guess there are around a million of us who are readers and publishers on the platform. But let's say there are a few out there listening who aren't familiar. Describe what Medium is, and then also the vision, the mission of Medium.
The voice that I think is most important on the internet and that I think kind of got lost in the modern hustle economy is the amateur writer who has professional or deep personal experience with something. This is what got me excited about the internet in the
you know, the 2005 era where there's this trend of user-generated content. And at the time, I was working for a professional media company, O'Reilly Media, which is writing programming books. Most of the internet at that time was built by people who were reading these O'Reilly books. And so I knew what professional...
information looked like, and we saw all this amateur information popping up that was covering all the surface area that we would never cover, and doing kind of a better job than we ever would. And so at that moment I had a real respect for amateur writing, because it's not just the quality of the words that matter; it's the quality of the experience underneath it.
You fast forward to where we are right now, where the incentives of the internet really require, in order to be heard, you have to be good at
optimizing for all the algorithms around us. You have to write for Google's search algorithm; you have to write for virality. And that was already the status quo before we got to AI-generated content. And now there's a bunch of AI-written content flooding the internet on top of all of that. And I think that combination of things made it really hard to
find the people who are too busy. By definition, the people I want to hear from are too busy living to bother learning all of those hustles. And so when I took over Medium two years ago, basically the major and primary change we made was to redo our recommendation systems
to put human curators back in the loop, so that they could spot people based on the qualities of what they were saying rather than the kind of popularity signals that most recommendation algorithms tend to be based on. So there's the cliche: if you're not paying, you're the product. That's true. As we all know, in social media,
the, and I'm going to coin this term, "short-formification" of content on social media platforms is to me kind of the anti-Medium. It's where you go to find a fraction of the truth, or a fraction of the full thought, and Medium is a place where you go to
hear fully formed thoughts and have more thoughtful dialogue. This wasn't intended to be an ad for Medium, but maybe talk about how you think about social media platforms versus what you're building at Medium. I think you hit it there, that there's an incentive problem. And it actually came down to business models.
I think on a social media platform, almost any tweak you make ends up changing the whole dynamics and system of it. And the major tweak we made was to think: what would this look like if we had an incentive system where you were the customer rather than the product to sell? So in an ad-driven business model, you are the product; your attention is being sold to the advertisers.
And we knew as writers that that changes the incentives around the quality of the writing. Almost counterintuitively, it's bad if the writing is too engaging, because then you won't notice the ads. So you want it to be just engaging enough. And that's where you get so much content that is so emotional, that elicits all of these emotions, because it gets you to click through. We built the whole thing around a paid subscription instead. So we have a million paying subscribers out of somewhere on the order of 100 million people who visit in a month.
And that allowed us to focus on this different goal, which is how can I put a piece of writing in front of you that you will be happy to have paid to read? Just a very different bar to shoot for. And then essentially everything else we did falls from that one decision. So...
The human creation aspect of Medium. Anyone who's experienced Medium knows it's such a core part of the experience. We all know that the algorithmically generated feeds in the more traditional social media platforms suck, because they're designed to manipulate.
It's a steady drip, an IV narcotic. It gets you to just stay focused and scroll to the next thing. So that sucks. But I've got to ask the corollary, which is: man, Tony, you and your team take on a lot of responsibility. When you're the arbiter of what I am interested in, you're making decisions about what I shouldn't see. How do you think about it? How do you balance that responsibility?
You know, it's funny, this is almost like an idea that only exists on social media platforms, that there should be no arbiter. Even though, in practice, the algorithm is then the arbiter, and the algorithm is terrible at it. Coming from media, your starting position is the opposite.
It's like, I'm being paid to be the arbiter. And so, the way I was able to balance this: when I was coming on board, literally my job proposal was started while I was on vacation
in the Caribbean where the point of the vacation was just to read by the beach. And the book I was reading was Neal Stephenson's book, Fall, which I don't think that many people read, but it had this concept of edit streams where depending on how much money you had, you would spend a certain amount of money getting someone to edit the internet for you.
And I really like that idea and what that led to for Medium. It's like, wait a second, we can provide a sane, thoughtful, very informed default curation experience and
And then we can give people ways to edit it by kind of opting in and out. And so that's my view of the design is that we already come from the background of being very comfortable editors and curators. But we also come from the internet where freedom is so important. And you can have both. But let's at least start somewhere sane, which I think we've done a good job of.
So when I read the Washington Post or New York Times or whatever, I know there's a bunch of stuff coming from the Associated Press, and it's labeled, and I know that there are some writers that they hire. So I choose to subscribe to that publication because I kind of know what I'm going to get. The vast amount of content that the Medium editors have to sort through
arguably increases the importance of that editorial role. And so maybe, I mean, I don't have to go too far afield to help the audience with examples, but if I'm interested in global warming, or if I'm interested in job automation from AI, or if I'm interested in the hurricane, somebody's making decisions that subtly impact my perception of those topics.
And there's no way around some amount of human subjectivity seeping in. How do you coach the curators to navigate that super tricky process? I think when people bring that concern up, I always want to get them into...
more specifics of what is actually being decided between two views. The number one decision point between two sets of views is almost always that one of the views is much more thoughtfully done and well-researched. So it's like you've got two views on a normal social media platform that are competing over
how much attention they generate. But then on Medium, you have two views where one of them is like your gut reaction based on something your kindergarten teacher told you, and the other is backed by a huge amount of science and research.
And so you're going to get the other view. And it's not us subtly changing something on a whim. It's like, no, we just really value thoughtful, deep takes. And my hope is that more of the internet would work that way, that an opposing view would have to compete on the same terms.
So I think in practice, it's like, why wouldn't you choose this one view that has so much more depth behind it? And then I think the other one, which is kind of hidden to people, is that generally our curation process starts with a subject matter expert. And I mean someone
who is an expert; there are a lot of different ways you could define that, but the main way is that they've seen most of what's been written, at least on Medium, about this topic. And so they have some sense of: this is the best version of this. Or maybe you don't know,
maybe you're not aware, like you might be looking for A, but actually you should be seeing B. That's something that being well-read can really bring into the mix. And so this isn't like,
you know, someone coming into your kitchen who likes cilantro more than you, right? These are actually really specific forms of taste that, if you got into it, wouldn't worry you at all. You'd understand immediately what a value it is.
So as an example, if I subscribe to Better Humans, your publication, presumably there are adjacent publications that people who subscribe to Better Humans also subscribe to. So the way I see it, my curated feed might be everything from Better Humans, as well as some subset of things that I might like because I like Better Humans.
It can be that way, yes. That's the interesting part of the idea of curation paired with algorithms. But when I first got to Medium, I think I explained this in a way that was kind of confrontational. I'd say, we really want human curation because that's better than algorithms. And I've come to understand the necessity of the subtlety of it, which is,
we still have to do matchmaking, right? Like an algorithm is really good for that. And so we haven't replaced the algorithm. What we've done is add really significant qualitative signals into the algorithm that wouldn't otherwise exist. So if you're
interested in one topic, and another topic is related, is getting strong qualitative signals, and everyone who reads the topic you follow tends to like it, that's a good opportunity for us to suggest it to you. I mean, that's actually where the algorithm is shining, because we're just asking it to play matchmaker. We're not asking it to play quality control.
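As a purely illustrative sketch (not Medium's actual system; the field names and the 50/50 weighting are hypothetical), the split between algorithmic matchmaking and human quality signals might look like this:

```python
# Illustrative only: a toy ranker where the algorithm does matchmaking
# (interest overlap) and humans supply the quality signal (curator score).
# Engagement metrics like clicks and dwell time are deliberately absent.

def score(post, reader_topics):
    # Matchmaking: how much does the post's topic set overlap the reader's?
    topics = set(post["topics"])
    interest_match = len(topics & set(reader_topics)) / max(len(topics), 1)
    # Qualitative signal supplied by a human curator (0.0 to 1.0);
    # defaults to 0 if no curator has looked at the piece.
    curator_boost = post.get("curator_score", 0.0)
    return 0.5 * interest_match + 0.5 * curator_boost

posts = [
    {"id": 1, "topics": ["paleontology"], "curator_score": 0.9},
    {"id": 2, "topics": ["paleontology"], "curator_score": 0.0},
]
ranked = sorted(posts, key=lambda p: score(p, ["paleontology"]), reverse=True)
```

Both posts match the reader's interest equally, so the human signal is what separates them: the curated post ranks first, which is the "matchmaker, not quality control" division of labor described here.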
That's why I asked the question. The age-old debate in algorithmically managed feeds is, if I say I like something, what am I saying? Do I like the author? Do I like the article? Do I like the perspective? Do I like all of it? Maybe I like parts of it. But there's sparse information that we're collecting. And so for the algorithm, in the case where it's advertising-supported, it's very clear. There's only one objective.
What it means for me to like it is feeding the algorithm to show me more stuff that's going to keep me on the feed longer. But with Medium, you have a different motivation. So I was wondering what a like means on Medium when you're trying to recommend other content I might like. We're much less personality-focused. I think with a lot of other media, you're following
a person because you're building a parasocial relationship with them. And on the other side, as a creator, they need that because they need the consistency of your follow. But a lot of the voices we care about write rarely because, as I said, it's someone that's almost too busy living to get this information out.
And that's where all the really deep wisdom is. So an example of how this might play out: say you follow a paleontology publication. The one I've been using recently is Fossils et al, which is run by a paleontologist.
But they're publishing other paleontologists who might just publish twice there in their entire life. And there's that kind of connection: you've indicated you like paleontology, and there's a subject matter expert curating the world of paleontologists who want to write. And so we can reasonably assume
we might be doing you a service to put this paleontologist, who's a stranger to you, in front of you in your feed. One half of that is the standard way: we're just kind of doing an interest-based graph, just like every other recommendation system. And the other half is that we've put a curation signal in, and this paleontologist who runs
the Fossils et al publication is that curator who's helping to say: if you like me, you're also going to like this person. Okay. So, a related topic that's really important for us to discuss. Let's say that paleontologist, a famous paleontologist, chooses to summarize
the latest paleontological, I think that's the right term, work in the field using LLMs because it's a great way to maybe curate a bunch of recent information and
the policy on Medium is that non-humans cannot publish. But my question is, just to get a little more nuanced: how human do I have to be? If a portion of what I publish was written by me and a portion was not, does that violate terms? And if it's just a binary, all non-human or all human, does that scale? I mean, where do we go from here?
Right. I think a lot of times policy ends up being so practical that people miss the point. The point of Medium is to deepen people's understanding of the world. And so the practicalities of that are that we want to draw a person's human wisdom out in a way where that wisdom can be passed on to someone else. I use wisdom more than knowledge because it's not just the facts, it's
how and why and when those facts are important. That sort of life experience that packages the knowledge is the point. And so in this example that you're giving, you could plausibly say, oh, we're still getting a lot of this person's wisdom. I mean, they chose the research, right? That alone is huge. And so that's kind of
the key thing: is this worth reading? That's the question. And a lot of times it can be boiled down to, well, 99% of the time, if it was AI-generated, it's not worth reading. But yeah, if you can find a 1% edge case, then that's the 1% you should read. So to have this kind of idea as a policy
runs into a lot of practicalities. We can't prevent it from being posted, because it's impossible to spot and there's such a wide range of usage, of how much AI assist is in a piece.
But we can put a lot of human curation between what gets posted and what gets read. And that's really what's happening. And so the human curation is sort of like, you think about a jury in America. Theoretically, a jury
can overrule the law, but almost never does, right? And so in practice, AI is just not allowed for distribution on Medium, because in practice the way that people use it generates content that's not worth reading. But if that were to change, I think we could allow quite a bit more flexibility. And the policy today is that the writers self-report and self-police, with the editors and curators really doing the most
to prevent it from spreading to the rest of the community. Because what's important to us is not what exists on Medium, but what gets read on Medium. That's the thing that we think we can control. There's actually a lot of AI-generated content on Medium, but it's just not part of what gets recommended and sent to users, because there's a human filter.
Right, so the most, to use your adjective, thoughtful content should surface. Absolutely. I should know the answer to this, but we'd love to get it from the source: the policy on plagiarism. So within and beyond AI-generated plagiarized works, what is the policy? If we know about it, we take it down. That's a fast track to getting your account suspended and getting banned from the platform. These are kind of standard trust and safety issues.
Plagiarism ends up being a copyright violation, and there's a whole set of laws for reporting them. Sometimes we don't know, right? I think most of the internet does unfortunately, in practice, need to rely on the copyright holder being the reporter of it. But we have
some controls, because we control distribution and we control payouts to authors. So we sometimes have a smaller set of content that we can look at. And if we see someone who's plagiarizing, we can revoke them from either side of that.
A recent guest on the podcast, Chris Caren, is the CEO of a company called Turnitin, which is a tool used by something like 15,000 academic institutions to automate the process of detecting plagiarism. And now, certainly, an important subset of that is LLM-generated content. Do you envision a time when everything could first be filtered through something like that?
I think that we just have never felt comfortable with the false positives in there. It's kind of amazing to watch
a human editor read something and, in reading it, flag the one paragraph in the piece that's been plagiarized. This is actually kind of a superpower that surprised me about editors. They'll read it, and then the voice will change. And they'll be like, this isn't written by this person, right? And that'll put them on edge, and then they'll just skip the piece.
When we were really running a publishing empire on Medium, before I worked here, we would use plagiarism tools. And it just wasn't actually any additional help beyond just being the reader, I think. I'd be curious to talk to the professors who use Turnitin. I think it probably speeds them up and also gives them
proof. An editor on Medium doesn't need proof; they can just say no to a piece. But if you're a professor, you need to be able to give a reason for why you're giving a student an F on the paper, right? So you need that proof in a way that the editors here don't. It's kind of amazing how much human taste makes a lot of these detection challenges just unnecessary.
What kind of training do you provide the human editors? And the reason I ask is because you gave the example of the jury system in the US, but it's not completely parallel, because the jury's kind of sequestered from the plaintiff and the defendant. But in this case, if the writers are aware of what these particular editors have a preference for, there might be ways to manipulate the system. And your editors ultimately have a lot of responsibility.
What's it like to go through Medium training to be a curator? I mean, it's a community. Medium's a community. So the publications that are really the heart of Medium are all volunteer-based. They're spun up from the community without any interaction with us. And that's where I came from. That actually was one of my big qualifications to work at Medium.
And so I had the experience, just personally running a publication, of watching a lot of people who had never considered being editors just learn it through experience. And honestly, I don't think it's a skill; it's an experience. If you care about the writing and you read the pieces earnestly, then
one of the emergent properties is that you come to spot these things. And so I watched that happen in my own publishing group. And then that's what I think has happened for the rest of the publishers. I was missing that: a lot of the content is through publications, which are self-curated. And so Medium is the platform.
And the readers choose which publications to subscribe to, and the publications decide which content to provide. So it's kind of a naturally reinforcing, or self-policing, community. Right. It's probably most similar to Reddit and subreddits, where you're mostly experiencing the platform through these moderated sub-communities. And that is the majority of how Medium works these days.
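That publication-centric structure can be sketched as a minimal data model (a hypothetical illustration, not Medium's actual code): a volunteer editor gates what their publication accepts, and a reader's feed draws only from the publications they subscribe to.

```python
# Hypothetical sketch of the self-curating publication model described above.
from dataclasses import dataclass, field

@dataclass
class Publication:
    name: str
    accepted: list = field(default_factory=list)

    def submit(self, post: str, editor_approves: bool) -> None:
        # The volunteer editor sits between what gets posted and what gets read.
        if editor_approves:
            self.accepted.append(post)

@dataclass
class Reader:
    subscriptions: list = field(default_factory=list)

    def feed(self) -> list:
        # Distribution flows through subscribed publications, not a global firehose.
        return [post for pub in self.subscriptions for post in pub.accepted]

fossils = Publication("Fossils et al")
fossils.submit("New ceratopsian findings", editor_approves=True)
fossils.submit("Unreviewed AI-generated listicle", editor_approves=False)

reader = Reader(subscriptions=[fossils])
```

The rejected submission never reaches any feed, which is the "what gets read, not what exists" distinction made earlier in the conversation.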
I've heard you talk about the vision behind Medium and maybe other companies you've started as kind of fixing the broken internet. I'd love for our listeners to hear what's broken about the internet. It's broken in so many ways. What's Tony's perspective on what's broken?
I mean, Medium can't replace the internet, so I just want to tone it down. It's one of those strong statements to get people's attention. But there have to be alternatives to what is the dominant internet. And I think we've talked about some of those already. If the dominant business model is ads, then the dominant characteristic of what you read is going to be
a motivation to grab your attention. So how can we prove a business model that is focused on delivering value first? So
I actually think we've done a great job of this. And one of the reasons I want to talk about it is that I think other companies could give up the ad-driven model and be pretty successful. So Medium was profitable for the first time in August. Content businesses are often judged on dollars per thousand views. And right now, ad-driven businesses are sometimes lower than $10.
If you're buying it, you call it CPM, but if you're delivering it, you call it RPM. So our RPMs are $20-plus, headed to $30. And so it's actually a better business for us to focus on delivering value over attention. I think that gives me hope for the rest of the internet: everything is a pendulum, and we could swing away from this trend of
massive adoption of attention-based business models. You describe Medium as the Vermont of the internet. Maybe just talk about that analogy. This was like
such a funny comment that showed up on something we wrote. This guy was driving around Vermont, loving it because it's beautiful there. And he realized there are no billboards. And then he looked into it, and it turns out it's state law not to have billboards, because the billboards will distract you from...
from the content, which is the natural beauty of Vermont. And I feel the same way as a writer and as someone who wants to share great writing with other people: the ads get in the way, right?
As an industry, we tried to say we build productivity tools. Technology is for productivity. But then we built interruptions into everything, which is the opposite of productivity. Humans are not multitaskers. And so we shouldn't build tools that require multitasking. We should build tools that allow for focus. And Medium can do that too.
in a small way, with just making focused reading experiences. But I'd like to see that in all the technology that we use. Tony, we're way over time, but this has been too fascinating; I couldn't possibly cut you off. But you're not getting off the hot seat without answering one last important question for me. So if you roll back the clock to the kid studying CS at Grinnell,
and you fast forward, and you're the CEO of a very impactful publishing platform with global reach. What's different about Tony, CEO of Medium today, versus the kid studying computer science?
I just didn't have any idea this was possible. I feel there's a point I tried to get across to other entrepreneurs who are often in a rush, right? It's like Mark Zuckerberg was a billionaire in his 20s, whatever, right? And I just think the more common path is that you have to grind for a while until you start getting these bigger opportunities.
And my grind was almost like climbing Maslow's hierarchy of needs. I graduated college just wanting to make as much money as I could so that I could be comfortable. And then I got a job and I had that, because I'm in tech and made good money early, and was like, what's fulfilling about this? And so I just gradually worked my way up to wanting to have more impact in the world. But, you know, at 22,
I had none of those ambitions. I had very, very practical and self-centered ambitions. I think it's okay to be a late bloomer. That's a message I try to get out to people, especially entrepreneurs who just always feel like they're behind. Even though the research says you're best in your 40s and 50s and 60s. So there's some value to, as one of our investors says, taking 10 years to be an overnight success. Yeah.
I'm not saying that this is an overnight success, but it's the biggest success I've ever been a part of. And I know how many decades it took to get here. Sage advice. Tony, this has been such a pleasure. Thank you very much for coming and hanging out. Thank you, Dan.
And where can the audience learn more about you and the good work that your team's doing? Oh, you got to follow me on Medium. I'm Coach Tony on Medium. I don't write as much as I used to because I'm so busy writing internal business documents. But that's where I do my best writing. And I think you can find out a lot about at least my past life where I was deep into professional and personal development. And I have, I think, 700 posts on Medium on that topic alone.
Including one popular one about the iPhone hacks, right? Yes. This is a deep dive on how to make your phone a tool rather than a distraction device. Brilliant. Well, that's all the time we have for this week on AI and the Future of Work. Thanks again to the great Coach Tony Stubblebine for hanging out. And of course, we're back next week with another fascinating guest.