
How DeepSeek Showed That Silicon Valley Is Washed

2025/2/3

Better Offline

People
Ed Zitron
A podcast host and creator focused on the tech industry's influence and manipulation.
Topics
Ed Zitron: I have long pointed out that the large language models run by companies like OpenAI, Anthropic, Google, and Meta are unprofitable and unsustainable. Their transformer-based architecture has peaked, they are running out of training data, and the actual capabilities of these models peaked as far back as March 2024. OpenAI lost $5 billion last year, and Anthropic lost nearly $3 billion in 2024, underscoring the enormous operating costs of these models. I had wrongly assumed these companies couldn't make their models more efficient; in reality, they may simply have been unwilling to.

A little-known Chinese company, DeepSeek, released a product roughly equivalent to OpenAI's latest reasoning models that cost far less to train and run, completely upending the conventional understanding of generative AI. Training large language models is extremely expensive: GPT-4o cost $100 million in mid-2024, and future models may cost as much as $1 billion or more. The industry broadly believed that large language models had to be huge and expensive to run, because only that would unlock new capabilities, and that as chip prices fell, these powerful models would eventually become cheaper. Building bigger, more powerful models, however, required continually buying more powerful GPUs, leading to enormous waste.

The American tech industry's investment in large language models is a "capitalist death cult" built on plagiarism, hubris, and assumptions about future returns. The models from large language model companies (including OpenAI and Anthropic) are extremely inefficient, and I had wrongly assumed they had already done their best to improve efficiency. DeepSeek's arrival exposes the American tech industry's hubris and lack of innovative spirit. American tech companies lack innovation: they all do the same things, charge similar prices, and innovate in the same direction. None of Google, Microsoft, OpenAI, Meta, Amazon, or Oracle thought to, or was capable of, creating something like DeepSeek. DeepSeek's success came not from any especially novel technology, but from trying different approaches, such as reducing memory usage and using synthetic data.

Because model development and inference costs were so astronomical, American tech companies never imagined anyone would try to usurp their position, even though China's strategic focus on artificial intelligence was no secret. America's most powerful tech companies blindly poured vast sums into giant data centers and GPUs without considering whether an alternative existed. I believe these American companies could have done what DeepSeek did, but they either chose not to or were too myopic. The American tech industry's predicament isn't merely competition from China; it's the industry's own laziness, complacency, and irresponsibility. Companies like OpenAI and Anthropic are emblematic of Silicon Valley, more focused on marketing than on solving real problems. DeepSeek's technology isn't especially esoteric; Silicon Valley companies should have thought of it and done it first. Silicon Valley and the rest of American tech care only about growth at any cost, ignoring cost savings and efficiency gains.

That Silicon Valley companies never considered DeepSeek's techniques shows their lack of innovative spirit and the absence of real competition between them. If generative AI truly mattered to these companies, they would have actively tried what DeepSeek did. DeepSeek used some unconventional methods to improve efficiency, including exploiting distinct parts of CPUs and GPUs to create a digital processing unit. OpenAI and Anthropic have more than enough money and talent to create an efficient model, but they care more about growth and building ever-larger data centers. OpenAI's Operator agent is a barely functional product, reflecting the company's lack of innovation. Casey Newton's assessment of DeepSeek is overly optimistic and ignores its real significance. OpenAI and Anthropic are trying to raise more money by launching new products like agents, which papers over the fact that their products lack practical applications.

OpenAI can't simply bolt DeepSeek's techniques onto its models; it isn't technically feasible, and doing so would damage its image. DeepSeek's success has made investors doubt OpenAI's future. Even if OpenAI copied DeepSeek's techniques, its profitability would remain uncertain. Even if OpenAI built a smaller, more efficient model, it would struggle to compete with DeepSeek, which has already beaten it to market. OpenAI has lost its edge in large language models, and Microsoft has begun offering DeepSeek's models through its cloud services. DeepSeek has commoditized the large language model; anyone can build their own, which will affect companies like Nvidia. For cloud providers, it's now hard to keep defending the GPU market, since it's been proven that similar models can be built with older hardware.
The success of the Chinese company DeepSeek shows that China can not only compete with American AI companies, but do so efficiently. Blaming DeepSeek's success on China is wrong; DeepSeek's open-source nature means anyone can use it. While DeepSeek's models are cheaper, their profitability remains unclear. Large language model companies don't care about the proliferation of generative AI; they care about their own monopoly position. DeepSeek's arrival has broken these companies' monopoly on the market. Before DeepSeek, creating a competitive large language model required vast amounts of capital and a partnership with Microsoft, Google, or Amazon. Companies like OpenAI and Anthropic don't care about the price and efficiency of their products; they care about maintaining their monopoly. Large language model companies have created a "rot economy," sustaining their monopoly through constant growth.

Large language model companies maintain their monopoly by building giant data centers and giant language models. OpenAI and Anthropic aren't really focused on product development and commercialization; they're focused on hype and fundraising. Microsoft tried to force Copilot into Microsoft 365, but users aren't buying it. Most of OpenAI's revenue comes from ChatGPT subscriptions, not from selling access to its models. OpenAI's and Anthropic's products lack utility and reliability, and their new products cost too much to run. OpenAI and Anthropic need ever-larger models to sustain their story that only they can build the future. DeepSeek proves it's possible to build a model that competes with OpenAI's leading models at far lower cost. DeepSeek will force OpenAI and Anthropic to cut the prices of their models and subscriptions. The real threat DeepSeek v3 poses to OpenAI and Anthropic is its competition with GPT-4o. Alibaba has created a model that performs better than DeepSeek's, which will further intensify price competition.

DeepSeek's arrival has broken the large language model monopoly, and the reputations of Sam Altman and Dario Amodei have suffered. People used to believe large language models had to be expensive; DeepSeek proved that wrong. DeepSeek used synthetic data for training, challenging the large language model companies' dependence on training data. The AI industry is a con whose results are negligible. OpenAI lacks innovation, and its future prospects are worrying. The AI industry's money, energy, and talent have been wasted, and the market and the media are to blame for their dereliction. OpenAI's future is concerning and its ability to keep raising funds is in question. The AI bubble is bursting, and companies like OpenAI have lost their competitive edge. If this money had gone elsewhere, it would have produced far greater benefit. Silicon Valley needs to change its model of development or face a serious crisis.


Shownotes Transcript


- $1.4 billion in NFL quarterback contracts. The untold stories behind the biggest deals in football history. I'm AJ Stevens, Vice President of Client Strategy at Athletes First. Introducing the Athletes First Family Podcast, the quarterback series.

My co-host Brian Murphy, Athletes First CEO, and I are sitting down with the agents who have negotiated contracts for Justin Herbert, Deshaun Watson, Dak Prescott, Tua Tagovailoa, and Jordan Love. Listen to Athletes First Family Podcast on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. You are cordially invited to...

Welcome to the Party with Tisha Allen is an iHeart Women's Sports production in partnership with Deep Blue Sports and Entertainment.

Listen to Welcome to the Party. That's P-A-R-T-E-E on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. The OGs of uncensored motherhood are back and badder than ever. I'm Erica. And I'm Mila. And we're the hosts of the Good Moms Bad Choices podcast, brought to you by the Black Effect Podcast Network every Wednesday. Yeah, we're moms, but not your mommy. Historically, men talk too

much. And women have quietly listened. And all that stops here. If you like witty women, then this is your tribe. Listen to the Good Moms Bad Choices podcast every Wednesday on the Black Effect Podcast Network, the iHeartRadio app, Apple Podcasts, or wherever you go to find your podcast.

We want to speak out and we want this to stop. Wow, very powerful. I'm Ellie Flynn, an investigative journalist, and this is my journey deep into the adult entertainment industry. I really wanted to be a Playboy bunny as an adult. He was like, I'll take you to the top, I'll make you a star. To expose an alleged predator and the rotten industry he works in. It's honestly so much worse than I had anticipated. We're an army in comparison to him.

From Novel, listen to The Bunny Trap on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. Chosen by God, perfected by science. I'm Ed Zitron. This is Better Offline.

And as I've written about many, many, many, many times and argued on this very podcast just as often, the large language models run by companies like OpenAI, Anthropic, Google, and Meta are unprofitable and unsustainable. And the transformer-based architecture they run on has peaked. They're running out of training data, and the actual capabilities of these models were peaking as far back as March 2024.

Nevertheless, I'd assumed, incorrectly by the way, that there would be no way to make them more efficient because I had assumed, also incorrectly, that the hyperscalers, along with OpenAI and Anthropic, would be constantly looking for ways to bring down the ruinous cost of their services. After all, OpenAI lost $5 billion last year, and that's after $3.7 billion in revenue too, and Anthropic lost just under $3 billion in 2024.

And in the last episode I told you a little bit about DeepSeek by the way. In this one we're going to get into, well, how fucked things might actually be. But what I didn't wager was that potentially nobody was actually trying to make these models more efficient. My mistake was, if you can believe this, being too generous to the AI companies, assuming that they didn't pursue efficiency because they couldn't, and not because they couldn't be bothered.

But then, as I just hinted at, a little-known Chinese company released a product that was broadly equivalent to OpenAI's latest reasoning models, but cost a fraction as much to train and run. And now the conventional understanding of how generative AI should work has been fundamentally upended. You see, the pre-DeepSeek status quo was one where several truths, and I say that in the loosest sense of the word, allowed the party to keep going.

So the first one is that these models were incredibly expensive to train. GPT-4o cost $100 million in the middle of 2024, and future models, according to Dario Amodei of Anthropic, might cost as much as $1 billion or more to train. And training future models, by the way, as a result of this would necessitate spending billions of dollars on both data centers and the GPUs necessary to keep training these bigger, huger models.

Now, another thing was these models had to be large because making them large, pumping them full of training data and throwing masses of compute at them would unlock new features such as an AI that helps us accomplish much more than we ever could without AI, which is...

Sam Altman, quote. And in the words of Sam again, you'd be able to get a personal AI team full of virtual experts in different areas working together to create almost anything we can imagine. I don't know, mate. You ever try creating a functional fucking business? Dipshit. Anyway.

Here's another one. These models were incredibly expensive to run. It has to be this way, but it was all worth it because making these models powerful was way more important than making them efficient because once the price of silicon comes down, and this is a refrain I've heard from multiple different people as a defense of the costs of generative AI, we would then have these powerful models that were cheaper somehow because of silicon.

Now you may think, Ed, that sounds like a... not a real argument. That just sounds like something someone said once. And it is. It is something someone said once. Anyone who knows anything about chips knows how hard it is to make a new chip. And remember one of the CES episodes when I asked Max Cherney about this? You should go back and listen to him. Anyway...

Another thing, another part of this, was that as a result of this need to make bigger, huger, even bigger models, the most powerful ones, these big, beautiful models, we love them, we look at the big, beautiful models, we would of course need to keep buying bigger, more powerful GPUs, which would continue the American excellence of burning a bunch of money on nothing.

And by following this roadmap, everybody wins. The hyperscalers get the justification they needed to create more sprawling data centers and spend massive amounts of money. And OpenAI and their ilk get to continue building powerful models. And also NVIDIA continues to make money selling GPUs. Remember I've said in the past that things were kind of a death cult? This is what this is. It's a capitalist death cult. It runs on plagiarism and hubris and the assumption that being... that at some point all of this would turn into something meaningful.

Now, I've argued for a while that the latter part of the plan was insane, that there was no profitability for these large language models, as I believe there simply wasn't a way to make these models more efficient. In a way, I was right. The current models developed by both the hyperscalers, so Gemini from Google, Llama from Meta, and so on and so forth, and the multi-billion dollar startups, if you can even fucking call them that, OpenAI and Anthropic, they're horribly inefficient. And I just made the mistake of assuming that they tried to make them more efficient, and they couldn't.

But what we're witnessing right now

isn't some sort of weird China situation. This isn't China being Chinese and doing scary Chinese things to us. No, what we're witnessing is the American tech industry's greatest act of hubris. It's a monument to the barely conscious stewards of so-called innovation who are incapable of breaking the kayfabe of the fake competition where everybody makes the same products, charges about the same amount of money, and mostly innovates in the same direction.

Somehow nobody, not Google, not Microsoft, not OpenAI, not Meta, not Amazon, not Oracle thought to try or was capable of creating something like DeepSeek.

Which doesn't mean that DeepSeek's team is particularly remarkable or found anything super new, but that for all the talent, trillions of dollars of market capitalization, and supposed expertise in American tech oligarchs, not one bright spark thought to try things that DeepSeek had tried, which appeared to be, what if we didn't use as much memory, and what if we tried synthetic data?

And because the cost of model development and inference was so astronomical in the case of American models, they never assumed that anyone would try to usurp their position. This is especially bad considering that China's focus on AI as a strategic part of its industrial priority was really no secret, even if the ways it supported domestic companies kind of is. In the same way that the automotive industry was blindsided by China's EV manufacturers, the same is happening with AI.

Fat, happy and lazy, and most of all oblivious, America's most powerful tech companies sat back and built bigger, messier models powered by sprawling data centers and billions of dollars of GPUs from NVIDIA, a bacchanalia of spending that strains our energy grid and depletes our fucking water reserves, without, it appears, much consideration of whether an alternative was possible.

I refuse to believe that none of these companies could have done what DeepSeek has done, which means that they either chose not to, or they were so utterly myopic, so excited to burn so much money on so many parts of burning the earth, boiling lakes, and stealing from people in pursuit of further growth that they didn't think to try.

This isn't about China. It's so much fucking easier if we let it be about China. No, no, no, no. It's about how the American tech industry is incurious, lazy, entitled, directionless, and irresponsible. OpenAI and Anthropic are the antithesis of Silicon Valley.

They're incumbents, public companies wearing startup suits, unwilling to take on real challenges, more focused on optics and marketing than they are on solving actual fucking problems, even the problems that they themselves created with their large language models. By making this about China, we ignore the root of the problem: that the American tech industry is no longer interested in making good software that actually helps people.

DeepSeek shouldn't be scary to Silicon Valley, because Silicon Valley should have come up with this first. It uses less memory, fewer resources, and uses several kind of quirky workarounds to adapt to the limited compute resources available, all things that you'd previously associate with Silicon Valley.

Except now Silicon Valley's only interest, like the rest of the American tech industry, is the rot economy. It only cares about growing, growing at all costs, even if said costs were really things you could mitigate, or if the costs themselves were self-defeating.

To be clear, if the alternative is that all of these companies simply did not come up with this idea, that in and of itself is a damning indictment of the valley. Was nobody thinking of this stuff? If they were, why didn't Sam Altman or Dario Amodei or Satya Nadella or anyone else put serious resources into efficiency? Was it because there was no reason to? Was it because there was, if we're honest, no real competition between any of these companies? Did anybody try anything other than throwing as much compute and training data at the model as possible?

It's all just so cynical and antithetical to innovation itself. Surely if any of this shit mattered, if generative AI truly was valid and viable in the eyes of these companies, they would have actively worked to do something like DeepSeek has done.

Don't get me wrong. It appears DeepSeek employed all sorts of weird tricks to make this work, including taking advantage of distinct parts of both CPUs and GPUs to create something called a digital processing unit, essentially redefining how data is communicated within the servers running training and inference. And just as a reminder, inference is the thing where when you type something in, it infers the meaning. Just could have specified that earlier.

DeepSeek had to do things that a company with unrestrained access to capital and equipment wouldn't have to do, and it often used impractical and quirky methods to do so. Nevertheless, OpenAI and Anthropic both have enough money and hiring power to have tried and succeeded in creating a model this efficient and capable of running on older GPUs. Except what they wanted, what they actually wanted, was more goddamn growth and the chance to build even bigger data centers with even more compute that they would own.

OpenAI is as much a lazy, cumbersome incumbent as Google or Microsoft, and it's about as innovative too. The launch of its Operator agent was a joke. A barely functional product that's allegedly meant to control your computer and take distinct actions like ordering stuff off of Instacart. You know, things you could do with your hands.

But just to be clear, it doesn't work. You'll never guess who was really into it, though. His name is Casey Newton. He writes a blog called Platformer. And he's a man so gratingly credulous that it makes me want to fucking scream. And of course, he wrote that Operator, when he used it, was a compelling demonstration that represented an extraordinary technological achievement, one that also somehow was significantly slower, more frustrating, and more expensive than simply doing any of these tasks yourself.

Casey, of course, not to worry, had some extra thoughts about DeepSeek. That there were reasons to be worried, but that American AI labs were still in the lead. Saying that DeepSeek was only optimizing technology that OpenAI and others had invented first, before saying that it was only last week that OpenAI made available to Pro plan users a computer that can use itself. This statement is bordering on factually incorrect. It is fucking insane that Casey is still doing this. I do not want to... I don't know what to do with this guy. This guy...

Just like... That's a fucking lie. This... The computer can't use itself. This shit can't... Just to explain what Operator is, you're meant to type in something like, hey, order me some milk. Order me some milk off of Instacart. And when Casey tried this, it tried to find milk in Des Moines, Iowa. Just fucking insane. Just...

This is how these companies have got big. It's people like Casey. It's people like Casey who are just like, anything they show, they're like, God damn, that's the most impressive thing I've seen in my life. It's a fucking farce. But let's be frank. These companies aren't building shit. OpenAI and Anthropic are both limply throwing around the idea that agents are possible in an attempt to raise more money to burn. And after the launch of DeepSeek, I have to wonder what any investor thinks they're investing in, other than certain ones I'll get into in a bit.

And to be clear, an agent is meant to be this autonomous thing, which you say, hey, go and do this action. Go and sell things for me and go and email people for me. They don't really work. There are some that kind of do that are really expensive, but large language models are not built for this kind of thing. But let's be honest. DeepSeek, as I said in the last episode, has built a more efficient reasoning model. So, like OpenAI's o1. And...

You'd think, well, okay, couldn't OpenAI simply add on DeepSeek to its models? Not really. First of all, with the way these models work, you can't just plug it in. It's just not how it works. They could train a new model using DeepSeek's techniques, but the optics of that aren't brilliant. It would be a concession. It would be an admission that OpenAI slipped and needs to catch up. And not to its main rival, pretend rival, I mean, Anthropic, or to another big tech firm, but to an outgrowth of a hedge fund in China.

A company that few had heard of before December. And, like, really, not that many people had heard of before January 25th. It's very embarrassing. And this, in turn, I think will make any serious investor think twice about writing the company a blank check. They're going to have to dip into some very bothersome pockets.

And as I've said ad nauseam, this is potentially fatal, as OpenAI needs to continually raise money, more money than any startup has ever raised in the history of anything. And it really doesn't have a path to breaking even, even if it copies what DeepSeek did. Because right now, though DeepSeek is 30 times cheaper than o1, we still don't know if that's profitable. We don't know. We haven't found out.
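To make the "30 times cheaper" figure concrete: it falls straight out of per-token API pricing. The prices below are illustrative stand-ins to show the shape of the calculation, not official rate cards.

```python
# Back-of-the-envelope for the "30x cheaper" claim, using illustrative
# per-million-output-token prices (stand-ins, not official pricing).
o1_price_per_m_tokens = 60.0       # assumed o1-style price, $/1M output tokens
deepseek_price_per_m_tokens = 2.0  # assumed DeepSeek-style price, $/1M output tokens

# The multiple people quote is just the ratio of the two prices.
cost_multiple = o1_price_per_m_tokens / deepseek_price_per_m_tokens
print(cost_multiple)  # 30.0
```

Note that this ratio says nothing about whether either price covers the underlying compute cost, which is exactly the open question about profitability.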

And if OpenAI wants to do its own cheaper, more efficient model, it's likely to have to create it from scratch, like I said. And while it could use distillation to make a smaller model out of its own models, by the way, DeepSeek taught itself using OpenAI's outputs, like I mentioned in the last episode, so that's kind of what DeepSeek already did. It already has been fed OpenAI bullshit.
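The distillation idea mentioned above can be sketched in miniature: sample a "teacher" model's outputs for a corpus of prompts, then train a "student" to imitate them. This toy is purely illustrative; the `teacher` function and the lookup-table `Student` are stand-ins for an expensive large model and a fine-tuned small one, not anything resembling a real LLM pipeline.

```python
# Toy sketch of output distillation: a small "student" learns to imitate a
# large "teacher" by training on the teacher's generated outputs.

def teacher(prompt: str) -> str:
    # Stand-in for an expensive large model's completion.
    return prompt.upper() + "!"

def build_distillation_set(prompts):
    # Step 1: sample the teacher's outputs for a corpus of prompts.
    return [(p, teacher(p)) for p in prompts]

class Student:
    # Stand-in for a small model; here it simply memorizes the pairs,
    # where a real student would be fine-tuned on them.
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        # Step 2: fit the student to the teacher's (prompt, completion) pairs.
        for prompt, completion in pairs:
            self.memory[prompt] = completion

    def generate(self, prompt: str) -> str:
        return self.memory.get(prompt, "")

pairs = build_distillation_set(["hello", "deepseek"])
student = Student()
student.train(pairs)
print(student.generate("hello"))  # HELLO!
```

The point of the sketch is the data flow: the student never sees the teacher's weights, only its outputs, which is why training on a rival's API responses is even possible.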

Even with OpenAI's much larger team and more powerful hardware, it's hard to see how creating a smaller, more efficient, and almost as powerful version of O1 benefits them in any way. Because said version has, well, already been beaten to market by DeepSeek, and thanks to DeepSeek, will almost certainly have a great deal of competition for a product that to this day lacks any killer apps anyway.

Hey, you guys, I'm Catherine Legg. I'm a racing driver who's literally driven everything with four wheels across the planet. And I've got a new podcast. It's called Throttle Therapy. This season, I'm gearing up to make history, competing in some of the world's most notorious racing events, starting at the Indy 500.

Join me as I travel from racetrack to racetrack in my quest to continue a memorable career in racing. I'm also going to bring you inside stories with legends of sports, new faces from the next generation of auto racing, and conversations with the people who've supported me throughout my career.

We'll be getting into everything from karting to NASCAR, even Formula One. Whether you dream about being a pro athlete or an astronaut, we're talking about what it takes to make it. Listen to Throttle Therapy with Catherine Legg, an iHeart Women's Sports production in partnership with Deep Blue Sports and Entertainment. You can find us on the iHeart Radio app, Apple Podcasts, or wherever you get your podcasts. Presented by Elf Beauty, founding partner of iHeart Women's Sports.

Hey, y'all, this is Reed from the God's Country Podcast. We had the one and only Bobby Bones in the studio this week, and we cover everything from his upbringing to his outdoor experiences with his stepdad, Arkansas Keith, to the state of country music. We may even end the episode with a little jam session led by Bobby himself. Y'all be sure and listen to this episode of God's Country with Bobby Bones on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

Don't go shopping at Target with khaki pants and a red shirt on. Don't go shopping at Target with khaki pants and a red polo shirt on. That's what you have on. An old lady came up to me, she said, how much for this Cream of Wheat?

Happy holidays from me, Michael Rapaport, and my gift to you is a free subscription to the I Am Rapaport Stereo Podcast, where I discuss entertainment, sports, politics, and anything and everything that catches my attention. I am here to call it as I see it, and there's a whole lot of things catching my eyes these days. Here's a clip from one of my favorite episodes.

You are not a real fighter. You will never be discussed anywhere in boxing history ever. Fake Paul. The movie is The Apprentice and the movie is about young Donald Trump and his apprentice, Roy Cohn. Real character, obviously both are real characters. It kind of has a Scarface vibe to it, which I thought was very interesting.

Listen to the I Am Rapaport Stereo Podcast on the iHeartRadio app, Apple Podcasts, and wherever you get your podcasts. I'm Morra Aarons-Mele, host of The Anxious Achiever. It's a show that looks at where we spend most of our waking hours, work. We explore how work impacts our mental health, how neurodiversity impacts our careers, and how companies impact our well-being.

Is work broken? It's hard to say that work is broken because work is work and the system itself doesn't favor workers. I would say that the system is unsustainable, is capitalism and work just relentless, cruel and unsustainable, which is really my experience and my family's experience. So in that way, yeah, it's broken.

Listen to The Anxious Achiever on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. It's just, it's very frustrating to me. It's very frustrating to me. It drives me a little insane. Reading all this stuff makes me feel crazy. I think you hear it in my voice. You hear the sanity stripping away, but I'm here to podcast and don't worry. But seriously though, anyone can build on top of what DeepSeek has already built.

Where is OpenAI's moat exactly? And where's Anthropic's? What are the things that make these companies worth $60 billion or $150 billion or, oh my god, as we'll discuss in a bit, $340 billion? What is the technology they own or the talent they have that justifies these valuations? Because it's kind of hard to argue that their models are particularly valuable anymore. Celebrity? Celebrititum? Cult of personality?

Altman, Sam Altman, he's an artful bullshitter and he's built a career out of being in the right places at the right times, having the right connections and knowing exactly what to say.

especially to credulous tech media ponces without the spine or inclination to push back on his more stupid claims. And already, Altman has tried to shrug off DeepSeek's rise, admitting that while DeepSeek's R1 model is impressive, particularly when it comes to its efficiency, OpenAI will obviously deliver much better models, and also it's legit invigorating for OpenAI to have a new competitor. Yeah, mate, sure. I'll bet you're loving this.

Altman ended his tweet where that came from with, "Look forward to bringing you all AGI and beyond." Something which, I'll add, has always been close on the horizon in Altman's world, but it's never really materialized, the timeline keeps moving, and there's no actual proof they can do it. AGI is not fucking happening. And if it's possible in any way, it's not coming out of probabilistic models. I'm fucking sick of this.

And OpenAI can't even lean on its relationship with Microsoft, which on Wednesday, January 29th, started offering DeepSeek's models through its own cloud services. OpenAI hasn't got shit.

DeepSeek has commoditized the large language model, publishing both the source code and the guide to building your own. Whether or not someone chooses to pay DeepSeek is largely irrelevant. Someone else will take what they have created and build their own, or people will start running their own DeepSeek instances, renting GPUs from one of the various cloud computing firms. They don't give a shit. They'll take the money.
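As for what "running your own DeepSeek instance" looks like in practice: self-hosted inference servers (vLLM is a common example) typically expose an OpenAI-compatible HTTP API, so a client just POSTs a JSON payload. A minimal sketch; the localhost URL and the model name are assumptions for illustration, and nothing here is sent anywhere until you call `urlopen` against a real server.

```python
# Sketch of talking to a self-hosted model behind an OpenAI-compatible
# chat-completions endpoint. URL and model name are illustrative.
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str):
    # Standard OpenAI-style chat-completions payload.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("http://localhost:8000", "deepseek-r1", "Hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# urllib.request.urlopen(req) would send it to a running server.
```

The commoditization point is exactly this: because the interface is a commodity HTTP API, swapping one provider's model for a self-hosted one is a one-line URL change.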

And while Nvidia will always find other ways to make money, Jensen Huang is amazing at this, it's going to be a hard sell for any hyperscaler to justify spending billions more on GPUs to markets that now know that near identical models can be built for a fraction of the cost with older hardware. Why do you need Blackwell, which is the latest Nvidia GPU? The narrative of "this is the only way to build powerful models" doesn't really hold water anymore.

And the only other selling point is that what if China does something? Well, the Chinese did something. And they've now proven that they can not only compete with American AI companies, but doing so is possible in an efficient way that can effectively crash the market. While there's been a recovery, this is still very worrying.

I also want to address something real quick. A few people on Twitter have been suggesting that talking about DeepSeek positively in any way is some sort of Chinese op. If you believe this, you're a fucking moron. I really must be clear. Take your weird xenophobia and go eat your own shit. I don't fucking care anymore. Yes, there are problems with China. Yes, China does things to America.

This is an open source thing, and you can remove China from the equation because it's open source. Someone else is going to use this. If your only defense is that the sneaky Chinese are doing something, go to therapy and talk to the therapist about being paranoid or racist, because it's one of them. I should also be clear, concerns about China are very realistic. The Chinese government has tried to interfere with America. It's happened many times. Even if China is funding DeepSeek's models,

the fact that they are open-sourced means that anyone can run them and anyone can build their own. They can look under the hood. We don't have the training data, but that is it. You cannot win on a xenophobic argument here. You can have realistic concerns about another foreign power. I'm not saying not to.

But what I am saying is that you have to look at this realistically, and you have to take this seriously, and dismissing this as Chinese magic is stupid. It's very goddamn stupid. But like I said earlier, it also isn't clear if these models are actually going to be profitable. It's unclear who funds DeepSeek, like I just said, and whether its current pricing is actually sustainable.

But they're likely going to be a damn sight more profitable than anything OpenAI is currently selling. After all, OpenAI loses money on every single transaction, even their $200 a month ChatGPT Pro subscription. And if OpenAI cuts its prices to compete with DeepSeek, its losses are only going to deepen. And as I've said again and again, this is all so deeply cynical because it's obvious that none of this was ever about the proliferation of generative AI or making sure that generative AI was accessible.

Putting aside my very obvious personal beliefs for a second, it's fairly obvious why these companies, the big hyperscalers and OpenAI and Anthropic, wouldn't want to create something like DeepSeek. Because creating an open source model that uses fewer resources means that OpenAI, Anthropic, and their associated hyperscaler findom clients would lose their soft monopoly on large language models. Now what does that mean? I'll explain.

Before DeepSeek, to make a competitive large language model, like GPT-4o, as in one that you can actually commercialize, required exceedingly large amounts of capital, and to make larger ones effectively required you to kiss the ring or the arse of Microsoft, Google, or Amazon. While it isn't clear what it cost to train OpenAI's o1 reasoning model, we know that GPT-4o cost in excess of $100 million, and o1, as a more complex model, would likely cost even more.

We also know that OpenAI's training and inference costs in 2024 were around $7 billion, meaning that either refining current models or building new ones is quite costly. The mythology of both OpenAI and Anthropic is that these large amounts of capital weren't just necessary, but the only way to do this. While these companies ostensibly compete, neither of them seem concerned about doing so as actual businesses that made products that were, say...

cheaper and more efficient to run, you know, they made more money than they cost. Because in doing so, these companies would break the illusion that the only way to create powerful artificial intelligence was to hand billions of dollars to one of two companies, and build giant data centers to build even larger language models.

This is AI's rot economy. Two lumbering companies claiming they're startups, creating a narrative that the only way to build the future is to keep growing, to build more data centers, to build larger language models, to consume more training data with each infusion of capital, GPU purchase, and data center build-out, creating an infrastructural moat that always leads back to one of a few tech hyperscalers.

OpenAI and Anthropic need the narrative to say "buy more GPUs and build more data centers" because in doing so they create the conditions of that infrastructural monopoly. Because the terms, forget about building software that does stuff for a second, were implicitly that smaller players cannot enter the market because the market is defined as large language models that cost hundreds of millions of dollars and require access to more compute than any startup could ever reasonably access without the infrastructure that a public tech company delivers.

Remember,

Neither Anthropic nor OpenAI has ever marketed themselves based on the products they actually build. Large language models are, in and of themselves, fairly bland software products, which is why we've yet to see any killer apps. This isn't a particularly exciting pitch to investors or the public markets because there's no product, innovation or business model to point to. And if they'd actually tried to productize it and turn it into a business, it's quite obvious at this point that there really isn't a multi-trillion dollar industry for generative AI.

Look at Microsoft and their attempts to strong-arm Copilot into Microsoft 365, both personally and commercially. Nobody said, wow, this is great, when they demanded you use Copilot in Word. Lots of people, however, asked: why am I being charged significantly more for a product that I don't want or care about?

OpenAI only makes 27% of its revenue from selling access to its models, that is, from letting people use its models to build products. That's around a billion dollars of annual recurring revenue, by the way, with the rest of its money, about $2.7 billion last year, coming from subscriptions to ChatGPT.

If you ignore the hype, OpenAI and Anthropic are actually deeply boring software businesses with unprofitable, unreliable products prone to hallucinations. And the new products, such as OpenAI's Sora, cost way too much money to both train and run to get results that... well, they suck. They're not good. Even OpenAI's push into the federal government with the release of ChatGPT Gov is unlikely to reverse its dismal fortunes. Seriously, think about it. I'm sure some of you are going to say, well, Trump will just give them money.

These motherfuckers need way more money than Trump is going to give them. And why would Trump bet on a loser? Why would Trump be like, oh yeah, I'm going to give more money to this company that does the same thing as everyone else? He doesn't understand any of this shit. And he probably just looks at Sam Altman and goes, nah, that's the kind of new money I don't like.

To make this more than a deeply boring software business, OpenAI and Anthropic needed larger models. And they needed them to get larger generally in perpetuity. And for the story to always be that there was only one way to build the future, and that the future cost hundreds of billions of dollars, and that only the biggest geniuses who all happened to work in the same two or three places were capable of doing it.

Post DeepSeek, there really isn't a compelling argument for investing hundreds of billions of dollars of capex in data centers, or buying new GPUs, or even pursuing large language models as they currently stand. It's possible, and DeepSeek through its research papers explained in detail how, to build models competitive with both of OpenAI's leading models, and that's assuming you don't simply build on top of the ones that DeepSeek released.

It also seriously calls into question what it is you're paying OpenAI for with its various subscriptions, most of which, other than the $200-a-month Pro subscription, have hard limits on how much you can use its most advanced reasoning models. One thing we do know, though, is that OpenAI and Anthropic will now have to drop the price of accessing their models, and potentially even the cost of their subscriptions, too.

I'd argue that despite the significant price difference between O1 and DeepSeek's R1 reasoning model, the real danger to both OpenAI and Anthropic is DeepSeek V3, which competes with GPT-4o, OpenAI's general-purpose model, by the way.

And as I'm recording this episode, by the way, news broke that Alibaba, a behemoth out of China in its own right, has created its own model that outperforms DeepSeek. I'm yet to fully dive into it, but if it's true, it's only going to pile on the price pressure. Though I kind of wonder what they could possibly do. Is it going to be cheaper? Because if it's more powerful, that's just like, I don't know, it doesn't really change shit. Anyway, though.

DeepSeek's narrative shift isn't just about commoditizing LLMs at large, but commoditizing the most expensive ones, run by two monopolists backed by three other monopolists. I mean, the magic's died. There's no halo around Sam Altman's or Dario Amodei's head anymore, as their only real argument was "we're the only ones that can do this," something that nobody should have believed in the first place.

Up until this point, people believed that the reason these models were so expensive was because they had to be. And that we had to build more data centers and buy more silicon because that's just how things worked. They believed that reasoning models were the future, even if members of the media didn't really seem to understand what reasoning models did or why they mattered. And that as a result, they had to be expensive, because OpenAI and their ilk were just so fucking smart. Even if it wasn't obvious what reasoning meant or what it allowed you to do or what the products were.

It's just very annoying. And now we're going to find out, by the way, because reasoning is now commoditized, along with large language models in general. Funnily enough, the way that DeepSeek may have been trained, at least in part, on synthetic data also pushes against the paradigm that these companies even need other people's training data. Though their argument, of course, will be that they need more training data. Always.

We also don't know the environmental effects of DeepSeek, by the way. Because even if it's cheaper, these models still require those energy-guzzling GPUs to run, and they're running at full tilt. In any case, if I had to guess, the result will be that the markets are going to be far less tolerant of generative AI, and of the idea that generative AI is the future.

OpenAI and Anthropic no longer really have moats. Unless, well, there is another idea. What if there was a huge fucking idiot with a lot of money?

How about billionaire dipshit Masayoshi Son, the CEO of SoftBank, a multinational investment firm that's rumored to be investing anywhere from $15 to $25 billion in OpenAI, as part of a round of up to $40 billion that values the company at an astonishing $340 billion.

Now you may think, damn, this is a sign that OpenAI is going to make it, but I must remind you how bad SoftBank is at investing. They put $16 billion into famously awful real estate company WeWork, and managed to lose, I think, $800 million on the DoorDash IPO and $1.8 billion on their investment in Uber. And in both cases they did so because they were desperate to bandwagon onto supposedly surefire bets either just before they crashed or way after the investment made sense.

According to the Wall Street Journal, SoftBank would lead this insane $40 billion round in the company and would, and I quote, help assemble investors for the rest of the round. In doing so, SoftBank would also become OpenAI's largest investor, replacing Microsoft, and yes, SoftBank was the largest investor in WeWork before it went tits up.

It's also important to remember that OpenAI has pledged to put $18 to $19 billion toward the Stargate data center project, along with SoftBank, which will be committing the same amount. This is, on some level, SoftBank handing money to itself to invest in data centers to prop up an industry that's dying. Now, this is a developing story, but it's hard to imagine any serious contribution from any respectable investor at this point.

My money is on a few VC firms desperately scraping at the bottom of the barrel. Quarter of a billion here, quarter of a billion there. Maybe Nvidia chucks in some. And on the subject of barrels of stuff, I also expect money from the Kingdom of Saudi Arabia and its associated venture arms. You're going to see a few. Maybe Andreessen Horowitz gets involved. They don't think much.

It's just very fucking silly, and I don't know how this works out. OpenAI burns money, and even if they somehow make more efficient models, the actual total addressable market of generative AI is pretty small. Microsoft said in their recent earnings that they made $12 to $13 billion of ARR on AI. Just to be clear, that's not profit, and that's not a business unit. There's no AI business unit, which means that figure is spread across delivering cloud compute for AI everywhere, Copilot in Microsoft 365 products, which, by the way, no one likes and they're having trouble selling, and other associated Copilot products.

And $12 to $13 billion across four quarters? That's not actually good at all. It's just very silly. All of this is so silly. And when I think about it too hard, I feel a little crazy.

OpenAI makes $3.7 billion in revenue, and they do so, as I mentioned, primarily from ChatGPT subscriptions. Even if that somehow, and it won't, by the way, turns into $3.7 billion of profit, or even $10 billion of profit a year, that's less than the profit in a single quarter of any given hyperscaler. It would be respectable, sure, and a $340 billion valuation would, I guess, make sense if it were profit. It doesn't make sense when they're not profitable, though.

It also isn't obvious how OpenAI would actually provide any liquidity to investors, by which I mean allow them to sell their stock, beyond employees who've been given stock grants selling them to another investor, a really dumb guy maybe. And as an aside, SoftBank bought $1.5 billion of stock from OpenAI employees in a tender at the end of November 2024. Just a note for you.

They could also take the company public, but with the unit economics of this fucking company, which boil down, by the way, to "our products lose billions of dollars and are extremely commoditized," I'm not really sure what the plan is here. OpenAI's fundamental problems are not solved by throwing more money at them. This hasn't worked before, and it won't work this time. They're burning cash. And in SoftBank's case, it isn't obvious what they're getting from OpenAI.

Is it the chance to continue an industry-wide con? The chance to participate in a capitalist death cult? I don't know. Maybe it's the chance to burn money at a faster rate than WeWork ever could dream of. Will this be the time that Microsoft, Amazon, and Google just drop OpenAI and Anthropic and make their own models based on DeepSeek's work? What incentive is there for the hyperscalers to keep funding OpenAI and Anthropic?

They hold all the cards, the GPUs, the infrastructure, and in the case of Microsoft, non-revocable licenses that permit them unfettered use and access to OpenAI's tech. And there's little stopping the hyperscalers from building their own models and just dumping them entirely.

In fact, Microsoft might actually be a little glad to see SoftBank become the biggest investor and pick up the tab for OpenAI's expenses. I can imagine Satya Nadella texting Sam Altman, being like, no, don't take their money, LOL, don't do it. Oh, I'd hate that. I'd hate it if this was someone else's problem. And the Stargate thing, by the way...

is an attempt to remove themselves from Microsoft. The up-to-$500-billion figure is just bollocks, whatever. And Microsoft actually allowed OpenAI to alter their deal so that they could get cloud compute from others. Now, at the time, people were like, "Yeah man, this is a good thing. This shows OpenAI will be independent." No, it doesn't. It just means that they're going to be under Masayoshi Son now.

The funniest, dumbest man in investing. I love Masayoshi Son. I think it's nice that we have an insane guy who isn't instantly murderous in our lives. Anyway, anyway, though. As I've said before, I believe we're at peak AI. And now that generative AI has been commoditized, the only thing that OpenAI and Anthropic have left, other than a pile of cash, is their ability to innovate. And I don't think they're capable of doing so.

And because we sit in the ruins of Silicon Valley with our biggest startups all doing the same thing in the least efficient way possible, living at the beck and call of public companies with multi-trillion dollar market caps, everyone is trying to do the same thing in the same way based on the fantastical marketing nonsense of a succession of directionless rich guys that all want to create America's next top monopoly.

It's time to wake up and accept that there was never any kind of AI arms race, and that the only reason that hyperscalers built so many data centers and bought so many GPUs is because they're run by people that don't experience real problems and thus don't know what problems real people face. Generative AI does not solve any trillion-dollar problems, nor does it create outcomes that are profitable for any particular business.

DeepSeek's models are cheaper to run, but the real magic trick they pulled is that they showed how utterly replaceable a company like OpenAI, and by extension any LLM company, really is. There really isn't anything special about any of these companies. They have no moat, their infrastructural advantages are moot, and their hordes of talent are relatively irrelevant.

What DeepSeek has proven isn't just technological, it's philosophical. It shows that the scrappy spirit of Silicon Valley builders is dead, replaced by a series of different management consultants that lead teams of engineers to do things based on vibes. You may ask if all of this means generative AI suddenly gets more prevalent. After all, Satya Nadella of Microsoft cited the Jevons paradox, which posits that when a resource is made more efficient to use, its consumption increases.

Sadly, I hypothesize that something else happens. Right now, I do not believe that there are companies that are stymied by the pricing that OpenAI and their ilk offer, nor do I think there are many companies or use cases that don't exist because large language models are too expensive.

AI companies took up a third of all venture capital funding last year. And on top of that, it's fairly easy to try reasoning models like O1 and make a proof of concept without having to make an entire operational company. Shit, OpenAI barely has one. I don't think anyone has been on the sidelines of generative AI due to costs. And remember, few seem to be able to come up with a great use case for O1 or other reasoning models anyway. And DeepSeek's models, while cheaper, don't have any new functionality.

As a result, I don't really see anything changing, beyond the eventual collapse of the API market, the business where companies like Anthropic and OpenAI sell access so you can plug these models into things. Large language models and reasoning models are niche. The only reason ChatGPT became such a big deal is because the tech industry has no other growth ideas. And despite the entire industry and the public markets screaming about it, I can't think of any mass-market product that really matters.

Even if DeepSeek doesn't land the fatal blow, it could set the foundations for another company to drag OpenAI's carcass out behind the barn and hit it with a big stick. One way in which this entire farce could fall is if nasty Mark Zuckerberg decides he wants to simply destroy the entire market for LLMs.

Meta has already formed four separate war rooms to break down how DeepSeek did it, and apparently, to quote The Information, "...in pursuing Llama, which is their large language model, CEO Mark Zuckerberg wants to commoditize AI models so that the applications that use such models, including Meta's, generate more money than the sales of the AI models themselves."

That could hurt Meta's AI rivals, such as OpenAI and Anthropic, which are on pace to generate billions of dollars in revenue from such sales. And lose billions! Fucking hell! I love The Information, but can you add the most important bit? But I could absolutely see Meta releasing its own version of DeepSeek's models. They've got the GPUs, and Mark Zuckerberg can never be fired, meaning that if he simply decided to throw billions of dollars into creating his own deeply discounted LLMs to wipe out OpenAI, he absolutely could.

After all, a few weeks back, Mark Zuckerberg said that Meta would spend between $60 and $65 billion in capital expenditures in 2025, and this was before the DeepSeek situation. And I imagine the markets would love a more modest proposal that involves Meta offering a cheap ChatGPT competitor simply to fuck over Sam Altman.

And that's the thing. ChatGPT is big because everybody's talking about AI, and ChatGPT is the big brand in AI. It is not essential, and it's only being treated as such because the media and the markets ran away with a narrative that they barely understood.

DeepSeek pierced that narrative, because believing said narrative also required you to believe that Sam Altman is a magician rather than an extremely shitty CEO that burned a bunch of money. And I don't believe, even before DeepSeek, that Altman's peers really bought into the hype. Sure, you can argue that DeepSeek just built on top of software that already existed thanks to OpenAI. Thank you, Casey, by the way.

But this begs a fairly obvious question: Why didn't OpenAI build on top of software invented by OpenAI? And here's another question: Why does it goddamn matter? In any case, the massive expense of running generative models hasn't been the limiter on their deployment or their success. You can blame that on the fact that they, as a piece of technology, are neither artificial intelligence nor capable of providing the kind of meaningful outcomes that would make them the next smartphone or cloud computing.

Honestly, it's all been a con, and a painfully obvious one. One I've been screaming about since February, when I started this podcast, trying to explain that beneath the hype was an industry that provided modest-at-best outcomes rather than any kind of next big thing.

Without reasoning as its magical new creation, OpenAI really doesn't have anything left. Agents aren't coming. Large language models aren't going to build them. AGI isn't coming either; there's no proof it's possible. All of this is fucking flimflam to cover up how mediocre and unreliable the foundation of the supposed AI revolution really was. All of this money...

All of this energy and all of this talent was wasted thanks to markets that don't actually do anything, markets that don't make for good companies, just growth hogs, and a media industry that fails to hold the powerful to account.

And it looks like everything got broken by some random outgrowth of a Chinese hedge fund. It's so ridiculous, it's so sickening, I can't believe it! Well, I can totally believe it. I'm actually surprised I didn't come up with this idea myself. Just the idea that someone could do this cheaper. It makes me go insane, and what's more insane is that OpenAI is still going to be able to raise that round.

But I think we're approaching the end of days. I'm not calling the end of the bubble yet. I refuse to do that. I'm not going to do that. What I am going to say is it's deflating. And I am going to say I have no idea how they reinflate it. A bunch more money isn't going to change anything. These companies are washed. Sam Altman's washed. He's the Mark Sanchez of the tech industry.

And he's so sickening. All of them are so sickening. Imagine if this money had gone anywhere else. Imagine if it had gone into batteries. Imagine if it had gone into climate stuff. Imagine if it had gone somewhere useful. Imagine if instead of spending billions on this dog shit, they actually fixed their problems. They actually fixed the products that they've made worse. But the problem is that the rot economy is in control. That the growth at all costs mindset is all that you see in the tech industry.

And Silicon Valley needs to repent. Silicon Valley needs to change its ways because when the bubble bursts, and I really think it will, the destruction that follows will be horrifying and it will hit workers. It will hit tens of thousands of tech workers and it will affect the markets. And after that, the markets are going to realize something. The tech industry doesn't have anything left. They don't have another growth market. They're out. They're all out.

And I look forward to telling you how. I look forward to talking about it when it happens. And I'm so grateful for you listening. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski.com. M-A-T-T-O-S-O-W-S-K-I dot com.

You can email me at ez at betteroffline.com or visit betteroffline.com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat.whereisyoured.at to visit the Discord and go to r slash betteroffline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia.com or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
