
Sam Altman’s Reflections, NVIDIA’s Robotics Play, Zuckerberg’s Moderation

2025/1/11

Big Technology Podcast

People
Leah Smart
Ranjan Roy
A tech news commentator and podcast host at Margins.
Topics
Leah Smart: I think Sam Altman's statement that OpenAI has figured out how to build AGI, together with his prediction that AI agents will join the workforce in 2025, is a bold claim. Although he went on to explain his definition of AGI, it still sparked wide discussion across the industry. He believes that iteratively putting powerful tools in people's hands can lead to broadly positive outcomes.

Ranjan Roy: I think Sam Altman's definition of AGI is far too broad; equating AI agents joining the workforce with AGI is absurd. It reads more like a strategy for raising more money. There is a fundamental difference between the traditional definition of AGI and AI agents making autonomous decisions within existing workflows.

Leah Smart: I agree that Altman is, to some extent, moving the goalposts, but his definition of AGI is an AI system that can do the work of highly skilled humans in important jobs. That seems to lower the bar for AGI and signal that they are close to achieving it.

Ranjan Roy: I think OpenAI may declare AGI before GPT-5 is even released, because if GPT-5 doesn't deliver a huge capability jump, that will be a major problem. They will likely distinguish AGI from superintelligence in order to keep attention, and funding, flowing into AI.

Leah Smart: I think OpenAI's pivot toward superintelligence is a smart move; it will become the new hot topic and help with fundraising and sustaining the overall AI narrative. Superintelligent tools could massively accelerate scientific discovery and innovation.

Ranjan Roy: I think there is a disconnect between the industry's AGI hype and actual customer needs. Many companies don't need fully autonomous AI agents; they just need tools that make their work easier. OpenAI is still more of a research house with a business tacked on, which may not be the best path to success.

Deep Dive

Key Insights

What is Sam Altman's definition of AGI and how does it differ from traditional understanding?

Sam Altman defines AGI as when an AI system can perform tasks that very skilled humans in important jobs can do. This is a lower bar compared to the traditional understanding of AGI, which involves a system capable of performing any intellectual task that a human can. Altman's definition focuses on specific job functions rather than general intelligence.

Why is NVIDIA investing in robotics and what potential does it see in this field?

NVIDIA sees robotics as a multi-trillion dollar opportunity, particularly in the development of AI models for humanoid robots and self-driving car technology. The company believes that real-world understanding through robotics can overcome limitations of text-based AI models, enabling AI to interact with and understand the physical world more effectively.

What are the key changes Mark Zuckerberg announced regarding Facebook's content moderation policies?

Mark Zuckerberg announced five key changes to Facebook's content moderation policies: 1) Replacing fact-checkers with community notes, 2) Simplifying content policies and removing restrictions on topics like immigration and gender, 3) Tuning filters to focus on illegal and high-severity violations, 4) Bringing back civic content to increase news and politics on the platform, and 5) Moving trust and safety teams to Texas. Additionally, Facebook will work with the Trump administration to push back against foreign governments pressuring American companies to censor content.

What concerns are raised about Anthropic's $2 billion fundraising and its $60 billion valuation?

Concerns about Anthropic's $2 billion fundraising and $60 billion valuation include the challenge of justifying such a high valuation through revenue growth, especially given the high costs of AI technology. There are also questions about whether Anthropic can sustain its valuation without significant consumer adoption, as its revenue primarily comes from API usage rather than consumer-facing products like OpenAI's ChatGPT.

What is NVIDIA's new personal AI supercomputer and why is it significant?

NVIDIA's new personal AI supercomputer, priced at $3,000, allows researchers and students to run multi-billion parameter AI models locally rather than through the cloud. This is significant because it democratizes access to powerful AI tools, enabling more individuals to conduct advanced AI research and experiments without relying on cloud infrastructure.

What are the implications of Meta training its AI models on copyrighted works?

Meta's decision to train its AI models on copyrighted works, including pirated datasets like LibGen, raises significant legal and ethical concerns. This practice could undermine Meta's negotiating position with regulators and lead to potential lawsuits. It also highlights the broader issue of AI companies using copyrighted material without proper licensing, which could result in public backlash and legal challenges.

What is the potential impact of Facebook's shift to community notes for fact-checking?

Facebook's shift to community notes for fact-checking could improve the platform's ability to address misinformation by leveraging crowd-sourced corrections. However, it may also lead to challenges in maintaining accuracy and consistency, as community notes rely on user contributions rather than professional fact-checkers. This change reflects a broader trend towards decentralized moderation but raises questions about the effectiveness of such systems in combating misinformation.

What are the key takeaways from Sam Altman's reflections on OpenAI's progress?

Sam Altman's reflections highlight OpenAI's confidence in building AGI, the potential for AI agents to join the workforce by 2025, and the company's focus on superintelligence. He acknowledges the challenges of building a high-velocity company and the stress of operating in uncharted waters. Altman also emphasizes the importance of iterative progress and the need to balance research with business demands.

Transcript

Sam Altman says OpenAI now knows how to build AGI. Anthropic is raising another $2 billion. NVIDIA looks to a robot future. And Mark Zuckerberg says to hell with fact-checkers. That's coming up on a Big Technology Podcast Friday edition right after this. From LinkedIn News, I'm Leah Smart, host of Every Day Better, an award-winning podcast dedicated to personal development. Join me every week for captivating stories and research to find more fulfillment in your work and personal life.

Listen to Every Day Better on the LinkedIn Podcast Network, Apple Podcasts, or wherever you get your podcasts. Struggling to keep up with customers? With AgentForce and Salesforce Data Cloud, deploy AI agents that know your customers and act on their own. That's because Data Cloud brings all your data to AgentForce, no matter where it lives. Get started at salesforce.com slash data.

Welcome to Big Technology Podcast Friday edition where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're talking about Sam Altman's new bold statement about AGI. We're also going to cover the latest NVIDIA news, the latest Anthropic fundraising, Mark Zuckerberg talking about how content moderation is no longer as important to Facebook as it has been in the past and what that all means.

whether it's just a play to get in the good graces of the Trump administration, and of course, a little discussion of my visit to China and where the country stands today and where it's heading. Joining us as always on Friday is Ranjan Roy of Margins. Ranjan, great to see you. What better way to continue our 2020 thrive than having Sam Altman tell us AGI is here, at least AGI-ish.

I don't know about you, but I've gotten a lot of 2020 Thrive texts over the past week. To stop it. So I want to thank you. I want to thank you for introducing that to big technology and my life. Thank you, Ranjan. I don't think my wife and family, I think, are definitely over me saying 2020 Thrive, but I'm sticking with it. I'm sticking with it. I will admit I've been saying it as well. When someone talks about how this year is going off to a bad start, I'm like, just 2020 Thrive. Don't worry about it. 2020 Thrive. It's so easy. Manifest. Manifest.

All right. So someone who's manifesting is Sam Altman. He's talking about AGI. He wrote a very interesting post called Reflections, just looking back at the last two years since ChatGPT was released. So basically saying, all right, it's the new year and I've got to look back. We're going to talk about a few things that he's written. But to me, the most interesting statement, the one that got the most attention this past week, is this: We are now confident we know how to build AGI as we have traditionally understood it.

We believe that in 2025, we may see the first AI agents join the workforce and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great broadly distributed outcomes. I think that's a way of him saying that they're going to declare AGI this year.

It got a lot of pushback in the popular press. I'm curious what you thought seeing that from Sam. I mean, it got a lot of pushback from me as well in my mind, because it was just ridiculous. It's, again...

At least I think we were correct that at a certain point, he's just going to say AGI, he's going to move the goalposts to define it however he wants. To say that AGI is the first AI agents joining the workforce in quotes, to me makes absolutely no sense because a

agentic AI, and we have discussed this, we can definitely dig more into it, but it means something completely different, at least in my mind, and at least in the way most of the industry defines it, than how we've thought about artificial general intelligence for years. Because again,

AI agents are simply using large language models to take some autonomous decisions in some kind of existing workflow or business process. It's not that revolutionary or complicated. It's just some kind of like taking something that used to be rules-based and letting an LLM try to apply a bit of logic or reason to it. That is not...

At least in my mind, the robots are taking over or we have developed some kind of super intelligence. If he starts saying super intelligence is now different than AGI, maybe I'm okay with it. But overall, this read like we need to raise more funds to me. Well, okay. So I think there's something that I agree with in what you said and something I disagree with in what you said.
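As an aside for readers of the transcript: the distinction Ranjan draws, an LLM taking over a decision that used to be rules-based inside an existing workflow, can be sketched in a few lines of Python. This is a minimal hypothetical illustration; the `call_llm` stub and the ticket-routing example are invented for this sketch, not any vendor's actual API.

```python
# Minimal sketch of the pattern Ranjan describes: a workflow step that was
# rules-based, with the decision optionally delegated to an LLM instead.
# `call_llm` is a hypothetical stand-in for any chat-completion client.

def rules_based_route(ticket: dict) -> str:
    """The old approach: hard-coded branching."""
    if ticket["amount"] > 1000:
        return "escalate"
    if "refund" in ticket["subject"].lower():
        return "refunds_queue"
    return "general_queue"

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call a model API.
    # Here it just returns a fixed answer for illustration.
    return "general_queue"

def agentic_route(ticket: dict) -> str:
    """The 'agentic' version: let an LLM pick the action from a fixed set."""
    allowed = {"escalate", "refunds_queue", "general_queue"}
    prompt = (
        "Route this support ticket to one of: escalate, refunds_queue, "
        f"general_queue.\nSubject: {ticket['subject']}\nAmount: {ticket['amount']}"
    )
    choice = call_llm(prompt).strip()
    # Constrain the model's decision so it can't leave the existing workflow.
    return choice if choice in allowed else rules_based_route(ticket)

print(agentic_route({"subject": "Refund request", "amount": 50}))
```

The point of the sketch is that the "agent" is constrained to the same action set the rules-based version had, which is why Ranjan calls it "not that revolutionary or complicated."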

I agree that he's moving the goalposts. I don't necessarily think that he's saying that agents are going to be AGI. It might have just been that he's like stringing these statements together. But he did talk to Bloomberg about what he thinks AGI is. And this is the way that he defined it. And to me, it seemed like kind of a lower bar. He said, the rough way I try to think about it is when an AI system can do what very skilled humans in important jobs can do. I'd call that AGI.

He says, then there's a bunch of follow-on questions like, well, is it a full part of a job or only part of it? Sorry, is it the full job or only part of it? Can it start as a computer program and decide it wants to become a doctor? Can it do what the best people in the field do or the 98th percentile? How autonomous is it? He says, I don't have deep, precise answers there yet.

But if you could hire an AI as a remote employee to be a great software engineer, a lot of people would say, okay, that's AGI-ish. So to me, it seems like he's saying, okay, basically, if we're not quite there yet, we're almost there. AGI-ish, that is the term of the week of the month of the year, I think. Like to be able to say, okay,

that it's kind of there. It's kind of, you know, what we've always promised, but we're just going to check off that box, I think kind of captures this perfectly. Again, like,

replacing specific functions in the workplace, we're already there. That's what I don't understand. Like, GPT-4o is good. Gemini 2.0 Flash Experimental is good. Most of these other foundation models can do a lot of what people already do. So, you know, this idea, can it start as a computer program and decide it wants to become a doctor? It's really interesting how he sprinkles in

these kind of fantastical statements within more just kind of monotonous, like mundane things like, yes, it can do a human's job slightly better than them or a skilled human's job, or it could suddenly decide it wants to become a doctor. I think this is, it's so tough to me

that I don't know how he's approaching this. It was a nice post. I like this idea of taking some time to reflect. I think it was honestly a genuinely intentioned post: it's been two incredible, crazy years, and I want to look back on it. But still, this whole thing again is very Sam-ish in the way he's approaching this.

This is from AI skeptic Gary Marcus: At a conference yesterday, someone with very good knowledge of OpenAI said something fascinating, more for what was not said than what was said. What was said is that we should expect to see GPT-4.5 soon. What was not said by the well-informed source was that we should expect to see GPT-5 anytime soon.

Is it possible that OpenAI says we've reached AGI before GPT-5 comes out? Yes, 100%, 1,000%. He's setting it up for that. I think, again, we had somewhat said it in jest about him just kind of saying, okay, AGI is here, Microsoft contracts null and void.

I think they're completely setting it up for that. I think they recognize that to release GPT-5, if it's not some massive step change in terms of ability, would be a huge problem. So I completely believe they're going to say AGI. I'm going to say...

By the springtime, by spring, maybe April, May, as we leave the cold months, we will have AGI. That sounds right to me. Maybe sooner. And then I think the discussion is really going to turn towards superintelligence. So I think the past two years, people have been talking about AGI. I think that like they're going to declare, OpenAI is going to declare we've reached it or they've reached it.

And then superintelligence is going to become the new buzzword. And Altman also previews this. He says, we're beginning to turn our aim beyond AGI to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

This sounds like science fiction right now and somewhat crazy to even talk about it. That's all right. We've been there before and we are okay being there again.

Abundance and prosperity. Our super intelligent future is just one of greatness for all. I think we're seeing it happening in real time right now. This distinction between AGI and super intelligence, which were conflated, and now lowering the bar of AGI. And then I think that fantastical futuristic vision will be labeled super intelligence going forward. And it's savvy. It's smart.

It'll help on the fundraising side. It'll help keep just kind of the general overall narrative and dream around AI in the overall media. But I think we're very clearly seeing it happen right now.

Ranjan's like, just give me an intelligent dashboard. And Sam's like, we're building superintelligence. Yeah. Well, no, I mean, okay. So I said it last week. I'm going to say it again. My theme for 2020 Thrive is there is such a disconnect between what the industry is saying and what actual customers want.

Because the idea that any Fortune 500 business that has infinite complexity and systems and not clean data and relatively unstructured data is going to allow autonomous processes into their actual business environment.

anytime soon and allow it to be completely autonomous is a pipe dream in my mind. I think there's a lot of companies and a lot of steps along the way that will be able to take existing LLM technology and start doing incredible things. But I think like there's still such a disconnect. Everyone I talk to, like,

even business technology people are not asking for totally autonomous agentic AI. They're just asking for stuff that makes life a little bit easier. So I think, like, he's... One thing in the Reflections I also liked is...

He did mention that they were a research house who kind of like almost backed into a business, which is something we've talked about a lot. And it's even more clear to me that OpenAI is still a research house that has a business tacked on to it. And I don't think that's the company that's going to win.

Interesting. Yeah. Even in the Bloomberg conversation, he talks about how they've had to protect research from the standard wants of a Silicon Valley-style company, and they have them in a separate location. So I think that's true. And one more thing about this before we move on. I want to just go back to this statement: We are confident we know how to build AGI as we have traditionally understood it.

What do you think that path is? Is it just to continue scaling this stuff? Or, I mean, it's kind of interesting that he winks at it, but he doesn't exactly say what to do.

This is, again, where I think they've been incredibly savvy. But again, the ARC-AGI test that supposedly gave an 87.5% score to the latest O3 model that's still private but being tested, that gave them enough credibility in the conversation to start saying we are close to AGI. But

But it's one of those things that even within the industry, a lot of people say that this is not actually an accurate test for kind of the traditional understanding of what AGI would be. But they have been able to say, look at the leaps and bounds of development we've done over the last just two years based on this one test and this one benchmark. So we're there. We know how to do it. It's done. We're good. Check. And that's amazing. We'll see if this continues. One of the greatest salesmen of all time. I'll give him that.

I don't want to spend too much more time on his Reflections, but I did think, as you mentioned, it was a pretty good post, talking a little bit about what's happened and the messy parts of it. He talked about his firing in this post, which I don't think is really worth rehashing, but I think he really just sums it up at the beginning: building a company at such high velocity with so little training is a messy process. It's often two steps forward, one step back, and sometimes one step forward and two steps back.

Mistakes get corrected as you go along, but there aren't really any handbooks or guideposts when you're doing original work. Moving at speed in uncharted waters is an incredible experience, but it's also immensely stressful for all the players. Conflicts and misunderstandings abound.

I mean, maybe this is also just clever marketing from Sam, but even if it is, I just don't think you see this from high-profile CEOs enough, like being truly honest about what it's like to build a company, and not just being like, you know, full systems go ahead and we are taking over the world.

Yeah, which is why I liked the post. I thought it was genuinely, I thought it was reflective. So being called reflections was accurate. I thought he looked at it and it is. This is why I don't want to take away the technological marvel that they have delivered and kind of unleashed into the world over the last two years. It is something incredible. So to stop and take a moment to recognize that, that's the part I like.

I wish we could just take a little more time to reflect on that and use the technology we have currently rather than having to switch instantly into what's coming next.

So speaking of which, Anthropic is back raising more funds. They just raised a bunch last year, I think it was $4 billion last year, and they're going to try to raise another $2 billion. This is from the Wall Street Journal: Anthropic is in advanced talks to raise $2 billion in a deal that would value it at $60 billion, more than tripling its valuation from a year ago.

It's being led by Lightspeed Venture Partners, and it would make Anthropic the fifth most valuable U.S. startup after SpaceX, OpenAI, Stripe, and Databricks. This is from the Journal story: Investors are excited about the potential of generative AI to transform how people work and live and are largely unconcerned that most AI startups are losing money because of the high costs of the technology and intense competition.

I mean, sort of an interesting statement in there about the concerns about being profitable from investors on this. I guess that's how it always works. But still, just to have this blanket statement that investors are unconcerned seems wrong to me. That being said, Anthropic hasn't really made a lot of noise recently. I think it's just been quietly building, and we don't see these posts from

their CEO, Dario Amodei, about, you know, reaching AGI the way that you see it from Sam Altman. He does work a little bit more quietly than his counterparts at OpenAI. But I'm curious what you think this means, that Anthropic is going to raise another $2 billion and can be valued so highly. It does seem like that amount today is something that an AI research house will blow through in about a quarter.

But it is significant nonetheless. Well, the $60 billion is the part that's almost terrifying to me. Because how do you...

like work your way into that valuation? How fast do they have to grow their revenue when you're tripling your valuation from just a year ago? This is where I will say, what you said about how Anthropic is basically doing this quietly, I like, I respect. In a way, it's almost nicely strategic that they're kind of riding on the coattails, let Sam make all the noise

and just kind of be the hype man of the industry, and then quietly build. And Claude, other than its usage limits that you hit even as a Pro subscriber, is an incredible product. I think it's probably my most used generative AI product. So I think the way they're approaching it is very smart. I think for all of these companies, what do you do at $60 billion?

Are you IPOing? I mean, when you're trying to put together an S-1 and public-ready financials for a company that probably is not going to look too good, you're not IPOing anytime soon. So do you just raise more money at a higher valuation? Do you hope for an acquisition at that scale? It's getting more and more difficult. Remember, when Anthropic was at $8 to $10 billion, there was plenty of conversation around a lot of buyers. At $60 billion, that number starts to dwindle a little bit.

Yeah. And I think that there's going to have to be a couple of really large flameouts, given the amount of money that's been spent and raised and the amount of money you need to make to justify that. And I'm starting to worry a little bit about Anthropic. I wasn't worried last year, but this year I am. I mean, you think about OpenAI. As we've said, after looking at their financials, they are planning to make most of their money from ChatGPT. ChatGPT, as Sam Altman noted in his post, hovered at 100 million users for a while and didn't seem like it was going to increase that much, and then all of a sudden it triples. It goes from 100 million to 300 million users and has all this new interesting functionality, including the voice stuff.

And I think that's important. But Claude is not catching on with consumers in the same way, despite being what I would argue is a better product, outside of the fact that it lacks the voice capabilities. And so where does it go from here? Is it able to build a $60 billion business on API alone, despite the fact that, like,

Some of its best features are the fact that it can talk to you in a more human-like way and a warmer way, in a way that people have built these relationships with Claude, as I spoke about with Casey Newton a couple of weeks ago. So I do worry a little bit about Anthropic. Is that unfounded? While I said I worry about the valuation, I actually am not worried about that breakdown. Again, so revenue estimates...

An estimated 27% of OpenAI's revenue is on the API/enterprise side and 73% is on the ChatGPT side, which would include ChatGPT Plus. Whereas for Anthropic, 85% of its revenue is on the API side and only 15% is you and me paying for Claude Pro.

So I think they are taking the bet that people building on Claude, building on their foundation models is where they're going to make money. And I actually think that's a smarter bet because I do think...

At a certain point, the consumer side of this gets more and more commoditized. I think Gemini and Google, and kind of entrenched Microsoft and Copilot, own the "what is my chatbot that will help me and be up on my screen throughout the day" for the average consumer. That's going to get kind of commoditized away into existing products, into existing

ecosystems like Google or Microsoft. So I think Anthropic actually is taking the smarter bet versus OpenAI in this case. I would not be shocked if Anthropic comes out with a thousand-dollar-a-month, unlimited-use version of Claude, the same way that OpenAI came out with its $200 a month. Make it a hundred, I'm in. 150, maybe. Cause I hit that Claude usage limit way too much and I get so mad when...

Right. But Altman said that they're losing money on the ChatGPT unlimited subscription, even at 200 a month. That was such an interesting thing to say. Again, ChatGPT, what's it, Pro or Plus Plus or whatever, the $200, whatever the branding they put on it. When you're saying on a...

like consumer facing product. And again, if you're paying 200 bucks, obviously it's still gonna be more business usage, but still like on an individual per seat product, you're losing money at $200 a month. And to say that out loud to investors when you're gonna be asking them for more money,

It's baffling to me, but it almost, again, it still adds to this mystique that the amount of compute and power required for these services, there's something magical that none of you or us understand. That's the only reason I can see someone ever saying that when you're running a business that's trying to raise money.

I think one of the things he might have been trying to touch on is the fact that this is so valuable to some people, and maybe that means there's room to raise the price even more. That's why I think Anthropic is going to quietly take notes and release the $1,000 a month version of Claude. But we'll see. What would you call it? We got Claude Pro. Claude 1,000, man. Claude 1,000. Yeah.

And then Claude 2000 coming soon. Claude 2000 for super special usage. Yep. Okay, so speaking of major opportunities, we had NVIDIA at CES this week talking about robotics and calling it a multi-trillion dollar opportunity. And...

Jensen Huang was out at CES, which is like the Super Bowl for NVIDIA, and he's talking about how robotics is going to be the next stage for the company's growth. He announced, this is according to the Financial Times, a new range of products and partnerships in the physical AI space, including AI models for humanoid robots, and a major deal with Toyota to use NVIDIA's self-driving car technology, during a keynote speech in Vegas. So,

To me, I think this is important because we've been talking about how AI can hit a wall with LLMs and what comes next. And one of the potential things that comes next is real world understanding. So instead of just trying to understand the world through text and maybe video, it's AI models getting out there in the world and understanding how they interact with the world that we live in. And that can be done through humanoid robotics, which can understand and perceive and plan things.

you know, effectively navigate a way through the world in a way that you just simply cannot do with a text model. And I found it very interesting, in this moment where we're talking about AI, maybe AGI, finally hitting a wall with scaling, that outside of that, NVIDIA potentially has this new way forward in robotics. What did you think about the news, Ranjan? Man, man.

Can we appreciate what we have today for a moment? Again, now even Jensen going and having to go to the next big thing, and I get you got to go to the next big thing. But to me, I do think if someone's going to crack it, I think NVIDIA would be

potentially the one. I also have to say, I love NVIDIA's branding on the way they... We just came up with Claude 1000. Maybe it's good, maybe it's not. But so they're going to release a new computer called Jetson Thor. Jetson Thor. I mean, just like a mishmash of futuristic things. And then the robots will be powered by Groot,

which stands for Generalist Robot 00 Technology. I mean, basically Jetsons and Groot, just pulling in all these kind of sci-fi references as well. And they're paired. So I've got to give it to NVIDIA for just making it a little fun, making it exciting, making it memorable. I do think, yes, in the medium term,

this kind of idea of, like, robotic progress, real-world AI, physical AI, how you get things to interact in the physical world, it's gonna be huge. Self-driving is kind of the first real manifestation of that. And that's real. We've both written about Waymo. It's very real. What are the other applications of that? Again, like, uh,

in warehouses. Amazon for years has been able to use robots. There's lots of, you know, examples of like really special purpose robots already doing things in lots of situations. So I think it's a massive opportunity. I just wish it was not what we were talking about right now. And

We could all just try to build our first agent or two. Right, but NVIDIA, the whole reason why NVIDIA has gotten to the place it is is because it has been...

thinking two steps forward and everybody else has been stuck in the present. - Yeah, but they never said it before. Like when their genius was we have a good video game graphics card. Oh, by the way, we're not gonna tell you and suddenly you're gonna all realize that the entire AI revolution is gonna be powered by something that only gamers would get excited about before. Like they didn't go out and just say it, they just did it.

So I think that that's the part to me that feels different now than before. You know, what's interesting for me was to hear Jensen talk about it as a multi-trillion dollar opportunity as opposed to something scientific, right? To me, is that like, is this just a play to the stock market? Like, do you think that's what's happening? I mean, reading about it in the Financial Times, pretty interesting. Yes, 100%. But also, I'll give...

Like we're talking about OpenAI is a research house with a business tacked on. Jensen Huang is a business person. Like, I mean, he at his core, so he knows what he's doing. So I think on that side, and I also, I don't see him as someone who, I think he's very savvy about signaling on this kind of thing.

introducing this into the conversation, but I do have the confidence that there's a lot of real stuff happening if he is saying it that really could make this a reality. Yeah. I'm pumped about it. I mean, we're going to start to see so much more advances here, especially because with robotics, you can now just like run simulations with robots and then you

do that in, like, the virtual world and then bring that to the physical world. And we're starting to see some pretty cool stuff with robotics. You know, speaking of the present, there was also some interesting news that NVIDIA is also releasing a personal AI supercomputer with its latest and most powerful AI chip, Blackwell, which is going to allow researchers and students to run multi-billion-parameter AI models (this is from the Financial Times) locally rather than through the cloud. And it's going to be available at the initial price tag of three grand,

which I think, to me, I mean, the Blackwell chips, what do they usually go for, $40,000? So the fact that you can get your own personal supercomputer for $3,000 and run your own experiments on it, I think, is pretty cool.

Yeah, I actually thought this was more interesting than the humanoid robot stuff because one, I saw it and I kind of want it and I don't even know why I want it. So it looks kind of like a souped up Mac mini and just somehow it'll just empower you to do amazing things. But also, NVIDIA, to me, 3000 is still a consumer-ish product. They say it's for students and researchers, but like,

I mean, the Vision Pro was $3,000 and that's a pure consumer product. So to me, this could be quietly, I mean, obviously graphics cards were very consumer focused, but they might be entering the consumer market again.

Yeah, I did a story recently about how universities are like all out of chips and basically cannot compete at all. And to me, this seems like a bit of an antidote to that. I mean, I think that NVIDIA really saw the need there and decided to go for it. And I think this is really positive news. Yeah, I think...

Anything NVIDIA does, they still have, you know, kind of the magic touch that until they really flop on something, I think you have to just buy into whatever they're selling right now.

Last bit of NVIDIA news that I'm actually personally pretty excited about. Basically, NVIDIA is building these souped-up NPCs, or non-playable characters, in video games. And to me, this has always been one of the dreams, and I really regret not writing about it earlier because it's something that's been so obvious and in development. When you're playing video games, there are these non-playable characters that basically just kind of stand there, and they're just clearly dumb video-game stand-ins.

And the promise with AI is they can actually become much more human-like and that there's no such thing as an NPC anymore because every bot within a video game can be intelligent. And it looks like NVIDIA is also working to power some of these new characters. This is according to The Verge, these characters that can use AI to perceive, plan, and act like human players.

This is according to an NVIDIA blog post. Powered by generative AI, this new technology will enable living, dynamic game worlds with companions that comprehend and support player goals, and enemies that adapt dynamically to player tactics. The characters are powered by small language models that are capable of planning at the human-like frequencies required for realistic decision making,

as well as multimodal small language models for vision and audio that allow AI characters to hear audio cues and perceive their environments. I mean, I'm just thinking about running around in a video game and realizing that every character there

is, like, quote-unquote intelligent and has its own personality, and it feels a lot more like the real world. I think this could be really big for video games. I think this could be big for virtual reality. Heck, maybe it even brings back the metaverse, populated with human-like AIs that sit side by side with actual human players. I think this was pretty cool and very interesting.

Yeah, see, this is the kind of stuff, versus the medium-term futuristic humanoid robots, that I like to see. Because this really, I have no doubt, is happening and will actually unleash really cool things and experiences just in the next year or two. I also noted, and this is my other 2025 prediction:

So they use small language models, SLMs. I've been seeing a lot of people in the agentic space using LAMs, large action models. I think everyone's going to start rebranding LLMs into other kinds of terminology, realizing LLM branding is kind of...

a little played out and a little restricted to "when's GPT-5 coming?" So I think you're going to see every one of these companies start to come up with these alternative terms that are a little more focused: SLM, LAM, things like that. Yeah, the branding in AI. It's just always on point. It's all branding. It's all branding.

Okay, so one last bit of AI news here and sort of the dark side of AI, which again, we're constantly reminded that so much of this AI revolution is built on copyrighted materials. There's a TechCrunch story that says Mark Zuckerberg gave Meta's Lama team the okay to train on copyrighted works.

This is coming out in court filings: that Zuckerberg, Mark Zuckerberg, cleared his team to train on LibGen, a data set we know to be pirated, according to internal messages, which may undermine Meta's negotiating position with regulators. And when that concern was brought up, this is according to the filing, they said,

The decision makers within Meta said that after escalation to MZ, Mark Zuckerberg, the AI team was approved to use LibGen. I get the argument that this is transformation of people's work. I do not fully agree with this idea that therefore companies can go ahead and ingest copyrighted works and use them freely for their purposes without a license.

I don't know, it just seems wrong to me. What do you think, Ranjan? Well, we're going to be talking about Mark Zuckerberg's announcements regarding the Meta platforms in just a moment. And I think...

There is going to be some significant copyright lawsuit resolution of some sort in 2025, like the New York Times suing OpenAI already. When stuff like this comes out, it's one of those that I really think from like a-- I like that this whole episode is basically AI is all branding and PR. But I genuinely believe that you need to have the public on your side when

when this stuff comes out because, for one, it's so crystal clear that obviously there's copyright violations in all of this. I think for everyone, it's hard to pretend that that's not the case. However,

these products are so valuable to everyday users that, when everyone supports you and is on your side, I think it's just okay enough that these things will resolve themselves. But if the general public is turning against you, I think this is where there can be some real issues going into the next year, because I think we're going to see more and more of these lawsuits. And at some point there has to be precedent set, because this is completely uncharted territory. And

And there is going to be some kind of court resolution that establishes some kind of precedent. Yeah, no, I'm with you 100%. I mean, ultimately, I get the argument that the AI companies are making and why they'd say it's fine to train on copyrighted material. But there's just something not right about it. And so far, there's been, you know, a small public backlash, not really much of any. And I just think that that's going to come to a head eventually.

All right, so speaking of Meta, there are some new moderation policies that, when we speak about them, we're sure to make at least half of our listeners mad, maybe all of them mad, but we will not be daunted. We will discuss them right after this. I'm Jesse Hempel, host of Hello Monday. In my 20s, I knew what I wanted for my career, but from where I am now in the middle of my life, nothing feels as certain.

Work's changing. We're changing. And there's no guidebook for how to make sense of any of it. So every Monday, I bring you conversations with people who are thinking deeply about work and where it fits into our lives. We talk about making career pivots, about purpose and how to discern it, about where happiness fits into the mix, and how to ask for more money. Come join us in the Hello Monday community. Let's figure out the future together.

Listen to Hello Monday with Jesse Hempel wherever you get your podcasts. Struggling to meet the increasing demands of your customers? With AgentForce and Salesforce Data Cloud, you can deploy AI agents that free up your team's time to focus more on building customer relationships and less on repetitive, low-value tasks. That's because Data Cloud brings all your customer data to AgentForce, no matter where it lives, resulting in agents that deeply understand your customer and act without assistance. This is what AI was meant to be.

Get started at salesforce.com slash data. And we're back here on Big Technology Podcast. First half, we talked all about the latest in AI. Now we're going to talk about the second biggest story, I think, in tech this week, which is Meta's full about-face on its fact-checking and content moderation policies.

So basically, Mark Zuckerberg says they're going to take a whole new approach to speech on Facebook. He says there are five changes they're going to make. One, they're going to get rid of fact checkers and replace them with community notes, like X. Second, they're going to simplify content policies and get rid of a bunch of restrictions on topics like immigration and gender, which Zuckerberg says are just out of touch with mainstream discourse.

Third, they've had filters scanning for any content policy violation. They're going to tune the filters to focus on tackling illegal and high severity violations, leaving the rest to user reports. Fourth, they're bringing back civic content, which is a way to say that there's going to be a lot more news and politics on Facebook. And fifth, they're moving their trust and safety content moderation teams out of California and to Texas.

And they're also, this is like really a sixth thing, they're going to work with President Trump to push back governments around the world that are going after American companies and pushing them to censor more. Basically, they've had other countries lean on them to censor, and they're going to try to work with the Trump administration to push back on that. So those are the big changes coming out of Facebook. There's like

more detailed stuff that we're going to get into, but just on its face, what do you think about these changes, Ranjan? I think Mark Zuckerberg has had an incredible PR turnaround over the last few years, from back in the 2017-18 Cambridge Analytica days.

I think he's definitely risking all of that work with this, to make this public statement, because there didn't need to be him saying this. That was interesting to me. From a policy perspective, from an actual product perspective, they could have just done this quietly. So saying it publicly is kind of

presenting this to Trump very clearly, kind of like presenting this to the world. But I think, and I'm curious about your take, my possibly contrarian opinion is: I think it's good, in the sense that I have long been

wary of misinformation on Meta platforms. I think fact checking at that scale, or any of these efforts that have launched over the last six to eight years, has been kind of terrible and a fool's errand anyways. So I kind of like just let it rip, as awful as that sounds. This is what this platform is. When you have an algorithm-based content platform like this,

this is the way it's always going to go. So to try to hide it and make editorial decisions or value-based decisions was always going to be a problem. And now they're just saying, sorry, it's not our problem. Yeah, no, I think going from fact checking to community notes is a good move.

I think Community Notes really works very well on Twitter. In fact, I'm impressed with how well it works. And go ahead. - Okay, sorry. I disagree on that. I don't think from an actual like efficacy standpoint that any of this is going to work well in terms of like creating a better town square conversation. I just think it's the only real decision and it's actually the most honest one.

All right, but I'm "yes, and" on this. I just think community notes is good. I mean, it's amazing how many viral tweets that used to go viral without any context now go viral with fact checks, even ads on Twitter. When an advertiser is lying about their product, they get community noted. And the community notes generally seem pretty level-headed, and oftentimes will have links to share further context. So why is this not a great solution here?

Well, because, okay, so we have to separate. There's the fact checking side, which to me was always kind of ridiculous anyways. Then there's the actual abuse side, where they've also severely relaxed the overall restrictions, which, if you say something awful, isn't a question of being factually wrong or right.

And we'll get into things like "immigrants are grubby, filthy pieces of shit," which was one of the examples of now newly permissible speech. I mean, do we need a fact check? Yes or no? So to me, the community note side, it does answer the misrepresentation

misinformation slash fact checking side. I do agree, I love seeing an advertiser get called out with community notes, but to me, that's almost a small part of this. Well, I think that's a big part of it, because it's on the fact checking. But then I think that you're right in pointing out that the abuse stuff is a completely different side of it. And that's again going towards where Zuckerberg talks about how we're going to simplify our content policies and get rid of restrictions on topics like immigration and gender that are just out of touch

with mainstream discourse. He says what started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it's gone too far. And while I think that's possible, it's kind of interesting to see what's now permissible on Facebook. I mean, you read a few things, and there are some other things about trans people: trans people aren't real, they're mentally ill, trans people

are freaks. It's very interesting to me how, where Zuckerberg says broaden out the discussion, it really is just, let's get into the most vile parts of our society and let them say the things

that they want to say. Which, I guess, yeah, you don't want to limit everybody's personal expression, of course. I don't think any political opinion should be restricted. And I'm definitely not on the pro-heavy-content-moderation side on social networks. On the other hand, you are building a platform that does amplify stuff algorithmically. And you get to kind of set the parameters of the type of community

you want to build inside. And it was just kind of interesting. Zuckerberg says, let's broaden out the political debate. And then there's this, from Platformer, from Meta's chief marketing executive: I feel that the actual shock of friends and family seeing me receive abuse as a gay man helped them understand that hatred exists and hardened support.

Most of our progress on rights happened during these periods without mass censorship, and pushing it underground has coincided with reversals. That is an interesting idea. But the question is, how much of the negative stuff do you want to amplify? Right. And I think Casey Newton, who wrote the story with this quote, pointed it out pretty well, which is basically just:

Meta's moving to more of a For You feed, less of a people-you-follow feed. So the real question isn't exactly how much of this do you want to allow; it's how much of it do you want to amplify. Well, but I also think that, in the end, up until now, with Facebook and Instagram having a relative kind of social network monopoly positioning,

this is the only game in town for the average person. Like, you and I are very online. I've dealt with plenty of abuse, especially on Twitter over the years. And it's fine. I am online, I deal with it, I know how to deal with it. But for just the average Facebook user, and I'm guessing Facebook Blue has gotten more normie than ever,

how are these people going to respond when they start getting flamed with just vile remarks and just feeling horrible? Do they stop using the platform? Do they use it less? So I actually think, to me, and I kind of hope this resolves itself from an actual kind of product and business perspective. Maybe that's like overly capitalistic of me, but I do think that

These platforms will become just miserable and unusable. Again, for me, I know you're still anti-Bluesky and still strong on X, but, I mean... I wouldn't call myself anti-Bluesky. I just don't think there's a big bright future there. Well, no, but when I go on X now,

It's wild. It's crazy. I mean, the UK grooming stuff, and I've been going on it less and less. And then when I go on, just the outright,

insane racism, not even subtle or even funny or whatever. And the more that happens, the less I'm inclined to use it. And that's why I even think Bluesky has a chance: for my own behavior, I use it more and more now, as the other alternative becomes less and less usable. So I do think it's going to be interesting

to see where this goes from an actual business and product perspective, because it all sounds good right now. But is this going to make these things unusable for just a normal person who's just posting about their kids or whatever else on Facebook and then starts dealing with insane content,

insane comments. Well, I did have an observation from the business side of this as well, because it's interesting to me that they're bringing back politics. I mean, the civic content thing, I think to me is the most undercover part of this entire thing. And I think they know that news and politics gives their products engagement they just can't find elsewhere. And so I'll sort of question whether like people flaming each other and blowing each other up in the comments is bad for

Facebook's business. In fact, it seems like Facebook Blue was at its height when the flame wars existed and people would go after each other and they'd be talking about news and politics and whatever it is today just doesn't have the same urgency as it did before. So I think that like

On its face, and I think in truth, this is absolutely a move by Zuckerberg to align Facebook with the current political administration, as it has always done. It always aligns itself with the political conventional wisdom in the country, and I think across the globe. And Zuckerberg has said point blank, our job here is to just reflect what people want. We don't have any higher purpose than that: what people want, we give them.

People said something in the election, Trump won, and therefore he's aligning himself with Trump. However, second order, right now, I think it's largely a business thing as well. I think he realizes that to give Threads a chance against Twitter and to restore urgency to Facebook,

you need news. News equals engagement. And I've said it a thousand times on the show: when you have news, when you have politics, you have engagement, you have urgency. And I think what Zuckerberg is doing is bringing that back into the platform, with all the horrible things that come with it, in order to revive a platform that seems to be on its way to irrelevance in Facebook, and certainly in Threads.

Well, I also think what he's doing, which is cynically savvy, is announcing this just a few days before January 19th, the deadline for TikTok needing to be divested from ByteDance. And listening today to the Supreme Court hearings about the TikTok ban, suddenly it went from Trump

you know, supporting TikTok again, being against a ban, to now, "Mark Zuckerberg is fully aligned with me. He sees it. He knows it's happening." And I mean, you can start to see, and I was listening to the hearings, even the conservative justices seem to be edging towards a ban. So if Trump does not step in on this, I think this could actually be the thing that pushes the ban over into reality. Yeah.

Do you still see that? Do you see the ban happening? Oh, yeah. OK, so today, listening to the Supreme Court hearing, the most fascinating thing: I mean, in this day and age, the nine justices on the Supreme Court being aligned on anything is almost non-existent. They all seem to be leaning in the same direction.

And again, it's not a ban. And actually, Amy Coney Barrett kept saying it, but then also Justice Jackson on the liberal side was saying it. They're all like, this is not a ban. It's a divestiture ban.

like, all you have to do is divest from ByteDance. From a foreign adversary, like a company, why can you not divest? And then the solicitor on the ByteDance/TikTok side keeps saying, this is a matter of free speech.

And they're like, no, it's not speech, just divest. And it's the fact that everyone from both sides, even Clarence Thomas is like, what is exactly, what is TikTok's speech here? I don't understand why a restriction on ByteDance, a Chinese company, represents a limit on TikTok. So I think...

Zuckerberg's move might actually be the defining push, where Trump suddenly is like, okay, I don't have to worry about Meta and Instagram right now. You know what? I can now go after China. Let it rip. Let it rip. Yeah, that's really interesting. And I think, again, we've talked about this on the show, but if TikTok ends up

just shutting down and not divesting, then that's kind of the game right there. In fact, this is probably a good test: are you going to divest, or are you going to shut down? I still, in my heart of hearts, don't believe that this thing is going to get shut down or sold, or, you know, maybe there'll be some declaration of divestment and that'll be that.

However, if it goes through and TikTok says, all right, we're done, it's like, okay, you are a tool of the Chinese Communist Party. But they've already said that. All they had to do was say: we are not going to use the ByteDance algorithm, and completely divest our data, our algorithm, the business from ByteDance. That's not a completely impossible, unreasonable thing. That's the part that to me is still almost confounding.

Why can't they just do that? And that's what you heard even from the justices when they were talking, again, across the aisle. Everyone's like, why can't you just do this? You're saying it's free speech, but it's not unreasonable that you should not be controlled by a foreign adversary, an opaque company whose exact relationship with the Chinese government we don't know, when you control the narrative of American media.

And there's no good answer to that question: why can't they divest? Again, we're only nine days away. It's going to be a fun, crazy nine days on this one. Next week's show is going to be interesting. Yeah. Definitely. I think all it's going to take is one Trump post.

Truth Social? I forget. What are they called again? Truth Social. To say that. No, but what's the actual post called, if a tweet is a tweet? I'm not that far down. I still call X posts tweets, even though they call them posts. Okay. Maybe they're called truths. I don't know. Truths. Truths. That sounds good. One truth saying, let TikTok shut down, and then it's over.

And I think he could be leaning in that direction, given Zuckerberg's announcement, given just the overall picture. Anything that makes him look weak on China is always going to be problematic. And so this was always going to be kind of a delicate situation anyways, versus cleanly saying,

I don't want China involved in anything. Shut it down. Yeah. So speaking of China, should we close on the fact that I spent 15 hours in Beijing on Tuesday on a layover back home and just found it very interesting to be in China? You know, you've been there as well. To me, the thing that really stood out was that it

really is a surveillance state. I mean, there are cameras everywhere. We were driving, myself, my wife, and a guide, out of the airport on our way to the Great Wall to start the day pre-dawn.

And it just felt like every few seconds there was another flash. And I was like, oh, in Australia there are lots of speed cameras, and of course I knew about surveillance in China, but I was like, is that a speed camera? And it's like, nope, that is what they call Eagle Eye, or that's sort of the nickname for it. It just tracks everybody's movement, does facial recognition to see who's in the car. And if you do things that are wrong, they will get you. And

In some places like in Tiananmen Square, every light fixture seemed to have like a dozen cameras or more affixed to it. And it's just this incredible thing that I don't think you fully grasp until you see it, that like wherever you go, you're being watched. It was very interesting to see in person.

Well, I spent three months in Beijing in 2009; I lived there, taking a language class, and that just reminded me of the surveillance. So 2009, not quite digital yet, but it's such a different mindset. It was around the time of swine flu.

And so like already there was like a lot of pandemic-y behaviors going on. And I went out one night and I was a bit hungover the next day and I didn't feel like going to my class. So I emailed the teacher, Alan, I'm feeling a little sick. I don't think I can make it today. Two health officials showed up at my door and

And literally, they did not speak any English. My Chinese still is not very good, and at the time was non-existent. And they literally started yelling at me, asking me all these questions. And that entire mentality of surveillance, of a kind of collective watching, even in the pre-fully-digital era, was there. So I can't even fathom what it is like right now.

Yeah, it's pretty incredible. Like I, at one point, I was like, Oh, where's my phone? Of course, it was in my pocket. And the guy was like, Don't worry, if you left your phone, like a few steps back, you know, it'll still be there because the cameras will make sure that nobody took it. And if somebody takes it, we'll be able to find out where they went. Thank you. Thank you. Thank you. Oh, my goodness. But, um, you know, I mean, outside of that, I just think that it was like a really great visit, honestly, to be able to see China in person. And it's just like, it was

It was pretty cool to see, you know, just a culture that has been, you know, ongoing for so many years, just reflected in the modern day. As you like walk around Beijing, you could really see it with the architecture and sort of like the dress design.

It was just super cool. And obviously like getting a chance to see the Great Wall, which is just like one of the wonders of the world was really special. So I got a 10-year visa to go. And so 15 hours of those 10 years are in the books and I definitely plan to go back. Did you download...

Douyin, the TikTok, the original TikTok based out of China? I should have. I tried to download Alipay and I did get it to work, but I still had some problems with it. Like you can literally not pay with... You need a local phone number, right? Probably. I think that's what it is. Yeah, in Taiwan, I have to deal with the same thing with... Yeah, yeah.

Yeah. So I could not use a credit card. Everything is Alipay. There are so many EVs all around. You see the EVs with the green license plates, and there were so many brands I'd never seen before just driving around. It was almost like sensory overload to see all these things that I had read about in the press: the surveillance, the EVs, the mobile payments, everything.

You know, the Soviet influence even. You'd go from the Forbidden City, which is all ancient Chinese architecture, into Tiananmen Square. And then you look around and there's the Great Hall of the People, which is the most important building, I would say, in the country, where all the government meetings are held. And it's like, oh, that's just straight-up Soviet architecture. And you look at the flag: Soviet themes in the flag. You're there; it's real.

You go to the market, there are sculptures of Ayatollah Khomeini. You're like, okay, that bond is real. It's just a very, very interesting trip. I wrote about this in Big Technology this week, but there have been some debates on whether it's worthwhile to travel. And I just find those debates so absurd because...

Not only do you get a chance to meet people from different backgrounds when you go, but you also are left with so many more questions and a little bit more context of all these things that you see. And so I'm strongly on the affirmative side of go out and see the things that you want to understand. And I'm not going to claim to be a China expert after 15 hours in the country. No, you should, you should. Unlike most VCs on Twitter. But it certainly opened a lot of new questions for me. And it was really, really helpful, I think, to see this in action.

Are you going back? Yes, I got the 10-year visa, so I will go back, as long as things stay kind of calm between the US and China. I hope they do. But yeah, we'll see. I might have to do Big Technology Live in Beijing. Not sure how that would play. Yeah, no, maybe. Maybe we're not coming back. I don't think Xi is going to allow that, but it's a good idea.

All right, Ranjan. Maybe he's a listener. Maybe he's a listener. I don't know. Yeah, maybe that's one of the things that Xi Jinping enjoys. If you're out there, Xi, come on the podcast. Yeah, I would love to have you on. All right. I mean, can you imagine our first head-of-state guest is Xi Jinping? I don't know. I think we'd have people asking questions.

All right, Ranjan, great speaking with you as always. We'll see you next week. See you next week. All right, everybody. Thank you for listening. We're going to be talking Replika on Wednesday, so that'll be a really fascinating conversation. And then Ranjan and I will be back next Friday. Thanks for listening. We'll see you next time on Big Technology Podcast.