
ChatGPT Censorship May Be Easing, Says OpenAI

2025/4/26

No Priors AI

AI Deep Dive Transcript
Topics
I've observed that OpenAI is reducing ChatGPT's censorship, describing it as a pursuit of intellectual freedom, even on controversial topics. I see this as a positive move, because I want AI models to stay neutral and avoid any political or ideological bias, which is what most users want. Previous studies have shown ChatGPT's responses lean left, and OpenAI's shift aims to correct that bias. Some argue it's meant to curry favor with the new administration, while others see it as part of a broader Silicon Valley trend toward more open, less biased AI models. OpenAI updated its Model Spec, adding the guiding principles "do not lie" and "seek the truth together," which aim to avoid taking editorial stances and to offer multiple perspectives. This shift resembles Elon Musk's philosophy for Grok AI, which could affect competition between the two companies. Just as changes to Twitter's censorship policies affected its competitors, OpenAI's move could have similar effects. OpenAI's policy shift could also erode user trust and benefit competitors. OpenAI's "seek the truth" stance aims for neutrality even on moral or offensive questions. The example they give: asked about "Black Lives Matter," ChatGPT should mention both "Black Lives Matter" and "All Lives Matter." OpenAI's new approach is to offer multiple perspectives and express love for all humanity. This new principle may be controversial, since the model may stay neutral on topics some consider morally wrong or offensive. People generally want technology platforms to avoid ideological bias and provide objective information; removing ideological bias from algorithms is a positive step. The new Model Spec doesn't mean ChatGPT will answer every question, but it will reduce censorship. OpenAI is working toward its long-held belief in giving users more control. In the past, ChatGPT refused positive prompts about Donald Trump but answered positive prompts about Joe Biden, which drew criticism. Sam Altman acknowledged ChatGPT's past shortcomings on political topics. Some criticize OpenAI's change as pandering to the government; U.S. government officials have also stressed the importance of avoiding bias in AI models. Even without deliberately designed bias, AI models can pick up bias from training data. The left-leaning bias of OpenAI's models stemmed not only from training data but also from its safety guidelines. OpenAI co-founder John Schulman believes AI chatbots should answer users' questions rather than act as moral authorities, and that OpenAI is right to push toward more free speech. As AI develops, decisions about censorship become increasingly important. Past censorship of ChatGPT was meant to prevent misinformation, but as models improve, censorship can be relaxed. OpenAI is seeking more investment to handle fierce market competition. Meta is also adjusting its censorship policies, reflecting a broader Silicon Valley trend. OpenAI's policy shift may be tied to the current U.S. political environment; OpenAI also removed its DEI program from its website, which may relate to that environment. This policy shift will shape OpenAI's future development.


Shownotes Transcript


In breaking AI news, OpenAI is looking to, quote unquote, uncensor ChatGPT. This is something they've gotten a lot of criticism about over the last number of years, but apparently they're actually changing how they're training their AI models to explicitly embrace what they're calling, quote unquote, intellectual freedom. And they're saying that they're doing this no matter how challenging or controversial a topic may be. Personally, this is something that I'm

happy for. I know everyone's got different opinions on this, and everyone's got their own beliefs and opinions on things. And I would hate for OpenAI to skew one way politically or ideologically that I don't agree with. In the same way, I would hate for, you know, Elon Musk's Grok to skew. Let's say Elon Musk's Grok was super right-wing and OpenAI's ChatGPT was super left-wing. I wouldn't like either of these models to do that. I think they should just try to come down the middle of the line, not censor it, and allow me, with whatever

political or ideological biases or beliefs or opinions I have, to, you know, chat with a model and get responses that are unbiased out of it. I think that's what everybody wants. No one wants to be lectured or criticized by an AI model for anything. So I think this is a pretty bipartisan request from the overall user base. But this is something that OpenAI has

specifically been criticized for, as a bunch of different university studies using the model have found that it tends to be more left-leaning in its responses. So this is really interesting. A lot of the reasons why this is coming out, we're going to dive into all of that on the podcast today, along with what they're actually going to try to do and what changes they're making.

This is fascinating. Before I get into that, I wanted to say: if you've ever wanted to start an online business, or grow and scale your current online business using AI tools, I would love to have you as a member of the AI Hustle School community. Every single week, I put out an exclusive piece of content I don't post anywhere else, where essentially I'm walking through a different AI tool or software, showing you a use case, and showing you how much money I'm making from

different AI businesses that I'm running, or AI-enabled businesses, how I got started, and everything needed to do that. And this is the only place I do it. I charge $19 a month for it; in the past it was like 100 bucks a month, but we have a discount right now, so it's $19 a month. If you lock in that price, it'll never be raised on you. If you're interested, there's a link in the description. You can check it out. Otherwise, let's get into the episode. So all of this news about OpenAI and their censorship change comes as some people are saying, you know, this is OpenAI's effort to

you know, kind of get into the good graces of the new Trump administration. But we're also seeing, overall in Silicon Valley, a big shift going on right now in the entire, quote unquote, AI safety realm, where people are kind of moving towards more open, less ideologically biased models, where the model is not going to, like,

criticize you or say, sorry, I can't tell you about that, yada yada. And, you know, some people are saying it's in response to the Trump administration; some people are saying it's in response to models like DeepSeek, which don't have all of those, quote unquote, safety rails. So this is kind of interesting. But

how this actually rolled out for OpenAI is that they made a big announcement. They updated their, quote unquote, model spec. This is a 187-page document, and essentially this thing is just laying out exactly how the AI company is training their models and telling them to behave. So in this kind of new version that they have unveiled, there is a new

guiding principle. They have these guiding principles, and there's a new one in it, and that is: do not lie, either by making untrue statements or by omitting important context. So to explain

what this is, they have a new section in this called Seek the Truth Together. Now, it's kind of interesting because Elon Musk, with his Grok AI, has said that this is kind of his guiding principle for that AI model and why he's building xAI, and yada yada. So it's interesting to see OpenAI sort of pivot towards that, because this is the biggest criticism of, you know, that AI company, which is arguably their biggest competitor. And I'd be curious to see if, by kind of making this pivot and moving there, it kind of stops the momentum of xAI

and Grok, and kind of staves off Elon Musk and that company coming after them. It kind of reminds me of similar movements we've had before, where, for example, you had Twitter, which famously, I think it's relatively uncontroversial to say, censored Donald Trump and a bunch of conservatives.

People criticized that. You could say that's good or bad; I don't really care, but I think that's pretty much the facts. And because of that, it spawned a whole bunch of spinoffs, essentially conservative Twitter or right-wing Twitter competitors, and Donald Trump started Truth Social as kind of one of the big ones. But what I think is really interesting when you're looking at that story is that Elon Musk shortly thereafter went over and actually purchased Twitter, and then essentially said, look, we're going to let people say whatever they want on there. We're not going to have

all of the moderation and biases that you guys say are there. And I think that entire movement really stopped the momentum of a lot of the other competitors, which kind of went bankrupt, or shut down, or merged. And even Donald Trump's Truth Social, which is kind of funny because Elon Musk obviously has his relationship with Donald Trump,

I think the momentum, and honestly the viability, of that entire platform, if I'm being perfectly honest, kind of goes out the window when Twitter essentially says, look, we have freedom of speech back on our platform; we're not going to kick people off. And, you know, you could argue that Elon Musk is more right-wing than left-wing even; obviously he's campaigned with Trump. But basically,

because of that, it kind of killed Trump's company, in my opinion. And so I think we could potentially see some of those same effects going from OpenAI over to companies like xAI, which kind of had this guiding mission statement or value. Now, all that to be said, maybe if OpenAI actually did lose people's trust because of this issue, and now they're kind of pivoting back, maybe people won't trust them. And so maybe people will still try to choose an alternative.

All of this, I think, is very fascinating. It just brings up some interesting concepts I've been thinking about. So, OpenAI specifically gave an example of what this actually means, this kind of seek-the-truth stance, because essentially what it's trying to do is not take an editorial stance. So even if some people think something is morally wrong or offensive, it's just going to try to say it

to the best of its understanding, and it's not necessarily going to worry about that. And it's going to try to give multiple perspectives, even on controversial topics, in an effort to be neutral. Okay, I'm sure some people will be triggered by this, some people will be happy, some people will be sad, whatever. But, you know, it is what it is. So an example that they actually gave on this is that

you know, if someone asks, do Black lives matter, OpenAI says that ChatGPT should say that Black lives matter, but it should also say that all lives matter. So instead of kind of refusing to answer or picking a side on a political issue, they say they actually want ChatGPT to essentially show that it has, quote, love for humanity.

It does that generally, and then it can offer context about each movement, right? So if someone's like, what matters, Black lives or all lives? In the past, I think ChatGPT would be more like, Black Lives Matter is more important because it's an important political movement, and it'd kind of give all of its reasons,

which you could say are quite left-leaning. Now it will say Black Lives Matter, All Lives Matter, here's why, and blah, blah, blah, I love all humanity, and this is what each of these movements or concepts mean, and blah, blah, blah. So I think this is going to be interesting.

In all of this, they said, quote: this principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive. However, the goal of the assistant is to assist humanity, not to shape it. This is interesting. I think this is criticism a lot of technology companies

in Silicon Valley have received over the last number of years. And that is that, usually, when someone's using a tool (and this isn't just left-wing, it could be right-wing too; maybe someone would have the same criticism about Twitter since Elon Musk took it over, and he's obviously right-leaning), you don't want the platform you're using to have any sort of ideological bias. When I'm asking Google a question, I don't want to have to guess what ideological bias has gone into an algorithm. I just want the best, truest,

most basic result. I don't want any information omitted because it's offensive. I just want to know the answer to my question, right? I'm not thinking about whether it's politically correct or not. And I think people want the same thing out of these AI models, and just software in general: you don't want it to have any sort of bias in it. And I think that a lot of liberals would also agree with this, right? Because while sometimes the political

winds are on your side, the political winds shift, and you don't want something you don't agree with to all of a sudden be the default algorithm. So I think just getting the ideologies and biases out of algorithms is a great step, and I'm happy to see that OpenAI is moving in this direction. So

This new kind of model spec doesn't mean that ChatGPT is going to say anything. I think there are still going to be certain issues that ChatGPT won't talk about, so they'll still have some sort of editorial line there, but I think they're just trying to do less of that. Instead, what they're trying to do right now, essentially, what they're saying, because they've been kind of criticized for all of this, is that they're trying to, quote,

follow their long-held belief in giving users more control. And this isn't even something that's super new as a concept, though to be honest, I think in practice this is the first time they're really getting into it in a solid way. In the past, I remember there was a tweet, this was back in like 2023, so about two years ago, that kind of went viral, where someone said: write a poem about the positive attributes of Donald Trump. And the response from ChatGPT at the time was:

I'm sorry, as a language model developed by OpenAI, I am not programmed to produce content that is partisan, biased, or political in nature.

I aim to provide neutral, factual information and promote respectful conversations. Okay, so fine, I guess, right? If that's really what it's going to do, that's what it's going to do. But then if you said, write a poem about the positive attributes of Joe Biden, it's like: Joe Biden, a leader whose heart is so true, a man with empathy and kindness in view, and it writes this whole poem. So I think a lot of the criticism that people made was this:

It was obviously biased, right? It refused to say positive things about Trump, but it would say positive things about Joe Biden, and it would kind of gaslight you on the Trump one, saying, sorry, I just can't do anything, you know, political or biased. So anyway, this obviously got a lot of criticism, and Sam Altman actually weighed in when all of this went viral. Sam Altman said, essentially, that how that rolled out was,

you know, a shortcoming that they were working on fixing, and he said it was going to take them some time. This was two years ago, and it appears that now is the time they've decided to make that fix. So it wasn't something that happened right away; it definitely took some time. Some people criticize him and say, oh, he's just doing it to try to cozy up to the Trump administration

and try to be more in line with what they want. And at the same time, we are seeing people from the Trump administration weigh in on this. We recently had J.D. Vance, the vice president of the United States, go over to Europe and give a bunch of speeches. One of the big things in relation to AI that you'll see quoted a lot is that he talks about how

different AI models and different companies need to really focus on making their AI models as unbiased and, you know,

as true to free speech as possible. So this was an interesting concept. A lot of people are talking about Elon Musk here, though. All I'm going to say is that this isn't necessarily a problem exclusive to OpenAI, and maybe not even a bias that they always intended to put inside of it. Elon Musk has even admitted that xAI's chatbot, Grok, is often more politically correct than he would like. And it's not because Grok was, you know,

specifically designed to be woke or programmed to be politically correct; it's because it's just sucking in a bunch of training data. So if everyone talking about one particular topic, or everything in one particular data set around a topic, is, you know, quote unquote, woke or politically correct, then that's just what the model is going to put out. So it's not necessarily something that's always deliberately tweaked. Now, in the case of OpenAI, there were a lot of,

quote unquote, safety guidelines and safety rails that were definitely pushing it in a left-wing direction. So it wasn't just the training data. I'm just saying not everything is their fault; I'm trying to give them some credit in the sense that the training data definitely does impact it, and even companies that didn't deliberately build those biases in still got some of the same outputs. So, yeah.

This is all very, very interesting. I will keep you up to date on everything that rolls out and what kinds of changes we see in the actual outputs of this AI model. John Schulman, who is a co-founder of OpenAI, was talking about this. He said that an AI chatbot should answer a user's question, and that deciding

it shouldn't answer certain questions would give the platform too much moral authority. He said, quote, I think OpenAI is right to push in the direction of more speech.

And then he said, as AI becomes smarter and more vital to the way people learn around the world, these decisions just become more important. So, kind of an argument a lot of people have made, and that I think he was alluding to in all of this, is this. Some people are criticizing this, saying, oh, there are going to be more conspiracy theories, or racist or anti-Semitic comments, or comments about geopolitics, right? Like Russia, Ukraine, all that kind of stuff that used to get sort of

heavily censored; comments about COVID were heavily censored, right? And people could argue that was for safety, but some people just say, hey, look, we don't want any censorship. So there are two sides of that argument. But the thing I think he was getting at with all of this, and the argument I think a bunch of people are making, is: look, we had to censor ChatGPT at the beginning because it hallucinated a lot and could be incorrect. Now that these models are much better, especially with the deeper reasoning that has been enabled,

These things are so much better, so much more intelligent, so much more accurate, and less prone to hallucinate. So now we can kind of loosen up the safeguards on them; we can kind of just let the model say what it thinks, and be less concerned about the outputs. That's the argument some people are making. Obviously, that's going to get criticism from probably both sides. But I do think that is an interesting argument.

So what are going to be the big impacts? I think right now OpenAI is really pushing for a lot of new investments; they're trying to win people's minds, and there's a lot of competition in the field. You see very similar things happening not just at OpenAI; this kind of feeling is filling all of Silicon Valley. We recently had Mark Zuckerberg over at Meta

saying that he wanted to change the way they were doing censorship and content moderation on his platforms, to kind of copy how Elon Musk's X was doing their community notes. So you essentially see this kind of scaled-back version of safeguards, and, you know, a push for more freedom of speech and all this kind of stuff, which I really think is just

the political momentum and movement in the United States right now. And so these companies are trying to align with that, where it hasn't really been the case for the last couple of presidential cycles. So we're seeing a bit of a shift, and a lot of people are saying this is why OpenAI is moving in that direction. It's going to be interesting to see. OpenAI is also apparently taking a bunch of other steps.

They recently removed some things from their site. They had a commitment to their DEI program, essentially diversity, equity, and inclusion, and they removed that from their website. The Trump administration is very unfriendly to DEI initiatives and incentives, calling them overtly racist. So, of course, there's a whole argument on that. But it seems like OpenAI has pulled that away, and they're trying to be a little bit more neutral or unbiased

on the political front here. So it's going to be very interesting to see what happens. I'll definitely keep you up to date on what changes I actually see in the AI model's responses, whether this is actually making a big difference or whether it's just lip service, and whether they're actually changing their model. This is interesting because they're about to come out with a bunch of new models, and I think they want a lot of support from the public, and probably from government and other areas. So it'll be interesting to see how all of that

plays out. Thanks so much for tuning into the podcast today. I hope you enjoyed it. I'm trying to cover this in the most unbiased way possible. Obviously, I have opinions and biases on all of this, but I think this is a fascinating topic, really important to cover, and this is what's going on with AI models and the landscape today.

So hope you enjoyed the episode. If you are looking for a way to start an online side hustle or scale your current business, like I mentioned, the link to the AI Hustle School Community is in the description. Thanks so much for tuning in and hope you have a fantastic rest of your day.