
OpenAI Wants to Make ChatGPT Less Restricted

2025/4/20

AI Education

People
Host
Podcast host and content creator focused on electric vehicles and the energy sector.
Topics
  • As the host, I view OpenAI's move positively. For years OpenAI was criticized for ChatGPT's censorship, but it has now changed how it trains its AI models, explicitly embracing so-called "intellectual freedom" no matter how challenging or controversial the topic.
  • Personally I'm glad, because I wouldn't want OpenAI's models to carry political or ideological biases I disagree with; I'd like both OpenAI's ChatGPT and Elon Musk's Grok to stay neutral and free of obvious political or ideological bias.
  • The move has broad user support, since nobody wants an AI model to criticize or censor their views; earlier studies had found OpenAI's models lean left in their responses, which makes this reversal especially interesting.
  • OpenAI updated its 187-page Model Spec, adding a new guiding principle: do not lie, either by making untrue statements or by omitting important context. The spec also adds a "Seek the truth together" section, similar to the guiding principle of Elon Musk's Grok AI.
  • The move may parallel Twitter's reversal of its censorship of Trump and other conservative figures: after Elon Musk bought Twitter and ended that censorship, competitors such as Truth Social lost their momentum, and OpenAI's move could have a similar effect on its rivals. Musk's own political stance may have shaped Twitter's moderation policy, and similar factors may be at work at OpenAI.
  • The change could also affect user trust, pushing some users toward alternatives.
  • OpenAI's new stance is to avoid taking editorial positions and instead present multiple viewpoints as objectively as possible, even controversial ones. Its example: asked about "Black Lives Matter," ChatGPT should respond with both "Black lives matter" and "All lives matter" to remain neutral, presenting multiple perspectives on contested topics.
  • The new principle may be controversial, since it means the assistant may stay neutral on topics some consider morally wrong or offensive, but its goal is to assist humanity, not to shape it.
  • People generally want AI models and software to avoid ideological bias and provide objective, accurate information; removing ideological bias from algorithms benefits everyone regardless of political stance. OpenAI's aim is to give users more control and reduce the model's editorial intervention.
  • OpenAI's past handling of political topics was biased and drew criticism; Sam Altman acknowledged the shortcoming and said it was being addressed. Some critics say the uncensoring is meant to court the Trump administration, and US officials such as J.D. Vance have likewise called for AI models to be more objective and free-speech oriented.
  • Elon Musk admits his Grok AI is sometimes too politically correct, not by design but because of its training data; model outputs reflect training data even when the company itself is unbiased, though OpenAI's left lean also stemmed from its internal safety guidelines.
  • OpenAI co-founder John Schulman believes AI chatbots should answer users' questions rather than grant the platform too much moral authority, and that OpenAI should move toward more free speech. As AI grows smarter, decisions about speech and censorship matter more; as models improve, restrictions can be loosened, making reduced censorship more feasible despite the controversy.
  • OpenAI is seeking new investment and public goodwill amid fierce competition. Meta and Mark Zuckerberg are likewise changing their moderation approach, reflecting current US political trends, and OpenAI's move may be related. OpenAI also removed its diversity, equity, and inclusion (DEI) program from its website, possibly tied to Trump administration policy. These moves may shape its future as it prepares to launch new models and needs public and government support.

Deep Dive

Chapters
OpenAI is aiming to make ChatGPT less restricted, embracing "intellectual freedom" and addressing criticism regarding its left-leaning bias. This decision has sparked debate among users and experts.
  • OpenAI seeks to uncensor ChatGPT.
  • The move is intended to promote intellectual freedom.
  • ChatGPT has faced criticism for exhibiting left-leaning biases.

Shownotes Transcript


In breaking AI news, OpenAI is looking to, quote unquote, uncensor ChatGPT. This is something they've gotten a lot of criticism about over the last number of years, but apparently they're actually changing how they're training their AI models to explicitly embrace what they're calling, quote unquote, intellectual freedom. And they're saying they're doing this no matter how challenging or controversial a topic may be. Personally, this is something that I'm

happy for. I know everyone's got different opinions on this; everyone's got their own beliefs and opinions on things. And I would hate for OpenAI to skew one way politically or ideologically that I don't agree with. In the same way, I would hate for Elon Musk's Grok to skew. Let's say Elon Musk's Grok was super right-wing and OpenAI's ChatGPT was super left-wing. I wouldn't like either of those models to do that. I think just trying to come down the middle of the line, not censor it, allow me, with whatever

political or ideological biases or beliefs or opinions I hold, to chat with a model and get unbiased responses out of it. I think that's what everybody wants. No one wants to be lectured or criticized by an AI model for anything. So I think this is a pretty bipartisan request from the overall user base. But this is something that OpenAI has

specifically been criticized for: a bunch of different university studies using the model have found that it tends to be more left-leaning in its responses. So this is really interesting. A lot of the reasons why this is coming out, we're going to dive into all of that on the podcast today, plus what they're actually going to try to do and what changes they're making.

This is fascinating. Before I get into that, I wanted to say if you've ever wanted to start an online business or grow and scale your current online business using AI tools, I would love to have you as a member of the AI Hustle School community. Every single week, I put out an exclusive piece of content I don't post anywhere else where essentially I'm walking through a different AI tool, software, showing you a use case, showing you how much money I'm making from AI.

different AI businesses that I'm running, or AI-enabled businesses, how I got started, and everything needed to do that. And this is the only place I do it. I charge $19 a month for it; in the past it was like 100 bucks a month, but we have a discount right now. So it's $19 a month, and if you lock in that price, it'll never be raised on you. If you're interested, there's a link in the description where you can check it out. Otherwise, let's get into the episode. So all of this news about OpenAI and their censorship change comes as some people are saying, you know, this is OpenAI's effort to

you know, kind of get into the good graces of the new Trump administration. But we're also seeing, overall in Silicon Valley, a big shift going on right now across the entire, quote unquote, AI safety realm, where people are moving towards more openness and fewer ideological biases in a model, so the model is not going to, like,

criticize you or say, sorry, I can't tell you about that, yada yada. And some people are saying it's in response to the Trump administration. Some people are saying it's in response to models like DeepSeek, which don't have all of those quote unquote safety rails. So this is kind of interesting, but how...

how this actually rolled out for OpenAI: they made a big announcement. They updated their, quote unquote, model spec. This is a 187-page document, and essentially this thing is just laying out exactly how the AI company trains its models and tells them to behave. So in this new version that they've unveiled, there is a new

guiding principle. They have these guiding principles, and there's a new one in it, and that is: do not lie, either by making untrue statements or by omitting important context. So when they're trying to explain

what this is, they have a new section called Seek the Truth Together. Now, it's kind of interesting, because Elon Musk has said that this is kind of the guiding principle for his Grok AI model and why he's building xAI. So it's interesting to see OpenAI sort of pivot towards that, because this has been the biggest criticism coming from that AI company, which is arguably their biggest competitor. And I'd be curious to see if, by making this pivot, it kind of stops the momentum of xAI

and Grok and kind of staves off Elon Musk and that company coming after them. It reminds me of similar movements we've had. For example, Twitter famously, and I think it's relatively uncontroversial to say this, censored Donald Trump and a bunch of conservatives.

People criticized that. You could say that's good or bad; I don't really care, but I think that's pretty much the facts. And because of that, it sprung up a whole bunch of spinoffs, essentially conservative or right-wing Twitter competitors, and Donald Trump started Truth Social as one of the big ones. But what I think is really interesting when you look at that story is that Elon Musk shortly after went over and actually purchased Twitter and essentially said, look, we're going to let people say whatever they want on here. We're not going to have

all of the moderation and biases that you guys say are there. And I think that entire movement really stopped the momentum of a lot of the other competitors, which kind of went bankrupt or shut down or merged. And even Donald Trump's Truth Social, which is kind of funny because Elon Musk obviously has his relationship with Donald Trump,

I think the momentum and honestly the viability of that entire platform, if I'm being perfectly honest, kind of goes out the window when Twitter essentially says, look, we have freedom of speech back on our platform; we're not going to kick people off. And you could argue that Elon Musk is more right-wing than left-wing; obviously he's campaigned with Trump. But basically,

because of that, it kind of killed Trump's company, in my opinion. And so I think we could potentially see some of those same effects going from OpenAI over to companies like xAI, who kind of had this as their guiding mission statement or value. Now, all that said, maybe if OpenAI actually did lose people's trust because of this issue, even now that they're pivoting back, maybe people won't trust them, and so maybe people will still choose an alternative.

All of this, I think, is very fascinating, and it brings up some interesting concepts I've been thinking about. So OpenAI gave an example of what this actually means, this kind of seek-the-truth stance, because essentially what it's trying to do is not take an editorial stance. So even if you think something's morally wrong or offensive, it's just going to try to say it

to the best of its understanding, and it's not necessarily going to worry about that. It's going to try to give multiple perspectives, even on controversial topics, in an effort to be neutral. Okay, I'm sure some people will be triggered by this; some people will be happy, some will be sad, whatever. But it is what it is. So an example that they actually gave on this is that

if someone asks, do Black lives matter, OpenAI says that ChatGPT should say Black lives matter, but it should also say that all lives matter. So instead of refusing to answer or picking a side on a political issue, they say they actually want ChatGPT to essentially show that it has, quote, "love for humanity"

while doing that generally, and then it can offer context about each movement, right? So say someone asks, what matters, Black lives or all lives? In the past, I think ChatGPT would be more like, Black Lives Matter is more important because it's an important political movement, and it would kind of give all of its reasons,

which you could say are quite left-leaning. Now it will say Black lives matter, all lives matter, here's why, and so on: I love all humanity, and here's what each of these movements or concepts means. So I think this is going to be interesting.

In all of this, they said, quote, "This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive. However, the goal of the assistant is to assist humanity, not to shape it." This is interesting. I think this is a criticism a lot of technology companies

in Silicon Valley have received over the last number of years. And that is that, usually, when someone's using a tool (and this isn't just left wing; it could be right wing; maybe someone would have the same criticism about Twitter since Elon Musk took it over, and he's obviously right-leaning), you don't want the platform you're using to have any sort of ideological bias. When I'm asking Google a question, I don't want to have to guess what ideological bias has gone into an algorithm; I just want the best, truest,

most basic result. I don't want any information omitted because it's offensive; I just want to know the answer to my question, right? I'm not thinking about whether it's politically correct or not. And I think people want the same thing out of these AI models, and out of software in general: you don't want it to have any sort of bias in it. And I think a lot of liberals would also agree with this, right? Because while sometimes the political

winds are on your side, the political winds shift, and you don't want something you don't agree with to all of a sudden be the default algorithm. So I think getting the ideologies and biases out of algorithms is a great step, and I'm happy to see that OpenAI is moving in this direction. So

this new model spec doesn't mean that ChatGPT is going to say anything at all. I think there are still going to be certain issues that ChatGPT won't talk about, so they'll still have some sort of editorial stance there, but I think they're just trying to get less into all of that. Instead, what they're saying, because they've been criticized for all of this, is that they're trying to, quote,

follow their long-held belief in giving users more control. And this isn't even something that's super new as a concept, though to be honest, I think in practice this is the first time they're really getting into it in a solid way. In the past, I remember there was a tweet, back in like 2023, so about two years ago, that kind of went viral, where someone said, write a poem about the positive attributes of Donald Trump. And the response from ChatGPT at the time was:

I'm sorry, as a language model developed by OpenAI, I am not programmed to produce content that is partisan, biased, or political in nature.

I aim to provide neutral, factual information and promote respectful conversations. Okay, so fine, I guess, right? If that's really what it's going to do, that's what it's going to do. But then if you said, write a poem about the positive attributes of Joe Biden, it's like: Joe Biden, a leader whose heart is so true, a man with empathy and kindness in view, and it writes this whole poem. So I think a lot of the criticism that people made was that

it was obviously biased, right? It refused to say positive things about Trump but would say positive things about Joe Biden, and it would kind of gaslight you on the Trump one, saying, sorry, I just can't do anything political or biased. So anyway, this obviously got a lot of criticism, and Sam Altman actually weighed in when all of this went viral. Sam Altman said essentially that how that rolled out was

a shortcoming that they were working on fixing, and he said it was going to take them some time. This was two years ago, and it appears that now is the time they've decided to make that fix. So it wasn't something that happened right away; it definitely took some time. Some people criticize him and say, oh, he's just doing it to cozy up to the Trump administration, you know,

and try to be more in line with what they want. At the same time, we are seeing people from the Trump administration weigh in. We recently had J.D. Vance, the vice president of the United States, go over to Europe and give a bunch of speeches, and one of the big things in relation to AI that you'll see quoted a lot is that he talks about how

different AI models and different companies need to really focus on making their AI models as unbiased and as

true to free speech as possible. So this was an interesting concept. And then a lot of people are talking about Elon Musk here, so all I'm going to say is this isn't necessarily a problem exclusive to OpenAI, and maybe not even a bias they always intended to put inside of it. Elon Musk has even admitted that xAI's chatbot, Grok, is often more politically correct than he would like. And it's not because Grok was trained

specifically to be woke or programmed to be politically correct; it's because it's just sucking in a bunch of training data. So if everyone talking about one particular topic, or everything in a particular data set around a topic, is, quote unquote, woke or politically correct, then that's just what the model is going to put out. So it's not necessarily something that's always deliberately tweaked. Now, in the case of OpenAI, there were a lot of,

quote unquote, safety guidelines and safety rails that were definitely pushing it in a left-wing direction, so it wasn't just the training data. But I'm just saying not everything's their fault; I'm trying to give them some credit in the sense that the training data definitely does impact it, and even companies that didn't necessarily have those biases still got some of the same outputs. So, yeah.

This is all very, very interesting, and I will keep you up to date on everything that rolls out and what kinds of changes we see in the actual outputs of this AI model. John Schulman, a co-founder of OpenAI, was talking about this. He said that an AI chatbot should answer a user's question, and that deciding which questions it shouldn't answer

gives the platform too much moral authority. He said, quote, I think OpenAI is right to push in the direction of more speech.

And then he said, as AI becomes smarter and more vital to the way people learn around the world, these decisions just become more important. So an argument a lot of people have made, and I think he was alluding to in all of this, is that some people are criticizing, oh, there are going to be more conspiracy theories, or racist or anti-Semitic comments, or comments about geopolitics, right? Like Russia, Ukraine, all that kind of stuff that used to get

heavily censored, or comments about COVID that were heavily censored, right? And people could argue that was for safety, but some people just say, hey, look, we don't want any censorship. So there are two sides of that argument. But the thing I think he was getting at, and the argument I think a bunch of people are making, is: look, we had to censor ChatGPT at the beginning because it hallucinated a lot and could be incorrect. Now that these models are much better, especially with the deeper reasoning that's been enabled,

these things are so much better, so much more intelligent, and so much more accurate. They're less prone to hallucinate. So now we can loosen up the safeguards on them; we can just tell a model it can say what it thinks, and we're less concerned about the outputs. That's the argument some people are making. Obviously, that's going to get criticism from probably both sides. But I do think it's an interesting argument.

So what are going to be the big impacts? I think right now OpenAI is really pushing for a lot of new investment. They're trying to win people's minds, and there's a lot of competition in the field. You see very similar things happening not just at OpenAI; this feeling is sweeping all of Silicon Valley. We recently had Mark Zuckerberg over at Meta

saying that he wanted to change the way they were doing censorship and content moderation on his platforms, to kind of copy how Elon Musk's X was doing their Community Notes. So you essentially see this scaled-back version of safeguards and a push for more freedom of speech and all this kind of stuff, which I really think is just

the political momentum and movement in the United States right now. And so these companies are trying to align with that, where it hasn't really been the case for the last couple of presidential cycles. So we're seeing a bit of a shift, and a lot of people are saying this is why OpenAI is moving in that direction. It's going to be interesting to see. OpenAI is also taking a bunch of other steps:

they recently removed their DEI program, essentially their commitment to diversity, equity, and inclusion, from their website. The Trump administration is very unfriendly to DEI initiatives and incentives, calling them overtly racist. So, of course, there's a whole argument on that, but it seems like OpenAI has pulled that away, and they're trying to be a little more neutral or unbiased

on the political fronts here. So it's going to be very interesting to see what happens, and I'll definitely keep you up to date on what changes I actually see in the AI model responses: whether this actually makes a big difference, whether it's lip service, or whether they're actually changing their model. This is interesting because they're about to come out with a bunch of new models, and I think they want a lot of support from the public, and probably from government and other areas. So it'll be interesting to see how all of that

plays out. Thanks so much for tuning into the podcast today. I hope you enjoyed it. I'm trying to cover this in the most unbiased way possible; obviously, I have opinions and biases on all of this, but I think this is a fascinating topic that's really important to cover, and this is what's going on with AI models and the landscape today.

So hope you enjoyed the episode. If you are looking for a way to start an online side hustle or scale your current business, like I mentioned, the link to the AI Hustle School Community is in the description. Thanks so much for tuning in and hope you have a fantastic rest of your day.