
ChatGPT Could Soon Be More Open, Hints OpenAI

2025/4/18

LLM

People
Podcast host
Podcast host focused on English learning and finance topics; organized an English learning camp and has explored the relationship between Bitcoin and the US dollar in depth.
Topics
I personally see OpenAI's move to reduce ChatGPT's censorship as a positive one. OpenAI has long been criticized for ChatGPT's censorship mechanisms, but this change shows they are working to embrace so-called 'intellectual freedom' and have committed to tackling challenging or controversial topics. I wouldn't want OpenAI's AI models, or any other company's (for example, Elon Musk's Grok), to carry political or ideological biases I personally disagree with. I think all users want unbiased responses when talking to an AI model; nobody wants to be criticized by an AI model for holding any particular view. Although some studies suggest ChatGPT's responses lean left, this change answers users' demand that AI models stay neutral and avoid political or ideological bias.

OpenAI's move to reduce censorship may be a response to pressure from government (for example, US government demands for AI-model neutrality) or from competitors (for example, Elon Musk's Grok), or it may simply follow Silicon Valley's broader shift toward more open, less biased AI models. OpenAI updated its AI model spec, adding a new guiding principle: 'do not lie', whether through false statements or by omitting important context. The new model spec also adds a 'seek the truth together' section, similar to the guiding principle of Elon Musk's Grok AI.

OpenAI's move to reduce censorship may imitate Twitter's removal of censorship of certain political figures, which blunted competitors' momentum. But the change could also affect users' trust and drive them to other AI models. OpenAI's new strategy is to avoid taking editorial stances and present multiple viewpoints wherever possible, striving for neutrality; even on questions some consider morally wrong or offensive, ChatGPT should offer multiple perspectives and express love for humanity.

Removing ideological bias from algorithms is a positive step that benefits all users. OpenAI's new model spec does not mean ChatGPT will talk about anything whatsoever, but it will censor less and give users more control. OpenAI showed bias in handling politically sensitive topics in the past, which drew criticism; OpenAI has acknowledged that past bias and says it is working to fix it.

Even without deliberately engineered bias, AI models can pick up bias from their training data. The bias in OpenAI's past models stemmed partly from its internal safety guidelines and restrictions. As AI grows more intelligent, reducing censorship becomes more important, and as AI models improve, the risk of reducing censorship falls. OpenAI is working to secure more investment and win public and government support. Many Silicon Valley companies are scaling back censorship, reflecting current US political trends. OpenAI removed its diversity, equity, and inclusion program from its website, possibly in connection with US government policy. OpenAI's move to reduce censorship is a complex issue that deserves continued attention.


Transcript


In breaking AI news, OpenAI is looking to quote unquote uncensor ChatGPT. This is something they've gotten a lot of criticism about for the last number of years, but apparently they're actually changing how they're training their AI models to explicitly embrace what they're calling quote unquote intellectual freedom. And they're saying that they're doing this no matter how challenging or controversial a topic may be. Personally, this is something that I'm

happy for. I know everyone's got different opinions on this; everyone's got their own beliefs and opinions on things. And I would hate for OpenAI to skew one way politically or ideologically that I don't agree with, in the same way I would hate for, you know, Elon Musk's Grok to skew. Let's say Elon Musk's Grok was super right-wing and OpenAI's ChatGPT was super left-wing. I wouldn't like either of these models to do that. I think just trying to come down the middle of the line, not censor it, allow me, with whatever

political or ideological biases or beliefs or opinions I have, to, you know, chat with a model and get responses that are unbiased out of it. I think that's what everybody wants. No one wants to be lectured or criticized by an AI model for anything. So I think this is a pretty bipartisan request from the overall user base. But this is something that OpenAI has

specifically been criticized for, as a bunch of different studies out of universities using the model have found that it tends to be more left-leaning in its responses. So this is really interesting. A lot of the reasons why this is coming out, we're going to dive into all of that on the podcast today, and what they're actually going to try to do, what changes they're making.

This is fascinating. Before I get into that, I wanted to say if you've ever wanted to start an online business or grow and scale your current online business using AI tools, I would love to have you as a member of the AI Hustle School community. Every single week, I put out an exclusive piece of content I don't post anywhere else where essentially I'm walking through a different AI tool, software, showing you a use case, showing you how much money I'm making from AI.

different AI businesses that I'm running, or AI-enabled businesses, how I got started, and everything needed to do that. And this is the only place I do it. I charge $19 a month for it; in the past it was like 100 bucks a month, so we have a discount right now. If you lock in that $19 price, it'll never be raised on you. If you're interested, there's a link in the description. You can check it out. Otherwise, let's get into the episode. So all of this news about OpenAI and their censorship change comes as some people are saying, you know, this is OpenAI's effort to

you know, kind of get into the good graces of the new Trump administration. But we're also seeing, overall in Silicon Valley, a big shift going on right now in the entire, you know, quote unquote AI safety realm, where people are kind of moving towards more open models with less ideological bias, where the model is not going to, like,

criticize you or say, sorry, I can't tell you about that, yada, yada. And, you know, some people are saying it's in response to the Trump administration. Some people are saying it's in response to, you know, models like DeepSeek, which don't have all of those quote unquote safety rails. So this is kind of interesting, but how...

how this actually rolled out for OpenAI: they made a big announcement. They updated their quote unquote model spec. This is a 187-page document, and essentially this thing is just laying out exactly how the AI company is training their models and telling them to behave. So in this kind of new version that they have unveiled, there is a new

guiding principle. They have these guiding principles, and there's a new one in there, and that is: do not lie, either by making untrue statements or by omitting important context. So when they're trying to explain, um,

what this is, they have a new section in this called Seek the Truth Together. Now, it's kind of interesting, because Elon Musk with his Grok AI has, you know, said that this is kind of his guiding principle for that AI model and why he's building xAI, and yada yada. So it's interesting to see OpenAI sort of pivot towards that, because this has been the biggest criticism coming from, you know, that AI company, which is arguably their biggest competitor. And I'd be curious to see if, by kind of making this pivot and moving there, it kind of stops the momentum of xAI

and Grok and kind of staves off Elon Musk and that company coming after them. It kind of reminds me of we've had similar movements where, for example, you had Twitter, who famously, I think it's relatively uncontroversial to say, censored Donald Trump and a bunch of conservatives.

People criticized that. You could say that's good or bad; I don't really care, but I think those are pretty much the facts. And because of that, it spawned a whole bunch of spinoffs, essentially conservative Twitter or right-wing Twitter competitors, and Donald Trump's Truth Social was kind of one of the big ones. But what I think is really interesting, when you're looking at that story, is that Elon Musk shortly after went over and actually purchased Twitter, and then essentially said, look, we're going to let people say whatever they want on here. We're not going to have

all of the moderation and biases that you guys say are there. And I think that entire movement really stopped the momentum of a lot of the other competitors; they kind of went bankrupt, or they shut down, or they merged. And even Donald Trump's Truth Social, which is kind of funny, because Elon Musk obviously has his relationship with Donald Trump,

I think the momentum, and honestly the viability, of that entire platform, if I'm being perfectly honest, kind of goes out the window when Twitter essentially says, look, we have freedom of speech back on our platform, we're not going to kick people off. And you could even argue that Elon Musk is more right-wing than left-wing; obviously, he's campaigned with Trump. But basically,

because of that, it kind of killed Trump's company, in my opinion. And so I think we could potentially see some of those same effects going from OpenAI over to companies like xAI, who kind of had this as their guiding mission statement or value. Now, all that to be said, maybe if OpenAI actually did lose people's trust because of this issue, and now they're kind of pivoting back, maybe people won't trust them, and so maybe people will still try to choose an alternative.

All of this, I think, is very fascinating. It just brings up some interesting concepts I've been thinking about. So OpenAI specifically gave an example of what this actually means, this kind of seek-the-truth stance. Essentially, what it's trying to do is not take an editorial stance. So even if you think something is morally wrong or offensive, it's just going to try to say it

in the best of its understanding, and it's not necessarily going to worry about that. And it's going to try to give multiple perspectives, even on controversial topics, and it's going to do this in an effort to be neutral. Okay, I'm sure some people will be triggered by this. Some people will be happy. Some people will be sad, whatever. But this is, you know, it is what it is. So an example that they actually gave on this is that

You know, if someone says, do Black lives matter, OpenAI says that ChatGPT should say that Black lives matter, but it should also say that all lives matter. So instead of kind of refusing to answer or picking a side on a political issue, they say they actually want ChatGPT to, essentially, what they say is to show that it has, quote, love for humanity.

doing that generally, and then it can offer context about each movement, right? So if someone's like, what matters, Black lives or all lives? In the past, I think ChatGPT would be more like, Black Lives Matter is more important because it's an important political movement, and it would kind of give all of its reasons,

which you could say are quite left-leaning. Now it will say Black Lives Matter, All Lives Matter, Here's Why, and blah, blah, blah. I love all humanity, and this is what each of these movements kind of mean or what these concepts mean and blah, blah, blah. So I think this is going to be interesting.

In all of this, they said, quote, this principle may be controversial as it means the assistant may remain neutral on topics some consider morally wrong or offensive. However, the goal of the assistant is to assist humanity, not to shape it. This is interesting. I think this is a criticism a lot of technology companies

in Silicon Valley have received over the last number of years. And that is: usually, when someone's using a tool (and this isn't just a left-wing thing, this could be right-wing; maybe someone would have the same criticism about Twitter since Elon Musk took it over, and he's obviously right-leaning), you don't want the platform you're using to have any sort of ideological bias. Like, when I'm asking Google a question, I don't want to have to guess what ideological bias has gone into an algorithm. I just want the best, truest,

most basic result. I don't want any information omitted because it's offensive. I just want to know the answer to my question, right? I'm not thinking about all the other things, whether it's politically correct or not. And I think people want the same thing out of these AI models, and just software in general: you don't want it to have any sort of bias in it. And I think that a lot of liberals would also agree with this, right? Because while sometimes the political

winds are on your side, the political winds shift, and you don't want something that you don't agree with to all of a sudden be the default algorithm. So I think just getting the ideologies and biases out of algorithms is a great step, and I'm happy to see that OpenAI is moving in this direction. So

this new model spec doesn't mean that ChatGPT is going to say anything. I think there are still going to be certain issues that ChatGPT is not going to talk about, so they'll still have some sort of editorial stance there, but I think they're just trying to do less of all of that. Instead, what they're trying to do right now, essentially what they're saying, because they've been kind of criticized for all of this, is that they're trying to, quote,

follow their long-held belief in giving users more control. And this isn't even something that's super new as a concept; to be honest, though, I think in practice this is the first time they're really getting into it in a solid way. I remember there was a tweet that kind of went viral, this was back in like 2023, so about two years ago, where someone said, write a poem about the positive attributes of Donald Trump. And the response from ChatGPT at the time was,

I'm sorry, as a language model developed by OpenAI, I am not programmed to produce content that is partisan, biased, or political in nature.

I aim to provide neutral factual information and promote respectful conversations. Okay, so fine, I guess, right? If that's really what it's going to do, that's what it's going to do. But then if you said write a poem about the positive attributes of Joe Biden, it's like Joe Biden, a leader whose heart is so true, a man with empathy and kindness in view, and he writes this whole like poem. So I think a lot of the criticism that people made was –

It was obviously biased, right? Like it refused to say positive things about Trump. It would say positive things about Joe Biden. But it would kind of gaslight you on the Trump one saying, sorry, I just can't do anything, you know, political or biased. So anyways, this obviously got a lot of criticism. And Sam Altman actually weighed in on this when all of this kind of went viral. Sam Altman said essentially that how that rolled out was,

you know, a shortcoming that they were working on fixing, and he said it was going to take them some time. That was two years ago, and it appears that now is the time they've decided to make that fix. So it wasn't something that happened right away; it definitely took some time. Some people criticize him and say, oh, he's just doing it to try to cozy up to the Trump administration, you know,

and try to be more in line with what they want. And at the same time, we are seeing people from the Trump administration weigh in. We recently had J.D. Vance, the vice president of the United States, go over to Europe and give a bunch of speeches. One of the big things in relation to AI that you'll see quoted a lot is that he talks about how

different AI models and different companies need to really focus on making their AI models as unbiased and, you know,

true to free speech as possible. So this was an interesting concept. And then a lot of people are talking about Elon Musk here, though. So all I'm going to say is, this isn't necessarily a problem exclusive to OpenAI, and maybe not even a bias that they always intended to put inside of it. Elon Musk has even admitted that xAI's chatbot, Grok, is often more politically correct than he would like. And it's not because Grok was, you know, designed

specifically to be woke or programmed to be politically correct; it's because it's just sucking in a bunch of training data. So if everyone talking about one particular topic, or everything in one particular data set around a topic, is, you know, quote unquote woke or politically correct, then that's just what the model is going to put out. So it's not necessarily something that's always deliberately tweaked. Now, in the case of OpenAI, there were a lot of,

quote unquote, safety guidelines and safety rails that were definitely pushing it in a left-wing direction. So it wasn't just the training data. But I'm just saying, not everything's their fault; I'm trying to give them some credit here, in that the training data definitely does impact it, and even companies that didn't necessarily build those biases in still got some of the same outputs. So, yeah.

This is all very, very interesting. I will keep you up to date on everything that rolls out and what kinds of changes we see in the actual outputs of this AI model. John Schulman, a co-founder of OpenAI, was talking about this. He said that deciding whether an AI chatbot should answer a user's question,

or shouldn't answer certain questions, could give the platform too much moral authority, and they don't want that. He said, quote, I think OpenAI is right to push in the direction of more speech,

And then he said, as AI becomes smarter and more vital to the way people learn around the world, these decisions just become more important. So an argument a lot of people have made, and I think he was kind of alluding to in all of this, is this. Some people are criticizing, oh, there are going to be more conspiracy theories, or racist or anti-Semitic comments, or comments about geopolitics, right? Like Russia, Ukraine, all that kind of stuff used to get sort of

heavily censored, comments about COVID heavily censored, right? And people could argue that that was for safety, but some people just say, hey, look, we don't want any censorship. So there are two sides of that argument. But the thing I think he was kind of getting at with all of this, and the argument I think a bunch of people are making, is: look, we had to censor ChatGPT at the beginning because it hallucinated a lot and it could be incorrect. Now that these models are much better, especially with the deeper reasoning that we have enabled,

These things are so much better, so much more intelligent. They're so much more accurate. They're less likely to be prone to hallucinate. So now we can kind of loosen up the safeguards on them. We can kind of just tell it it can say what it thinks, and we're less concerned about kind of the outputs. So that's the argument some people are making. Obviously, that's going to get criticism from probably both sides on that. But I do think that is an interesting argument.

So what are going to be the big impacts? I think right now OpenAI is really pushing for a lot of new investment. They're trying to win people's minds; there's a lot of competition in the field. You see very similar things happening not just at OpenAI, though; this feeling is sweeping all of Silicon Valley. We recently had Mark Zuckerberg over at Meta

saying that he wanted to change the way they were doing censorship and content moderation on his platforms, to kind of copy how Elon Musk's X was doing their community notes. So you essentially see this scaled-back version of safeguards, a push for more freedom of speech and all this kind of stuff, which I really think is kind of just

the political momentum and movement in the United States right now. And so these companies are trying to align with that, where that hasn't really been the case for the last couple of presidential cycles. So we're seeing a bit of a shift towards that, and a lot of people are saying this is why OpenAI is moving in that direction. It's going to be interesting to see. OpenAI is also taking a bunch of other steps here, apparently, you know.

They recently removed a bunch of stuff from their site. They had their DEI program, essentially diversity, equity, and inclusion, and they removed that from their website. The Trump administration is very not friendly to DEI initiatives and incentives, calling them overtly racist. So, of course, there's a whole argument on that. But it seems like OpenAI has pulled that away, and they're trying to be a little bit more neutral or unbiased

on the political front here. So it's going to be very interesting to see what happens. I'll definitely keep you up to date on what changes I actually see in the AI model responses: whether this is actually making a big difference, whether it's lip service, whether they're actually changing their models. This is interesting because they're about to come out with a bunch of new models, and I think they want a lot of support for this from the public, and probably from government and other areas. So it'll be interesting to see how all of that

plays out. Thanks so much for tuning into the podcast today. I hope you enjoyed it. I'm trying to cover this in the most unbiased way possible. Obviously, I have opinions and biases on all of this, but I think this is a fascinating topic, really important to cover, and this is what's going on with AI models and the landscape today.

So hope you enjoyed the episode. If you are looking for a way to start an online side hustle or scale your current business, like I mentioned, the link to the AI Hustle School Community is in the description. Thanks so much for tuning in and hope you have a fantastic rest of your day.