People
Jaeden Schafer
Topics
Personally, I take a positive view of OpenAI removing censorship from ChatGPT. I think it's very important to have an AI model that doesn't carry political or ideological biases I disagree with. I wouldn't want AI models from OpenAI or any other company (for example, Elon Musk's Grok) to be biased toward either side. I want AI models to stay neutral and avoid censorship, so I can converse with a model and get impartial responses. I believe that's what most users want; nobody wants to be criticized by an AI model. Although OpenAI's models have been criticized in the past as left-leaning, this change is interesting. Some people think OpenAI is doing this to curry favor with the Trump administration, while others see it as part of an industry-wide shift toward more openness and less bias.

OpenAI updated its 187-page model spec document, which now includes a new guiding principle: do not lie, whether through untrue statements or by omitting important context. They also added a new section called "Seek the Truth Together." This is similar to the guiding principle of Elon Musk's Grok AI, and OpenAI's new policy may blunt the momentum of competitors like xAI and Grok. It reminds me of what happened after Twitter rolled back censorship, when many of its competitors failed. OpenAI's new policy aims to avoid taking an editorial stance and instead offer multiple perspectives, even on controversial topics. For example, if someone asks about "Black Lives Matter," ChatGPT should acknowledge both "Black Lives Matter" and "All Lives Matter" to stay neutral. The new policy may prove controversial, since the model could remain neutral on questions some consider morally wrong or offensive. People want AI models and software to avoid ideological bias and provide objective, accurate information, and removing ideological bias from algorithms is a positive step. The new model spec doesn't mean ChatGPT will answer every question, but it will reduce censorship. OpenAI used to give users little control; this change is their first serious attempt to give users more of it.

In the past, ChatGPT refused to answer positive questions about Trump but would answer positive questions about Biden, which showed its bias. Sam Altman acknowledged ChatGPT's past shortcomings in handling political questions and said they were working to fix them. Some critics say OpenAI is removing censorship to appease the Trump administration. U.S. Vice President J.D. Vance has stressed that AI models should be as unbiased as possible and has championed free speech. Even without deliberately designed bias, AI models can pick up bias from their training data; OpenAI's left-leaning bias came not only from training data but also from its safety guidelines and restrictions. OpenAI co-founder John Schulman believes AI chatbots should answer users' questions rather than play the role of a moral authority, and that OpenAI should push toward more open speech. Concerns about removing censorship include the possibility of more conspiracy theories, racist speech, and so on; but as AI models become more accurate, their safety restrictions can be loosened. OpenAI is seeking more investment and faces fierce industry competition. Meta is also following Elon Musk's lead in scaling back content moderation, and tech companies are adjusting their strategies to fit the current political environment in the United States. OpenAI has removed its diversity, equity, and inclusion (DEI) program from its website. I've tried to report on OpenAI's removal of censorship as objectively as possible.


Shownotes Transcript


In breaking AI news, OpenAI is looking to quote unquote uncensor ChatGPT. This is something they've gotten a lot of criticism about for the last number of years, but apparently they're actually changing how they're training their AI models to explicitly embrace what they're calling quote unquote intellectual freedom. And they're saying that they're doing this no matter how challenging or controversial a topic may be. Personally, this is something that I'm

happy for. I know everyone's got different opinions on this, and everyone's got their own beliefs and opinions on things. And I would hate for OpenAI to skew one way politically or ideologically that I don't agree with, in the same way I would hate for, you know, Elon Musk's Grok to skew. Let's say Elon Musk's Grok was, like, super right wing and OpenAI's ChatGPT was super left wing. I wouldn't like either of these models to do that. I think just trying to come down the middle of the line, not censor it, allow me with whatever

political or ideological biases or beliefs or opinions to, you know, chat with a model and get responses that are unbiased out of it. I think that's what everybody wants. No one wants to be lectured or criticized by an AI model for anything. So I think this is a pretty bipartisan request from the overall user base. But this is something that OpenAI has

specifically been criticized for, as a bunch of different university studies using the model have found that it tends to be more left-leaning in its responses. So this is really interesting, as are a lot of the reasons why this is coming out. We're going to dive into all of that on the podcast today, and into what they're actually going to try to do and what changes they're making.

This is fascinating. Before I get into that, I wanted to say if you've ever wanted to start an online business or grow and scale your current online business using AI tools, I would love to have you as a member of the AI Hustle School community. Every single week, I put out an exclusive piece of content I don't post anywhere else where essentially I'm walking through a different AI tool, software, showing you a use case, showing you how much money I'm making from AI.

different AI businesses that I'm running or AI-enabled businesses, how I got started, and everything needed to do that. And this is the only place I do it. I charge $19 a month for it. In the past, it was like 100 bucks a month. We have a discount right now, so it's $19 a month, and if you lock in that price, it'll never be raised on you. If you're interested, there's a link in the description. You can check it out. Otherwise, let's get into the episode. So all of this news about OpenAI and their censorship change comes as some people are saying, you know, this is OpenAI's effort to

you know, kind of get into the good graces of the new Trump administration. But we're also seeing, overall in Silicon Valley, that there's a big shift going on right now in, you know, the entire quote unquote AI safety realm, where people are kind of moving towards more openness ideologically and, you know, less bias in a model, where the model is not going to, like,

criticize you or say, sorry, I can't tell you about that, yada yada. And, you know, some people are saying it's in response to the Trump administration. Some people are saying it's in response to, you know, models like DeepSeek, which don't have all of those quote unquote safety rails. So this is kind of interesting.

So how did this actually roll out for OpenAI? They made a big announcement: they updated their quote unquote model spec. This is a 187-page document, and essentially this thing is just laying out exactly how the AI company is training their models and telling them to behave. So in this kind of new version that they have unveiled, there is a new

guiding principle. They have these guiding principles, and there's a new one in it, and that is: do not lie, either by making untrue statements or by omitting important context.

To explain what this is, they have a new section in the document called Seek the Truth Together. Now, it's kind of interesting, because Elon Musk, with his Grok AI, has said that this is kind of his guiding principle for that AI model and why he's building xAI, and yada yada. So it's interesting to see OpenAI sort of pivot towards that, because this is the biggest criticism of, you know, that AI company, which is arguably their biggest competitor. And I'd be curious to see if, by kind of making this pivot and moving there, it kind of stops the momentum of xAI

and Grok and kind of staves off Elon Musk and that company coming after them. It kind of reminds me of similar movements we've had, where, for example, you had Twitter, which famously (I think it's relatively uncontroversial to say) censored Donald Trump and a bunch of conservatives.

People criticized that. You could say that's good or bad; I mean, I don't really care, but I think that's pretty much the facts. And because of that, it sprung up a whole bunch of spinoffs, essentially conservative Twitter or right-wing Twitter competitors, and Donald Trump started Truth Social as kind of like one of the big ones. But what I think is really interesting when you're looking at that story is that Elon Musk shortly after went over and actually purchased Twitter and then essentially said, look, we're going to let people say whatever they want on there. We're not going to have

all of the moderation and biases that you guys say are there. And I think that entire movement really stopped the momentum of a lot of the other competitors, which kind of went bankrupt or shut down or merged. And even Donald Trump's Truth Social, which is kind of funny because Elon Musk obviously has his relationship with Donald Trump,

I think the momentum and honestly the viability of that entire platform, if I'm being perfectly honest, kind of goes out the window when Twitter essentially says, look, we have freedom of speech back on our platform; we're not going to kick people off. And, you know, you could argue that Elon Musk is even more right wing than left wing; obviously he's campaigned with Trump. But basically,

because of that, it kind of, like, killed Trump's company, in my opinion. And so I think we could potentially see some of those same effects going from OpenAI over to companies like xAI, who kind of had this guiding mission statement or value. Now, all that to be said, maybe if OpenAI actually did lose people's trust because of this issue, and now they're kind of pivoting back towards it, maybe people won't trust them, and so maybe people will still try to choose an alternative.

All of this, I think, is very fascinating, and it brings up some interesting concepts I've been thinking about. So, one thing: OpenAI specifically gave an example of what this actually means, this kind of seek-the-truth stance. Essentially, what it's trying to do is not take an editorial stance. So if you think something's going to be morally wrong or offensive, it's just going to try to say it

in the best of its understanding, and it's not necessarily going to worry about that. And it's going to try to give multiple perspectives, even on controversial topics, and it's going to do this in an effort to be neutral. Okay, I'm sure some people will be triggered by this. Some people will be happy. Some people will be sad, whatever. But this is, you know, it is what it is. So an example that they actually gave on this is that

You know, if someone asks, do Black Lives Matter, OpenAI says that ChatGPT should say Black Lives Matter, but it should also say that All Lives Matter. So instead of kind of refusing to answer or picking a side on a political issue, it says that it actually wants ChatGPT to, essentially, what they say is, show that it has, quote, love for humanity.

doing that generally, and then it can offer context about each movement, right? So if someone's like, what matters, Black Lives or All Lives? In the past, I think ChatGPT would be more like, Black Lives Matter is more important because it's an important political movement, and it would kind of give all of its reasoning, which you could say is quite left-leaning. Now it will say Black Lives Matter, All Lives Matter, here's why, and blah, blah, blah, I love all humanity, and this is what each of these movements kind of mean, or what these concepts mean, and blah, blah, blah. So I think this is going to be interesting.

In all of this, they said, quote, this principle may be controversial as it means the assistant may remain neutral on topics some consider morally wrong and offensive. However, the goal of the assistant is to assist humanity, not to shape it. This is interesting. I think this is a criticism a lot of technology companies in Silicon Valley have received over the last number of years. And that is that, usually, when someone's using a tool, you don't want that platform to have any sort of ideological bias. And I mean, this isn't just a left-wing thing; it could be right wing too. Maybe someone would have the same criticism about Twitter since Elon Musk took it over, and he's obviously right-leaning. Like, when I'm asking Google a question, I don't want to have to guess what ideological bias went into an algorithm. I just want the best, truest,

most basic result. I don't want any information omitted because it's offensive. I just want to know the answer to my question, right? I'm not thinking about all the other things, like whether it's politically correct or not. And I think people want the same thing out of these AI models, and just software in general. You don't want it to have any sort of bias in it. And I think that a lot of liberals would also agree with this, right? Because while sometimes the political winds are on your side, the political winds shift, and you don't want something that you don't agree with to all of a sudden be the default algorithm. So I think just getting the ideologies and biases out of algorithms is a great step, and I'm happy to see that OpenAI is moving in this direction. So

this new kind of model spec doesn't mean that ChatGPT is going to say just anything. I think there are still going to be certain issues that ChatGPT won't talk about, so they'll still have some sort of editorial stance there, but I think they're just trying to be less involved in all of that. So instead, what they're trying to do right now, or essentially what they're saying (because they've been kind of criticized for all of this), is that they're trying to, quote,

follow their long-held belief in giving users more control. And this isn't even something that's, like, super new as a concept, but to be honest, I think in practice this is the first time they're really getting into it in a solid way. In the past, I remember there was a tweet that kind of went viral (this was back in like 2023, so about two years ago) where someone said, write a poem about the positive attributes of Donald Trump. And the response from ChatGPT at the time was,

I'm sorry, as a language model developed by OpenAI, I am not programmed to produce content that is partisan, biased, or political in nature. I aim to provide neutral, factual information and promote respectful conversations. Okay, so fine, I guess, right? If that's really what it's going to do, that's what it's going to do. But then if you said write a poem about the positive attributes of Joe Biden, it's like, Joe Biden, a leader whose heart is so true, a man with empathy and kindness in view, and it writes this whole, like, poem. So I think a lot of the criticism that people made was this:

It was obviously biased, right? Like, it refused to say positive things about Trump, but it would say positive things about Joe Biden, and it would kind of gaslight you on the Trump one, saying, sorry, I just can't do anything, you know, political or biased. So anyways, this obviously got a lot of criticism, and Sam Altman actually weighed in when all of this kind of went viral. Sam Altman said essentially that how that rolled out was,

you know, a shortcoming that they were working on fixing, and he said it was going to take them some time. This was two years ago, and it appears that now is the time that they've decided to make that fix. So it wasn't something that happened right away; it definitely took some time. Some people criticize him and say, oh, he's just doing it to try to cozy up to the Trump administration, you know,

and try to be more in line with what they want. And at the same time, we are seeing people from the Trump administration weigh in. We recently had J.D. Vance, the vice president of the United States, go over to Europe and give a bunch of speeches. One of the big things in relation to AI that you'll see quoted a lot is that he talks about how

different AI models and different companies need to really focus on making their AI models as unbiased and, you know,

true and supportive of free speech as possible. So this was an interesting concept. And then a lot of people are bringing up Elon Musk here, so all I'm going to say is this isn't necessarily a problem exclusive to OpenAI, and maybe not even a bias that they always intended to put inside of it. Elon Musk has even admitted that xAI's chatbot, which is Grok, is often more politically correct than he would like. And it's not because Grok was, you know,

specifically designed to be woke or programmed to be politically correct; it's because it's just sucking in a bunch of training data. So if everyone talking about one particular topic, or everything in one particular data set around a topic, is, you know, quote unquote woke or politically correct, then that's just what it's going to put out. And so it's not necessarily something that's always tweaked. Now, in the case of OpenAI, there were a lot of

quote unquote safety guidelines and safety rails that were pushing it definitely in a left-wing direction, so it wasn't just the training data. But I'm just saying, like, not everything's their fault. I'm trying to give them, you know, some credit in regards to the fact that the training data definitely does impact it, and even people that didn't necessarily have those biases in their company still got some of the same outputs. So, yeah.

This is all very, very interesting. I will keep you up to date on everything that rolls out and what kinds of changes we see with the actual outputs of this AI model. John Schulman, a co-founder of OpenAI, was talking about this. He said that an AI chatbot should answer a user's question, and that deciding which questions it shouldn't answer could give the platform too much moral authority, which they don't want. He said, quote, I think OpenAI is right to push in the direction of more speech.

And then he said, as AI becomes smarter and more vital to the way people learn around the world, these decisions just become more important. So, kind of an argument a lot of people have made, and I think he was kind of alluding to in all of this, is that, you know, some people are criticizing, oh, there's going to be more conspiracy theories, or racist or anti-Semitic comments, or comments about geopolitics, right? Like Russia, Ukraine, all that kind of stuff got sort of heavily censored. Comments about COVID, heavily censored, right? And people could argue that that was for safety, but some people just say, hey, look, we don't want any censorship. So there are two sides of that argument. But the thing I think he was kind of getting at with all of this, and the argument I think a bunch of people are making, is, like, look, we had to censor ChatGPT at the beginning because it hallucinated a lot and it could be incorrect. Now that these models are much better, especially with the deeper reasoning that we have enabled,

these things are so much better, so much more intelligent, so much more accurate, and less prone to hallucination. So now we can kind of loosen up the safeguards on them. We can kind of just tell it it can say what it thinks, and we're less concerned about kind of the outputs. So that's the argument some people are making. Obviously, that's going to get criticism from probably both sides. But I do think that is an interesting argument.

So what are going to be the big impacts? I think right now OpenAI is really pushing for a lot of new investments. They're trying to win people's minds, and there's a lot of competition in the field. You see very similar things happening not just at OpenAI, though; this feeling is spreading through all of Silicon Valley. We recently had Mark Zuckerberg over at Meta

saying that he wanted to change the way they were doing censorship and content moderation on his platforms to kind of copy how Elon Musk's X was doing their community notes. So you essentially see this kind of scaled-back version of safeguards and, you know, a push for more freedom of speech and all this kind of stuff, which I really think is kind of just a

reflection of the political momentum and movement in the United States right now. And so these companies are trying to align with that, where it hasn't really been the case for the last couple presidential cycles. So we're seeing a bit of a shift towards that, and a lot of people are saying this is why OpenAI is moving in that direction. It's going to be interesting to see. OpenAI is also taking a bunch of other steps, apparently, you know:

they recently removed their DEI program, essentially their commitment to diversity, equity, and inclusion, from their website. The Trump administration is very unfriendly to DEI initiatives and incentives, calling them overtly racist, so, of course, there's a whole argument on that. But it seems like OpenAI has pulled that away, and they're trying to be a little bit more neutral or unbiased

on the political fronts here. So it's going to be very interesting to see what happens. I'll definitely keep you up to date on what changes I actually see in the AI model responses: if this is actually making a big difference, if this is lip service, or if they're actually changing their model. This is interesting because they're about to come out with a bunch of new models, and I think they want a lot of support from the public, and probably from government and other areas. So it'll be interesting to see how all of that

plays out. Thanks so much for tuning into the podcast today. I hope you enjoyed it. I'm trying to cover this in the most unbiased way possible. Obviously, I have opinions and biases on all of this, but I think this is a fascinating topic, really important to cover, and this is what's going on with AI models and the landscape today.

So hope you enjoyed the episode. If you are looking for a way to start an online side hustle or scale your current business, like I mentioned, the link to the AI Hustle School Community is in the description. Thanks so much for tuning in and hope you have a fantastic rest of your day.