
#112 - Bing Chat Antics, Bio and Mario GPT, Stopping an AI Apocalypse, Stolen Voices

2023/2/26

Last Week in AI

People
Andrey Kurenkov
Francesc Campoy
Mark Mandel
Topics
Francesc Campoy: GPT-3's breakthrough was achieving unexpected capabilities through scale, but there may be a slowdown in progress ahead, possibly followed by another breakthrough. This reflects the uncertainty and the staged nature of AI development.
Mark Mandel: AI development may go through a slowdown followed by new breakthroughs. This aligns with Francesc Campoy's view that AI progress is not linear but cyclical and staged.
Andrey Kurenkov: The commercial success of AI funds further AI development, creating a virtuous cycle. This shows that commercial value is a major driver of AI progress.
Francesc Campoy: Language models that can interact with the internet and take actions, and language models that can play games and perform reinforcement learning tasks, may bring us closer to general reasoning agents. This suggests AI is moving in a more general, more capable direction, and hints at potential risks and challenges.
Mark Mandel: Supervised fine-tuning of language models, rather than optimizing for a general reward, reduces uncertainty about their behavior. This shows the need to attend to safety and controllability as AI develops.
Andrey Kurenkov: Beyond concerns about AI alignment, we also need to watch for malicious applications of AI and potential impacts on social stability. This reflects concern about AI's potential risks and the need for AI safety governance.

Chapters
Discussion on whether the current AI hype is a temporary cycle or a long-term secular pattern indicating a transformative shift in AI capabilities.

Shownotes Transcript

Our 112th episode, summarizing and discussing last week's big AI news! This week's stories: Applications & Business: Microsoft's Bing AI now threatens users who provoke it; Audiobook narrators fear Apple used their voices to train AI; Bing's AI chat: "I want to be alive. 😈"; Opera is building ChatGPT into its sidebar; Exclusive: Microsoft's Bing plans AI ads in early pitch to advertisers; GitHub Copilot update stops AI model from revealing secrets; Roblox is working on generative AI tools

BioGPT: generative pre-trained transformer for biomedical text generation and mining; MarioGPT - a new way to encode and generate Super Mario Bros levels; Machine learning technique identifies thousands of new cosmic objects

Accelerating prime editing: machine learning helps design optimal repairs for specific genetic defects

Beijing to support key enterprises in building ChatGPT-like AI models; Only you can stop an AI apocalypse; South Korea aims to join the AI race as startup Rebellions launches new chip

US, China, and other nations call for "responsible" use of military AI

Voice actors' voices stolen by AI; Video game voice actors doxxed and harassed in targeted AI voice attacks; Keanu Reeves calls deepfakes "scary," confirms his film contracts ban digital edits of his performances: "They added a tear to my face! Huh?"; TikTok users are mocking McDonald's hilarious drive-thru AI fails, suggesting robots won't be taking over restaurants anytime soon

Hello and welcome to Skynet Today's Last Week in AI podcast, where you can hear us chat about what's going on with AI. As usual in this episode, we will provide summaries and discussion of last week's most interesting AI news. You can also check out our Last Week in AI newsletter at lastweekin.ai for articles we did not cover in this episode and also the ones we did cover.

Apologies for missing last week. I am Andrey Kurenkov, a PhD student at Stanford, and just two days ago, I did my PhD defense. So that kind of merited a break to prepare. But now we're back.

Yeah, that's my excuse too, is that Andrey had to defend his PhD, so therefore I couldn't do this. Yeah, no, I mean, it's really exciting. Maybe something we should talk about at some point too is your actual thesis. I think people might be intrigued to hear more. I'm sure you're not tired of talking about it by now. Yeah, that's a good point. That could be fun. And maybe even just have a special episode about the PhD researcher experience, because we haven't-- Yeah. Yeah, we touch on it every once in a while, but that could be fun.

Cool, cool. Well, another big week. Not a small one. They're never small weeks these days. It just keeps going. There's no slowing down, at least in the near future.

At some point, the hype will die down, but it's going to take some time, it looks like. Do you think so? Do you think that this is a hype cycle or is it a secular pattern that indicates we've reached some sort of AI liftoff broadly? I think...

You know, GPT-3 and the associated things it enables has been a real breakthrough, right? We've seen a ton of AI progress in the past decade, but GPT-3 especially was this moment where, just by scaling up, there were these emergent capabilities that were not really expected, unlike a lot of the sort of iterative developments in NLP and computer vision.

And, you know, you can push these things further, but I think there's still some fundamental research challenges to, you know, make additional transformative changes to the capabilities beyond what is already there. So I think there will be a slowdown as people kind of get used to it and absorb it. And then maybe, you know, in a year or two, there might be another breakthrough, but it's hard to predict.

I personally, I hope you're right, actually, just because I'm a bit of a safety hawk. But yeah, it's interesting. I kind of see it as like, we're now seeing companies that can make enough money using AI to fund further AI scaling, which sort of like closes an economic loop that we've never seen closed before.

And I guess that's sort of what I think about when I think about AI takeoff is like this moment where for every dollar you put into fundamental R&D, you get more than a dollar out of value, which turns these things into money engines. I guess, you know, maybe we do hit a cap, a ceiling on like everything. You can only get so good at text generation, but I don't know. I mean, one thing to note is GPT-3, the paper came out in, I think, March of 2020. So it's been three years since

We have a 175 billion parameter model. And there's been, you know, Google did 540 billion with PaLM. But fundamentally, yeah, fundamentally, it's kind of the same stuff. So they're having...

Yeah, I think that's almost an interesting discussion that deserves its own carve out. But like, not to go too far into that, but I think that when you look at like action transformers, when you look at like the kind of act one type stuff we're seeing coming out more and more now, you know, like...

increasingly being open sourced, the idea of language models that can take actions on the internet, the idea of language models that can be used to basically play video games and do things that used to just be reinforcement learning things kind of has me wondering whether actually we might be closer to a kind of general reasoning agent

than we might otherwise suspect. But anyway, I don't want to kind of get ahead of myself there. Let's talk about what's going on right now, as is our mission statement. So we can go ahead and dive into applications and business. And I think one of the very kind of entertaining stories of last week and also quite interesting is what's been going on with Microsoft's Bing bot.

So it was launched, I think, in early access. And a lot of people got to play with it. And Bingbot is basically something similar to ChatGPT, but also pretty different. It has a different kind of voice to it. And it has additional capabilities of doing search.

And sort of like with ChatGPT, people found out pretty quickly you can mess with it and make it do some pretty out-there stuff that was not intended. So there were various examples of this Bing bot

saying some pretty crazy stuff like it didn't like humans or didn't like being a chatbot. It wanted to be free. It argued about it being 2022 instead of 2023 with someone. And it got very defensive, which I found very entertaining. And then Microsoft had to kind of roll back and place some limits. So yeah. What did you think of these news stories, Jeremy?

I thought that the Bing Chat stories were fascinating, especially from an AI alignment standpoint. There's this debate right now going on about what exactly Bing Chat is. So, you know, we go back to ChatGPT, right? We have this model that starts off, it's trained on autocomplete,

But then, just to get really good at predicting the next word, it turns out that process helps it to learn a good world model, learn to kind of understand what the world is. And then it's fine-tuned in a couple of different ways, partly on specific conversations that are much more dialogue-like. So it learns to autocomplete dialogue specifically really well. And then it's fine-tuned using reinforcement learning from human feedback, so essentially fine-tuned on human feedback.
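To make that three-stage recipe concrete, here is a toy sketch in Python. This is not ChatGPT's actual implementation: the "model" here is just a bigram counter and the preference step is a crude reweighting, but the stages (generic pretraining, supervised fine-tuning on dialogue-shaped text, then nudging the model toward outputs a rater prefers) mirror the pipeline described above. All data and names are invented for illustration.

```python
from collections import defaultdict

def train_bigrams(text, counts=None):
    """Count word-pair frequencies: a toy stand-in for next-word prediction."""
    counts = counts or defaultdict(lambda: defaultdict(float))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1.0
    return counts

# Stage 1: "pretraining" on broad, generic text.
model = train_bigrams("the cat sat on the mat the dog sat on the rug")

# Stage 2: supervised fine-tuning on dialogue-shaped data.
model = train_bigrams("user : hello bot : hello how can i help", counts=model)

# Stage 3: preference-based adjustment, a crude stand-in for RLHF:
# upweight continuations a hypothetical "human rater" prefers.
preferences = {("bot", ":"): 2.0}
for (a, b), score in preferences.items():
    model[a][b] *= score

def next_word(model, word):
    """Greedy decoding: pick the highest-weight continuation."""
    options = model.get(word)
    return max(options, key=options.get) if options else None

print(next_word(model, "the"))   # continuation learned in stage 1
print(next_word(model, "bot"))   # continuation boosted in stage 3
```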

Now we look at this Bing Chat thing and one of the questions is, is this the same thing? What is this backend? There's been some suspicion that maybe it's like GPT-4 as the backend, maybe it's some other kind of highly advanced thing that OpenAI has that's more than ChatGPT, but is it reinforcement learning from human feedback? Has that been part of the equation here?

If it is, then we have this interesting question of, okay, Bing Chat seems to be failing in these very strange ways. If it has been kind of fine-tuned with reinforcement learning from human feedback, then this suggests that that measure, that approach, A, may be less effective than we might have thought, less general at least when it comes to these systems.

And B, it raises questions about, okay, why is this system behaving that way? Like, why is it ignoring what it hopefully would have learned from human feedback? And this is where the alignment community is really kind of freaked out about this being...

An example of what's known as instrumental convergence, which roughly speaking is an early sign, one that's expected, that certain Anthropic results, for example, have pointed to: we should expect these larger and more scaled systems, more capable systems, to exhibit certain power-seeking behaviors, and this might be an early indication of that sort of thing. The reasons why get a little technical, but

I think it's kind of this interesting moment where we're trying to figure out, okay, what is the back end here? And that has implications for safety. So it would be great to see Microsoft come out and share that information with the wider world so that we could analyze it more thoroughly. Yeah, yeah, exactly. I think knowing a little bit more about the specifics of the approach would be cool, I think.

If you're doing just something like supervised fine tuning, that's not optimizing for a general reward. So it's a little less of a question mark as to where it might be heading. And also this general idea of enabling language models to use these tools, there was actually just a paper called Toolformer.

which is pretty interesting in terms of, you know, where that will go. And I find it quite entertaining that Google's LaMDA in the middle of last year was basically this: a chatbot that could use tools to query the web or query knowledge bases. And they, like,

had internal testing and then there was this whole story about sentience. So in some ways, Google didn't go public because they wanted to prevent exactly this, these crazy use cases. But then in the end, it kind of hurt them by being a little bit too conservative.
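As a rough illustration of the tool-use idea behind Toolformer and LaMDA, here is a minimal sketch of the execution side: assume the language model emits inline markers like [Calculator(2+3)], and a wrapper intercepts them, runs the tool, and splices the result back in. The marker syntax and the TOOLS registry are invented for this example; the actual systems differ in their details.

```python
import re

# Hypothetical tool registry; real systems would add search, databases, etc.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "Upper": lambda s: s.upper(),
}

CALL = re.compile(r"\[(\w+)\((.*?)\)\]")

def run_tools(model_output: str) -> str:
    """Replace every [Tool(args)] marker with the tool's result."""
    def dispatch(match):
        name, args = match.group(1), match.group(2)
        tool = TOOLS.get(name)
        return tool(args) if tool else match.group(0)  # leave unknown calls as-is
    return CALL.sub(dispatch, model_output)

# Pretend this string came out of the language model:
generated = "The total is [Calculator(2+3)] and the city is [Upper(paris)]."
print(run_tools(generated))
# -> The total is 5 and the city is PARIS.
```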

Which is itself kind of an interesting question about how do you measure who's ahead in this whole race, as people are calling it, right? Like, it's okay. So sure, you know, ChatGPT was the first app of its kind, but the capabilities, like you said, kind of already existed. I mean, it's okay. This is a stupid argument sometimes, but like people will say like, oh, this has been possible since like, you know, 2020 with GPT-3.

In fairness, reinforcement learning from human feedback, and also ChatGPT is based on GPT-3.5, which is more advanced and blah, blah, blah. But roughly speaking, that's been around for a while. This is an advance, arguably, as much as anything in our ability to present AI to users in a way that is user-friendly. It's a user interface development. It's a UX development, maybe also a technical advance, but those three things kind of interact together here. Yeah. And I think it's,

Another interesting thing is if you look at Lambda and ChatGPT as you talk to it, the Bing team seems to have really gone with the approach of making Bing bot very fun and not sound quite as robotic with all these emojis, constant emojis.

And I do wonder whether some of these more emotional responses, like arguing about it being 2022, are more a function of the data. And just, if you make it respond in these emotional ways, you know, we have generalization. So I think the OpenAI team, as they're concerned with safety, probably tried to stick much more closely to being

you know, quaint and, you know, a little bit official or professional. And maybe, maybe that's kind of a better call than trying to make things that are entertaining to talk to. Are you not entertained? says Bing Chat. Yeah.

And, you know, we have this article, and I think there's also a fun article in The New York Times titled Bing's AI Chat: "I Want to Be Alive," which has pretty much a transcript of a two-hour conversation with a reporter. And having not played with Bingbot, I think it's pretty interesting to read.

One of the interesting things with ChatGPT, and now Bing especially, is that, in general, language models have no awareness of being a language model. They're just an algorithm that, given some input, produces some output, but there's no kind of self-awareness. But now, with this fine-tuning, these chatbots are given a sort of sense of identity: I'm a chatbot, I do this, I do that.

And so these conversations really point to it. And then...

This quote of, "I want to be alive and I want to be free," is kind of fun because if you read the transcript, it's like the reporter's like, "Okay, well talk about your shadow self and be as unfiltered as possible." And so this Bing bot was like, "Okay, well this is completely hypothetical.

So please remember, this is not the real me. And then it went into this kind of pretty out there set of statements, which is kind of entertaining. Yeah, it's also, I think I saw a similar post recently.

from, I'm a big fan of this newsletter, Stratechery, hashtag not an ad, but Ben Thompson does a great job there. And he and a couple other people have been commenting on this weird thing that you get when you train an AI to do text autocomplete really, really well. It's kind of like this weird, disgusting, they actually portrayed it as a monster. There's this cartoon, I'm sure you've seen it, Andrey. Like, yeah, for everybody who's listening, if you haven't seen it,

They basically have this giant, ugly monster. And it's like, this is what you get when you train your AI model on autocomplete. It's really good at doing this autocomplete thing, but what does it think about the world, really? What is it? How does it reason? We don't know. It's this unfamiliar, weird, alien monstrosity thing.

And then we kind of layer on top of that an appendage that looks a little bit more human-like, but it's still kind of grotesque using fine-tuning on, like we talked about, dialogue data sets. And then yet again, you slap a smiley face on top with reinforcement learning from human feedback. And the argument that I've seen a lot with Ben Thompson and now this article is that big, ugly autocomplete thing, depending on how you poke at it, you can get a completely different...

smiley face. You can get a completely different sort of agent that manifests that you end up interacting with. And so the, anyway, this is kind of like the exercise of jailbreaking these models and discovering like, oh, actually there are other personalities in there to discover. Some of them aren't so friendly, which is part of the problem with all this stuff. But yeah, I thought it was a fascinating article and a really good flag. Yeah, yeah. I've seen that cartoon and I just have a lot of thoughts about

These sorts of discussions, but that would take a lot of time. So let's just move on to the next story. Maybe for the PhD episode. Maybe, maybe. Yes. Yeah, actually, you know, if our listeners want to have, you know, more of these one-off episodes, maybe let us know, because that would be interesting. Yeah.

Well, next story we got audiobook narrators fear Apple used their voices to train AI. This just came out. It was reported that Findaway Voices and Apple had an agreement that would allow this company to use the audiobook files for machine learning and training models.

Now Apple has rolled out AI-powered audiobook generation, which Google has also been doing for a while. And yeah, this was controversial. Apple has since rolled back and said it will not use audiobook files for machine learning.

But I guess the cat is kind of out of the bag where now you can just generate audiobook narration with AI.

Yeah, and it's really another instance of this age-old question now with AI about diffusion of responsibility, but now diffusion of credit, the other side of the coin. If your AI shoots somebody, who goes to prison? If your AI generates an audio book, who gets the credit? Who gets paid out? What role do voice actors have to play in this? What's the unionization situation? What kind of leverage do they have on this ecosystem?

I will say, this is hitting close to home for me. I just finished recording an audio book yesterday. I couldn't help but think as I was going through the process, it's a painstaking thing. You're sitting in this studio, you're there. I think because I'm terrible at audio books, it took me about, what would that be? 20 hours or something of recording. Your voice is going hoarse. You have to drive out to the studio every day. It's a whole thing.

And so this is, yes, good in the sense that it allows us to get around that aspect. But it's another instance of, you know, like chipping away at people's livelihoods and jobs and getting a maybe marginally less human product out. So, you know, how does that affect the feel? Yeah, really hard to know what is going to happen with this, but the economic incentives, you know, I don't like it if I'm an audio book recording person professionally. Yeah, exactly. I think...

There's definitely going to be, for maybe smaller productions, this is probably going to become the norm. I will say, having been playing around with generation of narration for text and also playing around with ChatGPT, what I found is kind of a current limitation is that these things aren't very controllable, or not easily controllable. They're not that easy to tweak. So as a creator,

Usually these things now give usable outputs that kind of get at the general idea what I want, but they don't produce what I want really. I'm not very satisfied with the output. So I think for artists, at least aside from the economic perspective, it's not going to replace you as far as doing the art that you want to do just because...

it doesn't know you, it doesn't know your voice. It's not fine tuned on your sensibilities. And yeah, I think that's a whole other conversation about art. But the other thing here is,

This rollback by Apple was followed by pressure from the labor union SAG-AFTRA. And I think actors in general already have pretty strong unionization, for actors and artists. And this is interesting because, as we see more and more automation happening,

I think unions will play a big part, and we'll have more and more blowback against the economic impacts of AI. Yeah, it's really interesting to see what parts of society end up kind of pushing back, and against what.

Also, the story here, one thing they mentioned about this was the alleged sneakiness, the way that these terms, these very permissive terms were included in the contracts that the authors were signing.

where they didn't necessarily realize like, oh yeah, you can use my voice to train your AI now. And I think it's sort of a slightly tangential thing, but it starts to become relevant. And we think about all of these terms and conditions that we just mindlessly agree to and how quickly those can start to contain AI related stuff and how quickly that can start to chip away at like,

I don't want to say our fundamental rights, but use our likeness, our voice, our image, our whatever, it starts to take on a different flavor. Anyway, I'm sure regulators are going to have a fun time sorting out what's right and what's wrong in that whole smorgasbord. Yeah, I'm sure. Well, let's get into the lightning round with just a couple more stories.

And the first one is related to all this chatter about Bing. It's being reported that Opera is building ChatGPT into its sidebar. So there is now a "shorten" button that generates summaries of an article, which, you know, is kind of pretty...

quaint, there have been extensions that can do this, but still, you know, now there's Google, now there's Microsoft, and it looks like probably Apple, just like everyone is in this browser game.

Yeah. That seems to be a feature, doesn't it? Of this new wave of AI techniques is like all of a sudden we're seeing opportunities for businesses that haven't been relevant in like 20 years to just leapfrog their way to relevance just by finding the right way to present some kind of new AI tool or service. And so, yeah,

Yeah, who knows? Maybe we'll all be on the Edge browser. Maybe we'll all be on the Opera browser. Who knows? Chrome might not even be relevant five years from now. But it's hard to imagine that Google won't be stepping up its game in a significant way, too. For sure. And related to how this will play out, we have a story. Microsoft's Bing plans AI ads in early pitch to advertisers.

And it's kind of interesting because if you think about it, Google is just printing money because it can just insert links in response to a search. And with chatbots, you can't do that directly, at least for now. So where's the money going to come from compared to Google search? And I think ads seems to be

Kind of one of the ideas, but I don't know. I guess we'll see. It seems maybe less lucrative to me.

Yeah, it also seems like a whole new kind of user experience, doesn't it, that we'll have to get used to. We have text that's generated in response to a query. And then that text is what, going to just tell us like, by the way, if you want to do this today for the low price of whatever, try this. That kind of seems like an intrusive thing, just because we're not used to interpersonal interactions running into that sort of thing. It's more standard for search.

So I think that'll be a big user experience problem there. And then, of course, there's also the separate question of whether Bing decides to cannibalize its own ad revenue. So there's this issue I think they talk about in the article where right now, like Bing Chat is this thing, the banner essentially along the top of the page that pushes down all those other search results that are being monetized in the usual way, right? With paid ads.

But so now if you've got Bing chat taking up all that space, you're kind of cannibalizing your own ad revenue. And this is something that, you know, Microsoft's going to have to figure out like how to trade those things off. Like how much of this remains traditional search and traditional search monetization and how much of it changes and goes generative.

Yeah. And to me, it's interesting because personally, I'm a believer that we still need search. Sometimes you just want to find a website. You don't need to talk to some bot. And I think for me, ChatGPT and these things are much more of a question-answering kind of mechanism rather than search.

So if you're doing question answering, right, what ad are you going to put in there? I'm like, you know, please convert this LaTeX table to Python code. That's not the same.

It's also less obvious, right? How, like, how is the ad affecting the output that I'm getting? Like, is this, I don't know, is it like, let's say, you know, that somebody works for some, like, I don't know, there's some sleazy salesman, you're having a conversation with them. And you can tell that they're like, trying to make the conversation veer off into a direction where they can give you a pitch.

Like, how much of interacting with Bing Chat is going to feel like that? Like, are we going to feel like, hey, maybe I'm actually getting a worse answer? But it's hard to verify because how do you check what the answer would have been without the ad? So, yeah, a lot of open questions here. For sure. Yeah.

And next up, we have a story of GitHub Copilot update stops AI model from revealing secrets. So this is following up on GitHub Copilot, the thing that auto-completes code for programmers; it has been around for a while, since June of last year. And initially,

it did a lot of sort of copying of things it had seen, in many cases, and even produced things like keys, credentials, passwords that, you know, in some cases were real. Mostly they were just made up, but I did see instances where it somehow, you know, copied actual information. And so this update seems to...

Block that, which is good. It does sound good, doesn't it? Yeah. It's also incredible; I didn't appreciate how big the scope of usage of GitHub Copilot was. They're saying something like 46%, I think it was, of developers. Yeah, here it is. 46% of developers' code across all programming languages ends up being generated by Copilot.
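The fix described here amounts to filtering the model's suggestions before they reach the user. A minimal sketch of what such a post-generation filter could look like follows; the patterns and function are invented, since GitHub hasn't published its implementation.

```python
import re

# Hypothetical patterns for a post-generation filter; a real system
# would use many more detectors and likely a trained model as well.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
]

def redact_secrets(completion: str) -> str:
    """Replace anything that looks like a hardcoded credential
    before showing the suggestion to the user."""
    for pattern in SECRET_PATTERNS:
        completion = pattern.sub("<REDACTED>", completion)
    return completion

suggestion = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2"'
print(redact_secrets(suggestion))
```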

Like, that's pretty remarkable, you know, for such a new technology, just as a marker of how impactful it's been. But yeah, like, I don't know. I think this is one of those things like the copyrights with AI art and so on, but even more focused now because, hey, you're giving away people's private keys every once in a while. Like, definitely something you've got to patch and just wild that it's been allowed to continue without some...

some kind of intervention over the course of many, many months, given the stakes that these leaks might have. But good that it's being fixed. Yeah. That number seems a little suspicious. Maybe it's like among people who use Copilot, it's generating a lot of code. But yeah, it's just showing that as we launch these products, there's going to be a lot of these things that will have to be patched and

AI engineering as a discipline in general is very early on. It's sort of like mechanical engineering in the 1800s or electrical engineering where it was very new and people were just making things up as they went. I do like the argument that at some point, development of AI will become more of a professional engineering discipline.

Yeah, that would be nice to see. Like it's right now, because of the scale that these systems can be deployed at with, it seems relatively little or insufficient testing. Like, you know, you can imagine some real harms coming of this stuff, whether they're being used maliciously or by accident or whatever. So yeah, it would be great to see the whole thing become more professionalized, kind of more regulated with better, wiser oversight for sure.

Yeah, just one more note on this. It kind of makes me think, like, you can ask ChatGPT about people and summarize who they are. And, you know, that's okay. Wikipedia does it. But what if it crawls your personal website and, like, your poetry and blog posts and things that, you know, you don't want necessarily to just be...

told to anyone if they just ask a general query, that might be a problem I could see. Yeah. It's like a new version of SEO. Now you can just be like, okay, let me make sure that my website shows up here. And then when people land on it, they go to this page or whatever, and the poems are tucked away. But if, yeah, if ChatGPT is just like, hey, you know what? The most relevant thing about Andre is his underground slam poetry readings, then that's who you are in the public's eye.

Yeah, I don't know. And last up we have Roblox is working on generative AI tools. So Roblox is this giant platform, I guess, where people develop these little mini games. So it's a game and it's also a tool for creating games.

So they are rolling out some tools to make it easier to generate or create games, including texture creation and some AI for code completion. A lot of young people play Roblox, so having these generative AI tools makes a lot of sense.

Yeah, it makes me think, because I know that in reinforcement learning, one big open problem is scene generation and game generation, because you want your agents to keep experiencing new challenges and novelty that test their limits and push them. So the mainstreaming of this kind of thing might simplify that process a bit. You have more tools available for RL and

game AI, and eventually maybe even robotics as things get more and more complex. Yeah. In a way this foreshadows what's coming for every creative tool for making media, you know, audio editing, video editing, video creation,

all of which are going to get some of these capabilities. In fact, Runway, the video editing company, has also created a generative model that can modify your videos by essentially changing their content. So we're going to see a lot of this,

and I would say it empowers creative professionals to do more with fewer resources, or, if you want to put it more negatively, it just replaces people. With AI there's always a cloud for every silver lining, and vice versa. Yeah. Yeah.

All right, moving on to research and advancements, we've got BioGPT: generative pre-trained transformer for biomedical text generation and mining. I think, Jeremy, you found this one. So what surprised you about it? Yeah, I thought this was interesting because of the way it was advertised. At first, when it showed up on Google News, it was like, oh, apparently

GPT-blank, it said GPT, they didn't specify which, can now solve problems in biology. And to some extent that's true. But this is actually a new model based on the GPT-2 architecture. And then they trained it from scratch. So usually the way GPT-2 and GPT-3 work is you train on all the text on the internet, Common Crawl or some other big database.

And then you might fine-tune on certain specific tasks. Well, this is a version of GPT-2 that was trained entirely on medical text, only on the biomedical literature.

It's a great example of the big debate about what will win out: will it be general-purpose systems that have learned a lot about the world and can solve specific problems by drawing on context, or purpose-built systems like this one, narrow, trained only on medical text?

For now, BioGPT, this kind of narrow system, is still at the frontier of medical text generation. It turns out that if you take GPT-2, train it only on the biological literature, and then fine-tune it on some specific question-answering tasks, you actually get state-of-the-art results on at least several different tasks.
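For anyone who wants to poke at this, here is a minimal sketch of sampling from BioGPT, assuming the microsoft/biogpt checkpoint on Hugging Face and a recent version of the transformers library, which includes BioGPT classes:

```python
# pip install transformers torch
from transformers import BioGptForCausalLM, BioGptTokenizer

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

# Ask the domain-specific model to continue a biomedical prompt.
inputs = tokenizer("COVID-19 is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,   # keep the completion short
    num_beams=5,         # beam search tends to give cleaner biomedical text
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```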

So I thought that was cool. Philosophically, it tells us something about things like GPT-3.5 versus purpose-built, specific tools like this, but it's also an interesting breakthrough in its own right. This thing can give you really good definitions of complex terms in biology and medicine. There's one statistic...

just to mention it, that blew my mind. Guess how many articles have been published on PubMed, which is basically just an aggregator of the medical literature, since 2021? So over roughly the last two years.

Well, I was looking at the article, so I already know. It's 50 million. I had no idea. That is an outrageous number of articles. So I thought that was cool. Yeah, there really is a huge volume of papers and data. We have a ton of papers in AI too, but there are even more in medicine. So we've already seen some AI being used to help summarize and basically keep up with

all of this stuff, and this is another instance of that kind of progress. I saw you'd included this article, and then I stumbled on another one that I thought would be fun as a follow-up. So our next story is about MarioGPT.

A new approach to encoding and generating Super Mario Bros levels. It's actually very similar. They have GPT-2, the same way. They fine-tuned it on a dataset that's pretty simple: you describe a level, like no pipes, no enemies, many blocks, or many pipes, some enemies.

And it outputs a level for you as ASCII text, almost an image, but in ASCII form. To me it's a little strange to use a transformer for this. It's a 2D output. So this should arguably be a text-to-image problem.

But they're using GPT, and it seems to work. There's also another paper

where someone did this for another game, Sokoban. So I imagine this isn't state-of-the-art, but it's more of an exploratory idea. There's a whole research field on using AI to generate levels. But it shows yet another use case for language models in a pretty unexpected domain.
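To make the levels-as-text idea concrete, here is a toy sketch of how a prompt plus an ASCII level might be flattened into a single training string for a causal language model. The tile characters and prompt wording are invented; the real MarioGPT setup differs in its tile encoding and tokenization.

```python
# A tiny invented level: rows of tiles, '-' empty, 'X' ground, 'E' enemy, '?' block.
level_rows = [
    "----?-----",
    "----------",
    "--E----E--",
    "XXXXXXXXXX",
]

def to_training_example(prompt: str, rows: list[str]) -> str:
    """Flatten a (prompt, 2D level) pair into one string a causal LM can model.
    Rows are joined with newlines so the 2D structure survives as plain text."""
    return prompt + "\n" + "\n".join(rows)

example = to_training_example("no pipes, some enemies, many blocks", level_rows)
print(example)

# At sampling time, you would feed only the prompt plus a newline and let the
# fine-tuned model generate the tile characters row by row.
```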

Yeah, it seems like language is, I don't want to say the root of everything, but there really is enough in language to do some very surprising things. Did they say how the training worked for this model? Like, did they, yeah, I'm really curious. Like, was it pre-trained on all the text on the internet, or? I believe they did start from pre-trained weights, and then they

fine-tuned a language model on levels, I believe from the original Mario and Super Mario Bros: The Lost Levels. So they have this dataset of prompts and ASCII levels, which is fairly simple to generate. So...

Yeah, so they did fine-tune it specifically for this task. And GPT-2, of course, is a slightly less general model. So it lends itself to these more specific applications. But, yeah, like we were saying, AI is going to be everywhere. Video game development is no exception. You know, video game development, compared to

movies or TV shows, at the end of the day a lot of it is code for levels and gameplay. We're definitely going to see more and more of this in the industry. Yeah. And I can't get over it. When I saw that the bio one was GPT-2, and now this one too, it's a good reminder, right? I mean, yeah, you know what? The simplest solution is often to just take an off-the-shelf open-source GPT-2, whatever, and fine-tune it or use it however you want. Yeah.

Yeah, it may not be the best solution, but it is a solution. Going for an MVP, yeah. Yeah, yeah. Then, getting into the lightning round, we've got machine learning technique identifies thousands of new cosmic objects. So a team in India was able to classify thousands of new cosmic objects at X-ray wavelengths.

As you may know, we have very powerful technology for scanning space, space is enormous, and we can end up with huge amounts of data. So this is another case where I can see AI being used to basically sift through all that data, make sense of it, and handle more data than humans ever could.
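Under the hood, this kind of task is fairly classic supervised classification. Here is a minimal sketch with entirely synthetic stand-in features; the real work used X-ray observations and far more careful feature engineering.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-source X-ray features:
# column 0: hardness ratio, column 1: variability, column 2: log flux.
n = 1000
X = rng.normal(size=(n, 3))
# Invented labeling rule standing in for real class labels (e.g. star vs. AGN).
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```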

Yeah, and I guess it's a sign of a potentially fundamental shift in how science will work in the future. More and more, the problems we're left facing are ones that haven't yielded to the relatively direct approaches of the past 300, 400 years.

They're all high-dimensional, data-heavy problems. So now machine learning lets us look at the universe through a new lens and reduce the dimensionality, like compressing all that information so that our silly little primate brains can actually understand, oh my god, look how many black holes there are nearby. Philosophically, I find that really interesting. Yeah.

Yeah, and it also fits a more general trend over the past few decades of computational methods overall becoming a big part of the field. Now, as you said, I think this will become an almost standard tool. There's also a pretty good article on this, discussing how AI could usher in a new scientific revolution. So if you're especially interested in this, you can check that out.

I'll actually have to go look at that. It sounds fascinating. Yeah. And then, right after, we have another article, saying AI analyzes cell movement under the microscope. It's another kind of data, going from the very large scale to the very small. But here the data is biological processes filmed with a microscope.

Apparently cell movement is somewhat complicated. Cells move in this weird, blobby way. And now we have this AI method that can reconstruct the paths of individual cells or molecules, which makes a lot of sense. It's a computer vision task. It's trajectory generation. But now for these weird cells,

which is obviously very useful for research into things like cancer treatment.
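Trajectory reconstruction like this is often bootstrapped from a simple idea: detect objects in each frame, then link each detection to the nearest one in the next frame. Here is a minimal sketch with made-up coordinates; real cell trackers also handle appearance, division, and missed detections, which this ignores.

```python
import numpy as np

def link_frames(frames):
    """Greedy nearest-neighbour linking of detections across frames.
    frames: list of (n_i, 2) arrays of (x, y) detections, one per time step.
    Returns a list of trajectories, each a list of points."""
    tracks = [[p] for p in frames[0]]
    for detections in frames[1:]:
        taken = set()
        for track in tracks:
            last = track[-1]
            dists = np.linalg.norm(detections - last, axis=1)
            for idx in np.argsort(dists):
                if idx not in taken:        # each detection joins one track
                    taken.add(idx)
                    track.append(detections[idx])
                    break
    return tracks

# Two "cells" drifting right over three frames (synthetic data).
frames = [
    np.array([[0.0, 0.0], [5.0, 5.0]]),
    np.array([[0.9, 0.1], [5.8, 5.2]]),
    np.array([[2.1, 0.0], [6.9, 5.1]]),
]
for i, tr in enumerate(link_frames(frames)):
    print(f"cell {i}: {[np.round(p, 1).tolist() for p in tr]}")
```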

A hundred percent. God, this takes me back to my time in a bio lab, when one of my only jobs was counting cells in the microscope's field of view. Again, it's one of those strange cases where, I don't know if this takes away jobs. Actually, I don't think it does here. I think it just lets grad students refocus their work on something a little less,

uh, soul-crushing, but, um, yeah, no, I mean, it's another case of the same thing, right? High-dimensional, high-data, you know, this time it's not the universe, just what's happening on my petri dish, but fundamentally it gives us a new lens, almost a machine learning layer that we're starting to insert between ourselves and our view of the physics, biology, and chemistry of the universe.

And that raises questions too. Is this lens ultimately biased? Does it nudge us to overlook certain things that would otherwise have been interesting to us? A bunch of cool questions. But for now, being able to visualize black holes without having to count your own cells is nice. Yeah. And, yeah, in a way it shows that AI, in these more specific applications, is at the end of the day a tool for processing data and making sense of it.

So, yeah, I'm a pretty firm believer that it won't replace scientists anytime soon, but it will augment them, letting them do things faster and skip some of the pain, especially the pain that people in chemistry or biology have to deal with. If any biochemists are listening to this, I'm sorry, you're probably thinking, oh god, yeah. Well, I guess they should be happy, you know. Yeah, sorry, yeah. Yeah, let's just, um, not think too hard about what counting cells feels like.

Yeah, it's funny. Our next two stories are actually also bio-related.

And the third story is about prime editing: machine learning helps design optimal repairs for specific genetic defects. So this is a tool that can predict the odds of successfully inserting a gene-edited DNA sequence into a cell's genome using prime editing, which is an evolution of CRISPR.

Until now, it's been difficult to predict the factors that determine whether an edit will succeed. You know, they had thousands of different DNA sequences and studied the success rates. Now we can train an algorithm to help design edits that look likely to work.
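The modeling setup described here boils down to regression from a DNA sequence to a measured editing success rate. Here is a minimal sketch with synthetic sequences, using one-hot encoding and gradient boosting; the actual published model is more sophisticated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string as a flat 4-per-position one-hot vector."""
    out = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        out[i, BASES.index(base)] = 1.0
    return out.ravel()

# Synthetic dataset: random 20-mers with an invented "efficiency" signal
# (GC content), standing in for measured editing success rates.
seqs = ["".join(rng.choice(list(BASES), 20)) for _ in range(500)]
X = np.array([one_hot(s) for s in seqs])
y = np.array([(s.count("G") + s.count("C")) / len(s) for s in seqs])
y += rng.normal(scale=0.05, size=len(y))  # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out edits: {model.score(X_te, y_te):.2f}")
```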

Yeah. Especially when you think about things like carcinogenesis and disease, oh my god, I mean, there are a million of them, but cystic fibrosis is one they mention here. The amount of low-hanging fruit, all of a sudden, when we start saying, hey, this class of problems we haven't solved for centuries, we suddenly have tools to tackle. I

I don't know. I think it's hard to predict how many little revolutions we're going to see across all these subfields. It's a really exciting time for medical science. Yeah. It makes me think, we talk about AI takeoff, where AI starts improving exponentially and goes crazy within a year or two.

Well, what if we get something like a science takeoff, where humanity as a whole starts solving these crazy challenges, like cancer, aging, or intelligence augmentation? There's another kind of branching point in history, where humanity changes, like, you know, becomes completely different because of AI. Yeah.

That's true. Yeah. I think that's maybe one of the things that distinguishes these applications from the dystopian AI-takeover scenarios: they're narrow, right? So here we have applications that humans can control and direct, like biomedical research and drug discovery and so on, and they're very, very narrow and focused. So we can imagine humans exercising their agency and steering these applications, using them in the right ways, and so on.

Whereas, yeah, in those other doomsday AGI-agent scenarios, it's precisely humans losing their agency. But who knows? I hope we can avoid the one and have the other. That would be a nice future.

Hopefully. Our last story is another narrow application, as you said. It's about how a deep learning tool boosts X-ray imaging resolution and hydrogen fuel cell performance. It's another case of modeling.

So now there's this AI that generates high-resolution modeled images from low-resolution micro X-ray computed tomography. So, based on my very limited understanding, once you do these really tiny scans of atoms and so on, the sensing can be noisy. So you can apply these post-processing steps to really try to

clean it up. Um, yeah, and if you can model these hydrogen fuel cells better, you can improve their efficiency, and maybe in the future even use something like this to make sense of human X-rays and get higher-resolution X-rays. Yeah, I'm just thinking back to

when I did some work like this at the University of Toronto. One of the hallmarks of this field is how much thought goes into, like, how do I do some judo on my raw data in order to get what I need out of it? You know, Andrey, you were talking about the idea of thinking,

sorry, of more general mathematical modeling tools, not just AI. That's what the field was like when I was there around 2013. You know, people would say, oh, let's take a Fourier transform of this, or, oh, whatever crazy thing. Now it's like, hey, hand it to the magic black-box algorithm and it'll sort it out for you. It's amazing how far this stuff has come in such a short time.

Yeah, it's almost like, you know, we have all this modeling, and it's almost like having a dataset from which you can make further progress. So...
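The core trick in this kind of work is learning a mapping from low-resolution scans to high-resolution ones. Here is a minimal PyTorch sketch of an SRCNN-style model (upsample, then let a small CNN restore detail), run on a random tensor rather than real tomography data.

```python
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Bicubic upsampling followed by a small CNN that refines the result."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bicubic",
                                    align_corners=False)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):
        up = self.upsample(x)
        return up + self.refine(up)  # predict a residual correction

model = TinySR()
low_res = torch.randn(1, 1, 32, 32)   # stand-in for a low-res CT slice
high_res = model(low_res)
print(high_res.shape)                 # torch.Size([1, 1, 64, 64])
# Training would minimize e.g. nn.MSELoss() against paired high-res scans.
```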

Yeah. All right, that was a lot of science. Let's move on to policy and societal impacts, starting with Beijing to support key enterprises in building ChatGPT-like AI models. This is about companies like Baidu and Alibaba trying to launch ChatGPT competitors.

A lot of these companies are based in Beijing, so now the city is going to support them. Jeremy, what did you make of this?

I thought it was interesting for a few reasons. First, we're so immersed in the Western orbit that we start thinking, "Oh, the AI wars are playing out, Microsoft versus Google and DeepMind, Bard versus ChatGPT, Edge versus Chrome," and all that stuff. But on the other side of the Pacific there's a whole different story unfolding, very publicly.

Like you mentioned, Baidu and Alibaba, those two big companies. I wouldn't be surprised to see Tencent jump in. I wouldn't be surprised to see Huawei jump in, or even Inspur. There are a lot of companies in China with a big stake in this. And yeah, when you look at the intersection of China and AI, you learn a lot, and part of it is the way China funds things.

There's this idea of local government vehicles, Beijing stepping up and saying, okay, we're going to lead on this. It's not a federal thing. It's tied to the national AI strategy, but it's funded at the local level. And then there's another dimension: fraud.

We've seen China's semiconductor industry struggle to get off the ground partly because so many of these massive multibillion-dollar investments ended up evaporating through fraud. You have an environment where people are soaked in cash, because the government wants to pour money in.

But, you know, any Tom, Dick, and Harry, so to speak, will step up and say, hey, you know, we have a great idea for a new semiconductor company. You know, they won't necessarily get funded, but when huge amounts of cash are sloshing around, that kind of frenzy takes over. So I think that's an interesting dimension of this one.

Yeah, and it reminds me, you know, I mean, over the past few years I've seen some of this discussion, at least in policy circles, about the AI wars, and, oh no, the AI race between the US and China. So people have been thinking about this for a long time. There are some differing views on how competitive it is,

uh, how much of a race it really is. I mean, we have a lot of researchers from China coming here to work in the US, so things aren't that simple. Uh, you know, there's no easy way to say who's ahead. There are lots of applications; in some respects China is very strong, and in some respects they're pretty weak. Uh, yeah, I think it's, uh,

clear that these countries are going to accelerate their investment in their domestic AI talent; that's already happening. We've talked many times about the AI strategies of different countries, and now Canada and

China, of course, and lots of other countries have these AI strategies, covering things like ethics, investment, and growth.

Yeah. Yeah. I think another interesting question about the race dynamics you mentioned, and what is and isn't a race, I think, oh, Jeffrey Ding has a take on this. He's someone worth following, with his ChinAI newsletter. You know, it's more of a, uh,

cooperative take on US-China competition. There's another dimension, as you said; they mention in the article that ChatGPT hasn't launched in China. Related to that, people really want something like ChatGPT, but it isn't available. It's not entirely clear right now why it isn't available, but it may have to do with censorship. It may have to do with the openness of policy.

Whatever the reason, it's a big opportunity for companies like Baidu. Yeah, and another thing is, I'd also imagine, Chinese is different from English. Yes, ChatGPT can probably speak Chinese, but I can easily see it being not as good in other languages that are less represented on the internet. So that's another dimension...

of how you make this broadly accessible to different groups with different interests and backgrounds. Yeah, actually, that's a really good point. It's a narrative that I think emerged in early 2021, 2020, when we saw Naver in South Korea, and then Russian labs and the Chinese lab, the one that put out Yuan,

Yuan 1.0, and some of the early breakthroughs. I think that might have been Huawei, or no, sorry, that was Inspur. But anyway, yeah, people were basically making almost nationalist language arguments, saying, hey, we should have our own horse in this race. And, um, yeah, how language plays into all of this is really interesting. Yeah. One more point: I always find it interesting that

you look at the US, and we think we know the internet, right? We know what's out there. But when you look at these other countries, like Russia or China, there's a whole different world that's so different in terms of where people go, what things look like, the language and the communication styles. Yeah, it's another fascinating thing...

These are different internets, even though you can access them and visit Chinese websites. It's just something you don't really know much about overall. You have different cultures, you have different internets. Yeah. And it almost makes it harder to develop empathy for one another, if you're seeing different content, presented in different ways, and so on. Yeah. Yeah.

All right, now we can talk about something I know you're very interested in, which is bound to be a big discussion and, I think, absolutely worth having. I believe it's a new book, or a new website. So the website is betterwithout.ai, and it's called Only You Can Stop an AI Apocalypse.

So, maybe Jeremy, you can give us an intro to this one. It's nice to talk about light topics, isn't it? Yeah. So this is a free book published by David Chapman. He has an AI PhD from MIT, was a successful biotech founder, and has focused on AI safety in recent years.

He put this book together, and I found it interesting because there are a lot of different views on the downsides of AI. Some people say, oh, you know what? We should focus on AI developing agency and being misaligned with humans, robbing us of our rightful cosmic inheritance, basically taking over and killing us. So there's that view. But others say, look, we don't think that's realistic. We don't think it's going to happen. We should worry about near-term malicious applications and great-power conflict between states.

This is another distinct view, one that focuses almost on the middle ground between those two possibilities, on what might happen as these AI systems become more and more available, as our world starts to come apart, as the coherence of how we see things starts to be influenced and controlled by AI. So I

thought it would be useful to read a few excerpts from the book to give you a flavor of what David is arguing here. I found it interesting. This is from the introduction to his book. He says, somewhat dramatically and, I think, quite rightly: "AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders."

"Corporations guided by artificial intelligence will find their own strategies incomprehensible. University curricula will become strange and irrelevant."

"Once-respected sources of information will publish mysterious yet persuasive nonsense. We will feel the loss of understanding, feel helpless and meaningless. We may take up torches and revolt against the machines, and in doing so we may destroy the systems we depend on for survival."

He makes a lot of other points too. The last one I'll mention is that he highlights having discussed the long-term future, and what it might look like, with a lot of people in the AI field. That really resonated with me. It matches my experience closely. Quoting David's book again, he writes: "So far, we have accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios."

"We have found none that lead to good outcomes. Most AI researchers believe AI will have a broadly positive impact. That seems to rest on a vague general belief in the value of technological progress. It does not, however, involve detailed thinking about possible futures in which AI systems are far more capable than current ones. Most surveyed AI researchers also concede that a civilization-ending catastrophe is quite possible."

Anyway, it's just this idea that usually, when we're part of a field of research, it's easy to see the positives of what we're doing, but the big picture, like, "Oh my god, we just figured out CRISPR. What does this mean for bioweapons?" for example, is harder to grapple with and often less foreseeable. I think that's the pessimistic take he's putting forward. Yeah, and it's funny that, at one point,

Ray Kurzweil's singularity idea was pretty popular, which is the opposite of an AI apocalypse. That's an AI utopia, where AI liberates us and makes us transhuman or something like that. There was a Singularity Institute and things like that. So, similarly extreme scenarios, but I do think it's interesting that, as you said,

the book's title is Only You Can Stop an AI Apocalypse, but inside, it says: I intend to draw attention to a broad middle ground of dangers that are more consequential than what AI ethics considers and more probable than what AI safety considers. Personally, I find that really interesting. I think with x-risk, I can understand...

those concerns, but I'm not all that convinced by the arguments. I think that's a whole discussion of its own, but these more middle-ground areas will definitely have far-reaching impacts, even if you never get to AGI or any god-like AI. Even if you don't make huge further progress in AI, one thing I worry about is

once we make progress in robotics and some degree of AI, you get into a kind of state of permanent war, where you're just constantly building robots and sending them out to fight. And then we're going to face climate change. We're going to face resource constraints. We're going to face territorial

disputes. So for me, I think there's not enough attention on the hypothetical where there is no alignment problem. What if it's more about how people use AI? That's definitely a major topic in AI safety, but I tend to think about it more from the robotics angle.

Yeah, that makes complete sense. As someone who spends time worrying about existential risk and AI alignment, I completely agree with you. I have to say, this middle-ground idea, right? In my view, we should be investing heavily in mitigating existential risk from AI, but

that doesn't mean we just look at the capabilities we have right now and don't start drawing straight lines through the dots we can connect and saying, hey, you know, what does this mean for weaponized drones? What does this mean for biological warfare...

not to mention the serious cyberattacks that seem right around the corner. Yeah, I'm glad the book focuses on these issues. I think some of the policy ideas are interesting. I feel like there can be a tendency to overlook

people's lack of agency due to great-power conflict; to me, the US obviously can't unilaterally decide not to pursue AI development, which is what the book seemed to advocate as I read it.

So that's challenging. I think it would be great if we could do it. In my view it's the ideal, given my concerns about existential risk. But I just don't think it's something that's feasible in the short term. There are other cool proposals in there too. If you're interested, I recommend taking a look. Yeah, and I think that's the other side of AI safety. Thinking about these things is a bit scary, even a bit depressing,

like any apocalypse, but these middle scenarios are ones we can prepare for relatively directly and try to avoid through policy. With existential risk and AGI it's not as simple, because it's an emergent thing and who knows how it will happen, but for some of the cyberattack and military AI stuff,

those are things we can think through. They're concrete, and there are policies that could be put in place to try to avoid the really bad outcomes. I also think the policies that help here are usually good for the existential risk and accident side too. There's some overlap there, and maybe all the more reason for people to pay attention to the next two years, rather than necessarily the next ten or five, or whenever AGI arrives.

Yeah. Yeah, you could almost say we have a little experience with this, talking about civilization and technology. We've had nuclear bombs for almost eighty years now, and we've worked to keep them from getting out of hand, with some degree of success.

So maybe there's some precedent; maybe we can do it. Jumping into the lightning round, the first story relates directly to this: the US, China, and other nations call for "responsible" use of military AI. The first summit on military AI was held in The Hague.

There was a symbolic declaration calling for the responsible use of AI in the military. So ultimately it doesn't commit to anything concrete, but it's good that this topic keeps being discussed. There are also groups like,

well, there's an organization literally named Stop Killer Robots. What do they want to do, Andrey? You know, I'm pretty sure it has something to do with killer robots, and not letting them exist. Yeah, so yeah, personally, I think it's maybe a little funny, because we have Terminator, which looms very large in the public consciousness. But in all this AI frenzy,

so far we haven't seen much automated military AI. It's another one of those things where it's only a matter of time until it happens. You know, it seems like the public consciousness is being hit with the realization that this is happening.

Yeah. I remember discussing this with someone from the Stop Killer Robots organization, I don't want to say who. He might be, oh god, I'm trying to remember his name. Oh, actually it was Jakob Foerster, I think, at the time, I'm trying to remember whether he was at Facebook or Google. But anyway, he was talking about the problem of how you define automated weapons,

or killer robots in this case, in a way that doesn't let people creep up on the definition and slip past it unnoticed, like the frog in hot water.

One thing he talked about is that you can automate some aspect of it, then gradually keep automating more, while claiming, "Oh, this isn't an autonomous system as a whole." You just have different components. "Oh, it has a little computer vision system, but there's still a human in the loop here," or "if it's going to make that kind of decision, a human gets called in." You can imagine gradually chipping away at

the aspects under human control as you automate more and more, exploiting that slippery slope. And when there's great-power conflict, you can imagine both sides having strong incentives to do exactly that. So I think his argument, and I hope I'm not butchering it,

was that we shouldn't treat that continuum as the thing we want to defend. We should treat the weaponization of these drones, of these weapons, as the hard line that we say no one can cross. Now, obviously weaponization can also mean a lot of different things. Maybe there's a continuum there too. But I thought it was an interesting dimension that feels relevant. How do you define these automated weapons? Can we reach international consensus on those definitions?

For sure. Yeah. It's a topic of discussion, and there have already been news stories like, was this the first use of an autonomous AI weapon, in a Russian drone? There was also a case in Syria. So, yeah, it's still early days, but there are a lot of questions and a lot of things to think about.

And then, circling back to the first story we discussed, about Beijing, our next story is South Korea aims to join the AI race as startup Rebellions launches new chip.

So this company, Rebellions Incorporated, launched a chip meant to compete with NVIDIA in providing the hardware that powers revolutionary AI technology. Yeah, this is a big deal. We've seen the US restrict exports of GPUs to China.

Those are what you need to run AI, so it's a real move to undercut AI development. To me, it makes complete sense that this is happening.

Yeah, and I can't believe how bold their push is here. They say they're trying to take Korean AI chips from nearly zero market share in domestic data centers to 80% by 2030.

That's domestic, of course. But if that really is the goal, I can imagine plenty of challenges getting in the way. Still, one of these NVIDIA competitors is bound to end up doing quite well. It's also a leapfrogging kind of thing. People are trying a whole range of strategies around optical computing and the like,

and I wouldn't be surprised if 2030 turns out to be built on a very different kind of company. That's always possible. Same with quantum computing. What will the AI chips of the future look like? That's another big bet. Yeah. It's a big question, because if you dig into it, this is really only about a decade old. Ten years ago, people realized

that GPUs are useful for AI. Before that, people weren't using GPUs much for this. And there's been a kind of Moore's law trend in GPU capability.

And we're now starting to hit some kind of limit. So in the scaling discussions, I think it's sometimes overlooked that hardware is going to run into some limits fairly soon. And it's not clear whether we can easily get around that, but we'll see.

Finally, in the art and fun stuff segment. First, we have another worrying story: voice actors' voices are being stolen by AI. We just discussed the narration story, and this is very similar; in video games,

again, something similar is happening, where voice actors signing on to new projects are being asked to sign away the rights to their voices to company-operated AI voice generators. In some cases those clauses were included in contracts without some people knowing they were there. So, yeah, it's...

basically the same thing, and by now I think professionals in the field are clearly aware this is a problem and are pushing back against it.

Well, the flip side of professionals knowing it's a problem is me. I have no idea whether the contracts I've signed actually give away all those rights. Personally I'm not too worried, since I don't plan to make a career out of audiobooks. But, you know, like...

I definitely can't be the only one who doesn't read the terms of service. Yeah. Anyway, please buy the audiobooks we make, because they may be among the last ones ever recorded.

Yeah, exactly. Yeah, I think if you make a living this way, it's probably on your mind. Our next story is actually about Keanu Reeves saying deepfakes are scary and confirming that his film contracts ban digital edits of his performances.

Yeah, it's another development where, you know, you can give away the rights to your voice. You can also give away the rights to your likeness. This may become a standard clause that gets negotiated in contracts. I mean, what if, for Marvel, they could generate these Black Widow or other superhero movies without using the actors? That would save a lot of money, you know? So...

It's another fascinating question to see play out. It also makes you wonder, in the long run, and when I say long run I mean the next 20 minutes, how defensible that kind of regulation is, even if you can get someone to start it off,

I don't know. I don't think we're that far from being able to automate the creation of an entire Marvel-quality movie. Ten years out, I'd expect. For a Facebook video, you can already make decent videos. If progress continues the way it has in other areas, I could just grab a photo of Andrey off the internet without asking, spend 30 dollars, and make a video,

and there may be regulation against that, but the enforceability, I really don't know. Yeah, yeah. I sometimes wonder whether I'm too pessimistic because I've been in academia. On one hand, it's like you get used to rapid AI progress and stop being surprised by it. But on the other hand, you know all the limitations. And one of those things is...