我们的第117期节目总结并讨论了上周的大型AI新闻! 查看Jeremie的新书《量子物理让我这样做》 阅读我们的文本通讯,网址为https://lastweekin.ai/ 本周故事: 应用与商业谷歌调整助手领导层以专注于Bard AI Alphabet的谷歌和DeepMind暂停恩怨,联手追赶OpenAI
由于人类金属工人难以找到,机器人铁匠开始崭露头角
微软的必应聊天机器人正在投放广告 是的,当然,YC的冬季班充满了AI公司 谷歌与AI初创公司Replit合作,挑战微软的GitHub 销售员的诞生:OpenAI脱下实验室外套,寻求大交易
介绍BloombergGPT,Bloomberg的500亿参数大型语言模型,专为金融量身定制 斯坦福大学关于AI现状的386页报告的要点 快讯 机器人用腿作为手臂攀爬和按按钮
ChatGPT之王并不担心,但他知道你可能会担心 欧盟的人工智能法案解读 快讯 一家小公司如何在几乎没有规则的情况下使假图像走入主流 FTC正在审查人工智能领域的竞争 Midjourney因“极端”滥用而结束其AI图像生成器的免费试用
中国创作者利用Midjourney的AI生成复古城市“摄影” 哈利·波特角色的人工智能深度伪造视频在假Balenciaga时装秀中走红
<raw_text>0 Hello and welcome to Skyner Today's Last Week in AI podcast, where you can hear us chat about what's going on with AI. As usual, in this episode, we'll provide summaries and discuss some of last week's most interesting AI news. You can also check out our Last Week in AI newsletter at lastweekin.ai for articles that we did not cover in this episode.
I am one of your hosts, Andrey Karenkov. I'm currently finishing up my PhD at the Stanford AI Lab, and I'm about to work on generative tools for video games. Oh, okay. So just so everybody knows, that was the first time I found out that Andrey is going to be working on generative tools for video games. Yeah, yeah. I'm just about to start work, so it's brand new. Oh, look at that. Look at that. Can you share the company name?
It's a small kind of stealthy thing. Oh, okay, okay, okay. Oh, Stealth Co. I love Stealth Co. Okay, great. Yeah, yeah. Sorry, I got distracted. I'm your other co-host here, Jeremy Harris. So I do a bunch of work on AI safety stuff with a company called Gladstone AI. We work with AI alignment researchers and policy researchers at some of the world's top labs. The names you'll probably know. And yeah, I work on AI safety. I have a book also, I should mention.
My editors will kill me if I don't. It's called Quantum Physics Made Me Do It. What the hell does that have to do with AI? Well, you can find out by reading the book. And anyway, it actually is connected kind of in a cool way. Think AI sentience, that sort of thing. Anyway, if that's your cup of tea, please feel free to check it out. And without further ado, we have a big week, I guess, to cover here. Eh, Andre? Yeah, we sure do. As usual this year, every week has been...
It's crazy. This podcast used to be like an hour or shorter and now it's just an hour and a half every time. So we'll see how this one goes.
Let's go ahead and dive in. First, our application and business stories, starting with Google shuffles assistant leadership to focus on barred AI. So this is about an internal memo that basically showcases how Google is freaking out about what's going on with ChatGPT and how
Their release of BARD, their version of Bing Chat has seemingly not gone very well. So as a result, there's a bunch of basically turmoil within the company. What did you take away from this, Jeremy? Yeah, I think you're right. It feels...
It feels like turmoil from the outside. It's always hard to know. The narrative, the public narrative is like, oh, OpenAI is setting the course here with chat GPT and GPT-4, and Google is kind of responding, and it seems panicky. Obviously, we don't know. We can't tell what's happening under the hood. But I will say there are kind of two big categories of facts.
that often can give you insight into what a company is doing that a company really can't hide. And one of them is major personnel transfers. That's this. And the other is major equity shuffles. We saw that when Microsoft, for example, took a 49% stake in OpenAI. That's a signal you just can't hide for various legal reasons. There's just no way around it. So I think this falls into that category of very reliable companies
unhideable information. I think the implications are interesting. It was a leaked internal memo too, so worth noting this isn't something that Google just broadcast. But yeah, I think one of the really interesting things is that we're seeing people get pulled off the Google Assistant team and moved into the BARD team, so senior executive in this case, getting pulled off Google Assistant, which is a pretty successful thing in and of itself.
So the theory here is that maybe that executive is now going to be focusing on how to build BARD in such a way as to help with the Assistant product. So we might see Google Assistant potentially moving in the direction of less like an expert system type of setup where it has pre-programmed responses for things and more towards a kind of AI first chat GPT style bot.
Which in its own right, I think is fascinating, right? This is like a fundamental shift in the kind of AI technology, the kinds of home AI systems that we've seen, where we're moving from these pre-programmed response streams that are more controllable, but less flexible to these like AI first strategies. So that was, I think, some of the high level thoughts that I had.
Yeah, I think it's interesting how this highlights Assistant in particular. If anyone doesn't know, if you're an iPhone person, Assistant is basically Google's Siri equivalent. Both text and audio, it kind of does both. So you can easily imagine how something like ChatGPT would plug into something like Siri or Google Assistant in a lot of ways.
And yeah, so to me, it kind of makes sense for them to try and get those two to meet up and sort of glue their code together. As you said, the technology behind Assistant is probably a bit old school. There's a bunch of parsing and querying databases and things like that. So it'll be interesting to see how quickly they can roll out this refined version that's based on something like BARD.
And related to this, I also noticed there was an article in the information that is also not some sort of public release. This is interviewing from employees at Google and DeepMind. So DeepMind is a subsidiary of Alphabet that is a big AI lab, effectively. It does very little business, AlphaGo,
AlphaFold, all these things you probably heard of what they do. And this article shows how now there is teamwork between Google and DeepMind to compete with ChashGPT, with this project that's codenamed Gemini.
So that's another sign of how big a deal it is for Google. Usually Google and DeepMind have been pretty separate. Avuna had some kind of divisions and some disagreements, but now it looks like there's teamwork, all hands on deck for this.
Yeah, and connecting it to the story we just covered too, as we look at this transition from these more expert systems, like you said, pre-programmed, in a sense, pre-programmed responses, querying databases, as we shift from that to these AI-first systems,
The risk starts to go up a little bit too, right? You can imagine you ask this thing for health advice, you ask it for psychological advice, travel advice. It's going to start to affect your behavior in the real world in a way that's more significant. And so now, yeah, with this story, I think we're seeing another consequence of this acceleration that was caused by ChatGPT's super public release.
And Google now sort of being confronted with like, what do we do? And historically, there has been tension between DeepMind and the mother org here, right? We know, for example, that DeepMind was founded by, among others, Demis Hassabis, who was very focused on AI safety, who didn't want to risk losing control over the org in the Google activity.
acquisition in 2014. And so they had a whole bunch of like independent oversight requirements and things like that, that were introduced that caused friction with the mother org. And it's kind of interesting now, if you're a deep mind, it's like, okay, we want to design and develop AI safely. We set up a bunch of kind of blockers, kind of points of friction with the parent work just so that we could do that. But now we have open AI. They're just going to go out and like
like seemingly launched their own stuff anyway. So is it better if we just say, okay, screw it, we're going to partner now fully with the mother org? You know, I'm glossing over a whole bunch of detail here, I'm sure, and there's going to be nuance under the surface. But at a high level, it kind of seems like, you know, we're seeing that race to the bottom on safety and people are making bigger and bigger bets just to gain the extra couple of weeks or months in releasing these new products.
Yeah, exactly. I think it is kind of interesting to note this parallel between DeepMind and OpenAI because going back to 2015, DeepMind was, I think, around 2013 acquired by Google, OpenAI was started in 2015. And initially, they were kind of the same basic thing of a big lab in industry that sought to create general AI with the goal of doing it safely.
roughly speaking. And then over time, their approaches have really diverged where OpenAI put all its chips into scaling up and in particular scaling up language models. In the first two years, OpenAI was doing all this reinforcement learning, playing Dota and having robot hands, and then they sort of just threw it all away and almost exclusively are working on language models. DeepMind
has not gone that course it has done work on this language stuff but it has also done work on you know healthcare ai and a lot of reinforcement learning i think we discussed one of their works recently with the general uh human uh level adaptation or whatever it was so yeah i think
It's not super relevant, but it's kind of interesting to see how over time the two big AI labs diverged and to create different courses.
I think it is actually totally relevant on the safety side. I think you're exactly right. You have these two orgs that kind of agree on the risk, but they have completely different philosophies about where the risk's going to come from. And to add to what you just said, OpenAI on risk seems to have this view that the best way to de-risk these systems is...
is to release them, let the world play with them and learn the ways in which people find to like jailbreak these systems, to kind of make them, to use them for malicious purposes and things like that. And they're hoping, they're essentially putting all their eggs in the basket of hoping that by doing that, you can kind of iteratively work your way to a safe superhuman AI system. Whereas DeepMind seems to have the view that like, well, wait a minute, there might be like kind of a phase transition at some point.
where these things become intrinsically dangerous and we don't know when that's going to happen, but we want to kind of think in terms of that leap and not necessarily default to releasing things. And it's not like we don't know who's right here. We just know that the answer is going to be really important. And that's kind of a weird space to be in, like these two philosophies being so different.
Yeah, no, there's a lot of interesting parallels where, you know, DeepMind is owned by Google or Alphabet and OpenAI is, you know, largely...
You know, Microsoft is doing a lot of steering just because they have the money for the compute. And so Google hasn't really pushed DeepMind to be very profitable or really at all profitable. They've done some revenue generation via different kind of systems, but they never really did try to go for that. And they're still publishing dozens of papers. OpenAI
doesn't really publish papers. They publish reports, but they don't have a lot of details. They used to publish AI safety papers, but I haven't seen that lately. I don't even know if there's a team that's doing that research anymore. DeepMind has published a lot of papers on AI safety. So
I don't know. It's a good contrast and it does seem like maybe AI research is not the priority for either of them right now. Yeah. To what you were saying about DeepMind and profitability, I believe they were officially profitable for the first time about two years ago.
So it's like, and when we say profitable, it's not like they're selling stuff to other companies. They're finding ways to save Google money. And one of the projects that, Andre, you were alluding to just for listeners, like one of the big ones, the big success stories was finding ways to basically cheapen the cost of deep mines like servers running and making them burn less, like cool. I think it was cheaper server cooling or something like that.
So, those internal use cases being really important. And OpenAI does have a safety team and alignment team in their defense on this, and they are focused on it fairly significantly. I think one of the challenges is too, it's starting to get really hard to tell what is safety research and what's capabilities research.
It used to be really clear. Like it used to be, you could say, okay, this paper is about controlling the AI system. And now this paper is about capabilities. But if you look at chat GPT, that arguably is a success in scaling. So capabilities, but also tuning the behavior of that scaled model through human feedback or LHF. And that's, that used to be a,
safety thing. And so I think we're seeing a blurring of the lines and I think people are starting to worry about giving away effectively trade secrets, even just by sharing so-called safety stuff. So not necessarily a great thing if you want safety knowledge to be very widely distributed, but it's an interesting dimension of this for sure. Yeah, that's a good point. I think when this is a commercial product and one of the ways that
that you can be better than the other product is to be more reliable, right? Or to not say bad things. It's a competitive advantage to have better AI safety. So that is kind of awkward. It's good and bad, I guess, because at least people are competing to do it better, but then they're not sharing it. I don't know. I don't know where to fall on that. Yeah. Yeah.
And on to the next story, moving away a bit from language models, we have something about robotics. So the Robotics Business Review has published the kind of set of investments that showed that in February 2023, robotics investments totaled $620 million. This was just published end of March, to be clear. So...
This is a bit of a few months away. It has a pretty cool breakdown showing that
There's 36 major investments. So the total for 2023 so far, I guess for first few months has been over 1 billion. And some of these sectors include drones that are used for a lot of stuff, data analytics for farms or for infrastructure or various things like that.
And then of course there's construction and healthcare, a lot of these different things. And they have some nice breakdowns of investment amounts, stage, country, things like that. So yeah, I found it kind of neat to see this breakdown of what's happening in robotics.
Yeah, it's also like it's always tough when, you know, you're in an industry and you're looking at like 36 deals in total because it can be so noisy from, you know, from one quarter to the next. But overall, yeah, it seems like, you know, good, healthy activity and definitely countering that narrative a little bit that we've seen. I know it's a narrative I myself have been guilty of propagating on the show specifically. So I do apologize to our listeners.
But historically, people have said stuff like, hey, language models are just taking all the air out of the room for these robotics companies. And it is kind of cool to see that, in fact, it's very much an active area of investment. Maybe a good kind of moment to throw out there, just an idea that I've been mulling over about AI companies and robotics.
The purpose to a significant degree of venture capital is to back very fast growing businesses that have long payback time horizons.
And the challenge with a lot of these software-oriented products is, yeah, they might go super fast on the way up, but how long will they actually last until there's a complete paradigm shift? And all of a sudden, there's some version of ChatGPT that is, I don't know, not an embodied thing, but anyway, we just find an audio-based system or something like that that just completely takes out this entire class of product.
It's very much not inconceivable that happens on the order of months now, just given how fast things are moving. Robotics, on the other hand, still has that characteristic of like, it's a moat. It's harder to compete. So it can make more sense to invest in arguably...
think, and this is where I'm just throwing out a half-baked idea that I've been mulling over and talking to some of my angel investor friends about, but maybe hardware becomes the place where VCs see the most returns, paradoxically, in the coming five years. Maybe it's not software so much despite all the hype, but anyway.
Anyway, just a random thought there. Yeah, it's interesting, I guess, if you are into investing, there's always a risk return trade-off for robotics. You probably have less risk just because there's fewer players, but it'll take way longer to get to revenue and profit as well, obviously.
So, yeah, I'm curious to see what will happen for the rest of this year. There's some fun breakdowns here. It shows that, for instance, 25% of these investments were seed and 14 were A. So these are pretty early stage investments for the most part. A lot of them on the US and China, things like that. So I think...
And one other thing that I am curious about, and I don't talk about that here, but it seems to me that probably when you talk about venture capital firms, they have different specializations. So some of them specialize in hardware industries, right? Manufacturing and logistics and things like that. And other ones are more software oriented. And I do feel there is still...
a set of investors that their wheelhouse is just in the physical world. And that means that we're not going to suddenly switch over to invest in ChagGPT or whatever, right? And in a way, there's real problems that you can only do with actual physical robots. So I think, yeah, let's not discount the importance or I guess...
popularity of this stuff. It's not exploding or skyrocketing like all these other things, but I think we will continue to see more and more robotics out there. Yeah, there's an old saying in YC, they say hardware is hard. And I guess this is why. As a last quick question, oh, sorry. Yeah. So speaking of that, I noticed another story I just felt like mentioning related to this. So
Bloomberg just had this article with human metalworkers hard to come by robotic blacksmith step up. And this is covering a particular company, Machino Labs, which is developing these robotic systems that can do what humans have traditionally done, this prototyping work, blacksmithing essentially.
And that is very tricky because you have to work with actual materials and you need some of that sort of, I guess, intuition and hand skills. So now with computer vision and AI, you can actually shape these materials in a robust way.
So yeah, that's just one example I thought was interesting where you would think that blacksmithing is not so much of a thing anymore. But no, actually, you still need people to do it for a lot of these prototyping steps early on. MARK MANDEL: Yeah, and another example of how hard it is to predict which jobs are going to be vulnerable to automation in the coming years. We have yet to see if they'll actually succeed with this at scale, but kind of interesting.
Also, looking back at this, there's a chart that comes with the robotics investments article that we were talking about a minute ago. I noticed China is about a fifth of the investment amount that the US is. I'm trying to decide whether that's surprising.
There's definitely a more developed VC environment in the States, but China tends to be really good on hardware. The whole Shenzhen ecosystem is just really well-developed. I don't know. Did you find that surprising? Now that you mention it, yeah, I think so.
I looked at the pie chart and there's fewer investments, but by amount, it's pretty surprising because you have like 450 million investment in the US and just 80 million in China, which is surprising. I mean, they have a lot of manufacturing going on. So as usual, when you crunch for data, there's a lot of questions as to how it was compiled and so on. But
Yeah, interesting. And I think good to keep an eye on just if you really want to have a comprehensive idea of what's happening in the AI economy, it's worth it to not forget about robotics.
And on to the lightning round of stories. First up, we have Microsoft's Bing chatbot is getting ads. Pretty much what this says, we're getting ads to pay for what is kind of expensive to service models.
Isn't it a rare treat when the actual article title just tells you what's it? Yeah. I mean, if we're going to pick this apart, and I think there are a couple of little nuggets here, one of which is the specific way in which they're showing the ads. So they show an example of a person asking, which is the cheapest Honda car? And you see the response and it says, yes.
The cheapest Honda car for 2023 is blah, blah, blah, according to Truecar. And then there's this little kind of superscript, a little blue box, and it has the word ad in it. And I don't know, I kind of thought this was a fairly nice, not too intrusive, but very clear indication.
I think it's just a cool moment. We should not lose sight of the fact that this is another fundamental change in the way we consume ads and that a whole bunch of micro decisions around user experience and interface design have had to be made in order to make this a reality. And I'm sure we're going to see a lot of iteration on this and we're going to be redefining what it means to serve ads on the internet through this kind of chatbot, but kind of cool to see the first manifestation of that here.
Yeah, it's interesting just to talk about this a bit more. There's a screenshot and basically there's a question of which is the cheapest and then there's a text answer. And then Bing does this thing of providing citations. So the citations here seem to be kind of the ad. So basically what looks to be happening is, right, if you Google or Bing at the top, you'll have a couple of links that are ads, right?
And being part of what it does is retrieve information from websites to answer a question. So what appears to be happening to me is they just retrieve that ad website and, you know, put the text in. So to me, it's kind of interesting, like, well, what is the return on investment for the people doing the advertising? You don't get a click, you don't get ad money. So...
I guess it's a good response and you get your answer out there, but is this what you actually want? Yeah, it's very interesting to see how they can...
make this stuff profitable. And you can just, you can feel that trade-off, like you said, between wanting to serve an explicit ad that you can get credit for making them click on and giving people content that they actually want. And it's, there's just this thing that happens. I don't know. I found myself wondering this as I read that screenshot, you kind of get this like increased cognitive load because you're reading what the chat bot wrote for you and you're sort of treating it like they're a salesman on commission. And you're like, ah,
"Okay, how much of this are you saying because you're trying to serve the ad?" It's already in a context where there's all this question about how accurate chatbots are to begin with. You're constantly experiencing this epistemic challenge where you're like, "What does this thing actually know and can I trust it?" I think something we'll have to see ironed out over the next few months.
Yeah. Yeah. It'll be very interesting. I mean, just thinking of this example, it's asking what is the cheapest Honda car? I could see myself asking, well, what is playing in the movie theater near me? And then the ad is just to switch the ordering of the answer to be like, list this movie first. And that's, I don't know, it feels a bit weird, right? That...
You're talking to this thing in a way, and you may not even be able to tell how its response is shaped by marketing, but it may just be shuffling around the text and emphasizing certain things and de-emphasizing other things based on payments. That's not even getting into reviews and asking about the qualities of products. So yeah, very...
interesting to see where this goes. So far, it's pretty simple. Next up, we have an article from TechCrunch that is about how YC's winter class is oozing with AI companies. YC is Y Combinator.
accelerator of startups. So this is where startups get some investment and basically get started. And yeah, it's showing how about 35% or 91 startups have something to do with AI, which is a pretty big fraction. It used to be much smaller. And I think it goes into some of the details of what companies there are.
Yeah, and I think it is quite noteworthy this is Y Combinator because just like some inside baseball for people who don't track Silicon Valley, like stupid politics and party gossip or whatever. But back in the day, at least back when I went through Y Combinator actually with my company, Sam Altman, who now is the CEO of OpenAI, was then the, I think he was the CEO of Y Combinator at the time.
And so he really, for 10 years, or 10-ish years at least, kind of set the course of Y Combinator, grew it massively, and had a huge influence on it. And so...
As a result, OpenAI enjoys this special, somewhat privileged relationship with Y Combinator. It may be less surprising that you find OpenAI's products may be preferred by the Y Combinator startups. Also, they're just amazing products, of course, but that's some inside baseball that's maybe relevant. Also, I should flag, YC does not tend to fall for hype.
I remember back in 2017, 2018, that was when I was there. You saw that it was the crypto craze was on full steam. People were going nuts for this. Investors could pour money into basically anything that had .eth as a domain.
我们的第117集,回顾和讨论上周的大型AI新闻! 查看Jeremie的新书《量子物理让我这样做》 阅读我们的文本通讯,网址为https://lastweekin.ai/ 本周故事: 应用与商业谷歌调整助理领导层以专注于Bard AI Alphabet的谷歌和DeepMind暂停恩怨,联手追赶OpenAI
由于人类金属工人难以找到,机器人铁匠开始崭露头角
微软的必应聊天机器人正在投放广告 是的,当然,YC的冬季班充满了AI公司 谷歌与AI初创公司Replit合作,挑战微软的GitHub 销售员的诞生:OpenAI脱下实验室外套,寻求大交易
介绍BloombergGPT,彭博社的500亿参数大型语言模型,专为金融领域从零开始构建 斯坦福大学关于AI现状的386页报告的要点 快讯 机器人使用腿作为手臂攀爬和按按钮
ChatGPT国王并不担心,但他知道你可能会担心 欧盟的人工智能法案,解读 快讯 一家小公司如何在几乎没有规则的情况下使假图像成为主流 FTC正在审查人工智能领域的竞争 Midjourney因“极端”滥用而结束其AI图像生成器的免费试用
中国创作者使用Midjourney的AI生成复古城市“摄影” 哈利·波特角色的人工智能深度伪造视频在假巴伦西亚加时装秀中走红
<raw_text>0 但YC的合伙人们真的很冷静。他们支持了一些非常扎实的创始人,非常基础设施级别的初创公司,毫无光彩。我记得Quantstamp就是一个变得非常大的例子。实际上,在...
有趣的是,我认为在我的批次中,我们有来自OpenSea的Devin Finzer和Alex Atala。他们非常冷静,非常清晰。当时NFT还不是一个热门话题,但他们却看到了机会。所以我想说,我认为这不是巧合。我认为这并不是仅仅反映了炒作。如果我必须猜测,我认为这些工具确实是如此具有影响力,而Y Combinator的合伙人们看到了这一点。
是的,确实如此。本文还指出,在34%的初创公司中,有20%(即54家公司)特别是在生成性AI领域。所以像Jadgbt或Midjourney这样的东西。我认为...
是的,我认为这对我来说是有道理的,与加密货币不同,你实际上可以相对快速地构建一些非常有用的东西。因此,有很多更小众的应用。有像SEC合规、特定领域知识、金融、大型语言模型、AI驱动的簿记和财务,帮助随叫随到的开发者,
所有这些东西。因此,所有这些不同的应用程序,AI可以成为你可以利用的工具,无论你处于哪个问题或领域。是的。所以这也许并不令人惊讶,但绝对...
这表明了这一趋势有多大。我认为值得注意的是,YC在几天前举行了它的演示日和校友演示日。我注意到的一件事是,我们谈论这些公司往往以惊人的速度增长,尤其是生成性AI公司。
我在演示日查看了一些演示,我得说,天哪。我记得看到一家公司每月增长400%,这就像,
如果你不习惯投资初创公司,正常的增长量是如果你看到甚至20%,但实际上更像是40%的月增长率,你会觉得,哦,天哪,这在种子阶段真的很令人兴奋。我们在谈论400%,每月增长4倍。
我不知道这是否可持续。我认为可能是,但确实存在一些基本问题,可能会在你身边发生变化,你的基本面被抹去,我们之前谈过。我认为这是一个观察AI初创公司领域的有趣时刻。是的。作为一家初创公司,这是一个有趣的时刻,因为所有这些非AI公司都
在受到投资者的压力,要求他们做一些与AI相关的事情,对吧?所以在这一点上,即使是像小型初创公司,CEO可能只是说,我们需要使用AI。那么我们来想想我们的GPT-4战略是什么?是的。所以我可以看到这种增长部分是由炒作推动的。这是一个有趣的时刻。
说到初创公司,我们的下一个故事是谷歌与AI初创公司Replit合作,挑战微软的GitHub。所以这是一个目标,代码补全,GitHub的副驾驶已经在做的事情,它拥有这种幽灵写作技术,和,
是的,我不知道。你对这个故事有什么看法,Jeremy?是的,Replit实际上是YC的一个宠儿。它实际上是来自Y Combinator本身。我认为它得到了Paul Graham本人的支持,他是Y Combinator的创始人之一。
并且增长速度非常快。我的意思是,好吧,增长速度非常快。ChatGPT在两个月内达到了1亿用户,所以让我们冷静一下。但是,你知道,他们很快就达到了2000万用户,而且他们的用户非常有价值。他们往往是开发者和开发者类型的人。所以这有点像,你知道,
进入与GitHub的竞争。它旨在使编程环境的启动速度非常快,以便你可以快速开始编码。他们正在开发的这个幽灵写作工具是他们对GitHub副驾驶的回应。稍微放远一点,我们开始看到这种技术访问形式,一边是微软支持的公司,包括GitHub和OpenAI。
而另一边,我们看到谷歌支持的公司。这包括Anthropic和Cohere,它们正在构建大型语言模型,但现在也包括Replit。因此,Replit似乎是谷歌阵营对微软阵营GitHub的回应。每个人都必须有自己版本的该死的东西。所以在这里,我们看到的是Amjad Massad,他是一位非常非常有能力的CEO。我的意思是,至少我个人对他印象深刻,仅仅基于我在Twitter上看到的内容。是的,他看到了AI的全部潜力,非常具有加速主义者的特征。
我们看到... 他谈到谷歌将如何帮助他们与GitHub及其副驾驶竞争,基本上通过使用Palm和Lambda,这些是谷歌在后端工作的重型模型,并将其重点放在代码生成上。所以,是的,我认为... 无论如何,这是一位非常重要的参与者。我认为这是一个值得关注的对象。你将在不久的将来看到更多的Replit。是的,我认为这是一个有趣的观点,因为这是一种竞争,因为...
你可以将其视为竞争是好的一个例子。微软有Word和PowerPoint,而谷歌有其Google Drive产品。他们都有数据存储。他们有电子邮件。他们有所有这些东西,双方都有自己的版本。现在,他们都将尽快添加对AI功能的支持,以便于用户,没错?是的。
是的,确实如此。如果你是自动生成代码的粉丝,这对你来说只是好事。看到这与副驾驶的不同之处也很有趣。我们还不知道这里将做出什么样的决策。此外,一个开放的问题是GitHub被微软整体收购,对吧?他们买下了整个公司。Replit与谷歌合作。所以现在你有这些问题,比如,Replit将通过越来越多地基于谷歌产品来承担多少战略风险?
潜在地共享他们的数据,那里会发生什么样的数据共享?Replit的数据是谷歌的数据,反之亦然?那看起来是什么样的?根据这项协议,我们不知道很多事情。收购和合作之间的区别在这里变得非常重要。无论如何,看到微软和谷歌试图在这些公司上大展拳脚并如此戏剧性地相互竞争,真是太酷了。
是的,这几乎就像电视剧《继承》一样,但公司之间的所有这些举动就像一场国际象棋比赛。
进入最后一个故事,我们有销售员的诞生。OpenAI脱下实验室外套,寻求大交易。这来自《信息》。这是一个概述,包含记者的各种采访,报道OpenAI如何转变为更具盈利性的公司,特别是因为他们去年开始进行销售。因此,他们过去几乎是研究公司。
然后他们开始提供这些GPT-3的API,你可以付费获取这些模型的访问权限。现在他们实际上有员工去获取客户,对吧?这就是你在软件公司中通常会有的情况。它涵盖了他们获得的一些主要客户。因此,它涵盖了Khan Academy现在正在做的事情。
为访问ChatGPT支付了相当多的费用。它说OpenAI现在的收入生成速度似乎正在朝着数亿美元的方向发展,显然,根据某些来源,这相当令人印象深刻,我会说。是的,所以这是一个相当有趣的文章,包含各种细节。
是的,我认为看看当你将智能商品化时,价格点最终会是什么样子也会很有趣。在人类历史上,智能一直是这种有限资源,我们尚不清楚在文明规模上这是什么样子,因为我们无法自动化所有智能。但显然,像辅导或教学这样的特定任务,似乎是低垂的果实,无论好坏,它们看起来都非常...
在至少这一代系统的瞄准范围内。是的,我认为还值得注意的是OpenAI已经开始赚取的资金数量。因为当你谈论数亿美元时,你会想到训练GPT-4的成本,对吧?估计各不相同,但通常听到的都是数亿美元。因此,我们现在谈论的是OpenAI达到可以开始自我融资的门槛。
我认为这与不久前我们谈论微软购买OpenAI 49%股份的情况是一致的。我们讨论了为什么这是一个有趣的数字。为什么49%?好吧,如果你是OpenAI,你不会再放弃一个百分点。你会失去对公司的控制。考虑到OpenAI认为自己所处的使命,他们不会愿意这样做。
这与一种说法一致,即“嘿,我们认为这是我们将来最后一次融资。”这也与此一致。现在他们带来了足够的资金,可以自我资助下一代大型AI系统。也许另一个迹象表明OpenAI正在达到经济起飞的点,就像逃逸速度一样。看看这是否在生态系统中更广泛地保持下去将会很有趣。
是的,确实如此。你知道,我认为我们刚刚讨论了有这么多YC初创公司。现在除了OpenAI之外,几乎没有其他API可以使用像ChatGPT或GPT这样的东西。他们是唯一的参与者。我们可能会在今年晚些时候看到谷歌和可能的亚马逊,也许还有Anthropic的竞争者。
但他们确实从这些初创公司中获得了很多这种内置收入,这些初创公司开始自己盈利并为这些东西付费。因此,是的,他们很快就会赚到数十亿美元,我想这样说是公平的。
进入研究与进展的故事。首先,我们介绍Bloomberg GPT,彭博社的500亿参数大型语言模型,专为金融领域从零开始构建。所以如果你不知道,GPT是...
代表生成预训练变换器。这是OpenAI自2018年以来一直在使用的神经网络的名称。现在我们看到所有这些不同领域的GPT变体。我认为我们上周讨论了医疗GPT,现在我们有Bloomberg GPT,这是针对金融的。是的,有一点细节,基本上它完全是关于金融的,正如我们现在可能预期的那样,在所有这些方面都非常出色,并且实际上正在被整合到彭博社的产品中。
是的,我足够老,记得当机器学习论文令人兴奋,因为它们是算法突破。人们会说,哦,看看这个算法。看看我做了什么。它是如此聪明。你会深入研究,你会觉得,哦,我很兴奋想知道他们是如何用这个算法做出这个聪明的事情的。现在似乎创新都在数据上。就像,是的,我们把这堆巨大的高度专有的策划数据扔进了我们一直在反复构建的同一个愚蠢系统中。大致上,这里似乎是这种情况,他们实际上只是使用了用于训练Bloom的代码,这是几个月前发布的开源训练语言模型。他们制作了一个500亿参数的版本,并在这个混合数据集上进行了训练。因此,我认为这里的创新之一是他们拥有一个数据集,一部分是来自网络各地的自然语言通用内容,另一部分是金融特定领域的内容。
他们似乎对这一点感到兴奋的是,这个模型在通用推理方面表现得非常好,同时又不失去其金融的优势。因此,它能够在这两个领域中实现接近最先进的水平,这...
这有点像是一个有趣的中间地带,对吧?我们在之前的几集中谈到过,你说过,关于通用性之间的战略差异,制作一个可以做所有事情的模型,表现得相当不错,而不是制作特定模型,比如我想是叫MedGBT。我试着记住。但它是专门为医学训练的,不能做其他任何事情。这可能是一个中间地带,归根结底,这一切都归结于训练数据,或者至少看起来是这样。是的。
是的,确实如此。我认为上周我们谈到过,似乎出现了一种新兴模板,关于如何设计这些系统。你真的不需要过于关注所有的细节。你采取一种标准的方法,可能就很好。
因此,这再次证明,他们在这里基本上没有做任何太有趣的事情。他们只是组合了一个模型,并展示了它在这个新焦点、新数据集上的有效性。
顺便问一下,Andre,作为一个斯坦福大学的AI博士,你从我们看到的一切都是GPT某种东西中得到了什么启示?即使你必须特别在特定领域数据上进行训练,你认为听众,如果这是...
他们并没有真正考虑到这一方面。你认为“GPT适用于一切”似乎是解决方案的主要信息是什么?我认为这有点令人兴奋。过去几年,学术界和研究界对此一直很兴奋。我认为人们已经在思考,哦,变换器似乎能够做一切。
你知道,OpenAI开始做GPT,因为在2018年,那种做语言建模的变换器和不断增大的趋势是非常流行的。他们并没有发明这一点。他们以某种方式跳上了炒作列车。因此,我认为一般来说,AI研究人员发现这种单一模型类型似乎能够做一切是相当酷的。我认为在研究方面,
人们已经有很多批评,关于那种只想在某个基准上获得最佳数字的研究类型。
因此,作为研究人员,我的看法是,如果我们不再在调整模型或收集数据集或其他方面投入所有这些工作,那可能对学术界是件好事,因为现在的研究将更多地集中在安全性、可解释性或用户界面等我们需要的东西上。
随着我们开始部署这些系统,我们不需要更好的性能。我们需要更好的理解。而这正是学术界可以真正介入并做一些行业不太有动力去做的事情。
酷。是的。我总是觉得有趣,尤其是在这里听到学术界的这种观点,学术界在这里能做些什么?因为这显然是一个永恒的讨论,随着这些东西开始进入2亿美元的训练规模。小型学术实验室如何进行相关研究?是的,我完全同意。我认为安全性和可解释性,以及用户体验等问题开始变得越来越相关。
是的。关于这一点,我发现有趣的另一件事是,他们不仅仅为这个Bloomberg GPT发布了一份新闻稿,他们还发布了一篇完整的论文。
他们计划发布训练日志。这篇论文非常详细,包含数据集、模型、评估的细节,当然,还有伦理和开放性部分,以及关于架构的整个附录。因此,彭博社决定发布如此详细的论文,这有点有趣。我不知道这是否是因为他们试图吸引研究人员并招募人才,或者是什么促使这一举动,但看到我们实际上可以了解这里发生的事情,真是太酷了,不像GPD4。好吧,是的,正如你所说,我认为这是一个数据点,至少表明顶级对冲基金
在顶级投资银行公司的最先进技术实际上将远远超出这一点,对吧?因为你根本不会作为一家依靠信息优势和对市场建模的优越能力赚钱的公司,去分享真正的尖端技术或工具。
因此,我认为如果有的话,这给我们一个指示,彭博社认为,好的,我们的模型比这个要好得多,因此我们可以在多大程度上对这些系统及其工作方式保持相对开放。
是的,我想这也是,明确地说,在某种程度上是辅助性的。它内置于终端中,以帮助终端用户。你知道,彭博社在这些投资者终端上几乎是垄断的,因此他们在共享细节方面没有太多损失。这是正确的。数据优于模型似乎也是一种趋势,是的。
接下来,我们从TechCrunch获得的主要故事是,斯坦福大学关于AI现状的386页报告的要点。因此,...
2023年斯坦福大学人类AI研究所的AI指数报告刚刚发布。这是一份每年发布的报告,已经发布了几年,可能自2020年以来。我不太记得确切的时间。这只是对发生的事情的总结,以及对行业、学术界和社会的全景,涵盖了很多内容。是的,所以有很多
可以从中提取的内容。当然,突出的内容是,AI的发展越来越多地由行业主导,显然,较大的模型和更具影响力的模型,但在研究方面,行业也越来越多地进行出版。事实上,他们在这里还有一个有趣的部分,关于
AI事件和争议的数量正在增加,这是一件有趣的事情。但有一点滞后。因此,他们的数据截至2021年。我们没有看到去年或今年的情况。因此,是的,他们展示了自2010年到2020年之间的指数趋势。
我认为2012年他们开始时,可能只有10个故事。人们并没有那么担心AI。现在它已经遍布新闻等。是的,确实有数十个不同的图表。总体情况是,事情发展迅速,一切都在快速增长。
实际上,那里你去,节省了大约300页的阅读。事情发展迅速。是的,似乎一切都在增长,除了政策响应。人们正在努力提出这些连贯的方法来应对这一切意味着什么。我们的政策周期时间在某些情况下可能是...
多年,而技术周期时间现在感觉有时是几周。我不知道。就像,你知道,在这个节目中,似乎我们只是不断地向上走,走上曲线。因此,是的,我认为他们在这里强调的事情之一是,这个问题,这个风险,以及人们试图撰写像
这样的决定性AI法案。顺便说一下,我们在加拿大看到了这一点。我们在伟大的北方墙上有C-27法案和人工智能与数据法案,这引起了很大的争议,但它试图触及一个非常重要的问题。无论如何,显然欧盟有自己的事情,我们今天稍后会讨论。
看到人们努力应对拥有越来越多的人类级系统意味着什么,真是有趣。我是说,GPT适用于任何事情。我们刚刚谈到过。像这样的能力几乎可以在一夜之间转变,政策制定者该如何应对?是的,我喜欢关于政策的事情,他们确实有一个表格,显示了
与AI相关的法案在美国提出的数量。它显示出在过去五、六年中,数量稳步增加,从基本上为零到接近100。
至少在2022年,九个法案通过了。因此,开始有一些法律出台。我们还看到了一些地方政策的制定,西雅图和一些州,我认为华盛顿州,正在通过不同的法律来处理面部识别。因此,是的,这是一种情况,至少政府正在努力跟上。但-
是的,确实一切都在快速发展。他们还有一个关于与AI相关的法律案件的有趣表格,谈到政策,情况也是如此。在2022年,数量超过100。五年前可能只有10个。因此...
也许一般趋势并不令人惊讶,但如果你确实阅读了这些380页中的一些内容,你可以获得很多有趣的细节。但这是一个非常易读的报告。你不需要成为任何类型的学术人员。如果你感兴趣,很容易就能了解。因此,像往常一样,我们将在描述和last week in .ai中提供所有这些内容的链接。所以请随时跟进,深入了解。
进入快讯。首先,我们有机器人使用腿作为手臂攀爬和按按钮。这是来自CMU的一篇新论文,名为“腿作为操纵器:推动四足灵活性超越运动”。这有点有趣。因此,四足机器人是这些小狗一样的机器人。你可能见过波士顿动力的那个。
他们基本上展示了,除了走路之外,你可以让它们进行操作,也就是说,它们可以按按钮或移动球,或者在需要时攀爬。还有一些关于我们使用的强化学习的更多细节。但我想要点是,如果你想将这些机器人集成到人类环境中,
如果它们可以按按钮打开门,那将大大提高它们的灵活性。是的,我认为这是机器人历史上的一个有趣时刻,历史上,我们之前谈论过专家系统和人类设计的、精心编码的硬规则,这些规则正逐渐被完全端到端训练的系统所取代。
似乎在这项特定研究中,我们处于一个中间地带,我们不会为每一件小事编码。我们将使用AI训练,比如机器学习训练。
我们将把训练分成小的子任务。因此,在这里,他们提到有独立的操作和运动政策,这些政策是相对独立的。因此,不是一个端到端的过程,而是像人类识别的子问题。在某种程度上,这有点像将一些硬规则编码到系统中,一些先验。显然,这种方式最终的发展,我不应该说显然,
我的猜测是,你知道,最终,正如我们在其他领域所见,这种情况会越来越多地被一个巨大的单一网络所吸收。但我们似乎也在朝着这个方向逐渐前进,机器人技术也是如此。看到这一点真是太酷了。是的,我认为我们不断回到,比如,上周或两周前。所有这些都是如此相互关联。是的,我们讨论了这个更一般的问题。
几周前的机器人论文,基本上就是在这个前沿,你有这个通用推理的东西,然后它使用一个可以进行通用操作的变换器。因此,随着时间的推移,我可以看到这些移动操纵器,不仅可以移动东西,还可以移动。最终,它们将成为这个集成系统的一部分,具有某种层次结构,
而不是那么具体。但这是朝着那个方向迈出的一步。
接下来,我们有即时视频可能代表AI技术的下一个飞跃,来自《纽约时报》。因此,这基本上是关于从文本生成AI的趋势的概述。我们上周讨论了新的跑道模型,我想,里面包含了一些更多的例子。是的,只是谈论一种趋势,以及
我们最有可能在几年内看到高分辨率的视频,这些视频是AI生成的,并且很难被识别。
是的,我想,再次,这是我们之前讨论过的一个故事,但那个在Facebook上走红的教皇弗朗西斯的照片,显示出图像生成已经跨越了那个门槛,现在人们实际上会将其视为真实。因此,作为一个群体,我们将不得不开始对这些东西发展抗体。你知道,图像生成已经达到了那个点。视频生成...
似乎也在到来,这正是文章中引用的一位麻省理工学院教授所说的。随着视频生成的出现,我们将看到我们信任这些系统的能力将开始迅速变化。无论如何,这是另一个有趣的数据点。首先,他们来找文本,然后他们来找图像,现在他们来找视频。是的,我想我会说的是
视频可能出乎意料地没有你想象的那么好,基于图像。实际上,它非常奇怪。你可以去文章中看看几秒钟。这些看起来有点奇怪。因此,至少目前来看,这还很遥远。
然后我们在我们的文章中,有AI能否预测你在下次选举中的投票方式?这非常有趣。BYU的一组学生测试了如果你告诉GPT-3回答一个调查,给定某种种族、年龄、意识形态的人口统计数据,他们测试了这些...
我们的第117集,回顾和讨论上周的大型AI新闻! 查看Jeremie的新书《量子物理让我这样做》 阅读我们的文本通讯,网址为 https://lastweekin.ai/ 本周故事: 应用与商业谷歌调整助理领导层,专注于Bard AI Alphabet的谷歌和DeepMind暂停恩怨,联手追赶OpenAI
由于人类金属工人难以找到,机器人铁匠开始崭露头角
微软的Bing聊天机器人正在投放广告 是的,当然,YC的冬季班充满了AI公司 谷歌与AI初创公司Replit合作,挑战微软的GitHub 销售员的诞生:OpenAI脱下实验室外套,寻求大交易
介绍BloombergGPT,彭博社的500亿参数大型语言模型,专为金融领域从零开始构建 斯坦福大学关于AI现状的386页报告的要点 快讯 机器人用腿作为手臂攀爬和按按钮
ChatGPT国王并不担心,但他知道你可能会担心 欧盟的人工智能法案解读 快讯 一家小公司如何在几乎没有规则的情况下使假图像成为主流 FTC正在审查人工智能领域的竞争 Midjourney因“极端”滥用而结束其AI图像生成器的免费试用
中国创作者使用Midjourney的AI生成复古城市“摄影” 哈利·波特角色的人工智能深度伪造视频在假Balenciaga时装秀中走红
<raw_text>0 呃,人工角色会对某些调查和选举作出回应,而与人类相比。他们发现,即使没有直接告诉模型,只需告诉它,呃,表现得像这个人口统计,模型的反应与一个人群的反应相当吻合。呃,这确实是一个有趣的结果。
是的,从恶意使用的角度和选举干预类型的应用来看,你可以真正看到这可能是有用的。让人们模仿某个特定人口统计的风格,让算法找出应该针对谁。因此,这就像乔治城安全与新兴技术中心几周前在他们的大报告中标记的那样,关于这一切的去向。
但从更哲学的角度来看,我发现这种事情很吸引人。如果一个AI算法可以获取关于你的原始人口统计数据,比如你的肤色、性别、你的净资产等等。如果它能够成功预测你的政治观点,你是否应该问自己一些问题,关于你对世界的推理程度?
我的意思是,好吧,公平地说,你可以制作一个相当准确的决策树。这也是事实。是的,然而,准确度的问题是存在的,对吧?当我们开始聚焦于,比如说,如果一个决策树可以预测我的行为,我的投票行为到...
我不知道,20%以内,我可能会觉得,是的,你知道,好吧。但当你开始逐渐降低那个百分比,并高精度地聚焦于一个人的观点时,我并不是说我们已经到了那个地步。我只是认为这是一个有趣的问题。这对人类意味着什么?我们已经看到这种情况在语言中发生,对吧?人们常常将其框架化为:我们并不是在学习AI是超级有能力的。我们在学习的是,人类的能力比我们想象的要低。
我认为这是一个有趣的维度,适用于理性判断和政治哲学。从哲学的角度来看,这里有很多值得思考的内容。
是的,我同意。我认为除了你会投票给谁的问题之外,另一个让我感兴趣的是,他们调查了对一项调查的反应。一旦你进入这个领域,你就深入到这些个别问题的更多细节中。这涉及到心智理论的问题,即它能多准确地想象你思考的方式。似乎
在没有以任何直接方式进行训练的情况下,仅通过常规的GP3进行训练,它可以推断出人口统计的心智理论,这无疑至少是有趣的,甚至令人担忧。现在一切都很好,一切都同时有趣和令人担忧。这就是现实。接受这种二元性吧。是的,是的。
最后一个故事,我们有另一个GPT。我们有Hugging GPT与Chai GPT及其在Hugging Face的朋友们一起解决AI任务。这来自Hugging Face公司,
在AI领域有相当大的影响力。基本上,他们是存储模型和如何在世界各地共享模型的地方。还有论坛,还有很多其他东西,但可以说他们在AI开发者圈子中相当主流。本文介绍了如何基本上可以有某种用户查询
来处理文本或图像。然后你可以有...
像ChatGPT这样的语言模型决定如何使用包含在这个Hugging Face库中的各种AI模型来回答。因此,如果你问,比如说,计算这张图片中有多少个物体,而不是具体告诉你如何做,你可以让ChatGPT说,嗯,对于这个文本查询,做这个是有意义的。这就是你将提供回复的方式。
是的,这有点像,呃,我想这是一个系统论文,但它确实暗示了一种方式,你现在可以拥有这些非常强大的API,在这里...
你知道,你可以支持一般查询,而这个GPT将选择如何做,希望能有效,但也许不会,知道吗?是的。它出现在每个人的脑海中,对吧?我们谈到了OpenAI推出的chat GPT API。就像我必须说,它是什么?两周前?这两周前的新闻。感觉像是很久以前,但
但,是的,这个想法是你有某种模型,直接使用工具。这就是chat GPT API。因此,它可以使用Expedia来预订旅行,或者其他什么。嗯,现在我们有模型使用模型。因此,呃,你知道,再次,这个问题是,呃,
这种架构的演变将会是什么样子?我们会继续拥有使用其他模型的模型吗?这更像是专家系统的感觉,人类至少在某种程度上在硬编码连接器,连接不同的系统,或者最终这一切都变成一个大模型?这个主题的变体将会持续多久?我认为从这个角度来看,看到这一点是非常有趣的,作为这个轴上的另一个点。而且我们显然不知道这个轴会通向哪里,但-
是的,绝对如此。所以,是的,这是一篇相当酷的论文,我绝对可以看到这一点,呃,成为Hugging Face提供的东西。我是说,Hugging Face也是一家
实际上需要赚钱的公司。因此,这是一篇论文,但我可以看到这在某个时候成为一个API。是的,还有,当你查看这样的系统的安全属性时,也有一些有趣的设计问题。比如说,如果因为chat GPT模型将你的查询路由到一个模型,然后搞砸了响应,那该怪谁?
chat GPT模型是否应该将其发送到另一个子模型?子模型本身是否应该正确处理这个问题?是子模型的错吗?无论如何,在这些系统中追踪故障也变得更加棘手。从政策的角度来看,这两个模型的开发者可能是不同的。也许他们认为自己有不同的责任。无论如何,这都是一个美丽而复杂的事情,但这也是另一个有趣的脚注。
是的。那么,让我们转向政策和社会影响。我们的第一个故事是Chat GPT国王并不担心,但他知道你可能会担心。Chat GPT国王是OpenAI首席执行官的一个相当有趣的称号。我本可以直接说OpenAI的首席执行官,但好吧。是的,这是一篇概述文章,以某种方式总结了
Sam Altman的观点和历史,我想你可以称之为Chat GPT的国王。如果你不知道OpenAI和Sam Altman的历史和背景,这确实提供了很多有趣的细节。
你发现有什么新鲜的东西让你觉得有趣吗,Jeremy?几个小细节。但是的,我在我们的...所以我们有一个共享的笔记文档。Andre巧妙地将其整理在一起。然后我添加一些笔记和东西。但我注意到的第一件事是,这是一篇我很久以来读过的最纽约时报风格的文章。你知道,那些以...
在一个雨天,在一条泥土路的中间,马车驶向...你知道,这些与故事没有任何关系。似乎有太多的...这篇文章充满了这种感觉。我在这里是为了它。作为阅读,它很有趣。有几个有趣的小细节。所以...
其中之一是二元性,Sam A.必须在他的脑海中保持所有这些看似表面上矛盾的事情。例如,他必须相信,正如他所做的那样,AI将改变世界,同时,这种变化可能是极其积极的。同时,他也相信这可能会结束世界。
同时,他还认为当前的AI可能被过度炒作。因此,所有这些事情很有趣,看到人们在Twitter上对此的反应,因为他们试图弄清楚,不,但你到底相信什么?而这里的主线似乎是Sam A相信所有这些事情。它们并不是内在矛盾的。
但使这些想法兼容的事情涉及很多细微差别,这些细微差别并不一定能在240个字符中传达出来。因此,我认为这是一个有用的深入探讨,原因在于所有这些因素。在揭示方面,这篇文章揭示的一件事是,Sam Altman实际上在OpenAI没有持有任何股份。因此,至少对于来自初创公司世界的人来说,这真的很令人惊讶。
这反映了他对使命的承诺,可以说,并不是对公司的承诺,但它确实描绘了一个复杂的故事。然后他们谈到了他的童年,以及作为一个在圣路易斯长大的同性恋者所面临的挑战。这非常有趣。他在那里的挑战。
然后有Paul Graham的这句话。正如我们之前提到的,Paul Graham是Y Combinator的创始人,他将Y Combinator传给了Sam Altman。一位记者问他,为什么你认为Sam A在没有股份的情况下工作?你认为他为什么这样做?PG的回答是,为什么他要在一个不会让他变得更富有的事情上工作?
一个答案是,很多人一旦有了足够的钱就会这样做,Sam可能就是这样。另一个是他喜欢权力。因此,无论如何,这种感觉让文章悬而未决,保持了一种持续的模糊感,这正是整篇文章的主题。这是一组明显的矛盾和复杂的想法。我觉得作为阅读这很有趣。
是的,是的。这篇文章有点长,浏览一下,我认为,如果你认为OpenAI高中,那么你可能会发现Sam Altman至少有趣。他确实倾向于让我觉得很有趣,听他表达自己的想法。正如你所说,这篇文章是...
让我看看,文学。让我读一段。后来,当Altman先生喝着甜酒代替甜点时,他将自己的公司与曼哈顿计划进行了比较。就像他在谈论明天的天气预报一样,他说美国在第二次世界大战期间建造原子弹的努力是一个与OpenAI规模相当的项目,是我们所追求的雄心水平。这就像是三年前的回忆,正好设定了一篇文章。所以...
结论是,Sam A.跳过了甜点。这是我从中得到的。无论如何,是的,这是一篇很酷的文章。我认为,如果你真的对聊天感兴趣,了解更多关于OpenAI的事情可能是有意义的。接下来,我们有欧盟的人工智能法案解读,这是对欧盟努力的解释
创建这个非常雄心勃勃的AI法案,以在整个欧盟进行相当全面的监管,实际上超出了任何人迄今为止所做的事情。我是说,这可能与加拿大相当,但也就仅此而已。因此,
是的,如果你不知道这个法案,我们将稍微总结一下。它相当复杂,正如你可能预期的那样,这篇文章确实深入探讨了这一点。但从我们可以提供的高层次来看,该法案的想法是,你将对AI的不同应用进行分类,
将其分类为具有某种风险,没有风险,一些风险,高风险,然后根据风险的不同,有不同的要求,比如你需要进行多少审计,透明度等,显然还有很多关于这将如何运作的问题,这已经在筹备多年,因此这是一个相当的,
雄心勃勃的努力,似乎正在进行中。
是的,我认为他们在这样做时面临的一个挑战就是定义他们的术语。你知道,这是一个法案,这是立法,对吧?欧盟将以某种形式通过它。一旦你定义了那项立法,那么问题是,好吧,你已经说过,例如,正如他们所写的,AI启用的视频游戏和垃圾邮件过滤器是最低风险的AI应用,因此没有限制。
但你所说的AI启用的视频游戏是什么意思?如果我有一个具有X、Y和Z属性的视频游戏,使其实际上有点危险,那么实际会发生什么,我怀疑,他们将不得不将这些术语的实际实施和定义推给监管机构,实际的机构将不得不弄清楚他们在实践中该如何处理这些。
我不清楚原始的氛围在那个层面上会持续多久。这将是一个非常棘手的翻译任务,但对每个人来说都很重要,因为部分原因是,来自欧洲的监管和立法被认为会渗透到更广泛的世界中。有一种叫做布鲁塞尔效应的现象,
即欧盟通过一项法案,基本上因为每个人都在欧洲做生意,Facebook例如,不能忽视欧洲的监管。突然之间,所有人的活动合规的底线都必须提高。无论如何,你都忙于遵守欧洲的GDPR法规或其他什么。你必须在欧洲做到这一点,所以你必须做到这一点。你需要那个基础设施。
因此,从这个意义上说,欧洲确实往往对政策产生过大的影响。跟踪那里发生的事情非常重要,因为它也会激励其他地方的政策反应。因此,不要把这看作只是发生在欧洲的事情。这将影响我们在线的体验,可能很快。这只是一个开放的问题,如何影响。
是的,关于定义事物,我认为人们可能会立即对监管提出的一个简单批评是:这是技术,政治家不懂技术,对吧?但
我过去没有深入研究这个问题。事实上,我认为Yann LeCun在Twitter上曾经提出过这个观点,如何定义AI?如何衡量风险?你如何做这个或那个?你如何分类?我们实际上刚刚发表了一些关于这个的论文,意大利的学者们,我想,写了一整篇关于你如何能够量化什么是AI系统,如何尝试
你知道,给风险水平赋值等等。因此,对我来说,这是一个好迹象,表明这正在被非常仔细地考虑,并且已经进行了多年。是的,我认为我有点乐观,因为我们,即使不考虑chat GPT和所有这些不同的事情,你知道,我们有自动驾驶汽车,我们有面部识别。它们已经是社会的一部分,并产生广泛的影响。
而VUS尚未对此进行太多监管。因此,希望这能为其他人提供一个模板。好吧,针对你的观点,我认为有一个有趣的问题,人们常常会说,糟糕的监管远比没有监管要糟糕得多。在某些情况下,这当然是正确的。这确实取决于监管的糟糕程度。
但在这种情况下,我认为我们有正在改变世界的系统。我们知道,默认情况下,如果对这些事情没有监管关注,事情可能会走得相当糟糕。然后我们就开始玩猜测世界如何被搞砸的游戏。但如果你相信你宁愿不去探索这种可能性,那么某种形式的监管是必需的。如果你只把这看作是一个测试...
那么至少我们正在进行这个测试。因此,也许从更乐观的监管角度结束这个评论。
是的,是的,确实如此。我认为,再次,考虑到我们已经做了大约三年的播客,值得指出的是,这不仅仅关乎未来,还关乎现在。这就是为什么这项工作已经进行了多年。就像AI已经在这里。如果你没有跟上它,主要是通过chat GPT感兴趣,它可能看起来只是刚刚开始到来,但它...
无处不在,以各种方式。因此,我们至少需要一些监管。我想我们可以对此达成一致。接下来是快讯,谈到监管和政策,华盛顿邮报有一篇文章,标题为《一家小公司如何在几乎没有规则的情况下使假图像成为主流》。
所以这里的公司是Stability AI,Midjourney的制造商,这是类似于DALI的主要文本到图像服务之一。实际上,只有几家其他公司在领先。这是对公司的一个总结,特别是它在政策或管理方面的立场。这里有一些有趣的细节,例如--
你可以生成拜登总统或普京或其他人的图像,但不能生成中国总统习近平的图像。公司的创始人兼首席执行官David Holes在去年在Discord上谈到这一点时表示,他们只是想尽量减少戏剧性,而在中国,政治讽刺是相当不合适的。因此,他们希望能够...
让中国的人们使用这项技术,而不是他们生成讽刺的能力。因此,贯穿其中的整体氛围是,他们似乎并没有很好地处理,呃,管理和一般政策。
是的,现在他们实际上最近已经停止了免费层,因为滥用情况太严重,而且你已经看到这些病毒式传播的事情,比如说,特朗普总统被逮捕,我们上周讨论过的事情,某些人我相信对此非常愤怒。因此,是的,这是一篇有趣的文章,深入探讨了许多政策问题,以及,
是的。
而且,历史上,随着公司缓慢增长,通常会发生的事情是,像从种子到IPO大约需要七年左右,粗略估计加减20年。但无论如何,历史上,这将是情况。因此,创始人会逐渐获得越来越多的经验,逐步攀升那条阶梯。你知道,他们会受到打击,变得谦逊,他们会学习,哦,实际上,世界比我想象的要复杂得多。
而现在我们看到这些公司令人震惊的增长曲线,你有几个家伙,比如Stability AI或MidJury或其他这些公司,突然从无到有,进入了所有事情的头条。他们没有经过打磨,或者没有时间去深入思考这些事情的哲学含义。
而从那里看起来的视角与刚开始公司时的视角截然不同。如果你在六个月前开始了公司,而现在你已经在那里了,那就意味着你必须快速适应。因此,我认为这是一个有趣的几乎是公司文化的故事,我们看到那些并不一定适合这个位置的人必须去执行。
是的。这篇文章还突出了这一点,这家公司非常小。它只有10个人。首席执行官和一个八人的工程团队以及一些顾问,但它产生了巨大的影响,而做出允许和不允许等决策的人员非常少。
这与OpenAI的情况截然不同,后者已经在这方面工作多年。回到GPT-2时,他们在潜在的误用方面非常谨慎。因此,是的,看到这项AI到图像技术的幕后情况,以及它是如何被塑造的,可能有点随意。接下来,我们有FTC正在审查AI领域的竞争。因此,FTC,
这是来自FTC主席Lina Khan的消息,基本上表示他们正在密切关注AI的发展,并且据说正在努力确保该领域不被主要科技平台主导。但我不知道,Jeremy,我觉得...任务完成。是的,我很确定这将是一个寡头垄断的情况。
是的,我完全同意。我认为,这并不是说,竞争在解决某些问题方面非常出色。这些问题通常看起来像是可及性的问题。因此,你知道,你想要商品的大规模生产,你想要商品的商品化和便宜,竞争会降低该商品的成本,使其成为每个人都可以生产的商品。
我们确实看到这一点发生在AI领域,随着越来越多的竞争者进入这个领域。我们看到OpenAI不得不降低价格等等。因此,好吧,提升可及性是好的。但如果你从风险的角度来看,特别是如果你从灾难性风险的角度来看,知道,灾难性事故、对齐失败等。
在我们不知道如何控制这些系统的情况下,拥有20家公司都有能力构建人类级AI并不一定是世界上最好的事情,而世界顶级实验室的安全研究人员,特别是安全研究人员,达成共识,认为,是的,
因此,这些东西摧毁世界的可能性可能不低于10%。因此,当你看到这一点时,我绝对欣赏FTC在这里的观点。他们必须关注消费者。这意味着关注价格。这是他们的文化让他们关心的事情。
但还有另一个维度可能不在他们的授权范围内。我认为,许多不同的部门将会在这一点上相互交叉,因为FTC所说的是:是的,我们希望竞争。我们希望AI的普及和民主化。对吧。
但如果你看看以安全为导向的组织,他们会说,不,不,我们需要防止普及。这里存在恶意使用和事故风险。因此,我认为这将是一个有趣的事情,看看政府最终会如何解决这个问题,如果他们有时间解决,考虑到曲线移动的速度。但,是的,FTC的举动确实是棋盘上的一个有趣棋子。
是的,实际上,下一篇文章非常相关。来自MIT科技评论。标题是《AI聊天机器人是如何成为安全灾难的三种方式》。所以这更像是一篇概述文章。它说,例如,它解释了越狱,你可能知道也可能不知道。基本上,就是让这些聊天机器人做他们不应该做的事情。
通过一些聪明的输入。但还有一些我不太了解的其他内容。因此,有协助、诈骗和网络钓鱼。网络钓鱼是伪装成其他人。使用AI,这变得容易得多。我们已经触及到这一点。还有一种叫做隐藏提示注入的东西。因此,你可以在网站上放置一些东西,以干扰AI并使其做你不想让它做的事情。这已经被证明。
还有另一种被称为数据中毒的东西,你可以在网上放置一些东西,让公司抓取并放入他们的训练集中。也许这使得模型以某种方式给你钱。因此,是的,我发现这是一个相当有趣的概述,实际上教会了我一些我自己不知道的事情。
是的,看到这一点,尤其是数据中毒的情况,真的很酷。有一篇论文,我想这就是这里链接的那篇。是的,是的,是的。这是一篇由谷歌、NVIDIA和Robust Intelligence的研究人员撰写的论文,反正他们在这个领域做了一些工作。是的,花60美元,他们能够
让我们说,购买域名并用他们选择的图像填充它们,这些图像随后被抓取到大型数据集中。他们能够编辑并添加句子到维基百科条目,这些条目最终进入了AI模型的数据集中。因此,仅仅花费
来搞砸这些模型的成本正在下降,而模型的能力正在增加,并且它们正在普及。看到我们在构建的这个勇敢新世界中,我们将最终依赖于什么,真的是非常有趣,你知道,那里有...
我认为有一个这样的迷因,有人展示了一个超级复杂且有价值的东西,比如,我不知道,ChatGPT在顶部。然后它由一个复杂的结构支撑。在某个时刻,有一个极薄的支撑,可能是几行开源软件之类的。好吧,在这种情况下,情况有点类似。如果你让那个极薄的支撑变成数据完整性或在线抓取可靠数据,
人们就可以干扰你的模型。是的,我真的很想知道这会导致什么。我是说,有太多的可能性,几乎难以想象。- 绝对如此。对我来说,我认为我们已经讨论过,但我真的很想知道今年网络钓鱼在多大程度上受到AI的影响,尤其是随着生成语音的出现,但也包括文本,
你可以很容易地看到,外国诈骗者通过电子邮件和电话进行所有这些网络钓鱼,突然之间可能会变得更好。如果发生这种情况,很多人将会失去金钱。在那时,政治家必须关心,对吧?因为人们受到了伤害。因此,也许VAT将至少在某种程度上推动一些政策变化,而不是更晚。
最后,我们有另一篇概述故事,也来自科技评论。ChatGPT将改变教育,而不是摧毁它。因此,这是一个...
我们的第117集,回顾和讨论上周的重要AI新闻! 查看Jeremie的新书《量子物理让我这样做》 阅读我们的文本通讯,网址为 https://lastweekin.ai/ 本周故事: 应用与商业谷歌调整助理领导层,专注于Bard AI Alphabet的谷歌和DeepMind暂停恩怨,联手追赶OpenAI
由于人类金属工人难以找到,机器人铁匠开始崭露头角
微软的必应聊天机器人正在投放广告 是的,当然,YC的冬季班充满了AI公司 谷歌与AI初创公司Replit合作,挑战微软的GitHub 销售员的诞生:OpenAI脱下实验室外套,寻求大交易
介绍BloombergGPT,彭博社专为金融领域从零开始构建的500亿参数大型语言模型 斯坦福大学关于AI现状的386页报告的要点 快讯 机器人用腿作为手臂攀爬和按按钮
ChatGPT国王并不担心,但他知道你可能会担心 欧盟的人工智能法案解读 快讯 一家小公司如何在几乎没有规则的情况下使假图像成为主流 FTC正在审查人工智能领域的竞争 Midjourney因“极端”滥用而结束其AI图像生成器的免费试用
中国创作者使用Midjourney的AI生成复古城市“摄影” 哈利·波特角色的人工智能深度伪造视频在假Balenciaga时装秀中走红
<raw_text>0 来自不同教育工作者的对话总结,关于教育工作者对ChatGPT如何成为教育一部分并真正塑造教育的不断演变的观点。因此,它涉及到一些事情,比如一些科技公司,如Duolingo和Quizlet,以及你提到的Khan Academy,
现在正在将ChatGPT整合到他们的教育中。就我个人而言,我觉得这真的很令人兴奋。对于Duolingo,你可以与聊天机器人练习对话。
OpenAI也与教育工作者合作,编制了一份关于如何在学校使用ChatGPT的事实表,并创建了一个工具来识别由聊天机器人撰写的文本。因此,这里有很多小细节,我认为现在我们开始稍微冷静下来,不再对AI撰写的论文感到恐慌,也不再认为教育的终结或其他什么。这只是教育需要改变和发展。
是的,是的。我认为,通常情况下,尘埃在这种事情上会花费数年才能平息,你知道,当计算器被发明时,我们必须进行一些探索并找出答案。奇怪的是,我认为我们没有时间去探索和发现。我认为我们将不得不在探索中发现。
随着更多系统上线,对吧?就像四个月前的ChatGPT。一个月前是GPT-4。接下来会有什么?我认为新的常态可能是在一个元层面上,看看有哪些新工具出现,我们如何能更高效地使用这些工具来更好地自我教育?我不知道,但这就是我想象这些事情归结为的一种角度,因为你没有选择...
即使是考虑课程计划。你是一名教师。你想教授相关的工具。我记得以前,我们都会被拖着。六年级的班级进入计算机实验室,我们在谈论,如何使用Microsoft Word。好吧,这在根本上必须改变。
你不能有一个教师提前六个月甚至三个月准备的课程计划,并希望它能继续相关。这需要更多的灵活性和其他各种东西,如果做得好,可能会带来巨大的好处。对吧?
对吧?就像你提到的Duolingo的例子,想象一下那种持续的情况。这可能真的是惊人的个性化教育等等,但我们必须为此做好准备,并以正确的哲学和态度来对待它。绝对如此。我发现这里有一个有趣的细节,他们提到有一项针对1000名K-12教师和1000名学生的美国调查。
结果发现,超过一半的教师使用过ChatGPT,其中10%的人每天使用。而实际上,只有三分之一的12到17岁的学生使用ChatGPT。所以我认为,值得注意的另一点是,除了学生方面,我认为这将是