
OpenAI's Codex Update, Boston Dynamics Parkour, Twitter AI Bias

2021/8/20

Last Week in AI

People
Andrey Kurenkov
Daniel Bashir
Sharon Zhou
Topics
Dr. Sharon Zhou: This episode discusses OpenAI's newly released Codex model, which can translate natural language into code and can edit and build code according to user instructions. The new version of Codex generates code from natural-language instructions rather than just completing code, which has important implications for a no-code programming future; however, Codex's copyright issues remain a difficult problem to resolve. Dr. Zhou also discusses Plainsight's use of AI to revolutionize livestock management and the feasibility of tracking livestock information with blockchain technology. She believes computer vision and multi-sensor machine learning have broad prospects in livestock management, and that using NFTs to track livestock information and provide digital proof of ownership is reasonable in certain scenarios. She also covers the competition between DeepMind's AlphaFold2 and the University of Washington's RoseTTAFold team in protein-folding AI, as well as the application of AI to dementia diagnosis; she argues that competition has advanced and popularized protein-folding AI, and that AI able to diagnose dementia within a day is very important for early intervention. She further discusses the bias in Twitter's AI photo-cropping algorithm and the upcoming U.S. investigation of Tesla Autopilot crashes with emergency vehicles, concluding that Twitter's bias bug bounty was a success, that researchers used generated faces to reveal algorithmic bias, and that Tesla's Autopilot has safety risks that need to be addressed. Finally, she covers AI artwork that showcases AI's poetic misunderstandings, and Boston Dynamics' robot parkour video, noting that ordinary people can create art with existing AI tools and that the parkour video demonstrates progress in motion control and balance.

Andrey Kurenkov: This episode discusses OpenAI's newly released Codex model, which can translate natural language into code and can edit and build code according to user instructions. The new version of Codex generates code from natural-language instructions rather than just completing code, which has important implications for a no-code programming future. Codex can also integrate with applications such as Microsoft Word to automate tasks through code, which could greatly improve productivity; however, Codex's copyright issues remain a difficult problem to resolve. Andrey also discusses Plainsight's use of AI to revolutionize livestock management and the feasibility of tracking livestock information with blockchain technology. He believes using NFTs to track livestock information and provide digital proof of ownership is reasonable in certain scenarios, and that the AI Plainsight uses is relatively mature, with the novelty mainly in applying existing techniques to a new setting. He also covers the competition between DeepMind's AlphaFold2 and the University of Washington's RoseTTAFold team in protein-folding AI, as well as the application of AI to dementia diagnosis; he argues that competition has advanced and popularized protein-folding AI, and that the release of RoseTTAFold may have pushed DeepMind to open up the AlphaFold2 code sooner. He further discusses the bias in Twitter's AI photo-cropping algorithm and the upcoming U.S. investigation of Tesla Autopilot crashes with emergency vehicles, concluding that Twitter's bias bug bounty was a success, that researchers used generated faces to reveal algorithmic bias, and that Tesla's Autopilot has safety risks that need to be addressed. Finally, he covers AI artwork that showcases AI's poetic misunderstandings, and Boston Dynamics' robot parkour video, noting that the video demonstrates progress in motion control and balance.

Daniel Bashir: This episode also covers Samsung's use of AI to automate its chip design process, as well as SenseTime's plans for a Hong Kong IPO. He notes that most current AI-based chip design tools use reinforcement learning, and that the widespread use of large foundation AI models such as GPT-3 could lead to model homogenization and bias problems.


Chapters
OpenAI's Codex can translate natural language into code, enabling non-coders to generate code through simple English commands. This technology builds on prior code completion tools and introduces new capabilities like generating code for specific tasks in applications like Microsoft Word.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear AI researchers chat about what's going on with AI. This is our latest Last Week in AI episode, in which you get summaries and discussion of some of last week's most interesting AI news. I'm Dr. Sharon Zhou. And I am Andrey Kurenkov. And this week we'll discuss the new model from OpenAI, the application of AI to cattle management,

some new research, some ethics, especially with respect to Twitter's AI bias contest, and of course, the new video from Boston Dynamics. All right, let's dive straight into our first article on applications: OpenAI can translate English into code with its new machine learning software, Codex, and that's from The Verge. There's also a cool demo of this on YouTube.

And what is this? Well, OpenAI basically created an improved version of OpenAI Codex, which is an

AI system that takes natural language, any kind of natural language you type in, and converts it into code. Codex does really well with just simple commands in natural language; it can execute them and write code automatically, basically. And it can also build on things: as you give it different natural language commands, it builds on what's there, understands the context, and edits exactly what you want it to build.

Yeah, so this is following up on some prior news, where GitHub's Copilot was developed with this Codex technology. And this is building on top of that with a new version of Codex. What's new here is that before, with GitHub, it was mostly intelligent code completion. So it could fill in some code from comments, or fill out a function call

to continue what you started in a given line of code. But here, what's really interesting is that instead of completing code, or really working primarily with code, you can give it natural language commands like, you know, make the game character move up and down.

And then it writes that code for you. So that's quite different from what they'd shown before, which was primarily giving code in and having code spit out, whereas here you give English descriptions and then get code out of them.

Right. And in the prior one, you could also give English, but it had to be in, you know, almost a nice code comment, basically. A nice docstring, so to speak. And this is very exciting for the no-code future where, you know, if you don't know how to code at all and you can't write a docstring, you don't even know what that is, you can still interface with this API and be able to generate code.
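
To make that concrete, here is a minimal sketch of what calling the Codex beta might look like from Python. This assumes the beta follows the same shape as OpenAI's public GPT-3 completion API; the engine name and prompt format here are assumptions, not confirmed details of the product.

```python
# A minimal sketch of calling a Codex-style completion endpoint via
# OpenAI's Python client. Engine name and prompt format are assumed.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci-codex",  # assumed beta engine name
    prompt='"""\nMake the game character move up and down with the arrow keys.\n"""',
    max_tokens=150,
    temperature=0,           # deterministic output is preferable for code
)
print(response.choices[0].text)  # the generated code
```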

Yeah. And as you said, they did show before that you can have documentation as you would usually have in code and get a function out of it. But these demonstrations are quite different from what we've seen before. So you can say something like, create a web page with a menu on the side and a title at the top, and then get that as code, which is quite different from, like, a comment for a function. And personally, I was really impressed

and excited about this direction. So they showed some game development. But what I was really excited about is that they also showed this tying into Microsoft Word, where you could...

For instance, tell it to capitalize every other word or insert lines between particular statements, basically any sort of description of what you want to happen. And then it writes the code necessary to execute that via Word's API.
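
As a rough illustration, here is the kind of tiny script Codex might produce for a request like "capitalize every other word," shown on a plain string rather than through Word's scripting interface, which the actual demo used:

```python
# The kind of small script a Codex-style model might generate for
# "capitalize every other word" (plain-string version).
def capitalize_every_other_word(text: str) -> str:
    words = text.split()
    # Upper-case words at even indices (0, 2, 4, ...), leave the rest alone.
    return " ".join(w.upper() if i % 2 == 0 else w for i, w in enumerate(words))

print(capitalize_every_other_word("the quick brown fox jumps over the lazy dog"))
# -> "THE quick BROWN fox JUMPS over THE lazy DOG"
```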

So in a sense, now you can generate programs to do different tasks in different applications like Word. And that really speeds things up, not just coding, but potentially a huge number of things.

It also can mask out the coding entirely. You know, you don't have to see the code at all; it just executes it in Word. And I feel like that's really exciting, because then you don't have to know how to code to essentially get these very custom pieces of code that change anything in your document.

And there is still a sticky thing with this, and that is copyright. Obviously, it'd be less sticky if OpenAI were still a nonprofit. It's the fact that, you know, a lot of this AI system is built on a lot of public repositories and public code, but there are copyrights attached to them. And that gets a little bit tricky. Yeah.

Yeah, in response to questions about it, the CEO kind of just swatted it away, saying that, you know, we need this discussion, but also this is very helpful to the community. And I do feel like they'll have to address it in some respects. For instance, they may have to modify their training data set

to actually take into account different copyright instances, and potentially address it in the usage of

this model, which right now is still in beta. You can play around with it, but ultimately they want to provide it as an API, similar to GPT-3, to build products on top of. And once that happens, I think this copyright issue will be quite relevant and something they probably do need to address.

But yeah, super exciting. And you should check out the video on YouTube showing how you can give it commands to modify a website, or you can just Google it and see these demos, which are very neat.

But moving on to something maybe less revolutionary, but still interesting and a bit different from what we usually discuss, we have this article from NVIDIA titled Cattle-ist for the Future: Plainsight Revolutionizes Livestock Management with AI. So as the title says, there's this company called Plainsight that is helping the meat processing industry improve its operations.

And what they do right now is they automate what's been a very manual process that deals with just counting cows and livestock as they move between different locations. And they also have some examples of how beyond counting, it can help with health monitoring and various applications within this industry.

I feel like this is a great place for computer vision and multi-sensor machine learning. It's also interesting that they delved a bit into the blockchain.

And I definitely have a question around, you know, is it valuable to prove on the blockchain whether the cow is really healthy or owned by a certain farmer? Because they are actually tracking, monitoring the cattle on the blockchain as digital assets. Yeah.

in a secure, unique digital record, using an NFT for every single cow, every individual animal. And it definitely brings up the question of whether this is overkill or not.

I would be interested to see if, you know, a bunch of farmers were to take this seriously, or if people or restaurants wanted ownership of cows for their own consumption, maybe to track the lineage of the cow's health. And it might also be important for people to know that these cows are treated humanely and are healthy over time. It might also tie into nutrition,

climate change and monitoring that. But I don't know, that looked like an interesting part of the article as well around the blockchain.

Yeah, exactly. I did not expect that part of it, this idea of NFTs. But actually, I mean, I guess many people are skeptical of NFTs, as in, you take a GIF and now it's something you can sell. Whereas this actually seems quite reasonable, in the sense that, you know, you're getting a kind of digital proof of ownership.

And it's actually something that there's only one of, which is a particular animal. And attached to that are all these records of genomics, health, and various things. So personally, yeah, I found that interesting and something that does seem useful in this context. So we'll see if there's adoption. I don't know what is there right now in terms of records.

Also interesting. Yeah. Right. It feels like remotely adopting a cow and making sure it is being treated well because you want to drink milk from it or something like that.

This makes me think of Kobe beef where you want to verify that it's actually being massaged or something. Maybe that would happen. I don't know. But I was very skeptical at first when I saw NFTs, but thinking about it more, maybe there is something there. It'd be cool to see it go beyond art. Yeah, for sure. And also I found interesting about this article is...

The actual methods they use are not really that surprising and in some sense very kind of established. So they use segmentation models and they do object detection of passing animals. So it's very kind of straightforward application of existing techniques.

And what's interesting here is more of the actual product and the reliability where they say the accuracy is 99.5%.
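
Plainsight hasn't published its pipeline, but the core of an automated tally is conceptually simple: track detected animals across frames and count them as they cross a virtual line. Here is a toy sketch of that logic, with stubbed detections standing in for the real segmentation and detection models:

```python
# Toy line-crossing counter. Detection objects here simulate the output
# of an object detector plus tracker; the real models are not public.
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int    # stable ID assigned by an object tracker
    y_center: float  # vertical position of the animal in the frame

def count_crossings(frames, line_y: float) -> int:
    """Count tracks whose centers cross line_y between frames."""
    last_y = {}  # track_id -> previous y_center
    count = 0
    for detections in frames:
        for det in detections:
            prev = last_y.get(det.track_id)
            # Count once when a track moves from above the line to below it.
            if prev is not None and prev < line_y <= det.y_center:
                count += 1
            last_y[det.track_id] = det.y_center
    return count

# Two simulated frames: animal 1 crosses the line at y=100, animal 2 doesn't.
frames = [
    [Detection(1, 90.0), Detection(2, 40.0)],
    [Detection(1, 110.0), Detection(2, 60.0)],
]
print(count_crossings(frames, line_y=100.0))  # -> 1
```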

So I found it interesting how now there's cases where you can just take existing techniques of AI and you really just need to adapt it to a given setting and make use of it. And you don't need to develop a whole new technology. You just need to kind of recognize the need and then you can actually help improve operations and replace things like counting, which I think also...

is not something that seems like an interesting job or something that humans really need to do. So yeah, it's interesting to see this. And I feel like we will see more of this sort of thing, where it's not so much R&D as just applying existing techniques to a new context. Absolutely. And it'd be interesting to see if, you know, Codex were to come in for this. You could say, count all the cows with

three spots. I don't know. Yeah, now we have two layers of AI work with Codex. You know, you have AI programs writing AI programs. Exactly, exactly. It's going to be great. It's AI programs all the way down. And on to our research section. The first article is Without Code for DeepMind's Protein AI, This Lab Wrote Its Own.

All right. So we've already discussed this a bit, but DeepMind did present AlphaFold 2 as a protein folding AI. However, they had not released their code. And basically, RoseTTAFold came out from the University of Washington as

almost a response to that, to say, you know, we will open-source this code and make it available to the world so that scientists can start using this tool much more, without a paywall attached to it, in an effort to advance science and discover more protein structures.

And yeah. Yeah, yeah. So here's a bit more of the timeline. DeepMind gave this presentation about AlphaFold 2 and stated the results, I think around December of last year. But they didn't really say what their timeline was for releasing the code or a detailed paper. It was a little mysterious. So this team took some of the details from the presentation and extended their own research to build this RoseTTAFold model. And interestingly enough,

DeepMind did release the paper and their code in July, but this team actually did it before them, in June, a full month earlier. And they also published a paper in Nature on the same day as DeepMind, so there were two papers on this protein folding task. I think one of them was in Science. Oh, yeah, you're right. Sorry.

Yeah, and actually there was a paper released concurrently with AlphaFold 2 from the RoseTTAFold team that also described their work. So it's kind of an interesting story, and I think there's not been enough recognition of RoseTTAFold. There's a lot of coverage of AlphaFold, but not this development, which actually preceded the release of AlphaFold.

Well, it's a very, very tight race since they're releasing things at very much the same time. And my conclusion from this is that competition is good because I think this will enable this technology to become not only better and better, but also available to scientists as soon as possible.

Yeah, exactly. In fact, you could even wonder if the release of RoseTTAFold influenced the release of AlphaFold. Absolutely. In particular, yeah, June 15th was the release of their model. And then just a few days later, on June 18th,

the CEO of DeepMind tweeted that, you know, we've been heads-down working on the paper and are working to get the code open-sourced. So I think the release definitely put a bit of pressure on DeepMind to also open up their work. So that was very cool.

And on to our next research story, we have artificial intelligence may diagnose dementia in a day. So as the title implies, there is this new work on diagnosing dementia after a single brain scan, as opposed to the more traditional approach, where you might take several scans and tests.

The idea here is that this could help with earlier diagnosis, which could improve patient outcomes. And the algorithm may also be able to predict whether the condition will remain stable for many years, lead to deterioration, or need immediate treatment.

One thing to note is that this is still very early. This is pre-clinical-trial, and the model can diagnose dementia on the scans they're working with right now. But as things roll along, the trial, which is at a hospital and various memory clinics around the country, will test whether it works in an actual clinical setting. And in the first year,

approximately 500 patients are expected to participate. So that will be very, very exciting. I think this is just really important for early intervention, and to motivate people to

actually take on preventative health measures, if there are any that people can do. Preventative medicine is kind of unfortunate, in that people only care when, you know, things are going wrong, not when things are seemingly right.

But being able to say, hey, you're not on the right track, is actually a very, very important little indicator we could get, for not just dementia but everything. You know, if you were told, OK, you're actually not exercising enough, you would

probably, maybe, actually exercise a bit more. Yeah. And it's also nice to see that there is this clinical trial going on. We've seen before how there were a lot of AI models for diagnosing COVID, for instance, and then

you know, a lot of them turned out to be flawed. And we've seen this a lot in medicine. So hopefully they take that into account and really do a careful study to evaluate this approach before rolling it out, which appears to be the case.

And on to our ethics and societal impact articles. Our first article is Twitter AI bias contest shows beauty filters hoodwink the algorithm, and that's from CNET. So basically, a researcher at Switzerland's EPFL technical university won this $3,500 prize from Twitter,

finding that the key Twitter algorithm used to crop photos actually does favor faces that are slimmer and younger, with skin that is lighter-colored or has warmer tones. And first of all, I think we already had signs that this was the case.

But what's really cool is that they used generated faces to show that this was true, and they used interpolation between generated faces, so they could take one fake face

and interpolate it to make, for example, just the skin tone lighter. So it's the same person, but with a lighter skin tone. Then they could run it through the Twitter algorithm and see whether the saliency score, quote unquote, essentially how strongly the model favors that face, goes up or down. Yeah.
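
The core loop of that approach looks roughly like the sketch below: interpolate between two generated faces in latent space and watch how the cropping model's saliency score moves. The generator and scorer here are numpy stand-ins and the attribute labels are illustrative; the real entry, as we understand it, used StyleGAN2-generated faces with Twitter's released cropping model.

```python
# Sketch of latent-space interpolation plus saliency scoring.
# generate_face() and saliency_score() are stubs, not the real models.
import numpy as np

def generate_face(z: np.ndarray) -> np.ndarray:
    """Stub for a GAN generator: latent vector -> image array."""
    return np.tanh(z).reshape(1, -1)  # placeholder "image"

def saliency_score(image: np.ndarray) -> float:
    """Stub for the cropping model's max saliency over the image."""
    return float(image.mean())  # placeholder score

z_a = np.random.default_rng(0).normal(size=512)  # e.g. one end of an attribute
z_b = np.random.default_rng(1).normal(size=512)  # e.g. the other end

for alpha in np.linspace(0.0, 1.0, 5):
    z = (1 - alpha) * z_a + alpha * z_b  # same "person", one attribute shifting
    score = saliency_score(generate_face(z))
    print(f"alpha={alpha:.2f}  saliency={score:.4f}")
# A consistent trend in saliency along the interpolation is evidence
# the cropper favors one end of the attribute.
```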

Yeah, and there's a prize here because this is following up on something we discussed last week, which is Twitter's bias bug bounty. So this is actually a contest they arranged, and here the results are being announced, whereas last week we just discussed that this was happening.

And yeah, I think this is very cool. It really demonstrates that this idea of opening up to different researchers has a lot of promise, in this case to explore their cropping algorithm. So I believe they gave access to the code and some of these metrics, like how well the algorithm rates a given portion of the image, so researchers could see the saliency and really evaluate it.

And yeah, this is a pretty interesting approach that this researcher took and something that teams internally at Twitter might not have thought of. So obviously, it's good that this is revealed and presumably, hopefully, Twitter's teams will take this into account and fix it.

And yeah, I think it is pretty impressive that there was this result that is novel compared to what was already known about the cropping algorithm. And, you know, it seems to be a big success for this very first bias bug bounty idea.

And on to a bit more of a bummer story, not quite as positive. We have U.S. will investigate Tesla's Autopilot system over crashes with emergency vehicles. So this just came out: there will be an investigation of the Tesla Autopilot system, which is deployed and usable in hundreds of thousands of vehicles.

And this investigation was prompted by at least 11 accidents in which Teslas using Autopilot drove into parked fire trucks, police cars, and other emergency vehicles, as the safety agency, the National Highway Traffic Safety Administration, disclosed. These crashes killed one woman and injured

17 people. So this appears to be a broad investigation, which might be pretty impactful and is definitely interesting. I think it'll definitely help define how self-driving can be rolled out.

One interesting thing is that Tesla does tell drivers to use the system only on divided highways, but you can actually activate it off of divided highways, on smaller roads and streets.

Meanwhile, an interesting contrast is GM's Super Cruise, which actually is restricted to major highways; they use GPS to restrict where you can use it,

just so it doesn't have to go into those tricky cases and cause issues. So that might be one resolution from this: we need to actually restrict it in the software so that people can't switch it on in problematic situations, since there have been,

you know, issues around this. There's an article titled Five Teslas Have Crashed on the Same Road in California. So there are definitely some streets where it is problematic, and maybe some way of figuring that out would be useful.

Yeah, and also on that point, there was this article that went into how the Tesla owner's manual instructs drivers to keep their hands on the steering wheel, but the system continues operating even if the drivers only occasionally tap the wheel.

And this also points to a larger issue, which is, you know, you're supposed to be paying attention, but it's well known that humans have real difficulty actually staying engaged when a car mostly drives itself. Even if you want to be, it's hard. And then I think we've seen cases where people are actively abusing and ignoring that.

And there have been studies on using computer vision to keep track of how engaged and attentive people are. So this could push Tesla to really work more on this issue of driver monitoring, and generally to aid the safety of these systems to avoid these sorts of crashes.

Right. And there have been studies around how braking or parked cars are harder for Teslas in particular to detect. And that does make sense, because Teslas don't have lidar, which would help with detecting things at that kind of range.

So the result might also be to say, hey, you may have to install this sensor, or there may have to be measures to ensure that those can be detected with an accuracy similar to other cars.

Yeah, and this is particularly relevant for emergency vehicles, which are often parked on the side of a road, sometimes leaning into the road, which humans can recognize and steer around. But these are edge cases which AI may have a harder time adjusting to. And it's quite important, obviously, for emergency vehicles to not be interfered with.

So if nothing else, this investigation will certainly push Tesla and other developers to be more mindful of these issues and more careful. Tesla has certainly come under a lot of scrutiny, to some extent by us as well, for maybe not being very careful.

And on to our fun set of articles. The first one is Appreciating the Poetic Misunderstandings of AI Art. And this is about the Twitter account

@images_ai, which has a decent following of over 40,000 followers. They basically post lots of images, kind of surreal or glitchy and sometimes really pretty and beautiful, created through AI systems.

So this Twitter handle was launched at the end of June, and the account has produced, you know, really interesting results. I strongly recommend that you go check it out. What's interesting is that they get a lot of requests. And one successful prompt that I thought was really funny was, you know,

Elon Musk experiencing pain, which was this collage of grimacing Elon faces, or Elon-like faces, and Tesla chargers. And so it was very, very interesting that it pulled both of those together. Any favorites from you, Andrey? Yeah.

I don't know if I can even pick out any because I do follow this and I see a lot of delightful images from this account. So I think, yeah, I just kind of browse and I can't even remember any specifics because I see so many.

And yeah, this is quite a good article covering it, with several images embedded in it. It also goes into the creator of the account, Sam Burton-King, a 20-year-old student at Northwestern University. And they are not, you know, in CS or AI. They began as a math major and now are studying philosophy and music.

So they just created this account and are using tools that are publicly available, acting more as a curator of all of these requests and ideas that they get, and, you know, posting a lot of these images, sometimes dozens per day. So it's an interesting case of how you don't need to be an expert now to use AI to create art. You really just need to have sort of

a bit of taste and a bit of understanding of how to use these tools, not how the tools themselves work. Ah, shoot. You need a bit of taste. I'm kidding. You need to actually be a bit of an artist. Yeah. I think what's cool about the trend across many of the articles we've discussed here today is that, you know,

the average user can start to handle some of these AI systems and work with them. And that's really, really exciting. Though, I guess the other side of that is sometimes maybe we shouldn't with the Tesla AI example. And sometimes some people are not, you know, in the best state of mind to necessarily do that, or it's not exactly ready. But for a lot of these systems, it is about getting it to that end user, the average person, you and me, or your partner,

I don't know, your friend, your cousin, everyone. Yeah, that's a good point. And actually, the tool they use here is VQGAN+CLIP, which we've described before, where you just plug in this English description, just a natural language description, and then the AI generates an image from that, which actually is what Codex does as well. You just give it a description, and it generates code for you instead of an image.

So it's definitely an emerging trend, this sort of just tell the AI what to do and it does it for you. And you just need to understand how to phrase it, what you can use it for, and where it's not reliable.
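
For a rough sense of how these text-to-image tools work under the hood, here is a stubbed sketch of the VQGAN+CLIP idea: search for a latent whose generated image best matches the text prompt, as scored by a CLIP-like similarity. The real system backpropagates gradients through both models; this stand-in uses random hill-climbing just to show the optimize-against-a-text-score loop.

```python
# Stubbed VQGAN+CLIP-style loop: both "models" are numpy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def generate_image(z: np.ndarray) -> np.ndarray:
    """Stub for a VQGAN generator: latent -> image features."""
    return np.tanh(z)

def clip_similarity(image_feats: np.ndarray, text_feats: np.ndarray) -> float:
    """Stub for CLIP: cosine similarity between image and text embeddings."""
    return float(
        image_feats @ text_feats
        / (np.linalg.norm(image_feats) * np.linalg.norm(text_feats))
    )

text_feats = rng.normal(size=256)  # pretend embedding of the prompt

z = rng.normal(size=256)
best_score = clip_similarity(generate_image(z), text_feats)
for step in range(500):
    candidate = z + 0.1 * rng.normal(size=256)  # small random perturbation
    score = clip_similarity(generate_image(candidate), text_feats)
    if score > best_score:  # keep changes that match the prompt better
        z, best_score = candidate, score

print(f"final similarity: {best_score:.3f}")
```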

And on to our last fun story, something that I imagine many of our listeners have seen: Boston Dynamics robots can parkour better than you. So this is a new video from Boston Dynamics, the kind they release every once in a while. And here they showed how their humanoid robots can do sort of parkour. They can jump around on platforms,

they can do backflips, they can jump over little walls. And it's about a minute, minute and a half, of a pre-choreographed routine that is, as usual, super fun to watch. Maybe a little less fun than the dance video they released, but also maybe a little more impressive and exciting, because these are very athletic maneuvers

that require a lot of dexterity and a lot of, I guess, you know, strength to actually pull off.

And the article states that the routine shown took months of development, according to the company. And I'm really not sure I could achieve that in months, so props to these robots. But there is a failure rate to a lot of these things: the vault that the robots do has a 50% failure rate. Yeah.

Yeah, so you can actually find some videos, I believe, of these failures that robots have. And those are fun to watch as well because they really take some falls. You know, it's not gentle. They don't have any rope to catch them. They fall on the floor on their face. And it is pretty dramatic. And it's pretty important to build robots that can handle those falls because in the learning process, not even at this point when it's trained,

it is falling a lot and it is going through a lot. So there's a lot of resiliency built into a lot of robotics. Yeah. And these sorts of humanoid robots, some applications they can have are search and rescue and navigating, you know, human environments, which of course will be important. So this was a bit of a test of

how they can maintain their balance while switching behaviors and chaining actions together. So in addition to being fun, obviously this was them stretching the limits. But this robot, Atlas, isn't a production robot; it's not commercialized. So this is more internal R&D, as opposed to their dog robot, Spot, which they are trying to commercialize and which is available to buy.

Right. There's so much more press now that Hyundai has bought Boston Dynamics. They just keep coming out with new things. I think earlier they had some promotional thing with BTS, the band. So it's interesting to see a lot more come out from the company and for it to be less secretive.

Yeah, they've been doing this for a while and kudos to them. They do keep one-upping themselves and they don't do it too often. They do it usually every few months. But I do wonder if people will sort of

stop being as amazed if we keep getting these sorts of videos, and it's just like, oh, another Boston Dynamics robot doing a thing. And that's it for us this episode. If you've enjoyed our discussion of these stories, be sure to share and review the podcast. We'd appreciate it a ton. And now be sure to stick around for a few more minutes to get a quick summary of some other cool news stories from our very own newscaster, Daniel Bashir. Thanks, Andrey and Sharon. Now I'll go through a few other interesting stories we haven't touched on.

Our first story concerns the research side. Following in Google's footsteps, Samsung is using AI to automate the process of designing computer chips. The chip maker is using AI features and software from Synopsys, a chip design software firm. According to Wired, the Synopsys tool, called DSO.AI, might be the most far-reaching tool on the market because the firm works with dozens of companies.

The firm also has another advantage over competitors: years of semiconductor designs that can be used to train an AI algorithm. Besides Samsung and Google, Nvidia and IBM have also worked on AI-driven chip design. According to Mike Demler, a senior analyst at the Linley Group who looks at chip design software, AI is well suited to the task of arranging billions of transistors across a chip.

All current approaches for AI-based chip design tools use reinforcement learning. The method automatically draws up the basics of the design, including component placement and wiring. It tries different designs and learns which ones produce the best results.
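
As a toy illustration of that try-and-learn loop, here is a sketch that searches over 2D placements of a few components and keeps whichever design minimizes total wirelength. Real tools like DSO.ai use reinforcement learning over vastly larger design spaces; this sketch uses plain random search, and every component and net name in it is made up for illustration.

```python
# Toy placement search: try many designs, keep the best-scoring one.
import itertools
import random

COMPONENTS = ["cpu", "cache", "io", "dram_ctl"]
NETS = [("cpu", "cache"), ("cpu", "io"), ("cache", "dram_ctl")]  # connections
GRID = list(itertools.product(range(4), range(4)))               # 4x4 slots

def wirelength(placement: dict) -> int:
    """Sum of Manhattan distances over all connected component pairs."""
    return sum(
        abs(placement[a][0] - placement[b][0]) + abs(placement[a][1] - placement[b][1])
        for a, b in NETS
    )

rng = random.Random(42)
best, best_cost = None, float("inf")
for _ in range(1000):  # sample many candidate designs
    slots = rng.sample(GRID, len(COMPONENTS))
    placement = dict(zip(COMPONENTS, slots))
    cost = wirelength(placement)
    if cost < best_cost:
        best, best_cost = placement, cost

print(best_cost, best)
```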

Our second story, on business, takes us to Asia. SenseTime, China's largest AI company, is working with HSBC to arrange a Hong Kong IPO. According to Bloomberg, that IPO could raise over $2 billion. The AI startup is considering a dual listing in Hong Kong and China

and plans to file for its Hong Kong IPO in the next few weeks. SenseTime was founded in 2014 and develops AI technology for use in autonomous driving, augmented reality, medical image analysis, and other fields.

The pandemic helped SenseTime's business as demand for facial recognition from local governments in China rose, despite SenseTime's place on the US's blacklist of Chinese companies. While SenseTime's IPO seems near, many details appear to be in flux and could change as deliberations continue. Our final story concerns AI and society.

A large, multidisciplinary group of Stanford professors recently published a paper on what they call "foundation" AI models, like OpenAI's GPT-3. GPT-3, for example, is foundational because it was trained on massive quantities of data to reach state-of-the-art performance across a variety of tasks.

developers can leverage its general capabilities as the basis for software to handle specific tasks. While this sounds like a step forward and a way to make developing AI-based software easier, it does mean that those models will be more homogenized. We could see a future where many different AI models are based off of a single pre-trained model, like GPT-3 or BERT.
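
As a small concrete example of what building on a foundation model looks like in practice, here is a sentiment classifier that is a thin wrapper over a pretrained model from the Hugging Face hub; any bias baked into the underlying model flows straight into the application. This is an illustrative sketch, not part of the Stanford paper.

```python
# A downstream application as a thin wrapper over a pretrained model.
from transformers import pipeline

# Downloads a default pretrained model fine-tuned for sentiment analysis.
classifier = pipeline("sentiment-analysis")

reviews = [
    "This product exceeded my expectations.",
    "Completely useless, broke after one day.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {review}")
```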

As Fast Company notes, "It is already the case that almost all NLP models are built on top of BERT." Percy Liang of Stanford warns that this isn't necessarily a good thing: "We don't understand these models well, what they are capable of, and what happens when they fail."

If biases and other issues are baked into models like GPT-3 and BERT, as appears to be the case, applications built on top of those models will inherit those problems. Although companies employ ethics teams and select training data carefully, private companies might not comply with a set of regulatory standards to ensure unbiased models.

However, as Fei-Fei Li of Stanford stresses, a university setting can provide the variety of perspectives necessary for defining policies and standards.

Thanks so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed today and subscribe to our weekly newsletter with even more content at skynetoday.com. Don't forget to subscribe to us wherever you get your podcasts and leave us a review if you like the show. Be sure to tune in when we return next week.