
Rethinking Conferences, Chinese GPT-3, Farming Robots, Lyft's AV Sale

2021/5/7

Last Week in AI

People
Andrey Kurenkov
Daniel Bashir
Sharon
Topics
Daniel Bashir: This week's news summary covers green neural network training, the sale of Lyft's self-driving division, AI-generated fake satellite imagery, and a smart knee brace. The carbon emissions of training large neural networks are a growing concern, and researchers are exploring strategies to reduce energy use and emissions, including using smaller neural networks and making better processor, hardware, and data center choices. Lyft sold its self-driving division, Level 5, to Toyota's Woven Planet Holdings subsidiary, marking the end of its effort in autonomous driving. AI-generated fake satellite imagery (deepfake geography) could be used to spread misinformation, raising security and trust concerns. Roam Robotics introduced a smart knee brace that uses AI to adapt to the wearer's movements. Andrey Kurenkov: Large AI conferences have become too big, leading to information overload, strained peer review, and a chilling effect on unconventional work. Breaking large conferences into multiple smaller, more specialized events could improve efficiency and relevance. Training and fine-tuning large neural network models produces a huge carbon footprint, and strategies are needed to reduce emissions, including better hardware, processor, and data center choices. Energy use and carbon emissions should be included among the evaluation metrics for machine learning models. Huawei trained a Chinese-language GPT-3 equivalent with 200 billion parameters, marking China's progress on large language models. Different cultural contexts may understand and apply AI in different ways. Agricultural robotics is advancing quickly, using AI to improve farming efficiency and reduce pesticide use, which matters for environmental protection. Lyft's sale of its self-driving division signals a shift back to its core business and may be part of a broader consolidation trend in autonomous driving. Sharon: Virtual conferences lower carbon emissions but weaken interaction and networking between attendees. Agricultural robots help reduce pesticide use, which is important for the environment. The development and application of large language models is worth watching, and different cultural contexts may understand and use AI in different ways.


Chapters
Discussion on the effectiveness and sustainability of large AI conferences, highlighting issues like size, relevance, and networking opportunities, and proposing a shift towards smaller, more specialized events.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from AI researchers about what's actually going on with AI and what is just clickbait headlines. This is our latest Last Week in AI episode, in which you can get a quick digest of last week's AI news, as well as a bit of discussion between two AI researchers as to what we think about this news. To start things off, we'll hand it off to Daniel Bashir to summarize what happened in AI last week.

We'll be back in just a few minutes to dive deeper into these stories and give our takes. Hello, this is Daniel Bashir here with our weekly news summary. This week, we'll look at green neural net training, Lyft's self-driving unit sale, fake satellite imagery, and a smart knee brace. It's been found in recent years that modern deep learning models have an immense carbon footprint.

As covered by Synced Review, a research team from Google and UC Berkeley examined the energy use and carbon footprint of popular large-scale models. In their publication, "Carbon Emissions and Large Neural Network Training," the team introduces reduction strategies and endorses previous appeals for publication norms to make energy use and emissions more transparent.

Opportunities for energy efficiency include using deep neural networks of a reduced size that consume less energy without sacrificing accuracy. The team also found that prudent processor, hardware, and data center choices can help reduce the carbon footprint of deep neural networks by up to 100 to 1,000 times.

Next up, not long after Uber's self-driving unit was absorbed into Aurora, Lyft has sold its own self-driving group Level 5 to Toyota's Woven Planet Holdings subsidiary. As TechCrunch reports, Lyft will receive $550 million in cash, with $200 million paid upfront as part of the acquisition agreement. The Lyft Level 5 team will continue to operate out of its Palo Alto office.

The sale ends Lyft's four-year effort to develop its own self-driving system and removes a costly annual expense from its budget as it pursues profitability. Lyft says the Woven Planet agreement is not exclusive, and it will continue working with others such as Motional, a partner with whom Lyft launched an experiment to offer rides in autonomous vehicles on the Lyft network.

When we think of deepfakes, we imagine AI-generated misinformation appearing on social media, Twitter bots and the like. But as The Verge reports, some researchers are worried about deepfake geography, AI-generated images of cityscapes and countryside,

AI-generated satellite imagery could be used to create hoaxes about wildfires or floods, or discredit stories based on real satellite imagery. This fits in with broader concerns about how the mere existence of deepfakes could throw what we are willing to believe into question. Deepfake geography could also be a national security issue and impact military planning.

Bo Zhao, professor at the University of Washington, says lying with maps is a centuries-old phenomenon, but as his experiments found, deepfaked satellite images present a new challenge by virtue of being so realistic. For Zhao, the most important thing is to raise awareness so geographers aren't caught off guard.

And finally, many companies out there are working on robotic exoskeletons. While we might not be anywhere close to a functioning Iron Man suit, these technologies do have the ability to impact how people move and work. Roam Robotics is one such company. According to TechCrunch, Roam makes assistive devices out of fabrics instead of metal. While fabric has less strength than metal, it is more suited for everyday use.

Roam has recently introduced a smart knee brace, which was registered with the FDA as a Class 1 medical device and uses AI for an adaptive technology that senses the wearer's movements and adjusts accordingly. The product fits in with Roam's focus on helping the large fraction of the world limited by their mobility with wearable robotics. That's all for this week's News Roundup. Stay tuned for a more in-depth discussion of recent events.

Thanks, Daniel, and welcome back, listeners. Now that you've had that summary of last week's news, feel free to stick around for more laid-back discussion about this news by two AI researchers. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I focus mostly on learning algorithms for robotic manipulation and reinforcement learning for robotics. And with me is my co-host...

And Sharon, you're graduating soon. So one thing you may look back on fondly or not so fondly from your PhD life is conferences, of course. Oh, yes. There are positives and negatives. Positives: travel. Conferences are something I think most people agree are a very pleasant part of research academia. You know, you go to a big event, you present your research, you interact with your colleagues.

You know, fellow researchers learn about a bunch of stuff, you know, go out for drinks, visit different parts of the world, etc. So at least until COVID hit, you know, this was a pretty standard part of life.

And we bring it up because the first article or blog post we have here is from a researcher by the name of Julian Togelius, a professor. And he has written about rethinking large conferences, giving a take on kind of the state of conferences and whether we should maybe revise how they're done.

So to set the scene a little bit, basically the inspiration for this is that in AI, because it has become so big, there are now kind of gigantic mega conferences. So something like NeurIPS has more than 10,000 attendees.

I think CVPR, the computer vision conference, is similar. And there's a whole bunch of really gigantic conferences with thousands of people: EMNLP, ICRA, IROS, lots of them.

And so this blog post is basically about that and questioning whether that's really useful or not. So to give a quick summary, there's a few criticisms of this sort of state of affairs. One of them is that

you are not likely to see many things that are actually relevant to you. So there's like thousands of talks, you know, gajillions of posters, and the fraction that's relevant to you personally as a researcher is super low because these are so big. And there are other issues pointed out as well.

For instance, reviewing is notably quite problematic at this point, given the size. You know, you have thousands of submissions and fellow researchers need to review them. And it's been shown with research that, you know, a lot of it is chance; it's really not good. It's kind of broken.

On top of that, it kind of discourages weirder papers. You want to be conservative. You want to follow established practices so that, you know, your reviewer isn't freaked out and doesn't have reasons to reject it. And then it also points out that, you know, people may think it's good for networking, but at these giant places, actually, if you don't already know people, maybe not.

Yeah, so that's a quick summary. I'm curious, Sharon, do you generally agree? Are any of these criticisms more or less agreeable to you?

Yeah, so yesterday and actually today, too, was ICLR, which is one of the big machine learning conferences. And I presented a poster yesterday. I would say it's not exactly the same, since I feel like having these in-person conferences, having in-person anything, is just crazy right now.

It's a different experience, I would say. But that being said, I mean, this reduces carbon emissions by a ton if we don't fly people around all the time. So I think in some ways it does enable a better experience where people can interact with lots more other people even. And more people can attend because, you know, sometimes it's harder to travel. But on the other hand, I do think

the experience was very different than at a conference. And I feel like in terms of my work, it didn't get as much exposure there, which for my case, I don't really care actually. But I think in cases where...

actually there's a little bit less media attention on your group, it would matter. And it would be great to talk to some of the luminaries face to face and get to know them, and then go out for dinner with them or something like that afterwards. All these weird ad hoc, not weird, but like fantastic ad hoc things would happen during conferences, and they were

amazing. They would be like, you randomly bump into someone and you're like, want to get dinner together. And it's just, and you get dinner with this big group and it's just so fun. And you meet so many people. And I don't, I don't think that that is there with the virtual stuff. For sure. Yeah. That's an interesting point that actually isn't touched on in his blog post, which is with COVID, everything has gone virtual and everyone just called in, you know, to various Zoom calls and whatnot.

And there were socials in, you know, a Gather Town. And yeah, it's definitely a different experience. And I think we discussed it before, in fact, where we touched on this topic, and touched on how it lowers the barrier to entry and reduces carbon emissions, but at the same time, the experience is definitely weaker and it's kind of harder to get as much out of it. And I think that also touches on something that this blog post kind of proposes, which

is that instead of having just these gigantic conferences, we have more smaller conferences that are more for each subfield, instead of these mega, mega conferences that have like everything. And his point is, you know, we have more of these small conferences, and the big conferences may still exist, but they will not have a review process. Instead, they will just accept papers from these smaller conferences, right?

Which seems like an interesting suggestion, for sure. And I think this blog post made me wonder: are we just stuck in this old paradigm of conferences from the 80s or 70s, when you didn't really have the internet, and so you needed to meet in person, you know, with your printed papers and your poster to get across what you're doing?

Are we not taking the opportunity now to really rethink and restructure everything and instead blindly just continuing the same stuff we've been doing for decades?

Yeah, I mean, that's a good point, you know. And I think we were probably bound to split up a little bit at some point anyways, but I guess people are always optimistic that, you know, some of these fields would come together, especially with multimodal work, and that this stuff could bring about more interdisciplinary work. I'm not sure to what extent it does. Yeah. Another thing that this one doesn't touch on, but I think is relevant, is that conferences...

These big conferences usually also have workshops, and often your work gets accepted to a conference, but also to a workshop. And your workshop is much more specific to your topic. It's really quite specific. There's only maybe a hundred people, not many, and only dozens of posters, for instance.

In the biggest case, there's the deep reinforcement learning workshop at NeurIPS, which is still way smaller. So personally, I do think I enjoy workshops more overall. And I do like this idea that he proposes of moving to a different model, kind of rethinking how things work. That way we have more flexible places to go. No, I'm kidding.

But I think right now it's so big that there are actually only very few locations in the world that can host such a big conference. Yeah. And also, as you mentioned, personally, I feel it's pretty problematic to require travel, both because of the carbon costs and also because, you know, some students may not have funding. It's pretty expensive. So then, you know, it favors

elite universities, you know, people with scholarships even more, which is not great for, you know, allowing people from different places, different backgrounds to get into the field. Right, right. Well, speaking of carbon emissions, our next article is titled Google and UC Berkeley proposed green strategies for large neural network training. And this is from Synced Review.

So as these models get bigger and bigger and bigger, obviously there is a giant carbon footprint that comes out of training these models and perhaps fine-tuning these models, which means like retraining things or doing lots and lots of flops over time throughout the training and tuning of these models.

And so Berkeley and Google jointly put together a paper around carbon emissions and large neural network training. And they introduced different strategies to reduce carbon and

also endorse previous appeals for certain publication norms that are designed to make energy use more efficient and also just transparent for these huge ML models that we're putting out. One interesting thing is that they were able to find that we can reduce the carbon footprint of a neural network by 100 or 1,000x.

Simply by just making better decisions around hardware and processors. And of course, maybe there's a cost factor with this as well. But if we can reduce it by those orders of magnitude, maybe we should be thinking a little bit more strategically about what we do. And by reporting them, you know, in publications that would encourage people to think about it more consciously.

And I know it would just be kind of like, you know, with the ethics stuff, people are like, oh, broader impacts. How much does that really impact things? I think it does kind of seed this thought and make the researcher feel like, you know, I do have to think about this and I do have to put this together and I do have to prepare this at some point. And hopefully that would start to change things over time. What are your thoughts on this, Andrey?

Yeah, I think this is pretty cool. We've seen sort of the start of similar ideas in the last couple of years. There was notably a paper on kind of how to calculate the carbon cost of different models. But in the past, I think they've mostly emphasized kind of the cost to train a model. Whereas what I really like about this one is it focuses, as far as I can tell, more on the actual lifetime usage of it.

So as we train more models, for instance, GPT-3, that's a deployed model. OpenAI is selling it as a service and has plans to spin it up and keep it running in multiple data centers. And of course, Google and Facebook and so on are doing very similar things. They are actually running these models at scale continuously, which is really where the energy and carbon footprint is coming from. Not so much kind of development and testing, I would imagine.

And yeah, this paper points out, for instance, that for data center infrastructure, it says that cloud data centers can be two times more efficient than typical data centers. And machine learning oriented accelerators inside them can be

five times more efficient than off-the-shelf systems. And then when you combine your choice of deep neural net, data center, and processor, they say you can reduce the carbon footprint by up to 100 to 1,000 times. So that's pretty impressive. And I do believe that given how...
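To make the multiplication concrete, here is a minimal back-of-the-envelope sketch in Python of how those factors might combine. The energy figure, PUE values, and grid carbon intensities below are illustrative assumptions, not numbers from the paper or from the episode.

```python
# Back-of-the-envelope sketch (not from the paper itself) of how the
# multiplicative savings discussed above might combine.
# All numbers below are illustrative assumptions, not measured values.

def training_emissions_kg(energy_kwh: float, pue: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions for a training run."""
    return energy_kwh * pue * grid_kg_co2_per_kwh

baseline = training_emissions_kg(
    energy_kwh=1_000_000,        # assumed raw accelerator energy for a large model
    pue=1.6,                     # assumed data center overhead (power usage effectiveness)
    grid_kg_co2_per_kwh=0.5,     # assumed carbon intensity of the local grid
)

# Hypothetical improved choices: a sparser/smaller model (~10x less energy),
# ML-oriented accelerators (~5x), an efficient cloud data center (PUE ~1.1),
# and a low-carbon region (~0.1 kg CO2 per kWh).
improved = training_emissions_kg(
    energy_kwh=1_000_000 / (10 * 5),
    pue=1.1,
    grid_kg_co2_per_kwh=0.1,
)

print(f"baseline: {baseline:,.0f} kg CO2e, improved: {improved:,.0f} kg CO2e")
print(f"reduction factor: {baseline / improved:,.0f}x")  # the factors multiply into the 100-1,000x range
```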

you know, young and early into this technology we are in terms of deploying it at scale, this may be a very good point and maybe something that these big companies haven't tackled yet. And the last thing I really like here is, as you said, they also point out that to make calculation of, you know,

of estimations of energy cost. They point out that machine learning papers that use large computational resources should make energy consumption and carbon emissions explicit when possible, which is usually not the case. And they even say they're working with the benchmark MLPerf to include energy usage during training and inference as an industry standard benchmark. So

Yeah, I think I'm pretty impressed by how kind of practical and sort of important this seems. And by the way, looking at the author list now, I noticed that this was out of Google. The last author is Jeff Dean. So yeah, it's a collaboration between Google and Berkeley. So yeah.

This probably points to Google actually optimizing their data centers, and given their scale, they can really make a push to make this a standard practice. Yeah, it's really good if they can do that. And, you know, it makes sense to think about this, as I think a lot of technologies are starting to take on the bigger is better, more is better, compute-hungry kind of approach.

I mean, blockchain, obviously blockchain-enabled technologies, are definitely causing a little bit of this as well. And it certainly is affecting this. And I would be personally curious about, you know,

If this is energy efficient, can we also make this cheaper such that when someone is requesting cloud resources or doing something of that sort, they can do so consciously and also in a way where they don't have to break the bank, especially as a researcher? For sure. And since you mentioned it, one of the reasons I am not a fan of Bitcoin is because of

its, you know, absurd inefficiency. I think I've seen quotes that say that like 2% of electricity usage in the world is now going toward Bitcoin mining, which is...

you know, just solving nonsensical hard math equations to power what is right now a speculative asset. But anyway, I guess we don't need to get into it. You mean the currency of Mars? No, just a joke. The currency of the future. I mean, yeah, it's problematic. I thought it was like 0.6%. But regardless, if it is anywhere near that, it's pretty...

I don't know. It's just a little bit like, okay, well, if this is actually accelerating climate change, that is very sad. Like that is just, we are destroying ourselves in a really sad way. Um,

Actually, I noticed this paper has a whole comparison to Bitcoin. It says if Bitcoin were a country, it would be in the top 30 in terms of CO2 emissions, larger than Argentina, whose population is 45 million. The estimated annual carbon footprint of Bitcoin mining this year is equivalent to roughly 200,000 to 300,000 whole passenger jet

San Francisco to New York round trips. So not AI, but there's a fun fact for you there. But last thought actually, now that you're on this point, I think AI is a bit similar to Bitcoin, mainly in the sense that unlike data centers in general, as far as I understand, most data centers don't do very computationally intensive work, right? They

you know, give you web pages, they do logic. But with AI, these giant models, there's a lot of compute. And as we keep going and so much software is going to be powered by AI, I mean, no doubt you have, you know, a lot of industries, autonomous driving, where we're going to have even more of these sort of big models in the cloud. Definitely this would be important, I think, to avoid

inefficiency and the possibility of, you know, even more inefficiencies similar to Bitcoin. Right. Absolutely. Well, speaking of large models that perhaps emit a lot of carbon, our next article from VentureBeat is titled Huawei Trained the Chinese-Language Equivalent of GPT-3. Chinese GPT-3 is here!

I think this is, you know, yet another huge number of authors and a huge number of parameters. So about the same number of parameters, actually more, so around 200 billion as opposed to GPT-3's 175 billion parameters. And it's trained on double the amount of text, so 1.1 terabytes of Chinese text versus 570 gigabytes of text for GPT-3.

And, okay, this model is huge. It could achieve big things and is probably very similar to GPT-3 in terms of qualitatively how it does and helps. And a lot of this data is coming from public data sets just crawled over the internet as well. So a very similar thing, and it's just being developed across the ocean. What are your thoughts on this, Andrey? Yeah.

Yeah. I mean, it's, it's definitely cool. I think too often we get stuck in sort of the American sphere and don't really keep track of what's being done in, um, you know, different parts of the world. Uh, but yeah,

we have seen China, you know, obviously become huge in terms of AI research. I think, uh, the kind of ratio of American versus Chinese papers at major conferences has been changing. And, uh, you know, China has its own Googles and Facebooks in terms of, um, AI labs. So it was not too unexpected, but at the same time, um,

still pretty impressive, right? Because GPT-3 was kind of a big milestone and here they did that, but a bit more in terms of more parameters, more data, more compute. You know, there's all sorts of absurd numbers in terms of 1.1 terabytes of Chinese text, 2,048

Ascend 910 AI processors. I don't know what that is, but, you know, sounds impressive. I think it has like 30 to 40 gigabytes of memory. So it's top of the line hardware. And yeah, I think this points to, I guess, the trend of larger and larger models still holding, which maybe is a bit surprising because like how long can this keep going? Right.
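As a rough aside, here is some back-of-the-envelope arithmetic on why a model of this size needs thousands of accelerators. The parameter count comes from the article; the per-chip memory figure and the bytes-per-parameter rule of thumb are our own assumptions, not numbers from Huawei.

```python
# Rough arithmetic (our own estimate, not figures from the Huawei paper) on why
# a 200B-parameter model needs a large cluster of accelerators.

params = 200e9              # parameter count reported in the article
bytes_fp16 = 2              # half-precision weights
bytes_adam_state = 16       # common rule of thumb: fp32 master weights + Adam moments + fp16 copy
per_chip_gb = 32            # assumed HBM on one accelerator

weights_gb = params * bytes_fp16 / 1e9
train_state_gb = params * bytes_adam_state / 1e9

print(f"weights alone: ~{weights_gb:,.0f} GB")        # ~400 GB: more than 10 chips just to hold weights
print(f"training state: ~{train_state_gb:,.0f} GB")   # ~3,200 GB before activations and data batches
print(f"chips needed just to hold training state: ~{train_state_gb / per_chip_gb:,.0f}")
# Even before activations or data-parallel replicas, that is on the order of a hundred chips,
# which is why clusters in the thousands are used for throughput.
```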

But so far, it seems like we'll get to trillions. I mean, we have already gotten to trillions, but soon enough, we'll be seeing more and more of these 500 billion, 1 trillion, whatever parameter models, which will be interesting. There'll be a sight to behold. We will see if there are qualitative differences when we get to that stage. Yeah.

Yeah, but I am curious to see, you know, even though it is a very similar model, I think it might be used and perceived in very different ways since China does approach AI in a different light. And it was noted in the article that, you know, they didn't do as much bias analysis as we did for GPT-3 because that's not necessarily as top of mind yet, hopefully. And so it

It's just culturally the way China, I think, perceives AI is a little bit more positive than the way the U.S. is seeing AI. And so we'll see how that dictates, you know, use cases and what those implications are. For sure. Yeah. Notably in the paper, they did, you know, kind of the same, a lot of the same evaluation types that GPT-3 did in terms of,

you know, a broad range of tasks and few-shot and zero-shot performance. And similarly, they showed that this is a really adaptable model that can be applied to lots of things. And now with OpenAI trying to commercialize GPT-3 and, you know, actually make people pay for this huge model that can do lots of things, it will be interesting to see if something similar happens here.
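For a sense of what "few-shot" means in practice, here is a tiny sketch of how a few-shot prompt is typically assembled for a GPT-3-style model. The task, the examples, and the commented-out generate call are hypothetical illustrations, not the evaluation setup used in either paper.

```python
# A minimal sketch of few-shot prompting for a GPT-3-style model. The task,
# examples, and the generate() call mentioned below are made up for illustration.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate labeled examples and the new query into one prompt string."""
    lines = [f"Review: {text}\nSentiment: {label}\n" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

few_shot_examples = [
    ("The food was wonderful.", "positive"),
    ("The service was painfully slow.", "negative"),
]

prompt = build_few_shot_prompt(few_shot_examples, "A bit pricey, but worth every penny.")
print(prompt)

# With a real API, this prompt would then be sent to the model, e.g. (hypothetical call):
# completion = model.generate(prompt, max_tokens=1)
```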

And, you know, if these models like GPT-3 and this one now will turn out to be kind of a game changer or not. But moving on to not talking about gigantic neural net models, something a little more, you know, smaller and more local. But that does have a lot of potential. Our article here, based on a press release, is Farming Robot Kills 100,000 Weeds Per Hour with Lasers.

So this is a pretty short kind of press release type article that talks about how this company, Carbon Robotics, has unveiled its third-generation Autonomous Weeder, which is a smart farming robot that identifies weeds and then destroys them with high-power lasers. And this is important because this kind of technology

It doesn't destroy or harm soil and water. And it obviously makes it so you don't need to use pesticides on weeds, which has its own implications, and also makes it so you don't need to pay manual laborers. Instead, you can use this kind of technology.

And at a high level, it drives down rows of crops. It has 12 cameras scanning the ground. And then there's an onboard computer that has deep learning computer vision algorithms. And then it can use carbon dioxide lasers to zap and kill plants.
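As a rough illustration of the pipeline just described (cameras, an onboard vision model, then lasers), here is a toy sketch of the detect-then-zap loop. The class, the function names, and the camera and laser interfaces are hypothetical placeholders, not Carbon Robotics' actual software.

```python
# A toy sketch of the detect-then-zap control loop described above. The model,
# camera, and laser interfaces are hypothetical stand-ins, not a real vendor API.

from dataclasses import dataclass

@dataclass
class Detection:
    x: float              # position in the camera frame
    y: float
    is_weed: bool         # crop vs. weed classification
    confidence: float

def detect_plants(frame) -> list[Detection]:
    """Placeholder for the onboard deep learning computer vision model."""
    return []  # a real system would run a trained detector on the frame here

def run_weeding_pass(cameras, laser, confidence_threshold: float = 0.9):
    # As the robot drives down a row, each of the (12) cameras scans the ground.
    for camera in cameras:
        frame = camera.capture()
        for det in detect_plants(frame):
            # Only fire on high-confidence weeds, never on crops.
            if det.is_weed and det.confidence >= confidence_threshold:
                laser.aim(det.x, det.y)
                laser.fire()
```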

And this is interesting partially because this is part of a trend. So another article just a week ago from The Guardian is titled Killer Farm Robot Dispatches Weeds with Electric Bolts, which describes a kind of similar development from a different company. And really there's hundreds of companies that are working on this sort of thing. It's one of the big challenges and opportunities of robotics today.

And AI developments have definitely made that more feasible, but it's definitely ongoing. Yeah, so Sharon, what do you think? Well, we've seen a bit of this before from, let's say, Blue River Technology.

And it makes sense that this is continuing to roll out. And I think it's interesting that there's also the take of, you know, we want to reduce pesticides, because that is the trend as we realize, you know, pesticides are not good for the environment, if we're going to speak about, you know, climate-related things.

And so I think it's really important that these technologies are being brought to bear there. And I think it's also really funny that this is called the killer robot in the article, but it's a good killer robot in the sense that the thing it's killing is weeds. So it is a killer robot, but it's killing weeds. So hopefully that makes it more okay. You know, this is Terminator for Farms, not Terminator for people.

Exactly. Exactly. This is cool, I think, as you said. Blue River actually, I think they got started around 2012, 2011. When I was graduating undergrad in 2015, that's one of the companies I thought about applying to. And they were kind of growing at that point and have since been acquired by John Deere. So it seems pretty safe to assume that these kinds of technologies will be

kind of keep being developed and mature and become commonplace over the next decade. And given the shortage of manual labor that exists, at least in the US, for stuff like this, but also for many other tasks in agriculture, like fruit harvesting or other things, this is definitely a pretty useful field, this agrobots area.

And I think something that doesn't get mentioned a lot, I don't think many people are aware of this with respect to AI. You know, everyone knows about computer vision, whatever, maybe like humanoid robots, but this sort of thing maybe goes under the radar. And I do think it's worth kind of appreciating. Yeah.

Oh, absolutely. Because it's actually impacting a real area, and the way we're doing things in that area. So I think it's very exciting.

Well, we're just on an emissions streak, a climate streak today. Everything is slightly related, because the next thing is around self-driving cars. So our last article is from TechCrunch, titled Lyft Sells Self-Driving Car Unit to Toyota's Woven Planet for $550 Million. Okay.

And so this is big news: Lyft has sold off its AV unit, the side that was doing autonomous cars, to Toyota for $550 million, meaning that Lyft is kind of

suggesting that they're pulling out of this race or at least partnering with Toyota and focusing on their main product of being that go-to ride hailing network. And Toyota is kind of doubling down on autonomous vehicles.

Andrey, do you want to say some words about this? Any surprises? Probably not hugely surprising. Maybe it is. A little bit. Um, I personally didn't realize that this team was so big. Uh, this one points out that, you know, they had 400 people in the U.S., Munich, and London.

which is pretty big, you know, and the sale is 550 million. So obviously this was pretty substantial. My impression was that Lyft's strategy was more to partner with other companies like Waymo instead of, you know, developing its own tech, more like Uber. And it does say here that Lyft will dedicate its resources to what

It was really aiming for, which was to become the go-to ride-hailing network and fleet management platform used by any and all commercial robot taxi services. And it already has partnerships with autonomous vehicle developers, for instance, Hyundai-Aptiv and Waymo. And they want to keep expanding. And, you know, I always thought that made a lot more sense than Uber's approach, right? Like...

autonomous driving is hard. Google has been at it for more than 10 years. And we've talked about it. I think it was sad, but also a little bit expected, how it went for Uber. Like their whole thing just completely fell apart into a giant loss. So this seems like a smart, smart step. And as you said, maybe not too surprising, but

maybe also kind of part of a trend, in that there is some consolidation. So Zoox, one of the companies, has been sold off to Amazon. I do wonder, you know, how many of these startups and how many of these companies that have popped up in the past decade are still around, or if it's just sort of a smaller number of really big companies still working on this.

I think it'll consolidate over time because I think the space is really saturated. It is crowded as hell. So I can imagine it getting much smaller as people realize, you know, do I still want to stay in the game, or, you know, this doesn't really make sense for my business model. Exactly. And it seems like more and more it's really only gigantic companies. Like, for instance, Cruise was a startup, got acquired or...

Something like that, acquired by GM. Obviously Waymo is part of Google, Zoox is part of Amazon. So given the nature of this problem, so capital intensive, so research intensive, so difficult, it does seem inevitable. And maybe all of these startups will just be acquired by the big players to get their talent. Yeah.

That is quite possible. And I would not blame the big players for doing that. They're probably like, yeah, we'll just wait to see which ones make sense for us to acquire. Yeah. And with that, thank you so much for listening to this week's episode of Skynet Today's Let's Talk AI podcast. You can find the articles we discussed here today and subscribe to our weekly newsletter with similar ones at skynettoday.com.

Subscribe to us wherever you get your podcasts and don't forget to leave us a rating and review if you like the show. Be sure to tune in next week.