
Hello World, Surveillance, and New Research Practices

2020/3/13

Last Week in AI

People
Andrey Kurenkov
Sharon Zhou
Topics
Andrey Kurenkov: Discussed the widespread use of Clearview's facial recognition technology and the privacy and ethical issues it raises, noting that the technology is unregulated and could be misused for all sorts of purposes, including law enforcement, commercial, and personal uses. Also covered Banjo's large-scale deployment of a surveillance system in Utah and Buenos Aires' use of a live facial recognition system, analyzing the transparency, explainability, and reliability problems in these cases, as well as the use of AI systems in law enforcement and how to balance technological progress with societal ethics. Analyzed NeurIPS's new research submission guidelines, which ask researchers to include a societal impact statement and disclose financial conflicts of interest to promote ethical responsibility in AI research, the controversy the policy has sparked, AI researchers' responsibility for addressing ethical issues, and Joseph Redmon leaving AI research over ethical concerns. Also discussed Yoshua Bengio's blog post on rethinking the machine learning publication process, which argues that the current competitive culture has increased the number of papers while decreasing their quality and depth, and which proposes a new publication model.

Sharon Zhou: Expressed concern about Clearview using Apple's developer program to sidestep review and distribute its facial recognition app on iPhones, agreed with Apple's decision to disable the app, and discussed how much control Apple should have over other companies and how to define harm to consumers. Also discussed the use of AI systems in law enforcement and how to balance technological progress with societal ethics, the rapid growth of AI research and the ethical challenges it brings, and how to balance research speed with research quality and ethical considerations. Discussed Yoshua Bengio's blog post on rethinking the machine learning publication process, as well as the impact of the COVID-19 pandemic on how AI conferences are held and the move toward online conference formats, noting that online conferences can reduce carbon emissions and benefit people with limited resources or limited mobility.

Chapters
The hosts discuss how the coronavirus pandemic has brought researchers together globally for collaborative work, despite the challenges of working from home.

Transcript


Hello and welcome to Skynet Today's Let's Talk AI podcast, where you can hear from actual AI researchers about what's actually going on with AI and what is just clickbait headlines. I am Andrey Kurenkov, a third-year PhD student at the Stanford Vision and Learning Lab. I mostly focus on learning algorithms for robotics.

And with me is my co-host. Hi, I'm Sharon, a third year PhD student in the machine learning group working with Andrew Ng. I do research on generative models, improving generalization of neural networks and applying machine learning to tackling the climate crisis. All right. So first episode, exciting. Sharon, how are you today? I've been reflecting on how the coronavirus has actually, uh,

brought together researchers globally on a mission to work on something collaboratively. And so I guess that is the silver lining around the coronavirus. But otherwise, I've been holed up at home. Yeah, working from home has been a little challenging, but I guess we are trying to make it work. And as we'll discuss a little later, it seems like there will be some repercussions for actual conferences and how research gets done in the near term. And how are you, Andrey?

For my part, I'm pretty good. I had a paper deadline just last Sunday, a week and a half ago, for the International Conference on Intelligent Robots and Systems. So yes, after that deadline stretch of work, I've been kind of recovering.

So it's been nice to take it a bit easy. Cool. So yeah, enough about ourselves. Let's go ahead and talk about the news and topics we'll be discussing today. Let's go ahead to our first topic of surveillance. And there was recently this kind of bombshell report that we'll start with, which is about the company Clearview. Clearview is a company that sells facial recognition software. It has a massive database of face images scraped from the web.

And there was a leak that showed kind of their client list. And the client list turned out to be huge. And it turned out that the Justice Department, ICE, Macy's, Walmart, many, many companies

were using this company's product. And so this kind of implies that Clearview is being used not just for large-scale crimes that the Justice Department might be overseeing, but perhaps for things like shoplifting from Walmart or Macy's, and smaller offenses that are being enforced with the software too. And I think this starts to call into question what the boundaries of Clearview might be. Exactly. So they claim their

facial recognition technology is meant for law enforcement and agencies, but this leak showed that they have been working with over 2,200 clients, who include companies and individuals in addition to law enforcement. So that's pretty weird. The leak came from internal documents provided by an anonymous source.

And yeah, it showed that not just law enforcement, but also college security departments, attorney general's offices, and multiple countries like Australia and Saudi Arabia have been using their tech.

Very interesting to hear that individuals are leveraging this technology as well. And I can only imagine what kind of applications they are using it for. Perhaps to hunt down very specific types of people, perhaps not even people who've committed real crimes. So that's almost mafia-esque and is certainly concerning.

A senior associate at the Center on Privacy and Technology at Georgetown Law School named Clare Garvie said: this is completely crazy. Here's why it's concerning to me. There is no clear line between who is permitted access to this incredibly powerful and incredibly risky tool and who doesn't have access. There's not a clear line between law enforcement and non-law enforcement.

And as this article notes, there are currently no laws regulating the use of facial recognition at the federal level. There are proposed bills, but basically it's unregulated and so individuals can use it for all sorts of things. And this article also notes that collectively all these agencies, companies, and institutions have done something like 500,000 searches. So

for images of a person, maybe your face, they've tried to actually get a name and identity using this technology. What's interesting is that because Clearview is largely driven by profit, it's unclear how they are vetting their potential international clients, or even domestic clients, particularly, perhaps, countries with records of human rights violations or authoritarian regimes.

So this is certainly concerning that the vetting process might be more financially driven than ethically driven. And this article also notes that the CEO of the company did not strongly deny that they would sell the technology to countries with, for instance...

where being gay is a crime. So you can imagine there's many potential misuses by authoritarian regimes. And it's really concerning that this company seems to not take a strong stance against it. So I will note that they said that they would never sell to countries that are adverse to the U.S., such as China, Iran, or North Korea.

Of course, if they were to do that, they would probably be shortly shut down by the government in some way. Yeah, so I think this is pretty surprising to see. I mean, Clearview has been around for a little while, but this is the first view into just how prevalent

the use of this technology is. And it's another reminder that, you know, yes, there might be killer robots in our future, but there's a lot to sort of be concerned about with AI right now that's actually happening in the real world. And news about Clearview has just been escalating across the past several weeks and months. And recently, Apple has also disabled Clearview AI's iPhone app for breaking Apple's rules on distribution.

Indeed. So what happened was Clearview has apparently been using the developer program, where you can distribute your app to people who are supposed to help with the development of it instead of having it on the store. And that's basically sidestepping a process that Apple has in place for clearing certain regulations and so on. This is not unprecedented, of course. Facebook previously had a developer program similar to this

that was used to track teenagers' online habits. Yeah, and it seems like Clearview is basically reliant on this way of operating, of distributing via this method to individuals and so on, so they can use this facial recognition on their phone.

So it'll be interesting to see if they can actually find a better way to make it work. But at least for now, Apple has disabled their ability to do this. What's interesting is that this definitely raises the question of how much agency Apple should have over other companies, and how much Apple can really be disabling and enabling on their phones. You know, how much control should they have

vis-a-vis control that Clearview and Facebook perhaps have. And there's definitely a poorly defined line or at least area of what harm is and what harm to the consumer is. Yeah, I think in this case, no one's going to be crying because Clearview can't distribute its

creepy app. But there are instances where maybe we would like to do some research involving crowdsourcing, and it would be beneficial to distribute an app. And it seems maybe not too clear how we could go about it because of how Apple operates. OK, so that's enough about Clearview. Sadly, it looks like it's not the only company to be looking out for in terms of facial recognition.

So there was recently a news story from Vice titled "How a Small Company Is Turning Utah Into a Surveillance Panopticon." It's about this company Banjo that has been given real-time access to state traffic cameras, CCTV and public safety cameras, and more, so that they could detect anomalies. And this is really a huge amount of data to give to a small company for

this vague goal of detecting anomalies. Banjo has basically argued that they need as much data as possible to enable a better system, which is possibly true actually for a machine learning system, though it is concerning that they are getting multiple streams of data that perhaps violate

people's privacy considerably. Yeah, and it's also not clear just how deployed it is at this point. So if their technology is actually flawed, they claim they can, for instance, identify active shooter situations. Do they get false positives? Do they get false negatives? It seems unclear.

And so it's a little disturbing to see that they have this much access with relatively little proof that they deserve it. They mentioned live shooter events in particular that they want to prevent. And it does make me wonder how much this type of technology will enter the conversation around gun control.

If guns are not controlled that much and people can liberally use them, should we also allow an AI to be surveilling the state? And so I wonder how that would be entering those discussions. Yeah, I think it is an interesting point that in general, the pitch that this company has is that

if you have an AI program looking over cameras, you can really catch dangerous situations much faster, just because you can track things. So it does seem beneficial for the public. The question is how much scrutiny this company deserves and how much oversight there should be.

So on to our next article: "The U.S. fears live facial recognition. In Buenos Aires, it's a fact of life."

Essentially, in Buenos Aires, they have live facial recognition systems that can function on 300 camera feeds at a time and automatically send a notification to police over messaging apps like Telegram when they find a potential match. And since there are more than 300 cameras available to the police in the city, the video feeds from both underground and above ground are cycled through.

Up until now, the police have been fairly enthusiastic about the system. And they have recently tweeted about seven cases in which the technology was used to catch fugitives. Yeah, so this is interesting and sort of an example where the technology is actually deployed. It's being used to try and catch criminals.

And this article notes that little is known about the technology or how it's implemented. It seems like there is little transparency here. And it has some stats where the facial recognition system identified 595 people to police. Of those 595 cases, five were false positives. So there are identifications that are wrong. So the system said this person from a camera has this name and it was just wrong.

Lack of interpretability in the system and explainability of the system are particularly concerning if law enforcement or the people wielding this technology don't have a lens into that. So I do recognize that if those qualities were known to the public, then this technology could be exploited as well.

So that's something I would be particularly concerned about. If a bad actor, let's say, puts on a mask that looks like someone else, what would that cause, for example? So I think there are ways to exploit such a system that could be pretty dangerous if the explainability and interpretability of these models, of these systems,

were known to the public. At the same time, if they are known to the police force and police are using it benignly, that's fine. But they are incentivized to perhaps have more false positives and false negatives, right? So I believe that the incentives are slightly off, but it's almost also at some point or to some degree,

understandable that they wouldn't reveal everything about the system. So it's a really tricky line of ethics. Yeah, it's a complicated topic. It's honestly above our heads as AI researchers; we are not law experts or ethics experts. But as you mentioned, there are some interesting related topics. For example, recent research has shown that you can print out a little pattern

that confuses these systems into not identifying you, or even identifying you incorrectly. So that does have implications for how much transparency into the algorithm you want. In the end, of course, it seems like we'll need some oversight and we'll need some proof that these things are being deployed with some limitations.

And yeah, in the case of Buenos Aires, it seems like that's less true for now. Hopefully as this technology spreads, we collectively as a species kind of figure out how to go about this technology. One concern is over-reliance on this type of technology where

a police officer or other human looking at the footage cannot tell who it is, but the algorithm confidently thinks it's this person, and in fact it is not this person. Relying on the technology there over human intuition, judgment, or ability to perceive an image, that's very concerning as well. And this ties into something we know quite well as AI researchers, which is that

in general, AI systems right now aren't very reliable. So whenever you do use one, ideally use it together with someone. So you have the AI give you something that's detected or recognized, and then you have a human in the loop to verify and follow up. And that's probably the most effective way to go about it with what we have today.
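To make that concrete, here is a minimal sketch of the kind of human-in-the-loop setup described above. It is purely illustrative and not any real system's API: the MatchResult structure, the confidence threshold, and the review queue label are hypothetical stand-ins. The point is simply that the model's output is treated as a suggestion for a person to verify, never as an automatic action.

```python
# Illustrative sketch only: a face-match result is never acted on automatically.
# Low-confidence matches are dropped; everything else goes to a human reviewer.

from dataclasses import dataclass


@dataclass
class MatchResult:
    candidate_id: str   # hypothetical ID of the matched person
    confidence: float   # model confidence in [0, 1]


def route_match(result: MatchResult, discard_threshold: float = 0.3) -> str:
    """Decide what happens to a single facial-recognition match."""
    if result.confidence < discard_threshold:
        return "discard"              # too uncertain to bother anyone with
    # Even a high-confidence match is only a suggestion for a human to verify;
    # the system itself never triggers an action such as notifying police.
    return "queue_for_human_review"


if __name__ == "__main__":
    print(route_match(MatchResult("candidate-123", 0.92)))  # queue_for_human_review
    print(route_match(MatchResult("candidate-456", 0.10)))  # discard
```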

Now, speaking to the topic of this technology spreading and how that's happening: in fact, in the United Kingdom, the police are rolling out live facial recognition to try and likewise basically see, live, situations where

someone is doing something dangerous, to respond more quickly. So this ties a little bit into recent major events in London. There were cases where, for instance, someone started attacking people with a knife, which was quite traumatic. And the hope is that if you have live facial recognition and live monitoring, you can try and respond more quickly and limit harm.

What's interesting is that the article also notes that it should not be left to law enforcers to calculate how societally disruptive their technological reach is becoming.

or to gaze into the mouth of potentially authoritarian outcomes. Yeah, the argument here in this article, which is titled AI and Policing Better Than a Knife Through a Chest, a little bit provocative, is that there's no real evidence that this stuff helps yet. And there are other ways to spend funds to help ensure our security. So you could, for instance, just

have more police in public spaces to help ensure public safety. This article definitely reminds me of how researchers looking at using AI systems in courts

have told me that if you lead a conversation by telling people, hey, we're using AI to decide on how this trial will go, people freak out and think it's terrible. But if you lead with how biased people, humans, actually are, and judges actually are, and say that we're building software to help mitigate that,

they will react differently. And so I think it is a give and take, because

humans are naturally very biased as well. And so it's a matter of how much is this AI mitigating that and how much is it enabling that further and reinforcing confirmation bias? Yeah, this author went on to suggest that the police should consult with civil society organizations about technology and pointed out that we urgently need a regulatory framework to limit

the current unrestrained use of the technology, and we need to see an evidential basis for any technology that could result in injustice and discrimination. So it's a tricky line.

We need to know the limitations the police and government have set for themselves so we don't end up with a Big Brother 1984 situation. Correct. And hopefully we have some evidence that this stuff actually works before we kind of agree to it being in our everyday lives. I really hope there's sufficient funding going to some of these non-governmental institutions that are there to keep

law enforcement from doing things that might be harmful to civilians, such as Liberty or Big Brother Watch in the UK.

I really hope that there is actually government funding towards them, or some kind of funding towards them, that could enable them to act independently and keep this check in place. Yeah, it's interesting. I think we haven't had as much news in the US yet about this, but we do have the NSA, and we have had some news this last decade about government monitoring. So...

It seems like at some point we will have to actually reckon with this stuff in a pretty major way. Hopefully, we'll have such civil society organizations to sort of look out for us normal people. Right. Well, San Francisco has banned facial recognition. That's true. We are in a little special zone where everyone is thinking about it and talking about it. That's true.

Okay, so we move on to the next topic, which is a little less depressing and a little more nerdy. But that's where we can put our AI researcher hats on and stop talking about things kind of beyond our understanding. And that's about pretty much AI research and how we do it. So in response to these things, so uses of AI that are concerning,

There's been kind of a lot of developments lately and discussions about how we should do it. And we'll start by mentioning a pretty big one that happened just about two weeks ago. The conference NeurIPS, which is a massive AI conference dealing with cutting-edge developments and applications of

modern, very impressive AI, released new guidelines for submitting research. And in it, they had a new policy that was kind of interesting, which is that in papers, for the first time, researchers are asked to include a statement of societal impact

to kind of discuss the possible ramifications of the research, the drawbacks, and basically what to be aware of as far as ethical concerns.

I think this is really compelling and a great precedent. Some people think it's just paying lip service to ethics, but research with potential ethical concerns will actually be flagged and given considerable extra consideration. It will then be given to a special set of reviewers with expertise in ethics and machine learning to give it a second look before publication. What I do hope is that

this statement has a due date that is after the paper deadline, so that people will not just write a random sentence and will actually put some thought into it. I think this would actually change the way I reflect on my work and give me pause and time to reflect on it, knowing that my fellow researchers are also

putting in that time and reflection. Yeah, this had some discussion within the community. Some spicy tweets were going on where basically people were saying, you know, oh, now we've got to be armchair philosophers and

think about possible consequences. But I also agree that I think it's generally a good thing. We don't have to be experts at ethics or law to be able to say, okay, this technology has these limitations. It shouldn't be deployed in these contexts. These are things to be aware of when you are looking at it from a broader impact standpoint. And also another requirement was that

authors now need to disclose financial conflicts of interest in a paper. And that's really useful, because now there's so much research in industry and huge companies, and there's so much collaboration between academia and industry

that some really murky situations might arise. So hopefully this disclosure of financial conflicts of interest will help avoid any such issues. I hope these statements will be published along with the paper and that people can actually comment on those sections and ensure that something reasonable is stated there. I think that is...

That would be a reasonable way of incorporating this as well as a more serious way and not just paying lip service to it. I think this is the first step. I don't think it's perfect for sure. And I understand the criticisms around it. But I think this is better than doing nothing. Yeah, it's a start. It seems...

We'll have to see how it works. Maybe we'll refine the idea, but it seems like a good first step. And one thing we should mention, actually: I don't know if you attended NeurIPS; I did. And there were a couple of papers that had people a little bit uncomfortable.

Correct. Yeah, so I recall there was a paper, I believe, on trying to match a person, or to generate an image of a person's face, from their voice, which many people noted had multiple ethical problems. Generally, it's a bit transphobic, and it's just not scientific, really. And there was also some research about recognizing someone's politics from some data.

So yes, there were actual examples of papers that many agreed were ethically dubious and hopefully this would help with avoiding such situations or help the authors reflect on their work and position it in a more ethically considerate way. Yes.

Among the criticisms, Roger Grosse of Toronto's Vector Institute complained that the new policy will lead to trivialization of important issues and argued that social impact should be left to researchers who focus on it. I actually believe that we should be embedding this into our research and that it could actually bolster a piece or an article like the one you mentioned, where if the researchers state the ethical limitations

of their work that could actually lend more light and weight to their work and actually make people say, okay, I can step back for a bit, understand that you see the limitations similar to technical limitations and be able to see the value of your work a little bit, right? And also the limitations. I think that this could actually bolster people's work more than take away people's mind space from it, which is...

what Roger Grosse is saying here. Exactly, yeah. And interestingly, as often happens, the discussion happened on Twitter, where for some reason AI researchers are pretty active. So when Roger Grosse voiced his opinion,

pretty prominent AI researcher Joseph Redmon replied and noted that he himself left AI research due to ethical concerns. So Joseph Redmon developed a pretty significant

AI model called YOLO. It has to do with object recognition and computer vision. And he said that when he saw how many military applications and privacy implications AI had now, he decided to actually leave academia, research, and AI development. And

This in turn resulted in a lot of discussion of like, is this the right response to there being dubious applications of AI in your research? Or should you stay and do the research and actually, as we are discussing, maybe do more to proactively discuss the ethical implications, how your work should be used, how it shouldn't be used, stuff like that.

I think Joseph Redmon leaving is actually in line with what Roger Grosse would want, which is having AI researchers just focus on AI. Though I would say I actually think we need more people who think about both in the field,

to really push forward AI that would be safe for the future. And I think this is very much in parallel with other fields; this is by no means unprecedented. For example, in biology and genetics, people have thought quite a bit about this, when eugenics were a topic of research. And

researchers, similar to Roger Grosse, really wanted to just say, we just want to focus on the science. And everyone agreed that they should focus on the science, until one day some policymakers said, if you only focus on the science, we'll just shut down all research

of this field. And that's when they started discussing, "Okay, okay. Maybe instead of shutting down all of this field, let's talk about what we could possibly do." And I think that's the danger to it, where scientists just want to focus on one thing, and people think that that makes them more effective, but perhaps not, right? That's not what society needs. Yeah. Redmon said that

he'd come to realize that facial recognition basically has more downsides than upsides, and then said that the technology would not have been developed if more researchers had thought about the broader impact of their work. I mean, that's basically what NeurIPS is asking people to do. And I don't think I necessarily agree. I think as with any technology, of course, there are positive uses and negative uses.

And facial recognition, as we said, can have positive uses of trying to track down lost people, for instance, trying to identify dangerous situations. So another student, another person in discussion on Twitter,

expressed the opinion that instead of leaving, for instance, we could use our positions as experts to try and advocate for the correct use and deployment of this technology. And I think that's mostly where I stand as well.

I think there are many ways to advocate for that. And I think actually what Joseph Redmon did was one way of advocating for it, by bringing to light that, hey, he would take this extreme action of abandoning his... I don't want to say life's work, I'm sure he'll do many more great things, but at least his graduate work that he dedicated quite a bit of time to and that has...

achieved quite a bit of acclaim in the field. So I think it was quite a compelling action to take and enabled people in the field to really reflect on this, even just for a little bit. Yeah, and I think it does speak to this interesting place that AI as a research field is in,

where we had kind of a crazy decade in a way where we had some research breakthroughs early around 2011, 2012. And since then, things have just been moving super fast. So we've gotten voice recognition, facial recognition,

translation, all these things have improved enormously, basically because we could throw a bunch of data and compute at some very powerful techniques. And now that we've basically milked those techniques quite a bit, I think we are starting to catch up with, well, what are the implications?

Now that these technologies are powerful and actually being used out in the real world, what does that mean for us as researchers and how we do our research? And yeah, this is just another reflection of that point where we are. Yeah, and to that point, we can mention quickly, actually, there is an ongoing discussion on this notion of publication norms for responsible AI.

So there's this Partnership on AI, which is made up of many big institutions that do research, that has been discussing and thinking about how to do responsible publication: if something should not be published, if we should have...

ethics statements, things like that. And so, yeah, again, it's just a very active area of discussion, and hopefully within a couple of years we kind of figure it out. Right. And I think this direction is great, though. The Partnership on AI does come out with pretty broad guidelines, or at least an initiation of broad guidelines. And I would like to see

much more specific and concrete guidelines, which they, or perhaps someone else, could come out with. That would be much more effective, I think; broad guidelines have generally not been effective. Yeah, I think part of the reason we mentioned it is that they laid out a

timeline recently for this effort. So, so far we've had a lot of discussions, but nothing too concrete. And now they actually have a plan for the next few months to try and lay out something more useful. So we'll be keeping an eye on it and hopefully they can add to the discussion in a useful way.

And regarding the publication process, Yoshua Bengio has written a piece on his blog titled "Time to Rethink the Publication Process in Machine Learning." And we can say Yoshua Bengio, by the way, is like a massive name, right? He has been around for decades. He's one of the big names in the current AI boom, right? Yoshua Bengio is one of the recent Turing Award winners in AI, and people definitely follow his leadership.

So specifically on culture, Bengio said: the research culture has also changed in the last few decades. It is more competitive. Everything is happening fast and putting a lot of pressure on everyone. The field has grown exponentially in size. Students are more protective of their ideas and in a hurry to put them out, for fear that someone else might be working on the same thing elsewhere. And in general, a Ph.D. ends

up with at least 50% more papers than what I gather it was 20 or 30 years ago. Yeah, so it was a blog post where he expressed his thoughts, and I think as current PhD students, you and I can broadly agree that this characterization is fair. This is pretty fair. Yeah, it's a very fast-moving field. The expectation is to have at least one paper per year,

if not two or three. So it's getting a little crazy.

What I like about this post is that Yoshua lays out concrete steps, or potentially new models, for publication in this field. So he first laments the effects of the switch to the conference publication model, because conference papers don't get cleaned up and revised the way journal papers do. And the surface-level productivity belies papers that have less depth or quality,

and have more errors, lack rigor, or are just incremental. Yeah. So the argument is basically we need to slow down a little bit. And we have this site arXiv, right? Which is basically a website where you can put up your papers. And his argument is that that is already enough to make sure we can move fast, which is something a lot of

people say is good about AI research right now. But at the same time, we need to rethink the structure of how we do paper submission, paper reviewing, paper publication to encourage a bit more reflection, a bit more polish, a bit more scientific rigor. Would you agree with that idea, Sharon?

I think I would broadly agree with that, as long as it would also encourage people to go to conferences in general. I know conferences are largely used to socialize, to meet people in the field, and to highlight selected work. But if there isn't enough work being presented there, would people actually go? Could they justify going? Perhaps as long as the most prominent people do go, then it'll be fine. But will that degenerate into what perhaps other fields have,

like in math or something, where prominent people do not go? So you're less compelled to go, and you get less value out of conferences. Bengio does put out a new model that says we should first submit papers to a fast-turnaround journal like JMLR, the Journal of Machine Learning Research.

And then the program committees can pick the papers they like most from those already accepted and reviewed papers in this journal. And what's interesting is that I actually have a fairly recent

paper with Yoshua Bengio on tackling climate change. And we did submit to a journal and published there. So perhaps he's already been pushing this implicitly before this blog post, because that was earlier, in February. How did you like the journal submission process? It was actually quite quick. So it was fine. Okay, nice.

Yeah, I broadly agree. I think it's a good idea to try and slow things down a little bit and encourage a bit more reflection and polish. At the same time, the basic structure where you have a few deadlines per year for conferences, and you try to hit them, does encourage you to actually get the ideas developed and the experiments running and everything.

So having just had a deadline like last Sunday, as I mentioned, it was kind of a lot of work leading up to it, but at least we finished it up and got it there. Whereas maybe it would have taken a lot longer if we didn't have that forcing factor. So it's a tricky thing. Maybe a hybrid approach would be nice, where we have more of an acceptance of journals in addition to conferences. It's something we'll have to see, I guess.

And I think one last thing we can mention before ending this first episode is with regards to conferences, we are in a slightly interesting place now with the COVID virus where

Usually conferences are a huge deal for AI. So that's where you publish your papers. That's where you go to present them and talk to people. And now that's not as such a good idea because, uh, the more large in-person events you have, the more spread there will be. So many, uh, upcoming conferences are sort of considering how to do things. And we may actually end up trying to, uh,

go more online and virtual and it'll be interesting to see if it works. I think this is an opportunity to test out quite naturally how well remote conferences do and how well we can leverage this type of remote conferencing software to enable maybe perhaps not the same experience but hopefully an equally fulfilling experience

and perhaps even better in some respects. It definitely makes me think of one of my favorite papers of all time, by Jim Hollan, called Beyond Being There, whose thesis largely states that when we create virtual software that tries to mimic presence, we can never quite get there. We can never perfectly replicate

or mimic exactly physical presence or physical interactions. But we can create a completely new experience that is better in different respects. So even if we don't mimic it, it could be better in a different way. Mm-hmm.

Yeah, I think many people lament kind of missing out on the coffee chats and accidental encounters you have, which yes, you probably won't be able to do in virtual reality. But as you say, maybe we can do other things that you can't do in person.

And this actually ties into the previous blog post by Yoshua Bengio, in which he said that climate change is a real problem, and having conferences where you fly in thousands of people from all over the world produces a non-trivial amount of CO2 emissions. So moving towards having more remote attendance, more virtual attendance, will help us emit less

carbon and also benefit those with fewer financial resources or with disabilities. So yeah, now it's interesting that the virus has actually forced the whole community to maybe move faster on this than we would have otherwise. Agreed.

Okay, so that was a pretty fun discussion of the internals of AI research. Hopefully it's interesting to people who aren't just grad students like us. And that'll be it for this episode. Thank you for listening. You can go to skynetoday.com to see all these articles and more and to subscribe. I am, again, Andrey Kurenkov. And I'm Sharon Zhou.

Okay, so that's the first episode. Hopefully this is actually good and please tune in next week.