
Mini Episode: Pentagon AI, Deep Learning's Limits, Discharging Patients, and Robust AI

2020/7/20

Last Week in AI

People

Daniel Beshear
Nand Mulchandani
Topics
Nand Mulchandani: The Pentagon's Joint Artificial Intelligence Center (JAIC) has immense support from the tech industry, which is eager to work with the Department of Defense. The center will focus on furthering the military's existing processes and systems while following strict ethical guidelines, and it has an in-house ethics team and a testing team to ensure its software performs as intended. Daniel Beshear: As the JAIC brings AI to the battlefield, its relationship with Silicon Valley will be tricky; in the current fiscal year, spending on joint warfighting exceeds the combined spending on all other JAIC mission initiatives. Military applications of AI are bound to draw worry and criticism, because those processes and systems often involve coercion and lethal force. Deep learning models carry enormous computational costs, and their progress has relied on growth in compute power; without more efficient deep learning methods, the field will hit a wall. Minnesota's largest hospital system uses an AI model to assist with patient discharge planning, but clinicians have not explained the system's role to patients, a gray area in both regulation and ethics. Finally, a new study proposes a method for training deep learning systems to be more reliable in safety-critical scenarios, but its results should be viewed with caution and not overstated.


Transcript


Hello and welcome. This is Daniel Beshear here with Skynet Today's Last Week in AI. This week, we'll look at the Pentagon's AI Center, how deep learning is approaching its limits, how AI is helping decide when to discharge patients, and a report on recent work in robust AI.

There's a lot of focus around the presidential election right now, but let's talk about the Pentagon. Nand Mulchandani, the new acting director of the Pentagon's Joint Artificial Intelligence Center, or JAIC, claims that there is immense support and interest from the tech industry in working with the center and with the Department of Defense.

Breaking Defense reports that Mulchandani is in a good position to sell this message. He succeeded Lieutenant General Jack Shanahan, who ran a project that created static in Silicon Valley. But relations will be tricky as the two-year-old JAIC moves to battlefield uses of AI. Mulchandani said that for the current fiscal year, spending on joint warfighting is greater than the combined spending on all other JAIC mission initiatives.

Such applications of AI are sure to invite plenty of worry and criticism. Mulchandani states that AI will only be used to further the military's existing processes and systems. But those processes and systems do tend to involve coercion and the use of deadly force.

Mulchandani responds that the Department of Defense operates under strict ethical guidelines. Additionally, the JAIC itself has an in-house ethics team and a testing team to make sure their software performs as intended. There's plenty here to be concerned about, and the risks that AI systems pose will only be multiplied in the context of warfighting and potentially lethal applications. We'll see how the JAIC situation unfolds.

Speaking of issues with AI systems, one that we've been learning about is the immense computational cost they pose. As we reported in a previous podcast, a 2019 MIT Technology Review report found that training a single deep learning model can have the same carbon impact as five cars over their lifetimes. The improvements we've seen in deep learning methods have relied on increases in compute power.

VentureBeat reports that a team of researchers from MIT, the MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia assert that continued progress will require dramatically more computationally efficient deep learning methods through changes to existing techniques or undiscovered methods.

We've certainly made some progress along this road. An OpenAI study found that the amount of compute needed to train an AI model to the same performance on classifying images in ImageNet has been decreasing by a factor of two every 16 months since 2012.
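To make that trend concrete, here's a back-of-the-envelope calculation of what halving compute requirements every 16 months implies over the eight years from 2012 to 2020. This is our own arithmetic extrapolating the stated rate, not a figure from the OpenAI study itself:

```python
# Rough extrapolation of the OpenAI efficiency trend (our arithmetic,
# not a number from the study): if the compute needed to reach a fixed
# ImageNet accuracy halves every 16 months, then from 2012 to 2020:
months_elapsed = 8 * 12        # 2012 -> 2020, in months
halving_period = 16            # months per 2x efficiency gain
reduction = 2 ** (months_elapsed / halving_period)
print(f"~{reduction:.0f}x less compute for the same performance")
# prints: ~64x less compute for the same performance
```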

But deep learning as it is now is famously compute hungry. Unless something changes, whether that's a shift to more computationally efficient techniques or entirely new methods, these researchers think we're bound to hit a wall.

Now, how would you feel if, after being hospitalized with an injury or illness, you were discharged and later told that a computer had helped decide to let you go? That's what tens of thousands of patients in Minnesota's largest hospital system have been experiencing since February 2019. But they haven't been told that their discharge planning decisions were made with the help of an AI model.

Stat News reports that four of the health system's 13 hospitals have rolled out the hospital discharge planning tool, developed by Silicon Valley AI company Qventus.

While the health practitioners view the AI system as a tool to be used in tandem with their expertise, STAT reports that clinicians avoid bringing up the AI system's role in conversations with their patients. Patients who were queried by STAT about this sort of scenario had mixed opinions on whether they'd want to know about the AI system's role. The systems themselves aren't treatments or fully automated diagnostic tools and don't directly determine the kind of therapy a patient may receive.

This means that final judgments, even if informed by the AI systems, are left to the clinicians. This is certainly a gray area as far as both regulations and ethics go. Hopefully we'll hear more about this one in the near future.

Finally, the International Conference on Machine Learning is coming up, and with it, a whole slew of new and exciting research. The MIT Technology Review recently reported on a paper from Bo Li and colleagues at the University of Illinois at Urbana-Champaign. The paper proposes a new method for training deep learning systems to be more foolproof, and therefore more trustworthy, in safety-critical scenarios.

The article considers a scenario where deep learning systems are used to reconstruct medical images, but an adversarial attack could cause a system to reconstruct an artifact like a tumor in a scan where there isn't one. While this research is incredibly important, the article's title, "A New Way to Train AI Systems Could Keep Them Safer from Hackers," betrays the fact that such research has been going on for a long time, and that any advancement in the area of robust AI systems could be described with similar words.
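For listeners unfamiliar with this line of work, the sketch below shows the generic idea behind adversarial training, the family of techniques this research builds on; it is not the UIUC paper's actual method. Each input is perturbed in the direction that most increases the loss (the fast gradient sign method), and the model is then trained on the perturbed inputs so it stays stable under small, worst-case changes. The toy model and the epsilon budget are illustrative assumptions:

```python
# A minimal adversarial-training sketch in PyTorch (generic technique,
# not the UIUC paper's method).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1  # attack budget: max per-feature perturbation (assumed value)

def fgsm_example(x, y):
    """Craft an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the sign of the input gradient: the worst small perturbation.
    return (x + epsilon * x.grad.sign()).detach()

# One training step on a fake batch (a stand-in for real data).
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
x_adv = fgsm_example(x, y)
optimizer.zero_grad()
loss = loss_fn(model(x_adv), y)  # train on the perturbed inputs
loss.backward()
optimizer.step()
```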

Certainly, the paper deserves credit for advancing current methods, but readers should be cautious about falling prey to clickbait or thinking we've found anything close to a perfect method. That's all for this week. Thanks so much for listening. If you enjoyed the podcast, be sure to rate and share. If you'd like to hear more news like this, please check out skynetoday.com, where you can find our weekly news digests with similar articles.