In my conversation with Kara Swisher, we dug into the future of artificial intelligence, its risks, and the question of regulation. I maintain that regulating AI research and development would have disastrous consequences, and that is not alarmism.

I am a classic liberal, which places me in the center of the European political spectrum. That may explain why I am angered by Elon Musk's attacks on higher education, science, and scientists, and by his rhetoric about AI risk. At Meta I have the independence to speak my own mind, and that independence is reflected in the openness of Meta's research lab: we publish all of our research and open-source our code. My focus is fundamental research, not company policy.

The three conditions under which I agreed to build Meta's AI research lab are still vivid in my mind: I would not move away from New York, I would not give up my position at NYU, and all research would be conducted in the open with the code released as open source. Mark Zuckerberg agreed to all three.

Government has a critical role to play in AI, but it should concentrate on two things: avoiding regulations that would make open-source AI platforms illegal, and giving academia enough computing resources to close its compute gap. Governments should also develop a deeper understanding of AI technology, and may need industrial policy, for example to strengthen competition in chips, since the design and production of AI chips are currently highly concentrated.

Future AI platforms must be open and trained in a distributed fashion so they can incorporate contributions from around the world. That not only accelerates technical progress but also helps ensure AI is fair and inclusive. I strongly oppose caps on computing power, because AI is not inherently dangerous. The success of open-source software lies precisely in its nature as a platform: people can modify it, improve it, and run it on all kinds of hardware. Closed AI models, in my view, exist mainly to secure a commercial advantage.

It is worth noting that nearly every company building AI systems, with the exception of Google, uses PyTorch, the open-source software platform developed at Meta. That alone says a great deal about the importance of open source to AI's development.

Our current reliance on large language models (LLMs) does not represent the future of AI. Simply scaling up LLMs and their training data no longer yields significant performance gains; we have hit a performance ceiling. We need to explore new AI architectures that go beyond the simple mechanism of predicting the next word. The limitations of LLMs are plain to see, and future AI systems will need far stronger capabilities for understanding and reasoning.

We are still a long way from artificial general intelligence (AGI); I do not believe AGI is around the corner. Future AI systems will need to understand the physical world and plan sequences of actions, and that is exactly what we are working toward. Meta's AI search engine is not meant to compete head-on with Google; it exists to serve users who need AI systems, and ultimately to fit into our long-term vision for AI assistants.

Meta's enormous investment in AI is not blind bandwagon-jumping; it is an investment in the infrastructure of the future. Open-source models are safer than proprietary ones because more people can examine them, find latent problems, and fix them. Meta of course has a responsibility to prevent its models from being misused, and the safety of open models deserves continued attention. It is worth noting that open-source LLMs have existed for several years without any serious incident of malicious use.
For AI to become a repository of human knowledge, all of that knowledge should be available for training models, including copyrighted material. That will require the active participation of cultural institutions, libraries, and foundations.
I strongly disagree with Hinton's and Bengio's warnings about AI risk; I believe their concerns are exaggerated. Future AI systems may well become smarter than humans, but that will take years or even decades, and we do not yet have human-level AI. I use the jet engine as an analogy for AI safety: asking how to make AI systems safe before such systems have even been designed is premature, just as no one could have made jet engines safe before jet engines existed.

My position is this: AI R&D needs no regulation, but AI products do need guardrails. I advocate objective-driven architectures as a way to keep AI systems safe, much as we write laws to constrain human behavior.

We should regulate AI products rather than AI research, and we should insist on the importance of open platforms so that a handful of companies cannot end up controlling all AI systems. This matters for fair competition around the world, and for democracy itself.

Future AI systems will learn new skills and tasks as efficiently as humans and animals do, which will require training on sensory input such as vision. That will fundamentally change how we learn and live.

Finally, as a scientist, I hold myself to scientific integrity. I believe AI is the best tool we have for fighting hate speech and disinformation, and that open source is the right direction for AI's development. I may be wrong, but my positions rest on evidence and on rational, scientific thinking.
Yann LeCun is known as one of the godfathers of AI because of his foundational work on neural networks, which he has been pushing since the 1980s. This work forms the basis for many of today's most powerful AI systems, and he received the 2018 Turing Award for his contributions to deep neural networks.
LeCun is outspoken on social media because he is politically a classic liberal, which places him in the center on the European political spectrum but more on the left in the U.S. He is particularly critical of individuals like Elon Musk and Donald Trump, especially when they attack institutions of higher learning or spread misinformation.
LeCun joined Meta because he was given the opportunity to create a well-funded, large-scale AI research organization with the freedom to publish and share open-source code. This was not possible in academia due to the lack of resources and the closed nature of large tech companies. Meta allowed him to maintain his academic position at NYU while leading groundbreaking research.
LeCun believes that current AI systems are hitting a performance ceiling because they are primarily based on predicting the next word in a text. While these models can pass exams, they struggle to understand the physical world and perform complex tasks like cleaning a house. True human-level intelligence requires new architectures that can understand and interact with the physical world, similar to how babies and young animals learn.
Meta is investing heavily in AI infrastructure to support the growing number of users who will use AI assistants daily. The company forecasts that its AI systems will be used by 600 million people by the end of the year, and more powerful AI systems require more expensive computational resources.
LeCun supports the open-source model because it allows for faster innovation and a more distributed, democratic approach to AI development. He believes that having more people working on and fine-tuning AI systems can lead to better and safer outcomes, and it helps prevent the concentration of AI power in the hands of a few companies. The open-source model also enables a diversity of cultural and linguistic adaptations.
LeCun disagrees with Hinton and Bengio's warnings because he believes the dangers have been exaggerated. He thinks AI is still far from achieving human-level intelligence and that the technical challenges are more significant than the potential existential threats. He also argues that current AI systems are not as capable as some suggest, and that regulation of AI R&D would stifle innovation and progress.
LeCun believes that cultural institutions should make their content available for AI training to ensure that AI systems can understand and speak a diverse range of languages and cultural contexts. This is crucial for preserving and promoting cultural heritage, especially for endangered languages and regional dialects. He envisions a global, distributed AI system that can be fine-tuned for various cultural and value systems.
LeCun thinks regulation of AI R&D is counterproductive because it would make it too risky for companies to distribute open-source AI platforms. This could lead to a concentration of AI power in the hands of a few private companies, which would be detrimental to the diversity and democratization of AI. He believes that regulating products based on AI, rather than the R&D itself, is a more effective approach.
LeCun believes that AI is the best countermeasure against hate speech and disinformation because it can detect and mitigate harmful content more effectively than humans, especially at scale. He points out that AI technology has significantly improved the ability of platforms like Facebook and Instagram to detect hate speech in multiple languages, and that the best protection is having more powerful AI in the hands of the good guys.
We're bringing you a special episode of On With Kara Swisher! Kara sits down for a live interview with Meta's Yann LeCun, an “early AI prophet” and the brains behind the largest open-source large language model in the world. The two discuss the potential dangers that come with open-source models, the massive amounts of money pouring into AI research, and the pros and cons of AI regulation. They also dive into LeCun’s surprisingly spicy social media feeds — unlike a lot of tech employees who toe the HR line, LeCun isn’t afraid to say what he thinks of Elon Musk or President-elect Donald Trump.
This interview was recorded live at the Johns Hopkins University Bloomberg Center in Washington, DC as part of their Discovery Series.