
Graphs for Causal AI

2025/5/24

Data Skeptic

Topics
Utkarshani Jaimini: I work on causal neurosymbolic AI, with the goal of making AI systems more explainable so that they can understand cause and effect the way humans do, rather than relying purely on statistical correlation. Statistics has limits in safety-critical applications such as healthcare, where relying on correlation alone can lead to wrong conclusions and put human lives at risk. We therefore need to bring causal reasoning into AI systems to ensure their decisions are reliable and safe. I represent causal relationships with knowledge graphs and use neurosymbolic methods to embed those graphs in a vector space, which enables tasks such as link prediction. I also pay attention to the spurious correlations that can arise in knowledge graphs, and have proposed methods such as CausalLP-Back to mitigate their influence. My ultimate goal is a holistic framework that integrates Bayesian networks and domain knowledge in a single space, yielding AI systems that are more reliable and more explainable.

Utkarshani Jaimini: To understand causality more deeply, I have studied the backdoor-path problem. A backdoor path is a non-causal path connecting a cause variable to an effect variable, and it produces spurious correlations. For example, there is a backdoor path between smoking and lung cancer: a smoking gene. The smoking gene leads people to smoke and also predisposes them to cancer, so we cannot simply say that smoking causes cancer; when studying whether smoking causes cancer, the backdoor path through the smoking gene has to be taken into account. To address this, I developed the CausalLP-Back method, which performs link prediction while accounting for backdoor paths and removes backdoor paths from the evaluation space, improving link-prediction accuracy. I have also studied the role of mediators in causal relationships and proposed representing mediators with hyper-relational graphs. Through this work I hope to represent causal relationships more completely and accurately, and to apply them to real-world problems.
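To make the backdoor-path idea concrete, here is a minimal sketch (not the guest's CausalLP-Back implementation; the graph, node names, and helper functions are illustrative) that enumerates paths between a cause and an effect in a tiny causal graph and flags the ones that enter the cause through an incoming edge, i.e., backdoor paths.

```python
# Illustrative only: spotting backdoor paths in a tiny causal graph
# smoking_gene -> smoking, smoking_gene -> cancer, smoking -> cancer.
CAUSAL_EDGES = {
    ("smoking_gene", "smoking"),
    ("smoking_gene", "cancer"),
    ("smoking", "cancer"),
}

def undirected_paths(edges, start, goal, path=None):
    """Enumerate simple paths in the undirected skeleton of the graph."""
    path = path or [start]
    if start == goal:
        yield list(path)
        return
    for a, b in edges:
        for nxt in ((b,) if a == start else (a,) if b == start else ()):
            if nxt not in path:
                yield from undirected_paths(edges, nxt, goal, path + [nxt])

def backdoor_paths(edges, cause, effect):
    """A path is a backdoor path if its first edge points *into* the cause."""
    return [
        p for p in undirected_paths(edges, cause, effect)
        if len(p) > 1 and (p[1], p[0]) in edges  # first hop enters `cause`
    ]

if __name__ == "__main__":
    for p in backdoor_paths(CAUSAL_EDGES, "smoking", "cancer"):
        print("backdoor path:", " - ".join(p))
    # Prints the non-causal route: smoking - smoking_gene - cancer
```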


Chapters
The episode begins with a discussion on how network science predicted the election of the new Pope, using data from who ordained whom, official co-membership, and informal relationships among cardinals.
  • Network science was used to predict the election of the new Pope.
  • A research team from Italy used three main sources: who ordained whom, official co-membership, and informal relationships among cardinals.
  • Robert Prevost (Leo XIV) had the highest eigenvector centrality in the network (a toy power-iteration sketch of this measure follows the list).
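As a toy illustration of the centrality measure mentioned above, the sketch below computes eigenvector centrality by power iteration on a made-up four-node network; the names and ties are invented for illustration and are not the researchers' actual data.

```python
# Illustrative only: eigenvector centrality via power iteration on a toy network.
import numpy as np

names = ["Prevost", "CardinalA", "CardinalB", "CardinalC"]
# Symmetric adjacency matrix: 1 where two cardinals share a tie.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
], dtype=float)

def eigenvector_centrality(adj, iters=100, tol=1e-9):
    """Repeatedly multiply by the adjacency matrix and renormalize;
    the fixed point is the leading eigenvector."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

scores = eigenvector_centrality(A)
for name, score in sorted(zip(names, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")  # the best-connected node ranks first
```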

Shownotes

How can we build artificial intelligence systems that understand cause and effect, moving beyond simple correlations?

As we all know, correlation is not causation. A "spurious correlation" can arise, for example, when rising ice cream sales statistically track rising drownings: not because one causes the other, but because of an unobserved common cause such as warm weather.
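A quick simulation (illustrative numbers only, not from the episode) makes the point: generate ice cream sales and drownings that both depend on temperature but not on each other, and the two series still come out strongly correlated until the common cause is accounted for.

```python
# Illustrative only: a spurious correlation driven by a common cause.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(25, 5, size=1000)                 # common cause
ice_cream = 10 * temperature + rng.normal(0, 20, 1000)     # caused by temperature
drownings = 0.3 * temperature + rng.normal(0, 2, 1000)     # also caused by temperature

print("corr(ice_cream, drownings):",
      round(np.corrcoef(ice_cream, drownings)[0, 1], 2))   # clearly nonzero

# Accounting for the common cause (here, subtracting the temperature effect
# from both series) makes the spurious association vanish.
resid_ic = ice_cream - 10 * temperature
resid_dr = drownings - 0.3 * temperature
print("corr after removing temperature:",
      round(np.corrcoef(resid_ic, resid_dr)[0, 1], 2))     # near zero
```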

Our guest, Utkarshani Jaimini, a researcher from the University of South Carolina's Artificial Intelligence Institute, tries to tackle this problem by using knowledge graphs that incorporate domain expertise. 

In neurosymbolic AI, knowledge graphs (structured representations of information) are combined with neural networks to represent and reason about complex relationships. This involves creating causal ontologies, capturing the "weight" or strength of causal relationships, and using hyper-relations. The field has many practical applications, including AI explainability, healthcare, and autonomous driving.
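As a rough sketch of what such a representation might look like (this is not the CausalLP or HyperCausalLP code from the papers; the data structures and names are illustrative), weighted causal facts can be stored as triples with a causal-strength weight and optional qualifiers such as a mediator, then scored with a TransE-style embedding distance for link prediction.

```python
# Illustrative only: weighted causal triples with an optional mediator
# qualifier, scored TransE-style (||h + r - t||).
from dataclasses import dataclass, field
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

@dataclass
class CausalFact:
    cause: str
    relation: str          # e.g. "causes"
    effect: str
    weight: float          # causal strength in [0, 1]
    qualifiers: dict = field(default_factory=dict)  # e.g. {"mediator": "tar_buildup"}

facts = [
    CausalFact("smoking", "causes", "lung_cancer", 0.8, {"mediator": "tar_buildup"}),
    CausalFact("smoking_gene", "causes", "smoking", 0.4),
]

# Random embeddings for every entity and relation mentioned in the facts.
entities = {n for f in facts for n in (f.cause, f.effect, *f.qualifiers.values())}
emb = {name: rng.normal(size=DIM) for name in entities}
rel = {"causes": rng.normal(size=DIM)}

def score(fact: CausalFact) -> float:
    """TransE plausibility score: smaller distance = more plausible link."""
    h, r, t = emb[fact.cause], rel[fact.relation], emb[fact.effect]
    return float(np.linalg.norm(h + r - t))

def weighted_loss(fact: CausalFact) -> float:
    """Stronger causal links (higher weight) contribute more during training."""
    return fact.weight * score(fact)

for f in facts:
    print(f.cause, "-(causes)->", f.effect,
          "score:", round(score(f), 3), "weighted:", round(weighted_loss(f), 3))
```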

Follow our guest: Utkarshani Jaimini's Webpage, LinkedIn

Papers in focus:

CausalLP: Learning causal relations with weighted knowledge graph link prediction, 2024

HyperCausalLP: Causal Link Prediction using Hyper-Relational Knowledge Graph, 2024