Abstracts
Invited Talks
On causal explanations of quantum correlations
Robert Spekkens
The framework of causal models is ideally suited to formalizing certain conceptual problems in quantum theory, and conversely, a variety of tools developed by physicists studying the foundations of quantum theory have applications for causal inference. This talk reviews some of the connections between the two fields. In particular, it is shown that certain correlations predicted by quantum theory and observed experimentally cannot be explained by any causal model while respecting the core principles of causal discovery algorithms. Nonetheless, it is argued that by understanding quantum theory as an innovation to the theory of Bayesian inference, one can introduce a quantum generalization of the notion of a causal model and salvage a causal explanation of these correlations without fine-tuning. Furthermore, experiments exhibiting certain quantum features, namely, coherence and entanglement, enable solutions to causal inference problems that are intractable classically. In particular, while passive observation of a pair of variables cannot determine the causal relation that holds between them according to classical physics, this is not the case in quantum physics. In other words, according to quantum theory, certain kinds of correlation *do* imply causation. The results of a quantum-optical experiment confirming these predictions will be presented.
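The correlations at issue are Bell-type correlations. As a quick numerical reminder (an illustration added here, not part of the talk), the CHSH combination computed from singlet-state quantum correlations exceeds the bound of 2 obeyed by any local classical model, which is what blocks a standard causal explanation without fine-tuning:

```python
import numpy as np

# CHSH combination for singlet-state correlations E(a, b) = -cos(a - b).
# Any local classical model obeys |S| <= 2; quantum mechanics reaches
# 2*sqrt(2) at these measurement angles (Tsirelson's bound).
E = lambda a, b: -np.cos(a - b)
a, a_, b, b_ = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b_) + E(a_, b) + E(a_, b_)
print(abs(S), 2 * np.sqrt(2))   # both ~2.828, above the classical bound 2
```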
Generalizability of Causal and Statistical Relations
Elias Bareinboim
The problem of generalizability of empirical findings (experimental and observational) to new environments, settings, and populations is one of the central problems in causal inference. Experiments in the sciences are invariably conducted with the intent of being used elsewhere (e.g., outside the laboratory), where conditions are likely to be different. This practice is based on the premise that, due to certain commonalities between the source and target environments, causal claims would be valid even where experiments have never been performed. Despite the extensive amount of empirical work relying on this premise, practically no formal treatment has determined the conditions under which such generalizations are valid.
Our work is the first to develop a theoretical framework for understanding, representing, and algorithmizing the generalization problem as encountered in many practical settings in data-intensive fields. Our framework puts many apparently disparate generalization problems under the same theoretical umbrella. In this talk, I will start with a brief review of the basic concepts, principles, and mathematical tools necessary for reasoning about causal and counterfactual relations. I will then introduce two special problems under the generalization umbrella.
First, I will discuss "transportability," that is, how information acquired by experiments in one setting can be reused to answer causal queries in another, possibly different setting where only passive observations can be collected. This question embraces several sub-problems treated informally in the literature under rubrics such as "external validity," "meta-analysis," and "heterogeneity." Second, I will discuss selection bias, that is, how knowledge from a sampled subpopulation can be generalized to the population at large when sampling selection is not random but is determined by variables in the analysis, so that units are preferentially excluded from the sample.
In both problems, we provide complete conditions and algorithms to support the inductive step required in the corresponding task. This characterization distinguishes between estimable and non-estimable queries, and identifies which pieces of scientific knowledge need to be collected in each study to construct a bias-free estimate of the target query. The problems discussed in this work have applications in several empirical sciences, such as Bioinformatics, Medicine, Economics, and the Social Sciences, as well as in data-driven fields such as Machine Learning, Artificial Intelligence, and Statistics.
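To make the transportability task concrete, here is a minimal sketch (with made-up numbers, not from the talk) of the simplest transport formula, which recombines source-domain experimental estimates with target-domain observations of a covariate Z whose generating mechanism differs across domains and which is unaffected by the treatment:

```python
# Transport formula: P*(y | do(x)) = sum_z P(y | do(x), z) * P*(z),
# valid when the domains differ only in the mechanism generating Z
# and Z is unaffected by the treatment X. All numbers are hypothetical.

# P(Y=1 | do(X=1), Z=z), estimated from source-domain experiments
p_y_do_x_z = {0: 0.30, 1: 0.70}

# P*(Z=z), estimated from passive observations in the target domain
p_star_z = {0: 0.80, 1: 0.20}

p_star_y_do_x = sum(p_y_do_x_z[z] * p_star_z[z] for z in p_star_z)
print(p_star_y_do_x)  # 0.30*0.80 + 0.70*0.20 = 0.38
```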
Accepted for Oral Presentation
How Occam's Razor Provides a Neat Definition of Direct Causation
Alexander Gebharter, Gerhard Schurz
In this paper we show that applying Occam's razor to the theory of causal Bayes nets yields a neat definition of direct causation. In particular, we show that Occam's razor implies Woodward's definition of direct causation, provided suitable intervention variables exist and the causal Markov condition (CMC) is satisfied. We also show how Occam's razor can account for Woodward-style direct causal relationships when only stochastic intervention variables are available.
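For reference, Woodward's interventionist definition of direct causation can be paraphrased as follows (our rendering, not the paper's notation):

```latex
% X is a direct cause of Y relative to a variable set V iff some
% intervention on X changes Y (or its probability distribution) while
% all other variables W = V \ {X, Y} are held fixed by interventions:
\exists\, x, x', \mathbf{w}:\;
P\!\left(Y \mid do(X = x),\, do(\mathbf{W} = \mathbf{w})\right)
\neq
P\!\left(Y \mid do(X = x'),\, do(\mathbf{W} = \mathbf{w})\right)
```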
Constructing Separators and Adjustment Sets in Ancestral Graphs
Benito van der Zander, Maciej Liśkiewicz, Johannes Textor
Ancestral graphs (AGs) are graphical causal models that can represent uncertainty about the presence of latent confounders, and can be inferred from data. Here, we present an algorithmic framework for efficiently testing, constructing, and enumerating m-separators in AGs. Moreover, we present a new constructive criterion for covariate adjustment in AGs that characterizes adjustment sets as m-separators in a subgraph. Jointly, these results make it possible to find all adjustment sets that can identify a desired causal effect with multivariate exposures and outcomes in the presence of latent confounding. Our results generalize and improve upon several existing solutions for special cases of these problems.
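The paper itself works with m-separation in ancestral graphs; as a simplified point of reference, here is a minimal sketch (not the paper's algorithm) of the classical moralization test for d-separation in DAGs, the special case that m-separation generalizes:

```python
from collections import deque

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, including `nodes`."""
    result, stack = set(nodes), list(nodes)
    while stack:
        v = stack.pop()
        for u, w in dag:                     # edge u -> w
            if w == v and u not in result:
                result.add(u)
                stack.append(u)
    return result

def d_separated(dag, xs, ys, zs):
    """Moralization criterion (assumes xs, ys, zs disjoint): X _||_ Y | Z
    iff X and Y are disconnected, avoiding Z, in the moralized ancestral
    subgraph of X, Y, Z."""
    anc = ancestors(dag, set(xs) | set(ys) | set(zs))
    sub = [(u, v) for u, v in dag if u in anc and v in anc]
    und = {v: set() for v in anc}            # undirected moral graph
    for u, v in sub:
        und[u].add(v); und[v].add(u)
    for w in anc:                             # "marry" co-parents
        parents = [u for u, v in sub if v == w]
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                und[p].add(q); und[q].add(p)
    seen = set(zs)                            # BFS from X, blocked at Z
    queue = deque(x for x in xs if x not in zs)
    while queue:
        v = queue.popleft()
        if v in set(ys):
            return False
        if v in seen:
            continue
        seen.add(v)
        queue.extend(und[v] - seen)
    return True

chain = [("X", "M"), ("M", "Y")]
print(d_separated(chain, {"X"}, {"Y"}, {"M"}))     # True: M blocks the path
collider = [("X", "C"), ("Y", "C")]
print(d_separated(collider, {"X"}, {"Y"}, {"C"}))  # False: conditioning opens C
```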
Improving Propensity Score Matching for Causal Inference on Relational Data
David Arbour, Katerina Marazopoulou, David Jensen
Propensity score matching (PSM) is a widely used method for performing causal inference on observational datasets. PSM requires fully specifying the set of confounding variables between treatment and outcome. In the case of networked data, this set may include non-intuitive relational features. In this work, we provide an automated method to derive these relational features based on the network structure and a set of naive confounders. This automatic construction includes entity-identifier and relational degree features. We provide experimental evidence that demonstrates the utility of these features in accounting for certain latent confounders. Finally, through a set of synthetic experiments, we show that our method improves the performance of PSM for causal inference on relational data.
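As background, a minimal sketch of vanilla PSM with standard scikit-learn components: the non-relational baseline the paper extends, not the paper's method. All data below are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_att(X, t, y):
    """1:1 nearest-neighbor matching on the propensity score; returns
    the estimated average treatment effect on the treated (ATT).
    X: (n, d) confounders, t: (n,) binary treatment, y: (n,) outcome."""
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched = control[idx.ravel()]
    return np.mean(y[treated] - y[matched])

# Toy data: one observed confounder z drives both treatment and outcome.
rng = np.random.default_rng(0)
z = rng.normal(size=2000)
t = (rng.normal(size=2000) + z > 0).astype(int)
y = 2.0 * t + 3.0 * z + rng.normal(size=2000)  # true effect: 2.0
print(psm_att(z.reshape(-1, 1), t, y))          # close to 2.0
```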
Type-II errors of independence tests can lead to arbitrarily large errors in estimated causal effects: an illustrative example
Nicholas Cornia, Joris M. Mooij
Estimating the strength of causal effects from observational data is a common problem in scientific research. A popular approach is based on exploiting observed conditional independences between variables. It is well known that this approach relies on the assumption of faithfulness. In our opinion, an even more important practical limitation of this approach is that it relies on the ability to distinguish independences from (arbitrarily weak) dependences. We present a simple analysis, based on purely algebraic and geometrical arguments, of how the estimation of the causal effect strength, based on conditional independence tests and background knowledge, can have an arbitrarily large error due to the uncontrollable type II error of a single conditional independence test. The scenario we are studying here is related to the LCD algorithm by Cooper (1997) and to the instrumental variable setting that is popular in epidemiology and econometrics. It is one of the simplest settings in which causal discovery and prediction methods based on conditional independences arrive at non-trivial conclusions, yet for which the lack of uniform consistency can result in arbitrarily large prediction errors.
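A minimal numerical sketch of the mechanism, in the instrumental-variable style the abstract mentions (our illustration, not the paper's example): the effect estimate divides by the instrument-treatment covariance, so a dependence too weak for any test to detect reliably can make the estimate arbitrarily far off:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 1.0          # true causal effect of X on Y

def iv_estimate(eps):
    """IV estimate cov(Z,Y)/cov(Z,X) when the instrument Z influences X
    with strength eps and a hidden confounder H drives both X and Y."""
    z = rng.normal(size=n)
    h = rng.normal(size=n)                      # latent confounder
    x = eps * z + h + 0.1 * rng.normal(size=n)
    y = beta * x + h + 0.1 * rng.normal(size=n)
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, iv_estimate(eps))  # error grows roughly like 1/eps
```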
Toward Learning Graphical and Causal Process Models
Christopher Meek
We describe an approach to learning causal models that leverages temporal information. In our approach, we posit the existence of a graphical description of the causal process that generates observations through time and a statistical process for generating the observations. We explore assumptions connecting the graphical description with the statistical process and what one can infer about the causal structure of the process under these assumptions.
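As a toy instance of the setup described (assuming nothing about the paper's actual models): a two-variable process whose graphical description has a lagged edge from X to Y, paired with a Gaussian statistical process generating the observations. Temporal order makes the edge direction visible in the lagged correlations:

```python
import numpy as np

# Graphical description: X_{t-1} -> Y_t and Y_{t-1} -> Y_t;
# statistical process: linear maps with i.i.d. Gaussian innovations.
rng = np.random.default_rng(2)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.5 * y[t - 1] + 0.1 * rng.normal()

print(np.corrcoef(x[:-1], y[1:])[0, 1])  # substantial: reflects X -> Y
print(np.corrcoef(y[:-1], x[1:])[0, 1])  # near zero: no Y -> X edge
```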
Estimating Causal Effects by Bounding Confounding
Philipp Geiger, Dominik Janzing, Bernhard Schölkopf
Assessing the causal effect of a treatment variable X on an outcome variable Y is usually difficult due to the existence of unobserved common causes. Without further assumptions, observed dependences do not even prove the existence of a causal effect from X to Y. It is intuitively clear that strong statistical dependences between X and Y do provide evidence for X influencing Y if the influence of common causes is known to be weak. We propose a framework that formalizes effect versus confounding in various ways and derive upper and lower bounds on the effect in terms of a priori given bounds on confounding. The formalization includes information-theoretic quantities like information flow and causal strength, as well as other common notions like the effect of treatment on the treated (ETT). We discuss several scenarios where upper bounds on the strength of confounding can be derived. This justifies, to some extent, the human intuition that assumes the presence of a causal effect when strong (e.g., close-to-deterministic) statistical relations are observed.
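A minimal sketch of the abstract's opening point (our illustration): with an unobserved common cause, the observed X-Y correlation can be nearly perfect even though X has no effect on Y at all. Conversely, if the confounder's influence is known to be bounded, such a strong dependence could only arise from a genuine effect of X on Y, which is the intuition the paper formalizes:

```python
import numpy as np

# Pure confounding: H drives both X and Y; X has no effect on Y,
# yet the observed X-Y correlation is close to 1.
rng = np.random.default_rng(3)
h = rng.normal(size=100_000)                # unobserved common cause
x = h + 0.1 * rng.normal(size=100_000)
y = h + 0.1 * rng.normal(size=100_000)      # note: no X -> Y term
print(np.corrcoef(x, y)[0, 1])              # ~0.99, zero causal effect
```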