Robert Hoffman, CACOR member, book review.
Pearl, Judea, and Mackenzie, Dana. (2018). The Book of Why: The New Science of Cause and Effect. New York: Basic Books. (Kindle edition)
The Book of Why, written by UCLA professor of computer science Judea Pearl and science writer Dana Mackenzie, describes in non-mathematical language the intellectual content of the Causal Revolution, in which Pearl played a major role, and how the new science of cause and effect may lead to strong artificial intelligence.
Pearl recognized the need for a theory of causation to fill a fundamental gap between the vocabulary in which we cast causal questions and the traditional vocabulary in which we communicate scientific theories. Statistics tells us that ‘correlation is not causation’, but it does not tell us what causation is.
The theory is expressed in a calculus of causation consisting of two languages: causal diagrams, to express what we know, and a symbolic language, resembling algebra, to express what we want to know.
The causal diagrams are simply dot and arrow pictures that summarize our existing scientific knowledge. The dots represent variables of interest, and the arrows represent known or suspected causal relationships between those variables – namely which variable ‘listens’ to which others. It is important to note that the causal diagrams advocated by Pearl are diagrams of models of systems, not diagrams of the underlying systems. Human intentions and the actions taken in order to realize those intentions are causes in the underlying systems. Diagrams of the information flows that depict the causal model are often confused with diagrams of the underlying system, particularly in the now-conventional system dynamics diagrams.
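Such a dot-and-arrow picture can be encoded very simply in code. Here is a minimal sketch, with variable names of my own choosing (not an example from the book), in which each variable maps to the variables it "listens" to:

```python
# A hypothetical causal diagram encoded as a dictionary: each variable
# maps to the list of its direct causes (the variables it "listens" to).
causal_diagram = {
    "Age": [],                     # exogenous: listens to nothing in the model
    "Doctor": ["Age"],             # the doctor's decision listens to age
    "Drug": ["Doctor"],            # taking the drug listens to the doctor
    "Lifespan": ["Drug", "Age"],   # lifespan listens to drug and age
}

def direct_causes(variable):
    """Return the parents (direct causes) of a variable in the diagram."""
    return causal_diagram[variable]

print(direct_causes("Lifespan"))  # -> ['Drug', 'Age']
```

The point of the encoding is that the model, not the data, states which arrows exist; the data can then be interrogated only through the structure the modeller has asserted.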
The language of queries is a new kind of algebra. For example, the query 'what is the probability P that a patient would survive L years if made to take the drug D?' can be written symbolically as P(L | do(D)). The do-operator signifies an intervention rather than a passive observation; classical statistics lacks any such operator. This is in contrast to the classical expression P(L | D), the probability of lifespan L conditional on seeing patients take drug D. The contrast registers the difference between 'seeing' and 'doing'. Classical statistics only summarizes data; it does not provide a language for posing counterfactual or hypothetical questions. Counterfactual reasoning deals with what-ifs. Empirical observation can never confirm or refute answers to such questions, yet our minds make reliable and reproducible judgments about what might be or might have been [Loc 208]. People who share the same causal model will also share counterfactual judgments [Loc 213]. Human causal intuition is usually sufficient for everyday decisions, but quantitative causal reasoning is generally beyond the power of human intuition [Loc 699].
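The gap between seeing and doing can be made concrete with a small simulation. The following sketch is my own illustration, not from the book: a confounder Z (say, overall health) raises both the chance of taking the drug and the chance of survival, so the observed P(L | D) overstates the causal effect P(L | do(D)):

```python
import random

random.seed(0)

def draw(do_d=None):
    """One simulated patient. Passing do_d forces the drug (an intervention)."""
    z = random.random() < 0.5                              # confounder: healthy?
    d = do_d if do_d is not None else (z or random.random() < 0.1)
    survives = random.random() < (0.3 + 0.4 * z + 0.1 * d)  # drug adds only 0.1
    return d, survives

# Seeing: condition on patients observed to take the drug.
observed = [draw() for _ in range(100_000)]
takers = [s for d, s in observed if d]
seeing = sum(takers) / len(takers)

# Doing: force every simulated patient to take the drug.
intervened = [draw(do_d=True) for _ in range(100_000)]
doing = sum(s for d, s in intervened) / len(intervened)

print(f"P(L | D)     = {seeing:.2f}")   # inflated: drug-takers are healthier
print(f"P(L | do(D)) = {doing:.2f}")    # the true interventional probability
```

In this toy model the observational figure comes out near 0.76 while the interventional one is near 0.60; the difference is entirely due to the confounder, which is exactly what the do-operator is designed to expose.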
At the core of causal reasoning is a causal inference engine that accepts three kinds of inputs – Assumptions, Queries and Data – and produces three kinds of outputs. First, a Yes/No decision as to whether the query can be answered under the existing causal model. If Yes, the engine next produces an Estimand: an algorithm for generating the answer from any hypothetical data. Finally, once the engine receives the Data input, it produces an actual Estimate of the answer. Note that Data is collected only after the causal model is posited and after the scientific query is stated.
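The three-stage flow can be sketched as a toy function. The names and the trivial identifiability check below are my own placeholders, not Pearl's machinery; the point is only the ordering of the stages, with data entering last:

```python
# Toy sketch of the inference engine's three stages:
# 1) decide whether the query is answerable under the model,
# 2) if so, emit an estimand (a recipe for computing the answer),
# 3) only when data arrives, apply the estimand to get an estimate.
def inference_engine(assumptions, query, data=None):
    if query not in assumptions["estimands"]:
        return {"answerable": False}                  # stage 1: No
    estimand = assumptions["estimands"][query]        # stage 2: a recipe
    result = {"answerable": True, "estimand": estimand}
    if data is not None:
        result["estimate"] = estimand(data)           # stage 3: a number
    return result

# A causal model whose only identifiable query has a trivial estimand.
model = {"estimands": {"P(L|do(D))": lambda rows: sum(rows) / len(rows)}}

out = inference_engine(model, "P(L|do(D))", data=[1, 0, 1, 1])
print(out["answerable"], out["estimate"])  # True 0.75
```

Notice that the estimand exists, and could be inspected, before any data is supplied; this mirrors the book's insistence that the model and query precede data collection.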
Causal explanations, rather than dry facts, make up the bulk of human knowledge [Loc 411]. Explanations cannot be derived from raw data. Causes cannot be found if consequences cannot be imagined [Loc 430].
Three levels of causation are identified: the first, association – 'seeing', or observing, which entails the detection of regularities; the second, intervention – 'doing', which entails predicting the effects of deliberate interventions and choosing those that produce a desired outcome; the third, counterfactuals – 'imagining', or holding a theory that explains why an intervention works and what to do if it does not. The third level resides to the highest degree in human beings, and the power to imagine is what has made Homo sapiens dominant.
Until the Causal Revolution, artificial intelligence (AI) was premised on the assumption that data alone will guide us to the right answers whenever causal questions arise. But the hard task of constructing causal models cannot be avoided if the human capacity for causal reasoning is to be encompassed in artificial intelligence systems.
The main point is this: while probabilities encode our beliefs about a static world, causality tells us whether and how probabilities change when the world changes, be it by intervention or by act of imagination [Loc 822].
It seems that Pearl does make use of the premise of natural philosophy that cause precedes effect in time. Consequently, the examples of inference engines described in the book are appropriate for comparative static analysis, with relationships stated in terms of probabilities. However, when inference engines are intended to answer questions about future pathways, causal models may take the form of difference or differential equations with time-varying parameters. Even if the probability distributions of the time-varying parameters were known, the combinatorics are such that calculating probability distributions of output variables is, for all practical purposes, impossible. Accordingly, it will be necessary to find ways of stating the logic of inference engines without relying on statistical methods.
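A minimal example of what such a model looks like may help. The sketch below is my own illustration: a one-variable difference equation with a time-varying parameter, run along a single assumed parameter path (a scenario) rather than over a probability distribution of paths:

```python
# A difference-equation causal model: x[t+1] = x[t] * (1 + r[t]),
# where the growth rate r[t] is a time-varying parameter.  One run
# traces one assumed scenario; a probabilistic treatment would need
# a distribution over entire r-paths, which quickly becomes intractable.
def simulate(x0, growth_rates):
    """Trace the state along one assumed path of the parameter r[t]."""
    path = [x0]
    for r in growth_rates:
        path.append(path[-1] * (1 + r))
    return path

# One scenario: growth declines over three periods.
trajectory = simulate(100.0, [0.03, 0.02, 0.01])
print(trajectory)  # -> [100.0, 103.0, 105.06, 106.1106]
```

Even this single-variable case hints at the combinatorial problem: with k possible values of r per period over T periods there are k**T distinct parameter paths, which is why scenario analysis, rather than full probability distributions over outputs, is the practical mode of use.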
For me and my colleagues it is gratifying that some of the lessons learned from the practice of systems modelling over four decades are integral to the theory of causation posited by Judea Pearl.
Essentially, the whatIf models developed over the past three decades are causal inference engines, to use the language of Pearl's theory of causation. They extend the intuitive causal models that inhabit the human mind, and of which we are largely unconscious, to the complex quantitative causal models needed to address the challenges faced by humanity. The diagrammatic conventions used to design and implement our systems models are in fact causal diagrams in the sense intended by Pearl.
Robert Hoffman
August 7, 2018