cross-reading · Our Methodology · Apr 12, 2026

Four Layers of Causality — From Pearl to Kuhn, a Market Researcher's Causal Literacy

Four thinkers, four books, one question: when you say 'A caused B,' what exactly do you mean?

In late 2024, the most popular causal narrative in the market was: “Fed cuts rates → dollar weakens → gold rises.” Three steps, linear, clean. Then reality pushed back. The rate cut landed, the dollar did weaken for a stretch, but the primary driver of gold was never the dollar — it was structural buying by central banks (according to World Gold Council data, global central bank net gold purchases exceeded 1,000 tonnes in both 2023 and 2024). The causal chain was not wrong. The causal framework was.

This kind of failure is not accidental. The tariff narrative is the same story. “Higher tariffs → higher costs → lower margins” — every step checks out. But the reality of 2025 has been different: based on publicly available cases, supply chain relocation typically requires two to three years of adjustment, currency movements absorbed part of the shock, and many companies simply used tariffs as cover to raise prices. Direction correct, magnitude and timing wrong.

Four books helped me rethink the problem. They span 1962 to 2018, drawn from four disciplines — computer science, system dynamics, game-theoretic economics, and the philosophy of science. They ask the same question: when you say “A caused B,” what exactly do you mean?

  • Judea Pearl, The Book of Why (2018) — the grammar of causation
  • Donella Meadows, Thinking in Systems (2008) — the topology of causation
  • Thomas Schelling, Micromotives and Macrobehavior (1978) — the scale of causation
  • Thomas Kuhn, The Structure of Scientific Revolutions (1962) — the shelf life of causation

This is not a book review. Each author peeled back a different layer of causal understanding. I am trying to lay them side by side and see what emerges.


I. The Seduction and Failure of Linear Causation

The default structure of market narrative is the causal chain: A → B → C. Tariffs rise, costs rise, margins fall. The central bank hikes, valuations compress, tech stocks drop. This type of narrative is popular not because it is correct, but because the human brain craves arrows. One cause, one effect, one line. Comfortable.

In The Book of Why, Pearl divides causal understanding into three rungs, which he calls the “Ladder of Causation”:

Rung one: observation (seeing). You see A and B co-occur and conclude “A is correlated with B.” The vast majority of market commentary stops here. “Oil prices rose, airline stocks fell” — that is observation, not causation.

Rung two: intervention (doing). If I actively change A, what happens to B? This requires distinguishing correlation from causation. Pearl’s core tool is the directed acyclic graph (DAG), which maps causal directions between variables and strips out confounders.

Getting to this rung already leaves most market commentary behind. The third rung is harder still:

Rung three: counterfactual (imagining). If A had not happened, would B still have occurred? This is the highest level of causal inference, and the hardest question to answer.

“To understand causation, we need to move from asking what happened to asking what would have happened if things were different.” — Judea Pearl, The Book of Why, p. 28

Most market analysis stays on the first rung — observing correlation, then packaging it as causal narrative. The number of people who reach the second rung is already small: the Fed cut rates and gold rose in the same period, but was the driving force actually coming from somewhere else? As for the third rung — “If the Fed had not cut rates, would gold still have risen?” — I have almost never seen a sell-side report seriously ask this question.
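The gap between rung one and rung two is easy to demonstrate with a toy confounder simulation (my sketch, not Pearl's code; all numbers are arbitrary). A hidden driver Z, think "structural central-bank buying," moves both A and B: under pure observation they correlate strongly, yet intervening on A leaves B untouched.

```python
import random

random.seed(0)
N = 100_000

# Hidden confounder Z drives both A and B; A has NO causal effect on B.
z = [random.gauss(0, 1) for _ in range(N)]
a_obs = [zi + random.gauss(0, 0.5) for zi in z]
b_obs = [zi + random.gauss(0, 0.5) for zi in z]

def corr(x, y):
    """Pearson correlation, hand-rolled to keep the sketch dependency-free."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# Rung one (seeing): A and B co-move strongly.
r_seeing = corr(a_obs, b_obs)

# Rung two (doing): set A ourselves, independent of Z; B does not respond.
a_do = [random.gauss(0, 1) for _ in range(N)]
b_do = [zi + random.gauss(0, 0.5) for zi in z]
r_doing = corr(a_do, b_do)

print(f"corr(A, B) when merely observed:    {r_seeing:+.2f}")
print(f"corr(A, B) under intervention on A: {r_doing:+.2f}")
```

The observational correlation near 0.8 evaporates under intervention, which is exactly the rung-one/rung-two distinction: seeing A and B move together tells you nothing about what happens when you move A yourself.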

Pearl’s framework is precise and powerful, but it rests on a key assumption: causal relationships can be expressed in a graph, and that graph is acyclic — causes precede effects, no loops. This assumption holds in the laboratory. In markets, it almost never does.


II. Feedback Loops — The Topology of Causation

Donella Meadows opens Thinking in Systems by arguing that linear causation is not enough. Her core claim: the causality of real systems is not a chain — it is a loop.

Two basic types. Reinforcing loops are self-amplifying: credit expansion pushes up housing prices, rising prices improve collateral quality, better collateral drives more credit — this is exactly how the US housing market spun from 2005 to 2007. Balancing loops are the opposite, with built-in brakes: inventory builds up, forcing price cuts; lower prices stimulate demand; demand absorbs inventory; the loop self-corrects.
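The two loop types differ in one number, the loop gain: whether each turn of the loop amplifies or damps the previous one. A one-variable caricature (my illustration, not from the book; the gain values 1.15 and 0.85 are arbitrary):

```python
def run_loop(gain, steps=20, x0=1.0):
    """Track x (deviation from equilibrium) as it feeds back into itself."""
    path = [x0]
    for _ in range(steps):
        path.append(path[-1] * gain)  # each turn multiplies the last by the gain
    return path

# gain > 1: reinforcing (credit -> prices -> collateral -> more credit)
reinforcing = run_loop(gain=1.15)
# gain < 1: balancing (inventory -> price cuts -> demand -> drawdown)
balancing = run_loop(gain=0.85)

print(f"reinforcing, 20 turns: {reinforcing[-1]:.1f}x the initial deviation")
print(f"balancing,  20 turns: {balancing[-1]:.2f}x the initial deviation")
```

The same wiring with a gain above or below 1 produces explosive or self-correcting behavior, which is why identifying the loop type matters more than identifying the variables in it.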

Meadows’s sharpest line: A system’s behavior is determined not by its components, but by how they are connected.

“The least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior.” — Donella Meadows, Thinking in Systems, p. 14

Bring this into market research and the implication is direct: analyzing the causal direction of a single variable is nowhere near enough — you have to map the loops. The 2022 UK pension crisis is a textbook case. Interest rates rise → gilt prices fall → pension LDI strategies face margin calls → forced gilt sales → gilt prices fall further → margin calls escalate. This was not a causal chain. It was a reinforcing loop in self-excited oscillation under extreme conditions.

Pearl’s tools hit a wall here: the existence of feedback loops breaks the DAG outright — because loops mean cycles, and cycles mean causal direction is no longer one-way. Pearl’s framework requires an “acyclic” premise, and markets are inherently full of cycles. This does not mean Pearl is wrong; it means his tools are inapplicable under specific conditions — and those conditions happen to be the market’s default state.

Meadows’s limitation is equally clear. Her systems thinking is a qualitative tool — it helps you draw loops and identify reinforcing versus balancing dynamics, but it cannot tell you how fast a loop will spin or at what threshold it will blow up. The “everything is a system” framework easily slides into an all-purpose narrative: any phenomenon can be retroactively explained by drawing a loop, but predicting ex ante which loop will dominate is nearly impossible. The trap of systems thinking lies precisely here: it explains everything after the fact and predicts nothing before it.


III. Emergence — The Fracture Between Micro and Macro Causation

In Micromotives and Macrobehavior (1978), Thomas Schelling presented a model that has since been cited across countless disciplines. His “checkerboard segregation model” demonstrated a counterintuitive result: even when each individual has only a mild preference (say, wanting at least one-third of their neighbors to be of the same type), the macro outcome is near-total segregation.

Schelling’s point boils down to one line: micro does not add up to macro. Each person’s behavior is perfectly rational and perfectly mild at the individual level, but when everyone acts simultaneously, the collective outcome diverges from any individual’s intent. This is emergence — macro phenomena that cannot be predicted by analyzing micro components.
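Schelling's result is easy to reproduce. Below is a deliberately simplified one-dimensional ring variant (my sketch, not the original checkerboard): two agent types, and each agent is content when at least half of its four nearest neighbours share its type, a threshold in the same "mild" spirit as Schelling's one-third. Discontented agents swap homes at random.

```python
import random

random.seed(1)

N = 100
cells = [random.choice("XO") for _ in range(N)]  # two agent types on a ring

def same_frac(i):
    """Fraction of agent i's four nearest neighbours sharing its type."""
    return sum(cells[(i + d) % N] == cells[i] for d in (-2, -1, 1, 2)) / 4

def mix():
    """Average same-type neighbour share across the whole ring."""
    return sum(same_frac(i) for i in range(N)) / N

start_mix = mix()  # roughly 0.5 on a random ring

for _ in range(10_000):
    unhappy = [i for i in range(N) if same_frac(i) < 0.5]
    if not unhappy:
        break  # nobody wants to move: an absorbing, clustered state
    i = random.choice(unhappy)
    j = random.choice([k for k in range(N) if cells[k] != cells[i]])
    cells[i], cells[j] = cells[j], cells[i]  # discontented agent swaps homes

end_mix = mix()
print(f"same-type neighbour share: {start_mix:.2f} -> {end_mix:.2f}")
```

Even this crude version shows the signature: the same-type neighbour share starts near 0.5 and climbs as mildly discontented agents relocate, with no individual agent ever asking for segregation.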

“What is true of the individual is not necessarily true of the aggregate, and what is true of the aggregate may have no counterpart in the motivation of any individual.” — Thomas Schelling, Micromotives and Macrobehavior, p. 14

Market examples of emergence are more extreme than those in sociology. The liquidity crisis of March 2020: every fund’s risk model was executing rationally — cutting positions when volatility breached thresholds, hedging when correlations spiked. Each fund’s behavior was perfectly sensible in isolation. But when thousands of funds executed the same logic simultaneously, the result was a liquidity vacuum in US Treasuries — theoretically the “safest asset” on the planet. The micro-level causal relationship (volatility rises → cut positions) became its own negation at the macro level (everyone cuts → liquidity vanishes → volatility rises further → more cutting).
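That spiral has a threshold structure a toy deleveraging model can make visible (entirely hypothetical numbers: 1,000 funds with volatility triggers spread evenly from 10 to 30, each capitulation adding a fixed number of vol points). The micro rule is identical in both runs below; only the aggregate loop gain differs.

```python
def cascade(start_vol, impact, n_funds=1000):
    """Funds sell once their vol threshold is breached; each sale adds
    `impact` vol points. Returns (final vol, number of funds that sold)."""
    thresholds = [10 + 20 * i / n_funds for i in range(n_funds)]
    sold = [False] * n_funds
    vol = start_vol
    while True:
        new = [i for i in range(n_funds)
               if not sold[i] and thresholds[i] < vol]
        if not new:
            return vol, sum(sold)  # no fresh triggers: the spiral stops
        for i in new:
            sold[i] = True
        vol += impact * len(new)   # aggregate selling pushes vol higher

calm = cascade(start_vol=12.0, impact=0.01)   # loop gain below 1: self-limiting
crash = cascade(start_vol=12.0, impact=0.04)  # loop gain above 1: full spiral

print(f"impact 0.01: vol settles at {calm[0]:.2f}, {calm[1]} funds sold")
print(f"impact 0.04: vol ends at {crash[0]:.2f}, {crash[1]} funds sold")
```

With the low impact, the cascade self-limits after roughly a fifth of the funds sell; quadruple the per-fund impact and the identical micro rule liquidates the entire population. The tipping behavior lives in the aggregate, not in any single fund's causal model.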

Returning to Meadows’s framework: emergence is essentially what feedback loops look like in a multi-agent system. Meadows’s loops assume a unified system structure; Schelling showed that when a system is composed of large numbers of independent decision-makers, the behavior of those loops undergoes a qualitative shift. A single agent’s causal model may be entirely correct, but when those models are executed simultaneously by a large number of agents, the macro outcome derails.

Translated into research terms: understanding why one company made a particular decision does not mean understanding why an industry produced a particular trend. Every semiconductor company is expanding capacity — because each one sees its own demand forecast. But the result of all companies expanding simultaneously may be cyclical overcapacity, and that outcome exists in none of their individual causal models.

Schelling’s model is elegant and powerful, but its weakness comes from that same elegance. The checkerboard model uses radically simplified parameters — two types of agents, one preference threshold, one movement rule. When you try to parameterize it for real markets (thousands of agent types, heterogeneous preference functions, asymmetric information sets), the model’s analytical power collapses rapidly. Schelling’s contribution is conceptual — he proved that emergence exists — but he did not provide an operational prediction tool.


IV. Paradigms — The Shelf Life of Causal Explanation Itself

What Kuhn argued in The Structure of Scientific Revolutions (1962) was not about causal relationships per se, but about something more fundamental: the entire framework we use to construct causal explanations has an expiration date.

Kuhn’s core concept is the “paradigm.” During normal periods, scientists work within the same paradigm — sharing the same basic assumptions, methods, and criteria for judgment. A paradigm is not merely a set of theories; it is a meta-framework that determines “what questions are worth asking” and “what answers count as valid.” When anomalies accumulate beyond what the paradigm can digest, a paradigm shift occurs — the old framework is replaced wholesale, not patched.

“The transition from a paradigm in crisis to a new one from which a new tradition of normal science can emerge is far from a cumulative process.” — Thomas Kuhn, The Structure of Scientific Revolutions, p. 84

Markets have their own paradigms. The “Great Moderation” paradigm from the mid-1980s to 2007 assumed that central banks had tamed the business cycle, that low volatility was the new normal, and that risk could be precisely priced and distributed. Within that paradigm, all causal analysis functioned smoothly: tightening spreads meant improving risk appetite; falling VIX meant systemic stability. In 2008, the entire paradigm collapsed. It was not that one causal chain broke — the framework that generated those causal chains had expired.

During a paradigm shift, Pearl’s causal graphs and Meadows’s system loops both fail simultaneously. Not because the graphs were drawn incorrectly or the loops were misidentified, but because the underlying assumptions that supported them (what counts as a cause, what counts as an effect, which variables matter) have been replaced wholesale. This is the highest-order causal risk — not a broken causal chain, but an expired causal framework.

Kuhn’s weakness is this: he can describe cases where paradigm shifts have already happened, but he cannot predict when the next one will occur. He acknowledged as much himself. For market researchers, this means Kuhn’s framework is a diagnostic tool, not a predictive one — it helps you identify “we may be in the early stages of a paradigm shift,” but it does not tell you whether the shift will complete next week or in three years.


V. Stacking the Four Layers

Each book pushed one step further:

Layer                    Thinker     Core tool                        Question answered
Linear causation         Pearl       DAG / counterfactual             Did A actually cause B?
Feedback causation       Meadows     Reinforcing / balancing loops    Do A and B cause each other?
Emergent causation       Schelling   Multi-agent models               When individual A→B is aggregated, does it become C?
Paradigmatic causation   Kuhn        Paradigm / anomaly / shift       Is the causal map we drew still valid at all?

Each layer has a blind spot, and the nature of that blind spot differs. Pearl’s problem: if you drew the graph wrong to begin with — missed a variable, reversed an arrow — the more precise your subsequent inference, the more spectacularly wrong it gets. Meadows’s problem is hindsight bias: anyone can draw a loop after the fact; identifying which loop will dominate beforehand is nearly impossible. Schelling and Kuhn share a similar issue: they proved conceptually that emergence and paradigm shifts exist, but neither provided tools for timing.

The four layers are not a menu from which you pick one and make it your religion. When facing a market judgment, I ask four things. Is this real causation or mere correlation? Is the direction one-way or is there a loop? If everyone acts on this simultaneously, does the conclusion still hold? And finally — have the foundational assumptions behind my analysis expired yet?

This is not a formula. It cannot automatically generate answers. But it can make you pause one extra second before making a mistake.


VI. Bridging

This article has a direct connection to ksinq’s earlier silver research. In that series, I analyzed silver’s supply-demand structure — solar demand growth, mine supply rigidity, the gold-silver ratio dislocation. That was a classic “linear causation plus data anchors” analysis.

Looking back through this framework: the silver research reached the first two layers — it distinguished causation from correlation and mapped the reinforcing loop between price, mining equities, and fund flows. But the latter two layers were blind spots. When a large number of investors simultaneously build positions based on the same supply-demand logic, what does the market produce through emergence? I did not ask. And the entire analysis rested on the paradigm that “silver’s industrial demand and monetary demand run on parallel tracks” — if that paradigm itself is disrupted (say, silver-free solar cells become commercially viable within three years), the entire causal map needs to be redrawn.

Writing out these blind spots is not meant to invalidate that research, but to demarcate its boundaries of validity. The value of any market analysis depends not only on what it gets right, but on whether it knows where it might be wrong.


Disclaimer: This article represents ksinq’s independent observations and provides an analytical framework, not investment advice. Any market examples mentioned are for illustrative purposes only and do not constitute recommendations to buy, sell, or hold any specific asset. Past market performance does not guarantee future results. Readers should make independent decisions based on their own judgment and risk tolerance.

All book quotations in this article are used under the fair use doctrine of US copyright law, 17 U.S.C. §107, for purposes of criticism and commentary.