Superdeterminism: The interpretation nobody wants to talk about
There's something deeply unsatisfying about the standard narrative in quantum mechanics. We're told that the universe is fundamentally probabilistic, that entanglement represents "spooky action at a distance," and that Bell's theorem definitively rules out local hidden variable theories. The Copenhagen interpretation reigns supreme in textbooks, many-worlds gains philosophical adherents, and pilot-wave theories occupy a respectable niche. But mention superdeterminism in a room full of physicists, and you'll likely be met with dismissal, if not outright hostility.
Why? Because superdeterminism doesn't just challenge our interpretation of quantum mechanics—it challenges our entire framework for doing physics.
What is superdeterminism?
At its core, superdeterminism is beautifully simple: measurement settings in experiments are not truly independent of the systems being measured. The choice of what to measure and the state of the particle being measured share common causes in their past light cones. This correlation—which exists at the level of initial conditions rather than through any dynamical violation of relativity—allows for a local, deterministic explanation of quantum phenomena.
Bell's theorem relies on a crucial assumption called "measurement independence" or "free choice." This assumption states that experimenters can freely choose their measurement settings independently of the hidden variables determining particle properties. Superdeterminism rejects this assumption. It says that in a fully deterministic universe, everything—including measurement choices—emerges from initial conditions and local dynamics.
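In the standard notation (settings x, y; outcomes a, b; hidden variables λ), the assumption can be written out explicitly. The factorization and CHSH bound below follow Bell's usual presentation; the point is that the bound only follows once measurement independence is assumed:

```latex
% Local factorization of the joint outcome probabilities:
P(a,b \mid x,y) \;=\; \int d\lambda \,\rho(\lambda \mid x,y)\,
                      P(a \mid x,\lambda)\, P(b \mid y,\lambda)

% Measurement independence ("free choice"): the hidden-variable
% distribution does not depend on the chosen settings,
\rho(\lambda \mid x,y) \;=\; \rho(\lambda),

% and only with this assumption does the CHSH bound follow:
S \;=\; \bigl|\, E(x_1,y_1) + E(x_1,y_2) + E(x_2,y_1) - E(x_2,y_2) \,\bigr| \;\le\; 2.

% Superdeterminism keeps the local factorization but allows
% \rho(\lambda \mid x,y) \neq \rho(\lambda), so S \le 2 need not hold.
```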
The immediate objection is always the same: "But that means free will doesn't exist!" As if that's somehow a devastating counterargument rather than simply a statement about the nature of reality. Yes, superdeterminism is incompatible with libertarian free will. So what? Our subjective experience of making choices doesn't constitute evidence that those choices aren't deterministic processes.
The conspiracy objection
The more sophisticated dismissal of superdeterminism invokes the "conspiracy" argument. Bell himself raised this objection: doesn't superdeterminism require an implausible fine-tuning of initial conditions such that the universe "conspires" to make our measurement choices correlated with hidden variables in exactly the right way to reproduce quantum statistics?
This objection fundamentally misunderstands what superdeterminism claims. There's no conspiracy—there's just physics. In a deterministic universe, correlations between measurement settings and system states aren't special cases requiring explanation; they're the generic situation. What would require explanation is the opposite: measurement independence itself.
Think about it. We live in a universe governed by local, deterministic dynamics (at least at some fundamental level). Everything in the present is determined by everything in the past via the laws of physics. Experimenters, their brains, their measurement apparatus—all of this evolved from the same initial conditions as the particles they're measuring. Why would we expect these things to be independent?
The "conspiracy" framing reveals a prejudice: we want to believe that our choices as experimenters stand outside the causal structure of the universe. We want measurement settings to be free variables we can dial in from outside the system. But we're not outside the system. We're part of it.
Statistical independence without measurement independence
Here's where things get interesting. Superdeterminism doesn't mean we can't do science or that all statistics are undermined. Critics often claim that rejecting measurement independence makes experimental verification impossible—if we can't trust that our measurement choices are independent of what we're measuring, how can we trust any experimental results?
This conflates two different kinds of independence. Statistical independence in ensembles can emerge from deterministic dynamics even when individual measurement choices are correlated with system states. The key is that the experimenter doesn't have access to the information that would reveal these correlations. From their perspective, measurements appear random and independent, even though they're not.
Consider a classical analogy. Imagine a deterministic system where the initial conditions determine both the spin of a particle and the orientation of a measurement device. From the experimenter's perspective, unable to access these initial conditions with sufficient precision, the measurement outcomes appear random. They can perform statistical tests, verify randomness, and extract meaningful correlations between different measurements. The statistics are real and reproducible—but they emerge from determinism, not fundamental randomness.
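One hypothetical way to render this analogy in code: a single deterministic "initial condition" fixes both the device setting and the measured outcome, yet the statistics an observer records look like independent fair coin flips. The SHA-256 hash here is purely a stand-in for microphysical detail the experimenter cannot access; nothing about it is meant as a physical claim.

```python
import hashlib

def hidden_state(i):
    """Deterministic 'initial condition' for run i: a few bytes derived
    from i. A stand-in for inaccessible microstate details (toy model)."""
    return hashlib.sha256(str(i).encode()).digest()

def setting(state):
    # The measurement-device orientation is fixed by the hidden state...
    return state[0] % 2

def outcome(state):
    # ...and so is the property being measured: a shared common cause.
    return state[1] % 2

# The experimenter sees only (setting, outcome) pairs. Both are
# deterministic functions of the same hidden state, yet the ensemble
# statistics are indistinguishable from independent coin flips.
runs = [(setting(hidden_state(i)), outcome(hidden_state(i)))
        for i in range(10_000)]

freq_outcome_1 = sum(o for _, o in runs) / len(runs)
agreement = sum(1 for s, o in runs if s == o) / len(runs)
print(f"P(outcome=1)       ~ {freq_outcome_1:.3f}")  # near 0.5
print(f"P(setting=outcome) ~ {agreement:.3f}")       # near 0.5
```

Rerunning the loop reproduces the same pairs exactly: the "randomness" is entirely epistemic, which is the essay's point.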
This is precisely what superdeterminism proposes for quantum mechanics. The apparent randomness of quantum measurements is epistemic, not ontological. We can't predict individual outcomes because we lack access to the relevant initial conditions, not because those outcomes are fundamentally indeterminate.
't Hooft and cellular automata
Gerard 't Hooft has developed one of the more concrete superdeterministic models through his cellular automaton interpretation of quantum mechanics. The basic idea is that reality at its most fundamental level consists of discrete states evolving via deterministic rules—essentially a cellular automaton. Quantum mechanics, in this view, is an emergent effective theory describing our coarse-grained observations of this underlying deterministic substrate.
What makes 't Hooft's approach particularly appealing is that it doesn't just assert superdeterminism—it provides a framework for understanding how quantum phenomena could emerge from classical, local, deterministic dynamics. The apparent nonlocality and randomness of quantum mechanics arise from information loss and coarse-graining, not from the fundamental dynamics.
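A toy illustration of the general idea (emphatically not 't Hooft's actual construction): an elementary cellular automaton such as rule 30 evolves by a fixed local deterministic rule, yet an observer who records only a coarse-grained slice of the state sees output that looks statistically random.

```python
def step(cells, rule=30):
    """One synchronous update of an elementary cellular automaton with
    periodic boundary. The neighborhood (left, centre, right) indexes a
    bit of the rule number. Rule 30 is a textbook case of deterministic
    dynamics with random-looking output -- an illustration only."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

steps = 800
cells = [0] * (2 * steps + 1)
cells[steps] = 1  # single seed cell: the initial condition is fully specified

# A coarse-grained observer records only the centre cell at each step;
# the rest of the state is "lost" information, as in 't Hooft's picture.
observed = []
for _ in range(steps):
    cells = step(cells)
    observed.append(cells[steps])

density = sum(observed) / len(observed)
print(f"fraction of 1s in the observed sequence: {density:.3f}")  # near 0.5
```

The underlying evolution is exactly reproducible from the seed, but the observed sequence passes simple frequency tests for randomness: apparent indeterminism from deterministic substrate plus information loss.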
Now, I'm not convinced 't Hooft's specific cellular automaton models are the final answer. There are technical challenges, and deriving the full structure of quantum field theory from cellular automaton rules remains an open problem. But the program demonstrates something important: superdeterminism isn't just philosophical hand-waving. It's a research program with concrete physical content.
Why (most) physicists hate it
Let's be honest about why superdeterminism faces such resistance. It's not primarily about the physics—it's about what accepting superdeterminism would mean for how we do physics.
First, it undermines the experimenter's sense of agency. Physicists like to think of themselves as free agents probing nature, able to independently choose what to measure. Superdeterminism says this is an illusion. Your measurement choices are as much a part of the deterministic evolution of the universe as the particles you're measuring.
Second, it challenges the interpretive frameworks people have invested careers in. If superdeterminism is correct, much of the cottage industry around quantum interpretations—the endless debates about measurement problems, wave function collapse, and observer effects—becomes moot. Quantum mechanics is just an effective theory. There's no measurement problem because there's no fundamental collapse; there are just deterministic transitions we don't have complete information about.
Third, and perhaps most importantly, it requires rethinking what we mean by physical explanation. We're used to being able to vary parameters independently in thought experiments. Superdeterminism says this independence is often illusory. We need to think more carefully about what questions are physically meaningful in a fully deterministic universe.
It's a real option
Here's what I think: superdeterminism deserves serious consideration not because it's definitely correct, but because the standard alternatives are deeply unsatisfying. Many-worlds multiplies entities beyond necessity. Copenhagen refuses to commit to what's physically happening. Pilot-wave theories preserve determinism and a realist ontology at the cost of explicit nonlocality in the guidance equation. Every interpretation has costs.
Superdeterminism's cost—giving up measurement independence—seems to me no worse than these alternatives, and arguably better. It preserves locality, determinism, and a realist ontology. The price is accepting that experimenters aren't causally isolated from what they're measuring. That's not a conspiracy. That's just how causality works in a universe with a finite past.
What we need are concrete models that make testable predictions distinguishable from standard quantum mechanics. 't Hooft's program is a start, but we need more. We need to understand exactly how quantum statistics emerge from deterministic substrates, how to recover quantum field theory, and whether there are observable signatures of underlying discreteness or determinism.
The dismissal of superdeterminism isn't based on physical evidence against it—it's based on philosophical discomfort. That's not how science should work. If we're genuinely committed to following the evidence wherever it leads, we need to take superdeterminism seriously as a research program, develop it rigorously, and test it against alternatives.
The universe doesn't care about our philosophical preferences. If determinism is true all the way down, no amount of resistance from physicists will change that. The question is whether we're willing to confront that possibility honestly, or whether we'll continue to dismiss it because it makes us uncomfortable.
I know where I stand.