I just stumbled upon John Baez’s page on Bayesian probability and Quantum Mechanics, which nicely summarizes one of the first difficulties i had with the latter: Born’s interpretation of the wave function as a probability. The problem hinged on my naive (frequentist) interpretation of probabilities, and the conclusion that QM describes only ensembles, not individual systems. For to compute the probability of an experiment’s outcome, i reasoned, you need to repeat the experiment a large number of times. Then, counting the number of times your outcome happens and dividing by the total number of repetitions, one obtains the sought-for probability. Problem is, what is *large*? Well, nothing short of infinite, it seems. Because, with this frequentist definition of probability, nothing prevents your tossing a coin a hundred times and getting a hundred tails. And such a situation *may* still be compatible with a half-and-half probability for heads and tails! My unsettling conclusion was that QM predicts *nothing at all* about individual systems! Come to think of it, it doesn’t even predict anything about finite ensembles.
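The point is easy to make concrete with a quick simulation (a sketch in Python, with a hypothetical fair coin): the relative frequency only settles down in the limit of many tosses, while a run of a hundred tails, though absurdly improbable, has a probability that is strictly greater than zero.

```python
import random

random.seed(0)  # fixed seed, just so the illustration is reproducible

def frequentist_estimate(n_tosses):
    """Estimate P(heads) as the relative frequency over n_tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The estimate wanders for small n and only approaches 1/2 as n grows:
for n in (10, 100, 10_000, 1_000_000):
    print(n, frequentist_estimate(n))

# ...and yet a fair coin assigns probability (1/2)**100 to a hundred tails in a row:
p_all_tails = 0.5 ** 100
print(p_all_tails)  # tiny, but not zero
```

No finite number of repetitions can rule such a run out; only the infinite limit does, which is exactly the trouble.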

One way out of this conundrum is Everett’s many worlds interpretation: since *all* possible outcomes really happen, frequentist probabilities are well-defined. I still remember being genuinely surprised when i learnt that there existed serious attempts at making sense of such an idea. I still am. John gives an excellent argument for doing away with this peculiar interpretation:

Here is a sample conversation between two Everettistas, who have fallen from a plane and are hurtling towards the ground without parachutes:

Mike: What do you think our chances of survival are?

Ron: Don’t worry, they’re really good. In the vast majority of possible worlds, we didn’t even take this plane trip.

A second way out is revising our definition of probability. We forget (initially) about frequencies, and take a Bayesian stance. In a nutshell, Bayesian probability is not measured from scratch because it is defined as a degree of belief in a given outcome. One starts with an *a priori* value for such a belief, and revises it (if needed) according to experiment. The gist of it is that Bayes’ theorem lets you calculate the likelihood of future outcomes based solely on your a priori probabilities. So, the tale goes, when a wave function collapses as a result of a measurement, there’s nothing real out there undergoing a physical collapse; it’s only that we have improved our knowledge of the system and must update our a priori likelihood assignments accordingly. This view mixes well, by the way, with the orthodox Copenhagen interpretation of QM, which also denies an objective reality of the wave function.
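The updating rule itself is elementary. Here is a minimal sketch (the numbers are made up, not tied to any particular experiment): we hold a hypothesis H with some prior degree of belief, observe an outcome, and let Bayes’ theorem turn prior plus likelihoods into a posterior.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | data) via Bayes' theorem:
    P(H|D) = P(D|H) P(H) / [P(D|H) P(H) + P(D|~H) P(~H)]"""
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Hypothetical example: a priori we believe H with probability 0.5, and the
# observed outcome is four times likelier if H holds than if it does not.
posterior = bayes_update(prior=0.5, likelihood_if_true=0.8, likelihood_if_false=0.2)
print(posterior)  # 0.8
```

On the Bayesian reading of collapse, this is all that happens at measurement: the numbers we assign change, not the system itself.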

The so-called relational interpretations have, i think, a clear Bayesian substrate. Probably the best known relational theory nowadays is Rovelli’s, whose recent paper Relational EPR (nicely reviewed in Alejandro’s blog) has been widely discussed elsewhere.

While i have nothing against Bayesian probability for describing our knowledge of any system, considering it as a final interpretation of QM makes me feel uneasy. I’d rather have a theory which describes something out there, some kind of (possibly inter-subjective) reality. Atoms, stars and the whole universe seem to care little about our knowledge of them, and the quantum mechanics rules look a bit too simple to explain, out of the blue, our way of acquiring information about the world. I would rather put my money on some sort of objective, physical reduction of the state vector, maybe along the lines of some non-linear modification of Schrödinger’s equation (and probably not as fancy as Penrose’s objective reduction, but who knows!). Call me a (perhaps non-local) realist.

—

One last thing. One of the best ways to learn about Bayesian theory is from “Probability Theory: The Logic of Science” (E. T. Jaynes). The good news is that a draft version of it is available online. (See also Matthew Leifer’s comment below recommending Bruno de Finetti’s work.)

Technorati Tags: bayes, philosophy of science, quantum mechanics

May 22, 2006 at 2:17 am

Whilst there is no doubt that Jaynes was a Bayesian, and he did invent a particular flavor of the interpretation, it is a stretch to call him its creator. Unsurprisingly, some of the ideas actually go back to Thomas Bayes. An early serious attempt to fully flesh out the Bayesian point of view is due to Savage. Also, a good portion of modern Bayesians (including myself) are followers of Bruno de Finetti, rather than of Jaynes. If you are serious about this stuff, then there is plenty to learn from de Finetti’s two-volume text on probability in addition to reading the Jaynes book.

May 22, 2006 at 3:25 am

Matt, thank you for the heads-up (i should have done my homework) and the recommendation: i’ll surely take a look at de Finetti’s work.

January 25, 2007 at 12:29 pm

At about the time of the entries above, I too was being perplexed by such counter-intuitive definitions of ‘probability’ as the frequentist and subjectivist, since I’d naturally assumed (having done physics as a lad) that the probabilities of QM were as objective as the wavefunctions that interact after going through two slits. Consequently I was well chuffed to discover Popper’s propensities, which are quite popular amongst realist philosophers (click on my anagrammatic pseudonym ‘Enigman’ to get to my realist defence of the propensity-theoretic approach to physical probability, which I was writing last summer), and I wonder what physicists think about them.