X-Message-Number: 8277
From: eli+@gs160.sp.cs.cmu.edu
Subject: Re: CryoNet #8263 - #8268
Date: Mon, 2 Jun 97 22:48:19 EDT

>Subject: Mr. Ettinger
>Date: Sat, 31 May 1997 17:34:46 -0400
>From: "Perry E. Metzger" <>
>
>Probability is, pure and simple, the measurement of the fraction of
>repeated experiments that turn out in a particular way. It
>is a measurement. There are two main ways to assess probability -- one
>is to conduct repeated measurements, and one is to use combinatorics
>to enumerate all equiprobable outcomes.

It's useful to distinguish between mathematical probability and
applied probability.  Theoretical tools such as combinatorics work in
the mathematical realm; they apply to the real world only after you
add assumptions like "these outcomes are equiprobable", which must
ultimately be grounded in experiment.
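
To make the two routes concrete, here is a toy sketch (in Python, and
of course not part of the original exchange): the combinatorial route
enumerates equiprobable outcomes, the empirical route just repeats the
experiment.

    import random
    from itertools import product

    # Combinatorial route: enumerate the 36 equiprobable outcomes of
    # two fair dice and count those summing to 7.
    outcomes = list(product(range(1, 7), repeat=2))
    p_enum = sum(a + b == 7 for a, b in outcomes) / len(outcomes)

    # Empirical route: repeat the experiment, measure the fraction.
    trials = 100_000
    p_freq = sum(random.randint(1, 6) + random.randint(1, 6) == 7
                 for _ in range(trials)) / trials

    print(p_enum, p_freq)  # both near 1/6 -- *if* the dice are fair

The two agree only because the equiprobability assumption holds; load
the dice and the enumeration silently goes wrong, while the repeated
measurement stays honest.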

The flavor of applied probability that you describe is "frequentist".
As you note, people frequently (mis-?)use what are termed "subjective
probabilities".  Are they in error?  Consider that a frequentist can
only speak with certainty about the past; to apply past data to the
future is to assume that your random variable is stationary, an
assumption whose proof is outside the scope of statistics.  Any
statement about a future probability has a subjective element.
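
A toy illustration of the stationarity problem (a made-up drifting
coin, sketched in Python):

    import random

    # Bernoulli process whose parameter drifts from 0.5 toward 0.9:
    # non-stationary, so past frequencies mispredict the future.
    n = 10_000
    def flip(t):
        return random.random() < 0.5 + 0.4 * t / n

    past   = sum(flip(t) for t in range(n // 2)) / (n // 2)
    future = sum(flip(t) for t in range(n // 2, n)) / (n // 2)
    print(past, future)  # ~0.60 vs ~0.80: extrapolation fails

No amount of past data by itself certifies that the drift isn't
there; ruling it out is a modeling assumption, not a statistical
result.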

There's a large philosophical literature on this frequentist/subjectivist
flamewar, which I can't with a straight face recommend that you read.
Pragmatically, subjective probabilities are too useful to give up: for
example, they are often necessary in the application of Bayes' law.
Probability theory is too valuable in rational decision and discourse
for me to be willing to exclude events from its purview because "either
they'll happen or they won't."
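
For instance (with numbers invented purely for illustration), take a
subjective prior P[H] = 0.01 for some hypothesis H, a test with hit
rate P[E|H] = 0.95 and false-positive rate P[E|~H] = 0.05, and turn
the crank:

    # Bayes' law: P[H|E] = P[E|H] * P[H] / P[E]
    p_h       = 0.01   # subjective prior on H
    p_e_h     = 0.95   # P[E | H]
    p_e_not_h = 0.05   # P[E | not H]

    p_e   = p_e_h * p_h + p_e_not_h * (1 - p_h)
    p_h_e = p_e_h * p_h / p_e
    print(p_h_e)  # ~0.16: the prior dominates the posterior

Without a prior of some kind, subjective or not, the computation
can't even start.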

>It is meaningless to speak of the probability of a unique event.

Let's say I have a quantum RNG: press a button and it gives you a
1024-bit value x, read as a binary fraction in [0,1).  On theoretical
and empirical grounds, we believe that it gives independent uniform
values.  It has a 1024-bit register p, whose bits are read-once,
stored as individual quantum states.  Finally, it has a comparator.

The box generates a value and stores it in p.  I pick an arbitrary
512-bit number p_hi and load it into the upper half of p.  Now
consider the event "x<p".  I would like to speak of its probability,
to say "P[x<p] is almost exactly p_hi" (as a binary fraction; the
unknown lower half of p shifts it by less than 2^-512).  But nobody
has ever run that trial before, and, because p's bits are read-once,
nobody will ever run it again.  It is unique.
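
We can't rerun the quantum trial, but a classical toy model (Python,
with an ordinary PRNG standing in for the box -- all names here are
mine, for illustration) shows what value we would want to assign:

    import random

    BITS, HALF = 1024, 512

    p_hi = random.getrandbits(HALF)  # my arbitrary upper half
    p_lo = random.getrandbits(HALF)  # stands in for the read-once bits
    p = (p_hi << HALF) | p_lo

    # Fresh draws of x, compared against the fixed p.  Comparing the
    # integers is equivalent to comparing the binary fractions.
    trials = 100_000
    hits = sum(random.getrandbits(BITS) < p for _ in range(trials))

    # The unknown lower half moves the true probability by less than
    # 2**-512, far below anything an experiment could resolve.
    print(hits / trials)      # observed frequency
    print(p_hi / 2 ** HALF)   # p_hi as a binary fraction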

This differs only in degree from wanting to talk about P[Clinton wins
the election].  In principle, the election could be simulated from
first principles, and repeated to get a large body of data.  (Quantum
measurement difficulties could be `solved' by brute-force enumeration
followed by identification of a prior-state set.)  In practice, this
will never happen.

But it doesn't matter: we can and do talk, with various degrees of
justification, about probabilities that are derived through various
theoretical means other than counting outcomes.  Modeling, analogy,
and intuition may degrade the accuracy of our estimates, but that's
tolerable if we maintain an accurate estimate of our error (which we
are, however, notoriously bad at doing).

My apologies for the length.
-- 
     Eli Brandt  |  eli+@cs.cmu.edu  |  http://www.cs.cmu.edu/~eli/
