If $A$ and $B$ are two separate but possibly dependent random events, then:

- Probability of $A$ and $B$ occurring together $= \Pr(A,B)$;
- The conditional probability of $A$, given that $B$ occurs, $= \Pr(A|B)$;
- The conditional probability of $B$, given that $A$ occurs, $= \Pr(B|A)$.
From elementary rules of probability (Venn diagrams):
$$\Pr(A,B) = \Pr(A|B)\,\Pr(B) = \Pr(B|A)\,\Pr(A) \qquad (1)$$
Dividing the right-hand pair of expressions by $\Pr(B)$ gives Bayes' rule:

$$\Pr(A|B) = \frac{\Pr(B|A)\,\Pr(A)}{\Pr(B)} \qquad (2)$$
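As a quick numerical illustration of Equation 2 (the numbers here are invented for the sketch, not taken from the text), Bayes' rule can be checked directly in a few lines of Python:

```python
# Hypothetical numbers, chosen only to illustrate Equation 2.
pr_A = 0.3            # prior Pr(A)
pr_B_given_A = 0.8    # likelihood Pr(B|A)
pr_B_given_notA = 0.1 # likelihood of the evidence under the alternative

# Total probability of the evidence: Pr(B) = Pr(B|A)Pr(A) + Pr(B|not A)Pr(not A)
pr_B = pr_B_given_A * pr_A + pr_B_given_notA * (1 - pr_A)

# Bayes' rule (Equation 2): posterior Pr(A|B)
pr_A_given_B = pr_B_given_A * pr_A / pr_B
print(pr_A_given_B)   # 0.24 / 0.31, approximately 0.774
```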
In problems of probabilistic inference, we are often trying to estimate the most probable underlying model for a random process, based on some observed data or evidence. If $A$ represents a given set of model parameters, and $B$ represents the set of observed data values, then the terms in Equation 2 are given the following terminology:
- $\Pr(A)$ is the prior probability of the model $A$ (in the absence of any evidence);
- $\Pr(B)$ is the probability of the evidence $B$;
- $\Pr(B|A)$ is the likelihood that the evidence $B$ was produced, given that the model was $A$;
- $\Pr(A|B)$ is the posterior probability of the model being $A$, given that the evidence is $B$.
Quite often, we try to find the model $A$ which maximizes the posterior $\Pr(A|B)$. This is known as maximum a posteriori or MAP model selection.
The following example illustrates the concepts of Bayesian model
selection.
Example 1: Loaded Dice
Problem:
Given a tub containing 100 six-sided dice, in which one die is known to be loaded towards the six to a specified extent, derive an expression for the probability that, after a given set of throws, an arbitrarily chosen die is the loaded one. Assume the other 99 dice are all fair (not loaded in any way). The loaded die is known to have the following pmf:
$$p_L(1) = 0.05$$
$$p_L(2) = \dots = p_L(5) = 0.15$$
$$p_L(6) = 0.35$$
Hence derive a good strategy for finding the loaded die from the tub.
Solution:
The pmfs of the fair dice may be assumed to be:
$$\forall i,\; i = 1 \dots 6 : \quad p_F(i) = \tfrac{1}{6}$$
Let each die have one of two states, $S = L$ if it is loaded and $S = F$ if it is fair. These are our two possible models for the random process, and they have underlying pmfs given by $p_L(1) \dots p_L(6)$ and $p_F(1) \dots p_F(6)$ respectively.
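For concreteness, the two candidate pmfs can be written down directly in Python (a minimal sketch; the array names `p_L` and `p_F` are our own):

```python
import numpy as np

# pmf of the loaded die: p_L[k] is the probability of throwing face k+1
p_L = np.array([0.05, 0.15, 0.15, 0.15, 0.15, 0.35])

# pmf of a fair die: every face has probability 1/6
p_F = np.full(6, 1.0 / 6.0)

# both pmfs must sum to one
assert np.isclose(p_L.sum(), 1.0) and np.isclose(p_F.sum(), 1.0)
```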
After $N$ throws of the chosen die, let the sequence of throws be $\Theta_N = \{\theta_1 \dots \theta_N\}$, where each $\theta_i \in \{1 \dots 6\}$. This is our evidence.
We shall now calculate the probability that this die is the loaded one. We therefore wish to find the posterior $\Pr(S=L\,|\,\Theta_N)$.
We cannot evaluate this directly, but we can evaluate the likelihoods, $\Pr(\Theta_N\,|\,S=L)$ and $\Pr(\Theta_N\,|\,S=F)$, since we know the expected pmfs in each case. We also know the prior probabilities $\Pr(S=L)$ and $\Pr(S=F)$ before we have carried out any throws, and these are $\{0.01, 0.99\}$ since only one die in the tub of 100 is loaded. Hence we can use Bayes' rule:
$$\Pr(S=L\,|\,\Theta_N) = \frac{\Pr(\Theta_N\,|\,S=L)\,\Pr(S=L)}{\Pr(\Theta_N)} \qquad (3)$$
The denominator term $\Pr(\Theta_N)$ is there to ensure that $\Pr(S=L\,|\,\Theta_N)$ and $\Pr(S=F\,|\,\Theta_N)$ sum to unity (as they must). It can most easily be calculated from:
$$\Pr(\Theta_N) = \Pr(\Theta_N, S=L) + \Pr(\Theta_N, S=F) = \Pr(\Theta_N\,|\,S=L)\,\Pr(S=L) + \Pr(\Theta_N\,|\,S=F)\,\Pr(S=F) \qquad (4)$$
so that
$$\Pr(S=L\,|\,\Theta_N) = \frac{\Pr(\Theta_N\,|\,S=L)\,\Pr(S=L)}{\Pr(\Theta_N\,|\,S=L)\,\Pr(S=L) + \Pr(\Theta_N\,|\,S=F)\,\Pr(S=F)} = \frac{1}{1 + R_N} \qquad (5)$$
where
$$R_N = \frac{\Pr(\Theta_N\,|\,S=F)\,\Pr(S=F)}{\Pr(\Theta_N\,|\,S=L)\,\Pr(S=L)} \qquad (6)$$
To calculate the likelihoods, $\Pr(\Theta_N\,|\,S=L)$ and $\Pr(\Theta_N\,|\,S=F)$, we simply take the product of the probabilities of each throw occurring in the sequence of throws $\Theta_N$, given each of the two models respectively (since each new throw is independent of all previous throws, given the model). So, after $N$ throws, these likelihoods will be given by:
$$\Pr(\Theta_N\,|\,S=L) = \prod_{i=1}^{N} p_L(\theta_i) \qquad (7)$$
and
$$\Pr(\Theta_N\,|\,S=F) = \prod_{i=1}^{N} p_F(\theta_i) \qquad (8)$$
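Equations 7 and 8 translate directly into code. A minimal sketch, reusing the `p_L` and `p_F` arrays defined above, with a hypothetical sequence of throws:

```python
def likelihood(throws, pmf):
    """Pr(Theta_N | model) as a product of per-throw probabilities (Eqs 7-8)."""
    result = 1.0
    for theta in throws:
        result *= pmf[theta - 1]  # faces are 1..6, arrays are 0-indexed
    return result

# A hypothetical evidence sequence of N = 5 throws.
throws = [6, 3, 6, 6, 1]
lik_L = likelihood(throws, p_L)  # Pr(Theta_N | S = L)
lik_F = likelihood(throws, p_F)  # Pr(Theta_N | S = F)

# Posterior via Equations 5 and 6, with priors 0.01 and 0.99.
R_N = (lik_F * 0.99) / (lik_L * 0.01)
posterior = 1.0 / (1.0 + R_N)
```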
We can now substitute these probabilities into the above expression for $R_N$ and include $\Pr(S=L) = 0.01$ and $\Pr(S=F) = 0.99$ to get the desired a posteriori probability $\Pr(S=L\,|\,\Theta_N)$ after $N$ throws using Equation 5.
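For instance (a worked case added here for illustration, not part of the original text): if the first throw is a six, Equation 6 with $N = 1$ gives

$$R_1 = \frac{p_F(6) \times 0.99}{p_L(6) \times 0.01} = \frac{(1/6) \times 0.99}{0.35 \times 0.01} \approx 47.1$$

so $\Pr(S=L\,|\,\Theta_1) = 1/(1 + 47.1) \approx 0.021$. Even an immediate six only about doubles the posterior relative to the prior of $0.01$, because the 99:1 prior odds dominate any single throw.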
We may calculate this iteratively by noting that

$$\Pr(\Theta_N\,|\,S=L) = \Pr(\Theta_{N-1}\,|\,S=L)\; p_L(\theta_N) \qquad (9)$$
and
$$\Pr(\Theta_N\,|\,S=F) = \Pr(\Theta_{N-1}\,|\,S=F)\; p_F(\theta_N) \qquad (10)$$
so that
$$R_N = R_{N-1}\, \frac{p_F(\theta_N)}{p_L(\theta_N)} \qquad (11)$$
where $R_0 = \Pr(S=F)/\Pr(S=L) = 99$. If we calculate this after every throw of the current die being tested (i.e. as $N$ increases), then we can either move on to test the next die from the tub if $\Pr(S=L\,|\,\Theta_N)$ becomes sufficiently small (say $< 10^{-4}$) or accept the current die as the loaded one when $\Pr(S=L\,|\,\Theta_N)$ becomes large enough (say $> 0.995$). (These thresholds correspond approximately to $R_N > 10^4$ and $R_N < 5 \times 10^{-3}$ respectively.)
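Putting Equations 5, 6 and 11 together, the whole search strategy can be sketched in a few lines of Python (our own illustration, reusing the `p_L` and `p_F` arrays from above; the function name, the thresholds as arguments, and the random simulation of throws are assumptions for the sketch, not part of the original problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def test_die(pmf_true, p_L, p_F, r_low=5e-3, r_high=1e4, max_throws=10_000):
    """Throw one die repeatedly, updating R_N (Equation 11) until a threshold is hit.

    Returns True if the die is accepted as loaded, False if rejected as fair.
    """
    R = 99.0  # R_0 = Pr(S=F) / Pr(S=L)
    for _ in range(max_throws):
        theta = rng.choice(6, p=pmf_true) + 1  # simulate one throw, faces 1..6
        R *= p_F[theta - 1] / p_L[theta - 1]   # Equation 11
        if R > r_high:                         # posterior < ~1e-4: reject as fair
            return False
        if R < r_low:                          # posterior > ~0.995: accept as loaded
            return True
    return False  # undecided within max_throws; treat as fair

# Search the tub: test dice one at a time until one is accepted as loaded.
# Here the loaded die is hypothetically placed at position 50 of 100.
tub = [p_F] * 49 + [p_L] + [p_F] * 50
for idx, pmf_true in enumerate(tub):
    if test_die(pmf_true, p_L, p_F):
        print(f"die {idx} accepted as loaded")
        break
```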
The choice of these thresholds for $\Pr(S=L\,|\,\Theta_N)$ is a function of the desired tradeoff between speed of searching and the probability of failing to find the loaded die, either by moving on to the next die even when the current one is loaded, or by selecting a fair die as the loaded one.
The lower threshold, $p_1 = 10^{-4}$, is the more critical, because it affects how long we spend before discarding each fair die. The probability of correctly detecting all the fair dice before the loaded die is reached is $(1 - p_1)^n \approx 1 - n p_1$, where $n \approx 50$ is the expected number of fair dice tested before the loaded one is found. So the failure probability due to incorrectly assuming the loaded die to be fair is approximately $n p_1 \approx 0.005$.
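(As a quick numerical check of the approximation used here, added for illustration: with $p_1 = 10^{-4}$ and $n = 50$, the exact value is $(1 - p_1)^{50} \approx 0.99501$, so the exact and approximate failure probabilities agree to about one part in a thousand.)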
The upper threshold, $p_2 = 0.995$, is much less critical to search speed, since the loaded result only occurs once, so it is a good idea to set it very close to unity. The failure probability caused by selecting a fair die as the loaded one is just $1 - p_2 = 0.005$. Hence the overall failure probability is $0.005 + 0.005 = 0.01$.
Note on computation:
In problems with significant amounts of evidence (e.g. large $N$), the evidence probability and the likelihoods can both become very small, sufficient to cause floating-point underflow on many computers if equations such as Equation 7 and Equation 8 are computed directly. However, the ratio of likelihood to evidence probability remains a reasonable size and is an important quantity which must be calculated correctly.
One solution to this problem is to compute only the ratio of likelihoods, as in Equation 11. A more generally useful solution is to compute log(likelihoods) instead. The product operations in the expressions for the likelihoods then become sums of logarithms. Even the calculation of likelihood ratios such as $R_N$ and comparison with appropriate thresholds can be done in the log domain. After this, it is safe to return to the linear domain if necessary, since $R_N$ should be of reasonable size, being the ratio of two very small quantities.
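A minimal log-domain version of the update in Equation 11 (our own sketch, reusing the `p_L` and `p_F` arrays and the same hypothetical throws as before):

```python
import numpy as np

# Precompute per-face increments: log p_F(theta) - log p_L(theta).
log_ratio = np.log(p_F) - np.log(p_L)

log_R = np.log(99.0)          # log R_0
log_r_low = np.log(5e-3)      # accept-as-loaded threshold, in the log domain
log_r_high = np.log(1e4)      # reject-as-fair threshold, in the log domain

for theta in [6, 3, 6, 6, 1]: # a hypothetical sequence of throws
    log_R += log_ratio[theta - 1]  # Equation 11, as a sum of logarithms
    if log_R > log_r_high or log_R < log_r_low:
        break

R_N = np.exp(log_R)               # safe: R_N itself is of moderate size
posterior = 1.0 / (1.0 + R_N)     # Equation 5
```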