[summary: Bayes' rule relates prior belief and the likelihood of evidence to posterior belief.
These quantities are often denoted using conditional probabilities:
- Prior belief in hypothesis: P(H).
- Likelihood of evidence, conditional on hypothesis: P(e∣H).
- Posterior belief: P(H∣e). ]
Bayes' rule relates prior belief and the likelihood of evidence to posterior belief.
These quantities are often written using conditional probabilities:
- Prior belief in the hypothesis: P(H).
- Likelihood of evidence, conditional on the hypothesis: P(e∣H).
- Posterior belief in hypothesis, after seeing evidence: P(H∣e).
For example, Bayes' rule in the odds form describes the relative belief in a hypothesis H1 vs an alternative H2, given a piece of evidence e, as follows:
P(H1)/P(H2) × P(e∣H1)/P(e∣H2) = P(H1∣e)/P(H2∣e).
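The odds-form update above can be sketched in a few lines of code. The numbers below are illustrative assumptions (prior odds of 1:4 for H1 vs H2, and evidence three times as likely under H1 as under H2), not values from the text:

```python
# A minimal sketch of Bayes' rule in odds form.
# Posterior odds = prior odds × likelihood ratio:
#   P(H1|e)/P(H2|e) = P(H1)/P(H2) × P(e|H1)/P(e|H2)
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Multiply the prior odds P(H1)/P(H2) by the likelihood
    ratio P(e|H1)/P(e|H2) to get posterior odds P(H1|e)/P(H2|e)."""
    return prior_odds * likelihood_ratio

prior = Fraction(1, 4)          # assumed prior odds, H1 : H2 = 1 : 4
likelihood_ratio = Fraction(3)  # assumed P(e|H1) / P(e|H2) = 3
post = posterior_odds(prior, likelihood_ratio)
print(post)  # 3/4, i.e. posterior odds of 3 : 4 for H1 vs H2
```

Note that odds need not be normalized: 3:4 corresponds to a posterior probability P(H1∣e) = 3/7 once the odds are converted back to probabilities.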
Comments
Nate Soares
I suggest making it explicit that P is a distribution over a (possibly infinite) set of variables (or propositions, or whatever your preferred formalization is), and that P(x) is shorthand for P(X=x) when X is unambiguous. This is one of those things I had to figure out for myself; it confused me in my youth and led me to think that all the P notation was probably informal argument rather than formal math.