next up previous
Next: Expectation-Maximization (EM) Algorithm Up: Statistical Inference Previous: Maximum-Likelihood (ML) Estimation


Maximum-a-Posteriori (MAP) Estimation

Sometimes we have a priori information about the physical process whose parameters we want to estimate. Such information can come either from established scientific knowledge of the physical process or from previous empirical evidence. We can encode such prior information as a PDF on the parameter to be estimated. Essentially, we treat the parameter $\theta$ as the value of an RV. The associated probabilities $P (\theta)$ are called the prior probabilities. We refer to inference based on such priors as Bayesian inference. Bayes' theorem shows how to incorporate the prior information into the estimation process:

$\displaystyle P (\theta \vert {\bf x}) = \frac{ P ({\bf x} \vert \theta) \, P (\theta) }{ P ({\bf x}) }$     (35)

The term on the left-hand side of the equation is called the posterior. On the right-hand side, the numerator is the product of the likelihood term and the prior term. The denominator serves as a normalization term so that the posterior PDF integrates to unity. Since $P ({\bf x})$ does not depend on $\theta$, it can be ignored when maximizing the posterior over $\theta$. Thus, Bayesian inference produces the maximum a posteriori (MAP) estimate
$\displaystyle \mathop{\mbox{argmax }}_{\theta} P (\theta \vert {\bf x}) = \mathop{\mbox{argmax }}_{\theta} P ({\bf x} \vert \theta) \, P (\theta).$     (36)
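As a concrete illustration of Eqs. (35) and (36), the following sketch computes a posterior and its MAP estimate on a discrete parameter grid. The Bernoulli (coin-flip) likelihood and all specific numbers (the grid, the data, the uniform prior) are illustrative assumptions, not part of the text above.

```python
# A minimal sketch of Eqs. (35)-(36) on a discrete parameter grid.
# The Bernoulli likelihood and the numbers below are illustrative assumptions.
import numpy as np

thetas = np.linspace(0.01, 0.99, 99)        # candidate values of theta
prior = np.ones_like(thetas) / len(thetas)  # uniform prior P(theta)

heads, n = 7, 10                            # observed data x: 7 heads in 10 flips
likelihood = thetas**heads * (1 - thetas)**(n - heads)  # P(x | theta)

# Eq. (35): posterior = likelihood * prior / P(x),
# where dividing by the sum plays the role of the P(x) normalization.
posterior = likelihood * prior
posterior /= posterior.sum()

# Eq. (36): the MAP estimate maximizes likelihood * prior;
# the normalization P(x) does not affect the argmax.
theta_map = thetas[np.argmax(likelihood * prior)]
```

With the uniform prior used here, the MAP estimate coincides with the ML estimate (7/10 = 0.7); a non-uniform prior would pull the estimate toward the prior's mode, which is exactly the effect of the $P(\theta)$ factor in Eq. (36).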


Suyash P. Awate 2007-02-21