How to normalize log-likelihood vectors

What is the best way to convert a log-likelihood vector into the corresponding log-probability vector, i.e. a vector of values whose exponentials sum to one?

TLDR: just subtract the scipy.special.logsumexp() (formerly scipy.misc.logsumexp(), removed in newer SciPy releases) of that vector.
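
For instance, here is a minimal sketch of that normalization (the log-likelihood values are made-up; the import path assumes SciPy ≥ 1.0):

```python
import numpy as np
from scipy.special import logsumexp

# A made-up vector of log-likelihoods (any real values will do).
log_likelihoods = np.array([-1000.0, -1001.5, -999.2])

# Subtracting the logsumexp normalizes the vector in log space:
# the exponentials of log_probs now sum to one.
log_probs = log_likelihoods - logsumexp(log_likelihoods)

print(np.exp(log_probs).sum())  # 1.0 (up to floating-point rounding)
```

Note that exponentiating the raw vector directly would underflow to all zeros here ($e^{-1000}$ is far below the smallest representable double), which is exactly the failure mode the logsumexp subtraction sidesteps.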

Note: this notebook is available on GitHub in case you want to play around with it.

Usage of logs in probability and likelihood computations

Given a set of $N$ independent events $D = \{E_i\}_{i=1}^{N}$ with probabilities $\{p_i\}$, the probability of the joint occurrence of all events in $D$ is simply

$$P_D = \prod\limits_{i=1}^{N} p_i$$

As written, however, this quantity quickly becomes impossible to compute on a computer: since each $p_i \in [0,1]$, the running product gets smaller with every factor and soon underflows. Working with logarithms of probabilities and likelihoods is a very common computational trick to avoid that underflow; it exploits the fact that a logarithm turns a product into a sum, i.e.

$$\log P_D = \log \prod\limits_{i=1}^{N} p_i = \sum\limits_{i=1}^{N} \log p_i$$
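
To see the underflow concretely, here is a small sketch (the probabilities are made-up): multiplying a thousand probabilities of 0.01 underflows to zero in double precision, while the sum of their logs remains perfectly representable.

```python
import numpy as np

# 1000 made-up independent events, each with probability 0.01.
p = np.full(1000, 0.01)

print(np.prod(p))       # 0.0 -- the product underflows in double precision
print(np.log(p).sum())  # ~ -4605.17 -- log(P_D), perfectly representable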
