Importance sampling is a powerful technique used in Monte Carlo simulation to estimate properties of rare events or distributions that are difficult to sample from. It is particularly useful when the probability of the event of interest is very small, so that direct sampling would require an enormous number of samples to produce a reliable estimate.

Theory Behind Importance Sampling:

Let's say we want to estimate the expectation of a function $f(x)$ over a probability distribution $p(x)$. Mathematically, this can be expressed as:

$$ E_p[f(x)] = \int f(x)\, p(x)\, dx $$

However, if the probability distribution $p(x)$ is difficult to sample from or the function $f(x)$ is zero for most values of $x$, direct Monte Carlo sampling may be inefficient or even infeasible.
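To make this concrete, here is a small illustration (my own example, not from the original text): estimating the tail probability $P(X > 4)$ for a standard normal $X$, where $f(x)$ is the indicator of $x > 4$ and the true answer is roughly $3.17 \times 10^{-5}$. Plain Monte Carlo wastes almost every draw, because $f(x)$ is zero for nearly all samples from $p(x)$.

```python
import numpy as np

# Hypothetical rare-event example: estimate P(X > 4) for X ~ N(0, 1),
# i.e. f(x) = 1{x > 4}. The true value is about 3.17e-5.
rng = np.random.default_rng(0)
N = 10_000

samples = rng.standard_normal(N)          # draw directly from p(x)
naive_estimate = np.mean(samples > 4.0)   # f(x) is zero for almost all draws

# With N = 10,000 we expect only ~0.3 samples past 4, so the estimate
# is usually exactly 0 or a crude multiple of 1/N.
print(naive_estimate)
```

The estimate is dominated by whether zero, one, or two samples happen to land in the tail, which is exactly the inefficiency importance sampling addresses.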

Importance sampling introduces a new probability distribution $q(x)$, called the importance distribution or proposal distribution, which is easier to sample from and has a higher probability of generating samples in the regions where $f(x)$ is non-zero. The expectation can be rewritten as:

$$ E_p[f(x)] = \int f(x)\, \frac{p(x)}{q(x)}\, q(x)\, dx = E_q\!\left[ f(x)\, \frac{p(x)}{q(x)} \right] $$

The ratio $\frac{p(x)}{q(x)}$ is called the importance weight or likelihood ratio. It corrects for the bias introduced by sampling from the importance distribution $q(x)$ instead of the original distribution $p(x)$.

The importance sampling estimator (a weighted sample mean) of $E_p[f(x)]$, based on $N$ samples from $q(x)$, is given by:

$$ \hat{f}_N = \frac{1}{N} \sum_{i=1}^{N} f(x_i)\, \frac{p(x_i)}{q(x_i)} $$

where $x_i$ are the samples drawn from $q(x)$.

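A minimal Python sketch of the full estimator (my own illustration; the choice of target, event, and proposal is an assumption, not from the original text): we again estimate $P(X > 4)$ for $X \sim N(0, 1)$, but now draw from a proposal $q = N(4, 1)$ shifted into the tail where $f(x)$ is non-zero, and reweight each sample by $p(x)/q(x)$.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Target p = N(0, 1); rare event f(x) = 1{x > 4}.
# Proposal q = N(4, 1), centered on the region where f is non-zero.
x = rng.standard_normal(N) + 4.0

# Importance weight p(x)/q(x) for these two Gaussians simplifies to
# exp(-x^2/2) / exp(-(x - 4)^2/2) = exp(8 - 4x).
w = np.exp(8.0 - 4.0 * x)

# Importance sampling estimator: (1/N) * sum of f(x_i) * p(x_i)/q(x_i)
estimate = np.mean((x > 4.0) * w)

# Exact tail probability for comparison: P(X > 4) = erfc(4/sqrt(2)) / 2
true_value = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
print(estimate, true_value)  # both close to 3.17e-5
```

Because roughly half of the proposal's samples land in the tail, the weighted estimator typically achieves a relative error well under one percent here, whereas plain Monte Carlo with the same budget sees only a handful of tail samples.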