Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
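Formally, for a hypothesis $H$ and observed data $D$, the theorem reads:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

Here $P(H)$ is the prior, $P(D \mid H)$ the likelihood, and $P(H \mid D)$ the posterior.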
1. The Interactive Simulation
Let's visualize how our beliefs about a "biased coin" update with every toss. Suppose we are trying to find the true probability $p$ that a coin lands heads.
[Interactive figure: belief distribution over $p$ — prior (dashed) vs. posterior (solid).]
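The update the simulation animates can be sketched in a few lines. For a coin, the Beta distribution is the conjugate prior: a Beta($a$, $b$) prior combined with $h$ heads and $t$ tails yields a Beta($a+h$, $b+t$) posterior. A minimal sketch, with an illustrative toss sequence:

```python
# Uniform prior corresponds to Beta(1, 1)
a, b = 1, 1
tosses = [1, 0, 1, 1, 0, 1, 1, 1]  # 1 = heads (illustrative values)
for t in tosses:
    a += t        # each heads increments the "heads count" parameter
    b += 1 - t    # each tails increments the "tails count" parameter
print(a, b, a / (a + b))  # Beta(7, 3); posterior mean a/(a+b) = 0.7
```

After 6 heads and 2 tails, the posterior mean has moved from 0.5 toward the observed frequency.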
2. Python Implementation
In computational Bayesian statistics, we often use grid approximation: a continuous distribution over $p$ is represented by its values at a finite set of evenly spaced points.
```python
import numpy as np

def grid_update(prior, heads, n_tosses):
    """One Bayesian update over a grid of candidate bias values."""
    # p_grid represents possible bias levels in [0, 1]
    p_grid = np.linspace(0, 1, len(prior))
    # Binomial likelihood (the binomial coefficient is constant in p,
    # so it cancels during normalization)
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)
    # The Bayesian update: posterior ∝ likelihood × prior
    unnorm_posterior = likelihood * prior
    # Normalize so the posterior sums to 1
    return unnorm_posterior / unnorm_posterior.sum()
```
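As a quick usage sketch (the toss counts are illustrative), starting from a flat prior over 101 grid points and observing 7 heads in 10 tosses:

```python
import numpy as np

def grid_update(prior, heads, n_tosses):
    # (same function as above)
    p_grid = np.linspace(0, 1, len(prior))
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

# Flat (uniform) prior over 101 candidate bias values
prior = np.ones(101) / 101
posterior = grid_update(prior, heads=7, n_tosses=10)

p_grid = np.linspace(0, 1, 101)
# Posterior mean estimate of the bias; the exact Beta(8, 4)
# posterior has mean 8/12 ≈ 0.667
print(p_grid @ posterior)
```

With a uniform prior this grid posterior approximates the analytic Beta(8, 4) result, which is a handy sanity check.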
Knowledge Check
What happens to the Posterior as we collect more data?
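As a hint, here is a sketch of how the posterior's spread behaves as the sample grows, assuming a uniform Beta(1, 1) prior and an idealized coin that lands heads 60% of the time:

```python
# Standard deviation of a Beta(a, b) posterior
def beta_sd(a, b):
    return (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

for n in (10, 100, 1000):
    heads = int(0.6 * n)              # idealized counts for illustration
    a, b = 1 + heads, 1 + n - heads   # conjugate Beta update
    print(n, round(beta_sd(a, b), 4))
```

The printed standard deviation shrinks as $n$ grows: the posterior concentrates around the true bias, and the prior's influence fades.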