Lesson 1

Bayesian Inference

Understanding priors, likelihoods, and posterior distributions through mathematical intuition and live simulation.

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.

Bayes' Theorem

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$$

- Posterior $P(A|B)$: the updated belief in hypothesis $A$ after observing evidence $B$
- Likelihood $P(B|A)$: the probability of observing $B$ if $A$ is true
- Prior $P(A)$: the belief in $A$ before seeing any evidence
- Evidence $P(B)$: the overall probability of $B$, which normalizes the posterior
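To make the four terms concrete, here is a minimal numerical sketch (the numbers are hypothetical): let $A$ = "the coin is biased, landing heads 90% of the time" and $B$ = "a single toss lands heads", with a 50/50 prior on the coin being biased.

```python
# Hypothetical setup: A = "coin is biased, P(heads) = 0.9", B = "toss lands heads"
p_A = 0.5              # prior P(A): 50/50 chance the coin is the biased one
p_B_given_A = 0.9      # likelihood P(B|A): heads probability if biased
p_B_given_not_A = 0.5  # likelihood P(B|not A): heads probability if fair

# Evidence P(B) via the law of total probability
p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)

# Posterior P(A|B) from Bayes' theorem
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 3))  # 0.643: one observed heads nudges belief toward "biased"
```

A single toss moves the prior of 0.5 to a posterior of about 0.64; each additional heads would push it further.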

1. The Interactive Simulation

Let's visualize how our beliefs about a "biased coin" update with every toss. Suppose we are trying to find the true probability $p$ that a coin lands heads.

[Interactive simulation: belief distribution over $p$, showing the prior (dashed) vs. the posterior (solid), with live counters for total tosses, heads, and the MAP estimate (initially 0.50).]
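The update the simulation performs also has a well-known closed form worth noting as an aside: if the prior on $p$ is a Beta distribution (an assumption; the grid method in the next section works for any prior), the posterior after $h$ heads in $n$ tosses is again Beta:

$$p \sim \mathrm{Beta}(\alpha, \beta) \;\Rightarrow\; p \mid h, n \sim \mathrm{Beta}(\alpha + h,\ \beta + n - h)$$

For a uniform prior ($\alpha = \beta = 1$), the MAP estimate is $\frac{\alpha + h - 1}{\alpha + \beta + n - 2} = \frac{h}{n}$: the observed fraction of heads.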

2. Python Implementation

In computational Bayesian statistics, we often use grid approximation to represent a continuous distribution as a finite set of discrete points.

inference_engine.py
import numpy as np

def grid_update(prior, heads, n_tosses):
    """Posterior over coin bias p via grid approximation."""
    # p_grid: candidate bias values spanning [0, 1]
    p_grid = np.linspace(0, 1, len(prior))

    # Binomial likelihood of the data at each grid point.
    # The binomial coefficient C(n, h) is omitted: it is constant
    # across the grid and cancels during normalization.
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)

    # The Bayesian update: Posterior ∝ Likelihood × Prior
    unnorm_posterior = likelihood * prior

    # Normalize so the posterior sums to 1
    return unnorm_posterior / unnorm_posterior.sum()
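As a quick sanity check, the function can be driven with a uniform prior and some hypothetical data (the definition is repeated here so the snippet runs on its own):

```python
import numpy as np

def grid_update(prior, heads, n_tosses):
    p_grid = np.linspace(0, 1, len(prior))
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)
    unnorm_posterior = likelihood * prior
    return unnorm_posterior / unnorm_posterior.sum()

p_grid = np.linspace(0, 1, 101)
prior = np.ones(101) / 101          # uniform prior over the grid

# Hypothetical data: 7 heads in 10 tosses
post_10 = grid_update(prior, heads=7, n_tosses=10)
print(round(p_grid[np.argmax(post_10)], 2))   # MAP estimate: 0.7

# Ten times the data at the same heads ratio: the posterior narrows
post_100 = grid_update(prior, heads=70, n_tosses=100)
std = lambda post: np.sqrt(np.sum(post * (p_grid - np.sum(p_grid * post))**2))
print(std(post_100) < std(post_10))           # True
```

With a uniform prior, the MAP estimate lands on the observed heads fraction, and multiplying the sample size at the same ratio shrinks the posterior's spread, exactly what the dashed-to-solid transition in the simulation shows.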

Knowledge Check

What happens to the Posterior as we collect more data?