Lesson 1

Bayesian Inference

Understanding priors, likelihoods, and posterior distributions through mathematical intuition and live simulation.

Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.

Bayes' Theorem
$$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$
Here, $P(A|B)$ is the posterior (our updated belief in the hypothesis after seeing the evidence), $P(B|A)$ is the likelihood (how probable the evidence is if the hypothesis holds), $P(A)$ is the prior (our belief before seeing any evidence), and $P(B)$ is the evidence (the overall probability of the data).
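To make the theorem concrete, here is a small numeric check in the lesson's coin setting. The specific numbers (a 0.9 head bias for the biased hypothesis and a 50/50 prior) are illustrative assumptions, not part of the lesson:

```python
# Hypothesis A: the coin is biased toward heads with p = 0.9 (assumed for the demo).
# Evidence B: a single toss lands heads.
p_A = 0.5             # prior P(A): equal belief in biased vs. fair
p_B_given_A = 0.9     # likelihood P(B|A): heads, given a biased coin
p_B_given_notA = 0.5  # likelihood of heads under the fair-coin alternative

# Evidence P(B) via the law of total probability
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Bayes' theorem: posterior = likelihood x prior / evidence
posterior = p_B_given_A * p_A / p_B
print(round(posterior, 3))  # 0.643 — one heads shifts belief toward "biased"
```

A single heads raises $P(A)$ from 0.5 to about 0.64; each further toss would repeat the same update with the new posterior as the next prior.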

1. The Interactive Simulation

Let's visualize how our beliefs about a "biased coin" update with every toss. Suppose we are trying to find the true probability $p$ that a coin lands heads.

Select your initial belief (Prior): Do you think the coin is fair, are you unsure, or are you a skeptic?

[Interactive simulation: live toss counter (total tosses, heads) and running MAP estimate, starting at 0.50]
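The simulation's internal update rule is not shown on this page, but the same sequential belief updating can be sketched with a conjugate Beta prior: each toss bumps one of the two Beta parameters, and the MAP estimate tracks the coin's bias. The true bias of 0.7 and the Beta(1, 1) "unsure" (uniform) prior are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.7      # hidden bias of the coin (assumed for the demo)
a, b = 1.0, 1.0   # Beta(1, 1) = uniform prior: the "unsure" choice

for toss in range(1, 101):
    heads = rng.random() < true_p
    # Conjugate update: heads increments a, tails increments b
    a, b = a + heads, b + (not heads)
    if toss % 25 == 0:
        # Mode (MAP) of Beta(a, b) for a, b > 1
        map_est = (a - 1) / (a + b - 2)
        print(f"toss {toss:3d}: MAP estimate = {map_est:.2f}")
```

As the toss count grows, the MAP estimate settles near the true bias, which is exactly the convergence behavior the interactive widget visualizes.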

2. Python Implementation

In computational Bayesian stats, we often use grid approximation to represent continuous distributions as discrete points. This allows us to perform integration by summation.

inference_engine.py
import numpy as np

def grid_update(prior, heads, n_tosses):
    """
    Updates beliefs based on new evidence using grid approximation.
    """
    # 1. Define the grid of possibilities [0, 1]
    p_grid = np.linspace(0, 1, len(prior))
    # 2. Compute Likelihood (Binomial)
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)
    # 3. Update: Posterior ∝ Likelihood × Prior
    unnorm_posterior = likelihood * prior
    # 4. Normalize to ensure sum is 1
    return unnorm_posterior / unnorm_posterior.sum()
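Here is a quick usage sketch. The function body is repeated so the snippet runs on its own, and the data (7 heads in 10 tosses) and the 1001-point grid are illustrative assumptions:

```python
import numpy as np

def grid_update(prior, heads, n_tosses):
    p_grid = np.linspace(0, 1, len(prior))
    likelihood = p_grid**heads * (1 - p_grid)**(n_tosses - heads)
    unnorm_posterior = likelihood * prior
    return unnorm_posterior / unnorm_posterior.sum()

prior = np.ones(1001) / 1001  # flat (uniform) prior over the grid
posterior = grid_update(prior, heads=7, n_tosses=10)

p_grid = np.linspace(0, 1, 1001)
map_estimate = p_grid[np.argmax(posterior)]
print(round(map_estimate, 3))  # 0.7 — with a flat prior, the MAP matches the sample frequency 7/10
```

With a flat prior the posterior peak sits at the observed frequency; a skeptical prior concentrated near 0.5 would pull the peak back toward fairness until more data accumulates.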

Concept Check

Test your understanding of posterior convergence.

What happens to the Posterior distribution as the sample size approaches infinity?