Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
The Core Equation
P(A|B) = [P(B|A) · P(A)] / P(B)
Where P(A|B) is the posterior probability, P(B|A) is the likelihood, P(A) is the prior, and P(B) is the evidence (marginal likelihood), which normalizes the posterior.
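As a quick numerical check of the formula, here is a sketch using a hypothetical diagnostic-test scenario (all numbers are illustrative, not real test statistics):

```python
# Hypothetical numbers, chosen only to illustrate Bayes' theorem
p_disease = 0.01             # prior P(A): base rate of the condition
p_pos_given_disease = 0.95   # likelihood P(B|A): test sensitivity
p_pos_given_healthy = 0.05   # false-positive rate

# Evidence P(B): total probability of a positive test result
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior P(A|B) via Bayes' theorem
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # about 0.161
```

Even with a sensitive test, a low prior keeps the posterior modest, which is exactly the prior-weighting that the equation encodes.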
Python Simulation
Let's simulate a coin-flip experiment. We start with a uniform prior over the coin's heads probability, treating every bias from 0 to 1 as equally likely.
import numpy as np

# 1. Define a uniform prior over a grid of candidate bias values
p_grid = np.linspace(0, 1, 100)
prior = np.ones(100) / 100

# 2. Update the prior with observed data (binomial likelihood;
#    the binomial coefficient cancels during normalization)
def update(prior, heads, total):
    likelihood = p_grid**heads * (1 - p_grid)**(total - heads)
    posterior = likelihood * prior
    return posterior / posterior.sum()
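A short, self-contained usage sketch of the grid approximation above (the observed counts of 7 heads in 10 flips are made up for illustration):

```python
import numpy as np

# Grid approximation, mirroring the snippet above
p_grid = np.linspace(0, 1, 100)
prior = np.ones(100) / 100

def update(prior, heads, total):
    likelihood = p_grid**heads * (1 - p_grid)**(total - heads)
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Suppose we observe 7 heads in 10 flips (illustrative data)
posterior = update(prior, heads=7, total=10)
print(f"posterior mean: {p_grid @ posterior:.3f}")
```

With a uniform prior this matches the analytic Beta(8, 4) posterior, whose mean is 8/12 ≈ 0.667, so the grid result should land very close to that.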
Summary
As we observe more data, the likelihood dominates the prior, and our posterior distribution converges to the true parameter value.
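This convergence claim can be checked numerically: simulating a fair coin and re-running the grid update at increasing flip counts, the posterior standard deviation shrinks toward zero (the grid size, seed, and flip counts below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
p_grid = np.linspace(0, 1, 1000)
true_p = 0.5  # simulated "fair" coin

sds = []
for total in (10, 100, 10_000):
    heads = rng.binomial(total, true_p)
    # Uniform prior; likelihood computed in log space to avoid underflow,
    # with a tiny epsilon so the grid endpoints don't produce log(0)
    log_like = (heads * np.log(p_grid + 1e-12)
                + (total - heads) * np.log(1 - p_grid + 1e-12))
    posterior = np.exp(log_like - log_like.max())
    posterior /= posterior.sum()
    mean = posterior @ p_grid
    sd = float(np.sqrt(posterior @ (p_grid - mean) ** 2))
    sds.append(sd)
    print(f"n={total:6d}  posterior sd ~ {sd:.4f}")
```

The posterior standard deviation falls roughly as 1/(2·sqrt(n)), which is the convergence the summary describes.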