Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
The Core Equation
The fundamental equation is Bayes' theorem:

P(A|B) = P(B|A) * P(A) / P(B)

- P(A): The Prior probability (what we believe before seeing data).
- P(B|A): The Likelihood (how probable the data is given our hypothesis).
- P(B): The Marginal Likelihood (the overall probability of the data, averaged over all hypotheses).
- P(A|B): The Posterior (our updated belief after seeing the data).
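As a concrete check of the formula, here is a small worked example with made-up numbers (a bag with one fair coin and one biased coin is an illustration, not part of the text above):

```python
# Hypothetical setup: draw one coin at random from a bag containing a fair
# coin (P(heads) = 0.5) and a biased coin (P(heads) = 0.75). After seeing a
# single head, how likely is it that we drew the fair coin?
p_fair = 0.5                      # P(A): prior probability of the fair coin
p_heads_given_fair = 0.5          # P(B|A): likelihood of a head if fair
p_heads_given_biased = 0.75       # likelihood of a head if biased

# P(B): marginal likelihood, summing over both hypotheses
p_heads = p_fair * p_heads_given_fair + (1 - p_fair) * p_heads_given_biased

# P(A|B): posterior probability that the coin is fair
posterior = p_heads_given_fair * p_fair / p_heads
print(posterior)  # 0.4
```

The head is evidence for the biased coin, so the belief that the coin is fair drops from 0.5 to 0.4.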
Python Implementation
Let's use a simple grid approximation to update our belief about the fairness of a coin from observed flips.
import numpy as np

# Grid of candidate values for the coin's heads-probability
p_grid = np.linspace(0, 1, 100)
# Prior: uniform distribution (any bias is equally likely)
prior = np.ones(100) / 100

def update(prior, heads, total):
    # Binomial likelihood of seeing `heads` heads in `total` flips
    likelihood = p_grid**heads * (1 - p_grid)**(total - heads)
    posterior = likelihood * prior
    posterior /= posterior.sum()  # Normalize so the posterior sums to 1
    return posterior
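A single call to `update` then turns data into a posterior. The sketch below is self-contained and uses made-up observations (6 heads in 9 flips) to show two common point summaries of the resulting distribution:

```python
import numpy as np

# Same grid and uniform prior as above
p_grid = np.linspace(0, 1, 100)
prior = np.ones(100) / 100

def update(prior, heads, total):
    likelihood = p_grid**heads * (1 - p_grid)**(total - heads)
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Hypothetical data: 6 heads observed in 9 flips
posterior = update(prior, heads=6, total=9)

map_estimate = p_grid[np.argmax(posterior)]  # most probable bias (MAP)
posterior_mean = np.sum(p_grid * posterior)  # expected bias under the posterior
print(map_estimate)  # ≈ 0.667 (= 6/9, since the prior is flat)
```

With a flat prior the MAP estimate coincides with the frequentist maximum-likelihood estimate, but the full `posterior` array carries much more information than either single number.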
Conclusion
Unlike frequentist methods, Bayesian approaches give us a distribution of probabilities rather than a single point estimate, allowing for richer uncertainty quantification.
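One way to make that richer uncertainty concrete is a credible interval read directly off the grid posterior. This sketch reuses the coin example with the same made-up data (6 heads in 9 flips) and extracts a central 95% interval:

```python
import numpy as np

p_grid = np.linspace(0, 1, 100)
prior = np.ones(100) / 100

def update(prior, heads, total):
    likelihood = p_grid**heads * (1 - p_grid)**(total - heads)
    posterior = likelihood * prior
    return posterior / posterior.sum()

posterior = update(prior, heads=6, total=9)

# Central 95% credible interval: cut 2.5% of posterior mass off each tail
cdf = np.cumsum(posterior)
lower = p_grid[np.searchsorted(cdf, 0.025)]
upper = p_grid[np.searchsorted(cdf, 0.975)]
print(lower, upper)  # an interval bracketing the MAP estimate of 2/3
```

Unlike a frequentist confidence interval, this interval has the direct reading "given the data and the prior, the coin's bias lies in this range with 95% probability."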