Frequentist or Bayesian?
The most popular definition of probability, and arguably the most intuitive, is the frequentist one (also known as frequentism). According to frequentists, an event’s probability is defined as the limit of the event’s frequency in a large number of trials.
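In symbols, this standard formalization (my notation, not the book's) reads as follows, where n(A) counts the trials in which the event A occurs out of n trials in total:

P(A) = \lim_{n \to \infty} \frac{n(A)}{n}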
What does this mean? Let’s go back to the example of flipping a fair coin. We said that the probability of getting heads on a single flip is 50 percent. But how do we know this to be true? What if we flip tails ten times in a row? Would that change the probability of getting heads? Obviously not. Intuitively, this makes sense, but why?
I ran a coin-tossing experiment (simulated in the R programming language [1]); you can see the results in Figure 4-1. The proportion of heads very quickly converges to 50 percent.
Figure 4-1: Coin-tossing experiment
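A figure like this can be reproduced in a few lines of R. The following is a minimal sketch of such a simulation (my own illustration, not the author’s original code [1]; the seed and number of tosses are arbitrary choices): it tosses a fair coin n times and plots the running proportion of heads, which settles near 0.5.

set.seed(42)                                        # assumed seed, for reproducibility
n <- 10000                                          # number of coin tosses
tosses <- sample(c("H", "T"), n, replace = TRUE)    # simulate n fair-coin tosses
running_prop <- cumsum(tosses == "H") / seq_len(n)  # proportion of heads after each toss
plot(running_prop, type = "l", ylim = c(0, 1),
     xlab = "Number of tosses", ylab = "Proportion of heads")
abline(h = 0.5, lty = 2)                            # the true probability, 0.5

Early in the sequence the proportion swings widely, but as the number of tosses grows it hugs the dashed line at 0.5 ever more tightly.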
This is the definition of frequentist probability in practice: if you repeat an experiment a large number of times, the relative frequencies of its outcomes converge to their true probabilities.
Frequentist statistics has been the orthodox branch of statistics for most of its history; even Aristotle, in his Rhetoric, wrote that “the probable is that which for the most part happens.” The practice of statistics rests on the belief that you can draw a sample from a population and use it to study the properties of that population. If we treat each entity in the sample as the outcome of an experiment, then the more observations we collect, the closer we get to the truth.