Linking Probability and Data
Jaynes presents a few core ideas and requirements for his rational system. Probability emerges as the representation of circumstances in which any given realization of a process is either TRUE or FALSE; both are possible, and the relative plausibility of each is expressed as a probability.
The formal definition is a linear combination with non-negative coefficients that sum to one. Sound familiar? This reasoning applies pretty broadly.
Weighted averages….
Characterizing expected value for risk-neutral agents. Decision trees [Radiant has one] formalize this idea. We can also just compute them.
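As a quick sketch of "just computing" an expected value (the payoffs and probabilities below are hypothetical):

```r
# Expected value as a probability-weighted average of payoffs.
# These payoffs and probabilities are made up for illustration.
payoffs <- c(100, 40, -25)
probs   <- c(0.2, 0.5, 0.3)   # non-negative coefficients that sum to one
EV <- sum(probs * payoffs)    # the weighted average
EV                            # 0.2*100 + 0.5*40 + 0.3*(-25) = 32.5
```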
A probability distribution is, of necessity, two-dimensional: values paired with probabilities. Our core concept is a probability distribution just as above. These come in two forms for two types of variables [discrete (qualitative) and continuous (quantitative)] and can be either:
Distributions are nouns.
Sentences are incomplete without verbs – parameters.
We need both; it is for this reason that the previous slide is true.
We do not always have a grounding for either the name or the parameter.
For now, we will work with univariate distributions though multivariate distributions do exist.
The differences are sums versus integrals. Why?
The probability of exactly any given value is zero on a true continuum.
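The sum-versus-integral distinction can be verified directly; the Binomial(10, 0.3) and standard normal below are arbitrary illustrative choices:

```r
# Discrete: the mass function sums to one over its support.
total.mass <- sum(dbinom(0:10, size = 10, prob = 0.3))
total.mass                                          # 1
# Continuous: the density integrates to one...
total.density <- integrate(dnorm, lower = -Inf, upper = Inf)$value
total.density                                       # ~1
# ...and any exact point carries zero probability.
pnorm(1) - pnorm(1)                                 # P(X = 1) = 0
```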
Probability distributions are mathematical formulae expressing likelihood for some set of qualities or quantities.
Like a proper English sentence, both are required.
Important
For our purposes, it is a systematic description of a phenomenon that shares important and essential features of that phenomenon. Models frequently give us leverage on problems in the absence of alternative approaches.
library(dplyr)
library(ggplot2)
library(patchwork)
Unif <- data.frame(x = seq(0, 1, by = 0.005)) %>%
  mutate(p.x = punif(x), d.x = dunif(x))
p1 <- ggplot(Unif) + aes(x=x, y=p.x) + geom_step() + labs(title="Distribution Function [cdf/cmf]") + theme_minimal()
p2 <- ggplot(Unif) + aes(x=x, y=d.x) + geom_step() + labs(title="Density Function [pdf/pmf]") + theme_minimal()
p2 + p1
f(x|\mu,\sigma^{2}) = \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp \left[ -\frac{1}{2} \left(\frac{x - \mu}{\sigma}\right)^{2}\right]
The normal distribution is the workhorse of statistics. Key features:
The generic z-transformation applied to a variable x centers [mean \approx 0] and scales [std. dev. \approx variance \approx 1] to z_{x} for population parameters. In this case, two things are important.
This is the idea behind there being only one normal table in a statistics book.
The \mu and \sigma are presumed known.
z = \frac{x - \mu}{\sigma}
The scale command in R does this for a sample.
z = \frac{x - \overline{x}}{s_{x}} where \overline{x} is the sample mean of x and s_{x} is the sample standard deviation of x.
In samples, the 0 and 1 are exact; these are features of the mean and degrees of freedom. If I know the mean and any n-1 observations, the n^{th} observation is exactly the value such that the deviations add up to zero/cancel out.
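A small sketch of scale at work on a hypothetical sample:

```r
set.seed(42)                          # hypothetical sample of 100 incomes
x <- rnorm(100, mean = 50000, sd = 10000)
z <- as.numeric(scale(x))             # (x - mean(x)) / sd(x)
mean(z)                               # 0 (up to floating point)
sd(z)                                 # exactly 1
all.equal(z, (x - mean(x)) / sd(x))   # TRUE
```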
Suppose earnings in a community have mean 55,000 and standard deviation 10,000. This is in dollars. Suppose I earn 75,000 dollars. First, if we take the top part of the fraction in the z equation, we see that I earn 20,000 dollars more than the average (75000 - 55000). Finishing the calculation of z, I would divide that 20,000 dollars by 10,000 dollars per standard deviation. Let’s show that.
z = \frac{75000\ \text{dollars} - 55000\ \text{dollars}}{10000\ \text{dollars per SD}} = +2\ \text{SD}.
I am 2 standard deviations above the average (the +) earnings. All z does is re-scale the original data to standard deviations with zero as the mean. The metric is the standard deviation.
Suppose I earn 35,000. That makes me 20,000 below the average and gives me a z score of -2. I am 2 standard deviations below average (the -) earnings.
z is an easy way to assess symmetry.
The first six rows of 1,000 simulated incomes with their z-scores:

  Hypo.Income   z.Income
1    55994.88  0.1198684
2    46199.58 -0.8675543
3    44239.77 -1.0651147
4    49542.55 -0.5305633
5    62587.74  0.7844678
6    52400.56 -0.2424600

Tabulating the sign of each z-score, 495 are negative (-1) and 505 are positive (1), roughly symmetric around zero:

 -1   1
495 505
The Michelin tire company has developed a revolutionary new type of steel-belted radial tire. After extensive testing, the population of tire lives is believed to be well represented by a normal distribution with mean tire life \mu = 96,000 miles and standard deviation \sigma = 12,000 miles. The company plans to offer a warranty providing for replacement tires if the original tires fail to last through the warranty period. Before embarking on an in-depth analysis of the warranty problem, we will first warm up with a few standard normal probability calculations.
Probability calculator
Distribution: Normal
Mean : 0
St. dev : 1
Lower bound : -0.89
Upper bound : 1.13
P(X < -0.89) = 0.187
P(X > -0.89) = 0.813
P(X < 1.13) = 0.871
P(X > 1.13) = 0.129
P(-0.89 < X < 1.13) = 0.684
1 - P(-0.89 < X < 1.13) = 0.316
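The same numbers fall out of pnorm, which works on the standard normal scale when mean and sd are left at their defaults:

```r
pnorm(-0.89)                  # P(X < -0.89) ~ 0.187
1 - pnorm(-0.89)              # P(X > -0.89) ~ 0.813
pnorm(1.13)                   # P(X <  1.13) ~ 0.871
pnorm(1.13) - pnorm(-0.89)    # P(-0.89 < X < 1.13) ~ 0.684
```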
Probability calculator
Distribution: Normal
Mean : 0
St. dev : 1
Lower bound : 0.5
Upper bound : 0.995
P(X < 0) = 0.5
P(X > 0) = 0.5
P(X < 2.576) = 0.995
P(X > 2.576) = 0.005
P(0 < X < 2.576) = 0.495
1 - P(0 < X < 2.576) = 0.505
This empirical rule says that about 95% of values fall within plus or minus two standard deviations of the mean; in this case, 72,000 to 120,000 miles.
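A one-line check of the empirical rule for the tire example:

```r
# P(72,000 < X < 120,000) for X ~ Normal(96000, 12000): about 95%.
pnorm(120000, mean = 96000, sd = 12000) - pnorm(72000, mean = 96000, sd = 12000)
```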
Probability calculator
Distribution: Normal
Mean : 96000
St. dev : 12000
Lower bound : 80000
Upper bound : Inf
P(X < 80000) = 0.091
P(X > 80000) = 0.909
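Reproducing the calculator output in R:

```r
# P(a tire fails before an 80,000-mile warranty period).
pnorm(80000, mean = 96000, sd = 12000)        # ~ 0.091
1 - pnorm(80000, mean = 96000, sd = 12000)    # ~ 0.909
(80000 - 96000) / 12000                       # the same question as a z-score: -1.33
```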
Suppose the variable of interest is discrete and takes only two values: yes and no. For example, is a customer satisfied with the outcomes of a given service visit?
For each individual, because the probability of yes (1), \pi, and the probability of no (0), 1-\pi, must sum to one, we can write:
f(x|\pi) = \pi^{x}(1-\pi)^{1-x}
For multiple identical trials, we have the Binomial:
f(x|n,\pi) = {n \choose x} \pi^{x}(1-\pi)^{n-x} where {n \choose x} = \frac{n!}{x!(n-x)!}
Binomial
Informal surveys suggest that 15% of Essex shopkeepers will not accept Scottish pounds. There are approximately 200 shops in the general High Street square. My future boss [and former student] scammed me outta 10 pounds over this mess a few months ago, traded me a 20 Scots for an English tenner but what else was I gonna do….
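A sketch of the binomial mechanics for this example, taking n = 200 shops and \pi = 0.15 as given:

```r
n <- 200; p <- 0.15
n * p                               # expected number refusing: 30
sqrt(n * p * (1 - p))               # standard deviation ~ 5.05
dbinom(30, size = n, prob = p)      # P(exactly 30 shops refuse)
1 - pbinom(39, size = n, prob = p)  # P(40 or more refuse)
```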
Interestingly, any given observation has a 50-50 chance of being over or under the median. Suppose that I have five data points.
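With five independent observations, the number landing above the median is Binomial(5, 0.5); for example:

```r
dbinom(0:5, size = 5, prob = 0.5)   # distribution of the count above the median
0.5^5                               # P(all five land above the median) = 0.03125
dbinom(5, size = 5, prob = 0.5)     # the same value via the mass function
```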
Everything else.
How many failures before the first success? The geometric distribution is defined exclusively by p. In each case, (1-p) happens k times. Then, on the (k+1)^{st} try, p. Note that 0 failures can happen…
Pr(y=k) = (1-p)^{k}p
Suppose any startup has a p=0.1 chance of success. How many failures?
Suppose any startup has a p=0.1 chance of success. How many failures for the average/median person?
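The geometric answers, sketched in R with p = 0.1:

```r
p <- 0.1
(1 - p) / p            # mean number of failures before success: 9
qgeom(0.5, prob = p)   # median number of failures: 6
dgeom(0, prob = p)     # P(success on the very first try) = 0.1
```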
We could also do something like.
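For instance, a quick simulation check of the geometric answers (the sample size is arbitrary):

```r
set.seed(123)
fails <- rgeom(10000, prob = 0.1)   # failures before the first success
mean(fails)                         # close to the theoretical mean of 9
median(fails)                       # close to the theoretical median of 6
```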
How many failures before the r^{th} success? In each case, (1-p) happens k times. Then, on the (k+r)^{th} trial, we get our r^{th} success. Note that 0 failures can happen…
Pr(y=k) = {k+r-1 \choose r-1}(1-p)^{k}p^{r}
I need to make 5 sales to close for the day. How many potential customers will I have to see to get those five sales when each customer purchases with probability 0.2?
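The headline numbers can be sketched directly; with r = 5 required sales and p = 0.2:

```r
r <- 5; p <- 0.2
r * (1 - p) / p                     # expected failures: 20
r / p                               # expected total customers (failures + sales): 25
qnbinom(0.5, size = r, prob = p)    # median number of failures
```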
library(dplyr)
library(ggplot2)
library(patchwork)
DF <- data.frame(Customers = c(0:70)) %>%
  mutate(m.Customers = dnbinom(Customers, size=5, prob=0.2),
         p.Customers = pnbinom(Customers, size=5, prob=0.2))
pl1 <- DF %>% ggplot() + aes(x=Customers) + geom_line(aes(y=p.Customers))
pl2 <- DF %>% ggplot() + aes(x=Customers) + geom_point(aes(y=m.Customers))
pl2 + pl1

Poisson
Take a binomial with p very small and let n \rightarrow \infty. We get the Poisson distribution for a count y, given an arrival rate \lambda specified in events per period.
f(y|\lambda) = \frac{\lambda^{y}e^{-\lambda}}{y!}
FAA Decision: Expend or do not expend scarce resources investigating claimed staffing shortages at the Cleveland Air Route Traffic Control Center.
Essential facts: The Cleveland ARTCC is the US’s busiest in routing cross-country air traffic. In mid-August of 1998, it was reported that the first week of August experienced 3 errors in a one week period; an error occurs when flights come within five miles of one another by horizontal distance or 2000 feet by vertical distance. The Controller’s union claims a staffing shortage though other factors could be responsible. 21 errors per year (21/52 errors per week) has been the norm in Cleveland for over a decade.
What would you do and why? Not impossible
After analyzing the initial data, you discover that the first two weeks of August have experienced 6 errors. What would you now decide? Well, the probability of three or more errors in a single week is 0.0081342. The probability of that happening twice, at random, is that squared. We have a problem.
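Where the 0.0081342 comes from: the historical rate is 21 errors per year, i.e. \lambda = 21/52 per week, and we want the probability of three or more errors in one week:

```r
lambda <- 21 / 52              # historical weekly error rate
1 - ppois(2, lambda)           # P(3 or more errors in a week) ~ 0.0081342
(1 - ppois(2, lambda))^2       # two such weeks in a row, independently
```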
In the geometric example, I was concerned with sales. I might also want to generate revenues because I know the rough mean and standard deviation of sales. Combining such things together forms the basis of a Monte Carlo simulation. More on this in a bit, it is known as calibration.
Customers arrive at a rate of 7 per hour. You convert customers to buyers at a rate of 85%. Buyers spend, on average 600 dollars with a standard deviation of 150 dollars.
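A minimal Monte Carlo sketch of hourly revenue; the Poisson, binomial, and normal choices here are modeling assumptions layered on the stated facts:

```r
set.seed(42)
n.sims <- 10000
customers <- rpois(n.sims, lambda = 7)                      # arrivals per hour
buyers    <- rbinom(n.sims, size = customers, prob = 0.85)  # conversions
# hourly revenue: each buyer spends a Normal(600, 150) amount
revenue <- sapply(buyers, function(b) sum(rnorm(b, mean = 600, sd = 150)))
mean(revenue)   # close to 7 * 0.85 * 600 = 3570
sd(revenue)     # the simulation also gives us the spread
```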
Distributions are how variables and probability relate. They are a graph that we can enter in two ways: from the probability side to solve for values, or from the values side to solve for probabilities. Either way, it is a function of the graph.
Distributions generally have to be sentences.
To this point, the distributions are assumed. These are assumptions we make to gain leverage on a problem because we have little to work with. They are the simplest of models and they are completely or largely data free.
The remainder of the term will take data as given and begin a process known as statistical learning using the core insight that probability distributions, and uncertainty, abound.
This example focuses on calories from FastFood.
First, provide a summary of the mean, standard deviation, and 25th and 75th percentiles of calories for each restaurant chain in the data.
Second, visualize these data using some appropriate visualization method and interpret the relevant visual.
Third, the National Institute of Fast Food as Pure Calories ranks firms by the 75th percentile of calories across menu offerings, from top (ranked 1) to bottom. What are their rankings?
The rest…. Suppose that calories from McDonald's items are said to follow a normal distribution, with mean and standard deviation exactly following the observed data.
What are those values, mean and standard deviation?
Provide a plot of the distribution that those values and the assumption imply.
Provide a plot of the actual data on calories. Does this assumption seem reasonable? Do the data seem symmetric?
Given this information so far, would calories best be described by a mean and standard deviation or by a five-number summary? How should symmetry weigh into your decision?
Provide that summary if you have not already.
There is one very large value in the observed data. What is the item? Use your normal distribution from question 2 to calculate how likely an item of that many calories or more should occur given the distribution. What is the z-score for that item? What does this mean?
Assuming the normal, as given in 1 and 2
DADM : Probability Distributions