Lesson 1.10: The Continuous Toolbox

We now fill our continuous toolbox with the three most essential distributions for modeling measurements. Each is a specialized tool for a different kind of real-world scenario: the Uniform for 'anything can happen,' the Exponential for 'waiting for something to happen,' and the Gamma for 'waiting for several things to happen.' These are the building blocks of continuous risk models.

Tool #1: The Uniform Distribution - The Equalizer

The Job: Modeling a situation where all outcomes in a range are equally likely.
Think of a random number generator. Any number is as likely as another.

The Blueprint: U(a, b)

The PDF is a flat line, a rectangle. The total area must be 1, so the height must be 1 / (width).

PDF: f(x) = \frac{1}{b-a} \quad \text{for } x \in [a, b]

CDF: A straight line (a ramp) rising from 0 to 1 over the range.

F(x) = \frac{x-a}{b-a} \quad \text{for } x \in [a, b]

The Specs: Uniform Moments

E[X] = \frac{a+b}{2}
\text{Var}(X) = \frac{(b-a)^2}{12}
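These moments can be checked numerically with scipy's built-in helpers. A quick sketch, using U(2, 5) as an arbitrary example:

```python
from scipy.stats import uniform

# U(2, 5): in scipy terms, loc = a = 2 and scale = b - a = 3
mean_val = uniform.mean(loc=2, scale=3)  # (a + b) / 2 = 3.5
var_val = uniform.var(loc=2, scale=3)    # (b - a)^2 / 12 = 0.75

print(mean_val, var_val)
```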

Key Application: The Foundation of Simulation

The Uniform distribution is the unsung hero of computational statistics. Every time you run a Monte Carlo simulation (Module 5), the computer starts by generating millions of random numbers from U(0, 1). It then uses a technique called the "Inverse Transform Method" to convert these uniform draws into samples from more complex distributions like the Normal. It is the fundamental building block of randomness in computing.
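The Inverse Transform Method is easy to demonstrate for the Exponential distribution, whose CDF can be inverted by hand. A minimal sketch (the seed and rate are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for reproducibility

# Step 1: the computer's raw randomness -- draws from U(0, 1)
u = rng.uniform(size=100_000)

# Step 2: push the uniforms through the inverse of the target CDF.
# For Exp(lambda), F(x) = 1 - e^(-lambda * x),
# so F_inverse(u) = -ln(1 - u) / lambda.
lam = 2.0
samples = -np.log(1 - u) / lam

# The sample mean should land close to the theoretical E[X] = 1/lambda = 0.5
print(samples.mean())
```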

In Python

from scipy.stats import uniform
# PDF at x=0.5 for a U(0,1) distribution
pdf_val = uniform.pdf(x=0.5, loc=0, scale=1) # loc=a, scale=b-a

Tool #2: The Exponential Distribution - The Waiting Time Model

The Job: Modeling the time until the *next* random event occurs.
Think: time between customer arrivals, time until a server fails, time until the next trade signal.

The Blueprint: Exp(λ)

Governed by a single "rate" parameter \lambda (e.g., 5 customers per hour).

PDF: f(x) = \lambda e^{-\lambda x} \quad \text{for } x \ge 0

CDF: F(x) = 1 - e^{-\lambda x} \quad \text{for } x \ge 0

The Specs: Exponential Moments

The mean waiting time is the reciprocal of the rate.

E[X] = \frac{1}{\lambda}
\text{Var}(X) = \frac{1}{\lambda^2}
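As a quick sanity check, scipy reports the same moments (remembering that scipy parameterizes by scale = 1/\lambda; the rate here is an arbitrary example):

```python
from scipy.stats import expon

lam = 5.0  # e.g. 5 arrivals per hour
# scipy uses scale = 1/lambda
print(expon.mean(scale=1/lam))  # 1/lambda = 0.2 hours between arrivals
print(expon.var(scale=1/lam))   # 1/lambda^2 = 0.04
```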

Key Property: The Memoryless Nature

This is the Exponential's most famous, and most counter-intuitive, property: the time elapsed so far has no impact on the future. Formally, P(X > s+t | X > s) = P(X > t).

Analogy: If a lightbulb is modeled by the exponential distribution, and it has already been on for 1000 hours, the probability it will last another 100 hours is the *exact same* as the probability a brand new bulb would last 100 hours. The bulb doesn't "get tired." This makes it a great model for events that have no "wear-and-tear," like the arrival of a packet on a network.
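The memoryless identity can be verified numerically via the survival function P(X > x), which scipy exposes as `sf`. A sketch with arbitrary values of s, t, and the rate:

```python
from scipy.stats import expon

lam = 0.5        # arbitrary rate
s, t = 3.0, 1.0  # arbitrary elapsed time s and extra time t

# By the definition of conditional probability:
# P(X > s+t | X > s) = P(X > s+t) / P(X > s)
conditional = expon.sf(s + t, scale=1/lam) / expon.sf(s, scale=1/lam)

# A "brand new" waiting time:
fresh_start = expon.sf(t, scale=1/lam)

print(conditional, fresh_start)  # identical: the distribution "forgets" s
```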

In Python

from scipy.stats import expon
# P(X <= 0.5) for lambda=2 (rate=2)
# Note: scipy uses scale = 1/lambda
prob = expon.cdf(x=0.5, scale=1/2)

Tool #3: The Gamma Distribution - The Master Waiting Machine

The Job: Modeling the time until the \alpha-th event occurs.
This is a flexible "parent" distribution. The Exponential is just a special case of the Gamma.

Introducing the Gamma Function Γ(α)

To define the Gamma PDF, we need the Gamma function, which generalizes the factorial to all positive real numbers. For an integer n, \Gamma(n) = (n-1)!.

\Gamma(\alpha) = \int_{0}^{\infty} x^{\alpha-1} e^{-x} \, dx
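Python's standard library computes this integral directly, which makes the factorial connection easy to see:

```python
import math

# For a positive integer n, Gamma(n) = (n-1)!
print(math.gamma(5))  # 4! = 24.0
print(math.gamma(1))  # 0! = 1.0

# But it is also defined for non-integers, e.g. Gamma(1/2) = sqrt(pi)
print(math.gamma(0.5))
```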

The Blueprint: Gamma(α, λ)

Defined by a shape parameter \alpha (the number of events to wait for) and a rate parameter \lambda.

f(x) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-\lambda x} \quad \text{for } x \ge 0

The Specs: Gamma Moments

E[X] = \frac{\alpha}{\lambda}
\text{Var}(X) = \frac{\alpha}{\lambda^2}
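A quick numerical check of these moments, using arbitrary parameter values and scipy's scale = 1/\lambda convention:

```python
from scipy.stats import gamma

alpha, lam = 2.0, 3.0  # arbitrary shape and rate
# scipy uses a = alpha and scale = 1/lambda
print(gamma.mean(a=alpha, scale=1/lam))  # alpha/lambda = 2/3
print(gamma.var(a=alpha, scale=1/lam))   # alpha/lambda^2 = 2/9
```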

Key Role: A Parent to Other Distributions

The Gamma distribution's true power is its relationship to other key statistical tools:

  • If \alpha = 1, the Gamma distribution is the Exponential distribution: \text{Gamma}(1, \lambda) \equiv \text{Exp}(\lambda).
  • If \alpha = \nu/2 and \lambda = 1/2, the Gamma distribution is the Chi-Squared (\chi^2_{\nu}) distribution. This is a critical link we will use to justify statistical tests in Module 3.
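Both identities can be confirmed by comparing PDFs on a grid of points (the grid and parameter values below are arbitrary choices):

```python
import numpy as np
from scipy.stats import gamma, expon, chi2

x = np.linspace(0.1, 5.0, 50)  # arbitrary evaluation grid

# Gamma(1, lambda) is exactly Exp(lambda)
lam = 2.0
print(np.allclose(gamma.pdf(x, a=1, scale=1/lam),
                  expon.pdf(x, scale=1/lam)))  # True

# Gamma(nu/2, 1/2) is exactly Chi-Squared with nu degrees of freedom
nu = 4
print(np.allclose(gamma.pdf(x, a=nu/2, scale=2),  # lambda = 1/2 => scale = 2
                  chi2.pdf(x, df=nu)))            # True
```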

In Python

from scipy.stats import gamma
# PDF for Gamma(alpha=2, lambda=3)
# Note: scipy uses scale = 1/lambda
pdf_val = gamma.pdf(x=1.5, a=2, scale=1/3)

What's Next? The Continuous Fingerprint

We now have a toolbox of continuous distributions. But how do we prove their properties, such as their means and variances? How do we show that the sum of two independent Gamma variables (with the same rate) is also a Gamma?

Just as before, we need to upgrade our "master tool." The next lesson will introduce the Moment Generating Function (MGF) for Continuous Variables, where we will once again swap summation for integration to unlock the secrets of these distributions.