03/08/2021


FAIR USE ACT DISCLAIMER:

This site is for educational purposes only. This website may contain copyrighted material, the use of which has not been specifically authorized by the copyright holders. The material is made available on this website as a way to advance teaching, and copyright-protected materials are used to the extent necessary to make this class function in a distance learning environment. Under section 107 of the Copyright Act of 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, education, and research.

The following topics will be covered in this lecture:

- Continuous probability distributions
- The probability density function
- The cumulative distribution function
- The expected value of a continuous random variable
- The expected value of a function of a continuous random variable
- The standard deviation and variance of a continuous random variable

Courtesy of Ania Panorska CC

- Recall that we have a separate kind of model for random experiments in which the units of measurement can be arbitrarily sub-divided.
- A good example to think of is if \( X \) is the daily high temperature in Reno in degrees Celsius.
- If we had a sufficiently accurate thermometer, we could **measure \( X \) to an arbitrary decimal place** and it would make sense.
- \( X \) thus takes today’s weather from the outcome space and **gives us a number in a continuous unit of measurement**.

- A **continuous random variable** is a random variable with an interval (either finite or infinite) of real numbers for its range.
- The range of such a random variable is **uncountably infinite**.
- This is to say that if \( X \) is a **continuous random variable**, there is no possible index set \( \mathcal{I}\subset \mathbb{Z} \) which can enumerate the possible values \( X \) can attain.
- For **discrete random variables**, we could perform this with a possibly **infinite index set**, \( \{x_j\}_{j=1}^\infty \).
- This has to do with how the **infinity of the continuum \( \mathbb{R} \)** is actually **larger than** the **infinity of the counting numbers, \( \aleph_0 \)**; in the **continuum** you can **arbitrarily sub-divide the units of measurement**.
- The fact that a continuous sample space has an uncountably infinite number of outcomes eliminates the option of assigning a probability to each point as we did in the discrete case with the mass function.

- We will instead look empirically at how we can construct a continuous probability as a **density**.
- Suppose an electronic surveillance monitor is turned on briefly at the beginning of every hour and has a \( 0.905 \) probability of working properly, regardless of how long it has remained in service.
- Let the random variable \( X \) denote the hour at which the monitor **first fails** and \( A_k \) represent the event that the monitor fails in the \( k \)-th hour.
- Then the probability mass \( f(k) \) is the product of \( k \) individual probabilities: \[ \begin{align} f(k) &= P(X= k)\\ &= P\left( A_k \cap \bigcap_{i=1}^{k-1}\overline{A}_i \right)\\ &=(0.905)^{k-1} \times (0.095) \end{align} \] for any given \( k=1, 2, \cdots \).
- We plot the histogram for the probability mass function to the right
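The mass function above is easy to check numerically. The following is a minimal sketch (the function name `f` and the 200-hour horizon are illustrative choices, not from the source) confirming that the masses account for essentially all of the probability:

```python
def f(k):
    """P(X = k): the monitor works for k - 1 hours, then fails in hour k."""
    return 0.905 ** (k - 1) * 0.095

# Summing the masses over a long horizon should account for (almost) all
# probability, since the geometric tail 0.905**k shrinks rapidly.
total = sum(f(k) for k in range(1, 201))
print(round(total, 6))  # close to 1
```

This also illustrates why the spikes of \( f(k) \) can stand in for unit-width rectangles whose areas sum to \( 1 \).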

Courtesy of Larsen & Marx. *An Introduction to Mathematical Statistics and Its Applications*. 6th Edition.

- Now we superimpose the exponential curve \( y = 0.1e^{-0.1x} \) onto the histogram.

Courtesy of Larsen & Marx. *An Introduction to Mathematical Statistics and Its Applications*. 6th Edition.

- Notice how closely the area under the curve approximates the area of the bars.
- It follows that the probability that \( X \) lies in some given interval will be numerically similar to the integral of the exponential curve above that same interval.
- For example, the probability that the monitor fails sometime during the first four hours would be the sum \[ \begin{align} P(1\leq X \leq 4) &= \sum_{k=1}^4 f(k) \\ &= \sum_{k=1}^4 (0.905)^{k-1} \times (0.095)\\ &\approx 0.3292, \end{align} \] which is numerically close to the corresponding integral \( \int_0^4 0.1e^{-0.1x}\,\mathrm{d}x = 1 - e^{-0.4} \approx 0.3297 \).
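The agreement between the discrete sum and the integral of the exponential curve can be verified directly; a small sketch:

```python
import math

# Discrete probability that the monitor first fails in hours 1 through 4,
# summed from the pmf f(k) = 0.905**(k-1) * 0.095.
p_sum = sum(0.905 ** (k - 1) * 0.095 for k in range(1, 5))

# Continuous approximation: integrate y = 0.1 * exp(-0.1 x) over [0, 4],
# which has the closed form 1 - exp(-0.4).
p_int = 1.0 - math.exp(-0.4)

print(round(p_sum, 4), round(p_int, 4))  # 0.3292 vs 0.3297
```

The two values differ by less than \( 0.001 \), matching the visual impression that the curve's area tracks the bars' area.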

- We can generally use the idea of fitting a continuous probability function to approximate an integer-valued discrete probability model.
- The “trick” is to replace the spikes that define \( f(x) \) with rectangles whose heights are \( f(x) \) and whose widths are \( 1 \).
- Doing that makes the sum of the areas of the rectangles corresponding to \( f(x) \) equal to \( 1 \);
- this is the same as the total area under the approximating continuous probability function.
- Because of the equality of those two areas, it makes sense to superimpose (and compare) the “histogram” of \( f(x) \) and the continuous probability function on the same set of axes.
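The equality of the two areas can be checked numerically. This sketch (the truncation points and step size are arbitrary choices made here, not from the source) confirms both areas are \( 1 \):

```python
import math

# Area of the unit-width rectangles of height f(k): the geometric series
# sum_k 0.095 * 0.905**(k-1) equals 0.095 / (1 - 0.905) = 1 exactly.
rect_area = sum(0.905 ** (k - 1) * 0.095 for k in range(1, 500))

# Area under y = 0.1 * exp(-0.1 x) on [0, infinity), approximated with a
# midpoint Riemann sum truncated at x = 1000; the exact integral is 1.
dx = 0.01
curve_area = sum(0.1 * math.exp(-0.1 * (i + 0.5) * dx) * dx
                 for i in range(100_000))

print(round(rect_area, 4), round(curve_area, 4))  # both close to 1
```

Because both total areas equal \( 1 \), superimposing the histogram and the curve on one set of axes is a fair comparison.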
- Imagine that we are forming a **frequency histogram** for the measurements from a random experiment with a **continuous random variable** \( Y \).
- Suppose we have measurements \( y_1,\cdots, y_n \) which we will bin into rectangles over the range for \( Y \).
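A density-normalized frequency histogram of this kind can be sketched as follows (assuming NumPy is available; the sample data and bin count are hypothetical choices for illustration):

```python
import numpy as np

# Hypothetical measurements y_1, ..., y_n from a continuous experiment.
rng = np.random.default_rng(0)
y = rng.exponential(scale=10.0, size=1000)

# density=True scales bar heights so that each bar's AREA is the fraction
# of observations in that bin; the bars' areas then sum to 1, mimicking
# the total area under a probability density.
heights, edges = np.histogram(y, bins=20, density=True)
widths = np.diff(edges)

print(round(float(np.sum(heights * widths)), 6))  # total bar area is 1
```

This is the empirical analogue of the unit total area under a continuous probability function.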