Hypothesis Tests for One or Two Variances or Standard Deviations

Chi-square tests and F-tests for a variance or standard deviation both require that the original population be normally distributed.

Testing a Claim about a Variance or Standard Deviation

To test a claim about the value of the variance or the standard deviation of a population, use the test statistic below, which follows a chi-square distribution with $n-1$ degrees of freedom when the null hypothesis is true.

 $\chi^2 = \dfrac{(n-1)s^2}{\sigma_0^2}$

The television habits of 30 children were observed. The sample mean was found to be 48.2 hours per week, with a standard deviation of 12.4 hours per week. Test the claim that the standard deviation was at least 16 hours per week.

• The hypotheses are:
$H_0: \sigma = 16$
$H_a: \sigma < 16$
• We shall choose   $\alpha = 0.05$.
• The test statistic is   $\chi^2 = \dfrac{(n-1)s^2}{\sigma_0^2} = \dfrac{(30-1)12.4^2}{16^2} = 17.418$.
• The p-value is   $p = \chi^2\text{cdf}(0,17.418,29) = 0.0447$.
• Since   $p < \alpha$,   we reject $H_0$.
• There is sufficient evidence to conclude that the standard deviation of weekly television watching was less than 16 hours.
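As a check, the steps above can be reproduced with SciPy (assuming `scipy` is available); `chi2.cdf` plays the role of the calculator's $\chi^2\text{cdf}$:

```python
from scipy.stats import chi2

# Sample data from the example above
n = 30        # number of children observed
s = 12.4      # sample standard deviation (hours per week)
sigma0 = 16   # hypothesized standard deviation under H0

# Chi-square test statistic with n - 1 degrees of freedom
chi_sq = (n - 1) * s**2 / sigma0**2

# Left-tailed p-value: P(chi-square <= test statistic)
p_value = chi2.cdf(chi_sq, df=n - 1)

print(f"chi^2 = {chi_sq:.3f}")   # ~ 17.418
print(f"p = {p_value:.4f}")      # ~ 0.0447
```

Since the p-value (about 0.0447) is below $\alpha = 0.05$, the code agrees with the rejection of $H_0$ above.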

Testing the Difference of Two Variances or Two Standard Deviations

Two equal variances would satisfy the equation   $\sigma_1^2 = \sigma_2^2$,   which is equivalent to   $\dfrac{ \sigma_1^2}{\sigma_2^2} = 1$.   Since sample variances are related to chi-square distributions, and the ratio of two independent chi-square random variables (each divided by its degrees of freedom) follows an F-distribution, we can use the F-distribution to test against a null hypothesis of equal variances. Note that this approach does not allow us to test for a particular magnitude of difference between variances or standard deviations.

Given sample sizes of $n_1$ and $n_2$, the test statistic will have   $n_1-1$   numerator and   $n_2-1$   denominator degrees of freedom, and is given by the following formula.

 $F = \dfrac{s_1^2}{s_2^2}$

If the larger variance (or standard deviation) is present in the first sample, then the test is right-tailed. Otherwise, the test is left-tailed. Most printed tables of the F-distribution give only right-tail critical values, which is why textbooks often place the larger variance in the numerator; technology can evaluate either tail directly, so that convention is unnecessary there.

Samples from two makers of ball bearings are collected, and their diameters (in inches) are measured, with the following results:

• Acme: $n_1 = 80$, $s_1 = 0.0395$
• Bigelow: $n_2 = 120$, $s_2 = 0.0428$

Assuming that the diameters of the bearings from both companies are normally distributed, test the claim that there is no difference in the variation of the diameters between the two companies.

• The hypotheses are:
$H_0: \sigma_1 = \sigma_2$
$H_a: \sigma_1 \neq \sigma_2$
• We shall choose   $\alpha = 0.05$.
• The test statistic is   $F = \dfrac{s_1^2}{s_2^2} = \dfrac{0.0395^2}{0.0428^2} = 0.8517$.
• Since the first sample had the smaller standard deviation, this is a left-tailed test. The p-value is   $p = \operatorname{Fcdf}(0,0.8517,79,119) = 0.2232$.
• Since   $p > \alpha$,   we fail to reject $H_0$.
• There is insufficient evidence to conclude that the diameters of the ball bearings in the two companies have different standard deviations.
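The same computation can be sketched with SciPy (assuming `scipy` is available); `f.cdf` corresponds to the calculator's $\operatorname{Fcdf}$:

```python
from scipy.stats import f

# Sample statistics from the example above
n1, s1 = 80, 0.0395    # Acme
n2, s2 = 120, 0.0428   # Bigelow

# Test statistic with (n1 - 1, n2 - 1) degrees of freedom
F = s1**2 / s2**2

# The first sample has the smaller standard deviation, so the test is left-tailed
p_value = f.cdf(F, dfn=n1 - 1, dfd=n2 - 1)

print(f"F = {F:.4f}")        # ~ 0.8517
print(f"p = {p_value:.4f}")  # ~ 0.2232
```

Since the p-value (about 0.2232) exceeds $\alpha = 0.05$, the code agrees with the failure to reject $H_0$ above.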

If the two samples had been reversed in our computations, we would have obtained the test statistic   $F = 1.1741$,   and performing a right-tailed test, found the p-value   $p = \operatorname{Fcdf}(1.1741,\infty,119,79) = 0.2232$.   Of course, the answer is the same.
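This symmetry can be verified numerically: reversing the ratio inverts the test statistic and swaps the degrees of freedom, so the left-tail area of one test equals the right-tail area of the other. A minimal sketch (assuming `scipy` is available):

```python
from scipy.stats import f

F_left = 0.0395**2 / 0.0428**2    # Acme over Bigelow
F_right = 0.0428**2 / 0.0395**2   # Bigelow over Acme (= 1 / F_left)

p_left = f.cdf(F_left, 79, 119)   # left-tailed test, df = (79, 119)
p_right = f.sf(F_right, 119, 79)  # right-tailed test, df = (119, 79)

print(round(p_left, 4), round(p_right, 4))  # both ~ 0.2232
```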