Type I and Type II Errors


Type I Error: An incorrect rejection of a true null hypothesis. For example, the researcher reports a statistically significant difference between the control group and the experimental group attributable to the intervention program when, in fact, no such difference exists. The same error can occur in correlational tests.


Type II Error: An incorrect failure to reject (acceptance of) a false null hypothesis. For example, the researcher reports no statistically significant difference between the control and experimental groups when the intervention did, in fact, have an effect. In other words, an effect truly exists, but the researcher reports that there is none.


contributed by: Tom Fox, WCSU Cohort 8

Lawrence, S., Meyer, G., & Guarino, A.J. (2017). Applied multivariate research: Design and interpretation. Thousand Oaks, CA: Sage Publications.
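
To make the two definitions above concrete, here is a small simulation sketch (my own illustration, not drawn from the cited text; the group sizes, the 0.5-standard-deviation effect, and the .05 significance level are arbitrary assumptions). It repeatedly runs a two-group t-test, once under a true null hypothesis and once under a real effect, and counts how often each kind of error occurs.

# Illustrative sketch: estimating Type I and Type II error rates by simulation.
# All numbers below (n = 30 per group, effect = 0.5 SD, alpha = .05) are
# arbitrary assumptions chosen for the example, not values from the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(826)
alpha = 0.05
n_per_group = 30
n_trials = 10_000

type1 = 0  # significant result reported when no real difference exists
type2 = 0  # no significant result reported when a real difference exists

for _ in range(n_trials):
    # Null hypothesis true: both groups come from the same population.
    control = rng.normal(0.0, 1.0, n_per_group)
    experimental = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(control, experimental).pvalue < alpha:
        type1 += 1  # Type I error

    # Null hypothesis false: the intervention shifts the mean by 0.5 SD.
    control = rng.normal(0.0, 1.0, n_per_group)
    experimental = rng.normal(0.5, 1.0, n_per_group)
    if stats.ttest_ind(control, experimental).pvalue >= alpha:
        type2 += 1  # Type II error

print(f"Estimated Type I error rate:  {type1 / n_trials:.3f} (close to alpha = {alpha})")
print(f"Estimated Type II error rate: {type2 / n_trials:.3f} (depends on effect size and n)")

In this sketch the Type I rate settles near the chosen alpha, while the Type II rate depends on the true effect size and the sample size, which is why the two errors have to be considered together.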

A cross-disciplinary comparison of Type I and Type II errors and setting significance levels

I found the following comparison illustrative both of how Type I and Type II errors arise in practical application and of the fact that changing the significance level (the tests we have run in ED826 have mostly used a level of .05) carries consequences that differ by discipline.

The significance level is under the direct control of the researcher and is set relative to the consequences of making a Type I error. Conventionally, significance is set at .05 or .01, but lower levels are used, for example, in medical research, where the implications of committing a Type I error are potentially grave. Because of the potential harm to patients if a drug under testing is declared more effective than an existing drug when in fact it is not (rejecting the null hypothesis when it is true), significance levels in medical research are set very low (e.g., .001) to minimize the chance of committing a Type I error (Hinkle, Wiersma, & Jurs, 2003, p. 300).
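
A quick simulation sketch shows why this works (my own illustration under assumed numbers, not an example from Hinkle, Wiersma, & Jurs): when the new drug is truly no better than the existing one, the share of studies that falsely declare it superior is roughly equal to whatever significance level the researcher chooses, so moving from .05 to .001 sharply reduces the expected number of false claims.

# Illustrative sketch: with no real drug difference, the false-rejection rate
# tracks the chosen significance level. Sample sizes are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients = 50      # per group
n_trials = 20_000

pvalues = np.array([
    stats.ttest_ind(rng.normal(0.0, 1.0, n_patients),                 # existing drug
                    rng.normal(0.0, 1.0, n_patients)).pvalue          # new drug, truly no better
    for _ in range(n_trials)
])

for alpha in (0.05, 0.001):
    print(f"alpha = {alpha}: false rejection rate ~ {np.mean(pvalues < alpha):.4f}")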

By comparison, the example given in the Hinkle, Wiersma, & Jurs (2003) text is an education researcher investigating two different program types and their respective effects on student achievement. If the cost of the programs and the demands on teacher time are the same or similar, the consequence of making a Type I error (implementing a program that is not actually better than the other) is minimal. Behavioral science researchers can therefore use higher significance levels (e.g., .10) to avoid making Type II errors (e.g., not implementing a superior program) (p. 300).
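
The other side of the trade-off can be sketched the same way (again my own illustration with assumed numbers; 40 students per program and a true advantage of 0.4 standard deviations are arbitrary choices). When one program really is better, a stricter significance level means more Type II errors, that is, more superior programs that go undetected.

# Illustrative sketch: a real but modest effect is missed more often as the
# significance level gets stricter. All numbers are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_students = 40       # per program
effect_size = 0.4     # true advantage of program B, in standard deviations
n_trials = 20_000

pvalues = np.array([
    stats.ttest_ind(rng.normal(0.0, 1.0, n_students),                      # program A
                    rng.normal(effect_size, 1.0, n_students)).pvalue       # program B, truly better
    for _ in range(n_trials)
])

for alpha in (0.10, 0.05, 0.001):
    print(f"alpha = {alpha}: Type II error rate ~ {np.mean(pvalues >= alpha):.3f}")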

Thus, setting the “right” significance level is really a matter of mitigating risk, and in some fields, like medicine, that risk is greater than in others.

References: Hinkle, D.E., Wiersma, W., & Jurs, S.G. (2003). Applied statistics for the behavioral sciences (5th ed.). Boston, MA: Houghton Mifflin Company.

contributed by Emily Kilbourn