Type I and Type II Errors in Statistics
Which Is Worse: Incorrectly Rejecting the Null or the Alternative Hypothesis?

By Courtney Taylor, Ph.D., Professor of Mathematics, Anderson University
Updated July 31, 2017

A Type I error occurs when a statistician rejects the null hypothesis, or statement of no effect, even though the null hypothesis is true. A Type II error occurs when a statistician fails to reject the null hypothesis even though the alternative hypothesis, the statement the test is designed to provide evidence for, is true. Both errors are built into the process of hypothesis testing. Although we would like to make the probability of each error as small as possible, for a fixed sample size reducing one typically increases the other, which raises the question: which of the two errors is more serious to make? The short answer is that it depends on the situation. In some applications a Type I error is preferable to a Type II error, but in others a Type I error is the more dangerous one to make.
To plan a statistical testing procedure properly, one must carefully consider the consequences of both types of errors before deciding whether or not to reject the null hypothesis. We will see examples of both situations in what follows.

Type I and Type II Errors

We begin by recalling the definitions. In most statistical tests, the null hypothesis is the prevailing claim about a population, typically a statement of no particular effect, while the alternative hypothesis is the statement we wish to provide evidence for in our hypothesis test. A test of significance has four possible results:

1. We reject the null hypothesis, and the null hypothesis is true. This is a Type I error.
2. We reject the null hypothesis, and the alternative hypothesis is true. This is a correct decision.
3. We fail to reject the null hypothesis, and the null hypothesis is true. This is a correct decision.
4. We fail to reject the null hypothesis, and the alternative hypothesis is true. This is a Type II error.

The preferred outcome of any hypothesis test is the second or third, in which the correct decision is made and no error occurs. Errors are nevertheless an unavoidable part of the procedure, but understanding how each type arises can help keep both error rates under control.

Core Differences of Type I and Type II Errors

In more colloquial terms, these two kinds of errors correspond to particular results of a testing procedure. With a Type I error, we incorrectly reject the null hypothesis; in other words, our statistical test falsely provides positive evidence for the alternative hypothesis. A Type I error thus corresponds to a "false positive" test result.
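The notion of a Type I error rate can be made concrete with a small simulation. The sketch below (an illustrative example, not from the original article) repeatedly tests a fair coin, so the null hypothesis of fairness is actually true, and counts how often a standard two-sided test at the 5% significance level rejects it anyway. The coin-flipping setup, sample size, and helper names are all assumptions chosen for illustration.

```python
import random

random.seed(42)

def count_heads(n=100):
    """Flip a fair coin n times and return the number of heads."""
    return sum(random.random() < 0.5 for _ in range(n))

def reject_null(heads, n=100, z=1.96):
    """Normal-approximation test of H0: coin is fair (p = 0.5).

    Reject when the head count lies more than z standard deviations
    from the expected value n/2; z = 1.96 gives roughly a 5% level.
    """
    sd = (n * 0.25) ** 0.5  # sqrt(n * p * (1 - p)) under H0
    return abs(heads - n / 2) > z * sd

# The null hypothesis is true in every trial, so every rejection
# is a false positive: a Type I error.
trials = 10_000
false_positives = sum(reject_null(count_heads()) for _ in range(trials))
type_i_rate = false_positives / trials
print(f"Estimated Type I error rate: {type_i_rate:.3f}")
```

Running this yields an estimated Type I error rate close to the chosen significance level of about 0.05, which is exactly what the level of a test means: the probability of rejecting a true null hypothesis.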
On the other hand, a Type II error occurs when the alternative hypothesis is true but we fail to reject the null hypothesis. In this case, our test incorrectly provides evidence against the alternative hypothesis. A Type II error can therefore be thought of as a "false negative" test result. These two errors are complements of one another: together they cover every way a statistical test can go wrong. They differ, however, in their impact when the error goes undiscovered or unresolved.

Which Error Is Worse?

Thinking in terms of false positives and false negatives makes it easier to weigh the two errors against each other, and the answer depends on the stakes. Suppose you are designing a medical screening for a disease. A false positive from a Type I error may cause a patient some anxiety, but it leads to follow-up testing that ultimately reveals the initial result was incorrect. In contrast, a false negative from a Type II error gives a patient the incorrect assurance that he or she does not have the disease when he or she in fact does. As a result of this incorrect information, the disease would go untreated. If doctors had to choose between these two options, a false positive is more desirable than a false negative.

Now suppose that someone is on trial for murder. The null hypothesis here is that the person is not guilty. A Type I error occurs if the person is found guilty of a murder that he or she did not commit, a very serious outcome for the defendant. A Type II error occurs if the jury finds the person not guilty even though he or she committed the murder, a good outcome for the defendant but not for society as a whole. Here we see the value of a judicial system that seeks to minimize Type I errors.
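The Type II error rate can be simulated in the same style as the Type I case. In the sketch below (again an illustrative example with an assumed effect size, not from the original article), the alternative hypothesis is true: the "coin" is biased with p = 0.6, standing in for a patient who actually has the condition being screened for. Each failure to reject the null hypothesis of fairness is a false negative, i.e. a Type II error.

```python
import random

random.seed(7)

def count_heads(p, n=100):
    """Flip a coin with heads-probability p a total of n times."""
    return sum(random.random() < p for _ in range(n))

def reject_null(heads, n=100, z=1.96):
    """Same ~5%-level test of H0: p = 0.5 as before."""
    sd = (n * 0.25) ** 0.5  # standard deviation under H0
    return abs(heads - n / 2) > z * sd

# The alternative is true (p = 0.6), so each failure to reject H0
# is a false negative: a Type II error.
trials = 10_000
misses = sum(not reject_null(count_heads(0.6)) for _ in range(trials))
type_ii_rate = misses / trials
print(f"Estimated Type II error rate (beta): {type_ii_rate:.3f}")
print(f"Estimated power: {1 - type_ii_rate:.3f}")
```

Note that with this sample size and effect size the Type II error rate is far larger than the 5% Type I rate, which illustrates the trade-off discussed above: fixing the significance level says nothing about how often a true effect will be missed; that depends on the sample size and the size of the effect.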