What Level of Alpha Determines Statistical Significance?

Not all results of hypothesis tests are equal. A hypothesis test, or test of statistical significance, typically has a level of significance attached to it. This level of significance is a number typically denoted by the Greek letter alpha. One question that comes up in statistics classes is, “What value of alpha should be used for our hypothesis tests?”

The answer to this question, as with many other questions in statistics, is, “It depends on the situation.” We will explore what we mean by this. Many journals across different disciplines treat alpha = 0.05, or 5%, as the threshold for statistically significant results. But the main point to note is that there is no universal value of alpha that should be used for all statistical tests.

Commonly Used Levels of Significance

The number represented by alpha is a probability, so in theory it can take any value between 0 and 1. In statistical practice, however, only a few values are routinely used: of all levels of significance, 0.10, 0.05, and 0.01 are the ones most commonly chosen for alpha. As we will see, there can be good reasons for using values of alpha other than these common numbers.

Level of Significance and Type I Errors

One consideration against a “one size fits all” value for alpha concerns what this number is the probability of. The level of significance of a hypothesis test is exactly equal to the probability of a Type I error: incorrectly rejecting the null hypothesis when the null hypothesis is actually true. The smaller the value of alpha, the less likely we are to reject a true null hypothesis.
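A small simulation can show this connection directly; the function names, sample size, and distribution here are illustrative assumptions of my own, not part of the article. If the null hypothesis is true and we test at level alpha = 0.05, the test should wrongly reject roughly 5% of the time.

```python
import random
from statistics import NormalDist, fmean

# Sketch, with assumed parameters: simulate many one-sample z-tests on
# data drawn under a TRUE null hypothesis (mean 0, known sigma = 1) and
# count how often a test at level alpha rejects anyway.

random.seed(0)

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a one-sample z-test with known sigma."""
    n = len(sample)
    z = (fmean(sample) - mu0) / (sigma / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05
trials = 10_000
rejections = sum(
    z_test_p_value([random.gauss(0.0, 1.0) for _ in range(30)]) < alpha
    for _ in range(trials)
)
type1_rate = rejections / trials
print(f"Observed Type I error rate: {type1_rate:.3f}")  # close to alpha
```

The observed rejection rate hovers near 0.05, which is exactly the sense in which alpha is the probability of a Type I error.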

In some instances a Type I error is more acceptable than its alternatives. A larger value of alpha, even one greater than 0.10, may be appropriate when a smaller value of alpha would lead to a less desirable outcome.

In medical screening for a disease, compare a test that falsely comes back positive with one that falsely comes back negative. A false positive will cause anxiety for our patient, but it will lead to follow-up tests that determine the verdict of our test was indeed incorrect. A false negative will leave our patient with the incorrect assumption that he does not have a disease when he in fact does, and as a result the disease will not be treated. Given the choice, we would rather have conditions that result in a false positive than a false negative.

In this situation, we would gladly accept a larger value for alpha in exchange for a lower likelihood of a false negative.
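To make this tradeoff concrete, here is a small simulation; the effect size, sample size, and names are illustrative assumptions, not from the article. When a real effect exists, raising alpha lowers the false-negative (Type II) rate.

```python
import random
from statistics import NormalDist, fmean

# Sketch with assumed numbers: the "disease" shifts the true mean to 0.5.
# For each candidate alpha, count how often the test MISSES the effect
# (fails to reject) -- the false-negative / Type II error rate.

random.seed(1)

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a one-sample z-test with known sigma."""
    n = len(sample)
    z = (fmean(sample) - mu0) / (sigma / n ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z)))

trials = 5_000
false_negative_rate = {}
for alpha in (0.01, 0.05, 0.10):
    misses = sum(
        z_test_p_value([random.gauss(0.5, 1.0) for _ in range(20)]) >= alpha
        for _ in range(trials)
    )
    false_negative_rate[alpha] = misses / trials

for alpha, rate in false_negative_rate.items():
    print(f"alpha = {alpha:.2f}: false-negative rate = {rate:.2f}")
```

The most permissive level, alpha = 0.10, misses far fewer real cases than alpha = 0.01, which is exactly the tradeoff described above.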

Level of Significance and P-Values

A level of significance is a value that we set in advance to determine statistical significance. It is the standard against which we compare the calculated p-value of our test statistic. To say that a result is statistically significant at the level alpha just means that the p-value is less than alpha. For instance, at alpha = 0.05, a p-value greater than 0.05 means we fail to reject the null hypothesis, while a p-value less than 0.05 means we reject it.
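The decision rule amounts to a single comparison; a minimal sketch follows, where the function name is my own, not from the article.

```python
def is_significant(p_value, alpha=0.05):
    """Reject the null hypothesis exactly when the p-value is below alpha."""
    return p_value < alpha

# The same p-value can be significant at one level but not a stricter one.
print(is_significant(0.03))              # True at alpha = 0.05
print(is_significant(0.03, alpha=0.01))  # False at the stricter 0.01
```

This is why reporting the p-value itself is more informative than reporting only “significant” or “not significant”: readers can apply whatever level of alpha suits their situation.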

There are some instances in which we would need a very small p-value to reject a null hypothesis. If our null hypothesis concerns something that is widely accepted as true, then there must be a high degree of evidence in favor of rejecting the null hypothesis. This is provided by a p-value that is much smaller than the commonly used values for alpha.


There is no single value of alpha that determines statistical significance. Although numbers such as 0.10, 0.05, and 0.01 are commonly used for alpha, no overriding mathematical theorem says these are the only levels of significance we can use. As with many things in statistics, we must think before we calculate and, above all, use common sense.