Hypothesis testing is a widespread scientific process used across statistical and social science disciplines. In statistics, a result is statistically significant (or has statistical significance) when the p-value of a hypothesis test falls below the defined significance level. The p-value is the probability of obtaining a test statistic or sample result as extreme as, or more extreme than, the one observed in the study, assuming the null hypothesis is true. The significance level, or alpha, tells a researcher how extreme results must be in order to reject the null hypothesis. In other words, if the p-value is less than or equal to the defined significance level (typically denoted by α), the researcher can conclude that the observed data are inconsistent with the null hypothesis, the premise that there is no relationship between the tested variables, and can therefore reject it.
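As a rough sketch, the decision rule above can be illustrated with an exact two-sided binomial test written from scratch. The helper name and the coin-flip numbers (60 heads in 100 flips, testing a fair coin) are illustrative assumptions, not taken from the text:

```python
import math

def binomial_two_sided_pvalue(k, n, p0=0.5):
    """P-value: probability under H0 (success probability = p0) of a
    result as extreme as or more extreme than k successes in n trials.
    'Extreme' is measured here as distance from the expected count."""
    expected = n * p0
    def pmf(x):
        return math.comb(n, x) * p0**x * (1 - p0)**(n - x)
    return sum(pmf(x) for x in range(n + 1)
               if abs(x - expected) >= abs(k - expected))

alpha = 0.05                                  # significance level chosen in advance
p_value = binomial_two_sided_pvalue(60, 100)  # e.g. 60 heads in 100 coin flips
print(f"p-value = {p_value:.4f}")             # ~0.0569
print("reject H0" if p_value <= alpha else "fail to reject H0")
```

Here the p-value (about 0.057) exceeds α = 0.05, so the null hypothesis of a fair coin is not rejected, even though 60 heads may look suspicious at a glance.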

By rejecting or disproving the null hypothesis, a researcher concludes that there is a scientific basis for the belief that there is some relationship between the variables and that the results were not due to sampling error or chance. While rejecting the null hypothesis is a central goal in most scientific studies, it is important to note that rejecting the null hypothesis is not equivalent to proving the researcher's alternative hypothesis.

### Statistically Significant Results and Significance Level

The concept of statistical significance is fundamental to hypothesis testing. In a study that draws a random sample from a larger population in an effort to establish some result that applies to the population as a whole, there is always the potential for the study data to be a product of sampling error or simple coincidence. By determining a significance level in advance and testing the p-value against it, a researcher can decide in a principled way whether to retain or reject the null hypothesis. The significance level, in the simplest of terms, is the threshold probability of incorrectly rejecting the null hypothesis when it is in fact true. This is also known as the type I error rate. The significance level, or alpha, is therefore directly tied to the confidence level of the test: the confidence level equals 1 − α, so the lower the value of alpha, the greater the confidence in a result that rejects the null hypothesis.
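The complementary relationship between alpha and the confidence level can be shown with a one-liner over a few conventional alpha values:

```python
# Confidence level is the complement of the significance level (1 - alpha).
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f} -> confidence level = {1 - alpha:.0%}")
# alpha = 0.10 -> confidence level = 90%
# alpha = 0.05 -> confidence level = 95%
# alpha = 0.01 -> confidence level = 99%
```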

### Type I Errors and Level of Significance

A type I error, or an error of the first kind, occurs when the null hypothesis is rejected when in reality it is true. In other words, a type I error is comparable to a false positive. Type I errors are controlled by defining an appropriate level of significance. Best practice in scientific hypothesis testing calls for selecting a significance level before data collection even begins. The most common significance level is 0.05 (or 5%), which means that there is a 5% probability of rejecting the null hypothesis when it is actually true. This significance level conversely translates to a 95% level of confidence, meaning that over a long series of hypothesis tests in which the null hypothesis is true, roughly 95% would not result in a type I error.
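This long-run interpretation can be checked with a small simulation: repeatedly sample data for which the null hypothesis really is true, run a test at α = 0.05 each time, and count how often the test (wrongly) rejects. The z-test helper, sample size, and trial count below are illustrative choices for the sketch, not prescriptions from the text:

```python
import math
import random

def z_test_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: population mean == mu0,
    with known standard deviation sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
alpha = 0.05
trials = 10_000
# The null hypothesis is true in every trial: data come from N(0, 1).
rejections = sum(
    z_test_pvalue([random.gauss(0, 1) for _ in range(30)]) < alpha
    for _ in range(trials)
)
print(f"Empirical type I error rate: {rejections / trials:.3f}")  # close to 0.05
```

The empirical rejection rate lands near 5%, matching the chosen significance level, and about 95% of the tests correctly fail to reject the true null hypothesis.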