Scientific experiments involve variables, controls, hypotheses, and a host of other concepts and terms that might be confusing.

## Glossary of Science Terms

Here is a glossary of important science experiment terms and definitions:

- **Central Limit Theorem:** States that with a large enough sample, the sample mean will be normally distributed. A normally distributed sample mean is necessary to apply the *t*-test, so if you are planning to perform a statistical analysis of experimental data, it's important to have a sufficiently large sample.
- **Conclusion:** Determination of whether the hypothesis should be accepted or rejected.
- **Control Group:** Test subjects randomly assigned to not receive the experimental treatment.
- **Control Variable:** Any variable that does not change during an experiment. Also known as a **constant variable**.
- **Data** (singular: **datum**): Facts, numbers, or values obtained in an experiment.
- **Dependent Variable:** The variable that responds to the independent variable. The dependent variable is the one being measured in the experiment. Also known as the **dependent measure** or **responding variable**.
- **Double-Blind:** When neither the researcher nor the subject knows whether the subject is receiving the treatment or a placebo. "Blinding" helps reduce biased results.
- **Empty Control Group:** A type of control group that does not receive any treatment, including a placebo.
- **Experimental Group:** Test subjects randomly assigned to receive the experimental treatment.
- **Extraneous Variable:** Extra variables (not independent, dependent, or control variables) that might influence an experiment but are not accounted for, not measured, or beyond control. Examples might include factors you consider unimportant at the time of an experiment, such as the manufacturer of the glassware in a reaction or the color of paper used to make a paper airplane.
- **Hypothesis:** A prediction of whether the independent variable will have an effect on the dependent variable, or a prediction of the nature of that effect.
- **Independence** or **Independently:** When one factor does not exert influence on another. For example, what one study participant does should not influence what another participant does; they make decisions independently. Independence is critical for a meaningful statistical analysis.
- **Independent Random Assignment:** Randomly selecting whether a test subject will be in a treatment or control group.
- **Independent Variable:** The variable that is manipulated or changed by the researcher.
- **Independent Variable Levels:** Changing the independent variable from one value to another (e.g., different drug doses, different amounts of time). The different values are called "levels."
- **Inferential Statistics:** Statistics (math) applied to infer characteristics of a population based on a representative sample from that population.
- **Internal Validity:** When an experiment can accurately determine whether the independent variable produces an effect.
- **Mean:** The average, calculated by adding all the scores and then dividing by the number of scores.
- **Null Hypothesis:** The "no difference" or "no effect" hypothesis, which predicts the treatment will not have an effect on the subject. The null hypothesis is useful because it is easier to assess with a statistical analysis than other forms of a hypothesis.
- **Null Results (Nonsignificant Results):** Results that do not disprove the null hypothesis. Null results don't prove the null hypothesis, because they may simply reflect a lack of power. Some null results are Type 2 errors.
- ***p* < 0.05:** An indication of how often chance alone could account for the effect of the experimental treatment. A value of *p* < 0.05 means that five times out of a hundred, you could expect this difference between the two groups purely by chance. Since the possibility of the effect occurring by chance is so small, the researcher may conclude the experimental treatment did indeed have an effect. Other *p*, or probability, values are possible. The 0.05 (5%) limit is simply a common benchmark of statistical significance.
- **Placebo (Placebo Treatment):** A fake treatment that should have no effect outside the power of suggestion. Example: In drug trials, test patients may be given a pill containing the drug or a placebo, which resembles the drug (pill, injection, liquid) but doesn't contain the active ingredient.
- **Population:** The entire group the researcher is studying. If the researcher cannot gather data from every member of the population, large random samples taken from the population can be used to estimate how the population would respond.
- **Power:** The ability to observe differences or avoid making Type 2 errors.
- **Random** or **Randomness:** Selected or performed without following any pattern or method. To avoid unintentional bias, researchers often use random number generators or flip coins to make selections.
- **Results:** The explanation or interpretation of experimental data.
- **Simple Experiment:** A basic experiment designed to assess whether there is a cause-and-effect relationship or to test a prediction. A fundamental simple experiment might have only one test subject, compared with a controlled experiment, which has at least two groups.
- **Single-Blind:** When either the experimenter or the subject is unaware whether the subject is getting the treatment or a placebo. Blinding the researcher helps prevent bias when the results are analyzed. Blinding the subject prevents the participant from having a biased reaction.
- **Statistical Significance:** Observation, based on the application of a statistical test, that a relationship probably is not due to pure chance. The probability is stated (e.g., *p* < 0.05) and the results are said to be **statistically significant**.
- ***t*-Test:** Common statistical analysis applied to experimental data to test a hypothesis. The *t*-test computes the ratio between the difference between the group means and the standard error of the difference, a measure of the likelihood the group means could differ purely by chance. A rule of thumb is that the results are statistically significant if you observe a difference between the values that is three times larger than the standard error of the difference, but it's best to look up the ratio required for significance on a **t-table**.
- **Type I Error (Type 1 Error):** Occurs when you reject the null hypothesis, but it was actually true. If you perform the *t*-test and set *p* < 0.05, there is less than a 5% chance you could make a Type I error by rejecting the null hypothesis based on random fluctuations in the data.
- **Type II Error (Type 2 Error):** Occurs when you accept the null hypothesis, but it was actually false. The experimental conditions had an effect, but the researcher failed to find it statistically significant.
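The *t*-test entry above describes the test as a ratio: the difference between the group means divided by the standard error of that difference. A minimal sketch of that ratio in Python, using made-up treatment and control measurements (the numbers and group names are illustrative, not from any real experiment):

```python
import math
import statistics

def t_statistic(group_a, group_b):
    """Difference between the group means divided by the
    standard error of the difference (unequal-variance form)."""
    mean_a = statistics.fmean(group_a)
    mean_b = statistics.fmean(group_b)
    se_diff = math.sqrt(statistics.variance(group_a) / len(group_a)
                        + statistics.variance(group_b) / len(group_b))
    return (mean_a - mean_b) / se_diff

# Hypothetical measurements for a treatment group and a control group
treatment = [5.0, 4.8, 5.2, 4.9, 5.1, 4.7]
control = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]

t = t_statistic(treatment, control)
# Here the difference in means (0.9) is roughly eight times the
# standard error of the difference (~0.11), well past the
# three-times rule of thumb; a t-table confirms significance.
print(round(t, 2))  # → 8.33
```

Note that this only computes the test statistic; turning it into an exact *p*-value still requires comparing the ratio against a *t*-table (or a statistics library) with the appropriate degrees of freedom.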