Scientific experiments involve variables, controls, a hypothesis, and a host of other concepts and terms that may be confusing. This is a glossary of important science experiment terms and definitions.

### Glossary of Science Terms

**Central Limit Theorem:** states that with a large enough sample, the distribution of the sample mean is approximately normal, regardless of the shape of the population distribution. An approximately normal sample mean is necessary to apply the *t* test, so if you are planning a statistical analysis of experimental data, it's important to have a sufficiently large sample.
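
As a hedged illustration (a Python sketch with a made-up, skewed population), the theorem can be seen by simulation: individual draws are far from normal, yet the means of repeated samples cluster tightly around the population mean:

```python
import random
import statistics

random.seed(42)

# Hypothetical skewed population: exponential draws with mean 1.
population_mean = 1.0

def sample_mean(n):
    """Mean of n independent draws from the skewed population."""
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Collect many sample means for a reasonably large sample size.
means = [sample_mean(50) for _ in range(2000)]

# The sample means cluster symmetrically around the population mean,
# even though individual draws are heavily skewed.
print(round(statistics.fmean(means), 2))  # close to 1.0
print(round(statistics.stdev(means), 2))  # close to 1/sqrt(50), about 0.14
```

The spread of the sample means shrinks as the sample size grows, which is why larger samples give more reliable estimates.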

**Conclusion:** determination of whether the hypothesis should be accepted or rejected.

**Control Group:** test subjects randomly assigned to *not* receive the experimental treatment.

**Control Variable:** any variable that is held constant during an experiment. Also known as a **constant variable**.

**Data:** (singular: datum) facts, numbers, or values obtained in an experiment.

**Dependent Variable:** the variable that responds to the independent variable. The dependent variable is the one being measured in the experiment. Also known as the **dependent measure** or **responding variable**.

**Double-Blind:** neither the researcher nor the subject knows whether the subject is receiving the treatment or a placebo. "Blinding" helps reduce biased results.

**Empty Control Group:** a type of control group which does not receive any treatment, including a placebo.

**Experimental Group:** test subjects randomly assigned to receive the experimental treatment.

**Extraneous Variable:** extra variables (not the independent, dependent, or control variable) that may influence an experiment, but are not accounted for or measured or are beyond control. Examples may include factors you consider unimportant at the time of an experiment, such as the manufacturer of the glassware in a reaction or the color of paper used to make a paper airplane.

**Hypothesis:** a prediction of whether the independent variable will have an effect on the dependent variable or a prediction of the nature of the effect.

**Independence** or **Independently:** means one factor does not exert influence on another. For example, what one study participant does should not influence what another participant does. They make decisions independently. Independence is critical for a meaningful statistical analysis.

**Independent Random Assignment:** randomly selecting whether a test subject will be in a treatment or control group.

**Independent Variable:** the variable that is manipulated or changed by the researcher.

**Independent Variable Levels:** refers to changing the independent variable from one value to another (e.g., different drug doses, different amounts of time). The different values are called "levels".

**Inferential Statistics:** applying statistics (math) to infer characteristics of a population based on a representative sample from the population.

**Internal Validity:** an experiment is said to have internal validity if it can accurately determine whether the independent variable produces an effect.

**Mean:** the average calculated by adding up all the scores and then dividing by the number of scores.
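
The definition above translates directly into code; here is a minimal Python sketch using hypothetical scores:

```python
# The mean as defined above: the sum of the scores divided by their count.
scores = [4, 8, 6, 5, 7]  # hypothetical experimental scores
mean = sum(scores) / len(scores)
print(mean)  # 6.0
```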

**Null Hypothesis:** the "no difference" or "no effect" hypothesis, which predicts the treatment will not have an effect on the subject. The null hypothesis is useful because it is easier to assess with a statistical analysis than other forms of a hypothesis.

**Null Results (Nonsignificant Results):** results that do not disprove the null hypothesis. Null results don't *prove* the null hypothesis, because they may simply reflect a lack of statistical power. Some null results are Type II errors.

**p < 0.05:** an indication of how often chance alone could account for the effect of the experimental treatment. A value of *p* < 0.05 means that 5 times out of 100, you could expect this difference between the two groups purely by chance. Because that probability is so small, the researcher may conclude the experimental treatment did indeed have an effect. Note that other *p* or probability values are possible; the 0.05 or 5% cutoff is simply a common benchmark of statistical significance.

**Placebo (Placebo Treatment):** a fake treatment that should have no effect, outside of the power of suggestion. Example: In drug trials, test patients may be given a pill containing the drug or a placebo, which resembles the drug (pill, injection, liquid) but doesn't contain the active ingredient.

**Population:** the entire group the researcher is studying. If the researcher cannot gather data from the population, studying large random samples taken from the population may be used to estimate how the population would respond.

**Power:** the ability to observe differences or avoid making Type 2 errors.
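
Power can be estimated by simulation; here is a hedged Python sketch (the effect sizes, group size of 30, and the approximate cutoff |t| > 2 for the 5% level are all made-up illustration choices): repeat a hypothetical experiment many times with a known true effect and count how often the test rejects the null hypothesis.

```python
import math
import random
import statistics

random.seed(1)

def welch_t(a, b):
    """t statistic: difference of means over the standard error of the difference."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.fmean(a) - statistics.fmean(b)) / se

def estimated_power(effect, n=30, trials=1000, t_crit=2.0):
    """Fraction of simulated experiments in which the null hypothesis is rejected."""
    rejections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(effect, 1.0) for _ in range(n)]  # true effect exists
        if abs(welch_t(treated, control)) > t_crit:
            rejections += 1
    return rejections / trials

print(estimated_power(0.8))  # large effect: power is high
print(estimated_power(0.2))  # small effect: power is much lower
```

A larger true effect, a bigger sample, or less noise all raise the power, i.e., reduce the chance of a Type II error.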

**Random** or **Randomness:** selected or performed without following any pattern or method. To avoid unintentional bias, researchers often use random number generators or flip coins to make selections.

**Results:** the explanation or interpretation of experimental data.

**Simple Experiment:** a basic experiment designed to assess whether there is a cause-and-effect relationship or to test a prediction. A fundamental simple experiment may have only one test subject, compared with a controlled experiment, which has at least two groups.

**Single-Blind:** when either the experimenter or the subject is unaware whether the subject is getting the treatment or a placebo. Blinding the researcher helps prevent bias when the results are analyzed. Blinding the subject prevents the participant from having a biased reaction.

**Statistical Significance:** observation, based on the application of a statistical test, that a relationship probably is not due to pure chance. The probability is stated (e.g., *p* < 0.05) and the results are said to be *statistically significant*.

**T-Test:** a common statistical test applied to experimental data to assess a hypothesis. The *t*-test computes the ratio of the difference between the group means to the standard error of the difference (a measure of how much the group means could differ purely by chance). A rule of thumb is that the results are statistically significant if the difference between the means is at least three times larger than the standard error of the difference, but it's best to look up the ratio required for significance in a *t* table.
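
The ratio described above can be sketched in a few lines of Python; the data sets are hypothetical, and this uses the unpooled (Welch) form of the standard error:

```python
import math
import statistics

# Hypothetical measurements from a treatment group and a control group.
treatment = [5.1, 4.9, 6.2, 5.8, 5.5, 6.0]
control = [4.2, 4.5, 4.0, 4.8, 4.3, 4.6]

# Difference between the two group means.
diff = statistics.fmean(treatment) - statistics.fmean(control)

# Standard error of the difference between the group means.
se_diff = math.sqrt(
    statistics.variance(treatment) / len(treatment)
    + statistics.variance(control) / len(control)
)

t = diff / se_diff
print(round(t, 2))  # about 4.9
```

Here the difference between the means is roughly five times the standard error, so by the rule of thumb above the difference would be judged statistically significant.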

**Type I Error (Type 1 Error):** occurs when you reject the null hypothesis even though it was actually true. If you perform a *t*-test and set *p* < 0.05, there is less than a 5% chance you could make a Type I error by rejecting the null hypothesis based on random fluctuations in the data.
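
One way to see the 5% figure is a simulation sketch in which the null hypothesis is true by construction (both groups are drawn from the same made-up population); the cutoff |t| > 2 is an assumed approximation of the 5% level for two groups of 30:

```python
import math
import random
import statistics

random.seed(7)

def t_stat(a, b):
    """t statistic: difference of means over the standard error of the difference."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.fmean(a) - statistics.fmean(b)) / se

# The null hypothesis is true here: both groups come from the same population.
false_positives = 0
trials = 2000
for _ in range(trials):
    group_a = [random.gauss(0.0, 1.0) for _ in range(30)]
    group_b = [random.gauss(0.0, 1.0) for _ in range(30)]
    if abs(t_stat(group_a, group_b)) > 2.0:  # approximate 5% cutoff
        false_positives += 1

print(false_positives / trials)  # close to 0.05
```

Roughly 5% of these null-true experiments are (wrongly) declared significant, which is exactly the Type I error rate the *p* < 0.05 threshold permits.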

**Type II Error (Type 2 error):** occurs when you accept the null hypothesis, but it was actually false. The experimental conditions had an effect, but the researcher failed to find it statistically significant.