What are Zero-order, Partial, and Part Correlations?


In statistics, there are many types of correlations you can compute to assess the relationship between two variables, including the Pearson, Spearman, point-biserial, and Kendall correlations. You may also have come across the terms “zero-order,” “partial,” and “part” in reference to correlations. These terms refer to correlations that involve more than two variables; specifically, they are relevant when you have a dependent (outcome) variable, an independent (explanatory) variable, and one or more confounding (control) variables. Here we will explain the differences between zero-order, partial, and part correlations.

Zero-order correlation

First, a zero-order correlation simply refers to the correlation between two variables (i.e., the independent and dependent variable) without controlling for the influence of any other variables. Essentially, this means that a zero-order correlation is the same thing as a Pearson correlation. So why discuss the zero-order correlation here? When conducting an analysis with more than two variables (i.e., multiple independent variables or control variables), it can be useful to know the simple bivariate relationships among the variables to get a better sense of what happens when you begin to control for other variables. This is why SPSS gives you the option to report zero-order correlations when running a multiple linear regression analysis.
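To make this concrete, here is a minimal sketch in Python (using numpy and scipy, which we assume are available) showing that a zero-order correlation is just a plain Pearson correlation between two variables; the data are simulated and purely hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Two hypothetical variables with a built-in linear relationship
x = rng.normal(0, 1, 100)
y = 0.5 * x + rng.normal(0, 1, 100)

# A zero-order correlation is simply the Pearson correlation between
# the two variables, with nothing controlled for.
r, p = pearsonr(x, y)
print(f"zero-order r = {r:.3f}, p = {p:.3f}")
```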

Partial correlation

Next, a partial correlation is the correlation between an independent variable and a dependent variable after controlling for the influence of other variables on both the independent and the dependent variable. For instance, a researcher studying occupational stress might examine the correlation between time with the company (tenure) and stress while controlling for confounding variables such as age and pay rate. In this example, the partial correlation between time with the company and stress would take into account the impact of age and pay rate on both time with the company AND stress.
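One common way to think about (and compute) a partial correlation is to regress both the independent and the dependent variable on the control variables and then correlate the two sets of residuals. The sketch below illustrates this residual-based approach in Python; the variable names and data (age, pay rate, time with the company, stress) are simulated stand-ins for the example above, not output from any particular software package:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100

# Hypothetical data: age and pay rate influence both tenure and stress
age = rng.normal(40, 10, n)
pay_rate = rng.normal(25, 5, n)
time_with_company = 0.3 * age + 0.1 * pay_rate + rng.normal(0, 2, n)
stress = 0.4 * time_with_company + 0.05 * age - 0.02 * pay_rate + rng.normal(0, 1, n)

def residualize(y, controls):
    """Residuals of y after regressing it on the control variables."""
    X = np.column_stack([np.ones(len(y))] + list(controls))  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

controls = [age, pay_rate]

# Partial correlation: remove the controls' influence from BOTH variables,
# then correlate what is left over.
r_partial, p = pearsonr(residualize(time_with_company, controls),
                        residualize(stress, controls))
print(f"partial r = {r_partial:.3f}")
```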

Part correlation

This leads us to part correlation, also known as “semipartial” correlation. Like the partial correlation, the part correlation is the correlation between two variables (independent and dependent) after controlling for one or more other variables. However, in part correlation, the analysis only accounts for the influence of the control variables on the independent variable. In other words, the part correlation does not control for the influence of the confounding variables on the dependent variable.

In terms of our earlier example, this means that the part correlation between time with the company and stress would only take into account the impact of age and pay rate on time with the company. Why would you only want to control for effects on the independent variable and not the dependent variable? The primary reason for conducting a part correlation is to see how much unique variance the independent variable explains relative to the total variance in the dependent variable, rather than relative to just the variance left unexplained by the control variables (which is what the partial correlation tells you).
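The contrast is easy to see in code. The sketch below (same simulated data and residualizing helper as in the partial-correlation sketch above) computes both coefficients: the partial correlation correlates two sets of residuals, while the part (semipartial) correlation correlates the residualized independent variable with the raw dependent variable, so its denominator is the total variance in stress:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 100

# Hypothetical data (same setup as the partial-correlation sketch)
age = rng.normal(40, 10, n)
pay_rate = rng.normal(25, 5, n)
time_with_company = 0.3 * age + 0.1 * pay_rate + rng.normal(0, 2, n)
stress = 0.4 * time_with_company + 0.05 * age - 0.02 * pay_rate + rng.normal(0, 1, n)

def residualize(y, controls):
    """Residuals of y after regressing it on the control variables."""
    X = np.column_stack([np.ones(len(y))] + list(controls))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

controls = [age, pay_rate]
iv_resid = residualize(time_with_company, controls)

# Partial: controls removed from both variables (denominator is the
# leftover variance in stress after the controls are accounted for).
r_partial, _ = pearsonr(iv_resid, residualize(stress, controls))

# Part (semipartial): controls removed from the independent variable only
# (denominator is the TOTAL variance in stress).
r_part, _ = pearsonr(iv_resid, stress)

print(f"partial r = {r_partial:.3f}, part r = {r_part:.3f}")
```

Because the part correlation divides by the total variance in the dependent variable rather than the smaller residual variance, it will generally be closer to zero than the corresponding partial correlation.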
