Cronbach’s alpha is a convenient statistic used to estimate the reliability, or internal consistency, of a composite score. Now, what on Earth does that mean? Let’s start with reliability. Say an individual takes a Happiness Survey. The happiness score is highly reliable (consistent) if the survey produces the same or similar results when that same individual retakes it under the same conditions. But if an individual at the same level of real happiness takes the survey twice back-to-back, and one score shows high happiness while the other shows low happiness, that measure is not reliable at all.
Cronbach’s alpha gives us a simple way to measure whether a score is reliable. It assumes that multiple items measure the same underlying construct. For example, in the Happiness Survey, five questions may each ask something different but together measure overall happiness.
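To make this concrete, here is a minimal sketch of the calculation in Python with NumPy. The survey data below is made up for illustration (five respondents answering a hypothetical five-item happiness survey on a 1–5 scale); the formula itself is the standard one, based on the ratio of the summed item variances to the variance of the total score.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of each person's summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses: rows are respondents, columns are items
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [1, 2, 1, 2, 1],
])
print(round(cronbach_alpha(scores), 2))  # → 0.97
```

Because the five items move together across respondents here, alpha comes out very high; real survey data is usually messier.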
Theoretically, Cronbach’s alpha should give you a number from 0 to 1, but in practice you can get negative numbers as well. A negative number indicates that something is wrong with your data—perhaps you forgot to reverse score some items. The general rule of thumb is that a Cronbach’s alpha of .70 and above is good, .80 and above is better, and .90 and above is best.
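You can see the reverse-scoring problem directly with a small, made-up example. Suppose one of four items is negatively worded (say, “I often feel sad”), so a high raw score actually means low happiness. Left unfixed, that item pulls alpha negative; flipping it on the 1–5 scale restores a sensible value. The data and item wording here are invented purely for illustration.

```python
import numpy as np

def alpha(items):
    # Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Hypothetical 1-5 Likert data where item 3 is negatively worded,
# so high raw scores on that item mean LOW happiness
raw = np.array([
    [5, 4, 1, 5],
    [4, 5, 2, 4],
    [2, 1, 5, 2],
    [1, 2, 4, 1],
    [3, 3, 3, 3],
])
print(round(alpha(raw), 2))    # → -0.07 (negative: something is wrong)

fixed = raw.astype(float).copy()
fixed[:, 2] = 6 - fixed[:, 2]  # reverse score item 3 on a 1-5 scale
print(round(alpha(fixed), 2))  # → 0.97
```

The fix is just arithmetic: on a 1–5 scale, a reverse-scored response is 6 minus the raw response, so the item points in the same direction as the rest of the scale.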
Cronbach’s alpha does come with some limitations: scales with only a few items tend to show lower reliability, and sample size can also influence your results for better or worse. Despite these limitations, Cronbach’s alpha remains a widely used measure. If your committee asks for proof of your instrument’s internal consistency or reliability, Cronbach’s alpha is a good option!