I typically blog about goings-on in my life as an assistant professor, but one of my favorite blogs does that and also occasionally posts invaluable statistics help. Today, I want to talk about an article and stats technique that I recently found very helpful.
The citation is: Bernardi, R. A. (1994). Validating research results when Cronbach's alpha is below .70: A methodological procedure. Educational and Psychological Measurement, 54, 766-775.
Bernardi addresses the age-old problem of what to do with bad reliability scores. If you are like me, you often throw out the variable altogether, or use it cautiously and hope reviewers and editors will be happy with your cautious language. However, Bernardi presents a solution. Let's say you have a 3-item scale measuring job satisfaction and a sample size of 200. The reliability is .40, completely unacceptable. First, compute the variable based on the scores that you have and calculate the mean, s.d., confidence intervals around the mean (plus or minus 2 standard errors), and correlations with other variables. Is the variable related to a dependent variable of interest in the study? If so, then steps need to be taken to make the variable usable. Calculate the distance between scale items for each respondent. For two items, you would do that by calculating the absolute difference between the items. For three or more, it's a little more complicated, but the principle is the same. When you have the absolute differences, sort your data on that difference. The goal now is to drop the cases that have the largest differences between scale items, thus increasing the consistency between scores (which is what alpha measures). How many to leave out depends on how far apart the scores are. So let's say our original sample of 200 is trimmed down to 164.
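To make the trimming step concrete, here is a minimal sketch in Python (NumPy only). The function names are my own, and using the max-minus-min range as the inter-item distance for three items is just one simple choice, not Bernardi's exact procedure:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def trim_inconsistent(items, keep_fraction=0.82):
    """Drop the respondents with the largest inter-item spread.

    For two items the spread is |item1 - item2|; for three or
    more, the max-minus-min range is one simple generalization.
    """
    spread = items.max(axis=1) - items.min(axis=1)
    order = np.argsort(spread)              # most consistent respondents first
    n_keep = int(round(keep_fraction * len(items)))
    return items[order[:n_keep]]
```

With `keep_fraction=0.82`, the hypothetical sample of 200 comes down to 164, and alpha computed on the trimmed array should rise, since the cases driving the item-to-item inconsistency are the ones removed.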
But wait! What good social scientist would intentionally tamper with data in order to get better results?! Bernardi isn't finished yet. Compute a new variable based on the trimmed data and calculate the mean, s.d., confidence intervals around the mean, and correlations with other variables. Now compare the new variable to the first variable. Are the means significantly different (nonoverlapping confidence intervals)? Does the new variable correlate in comparable ways with the other variables in the study? If the means are not significantly different and the correlations hold up, Bernardi would argue that the low alpha score is a product not of poor reliability but of a highly homogeneous sample. The trick then is to write all of this up, honestly describing how you handled the poor reliability in such a way as to persuade reviewers that you know what you are doing.
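The before/after comparison can be sketched the same way. The plus-or-minus 2 standard-error interval is the one described above; the helper names and the overlap check are my own illustrative choices:

```python
import numpy as np

def mean_ci(scores, se_multiplier=2.0):
    """Mean with a +/- 2 standard-error interval around it."""
    mean = scores.mean()
    se = scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, mean - se_multiplier * se, mean + se_multiplier * se

def intervals_overlap(ci_a, ci_b):
    """True if two (mean, low, high) intervals overlap."""
    return ci_a[1] <= ci_b[2] and ci_b[1] <= ci_a[2]
```

Overlapping intervals (means not significantly different) plus a similar pattern of correlations is the evidence Bernardi asks for; running `np.corrcoef` on the original and the trimmed scores against each outside variable gives the correlation side of the comparison.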
Wednesday, February 25, 2009
Fixing Problem Reliabilities