Cronbach's Alpha Values
Latest revision as of 07:52, 11 May 2020
Internal consistency
| Cronbach's Alpha | Internal Consistency |
|---|---|
| 0.9 ≤ α | Excellent |
| 0.8 ≤ α < 0.9 | Good |
| 0.7 ≤ α < 0.8 | Adequate |
| 0.6 ≤ α < 0.7 | Questionable |
contributed by Frank LaBanca, EdD
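The cutoffs in the table above can be encoded as a small helper. A minimal sketch in Python; `interpret_alpha` is an illustrative name, and the function simply restates the table (values below 0.6 fall outside it):

```python
def interpret_alpha(alpha):
    """Map a Cronbach's alpha value to the qualitative label in the table above."""
    if alpha >= 0.9:
        return "Excellent"
    if alpha >= 0.8:
        return "Good"
    if alpha >= 0.7:
        return "Adequate"
    if alpha >= 0.6:
        return "Questionable"
    return "Below questionable (< 0.6)"

print(interpret_alpha(0.85))  # Good
```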
See also Interpreting Cronbach's Alpha
Cronbach's alpha measures internal consistency: the degree to which the items on a scale actually measure the same dimension. When evaluating instrumentation for quantitative research, assessing an instrument's reliability includes reviewing the Cronbach's alpha values for its scales. For example, for the School Attitude Assessment Survey-Revised (SAAS-R), McCoach and Siegle (2003) reported internal consistency reliability coefficients of at least .85 on each of the instrument's five factors.
McCoach, D. B., & Siegle, D. (2003). The School Attitude Assessment Survey-Revised: A new instrument to identify academically able students who underachieve. Educational and Psychological Measurement, 63(3), 414–429. https://doi.org/10.1177/0013164402251057
contributed by Lauren Moyer
Cronbach's alpha measures the correlations among all the items that make up a scale. The idea is to determine whether the items measure the same concept: if they do, they will be highly correlated and alpha will be high, indicating a high level of internal consistency. However, the more items a scale contains, the higher alpha tends to be, even if the items do not all measure the same thing. For this reason, the researcher should also run a factor analysis to support the reliability of the scale (Muijs, 2011).
contributed by Joseph W. Sullivan
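The item-correlation idea described above corresponds to the standard formula α = k/(k−1) · (1 − Σsᵢ²/s²_total), where k is the number of items, sᵢ² the variance of item i, and s²_total the variance of the total score. A minimal sketch in pure Python using sample variances; `cronbach_alpha` is an illustrative helper, not code from any source cited here:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items table of scores.

    scores: list of rows, one per respondent; each row holds that
    respondent's score on each of the k items.
    """
    k = len(scores[0])           # number of items

    def var(xs):                 # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every respondent gives the same answer
# to all three items) yield alpha = 1.0.
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]))
```

Note that alpha rises with k even when inter-item correlations are fixed, which is the inflation effect mentioned above and the reason a factor analysis is recommended alongside it.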