Main Page


Practical Statistics for Educators, edited and maintained by Frank LaBanca, EdD



Philosophy

Quantitative statistical analyses can be intimidating for many educators pursuing an advanced academic degree. The thought of computational math can sometimes trigger unwarranted fears.

Here, we approach statistics from a straightforward, conceptually based perspective. Our goal is to collaborate and provide insight into statistics so that they become meaningful tools in the educational arena.

Each "module" corresponds to the topics presented each week and will expand as the course progresses. A topical outline can be found @ [1]

Comments and edits are welcome and encouraged! Please give yourself credit as you contribute. At the end of a section you insert, please add the following in italics: contributed by <your name>. If you are modifying content, add the following under the contribution line: modified by <your name>. We are glad to accept as many modifications as necessary to give the most meaning to each section. As we asynchronously socially construct knowledge together, we can recognize the accomplishments and contributions of each writer.

contributed by Frank LaBanca, EdD

Contributions

Our contributors' contributions appear here.

Please submit your contribution at [2]

Modules

1.1 The Greek Alphabet and its significance in statistics

1.2 An introduction to probability PowerPoint @[3]


2.1 Types of Data

2.2 Visualizing Data

2.3 Visually representing data PowerPoint @ [4]

2.3.1 Table 2 from LaBanca dissertation @ [5]

2.3.2 Cool graph of movie box office from NY Times [6]

2.3.3 Histograms

2.3.4 Scatterplots YouTube @ [7]

2.4 Shapes of distribution

2.5 Survey of Attitudes Toward Statistics (SATS) Data Set @ [8]

2.6 Data Screening


3.1 Central Tendency

3.1.1 Central Tendency and Normal Distribution PowerPoint @ [9]

3.1.2 Central Tendency YouTube @ [10]

3.2 Interquartile ranges

3.2.1 The Box Plot

3.3 Standard deviation

3.3.1 Identifying percentile ranks and scores based on standard deviation

3.3.1.a Practice Identifying percentile ranks and scores based on standard deviation

3.4 z-scores


4.1 Percentile Rank

4.1.1 Areas under the standard normal curve for z values @ [11]

4.1.2 z scores corresponding to divisions of the area under the normal curve @ [12]

4.2 Conversion of data PowerPoint @ [13]

4.2.1 Descriptive analysis of USRT data @ [14]

4.3 Normal Curve Equivalent scores

4.4 Standard Error of Measurement

4.5 Confidence Intervals

4.6 z score machine @ [15]


5.1 Pearson r

5.2 Rules of thumb for interpreting the size of a correlation coefficient

5.3 Critical values for the correlation coefficient @ [16]

5.4 Spearman rho

5.5 Correlation PowerPoint @ [17]

5.6 Writing samples for correlations


6.1 Inferential Statistics Definition

6.2 Sampling

6.3 Sampling distributions

6.4 t test PowerPoint @ [18]

6.4.1 t test video [19]

6.5 Sample data set @ [20]

6.6 Critical values for t @ [21]


7.1 Effect size

7.1.1 Effect size calculator @ http://www.campbellcollaboration.org/resources/effect_size_input.php

7.1.2 Rules of thumb for interpreting effect sizes

7.2 Effect size PowerPoint @ [22]

7.3 Hypothesis testing

7.4 Hypothesis testing PowerPoint @ [23]

7.5.1 Hypothesis testing template for a correlation @ [24]

7.5.2 Hypothesis testing template for a t test @ [25]


8.1 Type I and Type II Errors

8.2 Type I and Type II Errors PowerPoint @ [26]

8.3 Levene's p versus the test statistic p

8.4 Analysis of Variance

8.5 ANOVA PowerPoint @ [27]

8.5.1 ANOVA video [28]

8.6 ANOVA Case study

8.7 Critical values for the F statistic @ [29]

8.8 Rules of thumb for interpreting effect sizes of ANOVAs


9.1 Post Hoc test PowerPoint @ [30]

9.2 Selecting a Post Hoc test

9.3 Hypothesis testing template for ANOVA @ [31]


10.1 Chi square

10.2 Example for calculating chi square

10.3 Critical values for chi square @ [32]

10.4 Chi square analysis description/sample writing

10.5 Chi square PowerPoint @ [33]


11.1 Beyond the ANOVA

11.2 Beyond ANOVA PowerPoint @ [34]

11.3 2-way ANOVA PowerPoint @ [35]

11.4 2-way ANOVA template @ [36]

11.5 1-way ANOVA Annotated SPSS Output @ [37]

11.6 2-way ANOVA Annotated SPSS Output @ [38]


12.1 MANOVA

12.2 Homogeneity vs. Homoscedasticity (Levene vs. Box's M)

12.3 Post Hoc ANOVAs for MANOVA (univariate)

12.4 Post Hoc Discriminant Analysis (multivariate)

12.5 Covariates

12.6 MANCOVA

12.7 MANOVA Annotated SPSS Output @ [39]

12.8 MANCOVA Annotated SPSS Output @ [40]

13.1 Multiple Regression Analysis

13.1.1 Collinearity

13.2 Multiple Linear Regression

13.3 Reading the MLR Output: An annotated output [41]

13.4 MLR Annotated SPSS Output @ [42]


14.1 Internal Consistency[43]

14.2 Cronbach's Alpha[44]

14.2.1 Cronbach's Alpha Values

14.2.2 Cronbach's Alpha in SPSS [45]

Applied Research Designs

15.1 Instrumentation

The School Attitude Assessment Survey – Revised (SAAS-R) is designed to measure academic self-perceptions, attitude toward school, attitudes toward teachers, goal valuation, and motivation/self-regulation in secondary school students. The purpose of measuring these factors is to distinguish underachievers from achievers in a secondary school setting. The instrument measures the factors through 36 questions in the format of a 7-point Likert-type agreement scale. Scoring of the instrument is standardized, and the score is derived from means. McCoach and Siegle (2003) report that the SAAS-R demonstrates evidence of adequate internal consistency reliability. A confirmatory factor analysis exhibited reasonable fit, χ²(55) = 1,581.7, CFI = .911, TLI = .918, RMSEA = .059, SRMR = .057 (McCoach & Siegle, 2003). As reported by McCoach and Siegle (2003), the scores demonstrated a classical test theory internal consistency reliability coefficient of at least .85 on each of the five factors. Interfactor correlations for the five factors of the SAAS-R range from .86 to .91, demonstrating appropriately related domains across the subscales.

References:

McCoach, D. B., & Siegle, D. (2003). The school attitude assessment survey – revised: A new instrument to identify academically able students who underachieve. Educational and Psychological Measurement, 63(3), 414-429. doi:10.1177/0013164402251057
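Because the SAAS-R score is derived from item means and its quality is reported as an internal consistency coefficient (see module 14.2), a small worked sketch may help make those two ideas concrete. The example below uses hypothetical 7-point Likert responses (not actual SAAS-R data; the sample size and item count are invented for illustration) to compute scale scores as item means and Cronbach's alpha from item and total-score variances:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of total scores)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point Likert-type responses: 6 respondents x 4 items
responses = np.array([
    [7, 6, 7, 6],
    [5, 5, 6, 5],
    [6, 6, 6, 7],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 2],
])

scale_scores = responses.mean(axis=1)  # score derived from item means
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.3f}")
```

Because the invented items move together closely, alpha comes out well above the .85 benchmark cited for the SAAS-R factors; with weakly related items it would drop, which is why the coefficient is read as evidence of internal consistency.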

15.2 Limitations

15.3 Practice determining the stat

