Population and Sample
Parameter and Statistic
Nominal scale, ordinal scale, interval scale, and ratio scale
Frequency Distribution, Proportion, and Percentage
Percentile and Percentile Rank: corresponding to the proportion to the left of the score in question
Range, and Interquartile range
Sum of the Squared Deviation, Variance, and Standard Deviation
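As a minimal sketch of these definitions (function names and example scores are illustrative, not from the sheet):

```python
import math

def sum_squared_deviations(scores):
    """SS: sum of squared deviations from the mean."""
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores)

def population_variance(scores):
    """sigma^2 = SS / N for a full population."""
    return sum_squared_deviations(scores) / len(scores)

def population_sd(scores):
    """sigma = sqrt(SS / N)."""
    return math.sqrt(population_variance(scores))

# Scores 1, 9, 5, 8, 7 have mean 6, so SS = 25 + 9 + 1 + 4 + 1 = 40
print(sum_squared_deviations([1, 9, 5, 8, 7]))  # 40.0
```

For a sample rather than a population, divide SS by n - 1 instead of N.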
z-score: location of scores and standardized distributions
Standardized distribution: μ = 0 and σ = 1
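The z-transformation and its inverse can be sketched directly (illustrative values, not from the sheet):

```python
def z_score(x, mu, sigma):
    """z = (X - mu) / sigma locates a score within its distribution."""
    return (x - mu) / sigma

def raw_score(z, mu, sigma):
    """Inverse transformation: X = mu + z * sigma."""
    return mu + z * sigma

# With mu = 100 and sigma = 15, a score of 130 sits 2 SDs above the mean:
print(z_score(130, 100, 15))  # 2.0
```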
Probability: any particular outcome expressed as a fraction or proportion of all possible outcomes
Random Sample: must satisfy two requirements: (1) each individual has an equal chance of being selected; (2) the probability of being selected stays constant from one selection to the next (sampling with replacement)
Distribution of sample means: the collection of sample means for all the possible random samples of a particular size (n) that can be obtained from a population
z-score for each M in the distribution of sample means (requires known population mean and standard deviation)
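A sketch of the z-score for a sample mean, using the standard error σ_M = σ/√n (names and example numbers are illustrative):

```python
import math

def standard_error(sigma, n):
    """sigma_M = sigma / sqrt(n): standard deviation of sample means."""
    return sigma / math.sqrt(n)

def z_for_sample_mean(m, mu, sigma, n):
    """z = (M - mu) / sigma_M; requires known mu and sigma."""
    return (m - mu) / standard_error(sigma, n)

# mu = 100, sigma = 16, n = 64 -> sigma_M = 2; a sample mean of 104 gives z = 2
print(z_for_sample_mean(104, 100, 16, 64))  # 2.0
```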
Hypothesis Testing: a statistical method that uses sample data to evaluate a hypothesis about a population parameter.
Type I Error: rejecting a true H0; its probability is set by the alpha level (α)
Type II Error: failing to reject a false H0; its probability is represented by the symbol beta (β)
Effect Size
Power (= 1 - β): the portion of the treatment distribution located beyond the boundary (critical value) of the critical region. (1) When α increases, power increases; (2) one-tailed power > two-tailed power; (3) when sample size (n) increases, power increases.
t-statistic: used when the population standard deviation σ (or variance σ^2) is unknown.
Effect size
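A minimal sketch of the one-sample t and Cohen's d computed from sample data (function names and scores are illustrative):

```python
import math

def one_sample_t(scores, mu):
    """t = (M - mu) / s_M, with s^2 = SS/(n-1) and s_M = sqrt(s^2/n)."""
    n = len(scores)
    m = sum(scores) / n
    s2 = sum((x - m) ** 2 for x in scores) / (n - 1)   # sample variance
    return (m - mu) / math.sqrt(s2 / n)

def cohens_d(scores, mu):
    """Estimated effect size: d = (M - mu) / s."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / (n - 1))
    return (m - mu) / s
```

For scores [3, 7, 5, 5] tested against mu = 4: M = 5, SS = 8, s^2 = 8/3, so t ≈ 1.22 with df = n - 1 = 3.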
t-statistic (Independent-Measures): used when data from two separate samples are used to draw inferences about the mean difference between two populations or between two different treatment conditions
Effect size
Homogeneity of Variance: two or more populations have equal variances
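The pooled-variance t described above can be sketched as follows (names and data are illustrative; pooling assumes homogeneity of variance):

```python
import math

def independent_t(sample1, sample2):
    """Independent-measures t with pooled variance:
    s_p^2 = (SS1 + SS2)/(df1 + df2); t = (M1 - M2)/sqrt(s_p^2/n1 + s_p^2/n2).
    Returns (t, df) with df = n1 + n2 - 2.
    """
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    sp2 = (ss1 + ss2) / (n1 - 1 + n2 - 1)        # pooled variance
    se = math.sqrt(sp2 / n1 + sp2 / n2)          # standard error of M1 - M2
    return (m1 - m2) / se, n1 + n2 - 2
```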
t-statistic (Repeated-Measures or Matched-Subjects Design): removes or reduces individual differences, which in turn lowers sample variability and tends to increase the chance of obtaining a significant result.
Effect size
Estimation: How much treatment effect there is
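A sketch of the repeated-measures t computed on difference scores (names and data are illustrative):

```python
import math

def repeated_measures_t(before, after):
    """t on difference scores D = after - before: t = M_D / s_MD, df = n - 1."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    m_d = sum(d) / n
    s2 = sum((x - m_d) ** 2 for x in d) / (n - 1)  # variance of the D scores
    return m_d / math.sqrt(s2 / n), n - 1
```

Because each person serves as their own baseline, the individual differences cancel out of the D scores, which is why this design tends to have a smaller error term.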
ANOVA (Independent-Measures): a statistical technique used to test for mean differences among two or more treatment conditions
Total: SS = Σ(X^2) - G^2/N, df = N - 1
Between treatments: SS = Σ(T^2/n) - G^2/N, df = k - 1, MS = SS/df
Within treatments: SS = Σ_{i}(SS_{i}), df = N - k, MS = SS/df
F-ratio = (MS_{between treatments})/(MS_{within treatments})
Effect size
Post hoc test
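The SS/df bookkeeping above can be sketched numerically; a minimal version (names and data are illustrative; SS_within is taken as SS_total - SS_between, which equals the sum of the SS inside each treatment):

```python
def one_way_anova(groups):
    """Independent-measures ANOVA on a list of treatment groups.
    SS_total = sum(X^2) - G^2/N; SS_between = sum(T^2/n) - G^2/N;
    F = MS_between / MS_within. Returns (F, df_between, df_within).
    """
    all_scores = [x for g in groups for x in g]
    big_n = len(all_scores)
    k = len(groups)
    g_total = sum(all_scores)                                  # G
    ss_total = sum(x * x for x in all_scores) - g_total ** 2 / big_n
    ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - g_total ** 2 / big_n
    ss_within = ss_total - ss_between
    df_between, df_within = k - 1, big_n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```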
ANOVA (Repeated-Measures): for repeated-measures or matched-subjects designs; eliminates the influence of individual differences from the analysis.
Total: SS = Σ(X^2) - G^2/N, df = N - 1
Between treatments: SS = Σ(T^2/n) - G^2/N, df = k - 1, MS = SS/df
Within treatments: SS = Σ_{i}(SS_{i}), df = N - k
Between subjects: SS = Σ(P^2/k) - G^2/N, df = n - 1 (P: total for each subject)
Error: SS = SS_{within treatments} - SS_{between subjects}, df = (N - k) - (n - 1), MS = SS/df
F-ratio = (MS_{between treatments})/(MS_{error})
Effect size
Post hoc test
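A numerical sketch of the repeated-measures ANOVA partition, with the between-subjects SS removed from the error term (names and data are illustrative; rows are subjects, columns are treatments):

```python
def repeated_measures_anova(data):
    """data[i][j]: score of subject i in treatment j.
    SS_error = SS_within_treatments - SS_between_subjects;
    F = MS_between_treatments / MS_error. Returns (F, df_between, df_error).
    """
    n = len(data)                 # subjects (rows)
    k = len(data[0])              # treatments (columns)
    big_n = n * k
    g_total = sum(sum(row) for row in data)
    correction = g_total ** 2 / big_n
    cols = [[row[j] for row in data] for j in range(k)]
    ss_between_t = sum(sum(c) ** 2 / n for c in cols) - correction
    ss_within = sum(sum((x - sum(c) / n) ** 2 for x in c) for c in cols)
    ss_between_s = sum(sum(row) ** 2 / k for row in data) - correction
    ss_error = ss_within - ss_between_s
    df_between_t = k - 1
    df_error = (big_n - k) - (n - 1)
    f = (ss_between_t / df_between_t) / (ss_error / df_error)
    return f, df_between_t, df_error
```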
ANOVA (two-factor design): two independent variables, A and B; independent subjects
Total (n: # of scores in each treatment cell; a: levels of factor A; b: levels of factor B): SS = Σ(X^2) - G^2/N, df = N - 1
Between treatments: SS = Σ(T^2/n) - G^2/N, df = a*b - 1
Within treatments (i: cells): SS = Σ_{i}(SS_{i}), df = Σ_{i}(df_{i}) = N - a*b, MS = SS/df
Factor A: SS = Σ(T_{A}^2/n_{A}) - G^2/N, df = a - 1, MS = SS/df (n_{A} = b*n: # of scores at each level of A)
Factor B: SS = Σ(T_{B}^2/n_{B}) - G^2/N, df = b - 1, MS = SS/df (n_{B} = a*n)
Interaction (A×B): SS = SS_{between} - SS_{A} - SS_{B}, df = df_{between} - df_{A} - df_{B}, MS = SS/df
Each F-ratio = (MS_{treatment effect})/(MS_{within treatments})
Effect size
Post hoc test
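The two-factor partition can be sketched as follows (names and data are illustrative; equal cell sizes are assumed; cells[a][b] holds the scores for that combination of factor levels):

```python
def two_factor_anova(cells):
    """Two-factor independent-measures ANOVA.
    SS_AxB = SS_between_cells - SS_A - SS_B; each F uses MS_within.
    Returns (F_A, F_B, F_AxB).
    """
    a_levels, b_levels = len(cells), len(cells[0])
    n = len(cells[0][0])                       # equal cell sizes assumed
    big_n = a_levels * b_levels * n
    all_scores = [x for row in cells for cell in row for x in cell]
    correction = sum(all_scores) ** 2 / big_n  # G^2 / N
    ss_between = sum(sum(cell) ** 2 / n for row in cells for cell in row) - correction
    ss_within = sum(sum((x - sum(cell) / n) ** 2 for x in cell)
                    for row in cells for cell in row)
    a_totals = [sum(x for cell in row for x in cell) for row in cells]
    b_totals = [sum(x for row in cells for x in row[j]) for j in range(b_levels)]
    ss_a = sum(t ** 2 / (b_levels * n) for t in a_totals) - correction
    ss_b = sum(t ** 2 / (a_levels * n) for t in b_totals) - correction
    ss_axb = ss_between - ss_a - ss_b
    ms_within = ss_within / (big_n - a_levels * b_levels)
    f_a = (ss_a / (a_levels - 1)) / ms_within
    f_b = (ss_b / (b_levels - 1)) / ms_within
    f_axb = (ss_axb / ((a_levels - 1) * (b_levels - 1))) / ms_within
    return f_a, f_b, f_axb
```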
ANOVA (two-factor, repeated-measures design): two independent variables, A and B; repeated-measures or matched-subjects design
Correlation: describes a relationship, not a causal one; characterized by three features (direction, form, and degree)
Spearman Correlation: both variables are measured on ordinal scales
Point-Biserial Correlation: one of the two variables is dichotomous
Phi-Coefficient: both variables are dichotomous
Regression (Least-squares method)
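Pearson r and the least-squares line can be sketched from SP and SS (names and data are illustrative):

```python
import math

def pearson_r(xs, ys):
    """r = SP / sqrt(SS_X * SS_Y), where SP is the sum of products."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    ssy = sum((y - my) ** 2 for y in ys)
    return sp / math.sqrt(ssx * ssy)

def least_squares(xs, ys):
    """Regression line Y_hat = b*X + a with b = SP/SS_X, a = M_Y - b*M_X."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    b = sp / ssx
    return b, my - b * mx
```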
Chi-Square tests: a nonparametric technique that tests hypotheses about the form of an entire frequency distribution. Each observed frequency must reflect a different individual; no individual can produce responses classified in more than one category (or contribute more than one frequency).
Goodness of fit: fe = p*n, where p: hypothesized proportion, and n: the size of the sample;
chi-square χ2 = Σ((fo - fe)^2/fe), where fo: observed frequency;
df = C - 1, where C: # of categories in the variable
Test for independence: fe = (fc*fr)/n, where fc: total column frequency (C), fr: total row frequency (R), and n: total sample size;
df = (R - 1)*(C - 1), where R: # of row categories, C: # of column categories
To use the chi-square statistic, each fe should be greater than 5
Effect size for chisquare test
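Both chi-square tests can be sketched directly from the formulas above (function names and data are illustrative):

```python
def chi_square_goodness_of_fit(f_obs, proportions):
    """chi2 = sum((fo - fe)^2 / fe) with fe = p * n; df = C - 1."""
    n = sum(f_obs)
    chi2 = sum((fo - p * n) ** 2 / (p * n) for fo, p in zip(f_obs, proportions))
    return chi2, len(f_obs) - 1

def chi_square_independence(table):
    """fe = (fc * fr) / n for each cell; df = (R - 1) * (C - 1)."""
    rows = [sum(r) for r in table]
    cols = [sum(r[j] for r in table) for j in range(len(table[0]))]
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, fo in enumerate(row):
            fe = rows[i] * cols[j] / n       # expected frequency for the cell
            chi2 += (fo - fe) ** 2 / fe
    return chi2, (len(rows) - 1) * (len(cols) - 1)
```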
Sign test: uses the binomial test
Mann-Whitney test (U): nonparametric alternative to the independent-measures t-statistic. A small value of U (near zero) is evidence of a difference between the two treatments.
Wilcoxon test (T): nonparametric alternative to the repeated-measures t-statistic. A small value of T (near zero) provides evidence of a difference.
Kruskal-Wallis test: nonparametric alternative to the single-factor ANOVA. The test statistic, H, is equivalent to a chi-square statistic with degrees of freedom equal to the number of treatment conditions minus one.
Relationship (or conversion)
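The Mann-Whitney U described above can be sketched by direct counting (function name and data are illustrative; ties contribute 0.5):

```python
def mann_whitney_u(sample_a, sample_b):
    """For each score in one sample, count how many scores in the other
    sample are larger (0.5 per tie); report the smaller of the two U values.
    A small U (near zero) suggests the two treatments differ.
    """
    def u_for(first, second):
        u = 0.0
        for x in first:
            for y in second:
                if y > x:
                    u += 1
                elif y == x:
                    u += 0.5
        return u
    return min(u_for(sample_a, sample_b), u_for(sample_b, sample_a))
```

As a check, the two U values always sum to n_a * n_b, so completely separated samples give U = 0.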