

Effect Size Measures

Effect size measures help us understand the strength and practical significance of relationships in data. They go beyond p-values, offering insight into how meaningful findings are in real-world contexts, which is crucial for making informed decisions in statistical inference. Short code sketches showing how several of these measures can be computed follow the list below.

  1. Cohen's d

    • Measures the standardized difference between two group means.
    • Commonly used in t-tests to assess the magnitude of treatment effects.
    • Values typically interpreted as small (0.2), medium (0.5), and large (0.8).
    • Helps in understanding the practical significance of results beyond p-values.
  2. Pearson's correlation coefficient (r)

    • Quantifies the strength and direction of a linear relationship between two continuous variables.
    • Ranges from -1 to 1, where 0 indicates no linear correlation.
    • Positive values indicate a direct relationship, while negative values indicate an inverse relationship.
    • Important for assessing the degree of association in regression analyses.
  3. Eta-squared (η²)

    • Represents the proportion of variance in the dependent variable explained by the independent variable(s).
    • Values range from 0 to 1, with higher values indicating a greater effect size.
    • Commonly used in ANOVA to assess the impact of categorical predictors.
    • Provides insight into the practical significance of group differences.
  4. Odds ratio

    • Compares the odds of an event occurring in one group to the odds in another group.
    • Commonly used in case-control studies and logistic regression.
    • An odds ratio of 1 indicates no difference between groups, while values greater than 1 indicate increased odds in the first group.
    • Useful for understanding the strength of associations in categorical data.
  5. Risk ratio (relative risk)

    • Compares the probability of an event occurring in the exposed group with the probability in the unexposed group.
    • Values greater than 1 indicate increased risk in the exposed group, while values less than 1 indicate decreased risk.
    • Commonly used in cohort studies and clinical trials.
    • Provides a clear interpretation of risk associated with exposure.
  6. Standardized mean difference

    • A general term for effect size measures that standardize differences between group means.
    • Useful for comparing results across different studies with varying scales.
    • Includes measures like Cohen's d and Hedges' g.
    • Helps in meta-analysis to synthesize findings from multiple studies.
  7. Glass's delta

    • Similar to Cohen's d but uses the standard deviation of the control group for standardization.
    • Particularly useful when the groups' standard deviations differ substantially, for example when the treatment itself changes the variability of outcomes.
    • Provides a measure of effect size that is less biased by the variability of the treatment group.
    • Helps in understanding the impact of interventions in experimental designs.
  8. Hedges' g

    • A variation of Cohen's d that corrects for small sample sizes.
    • Provides a more accurate estimate of effect size when samples are small (roughly fewer than 20 observations per group).
    • Values interpreted similarly to Cohen's d, aiding in the assessment of treatment effects.
    • Useful in meta-analyses to combine effect sizes from different studies.
  9. R-squared (R²)

    • Represents the proportion of variance in the dependent variable explained by the independent variable(s) in regression models.
    • Ranges from 0 to 1, with higher values indicating better model fit.
    • Helps in assessing the explanatory power of the model.
    • Important for understanding the effectiveness of predictors in explaining outcomes.
  10. Partial eta-squared

    • A measure of effect size that assesses the proportion of variance explained by a factor while controlling for other factors.
    • Commonly used in factorial ANOVA designs.
    • Values range from 0 to 1, with higher values indicating a stronger effect.
    • Provides insight into the unique contribution of each factor in designs with multiple predictors.
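
The following is a minimal Python sketch (using NumPy) of how the standardized mean differences above, Cohen's d, Hedges' g, and Glass's delta, could be computed from two groups of raw scores. The sample data are made up for illustration, and the small-sample correction shown for Hedges' g is the common approximation rather than the exact formula.

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = np.var(treatment, ddof=1), np.var(control, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

def hedges_g(treatment, control):
    """Cohen's d scaled by the usual small-sample bias correction."""
    n1, n2 = len(treatment), len(control)
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # approximate correction factor
    return correction * cohens_d(treatment, control)

def glass_delta(treatment, control):
    """Mean difference standardized by the control group's SD only."""
    return (np.mean(treatment) - np.mean(control)) / np.std(control, ddof=1)

# Hypothetical treatment and control scores, invented for illustration
rng = np.random.default_rng(0)
treated = rng.normal(loc=10.5, scale=2.0, size=25)
untreated = rng.normal(loc=10.0, scale=2.0, size=25)

print(f"d = {cohens_d(treated, untreated):.2f}")
print(f"g = {hedges_g(treated, untreated):.2f}")
print(f"Glass's delta = {glass_delta(treated, untreated):.2f}")
```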
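
A similar sketch for Pearson's r and R², again on simulated data. With a single predictor in simple linear regression, R² is just the square of r, which is why the two measures line up here.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)                        # hypothetical predictor
y = 0.6 * x + rng.normal(scale=0.8, size=100)   # hypothetical outcome

r = np.corrcoef(x, y)[0, 1]   # Pearson's correlation coefficient
r_squared = r ** 2            # equals R^2 for a one-predictor regression

print(f"r = {r:.3f}, R^2 = {r_squared:.3f}")
```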
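
For eta-squared, a small sketch of a one-way ANOVA with made-up groups. In a single-factor design eta-squared and partial eta-squared coincide; in factorial designs, partial eta-squared divides the effect's sum of squares by the effect plus error sums of squares rather than by the total.

```python
import numpy as np

def eta_squared(groups):
    """One-way ANOVA effect size: SS_between / SS_total."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_total = ((all_scores - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Three hypothetical groups with slightly different means
rng = np.random.default_rng(2)
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (5.0, 5.4, 6.0)]

print(f"eta-squared = {eta_squared(groups):.3f}")
```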
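
Finally, a sketch showing how the risk ratio and odds ratio come out of the same hypothetical 2×2 table; the counts are invented to make the contrast between the two measures visible.

```python
# Hypothetical 2x2 table of counts
#                event   no event
# exposed          30        70
# unexposed        15        85
a, b = 30, 70   # exposed:   event, no event
c, d = 15, 85   # unexposed: event, no event

risk_exposed = a / (a + b)        # P(event | exposed)
risk_unexposed = c / (c + d)      # P(event | unexposed)
risk_ratio = risk_exposed / risk_unexposed

odds_ratio = (a / b) / (c / d)    # equivalently (a * d) / (b * c)

print(f"risk ratio = {risk_ratio:.2f}, odds ratio = {odds_ratio:.2f}")
```

With these counts the risk ratio is 2.0 while the odds ratio is about 2.4; the two only agree closely when the event is rare in both groups.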