
Key Concepts of Statistical Significance Tests


Statistical significance tests help determine if observed differences in data are meaningful or just due to chance. These tests, like Z-tests and T-tests, guide researchers in making informed decisions based on sample data compared to population parameters.

  1. Z-test

    • Used to determine if there is a significant difference between sample and population means when the population variance is known.
    • Applicable for large sample sizes (n > 30) or when the population is normally distributed.
    • Assumes that the data is continuous and follows a normal distribution.
    • Commonly used in hypothesis testing to assess the significance of results.
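Since the population standard deviation is assumed known, a one-sample Z-test can be computed directly from the formula z = (x̄ − μ₀) / (σ/√n) using only Python's standard library. The sample values, claimed mean, and SD below are made up for illustration:

```python
import math

def one_sample_z_test(sample, mu0, sigma):
    """Two-sided one-sample z-test; sigma is the KNOWN population SD."""
    n = len(sample)
    mean = sum(sample) / n
    z = (mean - mu0) / (sigma / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via the error function)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Toy data: 36 observations all equal to 105, tested against mu0=100, sigma=15
sample = [105] * 36
z, p = one_sample_z_test(sample, mu0=100, sigma=15)  # z = 2.0 exactly
```

With n = 36 the standard error is 15/6 = 2.5, so the 5-point difference gives z = 2.0 and a two-sided p of about 0.046.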
  2. T-test (one-sample, two-sample, paired)

    • One-sample T-test: Compares the mean of a single sample to a known population mean.
    • Two-sample T-test: Compares the means of two independent samples to see if they are significantly different.
    • Paired T-test: Compares means from the same group at different times (e.g., before and after treatment).
    • Appropriate whenever the population variance is unknown, and especially valuable for smaller sample sizes (n < 30).
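All three variants map directly onto SciPy's stats module (this sketch assumes SciPy is installed; the scores are invented for illustration):

```python
from scipy import stats

before = [12.1, 11.8, 13.0, 12.4, 12.9, 11.5]   # e.g. scores before treatment
after  = [11.2, 11.0, 12.1, 11.8, 12.0, 10.9]   # same subjects after treatment

t1, p1 = stats.ttest_1samp(before, popmean=12.0)  # one-sample vs. a known mean
t2, p2 = stats.ttest_ind(before, after)           # two independent samples
t3, p3 = stats.ttest_rel(before, after)           # paired (same subjects twice)
```

Note that the paired test typically yields a smaller p-value than the independent test on the same numbers, because pairing removes between-subject variability.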
  3. Chi-square test

    • Used to assess the association between categorical variables.
    • Compares observed frequencies in each category to expected frequencies under the null hypothesis.
    • Commonly used in contingency tables to evaluate independence or goodness of fit.
    • Requires sufficiently large expected frequencies (a common rule of thumb is at least 5 per cell) for the chi-square approximation to be valid.
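For a contingency table, SciPy's `chi2_contingency` computes the expected frequencies under independence and the resulting statistic in one call. The counts below are a hypothetical 2×2 table:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = treatment vs. control, columns = improved vs. not
observed = [[30, 10],
            [20, 40]]
chi2, p, dof, expected = chi2_contingency(observed)
# For a 2x2 table, dof = (2-1)*(2-1) = 1
```

The returned `expected` array holds the frequencies implied by the null hypothesis of independence (row total × column total / grand total), which is exactly what the observed counts are compared against.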
  4. F-test

    • Used to compare two variances to determine if they are significantly different.
    • Commonly applied in the context of ANOVA to test the equality of means across multiple groups.
    • Assumes that the data is normally distributed and that samples are independent.
    • Helps in assessing the overall significance of a model in regression analysis.
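SciPy does not ship a ready-made two-variance F-test, but one can be sketched from the F distribution directly (the data below are made up; the two-sided p-value doubles the smaller tail):

```python
import numpy as np
from scipy.stats import f

def variance_f_test(a, b):
    """Two-sided F-test for equality of two variances (assumes normal data)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    F = a.var(ddof=1) / b.var(ddof=1)      # ratio of sample variances
    dfn, dfd = len(a) - 1, len(b) - 1
    p = 2 * min(f.cdf(F, dfn, dfd), f.sf(F, dfn, dfd))
    return F, min(p, 1.0)

a = [4.1, 5.2, 3.9, 4.8, 5.0, 4.4]        # low-spread group (invented)
b = [4.0, 12.0, 1.0, 9.0, 2.5, 7.5]       # high-spread group (invented)
F, p = variance_f_test(a, b)
```

Because this test is sensitive to departures from normality, Levene's test is often preferred in practice when that assumption is doubtful.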
  5. ANOVA (one-way, two-way)

    • One-way ANOVA: Tests for differences in means among three or more independent groups based on one factor.
    • Two-way ANOVA: Examines the effect of two independent variables on a dependent variable and their interaction.
    • Assumes normality, homogeneity of variances, and independence of observations.
    • Useful for determining if at least one group mean is different from the others.
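A one-way ANOVA is a single call to SciPy's `f_oneway`; the three groups below are invented, with the third deliberately shifted upward:

```python
from scipy.stats import f_oneway

g1 = [5.1, 4.9, 5.3, 5.0]
g2 = [5.2, 5.0, 5.1, 4.8]
g3 = [7.9, 8.1, 8.0, 7.8]   # this group's mean is clearly higher
F, p = f_oneway(g1, g2, g3)
```

A significant result only says that at least one mean differs; a post-hoc test (e.g., Tukey's HSD) is needed to identify which groups differ.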
  6. Regression analysis

    • Used to model the relationship between a dependent variable and one or more independent variables.
    • Can be simple (one independent variable) or multiple (more than one independent variable).
    • Helps in predicting outcomes and understanding the strength of relationships.
    • Assumes linearity, independence, homoscedasticity, and normality of residuals.
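Simple linear regression, including the significance test on the slope, is available via SciPy's `linregress` (the x/y values are made up, chosen to lie near y = 2x):

```python
from scipy.stats import linregress

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]   # invented data, roughly y = 2x
res = linregress(x, y)
# res.slope, res.intercept, res.rvalue, res.pvalue, res.stderr
```

`res.pvalue` tests the null hypothesis that the true slope is zero, i.e., that x has no linear relationship with y.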
  7. Mann-Whitney U test

    • A non-parametric test used to compare differences between two independent groups.
    • Does not assume normal distribution and is suitable for ordinal data or non-normally distributed interval data.
    • Tests whether one of the two groups tends to have larger values than the other.
    • Useful when sample sizes are small or when data does not meet the assumptions of the T-test.
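SciPy implements this as `mannwhitneyu`; the two small groups below are invented and completely separated, so the U statistic for the first group is 0:

```python
from scipy.stats import mannwhitneyu

a = [1.2, 1.5, 1.1, 1.8, 1.4]   # hypothetical group A
b = [2.9, 3.1, 2.7, 3.4, 3.0]   # hypothetical group B (all values larger)
U, p = mannwhitneyu(a, b, alternative="two-sided")
```

With samples this small, SciPy uses an exact p-value rather than the normal approximation.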
  8. Wilcoxon signed-rank test

    • A non-parametric test used to compare two related samples or repeated measurements on a single sample.
    • Tests whether the paired differences are centered on zero (a shift in the median of the differences rather than the mean), making it suitable for ordinal data.
    • Does not assume normality and is an alternative to the paired T-test.
    • Useful for analyzing before-and-after scenarios or matched pairs.
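A before-and-after comparison with SciPy's `wilcoxon` looks like this (the paired scores are invented; every subject's score drops, so the smaller rank sum is 0):

```python
from scipy.stats import wilcoxon

before = [85, 88, 90, 92, 87, 86, 91, 89]   # invented paired scores
after  = [80, 84, 86, 87, 83, 82, 87, 85]
stat, p = wilcoxon(before, after)
```

The function ranks the absolute paired differences and reports the smaller of the positive and negative rank sums as the test statistic.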
  9. Kruskal-Wallis test

    • A non-parametric alternative to one-way ANOVA for comparing three or more independent groups.
    • Tests whether the samples originate from the same distribution without assuming normality.
    • Suitable for ordinal data or non-normally distributed interval data.
    • Determines if at least one group differs significantly from the others.
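The SciPy call mirrors `f_oneway` but ranks the pooled data instead (groups below are invented, with the third clearly shifted):

```python
from scipy.stats import kruskal

g1 = [1, 2, 3, 4]
g2 = [2, 3, 4, 5]
g3 = [10, 11, 12, 13]   # clearly shifted group
H, p = kruskal(g1, g2, g3)
```

Like ANOVA, a significant H only indicates that some group differs; pairwise follow-up tests (e.g., Dunn's test) locate the difference.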
  10. Pearson correlation test

    • Measures the strength and direction of the linear relationship between two continuous variables.
    • Produces a correlation coefficient (r) ranging from -1 to 1, indicating the degree of association.
    • Assumes that both variables are normally distributed and have a linear relationship.
    • Useful for identifying potential relationships that may warrant further investigation.
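SciPy's `pearsonr` returns both the coefficient and a p-value for the null hypothesis of no linear association (the data below are invented, close to a straight line):

```python
from scipy.stats import pearsonr

x = [10, 20, 30, 40, 50]
y = [12, 24, 33, 41, 52]    # invented values, nearly linear in x
r, p = pearsonr(x, y)
```

Remember that r measures only linear association: a strong nonlinear relationship can still produce an r near zero.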