The density test of the running variable is a statistical method used to examine the distribution of a continuous running variable around the cutoff point in regression discontinuity designs. The test checks for manipulation, that is, for a discontinuity in the density of the running variable at the cutoff, which would threaten the validity of causal inferences drawn from both sharp and fuzzy regression discontinuity designs. A smooth density around the cutoff suggests that units could not precisely sort themselves to one side of the threshold, so treatment assignment near the cutoff is as good as random, which is essential for establishing causal relationships.
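As a rough illustration of the idea, here is a minimal sketch in Python that counts observations in a small window on each side of the cutoff and runs a binomial test of equal proportions. This is a simplified stand-in for the formal McCrary density test (which compares local density estimates on either side of the cutoff); the function name, the simulated data, and the bandwidth choice are all hypothetical.

```python
import numpy as np
from scipy.stats import binomtest

def manipulation_check(x, cutoff, h):
    """Crude density/manipulation check near a cutoff (illustrative only).

    Counts observations within bandwidth h on each side of the cutoff and
    tests whether the split is consistent with a 50/50 proportion. A very
    small p-value hints at bunching on one side of the threshold.
    """
    n_left = int(np.sum((x >= cutoff - h) & (x < cutoff)))
    n_right = int(np.sum((x >= cutoff) & (x <= cutoff + h)))
    test = binomtest(n_right, n_left + n_right, p=0.5)
    return n_left, n_right, test.pvalue

# Hypothetical example: a smoothly distributed running variable
rng = np.random.default_rng(42)
x = rng.normal(loc=0.0, scale=1.0, size=2000)
n_left, n_right, pval = manipulation_check(x, cutoff=0.0, h=0.25)
print(n_left, n_right, pval)
```

Because the simulated running variable is smooth at the cutoff, the counts on the two sides should be similar and the p-value should not be small; with real data, a clear imbalance would be a red flag for manipulation.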