Abstract
The use of survey experiments has surged in political science as a method for estimating causal effects. By far the most common design is the between-subjects design, in which the outcome is measured only posttreatment. This design relies heavily on recruiting a large number of subjects to achieve adequate statistical power. Alternative designs that involve repeated measurement of the dependent variable promise greater precision but are rarely used out of fears that these designs will bias treatment effects (e.g., due to consistency pressures). Across six studies, we assess this conventional wisdom by testing experimental designs against each other. Our results demonstrate that repeated measures designs substantially increase precision while introducing little to no bias. These designs also offer new insights into the nature of treatment effects. We conclude by encouraging researchers to adopt repeated measures designs and providing guidelines for when and how to use them.
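To make the precision argument concrete, the minimal simulation sketch below (not drawn from the paper; the sample size, effect size, and pretest-posttest correlation are illustrative assumptions) compares the sampling variability of a posttest-only between-subjects estimate with a change-score estimate from a pre-post repeated measures design.

```python
# Illustrative sketch only: compares a posttest-only between-subjects estimator
# with a pre-post (repeated measures) change-score estimator. All parameter
# values are assumptions for illustration, not results from the study.
import numpy as np

rng = np.random.default_rng(0)
n, tau, rho, sims = 500, 0.20, 0.7, 2000  # subjects, true effect, pre-post correlation, replications

post_only, pre_post = [], []
for _ in range(sims):
    treat = rng.integers(0, 2, n)                       # random assignment to treatment/control
    pre = rng.normal(0, 1, n)                           # pretreatment measure of the outcome
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, n) + tau * treat

    # Posttest-only estimator: simple difference in posttreatment means
    post_only.append(post[treat == 1].mean() - post[treat == 0].mean())

    # Repeated measures estimator: difference in mean pre-to-post change
    change = post - pre
    pre_post.append(change[treat == 1].mean() - change[treat == 0].mean())

print("posttest-only SE:", np.std(post_only))
print("change-score  SE:", np.std(pre_post))
# With a pre-post correlation of 0.7, the change-score standard error is roughly
# 20% smaller than the posttest-only one, so the same power can be reached with
# noticeably fewer subjects; regression adjustment for the pretest does even better.
```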
Supplementary materials
Appendix for "Increasing Precision in Survey Experiments Without Introducing Bias"