Researchers have become increasingly interested in response times to survey items as a measure of cognitive effort. We used machine learning to develop a model that predicts response times from 41 attributes of survey items (e.g., question length, response format, linguistic features), using data collected in a large, general-population sample. The resulting algorithm can be used to derive reference values for the expected response times of most commonly used survey items.
Audit correspondence studies are field experiments that test for discriminatory behavior in active markets. Researchers measure discrimination by comparing how responsive individuals ("audited units") are to correspondence from different types of people. This paper elaborates on the tradeoffs researchers face between sending audited units a single piece of correspondence and sending them multiple pieces, especially when the correspondence includes less common identity signals. We argue that when researchers use audit correspondence studies to measure discrimination against individuals who infrequently interact with audited units, they raise the risk that these audited units become aware they are being studied or otherwise act differently. We also argue that sending multiple pieces of correspondence can increase the risk of detection. We present the results of an audit correspondence study that demonstrates how detection can occur for these reasons, leading to significantly attenuated (biased toward zero) estimates of discrimination.