Routinely randomize potential sources of measurement reactivity to estimate and adjust for biases in subjective reports.

Ruben C. Arslan*, Anne K. Reitz, Julie C. Driebe, Tanja M. Gerlach, Lars Penke

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

With the advent of online and app-based studies, researchers in psychology are making increasing use of repeated subjective reports. The new methods open up opportunities to study behavior in the field and to map causal processes, but they also pose new challenges. Recent work has added initial elevation bias to the list of common pitfalls; here, negative states (i.e., thoughts and feelings) are reported as higher on the first day of assessment than on later days. This article showcases a new approach to addressing this and other measurement reactivity biases. Specifically, we employed a planned missingness design in a daily diary study of more than 1,300 individuals who were assessed over a period of up to 70 days to estimate and adjust for measurement reactivity biases. We found that day of first item presentation, item order, and item number were associated with only negligible bias: Items were not answered differently depending on when and where they were shown. Initial elevation bias may thus be more limited than has previously been reported, or it may act only at the level of the survey, not at the item level. We encourage researchers to make design choices that will allow them to routinely assess measurement reactivity biases in their studies. Specifically, we advocate the routine randomization of item display and order, as well as of the timing and frequency of measurement. Randomized planned missingness makes it possible to empirically gauge how fatigue, familiarity, and learning interact to bias responses.

Translational Abstract: Planned missingness designs, in which researchers expressly decide to ask each participant only a random subset of questions on each measurement occasion, are a useful tool to keep surveys short and constructs broad. Here, we emphasize another benefit of randomly determining whether and where an item should be shown: It allows researchers to estimate biases related to measurement reactivity. Researchers who use repeated measures are especially aware of the potential bias caused by repeatedly answering the same items. In our case study, we find only negligible bias, but even non-negligible biases can be adjusted for. We recommend that researchers routinely randomize potential sources of measurement reactivity in their substantive research. Adopting this approach can lead psychological research from a culture in which we estimate measurement reactivity correlationally, discuss it in footnotes, and hope for the best to one in which we randomize and estimate sources of measurement reactivity to adjust for and prevent these biases.
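
As a rough illustration of the planned missingness design described in the abstract, the sketch below randomly decides which items a participant sees on each diary day and in which order, while logging the design variables the article names as potential sources of reactivity bias (day of first item presentation, item position, and number of items shown). This is a minimal sketch in Python; the item names, function names, and the choice of a fixed number of items per day are illustrative assumptions, not the authors' actual implementation.

    import random

    # Hypothetical pool of negative-state items; names are illustrative only.
    NEGATIVE_STATE_ITEMS = ["irritable", "lonely", "worried", "stressed", "down"]

    def plan_daily_survey(item_pool, n_show, rng):
        """Randomly pick which items are displayed today and shuffle their order."""
        shown = rng.sample(item_pool, n_show)
        rng.shuffle(shown)
        return shown

    def plan_diary_schedule(item_pool, n_days, n_show_per_day, seed=0):
        """Build a long-format schedule that records, for every displayed item,
        the metadata needed to model reactivity: day of first presentation,
        item position within the survey, and number of items shown that day."""
        rng = random.Random(seed)
        first_shown = {}   # item -> diary day on which it first appeared
        rows = []
        for day in range(1, n_days + 1):
            items = plan_daily_survey(item_pool, n_show_per_day, rng)
            for position, item in enumerate(items, start=1):
                first_shown.setdefault(item, day)
                rows.append({
                    "day": day,
                    "item": item,
                    "position": position,            # item order within the survey
                    "n_items_today": len(items),     # survey length that day
                    "day_first_shown": first_shown[item],
                })
        return rows

    if __name__ == "__main__":
        schedule = plan_diary_schedule(NEGATIVE_STATE_ITEMS, n_days=70, n_show_per_day=3)
        print(schedule[:3])

Because display and order are randomized, the logged design variables can later enter a model of the item responses (e.g., a mixed-effects regression) as covariates, so that initial elevation and other reactivity biases can be estimated and adjusted for rather than merely assumed away.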

Original language: English
Pages (from-to): 175-185
Number of pages: 11
Journal: Psychological Methods
Volume: 26
Issue number: 2
DOIs
Publication status: Published - 01 Apr 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2020 American Psychological Association

Keywords

  • experience sampling
  • measurement reactivity
  • planned missingness
  • repeated measures

ASJC Scopus subject areas

  • Psychology (miscellaneous)
