Development and validation of an episodic memory measure in the Mobile Toolbox (MTB): Arranging Pictures.

Stephanie Ruth Young, Elizabeth M Dworak, Miriam A Novack, Aaron J Kaat, Hubert Adam, Cindy J Nowinski, Zahra Hosseinian, Jerry Slotkin, Jordan Stoeger, Saki Amagai, Maria Varela Diaz, Anyelo Almonte Correa, Keith Alperin, Larsson Omberg, Michael Kellen, Monica R Camacho, Bernard Landavazo, Rachel L Nosheny, Michael W Weiner, Richard Gershon

Journal of Clinical and Experimental Neuropsychology. Published online 2024-05-16. DOI: 10.1080/13803395.2024.2353945. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11309919/pdf/
Abstract
Introduction: Arranging Pictures is a new episodic memory test based on the NIH Toolbox (NIHTB) Picture Sequence Memory measure and optimized for self-administration on a personal smartphone within the Mobile Toolbox (MTB). We describe evidence from three distinct validation studies.
Method: In Study 1, 92 participants self-administered Arranging Pictures on study-provided smartphones in the lab and were administered external measures of similar and dissimilar constructs by trained examiners to assess validity under controlled circumstances. In Study 2, 1,021 participants completed the external measures in the lab and self-administered Arranging Pictures remotely on their personal smartphones to assess validity in real-world contexts. In Study 3, 141 participants self-administered Arranging Pictures remotely twice with a two-week delay on personal iOS smartphones to assess test-retest reliability and practice effects.
Results: Internal consistency was good across samples (ρxx = .80 to .85, p < .001). Test-retest reliability was marginal (ICC = .49, p < .001), and there were significant practice effects after the two-week delay (ΔM = 3.21, 95% CI [2.56, 3.88]). As expected, correlations with convergent measures were significant and moderate to large in magnitude (ρ = .44 to .76, p < .001), while correlations with discriminant measures were small (ρ = .23 to .27, p < .05) or nonsignificant. Scores demonstrated significant negative correlations with age (ρ = -.32 to -.21, p < .001). Mean performance was slightly higher in the iOS group than in the Android group (iOS: M = 18.80, n = 635; Android: M = 17.11, n = 386; t(757.73) = 4.17, p < .001), but device type did not significantly influence the psychometric properties of the measure. Indicators of potential cheating were mixed: average scores were significantly higher in the remote samples (F(2, 850) = 11.415, p < .001), but there were not significantly more perfect scores.
Conclusion: The MTB Arranging Pictures measure demonstrated evidence of reliability and validity when self-administered on a personal device. Future research should examine the potential for cheating in remote settings and the properties of the measure in clinical samples.
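For readers who want a concrete sense of how the reliability and validity statistics reported above could be computed, below is a minimal sketch in Python using synthetic data and the scipy and pingouin libraries. The data, sample sizes, and column names are assumptions for illustration only; this is not the study's actual analysis code.

```python
# Illustrative sketch only: synthetic data and assumed column names,
# not the study's actual analysis pipeline.
import numpy as np
import pandas as pd
from scipy import stats
import pingouin as pg

rng = np.random.default_rng(0)
n = 141  # Study 3 test-retest sample size

# Hypothetical scores from two sessions two weeks apart
t1 = rng.normal(18, 5, n)
t2 = t1 + rng.normal(3.2, 4, n)  # simulated practice effect

# Test-retest reliability: intraclass correlation from long-format data
long = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "session": np.repeat(["t1", "t2"], n),
    "score": np.concatenate([t1, t2]),
})
icc = pg.intraclass_corr(data=long, targets="participant",
                         raters="session", ratings="score")
print(icc[["Type", "ICC", "pval"]])

# Practice effect: mean retest gain with a 95% confidence interval
diff = t2 - t1
ci = stats.t.interval(0.95, df=n - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"Mean gain = {diff.mean():.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")

# Convergent validity: Spearman correlation with an external memory measure
external = t1 + rng.normal(0, 4, n)  # hypothetical convergent score
rho, p = stats.spearmanr(t1, external)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# iOS vs. Android comparison: Welch's t-test (unequal variances)
ios = rng.normal(18.8, 5, 635)
android = rng.normal(17.1, 5, 386)
t_stat, p_val = stats.ttest_ind(ios, android, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.3f}")
```

With synthetic data the printed values will not match the published results; the sketch only shows the general form of the analyses (ICC for test-retest reliability, a paired-difference confidence interval for practice effects, Spearman correlations for convergent validity, and Welch's t-test for the device comparison).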
Journal overview:
Journal of Clinical and Experimental Neuropsychology (JCEN) publishes research on the neuropsychological consequences of brain disease, disorders, and dysfunction, and aims to promote the integration of theories, methods, and research findings in clinical and experimental neuropsychology. The primary emphasis of JCEN is to publish original empirical research pertaining to brain-behavior relationships and neuropsychological manifestations of brain disease. Theoretical and methodological papers, critical reviews of content areas, and theoretically relevant case studies are also welcome.