Pub Date: 2024-12-01. Epub Date: 2024-02-07. DOI: 10.1177/10731911231225191
Bronwen Perley-Robertson, Kelly M Babchishin, L Maaike Helmus
Missing data are pervasive in risk assessment, but their impact on predictive accuracy has remained largely unexplored. Common techniques for handling missing risk data include summing available items or proration; however, multiple imputation is a more defensible approach that has not been methodically tested against these simpler techniques. We compared the validity of these three missing data techniques across six conditions using STABLE-2007 (N = 4,286) and SARA-V2 (N = 455) assessments from men on community supervision in Canada. Condition 1 was the observed data (low missingness), and Conditions 2 to 6 were generated missing data conditions, whereby 1% to 50% of items per case were randomly deleted in 10% increments. Relative predictive accuracy was unaffected by missing data, and simpler techniques performed just as well as multiple imputation, but summed totals underestimated absolute risk. The current study therefore provides empirical justification for using proration when data are missing within a sample.
Title: The Effect of Missing Item Data on the Relative Predictive Accuracy of Correctional Risk Assessment Tools. Journal: Assessment. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11490059/pdf/
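The proration-versus-summing contrast at the heart of this abstract is a one-line rescaling; a minimal sketch (the 13-item scale and scores below are invented, not STABLE-2007 data):

```python
def summed_total(items):
    """Sum only the available items; missing items contribute 0,
    which biases the total downward."""
    return sum(x for x in items if x is not None)

def prorated_total(items):
    """Prorate: take the mean of the available items and scale it
    back up to the full item count."""
    observed = [x for x in items if x is not None]
    if not observed:
        raise ValueError("no observed items to prorate from")
    return sum(observed) / len(observed) * len(items)

# Invented 13-item assessment (0-2 per item) with 3 items missing.
scores = [2, 1, 0, 2, None, 1, 2, None, 0, 1, 2, None, 1]
print(summed_total(scores), round(prorated_total(scores), 2))
```

Summing yields 12 here while proration yields 15.6; that gap is the underestimation of absolute risk the study reports for summed totals.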
Pub Date: 2024-12-01. Epub Date: 2024-02-12. DOI: 10.1177/10731911241229568
Amanda M Raines, Kate E Clauss, Dustin Seidler, Nicholas P Allan, Jon D Elhai, Jennifer J Vasterling, Joseph I Constans, Kelly P Maieritsch, C Laurel Franklin
The PTSD Checklist for DSM-5 (PCL-5) and the Clinician-Administered PTSD Scale for DSM-5 (CAPS-5) are two of the most widely used and well-validated PTSD measures providing total and subscale scores that correspond with DSM-5 PTSD symptoms. However, there is little information about the utility of subscale scores above and beyond the total score for either measure. The current study compared the proposed DSM-5 four-factor model to a bifactor model across both measures using a sample of veterans (N = 1,240) presenting to a Veterans Affairs (VA) PTSD specialty clinic. The correlated factors and bifactor models for both measures evidenced marginal-to-acceptable fit and were retained for further evaluation. Bifactor specific indices suggested that both measures exhibited a strong general factor but weak lower-order factors. Structural regressions revealed that most of the lower-order factors provided little utility in predicting relevant outcomes. Although additional research is needed to make definitive statements about the utility of PCL-5 and CAPS-5 subscales, study findings point to numerous weaknesses. As such, caution should be exercised when using or interpreting subscale scores in future research.
Title: A Bifactor Evaluation of Self-Report and Clinician-Administered Measures of PTSD in Veterans.
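The "strong general factor but weak lower-order factors" verdict comes from bifactor-specific indices such as omega-hierarchical and explained common variance (ECV); a sketch under assumed standardized loadings (all numbers invented, not PCL-5/CAPS-5 estimates):

```python
def bifactor_indices(general, specifics):
    """Omega-hierarchical and ECV from standardized bifactor loadings.
    general:   one general-factor loading per item
    specifics: one list per specific factor, zeros where an item
               does not load on that factor."""
    g_sum = sum(general)
    spec_sums = [sum(s) for s in specifics]
    # Unique variance per item, assuming standardized loadings.
    unique = [1 - g ** 2 - sum(s[i] ** 2 for s in specifics)
              for i, g in enumerate(general)]
    total_var = g_sum ** 2 + sum(t ** 2 for t in spec_sums) + sum(unique)
    omega_h = g_sum ** 2 / total_var
    g_common = sum(g ** 2 for g in general)
    s_common = sum(l ** 2 for s in specifics for l in s)
    ecv = g_common / (g_common + s_common)
    return omega_h, ecv

# Six items with strong general loadings and weak specific loadings.
omega_h, ecv = bifactor_indices(
    [0.7] * 6,
    [[0.3, 0.3, 0.3, 0, 0, 0], [0, 0, 0, 0.3, 0.3, 0.3]])
```

High values of both indices (about .81 and .84 for these invented loadings) are the pattern that argues for interpreting a total score rather than subscale scores.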
Pub Date: 2024-12-01. Epub Date: 2024-02-21. DOI: 10.1177/10731911241229575
Jone Martínez-Bacaicoa, Miguel A Sorrel, Manuel Gámez-Guadix
Technology-facilitated sexual violence (TFSV) includes different forms of digital violence, such as online gender-based violence, online gender- and sexuality-based violence, digital sexual harassment, online sexual coercion, and nonconsensual pornography. The aim of this study was to design and validate a measure to assess the perpetration and victimization of each dimension of TFSV. The relationships between the different dimensions and differences by gender and sexual orientation were also analyzed. The participants were a sample of 2,486 adults (69% women) from Spain, aged between 16 and 79 years (M = 25.95, SD = 9.81). The Technology-Facilitated Sexual Violence Scales were found to be valid and reliable instruments, supporting our recommendation for the use of these scales. Network analysis and solution-based exploratory factor analyses showed that the dimensions of online sexual coercion and nonconsensual pornography clustered together. All the perpetration variables were related to sexism. Finally, cis women and nonheterosexual people reported higher victimization scores overall compared to cis men and heterosexuals, respectively, while cis men reported higher perpetration scores overall than cis women.
Title: Development and Validation of Technology-Facilitated Sexual Violence Perpetration and Victimization Scales Among Adults.
Pub Date: 2024-12-01. Epub Date: 2024-02-09. DOI: 10.1177/10731911241229060
R Noah Padgett, Matthew T Lee, Renae Wilkinson, Heather Tsavaris, Tyler J VanderWeele
An individual's flourishing is sustained by and dependent on their community's well-being. We provide one of the first studies of a measure of communal subjective well-being, focusing on individuals' relationships with their community. Using two samples from the Greater Columbus, Ohio region, we provide evidence of the reliability and validity of the Subjective Community Well-being (SCWB) assessment. The five domains of the SCWB are Good Relationships (α = .92), Proficient Leadership (α = .93), Healthy Practices (α = .92), Satisfying Community (α = .88), and Strong Mission (α = .81). A community-based sample (N = 1,435) and an online sample of Columbus residents (N = 692) were scored on the SCWB and compared across domains. We found evidence that the SCWB scores differentiate between active and less active community members. We discuss the appropriate uses of the SCWB as a measure of well-being and provide recommendations for research that could profitably utilize the SCWB measure to examine community well-being.
Title: Reliability and Validity of a Multidimensional Measure of Subjective Community Well-Being.
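The α values quoted per SCWB domain are Cronbach's alpha, which can be computed directly from raw item responses; a stdlib-only sketch (the four respondents and three items are invented):

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(rows[0])                 # number of items
    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [sample_var([r[i] for r in rows]) for i in range(k)]
    total_var = sample_var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Invented 5-point responses: 4 respondents x 3 items.
rows = [[3, 4, 3], [4, 5, 4], [2, 3, 2], [5, 5, 5]]
alpha = cronbach_alpha(rows)
```

Alpha rises when item variances are small relative to the variance of the total, i.e., when items move together across respondents.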
Pub Date: 2024-12-01. Epub Date: 2024-02-27. DOI: 10.1177/10731911241234104
Julia Simner, Louisa J Rinaldi, Jamie Ward
Misophonia is a sound sensitivity disorder characterized by a strong aversion to specific sounds (e.g., chewing). Here we present the Sussex Misophonia Scale for Adults (SMS-Adult), within an online open-access portal, with automated scoring and results that can be shared ethically with users and professionals. Receiver operating characteristic analyses show our questionnaire to be "excellent" and "good-to-excellent" at classifying misophonia, both when dividing our n = 501 adult participants by recruitment stream (self-declared misophonics vs. general population), and again when dividing them by a prior measure of misophonia (as misophonics vs. non-misophonics). Factor analyses identified a five-factor structure in our 39 Likert-type items, and these were Feelings/Isolation, Life consequences, Intersocial reactivity, Avoidance/Repulsion, and Pain. Our measure also elicits misophonia triggers, each rated for their commonness in misophonia. We offer our open-access online tool for wider use (www.misophonia-hub.org), embedded within a well-stocked library of resources for misophonics, researchers, and clinicians.
Title: An Automated Online Measure for Misophonia: The Sussex Misophonia Scale for Adults. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11528938/pdf/
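The "excellent" classification labels refer to the area under the ROC curve, which equals the probability that a randomly chosen case outscores a randomly chosen non-case (ties counting half); a brute-force sketch with invented scores:

```python
def roc_auc(case_scores, control_scores):
    """AUC via its rank (Mann-Whitney) interpretation: the chance a
    random case outscores a random control, ties counting half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            wins += 1.0 if c > k else 0.5 if c == k else 0.0
    return wins / (len(case_scores) * len(control_scores))

# Invented questionnaire totals: misophonic vs. control respondents.
auc = roc_auc([8, 7, 6, 5], [4, 3, 6, 2])
```

The result here (about .91) would fall in the commonly cited "excellent" band of AUC ≥ .90; that benchmark is a general convention, not a value taken from the article.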
Pub Date: 2024-12-01. Epub Date: 2024-03-04. DOI: 10.1177/10731911241234734
Tjhin Wiguna, Kusuma Minayati, Fransiska Kaligis, Sylvia Dominic Teh, Maria Krishnandita, Nabella Meriem Annisa Fitri, Raden Irawati Ismail, Adilla Hastika Fasha, Steven, Raymond Bahana
Executive function influences children's learning abilities and organizes their cognitive processes, behaviors, and emotions. This cross-sectional study examined whether an Indonesian Computer-Based Game (ICbG) prototype could be used as a Computer-Based Game Inventory for Executive Function (CGIEF) in children and adolescents. The study was conducted with 200 children, adolescents, and their parents. The parents completed the Behavior Rating Inventory of Executive Functioning (BRIEF) questionnaire, and the children and adolescents completed the CGIEF. Confirmatory factor analysis (CFA) and structural equation modeling (SEM) were performed using LISREL Version 8.80. The CGIEF construct showed acceptable fit (normal theory-weighted least squares = 15.75, p > .05). SEM analysis showed that the theoretical construct of the CGIEF was a valid predictor of executive function. The critical t value of the pathway was 2.45, and normal theory-weighted least squares was 5.74 (p > .05). The construct reliability (CR) for CGIEF was 0.91. Concurrent validity was assessed using the Bland-Altman plot, and the coefficient of repeatability (bias/mean) was nearly zero between the t scores of total executive functions of the CGIEF and BRIEF. This preliminary study showed that the CGIEF can be useful as a screening tool for executive dysfunction, metacognitive deficits, and behavioral regulation problems among children and adolescents in clinical samples.
Title: Using the Indonesian Computer-Based Game Prototype as a Computer-Based Game Inventory for Executive Function in Children and Adolescents: A Confirmatory Factor Analysis and Concurrent Validity Study.
Pub Date: 2024-12-01. Epub Date: 2024-03-08. DOI: 10.1177/10731911241234220
Daniel Ventus, Patrik Söderberg
Research on resilience is a growing field, and resilience has been conceptualized and operationalized in multiple ways. The aim of this study was to compare the Brief Resilient Coping Scale (BRCS), a conventional measure of resilience, with within-person process indicators derived from the experience sampling method (ESM). A sample of 177 teachers from southern Finland participated in the study, commencing with a startup session followed by an 8-day ESM period. Through twice-daily prompts, participants reported their immediate positive and negative affect as well as recent stressors encountered, such as workload and challenging social interactions. As expected, within-person variation in affect was predicted by stressors. However, contrary to expectations, individual differences in affective reactivity to stressors were not predicted by the BRCS (β for positive affect = -.20, 95% CI [-.51, .11]; β for negative affect = .33, 95% CI [-.07, .69]). Item response theory analyses of the BRCS revealed problems with precision. The results call into question the validity of measuring resilience using single administrations of retrospective self-report questionnaires such as the BRCS.
Title: Are In-the-Moment Resilience Processes Predicted by Questionnaire-Based Measures of Resilience? Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11484166/pdf/
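Affective reactivity in designs like this is, for each person, the within-person slope of momentary affect on stressor exposure; those per-person slopes are then related to the questionnaire score. A minimal OLS-slope sketch (the four ESM observations are invented; the study's multilevel model is richer than this):

```python
def reactivity_slope(stressors, affect):
    """OLS slope of momentary affect on stressor count across one
    person's ESM prompts: that person's affective reactivity."""
    n = len(stressors)
    mx = sum(stressors) / n
    my = sum(affect) / n
    sxx = sum((x - mx) ** 2 for x in stressors)
    sxy = sum((x - mx) * (y - my) for x, y in zip(stressors, affect))
    return sxy / sxx

# One invented person: negative affect rises with stressor count.
slope = reactivity_slope([0, 1, 2, 3], [1, 2, 3, 5])
```

Across a sample, slopes like this one can be regressed on BRCS totals; the null βs reported above mean the BRCS failed to predict them.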
Pub Date: 2024-12-01. Epub Date: 2024-03-11. DOI: 10.1177/10731911241235465
Luciano Giromini, Claudia Pignolo, Alessandro Zennaro, Martin Sellbom
Our study compared the impact of administering Symptom Validity Tests (SVTs) and Performance Validity Tests (PVTs) in in-person versus remote formats and assessed different approaches to combining validity test results. Using the MMPI-2-RF, IOP-29, IOP-M, and FIT, we assessed 164 adults, with half instructed to feign mild traumatic brain injury (mTBI) and half to respond honestly. Within each subgroup, half completed the tests in person, and the other half completed them online via videoconferencing. Results from 2 × 2 analyses of variance showed no significant effects of administration format on SVT and PVT scores. When comparing feigners to controls, the MMPI-2-RF RBS exhibited the largest effect size (d = 3.05) among all examined measures. Accordingly, we conducted a series of two-step hierarchical logistic regression models by entering the MMPI-2-RF RBS first, followed by each other SVT and PVT individually. We found that the IOP-29 and IOP-M were the only measures that yielded incremental validity beyond the effects of the MMPI-2-RF RBS in predicting group membership.
Title: Using the MMPI-2-RF, IOP-29, IOP-M, and FIT in the In-Person and Remote Administration Formats: A Simulation Study on Feigned mTBI.
Pub Date: 2024-12-01. Epub Date: 2024-03-11. DOI: 10.1177/10731911241236315
Pablo Ezequiel Flores-Kanter, Jesús M Alvarado
The adoption of open science practices (OSPs) is crucial for promoting transparency and robustness in research. We conducted a systematic review to assess the frequency and trends of OSPs in psychometric studies focusing on measures of suicidal thoughts and behavior. We analyzed publications from two international databases, examining the use of OSPs such as open access publication, preregistration, provision of open materials, and data sharing. Our findings indicate a lack of adherence to OSPs in psychometric studies of suicide. The majority of manuscripts were published under restricted access, and preregistrations were not utilized. The provision of open materials and data was rare, with limited access to instruments and analysis scripts. Open access versions (preprints/postprints) were scarce. The low adoption of OSPs in psychometric studies of suicide calls for urgent action. Embracing a culture of open science will enhance transparency, reproducibility, and the impact of research in suicide prevention efforts.
Title: The State of Open Science Practices in Psychometric Studies of Suicide: A Systematic Review.
Pub Date: 2024-12-01. Epub Date: 2024-02-15. DOI: 10.1177/10731911241229566
Tiffany Wu, Christina Weiland, Meghan McCormick, JoAnn Hsueh, Catherine Snow, Jason Sachs
The Hearts and Flowers (H&F) task is a computerized executive functioning (EF) assessment that has been used to measure EF from early childhood to adulthood. It provides data on accuracy and reaction time (RT) across three different task blocks (hearts, flowers, and mixed). However, there is a lack of consensus in the field on how to score the task, which makes it difficult to interpret findings across studies. The current study, which includes a demographically diverse population of kindergarteners from Boston Public Schools (N = 946), compares the predictive and concurrent validity of 30 ways of scoring H&F, each with a different combination of accuracy, RT, and task block(s). Our exploratory results provide evidence supporting the use of a two-vector average score based on Zelazo et al.'s approach of adding accuracy and RT scores together only after individuals pass a certain accuracy threshold. Findings have implications for scoring future tablet-based developmental assessments.
Title: One Score to Rule Them All? Comparing the Predictive and Concurrent Validity of 30 Hearts and Flowers Scoring Approaches.
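The two-vector average idea (combine an accuracy vector with a rescaled RT vector only once accuracy clears a threshold) can be sketched as follows. Every constant below, the 0-to-5 point vectors, the RT clamp range, and the 80% threshold, is an illustrative assumption in the spirit of Zelazo et al.'s approach, not the study's exact scoring:

```python
def hf_two_vector_score(accuracy, median_rt,
                        rt_min=0.2, rt_max=3.0, threshold=0.8):
    """Accuracy vector (0-5 points) averaged with a rescaled
    reaction-time vector (0-5 points, faster = more points); the RT
    vector only enters once accuracy clears the threshold."""
    acc_pts = 5 * accuracy
    if accuracy <= threshold:
        return acc_pts            # RT is uninformative below the bar
    rt = min(max(median_rt, rt_min), rt_max)   # clamp to scoring range
    rt_pts = 5 * (rt_max - rt) / (rt_max - rt_min)
    return (acc_pts + rt_pts) / 2

fast_and_accurate = hf_two_vector_score(0.9, 1.0)
inaccurate = hf_two_vector_score(0.7, 1.0)
```

The threshold step encodes the rationale that speed is only meaningful once a child is responding accurately; below it, fast responses likely reflect guessing.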