Pub Date: 2022-08-09 · DOI: 10.1027/1015-5759/a000724
Denis G. Dumas, Yixiao Dong, Daniel M. McNeish
Abstract. The degree to which test scores can support justified and fair decisions about demographically diverse participants has been an important aspect of educational and psychological testing for millennia. In the last 30 years, this aspect of measurement has come to be known as consequential validity, and it has sparked scholarly debate as to how responsible psychometricians should be for the fairness of the tests they create and how the field might be able to quantify that fairness and communicate it to applied researchers and other stakeholders of testing programs. Here, we formulate a relatively simple-to-calculate ratio coefficient that is meant to capture how well the scores from a given test can predict a criterion free from the undue influence of student demographics. We posit three example calculations of this Consequential Validity Ratio (CVR): one where the CVR is quite strong, another where the CVR is more moderate, and a third where the CVR is weak. We provide preliminary suggestions for interpreting the CVR and discuss its utility in instances where new tests are being developed, tests are being adapted to a new population, or the fairness of an established test has become an empirical question.
Title: How Fair Is My Test? (European Journal of Psychological Assessment)
Pub Date: 2022-08-03 · DOI: 10.1027/1015-5759/a000725
L. Menghini, M. Pastore, C. Balducci
Abstract. Experience sampling methods are increasingly used in workplace stress assessment, yet rarely developed and validated following the available best practices. Here, we developed and evaluated parsimonious measures of momentary stressors (Task Demand and Task Control) and the Italian adaptation of the Multidimensional Mood Questionnaire as an indicator of momentary strain (Negative Valence, Tense Arousal, and Fatigue). Data from 139 full-time office workers who received seven experience sampling questionnaires per day over 3 workdays suggested satisfactory validity (including weak invariance cross-level isomorphism), level-specific reliability, and sensitivity to change. The scales also showed substantial correlations with retrospective measures of the corresponding or similar constructs and a degree of sensitivity to work sampling categories (type and mean of job task, people involved). Opportunities and recommendations for the investigation and the routine assessment of workplace stress are discussed.
Title: Workplace Stress in Real Time
Pub Date: 2022-07-27 · DOI: 10.1027/1015-5759/a000721
D. Jankowska, M. Karwowski
Abstract. Across five studies (total N > 3,600), we report the psychometric properties of the Polish version of the Vividness of Visual Imagery Questionnaire (VVIQ-2PL). Confirmatory factor analysis confirmed a unidimensional structure of this instrument; measurement invariance concerning participants’ gender was established as well. The VVIQ-2PL showed excellent test-retest reliability, high internal consistency, and adequate construct validity. As predicted, art students scored significantly higher in visual mental imagery than the non-artist group. We discuss these findings alongside future research directions and possible modifications of VVIQ-2PL.
Title: How Vivid Is Your Mental Imagery?
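The internal-consistency figures reported across these validation studies are typically Cronbach's alpha. A minimal sketch of that computation, using invented toy data (the item scores below are illustrative, not from any of the studies):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# Toy data only; real scales have more items and respondents.

def cronbach_alpha(items):
    """items: one inner list of scores per item, aligned across the
    same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(it) for it in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# 3 items (rows) x 5 respondents (columns), invented scores:
scores = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 4],
    [4, 2, 5, 3, 5],
]
print(round(cronbach_alpha(scores), 3))  # → 0.886
```

Note that alpha is only one of the reliability coefficients used in these papers; level-specific and composite reliability (e.g., Gilmer-Feldt) require multilevel or factor-analytic machinery not sketched here.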
Pub Date: 2022-07-27 · DOI: 10.1027/1015-5759/a000726
W. Hart, Joshua T. Lambert, Charlotte Kinrade
Abstract. Entitlement has attracted interest across various social science disciplines due to its broad connection to selfish decision-making outcomes and mental health. Although unidimensional entitlement scales have been widely used, these scales conflate vulnerable- and grandiose-based entitlement forms. The Psychological Entitlement Scale – Grandiose-Based and Vulnerable-Based (PES-G/V) was recently devised to measure these entitlement forms. Prior work has supported the structure and construct validity of the PES-G/V, but no research has addressed the measurement invariance (MI) of the PES-G/V. Hence, we examined MI in relation to gender, two popular sampling frames in psychology studies (US MTurk participants and US college participants), and age. Results supported scalar MI across levels of each of the grouping variables. In sum, the structural properties of the PES-G/V seemed robust to the group distinctions.
Title: Investigating Measurement Invariance of the Psychological Entitlement Scale – Grandiose-Based and Vulnerable-Based
Pub Date: 2022-07-27 · DOI: 10.1027/1015-5759/a000723
Boris Forthmann, R. Beaty, D. Johnson
Abstract. Semantic distance scoring provides an attractive alternative to other scoring approaches for responses in creative thinking tasks. In addition, evidence in support of semantic distance scoring has increased over the last few years. In one recent approach, it has been proposed to combine multiple semantic spaces to better balance the idiosyncratic influences of each space. Thereby, final semantic distance scores for each response are represented by a composite or factor score. However, semantic spaces are not necessarily equally weighted in mean scores, and the usage of factor scores requires high levels of factor determinacy (i.e., the correlation between estimates and true factor scores). Hence, in this work, we examined the weighting underlying mean scores, mean scores of standardized variables, factor loadings, weights that maximize reliability, and equally effective weights on common verbal creative thinking tasks. Both empirical and simulated factor determinacy, as well as Gilmer-Feldt’s composite reliability, were mostly good to excellent (i.e., > .80) across two task types (Alternate Uses and Creative Word Association), eight samples of data, and all weighting approaches. Person-level validity findings were further highly comparable across weighting approaches. We discuss in detail the nuances and challenges observed for the different weightings, as well as the question of using composite vs. factor scores.
Title: Semantic Spaces Are Not Created Equal – How Should We Weigh Them in the Sequel?
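Semantic distance scoring is commonly operationalized as cosine distance between a response and the task cue in a vector space; the paper above then combines such scores across multiple spaces. A minimal sketch under that common operationalization (the 3-dimensional toy vectors are invented; real semantic spaces are high-dimensional embeddings):

```python
# Cosine distance: 1 - cos(u, v). Larger distance from the cue is read
# as a semantically more remote, and hence more original, response.
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1 - dot / norms

# Invented toy vectors for an Alternate Uses cue and two responses:
cue = [1.0, 0.2, 0.0]      # e.g., "brick"
common = [0.9, 0.3, 0.1]   # a mundane use, near the cue
remote = [0.1, 0.8, 0.6]   # a remote use, far from the cue

assert cosine_distance(cue, remote) > cosine_distance(cue, common)
```

Averaging such distances over several independently trained spaces, as the paper investigates, is one way to dampen the idiosyncrasies of any single space; the weighting of the spaces in that average is exactly the question the abstract raises.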
Pub Date: 2022-07-21 · DOI: 10.1027/1015-5759/a000719
V. Arias, Fernando P. Ponce, A. Martínez-Molina
Abstract. In survey data, inconsistent responses due to careless/insufficient effort (C/IE) can lead to problems of replicability and validity. However, data cleaning prior to the main analyses is not yet a standard practice. We investigated the effect of C/IE responses on the structure of personality survey data. For this purpose, we analyzed the structure of the Core-Self Evaluations scale (CSE-S), including the detection of aberrant responses in the study design. While the original theoretical model of the CSE-S assumes that the construct is unidimensional (Judge et al., 2003), recent studies have argued for a multidimensional solution (positive CSE and negative CSE). We hypothesized that this multidimensionality is not substantive but a result of the tendency of C/IE data to generate spurious dimensions. We estimated the confirmatory models before and after removing highly inconsistent response vectors in two independent samples (6% and 4.7%). The analysis of the raw samples clearly favored retaining the two-dimensional model. In contrast, the analysis of the clean datasets suggested the retention of a single factor. A mere 6% C/IE response rate showed enough power to confound the results of the factor analysis. This result suggests that the factor structure of positive and negative CSE factors is spurious, resulting from uncontrolled wording variance produced by a limited proportion of highly inconsistent response vectors.
Title: How a Few Inconsistent Respondents Can Confound the Structure of Personality Survey Data
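The screening the authors recommend can be done with several standard C/IE indices; the paper's own detection method is not reproduced here. One widely used index, sketched with invented data, is the longstring index: the longest run of identical consecutive answers, where very long runs suggest careless straight-lining:

```python
# Longstring index: length of the longest run of identical consecutive
# responses. Response vectors below are invented for illustration.

def longstring(responses):
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

attentive = [4, 2, 5, 3, 3, 1, 4, 2, 5, 1]
careless = [3, 3, 3, 3, 3, 3, 3, 3, 2, 3]

print(longstring(attentive))  # → 2
print(longstring(careless))   # → 8, a flat run suggesting C/IE
```

In practice, the cutoff separating attentive from careless respondents depends on scale length and content, and longstring is usually combined with other indices (e.g., person-total correlation, response time) rather than used alone.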
Pub Date: 2022-07-15 · DOI: 10.1027/1015-5759/a000722
Sophie Gerdel, Anna Dalla Rosa, M. Vianello
Abstract. This paper reports on the development of a unidimensional short scale for measuring career calling (UMCS-7). The scale has been developed drawing from the theoretical model behind the Unified Multidimensional Calling Scale (UMCS; Vianello et al., 2018), according to which calling is composed of Passion, Prosociality, Purpose, Pervasiveness, Sacrifice, Transcendent Summons, and Identity. The UMCS-7 integrates classical and modern conceptualizations of career calling and can be used when time constraints prevent using the UMCS. The UMCS-7 has been validated in a sample of Italian workers (N = 1,246) using exploratory and confirmatory factor analysis. A sample of US employees (N = 165) was used to estimate measurement invariance across languages, establishing the equivalence of factor loadings, all but two intercepts, and all error variances. The UMCS-7 demonstrated nearly perfect convergent validity with the UMCS (r = .97), excellent internal consistency (α = .86 in the Italian sample; α = .87 in the US sample), and satisfactory concurrent validity with job satisfaction, life satisfaction, and turnover intentions.
Title: Psychometric Properties and Measurement Invariance of a Short Form of the Unified Multidimensional Calling Scale (UMCS)
Pub Date: 2022-07-01 · DOI: 10.1027/1015-5759/a000732
D. Iliescu, Samuel Greiff
Title: Some Thoughts and Considerations on Accommodations in Testing
Pub Date: 2022-06-23 · DOI: 10.1027/1015-5759/a000718
C. Martarelli, A. Baillifard, C. Audrin
Abstract. The Short Boredom Proneness Scale (SBPS) has recently been developed. Using a standard confirmatory factor analysis, we report on the structural validation of the French SBPS, which provided support for the original construct. A network analysis (n = 490) revealed the structure of the relationships between the SBPS and the two facets of the Curiosity and Exploration Inventory-II (CEI-II): connections between the boredom and curiosity items were positive, whereas connections between the boredom and exploration items were negative. To evaluate measurement invariance, we compared the French-speaking sample (n = 490) with an English-speaking sample (n = 364). Full configural, metric, and scalar invariance was established; thus, we provide a valid French translation of a widely used measure of boredom that may benefit future research.
Title: A Trait-Based Network Perspective on the Validation of the French Short Boredom Proneness Scale
Pub Date: 2022-06-23 · DOI: 10.1027/1015-5759/a000716
A. Robe, A. Dobrean, R. Balázsi, R. Georgescu, C. Păsărelu, E. Predescu
Abstract. The purpose of this study was to examine evidence of reliability, validity, and equity for the Romanian version of the Screen for Child Anxiety Related Emotional Disorders (SCARED), using the 41-item child ratings (1,106 children and adolescents aged 9 to 16 years) and parent ratings (485 parents). Both versions of the instrument showed moderate to high internal consistency, with most subscales reaching acceptable levels. Results supported the original five-factor structure of the scale. Positive correlations with other measures of anxiety symptoms (the Penn State Worry Questionnaire, the Social Anxiety Scale for Adolescents, and the Children’s Automatic Thoughts Scale), together with weak correlations with the rule-breaking and aggressive-behavior syndrome scales of the Youth Self-Report and the Child Behavior Checklist, demonstrated construct validity comparable to that of the original version. In addition, strict measurement invariance across age, gender, and clinical status was established. The current research provides evidence of reliability, validity, and equity for the SCARED, arguing for its utility as a screening instrument for anxiety symptoms. Implications for theory, assessment, and future research are discussed.
Title: Factor Structure and Measurement Invariance Across Age, Gender, and Clinical Status of the Screen for Children Anxiety Related Emotional Disorders