Call for Papers: “Digital Transformation and Psychological Assessment”
European Journal of Psychological Assessment
Pub Date: 2023-01-01. DOI: 10.1027/1015-5759/a000760
Direct Replication in Psychological Assessment Research
Mark S. Allen, D. Iliescu, Samuel Greiff
Pub Date: 2023-01-01. DOI: 10.1027/1015-5759/a000755
Validation of the Short Clance Impostor Phenomenon Scale (CIPS-10)
Bo Wang, Wendy Andrews, M. Bechtoldt, S. Rohrmann, Reinout E. de Vries
Pub Date: 2022-12-16. DOI: 10.1027/1015-5759/a000747
Abstract. We readdressed the multidimensionality of the Clance Impostor Phenomenon Scale (CIPS) by reanalyzing Rohrmann et al.’s (2016) dataset, which led to the development of an improved 10-item CIPS (CIPS-10). The validity of the CIPS-10 was further examined by correlating it with HEXACO personality traits and work-related outcomes in a newly collected sample of working adults (N = 294). Factor analyses, reliability coefficients, and validity coefficients indicated that reporting and interpreting the total scores of both the CIPS and the CIPS-10 was sufficient. We found the CIPS-10 to be positively related to Emotionality, job stress, and turnover intention, and negatively related to Conscientiousness, Honesty-Humility, Extraversion, Agreeableness, and job satisfaction. The findings offer support for the validity of the CIPS-10.
A Common Measurement Scale for Self-Report Instruments in Mental Health Care
E. de Beurs, S. Oudejans, B. Terluin
Pub Date: 2022-12-16. DOI: 10.1027/1015-5759/a000740
Abstract. The diversity of measures in clinical psychology hampers a straightforward interpretation of test results, complicates communication with the patient, and constitutes a challenge to the implementation of measurement-based care. In educational research and assessment, it is common practice to convert test scores to a common metric, such as T scores. We recommend this practice for clinical psychology as well, and propose and test a procedure to arrive at T scores approximating a normal distribution that can be applied to individual test scores. We established formulas to estimate normalized T scores from raw scale scores by regressing IRT-based θ scores on raw scores. With data from a large population sample and clinical samples, we established crosswalk formulas. Their validity was investigated by comparing calculated T scores with IRT-based T scores. The IRT approach and the formulas yielded very similar T scores, supporting the validity of the latter approach. Theoretical and practical advantages and disadvantages of both approaches, as well as alternative ways to convert scores to common metrics, are discussed. Provided that scale characteristics allow for their computation, T scores will help users better understand measurement results, making it easier for patients and practitioners to use test results in joint decision-making about the course of treatment.
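The crosswalk procedure summarized above lends itself to a brief illustration. The following is a hypothetical minimal sketch with invented calibration data and a simple linear fit; the study itself derived crosswalk formulas by regressing IRT-based θ scores on raw scores, and its exact functional form is not reproduced here:

```python
# Minimal sketch (assumed, not the authors' code) of a crosswalk from raw
# scale scores to normalized T scores (mean 50, SD 10): fit theta ≈ a + b*raw,
# then map theta to the T metric via T = 50 + 10*theta.
import statistics

def fit_crosswalk(raw_scores, theta_scores):
    """Least-squares line theta ≈ a + b * raw (simple linear crosswalk)."""
    mean_x = statistics.fmean(raw_scores)
    mean_y = statistics.fmean(theta_scores)
    sxx = sum((x - mean_x) ** 2 for x in raw_scores)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(raw_scores, theta_scores))
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

def raw_to_t(raw, a, b):
    """Convert a raw score to a T score; theta is on a z-like metric."""
    theta = a + b * raw
    return 50 + 10 * theta

# Invented calibration data: raw scale scores with IRT theta estimates.
raw = [5, 10, 15, 20, 25]
theta = [-1.6, -0.8, 0.0, 0.8, 1.6]
a, b = fit_crosswalk(raw, theta)
print(raw_to_t(15, a, b))  # the mean raw score maps to T = 50.0
```

In practice such a regression would be estimated on a large calibration sample, and a nonlinear (e.g., polynomial) form is often needed so that the resulting T scores approximate a normal distribution across the raw-score range.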
Don’t Keep It Too Simple
Beatrice Rammstedt, L. Roemer, D. Danner, Clemens M. Lechner
Pub Date: 2022-12-16. DOI: 10.1027/1015-5759/a000741
Abstract. When formulating questionnaire items, generally accepted rules include keeping the wording as simple as possible and avoiding double-barreled items. However, the empirical basis for these rules is sparse. The present study used an experimental design to systematically investigate whether simplifying the items of a personality scale and avoiding double-barreled items (i.e., items that contain multiple stimuli) markedly increases psychometric quality. Specifically, we compared the original items of the Big Five Inventory-2 – most of which are either double-barreled or can be regarded as complexly formulated – with simplified versions of the items. We tested the two versions in a large, heterogeneous sample (N = 2,234). The simplified versions did not possess better psychometric quality than their original counterparts; rather, they showed weaker factorial validity. Regarding item characteristics, reliability, and criterion validity, no substantial differences were identified between the original and simplified versions. These findings were also replicated in the subsample of lower-educated respondents, who are considered more sensitive to complex item formulations. Our study thus suggests that simplifying item wording and avoiding double-barreled items in a personality inventory does not improve the quality of a questionnaire; rather, using simpler (and consequently vaguer) item formulations may even decrease factorial validity.
Evaluating an Abbreviated Version of the Circumplex Team Scan Inventory of Within-Team Interpersonal Norms
K. Locke, Chris C. Martin
Pub Date: 2022-12-16. DOI: 10.1027/1015-5759/a000752
Abstract. The Circumplex Team Scan (CTS) assesses the degree to which a team’s interaction and communication norms reflect each segment (sixteenth) of the interpersonal circumplex. We developed and evaluated an abbreviated 16-item version, the CTS-16, which uses one CTS item to measure each segment. Undergraduates (n = 446) completing engineering course projects in 139 teams completed the CTS-16. CTS-16 items showed a good fit to confirmatory structural models (e.g., models expecting greater positive covariation between items theoretically closer on the circumplex). Individuals’ ratings sufficiently reflected team-level norms to justify averaging team members’ ratings. However, the marginal reliabilities of individual items suggest using the CTS-16 to assess general circumplex-wide patterns rather than specific segments. CTS-16 ratings correlated with respondents’ and their teammates’ ratings of team climate (inclusion, justice, psychological safety). Teams with more extraverted (introverted) members were perceived as having more confident/engaged (timid/hesitant) cultures. Members predisposed to social alienation perceived their team’s culture as relatively disrespectful/unengaged, but their teammates did not corroborate those perceptions. Overall, the results support the validity and utility of the CTS-16 and, more generally, of an interpersonal circumplex model of team culture.
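Averaging team members’ ratings to estimate team-level norms, as described above, is a simple aggregation step. This hypothetical sketch (function names and data invented, not taken from the article) shows the idea:

```python
# Illustrative sketch: average individual ratings within each team to
# estimate a team-level norm score. Names and data are hypothetical.
from collections import defaultdict

def team_means(ratings):
    """ratings: iterable of (team_id, score); returns {team_id: mean score}."""
    totals = defaultdict(lambda: [0.0, 0])
    for team, score in ratings:
        totals[team][0] += score
        totals[team][1] += 1
    return {team: total / count for team, (total, count) in totals.items()}

# Example: two teams, ratings on a 1-5 scale.
ratings = [("A", 4), ("A", 5), ("A", 3), ("B", 2), ("B", 4)]
print(team_means(ratings))  # {'A': 4.0, 'B': 3.0}
```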
Homework Approach Scale for Middle School Students
Fuyi Yang, Jianzhong Xu, Kasia Gallo, J. C. Núñez
Pub Date: 2022-11-11. DOI: 10.1027/1015-5759/a000746
Abstract. This investigation assessed the psychometric properties of the Homework Approach Scale (HAS) with 1,072 students in Grades 7 and 8. Having randomly divided the sample into two subsamples, we conducted exploratory factor analysis (EFA) on subsample 1 and confirmatory factor analysis (CFA) on subsample 2. Factorial results indicated that the HAS contained two factors: Deep Approach and Surface Approach. Given sufficient measurement invariance, factor means were compared across gender and grade levels. Males scored significantly lower in Deep Approach yet higher in Surface Approach. Differences in Deep Approach and Surface Approach across grade levels were nonsignificant. Congruent with theoretical predictions, homework completion and mathematics achievement were related positively to Deep Approach and negatively to Surface Approach. This investigation offers robust evidence that the HAS is a valid measure for assessing students’ approaches to homework.
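The split-sample validation strategy mentioned above (EFA on one random half, CFA on the other) can be sketched as follows. The function name, seed, and use of participant indices are illustrative only; the factor analyses themselves would be run with dedicated statistical software:

```python
# Illustrative sketch (not the authors' code) of the split-sample strategy:
# randomly divide one sample into two equal halves, reserving the first
# for exploratory and the second for confirmatory factor analysis.
import random

def split_sample(ids, seed=42):
    """Shuffle participant ids and split them into two equal halves."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

efa_ids, cfa_ids = split_sample(range(1072))
print(len(efa_ids), len(cfa_ids))  # 536 536
```

Fixing the random seed makes the subsample assignment reproducible, which matters when the EFA-derived structure must later be tested on exactly the held-out half.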
Measurement Invariance of the Satisfaction With Life Scale in South Korea
Mohsen Joshanloo
Pub Date: 2022-11-11. DOI: 10.1027/1015-5759/a000745
Abstract. This study examined the cross-group and temporal measurement invariance of the Satisfaction With Life Scale in Korea. A nationally representative sample (N = 13,824) and a convenience sample collected at four time points over approximately 14 months (N = 338) were used. Full measurement invariance (i.e., equal factor loadings and intercepts) was supported across groups based on gender, age, education, data collection method (face-to-face vs. non-face-to-face), and two alternative translations of the scale. Temporal measurement invariance was also supported. Accordingly, the same underlying construct is measured, and the items of the scale are understood and answered similarly, across groups and across time in Korea. A supplemental analysis revealed that Item 5 was not invariant between Korea and Japan, with Korean respondents tending to rate this item higher than Japanese respondents.
Out-of-Level Cognitive Testing of Children with Special Educational Needs
Timo Gnambs, Lena Nusser
Pub Date: 2022-11-02. DOI: 10.1027/1015-5759/a000736
Abstract. Children with special educational needs in the area of learning (SEN-L) have severe learning disabilities and often exhibit substantial cognitive impairments. Standard assessment instruments of basic cognitive abilities designed for regular school children are therefore frequently too complex for them and thus unable to provide reliable proficiency estimates. The present study evaluated whether out-of-level testing with the German version of the Cognitive Abilities Test, using test versions developed for younger age groups, might suit the needs of these children. To this end, N = 511 children with SEN-L and N = 573 low-achieving children without SEN-L attending fifth grade in Germany were administered four tests measuring reasoning and verbal comprehension that were designed for fourth graders. The results showed that children with SEN-L exhibited significantly more missing responses than children without SEN-L. Moreover, three of the four tests were still too difficult for them. Importantly, no substantial differential response functioning was found between children with and without SEN-L. Thus, out-of-level testing might represent a feasible strategy to assess basic cognitive functioning in children with SEN-L. However, comparative interpretations would require additional norms or linked test versions that place results from out-of-level tests on a common metric.