Measurement Invariance of the General Health Questionnaire GHQ 12-Item Version (GHQ-12)
Pub Date: 2023-08-23 | DOI: 10.1027/1015-5759/a000785
Anastasia Ushakova, K. Mckenzie, C. Hughes, Johanna Stoye, A. Murray
Abstract: Understanding how the levels, patterns, predictors, and outcomes of mental health issues differ in students relative to non-students can inform more effective and better-tailored prevention and intervention for mental health in higher education contexts. However, comparisons of mental health in student and non-student groups depend on the critical but seldom-tested assumption of measurement invariance. In this study, we use data from the UK Household Longitudinal Study (UKHLS) to evaluate the measurement invariance of scores from a commonly used mental health measure, the General Health Questionnaire 12-item version (GHQ-12), across students and non-students. Using a bifactor model to account for wording factors, we found measurement invariance up to the scalar level across student and non-student groups. This supports the use of the instrument for comparing levels of mental health issues, as well as candidate risk factors and outcomes, across students and non-students.
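Invariance testing of this kind proceeds by fitting a sequence of nested multi-group models (configural, metric, scalar) and comparing their fit. A minimal sketch of the comparison step only, using placeholder fit statistics rather than values from the study:

```python
from scipy.stats import chi2

# Placeholder fit statistics for nested multi-group invariance models
# (hypothetical numbers for illustration; not taken from the study).
models = {
    "configural": {"chisq": 512.3, "df": 96, "cfi": 0.951},
    "metric":     {"chisq": 528.9, "df": 107, "cfi": 0.949},
    "scalar":     {"chisq": 547.1, "df": 118, "cfi": 0.947},
}

def delta_chi2(restricted, free):
    """Likelihood-ratio test comparing two nested invariance models."""
    d_chi = restricted["chisq"] - free["chisq"]
    d_df = restricted["df"] - free["df"]
    return d_chi, d_df, chi2.sf(d_chi, d_df)

for free_name, restr_name in [("configural", "metric"), ("metric", "scalar")]:
    d_chi, d_df, p = delta_chi2(models[restr_name], models[free_name])
    d_cfi = models[restr_name]["cfi"] - models[free_name]["cfi"]
    # Common heuristics: a non-significant delta-chi2 and |delta-CFI| <= .01
    # are read as support for the more constrained level of invariance.
    print(f"{free_name} -> {restr_name}: d_chi2 = {d_chi:.1f}, "
          f"d_df = {d_df}, p = {p:.3f}, d_CFI = {d_cfi:.3f}")
```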
{"title":"Measurement Invariance of the General Health Questionnaire GHQ 12-Item Version (GHQ-12)","authors":"Anastasia Ushakova, K. Mckenzie, C. Hughes, Johanna Stoye, A. Murray","doi":"10.1027/1015-5759/a000785","DOIUrl":"https://doi.org/10.1027/1015-5759/a000785","url":null,"abstract":"Abstract: Understanding how levels, patterns, predictors, and outcomes of mental health issues differs in students relative to non-students can inform more effective and better tailored prevention and intervention for mental health in higher education contexts. However, comparisons of mental health in student and non-student groups depend on the critical but seldom-tested assumption of measurement invariance. In this study, we use data from the UK household longitudinal study (UKLS) to evaluate the measurement invariance of the scores from a commonly used mental health measure: the General Health Questionnaire 12-item version (GHQ-12) across students and non-students. Using a bifactor model to take account of wording factors we found measurement invariance up to the scalar level for students and non-student groups. This provides support for the use of instruments for comparing mental health issue levels and candidate risk factors and outcomes across students and non-students.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45998260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Capturing Impact Messages in Parent–Child Interactions
Pub Date: 2023-08-16 | DOI: 10.1027/1015-5759/a000781
N. Dippel, Johannes Zimmermann, E. Brakemeier, H. Christiansen
Abstract: The interpersonal circumplex (IPC) is an established model for describing individual and dyadic interpersonal phenomena along the orthogonal dimensions of control and affiliation. This study aims to adapt and validate the Impact Message Inventory (IMI), which assesses impact messages (perceptions of and covert reactions to interpersonal styles), for parents and children according to the IPC. We adapted the German IMI (Caspar et al., 2016) for a younger age group. Overall, 531 parents and 162 children completed the IMI@YoungAge (IMI@YA). We investigated the reliability and circumplex structure of the octant scales. We also examined the complementarity of parents' and children's impact messages and associations with health-related constructs. Most IMI@YA scales demonstrated acceptable internal consistency, but the expected circumplex structure could not be replicated. Using factor scores based on exploratory factor analysis, we were able to confirm the complementarity hypothesis for affiliation, but not for control. We detected low-to-moderate correlations with health-related constructs. The IMI@YA is intended to assess the impact messages of parents and children, but the lack of circumplex structure implies that the items and scales need to be adjusted. We discuss the IPC's potential for investigating parent–child interaction.
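Under a circumplex model, correlations between octant scales should decline as a cosine function of their angular separation. A minimal numpy sketch of this descriptive check, with simulated data standing in for real octant scores:

```python
import numpy as np

# Eight octant scales of the interpersonal circumplex at 45-degree spacing.
octants = ["PA", "BC", "DE", "FG", "HI", "JK", "LM", "NO"]
angles = np.deg2rad(np.arange(0, 360, 45))

# Circumplex expectation: r(i, j) is approximately a + b * cos(angle_i -
# angle_j) with b > 0, so correlations fall off with angular separation.
a, b = 0.2, 0.5  # hypothetical baseline and circumplex components
expected = a + b * np.cos(angles[:, None] - angles[None, :])

# "observed" stands in for the empirical 8x8 correlation matrix of the
# octant scales (e.g., df[octants].corr().to_numpy() on real data).
rng = np.random.default_rng(0)
noise = rng.normal(0, 0.05, expected.shape)
observed = expected + (noise + noise.T) / 2  # symmetric perturbation

# Descriptive check: correlate off-diagonal observed correlations with
# the cosine-based expectation.
mask = ~np.eye(len(octants), dtype=bool)
fit = np.corrcoef(observed[mask], expected[mask])[0, 1]
print(f"correspondence with circumplex pattern: r = {fit:.2f}")
```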
Measuring Gelotophobia, Gelotophilia, and Katagelasticism in Italy and Canada Using PhoPhiKat-30
Pub Date: 2023-08-02 | DOI: 10.1027/1015-5759/a000787
C. Lau, F. Chiesi, A. Fermani, M. Muzi, Gonzalo del Moral Arroyo, Francesco Bruno, W. Ruch, L. Quilty, D. Saklofske, Carla Canestrari
Abstract: The PhoPhiKat-30 is a self-report instrument describing laughter- and ridicule-related personality traits: gelotophobia, gelotophilia, and katagelasticism. The present study assessed the measurement properties of the newly translated Italian PhoPhiKat-30 across participants in Italy and Canada using multidimensional item response theory. Italian (N = 326) and Canadian (N = 1,467) participants completed the Italian and English PhoPhiKat-30, respectively. Parallel analysis supported the three-factor model in the Italian sample. Conditional reliability estimates showed strong precision (> 0.80) for gelotophobia and gelotophilia along the latent continuum (−1.15 < θ < 3.08 and −1.69 < θ < 3.09, respectively). Katagelasticism was measured precisely over only a limited range (0.98 < θ < 2.85) of the latent attribute, suggesting that future studies should add new items targeting low to moderate levels of katagelasticism. Item discrimination parameters varied under Reckase's multidimensional normal-ogive model (mean MDISC = 0.79). Five items showed uniform differential item functioning (DIF; McFadden's pseudo-R² > .035 or β > .10) when comparing the Italian and English PhoPhiKat-30, with English items showing more agreement at the same level of the latent trait. The Italian PhoPhiKat-30 has good item discrimination across the latent continuum and showed cross-cultural equivalence for most items.
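Conditional reliability of this kind derives from the test information function; for a unidimensional 2PL model it is rel(θ) = I(θ) / (I(θ) + 1). A sketch with hypothetical item parameters, not the PhoPhiKat-30 estimates:

```python
import numpy as np

# Hypothetical 2PL item parameters (discrimination a, difficulty b) for a
# 10-item scale; for illustration only, not the PhoPhiKat-30 estimates.
a = np.array([1.8, 1.4, 2.2, 1.6, 1.2, 2.0, 1.5, 2.1, 1.1, 2.4])
b = np.array([-1.0, -0.5, 0.0, 0.3, 0.8, 1.2, 1.5, 2.0, 2.4, 2.8])

theta = np.linspace(-4, 4, 161)  # grid over the latent continuum

# Item information in the 2PL model: I_i(theta) = a_i**2 * P * (1 - P),
# where P is the item response function; test information is the sum.
P = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
info = (a[:, None] ** 2 * P * (1 - P)).sum(axis=0)

# Conditional reliability relative to a unit-variance latent trait.
rel = info / (info + 1.0)

precise = theta[rel > 0.75]
if precise.size:
    print(f"rel > .75 for {precise.min():.2f} < theta < {precise.max():.2f}")
else:
    print("reliability never exceeds .75 on this grid")
```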
{"title":"Measuring Gelotophobia, Gelotophilia, and Katagelasticism in Italy and Canada Using PhoPhiKat-30","authors":"C. Lau, F. Chiesi, A. Fermani, M. Muzi, Gonzalo del Moral Arroyo, Francesco Bruno, W. Ruch, L. Quilty, D. Saklofske, Carla Canestrari","doi":"10.1027/1015-5759/a000787","DOIUrl":"https://doi.org/10.1027/1015-5759/a000787","url":null,"abstract":"Abstract: The PhoPhiKat-30 is a self-report instrument for describing personality related to laughter and ridicule including gelotophobia, gelotophilia, and katagelasticism. The present study assessed the measurement properties of the newly translated Italian PhoPhiKat-30 across participants in Italy and Canada using multidimensional item response theory. Italian ( N = 326) and Canadian ( N = 1,467) participants completed the Italian and English PhoPhiKat-30, respectively. The parallel analysis supported the three-factor model in Italy. Conditional reliability estimates showed strong precision (> 0.80) of gelotophobia and gelotophilia along the latent continuum (−1.15 < θ < 3.08 and −1.69 < θ < 3.09, respectively). Katagelasticism showed a limited range (0.98 < θ < 2.85) for the latent attribute precisely measured, suggesting that new items that address the low to moderate difficulty of katagelasticism should be added in future studies. Item discrimination parameters varied across Reckase’s multidimensional normal-ogive model (MDISC mean = 0.79). Five items had uniform differential item functioning (DIF; McFadden’s pseudo R2 > .035 or β > .10) when comparing the Italian and English PhoPhiKat-30, with English items showing more agreement at the same level of the latent trait. The Italian PhoPhiKat-30 has good item discrimination across the latent continuum and showed cross-cultural equivalence for most items.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45945618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Effect of Response Formats on Response Style Strength
Pub Date: 2023-08-02 | DOI: 10.1027/1015-5759/a000779
Mirka Henninger, Hansjörg Plieninger, T. Meiser
Abstract: Many researchers use self-report data to examine abilities, personalities, or attitudes. At the same time, there is widespread concern that response styles, such as the tendency to give extreme, midscale, or acquiescent responses, may threaten data quality. As an alternative to post hoc control of response styles using psychometric models, a priori control using specific response formats may be a means to reduce biasing response style effects in self-report data in day-to-day research practice. Previous research has suggested that response styles are less influential in a Drag-and-Drop (DnD) format than in the traditional Likert-type format. In this article, we further examine the advantage of the DnD format, test its generalizability, and investigate its underlying mechanisms. In two between-participants experiments, we tested different versions of the DnD format against the Likert format. We found no evidence of reduced response style influence in any of the DnD conditions, nor did we find any difference between the conditions in the validity of the measures with respect to external criteria. We conclude that adaptations of response formats, such as the DnD format, may be promising but require more thorough examination before they can be recommended as a means to reduce response style influence in psychological measurement.
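Response style strength in Likert data is often quantified with simple count-based indices before any model-based control. A sketch of three common indices, computed on simulated responses:

```python
import numpy as np
import pandas as pd

# Simulated 5-point Likert responses (rows = respondents, columns = items)
# standing in for real questionnaire data.
rng = np.random.default_rng(1)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, 12)))

# Extreme response style: share of answers in the endpoint categories.
ers = responses.isin([1, 5]).mean(axis=1)

# Midpoint response style: share of answers in the middle category.
mrs = responses.eq(3).mean(axis=1)

# Acquiescence: share of agreement (4 or 5), computed on raw responses
# before reverse-keyed items are recoded, so that yea-saying is not
# conflated with content-driven agreement.
ars = responses.isin([4, 5]).mean(axis=1)

print(pd.DataFrame({"ERS": ers, "MRS": mrs, "ARS": ars}).describe().round(2))
```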
{"title":"The Effect of Response Formats on Response Style Strength","authors":"Mirka Henninger, Hansjörg Plieninger, T. Meiser","doi":"10.1027/1015-5759/a000779","DOIUrl":"https://doi.org/10.1027/1015-5759/a000779","url":null,"abstract":"Abstract: Many researchers use self-report data to examine abilities, personalities, or attitudes. At the same time, there is a widespread concern that response styles, such as the tendency to give extreme, midscale, or acquiescent responses, may threaten data quality. As an alternative to post hoc control of response styles using psychometric models, a priori control using specific response formats may be a means to reduce biasing response style effects in self-report data in day-to-day research practice. Previous research has suggested that response styles were less influential in a Drag-and-Drop (DnD) format compared to the traditional Likert-type format. In this article, we further examine the advantage of the DnD format, test its generalizability, and investigate its underlying mechanisms. In two between-participants experiments, we tested different versions of the DnD format against the Likert format. We found no evidence for reduced response style influence in any of the DnD conditions, nor did we find any difference between the conditions in terms of the validity of the measures to external criteria. We conclude that adaptations of response formats, such as the DnD format, may be promising, but require more thorough examination before recommending them as a means to reduce response style influence in psychological measurement.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41548272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the Clance Impostor Phenomenon Scale Through the Lens of a Bifactor Model
Pub Date: 2023-08-02 | DOI: 10.1027/1015-5759/a000786
Kay Brauer, R. Proyer
Abstract: The Clance Impostor Phenomenon Scale (CIPS) is the most frequently used self-report instrument for the assessment of the Impostor Phenomenon (IP). The literature has provided mixed findings on the factorial structure of the CIPS. We extend previous work on the German-language CIPS by testing a bifactor exploratory factor model in two large, independently collected samples (total N = 1,794). Our analyses show that the bifactor model comprising a general IP factor and three group factors (labeled Luck, Fear of Failure, and Discount) fits the data well, and 7 of the 20 items could be clearly assigned to the factors. The general factor (ω ≥ .90) and facets (α ≥ .67) show satisfactory internal consistencies and differential correlations with attributional styles and the broader Big Five and HEXACO personality traits. Our findings support the use of the CIPS total score and expand the understanding of the CIPS' multidimensional measurement model. Taking limitations into account, the identification and use of fine-grained facets contribute to understanding the correlates and consequences of the IP. We discuss potential improvements to the CIPS.
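For a bifactor solution like this, the contribution of the general factor to total-score reliability is usually summarized with omega total and omega hierarchical. A sketch of both coefficients from standardized loadings (hypothetical values, not the CIPS estimates):

```python
import numpy as np

# Hypothetical standardized bifactor loadings for an 8-item example with
# one general factor and two group factors (not the CIPS estimates).
lam_g  = np.array([0.65, 0.70, 0.60, 0.55, 0.68, 0.62, 0.58, 0.66])
lam_s1 = np.array([0.40, 0.35, 0.45, 0.30, 0.00, 0.00, 0.00, 0.00])
lam_s2 = np.array([0.00, 0.00, 0.00, 0.00, 0.38, 0.42, 0.33, 0.36])

# Unique variances implied by standardized loadings.
unique = 1.0 - lam_g**2 - lam_s1**2 - lam_s2**2

common = lam_g.sum() ** 2 + lam_s1.sum() ** 2 + lam_s2.sum() ** 2
total = common + unique.sum()

omega_total = common / total        # reliability of the total score
omega_h = lam_g.sum() ** 2 / total  # share due to the general factor alone
print(f"omega_total = {omega_total:.2f}, omega_h = {omega_h:.2f}")
```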
{"title":"Understanding the Clance Impostor Phenomenon Scale Through the Lens of a Bifactor Model","authors":"Kay Brauer, R. Proyer","doi":"10.1027/1015-5759/a000786","DOIUrl":"https://doi.org/10.1027/1015-5759/a000786","url":null,"abstract":"Abstract: The Clance Impostor Phenomenon Scale (CIPS) is the most frequently used self-report instrument for the assessment of the Impostor Phenomenon (IP). The literature provided mixed findings on the factorial structure of the CIPS. We extend previous work on the German-language CIPS by testing a bifactor exploratory factor model in two large and independently collected samples ( Ntotal = 1,794). Our analyses show that the bifactor model comprising a general IP factor and three group factors (labeled Luck, Fear of Failure, and Discount) fits the data well and 7 of the 20 items could be clearly assigned to the factors. The general factor (ω ≥ .90) and facets (α ≥ .67) show satisfying internal consistencies and differential correlations to attributional styles and the broader Big Five and HEXACO personality traits. Our findings support the use of the CIPS total score and expand the understanding of the CIPS’ multidimensional measurement model. Taking limitations into account, the identification and use of fine-grained facets contribute to understanding the correlates and consequences of the IP. We discuss potential improvements to the CIPS.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49045812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Response Tendencies to Positively and Negatively Worded Items of the Rosenberg Self-Esteem Scale With Eye-Tracking Methodology
Pub Date: 2023-07-01 | DOI: 10.1027/1015-5759/a000772
Chrystalla C. Koutsogiorgi, M. Michaelides
Abstract: The Rosenberg Self-Esteem Scale (RSES) was developed as a unitary scale to assess attitudes toward the self. Previous studies have shown differences in responses and psychometric indices between the positively and negatively worded items, suggesting differential processing of responses. The current study used eye-tracking methodology to examine differences in response behaviors toward two positively and two negatively worded items of the RSES and explored whether those differences were more pronounced among individuals higher in neuroticism, controlling for verbal abilities and mood. Eighty-seven university students completed a computerized version of the scale while their responses, response times, and eye movements were recorded with the Gazepoint GP3 HD eye-tracker. In linear mixed-effects models, the two negatively worded items elicited higher self-esteem scores (i.e., stronger disagreement) and different response processes (e.g., longer viewing times) than the two positively worded items. Neuroticism predicted lower responses and more revisits to item statements. Eye-tracking can enhance the examination of response tendencies, the role of item wording, and its interaction with individual characteristics at different stages of the response process.
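Models of this form can be fit in Python with statsmodels; the sketch below assumes a long-format data frame with one row per participant and item, and simulated columns (participant, wording, neuroticism, viewing_time) chosen purely for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one row per participant x item, with a
# wording indicator (0 = positive, 1 = negative) and a trait score.
rng = np.random.default_rng(2)
n_participants, n_items = 87, 4
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_items),
    "wording": np.tile([0, 0, 1, 1], n_participants),
    "neuroticism": np.repeat(rng.normal(0, 1, n_participants), n_items),
})
df["viewing_time"] = (
    2.0 + 0.4 * df["wording"] + 0.1 * df["neuroticism"]
    + rng.normal(0, 0.5, len(df))
)

# Linear mixed-effects model: fixed effects for wording and neuroticism,
# random intercepts for participants.
model = smf.mixedlm("viewing_time ~ wording + neuroticism",
                    df, groups=df["participant"])
print(model.fit().summary())
```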
{"title":"Response Tendencies to Positively and Negatively Worded Items of the Rosenberg Self-Esteem Scale With Eye-Tracking Methodology","authors":"Chrystalla C. Koutsogiorgi, M. Michaelides","doi":"10.1027/1015-5759/a000772","DOIUrl":"https://doi.org/10.1027/1015-5759/a000772","url":null,"abstract":"Abstract: The Rosenberg Self-Esteem Scale (RSES) was developed as a unitary scale to assess attitudes toward the self. Previous studies have shown differences in responses and psychometric indices between the positively and negatively worded items, suggesting differential processing of responses. The current study examined differences in response behaviors toward two positively and two negatively worded items of the RSES with eye-tracking methodology and explored whether those differences were more pronounced among individuals with higher neuroticism, controlling for verbal abilities and mood. Eighty-seven university students completed a computerized version of the scale, while their responses, response time, and eye movements were recorded through the Gazepoint GP3 HD eye-tracker. In linear mixed-effects models, two negatively worded items elicited higher scores (elicited stronger disagreement) in self-esteem, and different response processes, for example, longer viewing times, than two positively worded items. Neuroticism predicted lower responses and more revisits to item statements. Eye-tracking can enhance the examination of response tendencies and the role of wording and its interaction with individual characteristics at different stages of the response process.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42847801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Theory Matters
Pub Date: 2023-07-01 | DOI: 10.1027/1015-5759/a000776
Carolin Hahnel, Alexander J. Jung, Frank Goldhammer
Abstract: Following an extended perspective of evidence-centered design, this study provides a methodological exemplar of the theory-based construction of process indicators from log data. We investigated decision-making processes in web search as the target construct, assuming that individuals follow a search heuristic (focusing on search results vs. websites as the primary information source) and a stopping rule (following a satisficing vs. sampling strategy). Drawing on these assumptions, we describe our reasoning for identifying the empirical evidence needed and for selecting an assessment to obtain this evidence, in order to derive process indicators that represent groups differentiated by combinations of search heuristic and stopping rule. To evaluate our approach, we reanalyzed the processing behavior of 150 university students who were asked in four tasks to select a specific website from a list of five search results. We determined the process indicators per item and conducted multiple cluster analyses to investigate group recovery. For each item, we found three clusters, two of which matched our assumptions. Additionally, we explored the consistency of students' cluster membership across items and investigated its relationship with students' skills in evaluating online information. Based on the results, we discuss the tradeoff between construct breadth and process elaboration when deriving meaningful process indicators.
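Cluster recovery on process indicators can be probed with standard tools such as k-means and silhouette scores. A sketch with simulated indicators (the variable definitions here are illustrative assumptions, not the paper's exact indicators):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Simulated per-item process indicators from log data, e.g. time on the
# search result list, number of website visits, time to final decision
# (illustrative stand-ins, not the paper's exact indicators).
rng = np.random.default_rng(3)
X = np.vstack([
    rng.normal([20.0, 1.0, 5.0], 2.0, (50, 3)),   # results-focused, satisficing
    rng.normal([8.0, 4.0, 12.0], 2.0, (50, 3)),   # website-focused, sampling
    rng.normal([14.0, 2.0, 20.0], 2.0, (50, 3)),  # a third, mixed pattern
])
X = StandardScaler().fit_transform(X)

# Compare cluster solutions by silhouette score to probe group recovery.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k = {k}: silhouette = {silhouette_score(X, labels):.2f}")
```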
{"title":"Theory Matters","authors":"Carolin Hahnel, Alexander J. Jung, Frank Goldhammer","doi":"10.1027/1015-5759/a000776","DOIUrl":"https://doi.org/10.1027/1015-5759/a000776","url":null,"abstract":"Abstract: Following an extended perspective of evidence-centered design, this study provides a methodological exemplar of the theory-based construction of process indicators from log data. We investigated decision-making processes in web search as the target construct, assuming that individuals follow a heuristic search (focusing on search results vs. websites as a primary information source) and stopping rule (following a satisficing vs. sampling strategy). Drawing on these assumptions, we describe our reasoning for identifying the empirical evidence needed and selecting an assessment to obtain this evidence to derive process indicators that represent groups differentiated by search and stopping rule combinations. To evaluate our approach, we reanalyzed the processing behavior of 150 university students who were requested in four tasks to select a specific website from a list of five search results. We determined the process indicators per item and conducted multiple cluster analyses to investigate group recovery. For each item, we found three clusters, two of which matched our assumptions. Additionally, we explored the consistency of students’ cluster membership across items and investigated their relationship with students’ skills in evaluating online information. Based on the results, we discuss the tradeoff between construct breadth and process elaboration for deriving meaningful process indicators.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42031973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Process Data in Computer-Based Assessment
Pub Date: 2023-07-01 | DOI: 10.1027/1015-5759/a000790
M. A. Lindner, Samuel Greiff
Abstract: This editorial provides a framework for and overview of potential uses of process data in computer-based assessments and of next steps for research, taking a broad perspective on the field and exploring future directions and emerging trends. After briefly reflecting on the evolution of process data use in research and assessment practice, we discuss three key challenges, namely (1) the theoretical grounding and validation of process data indicators, (2) assessment design for process data, and (3) ethical standards. Considering best-practice approaches in all three areas and current discussions in the literature, we conclude that a focus is needed on the following: (1) strong, holistic theoretical frameworks for validating process data; (2) reliable, standardized data collections, preferably with a top-down approach to developing test items and preregistered hypotheses; and (3) ethical norms for data collection and data use, together with guidelines for responsible inference, including restraint in decisions based on process data.
{"title":"Process Data in Computer-Based Assessment","authors":"M. A. Lindner, Samuel Greiff","doi":"10.1027/1015-5759/a000790","DOIUrl":"https://doi.org/10.1027/1015-5759/a000790","url":null,"abstract":"Abstract: This editorial provides a comprehensive framework and overview of potential uses and next steps in research on process data in computer-based assessments, expanding toward broad perspectives on the field and an exploration of future directions and emerging trends. After briefly reflecting on the evolution of process data use in research and assessment practice, we discuss three key challenges, namely (1) the theoretical grounding and validation of process data indicators, (2) assessment design for process data, and (3) ethical standards. By considering best practice approaches in all three areas and current discussions in the literature, we conclude that a focus is needed on the following three areas: (1) strong, holistic theoretical frameworks for validating process data, (2) reliable, standardized data collections, preferably with a top-down approach to developing test items and preregistered hypotheses, and (3) ethical norms for data collection, data use, and guidelines for responsible inference, including restraints in decisions based on process data.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45220889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing a New Tool for International Youth Programs
Pub Date: 2023-06-29 | DOI: 10.1027/1015-5759/a000770
C. Omoeva, Nina Menezes Cunha, P. Kyllonen, Sarah Gates, Andres Martinez, H. Burke
Abstract: We developed and evaluated the YouthPower Action Youth Soft Skills Assessment (YAYSSA), a self-report soft skills measure targeting 15- to 19-year-old youth in lower-resource environments. In Study 1, we identified 16 key constructs based on a review of those associated with positive youth outcomes in sexual and reproductive health, violence prevention, and workforce success. We adapted promising items measuring those constructs from existing and openly available tools, and we conducted cognitive interviews with 50 youth from six schools in Uganda to refine wording and response formats, leading to a first draft of the tool. In Study 2, we administered that tool to 1,098 youth in 59 schools in Uganda. Confirmatory factor analyses did not support the hypothesized 16-factor structure, but exploratory factor analyses suggested a four-factor solution (Positive self-concept, Higher-order thinking skills, Social and Communication skills, and Negative self-concept). In Study 3, a revised tool was administered to Ugandan youth (N = 1,010, 59 sites). After cognitive testing with 45 youth in Guatemala, the tool was administered to youth in Guatemala (N = 794, 59 sites) once, and then again 5 months later with a mixture of retested and new participants (N = 784, 67 sites). Factor-analytic results supported the four-factor structure with 48 retained items and indicated that the instrument was reliable in terms of internal consistency and test-retest correlations. The instrument correlated with demographic variables and outcomes in expected directions. We found evidence for measurement invariance across country, country and gender, country and socioeconomic status, and time. We discuss implications for scale validation and use in future research.
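An exploratory factor analysis step like the one described can be sketched with the third-party factor_analyzer package (an assumed tooling choice) on simulated item responses with a planted four-factor structure:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated item responses with a planted four-factor structure (12 items
# here, rather than the 48 retained YAYSSA items).
rng = np.random.default_rng(4)
loadings = np.zeros((12, 4))
for f in range(4):
    loadings[f * 3:(f + 1) * 3, f] = 0.7
factors = rng.normal(size=(500, 4))
items = factors @ loadings.T + rng.normal(0, 0.6, (500, 12))
df = pd.DataFrame(items, columns=[f"item{i + 1}" for i in range(12)])

# Exploratory factor analysis with an oblique rotation, as is typical
# when factors are expected to correlate.
fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2))
print("proportion of variance:", np.round(fa.get_factor_variance()[1], 2))
```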
{"title":"Developing a New Tool for International Youth Programs","authors":"C. Omoeva, Nina Menezes Cunha, P. Kyllonen, Sarah Gates, Andres Martinez, H. Burke","doi":"10.1027/1015-5759/a000770","DOIUrl":"https://doi.org/10.1027/1015-5759/a000770","url":null,"abstract":"Abstract: We developed and evaluated the YouthPower Action Youth Soft Skills Assessment (YAYSSA), a self-report soft skills measure. The YAYSSA targets 15- to 19-year-old youth in lower resource environments. In Study 1, we identified 16 key constructs based on a review of those associated with positive youth outcomes in sexual and reproductive health, violence prevention, and workforce success. We adapted promising items measuring those constructs from existing and openly available tools. We conducted cognitive interviews with 50 youth from six schools in Uganda, for wording and response formats, leading to a first draft tool. In Study 2 we administered that tool to N = 1,098 youth in 59 schools in Uganda. Confirmatory factor analyses did not support the hypothesized 16-factor structure, but exploratory factor analyses suggested a four-factor solution (Positive self-concept, Higher-order thinking skills, Social and Communication skills, and Negative self-concept). In Study 3, a revised tool was administered to Uganda youth ( N = 1,010, 59 sites). After cognitive testing with 45 youth in Guatemala, the tool was administered to youth ( N = 794; 59 sites) in Guatemala once, then 5 months later, with a mixture of retested and new participants ( N = 784; 67 sites). Factor analytic results supported the four-factor structure with 48 retained items and indicated that the instrument was reliable by internal consistency and test-retest correlations. The instrument correlated with demographic variables and outcomes in expected directions. We found evidence for measurement invariance across country, country and gender, country and socioeconomic status, and time. We discuss implications for scale validation and use in future research.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47522567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face, Construct and Criterion Validity, and Test-Retest Reliability, of the Adult Rejection Sensitivity Questionnaire
Pub Date: 2023-06-29 | DOI: 10.1027/1015-5759/a000782
Mandira Mishra, Mark S. Allen
Abstract: This research sought to test the face, construct, and criterion validity, and the test-retest reliability, of the Adult Rejection Sensitivity Questionnaire (ARSQ). In Study 1, participants (n = 45) completed the ARSQ and questions assessing scale item relevancy, clarity, difficulty, and sensitivity. In Study 2, participants (n = 513) completed the ARSQ and demographic questions. In Study 3, participants (n = 244) completed the ARSQ and returned 2 weeks later to complete the ARSQ and measures of depression, anxiety, and self-silencing behavior. Study 1 provided strong support for face validity, with all items deemed relevant, clear, easy to answer, and neither distressing nor judgmental. Study 2 provided adequate support for the factor structure of the ARSQ (single-factor and two-factor models) but suggested modifications that could improve scale validity. Study 3 provided further support for an adequate (but not good) factor structure and evidence for criterion validity, established through medium-to-large effect size correlations with depression, anxiety, and self-silencing behavior. However, 2-week scale stability appeared poor (r = .45) in a subsample of participants. Overall, the ARSQ showed sufficient validity to recommend its continued use, but we recommend further tests of scale reliability and potential modifications to increase construct validity.
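Test-retest reliability and internal consistency of the kind reported here reduce to a correlation of scores across sessions and Cronbach's alpha, respectively. A sketch on simulated data (only the r = .45 figure above comes from the paper; the numbers below do not):

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated total scores at two sessions two weeks apart (hypothetical
# data standing in for the real subsample).
rng = np.random.default_rng(5)
true_score = rng.normal(50, 10, 244)
time1 = true_score + rng.normal(0, 10, 244)
time2 = true_score + rng.normal(0, 10, 244)

# Test-retest reliability as the correlation of scores across sessions.
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_persons, n_items) response matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Internal consistency of simulated item-level responses.
items = true_score[:, None] / 10 + rng.normal(0, 1, (244, 9))
print(f"alpha = {cronbach_alpha(items):.2f}")
```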
{"title":"Face, Construct and Criterion Validity, and Test-Retest Reliability, of the Adult Rejection Sensitivity Questionnaire","authors":"Mandira Mishra, Mark S. Allen","doi":"10.1027/1015-5759/a000782","DOIUrl":"https://doi.org/10.1027/1015-5759/a000782","url":null,"abstract":"Abstract: This research sought to test the face, construct and criterion validity, and test-retest reliability of the Adult Rejection Sensitivity Questionnaire (ARSQ). In Study 1, participants ( n = 45) completed the ARSQ and questions assessing scale item relevancy, clarity, difficulty, and sensitivity. In Study 2, participants ( n = 513) completed the ARSQ and demographic questions. In Study 3, participants ( n = 244) completed the ARSQ and returned 2 weeks later to complete the ARSQ and measures of depression, anxiety, and self-silencing behavior. Study 1 provided strong support for face validity with all items deemed relevant, clear, easy to answer, and neither distressing nor judgmental. Study 2 provided adequate support for the factor structure of the ARSQ (single-factor model and two-factor model) but suggested modifications could be made to improve scale validity. Study 3 provided further support for an adequate (but not good) factor structure, and evidence for criterion validity established through medium-large effect size correlations with depression, anxiety, and self-silencing behavior. However, the 2-week scale stability appeared poor ( r = .45) in a subsample of participants. Overall, the ARSQ showed sufficient validity to recommend its continued use, but we recommend further tests of scale reliability and potential modifications to increase construct validity.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49228571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}