Pub Date: 2024-11-20 | DOI: 10.1080/00223891.2024.2430321
Joost Hutsebaut, Carla Sharp
Title: "Opportunities for the AMPD: Commentary on Hopwood, 2024." Journal of Personality Assessment, pp. 1-5.
Pub Date: 2024-11-20 | DOI: 10.1080/00223891.2024.2431126
Mariagrazia Di Giuseppe, Annalisa Tanzilli
Richardson, Beath, and Boag (this issue) developed a questionnaire designed to measure attachment-related defense mechanisms, with considerable promise for research, practice, and training. Their robust design and the sophisticated psychometric techniques used to generate and validate the measure are notable. The goal of this commentary is to situate the measure within contemporary research on defenses, to draw a distinction between defenses linked specifically to attachment and defenses more generally, and to stimulate a constructive dialogue with the Defense Mechanisms Rating Scales (DMRS; Di Giuseppe & Perry, 2021; Perry, 1990, 2014), a model and set of measures that has dominated defense mechanism research for half a century.
Title: "Defenses and Attachment in Clinical Practice: What Came First?" Journal of Personality Assessment, pp. 1-2.
Pub Date: 2024-11-20 | DOI: 10.1080/00223891.2024.2430318
Jie Gong, Dong-Li Bei, Dai-Li Pi, Jie Luo
The Triarchic Model of Grit Scale (TMGS) is a novel measure designed to evaluate general grit levels, encompassing perseverance of effort, consistency of interests, and adaptability to situations, within a collectivist culture. The present study examined the factor structure, measurement invariance, empirical validity, and incremental validity of the TMGS in a sample of Chinese adolescents (N = 997, 43.4% male, Mage = 16.64, SDage = 1.05). The results revealed that the original three-factor model of the TMGS exhibited the best fit to the data and supported partial scalar invariance across gender. Additionally, the internal consistency values of the TMGS scores ranged from marginal to acceptable, and the stability coefficients across time were acceptable. Moreover, the TMGS scores showed satisfactory criterion-related validity, correlating with scores on external criterion variables (e.g., Grit-S, self-control, and Big Five personality). Finally, the TMGS scores demonstrated incremental validity over conscientiousness in predicting academic burnout. Overall, although further studies are needed, our findings suggest that the TMGS has acceptable psychometric properties within a collectivist culture and may serve as a promising tool for assessing grit in Chinese adolescents.
Title: "Further Validation of the Triarchic Model of Grit Scale (TMGS) in Chinese Adolescents." Journal of Personality Assessment, pp. 1-9.
Pub Date: 2024-11-19 | DOI: 10.1080/00223891.2024.2425660
Jenelle M Slavin-Mulford, Elyse M Vincent, Savanna G Coleman, Havilah P Ravula, Jeremy J Coleman, Melanie M Wilcox, Michelle B Stein
The Thematic Apperception Test (TAT) is the second most commonly used performance-based task. However, traditional TAT administration is time-consuming and raises accessibility issues. Research exploring administration modifications has found that, within a lab setting, having participants type their own narratives leads to richer responses than having them narrate the stories aloud to an examiner. The current study extends prior research by investigating the impact of card presentation (hard copy versus computer screen) and setting (in the lab versus online) on narrative quality. A four-card TAT protocol was administered to 134 college students in three separate conditions: in the lab with hard copies of the cards, in the lab with images on a computer, and online, in which participants could take the TAT wherever they wished. In all conditions, participants typed their narratives. The narratives were scored using the Social Cognition and Object Relations Scale-Global Rating Method (SCORS-G). MANOVA procedures showed that SCORS-G ratings were not affected by card presentation or setting. These results add to prior work suggesting that the TAT can be administered online without a diminution in the quality of SCORS-G ratings, at least with some populations.
Title: "Moving Toward an Online Thematic Apperception Test (TAT): The Impact of Administration Modifications on Narrative Length and Story Richness." Journal of Personality Assessment, pp. 1-10.
Pub Date: 2024-11-15 | DOI: 10.1080/00223891.2024.2425663
Majse Lind, Henry R Cowan, Jonathan M Adler, Dan P McAdams
In narrative identity research, variables are typically captured through detailed content-coding of personal narratives. Yet alternative methods have been suggested, notably self-report scales, because they capture a participant's own interpretation of their personal narratives and because they are efficient to administer as a supplement to more labor-intensive coding methods. This study developed and validated the Narrative Identity Self-Evaluation (NISE) questionnaire. In Study 1, the questionnaire was developed through exploratory factor analysis (n = 425) and its criterion validity examined. In Study 2, the NISE factor structure and criterion relationships were confirmed (n = 304). In Study 3 (based on the same sample as Study 1), content-coding of 11 narrative identity characteristics in open-ended personal story accounts was conducted, and NISE scores were compared to corresponding content-coded variables. The 20-item NISE has three factors replicating common dimensions in narrative identity (autobiographical reasoning, desire for structure, positive motivational/affective themes) and a novel fourth factor capturing disturbances of narrative identity. The NISE correlated in theoretically coherent ways with content-coded narrative identity variables, self-report traits, and measures relevant for narrative identity, self-concept, well-being, and psychopathology. We discuss the scale's advantages in complementing content-coding of narrative accounts to assess variation in narrative identity within both clinical and non-clinical populations.
Title: "Development and Validation of the Narrative Identity Self-Evaluation Scale (NISE)." Journal of Personality Assessment, pp. 1-14.
Pub Date: 2024-11-14 | DOI: 10.1080/00223891.2024.2416416
Faming Wang, Ronnel B King
Given the critical role of socio-emotional skills in students' academic success, psychological well-being, and other critical life outcomes, the Organization for Economic Cooperation and Development (OECD) developed the Survey on Social and Emotional Skills (SSES) to measure these skills among school-age students. However, the broad conceptual scope of socio-emotional skills necessitated a large number of items (120) in the original SSES, which poses challenges regarding survey administration and participant fatigue. To address these issues, this study aimed to develop a short form of the SSES (the SSES-SF). The sample included 29,798 15-year-old students across 10 regions. We developed a 45-item SSES-SF using a genetic algorithm, a machine-learning search method; the short form is 62.5% shorter than the original 120-item SSES. The reliability, construct validity, reproduced information, concurrent validity, and measurement invariance of the SSES-SF were investigated. We found that the SSES-SF demonstrated satisfactory reliability, construct validity, and concurrent validity. Furthermore, the SSES-SF was able to reproduce a substantial amount of information from the original full-form SSES and exhibited measurement invariance across genders, regions, and language groups. Theoretical and practical implications of the findings are discussed.
Title: "Developing the Short Form of the Survey on Social and Emotional Skills (SSES-SF)." Journal of Personality Assessment, pp. 1-16.
Pub Date: 2024-11-11 | DOI: 10.1080/00223891.2024.2420869
Diego F Graña, Rodrigo S Kreitchmann, Francisco J Abad, Miguel A Sorrel
Forced-choice (FC) questionnaires have gained scientific interest over the last decades. However, the inclusion of unequally keyed item pairs in FC questionnaires remains a subject of debate, as there is evidence supporting both their usage and their avoidance. Designing unequally keyed pairs may be more difficult when considering social desirability, as they might allow the identification of ideal responses. Nevertheless, they may enhance the reliability and the potential for normative interpretation of scores. To investigate this topic empirically, data were collected from 1,125 undergraduate psychology students who completed a personality item pool measuring the Big Five personality traits in Likert-type format and two FC questionnaires (with and without unequally keyed pairs). These questionnaires were compared in terms of reliability, convergent and criterion validity, and ipsativity of the scores, along with insights on the construction process. While constructing questionnaires with unequally keyed blocks presented challenges in matching items on their social desirability, the differences observed in reliability, validity, or ipsativity were sporadic and lacked systematic patterns. This suggests that neither questionnaire format exhibited a clear superiority. Given these results, we recommend using only equally keyed blocks to minimize potential validity issues associated with response biases.
Title: "Equally vs. unequally keyed blocks in forced-choice questionnaires: Implications on validity and reliability." Journal of Personality Assessment, pp. 1-14.
Pub Date: 2024-11-11 | DOI: 10.1080/00223891.2024.2422432
Qi-Wu Sun, Zhi-Huan Wang, Ming Wang, Stephen E Finn
Therapeutic Assessment (TA) is a relatively new, short-term intervention that uses psychological tests to address clients' persistent problems in living. The core feature of TA is that assessors and clients work collaboratively in all phases of the process, and psychological tests are used as "empathy magnifiers" to help assessors understand clients' "dilemmas of change" and promote positive change. An "ultra-brief" TA protocol involving an Initial Session, Test Administration and Extended Inquiry, and a Summary/Discussion Session was undertaken with three adult clients in China. A case-based time-series design with daily measures was used to assess the outcome of TA. Recruited in a natural setting, all three clients benefited from participation in the TA. These results suggest that ultra-brief TA may be a promising treatment for Chinese adult clients with a variety of psychological concerns.
Title: "Ultra-Brief Therapeutic Assessment with Three Chinese Adult Clients: A Case-Based Time-Series Pilot Study." Journal of Personality Assessment, pp. 1-12.
Pub Date: 2024-11-08 | DOI: 10.1080/00223891.2024.2416412
Morgan Gianola, Maria M Llabre, Elizabeth A Reynolds Losin
Language is a fundamental aspect of human culture that influences cognitive and perceptual processes. Prior evidence demonstrates that personality self-reports can vary across multilingual persons' language contexts. We assessed how cultural identification, language dominance, or both dynamically influence bilingual respondents' self-conception, via self-reported personality, across English and Spanish contexts. In separate English and Spanish conditions, 133 Hispanic/Latino bilingual participants (70 female) completed the Big Five Inventory of personality. We used language use and acculturation surveys completed in both languages to calculate participants' relative language dominance and identification with U.S.-American and Hispanic culture. Participants reported higher levels of agreeableness, conscientiousness, and neuroticism in English relative to Spanish. Language dominance predicted cross-language differences in personality report, with higher extraversion reported in participants' dominant language. Within each language, greater endorsement of U.S.-American identity was associated with higher extraversion and conscientiousness and lower reported neuroticism. Reported agreeableness in both languages was positively predicted by Hispanic identification. Our results clarify existing literature on language and cultural effects on personality report among U.S. Hispanics/Latinos. These findings could inform assessments of self-relevant cognitions across languages among bilingual populations and hold relevance for health outcomes affected by cultural processes.
Title: "Language Dominance and Cultural Identity Predict Variation in Self-Reported Personality in English and Spanish Among Hispanic/Latino Bilingual Adults." Journal of Personality Assessment, pp. 1-13.
Pub Date: 2024-11-07 | DOI: 10.1080/00223891.2024.2420175
Steven P Reise, Mark G Haviland
Coefficient alpha estimates the degree to which scale scores reflect systematic variation due to one or more common dimensions. Coefficient beta, by contrast, estimates the degree to which scale scores reflect a single dimension common to all the items; that is, the target construct a scale attempts to measure. As such, the magnitude of beta relative to alpha indicates how meaningfully derived scale scores can be interpreted as reflecting a single construct. Despite its clear interpretive usefulness, coefficient beta is rarely reported and, perhaps, not well understood. We therefore first describe how coefficients alpha and beta are analogues of the model-based reliability coefficients omega total and omega hierarchical. We then demonstrate with simulated data how these indices behave under a variety of data structures. Finally, we perform a hierarchical cluster analysis of the Multidimensional Personality Questionnaire's Stress Reaction Scale, estimating alpha and beta as clusters form. This demonstrates a chief advantage of alpha and beta: they do not require a formal structural model.
Moreover, we illustrate how scales that primarily are based on sets of homogeneous item clusters can "ramp up" to yield reliable scores with conceptual breadth and predominantly reflect the intended target construct.
Title: "Understanding Alpha and Beta and Sources of Common Variance: Theoretical Underpinnings and a Practical Example." Journal of Personality Assessment, pp. 1-16.