
Latest articles in Research Methods in Applied Linguistics

Evaluating and enhancing the accuracy of automated fluency annotation tools in L2 research
Pub Date : 2026-01-30 DOI: 10.1016/j.rmal.2026.100302
Jueyu Lu, John Rogers
Fluency is a central dimension of L2 oral proficiency, and fluency assessment matters in many applied contexts, including pedagogy and testing. Yet measuring fluency through manual annotation is labor-intensive, which limits its broad application and scalability. We evaluate two automated tools, an acoustic-based tool (de Jong et al., 2021) and a machine-learning tool (Matsuura et al., 2025), using data from L1-Chinese learners of English. Accuracy for three metrics, articulation rate (AR), pause ratio (PR), and mean pause duration (MPD), was assessed via Pearson correlations with manual annotation. We compared the two tools and, using Steiger's test, examined whether targeted manual post-processing (TextGrid checks and transcript adjustments) improves metric extraction. In our sample, the de Jong et al. (2021) tool yielded higher accuracy for silence-based metrics (PR, MPD), whereas text-dependent metrics (the syllable count after removing disfluency words for AR) benefited from corrected TextGrids (for the acoustic tool) or corrected transcripts (for the machine-learning tool). These findings suggest a scalable division of labor: use an acoustic-based tool for silence-driven metrics, and apply corrected transcripts with a machine-learning tool when extracting text-sensitive metrics.
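Comparing two automated tools against the same manual annotations, as this abstract describes, amounts to comparing two dependent, overlapping correlations. A minimal sketch of such a comparison using the Meng, Rosenthal and Rubin (1992) z-test, a commonly used stand-in for Steiger's test; the correlation values and sample size are invented for illustration, not the study's results:

```python
import math

def compare_dependent_correlations(r1m, r2m, r12, n):
    """Compare corr(tool1, manual) vs. corr(tool2, manual) when both
    correlations share the manual scores (Meng et al., 1992 z-test)."""
    z1, z2 = math.atanh(r1m), math.atanh(r2m)   # Fisher z transforms
    rbar2 = (r1m ** 2 + r2m ** 2) / 2           # mean squared correlation
    f = min((1 - r12) / (2 * (1 - rbar2)), 1.0)  # f is capped at 1
    h = (1 - f * rbar2) / (1 - rbar2)
    z = (z1 - z2) * math.sqrt((n - 3) / (2 * (1 - r12) * h))
    p = math.erfc(abs(z) / math.sqrt(2))         # two-tailed p-value
    return z, p

# hypothetical values: tool 1 tracks the manual annotations more closely
z, p = compare_dependent_correlations(r1m=0.90, r2m=0.80, r12=0.85, n=50)
```

A significant z indicates that one tool's agreement with manual annotation reliably exceeds the other's, rather than the gap being sampling noise.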
Citations: 0
Mapping language economy strategies in Spanish political discourse on X
Pub Date : 2026-01-23 DOI: 10.1016/j.rmal.2026.100301
Sergei Sikorskii, María Luisa Carrió-Pastor
This study investigates language economy strategies in Spanish political discourse on X, analysing how users optimize communication. Through analysis of posts, we identify patterns of linguistic adaptation across syntactic and morphological dimensions. The research combines computational linguistics with traditional discourse analysis to examine strategy distribution, comprehensibility, and effectiveness. Results reveal a preference for syntactic strategies over morphological modifications. Message comprehensibility remains high despite substantial compression, challenging assumptions about the economy-clarity trade-off. Thread depth analysis shows peak strategy diversity at moderate depths, suggesting an optimal complexity point in digital political discourse. The study extends platform vernacular theory by demonstrating how political actors adapt linguistic strategies while maintaining effectiveness. These findings contribute to understanding how languages adapt to digital environments and have implications for political communication strategies, platform design, and digital literacy education.
Citations: 0
Artificial intelligence-based automatic evaluation of human translation and interpreting: A systematic review of assessment and validation practices
Pub Date : 2026-01-23 DOI: 10.1016/j.rmal.2026.100300
Chao Han
Human-generated translation and interpreting (T&I) are routinely evaluated in domains such as language education and professional certification. While artificial intelligence (AI) is increasingly used for automatic assessment, little research has examined its application to human T&I. Drawing on a rigorous database search and screening process, this systematic review attempts to close this gap. Based on a curated corpus of 69 studies, we identify important trends in assessment design, model architecture, and validation practice. The data analysis shows a marked increase in research since 2020, with a dominant focus on English-Chinese T&I, primarily within educational contexts. Most studies employed feature-based machine learning models or repurposed machine translation metrics for scoring, while only a minority explored end-to-end large language models. Benchmark construction was found to be inconsistently reported, with many studies omitting key information about rater qualification, training, reliability, and scoring criteria. Validation practices primarily relied on correlations with human benchmark scores, with limited evidence of convergent validity or cross-condition generalizability. Notably, post-hoc explainability, a crucial step for ensuring transparency in opaque AI systems, was rarely implemented. Overall, this review highlights both progress and persistent challenges in AI-based T&I assessment. While AI holds promise for enhancing assessment efficiency and scalability, methodological limitations and transparency gaps currently constrain its responsible use. We recommend improved reporting standards, multi-pronged validation strategies, development of large annotated benchmark datasets, and greater attention to model interpretability and explainability. These steps are essential for building robust, trustworthy AI systems for automatic T&I assessment.
Citations: 0
A tutorial on unsupervised Gaussian mixture model for performance clustering in second language research
Pub Date : 2026-01-23 DOI: 10.1016/j.rmal.2026.100296
Huiying Cai , Yan Tang , Xun Yan
This tutorial introduces the application of unsupervised Gaussian Mixture Model (GMM) clustering to identify second language (L2) performance profiles. GMM employs a probabilistic clustering technique that accommodates overlapping profile membership and provides a flexible method for analyzing the high-dimensional performance data commonly encountered in L2 research. Using L2 writing assessment data from a local English placement test, we present a step-by-step analytical pipeline covering data preparation, dimensionality reduction, model selection, visualization, and interpretation. This approach is adaptable to other performance modalities (e.g., speaking) and can be enriched with additional performance features to support a more comprehensive understanding of L2 performance and underlying language ability.
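The pipeline this tutorial describes (data preparation, dimensionality reduction, BIC-based model selection, interpretation of soft memberships) can be sketched with scikit-learn; the learner-by-feature matrix below is synthetic stand-in data, not the placement-test scores used in the tutorial:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-in for a learner-by-feature score matrix (two latent profiles)
X = np.vstack([rng.normal(0.0, 1.0, (60, 6)),
               rng.normal(3.0, 1.0, (40, 6))])

X_std = StandardScaler().fit_transform(X)          # data preparation
X_red = PCA(n_components=2).fit_transform(X_std)   # dimensionality reduction

# model selection: choose the number of profiles by BIC (lower is better)
fits = {k: GaussianMixture(n_components=k, random_state=0).fit(X_red)
        for k in range(1, 6)}
best_k = min(fits, key=lambda k: fits[k].bic(X_red))

# soft (overlapping) profile membership: one probability per profile per learner
probs = fits[best_k].predict_proba(X_red)
```

`predict_proba` returns each learner's probability of belonging to every profile, which is what makes GMM membership "soft" in contrast to hard k-means assignments.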
Citations: 0
Overcoming and reinforcing linguicism? Language, power and critical reflexivity in a large multilingual research team
Pub Date : 2026-01-21 DOI: 10.1016/j.rmal.2026.100298
Shinya Uekusa , Sally Carlton , Sylvia Nissen , Fernanda Fernandez Zimmermann , Jay Marlowe , Fareeha Ali , Wondyrad A. Asres , Ginj Chang , Rami Elsayed , Jia Geng , D.H.P.S. Gunasekara , Jean Hur , Rika Maeno , Minh Tran , Wahida Zahedi , Stephen May , Tyron Love
This paper explores the tensions between challenging and unintentionally reinforcing linguicism in a large-scale research project on multilingual crisis communication during the COVID-19 pandemic in Aotearoa New Zealand. Our study involved a multilingual and multicultural research team conducting interviews in 14 different languages, with a methodological commitment to linguistic justice and inclusive research. Using collective self-reflection, we critically examined how our positionalities, language practices and research design, though intended to be counterhegemonic, sometimes reproduced dominant language ideologies. In this paper, we explore three key tensions: 1) the paradoxical privilege and power of bi-/multilingual researchers; 2) the internalisation of linguicism among participants; and 3) the challenges of translating emotional and cultural nuances. These findings reveal the complexity and paradoxes inherent in inclusive multilingual research, demonstrating how even well-intentioned practices can reproduce symbolic violence and linguicism. We argue for deeper reflexivity, methodological humility, and structurally transformative approaches that centre epistemic justice and critically challenge the institutional and ideological roots of linguicism. This paper contributes to critical language studies, disaster research and decolonising methodologies, providing both theoretical insights and practical guidance for researchers working with linguistic minorities.
Citations: 0
Language teacher beliefs and teacher education programs: A 25-year methodological synthesis (2000-2024)
Pub Date : 2026-01-16 DOI: 10.1016/j.rmal.2026.100299
Farahnaz Faez , Michael Karas , Ata Ghaderi
Language teacher beliefs are one of the main strands of teacher education research, and numerous studies explore how teacher education programs affect the development of such beliefs through the enacted program. There is a paucity of research, however, on the methodological design of these studies and on what characterizes them. The aim of this research synthesis was to review and map the methodological arrangements of these studies and to illustrate how they implement their intended programs. A comprehensive search was conducted in three databases (Web of Science, Scopus, and Google Scholar) using keywords related to language teacher beliefs and cognitions. A total of 104 studies were identified and coded across 10 categories, including (a) methodology, (b) theoretical framework, (c) data collection instruments, (d) number of participants, and (e) participants' career stage. The results indicate an overall lack of clarity in the ontological framing of the studies, which is especially pronounced given the prevalence of qualitative designs in the corpus. Further, there is often a misalignment between a study's ontological paradigm and its methodological choices. The findings call for greater ontological transparency, closer alignment between the theoretical framework and the methodological blueprint of research studies, and a broader, more versatile toolkit for identifying, examining, and transforming language teacher beliefs. The synthesis provides recommendations for advancing research on teacher beliefs through the methodological apparatus in this strand.
Citations: 0
Intersubjectivity as the distinguishing feature or common ground: A contrastive study between human-written abstracts and LLM-generated abstracts
Pub Date : 2026-01-13 DOI: 10.1016/j.rmal.2026.100297
Miaoru Lin, Dingjia Liu
The vast knowledge and efficiency of Large Language Models make it essential to demystify the differences between language written by humans and language generated by LLMs, particularly in their interactional capability. As a pivotal genre in academic discourse, the abstract serves both informative and communicative functions. This study explores the intersubjective discourse markers manifested in human-written and LLM-generated abstracts, as well as their similarities and differences across disciplines. The results show that human writers employ a wider range of stance and engagement markers to facilitate intersubjective positioning, and human-written abstracts exhibit a more sophisticated linguistic realization of intersubjectivity through lexical resources, patterned phrases, and syntactic structures. A correspondence analysis reveals that human writers emphasize disciplinary distinctions, whereas LLMs adopt a convergent approach to writer-reader interaction across disciplines. These findings underscore human writers' superiority in navigating complex writer-reader interaction in abstract writing: although LLMs show some potential for emulating intersubjective communication, their interactional capability falls short of that of human writers. The findings carry significant implications for deepening our understanding of the nature of LLMs and for LLM-assisted EAP research and teaching across disciplines.
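Stance and engagement markers of the kind examined in this study are often counted with lexicon lookups. A toy sketch follows; the tiny marker inventories are illustrative placeholders (full analyses use established inventories such as Hyland's lists):

```python
import re

# illustrative marker inventories; real analyses use much larger lists
HEDGES = {"may", "might", "perhaps", "possibly", "suggest", "suggests"}
BOOSTERS = {"clearly", "certainly", "demonstrate", "demonstrates", "show", "shows"}
READER_REFERENCE = {"we", "our", "us", "you", "your"}

def marker_counts(text):
    """Count intersubjective markers per category in one abstract."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return {
        "hedges": sum(t in HEDGES for t in tokens),
        "boosters": sum(t in BOOSTERS for t in tokens),
        "reader_reference": sum(t in READER_REFERENCE for t in tokens),
    }

counts = marker_counts("Our results suggest that readers may clearly benefit.")
```

Per-category counts from human-written and LLM-generated abstracts can then be normalized by text length and cross-tabulated by discipline, the kind of table a correspondence analysis takes as input.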
Citations: 0
Research methods and generative artificial intelligence in applied linguistics
Pub Date : 2026-01-09 DOI: 10.1016/j.rmal.2026.100295
Benjamin Luke Moorhouse , Sal Consoli , Samantha M. Curle
Citations: 0
How do language models handle emotional content in video game localization? A computational linguistics approach
Pub Date : 2026-01-03 DOI: 10.1016/j.rmal.2025.100294
Xiaojing Zhao, Emmanuele Chersoni, Chu-Ren Huang, Han Xu
This study employs emotion analysis, a natural language processing technique, to examine how language models handle emotional content compared to human translators in video game localization. The analysis is based on a corpus consisting of Chinese subtitles from Black Myth: Wukong, their official English translations, and translations generated by a language model. The findings reveal that, despite similarities between humans and the language model in their translation of emotions, differences exist. Human translators often neutralize emotions through context-dependent strategies, such as omission, addition, and substitution, to address cultural sensitivities and enhance player engagement. In contrast, the language model relies on direct translation to preserve diverse emotions, including negative ones. Such an approach may risk misalignment with the preferences of target audiences due to limited adaptation of tone and cultural nuances. In addition, occasional mistranslation and hallucination were also found. This study highlights the promise of integrating language models into localization workflows and demonstrates the potential of emotion analysis for assessing translation accuracy.
Citations: 0
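The corpus comparison described in the abstract above can be illustrated with a minimal lexicon-based emotion scorer. The abstract does not specify the study's actual NLP pipeline, so everything here — the tiny lexicon, the function names, and the "neutralization" measure — is an illustrative assumption; real work would use a validated resource such as the NRC Emotion Lexicon and a proper tokenizer.

```python
# Minimal sketch: count emotion-category hits per subtitle line so that a
# source text, its official translation, and a model-generated translation
# can be compared on emotional content.
from collections import Counter

# Tiny illustrative lexicon (hypothetical; not the study's actual resource).
EMOTION_LEXICON = {
    "fear": {"afraid", "terror", "dread"},
    "anger": {"furious", "rage", "wrath"},
    "joy": {"delight", "glad", "triumphant"},
}

def emotion_profile(text: str) -> Counter:
    """Return counts of emotion categories whose lexicon words appear in text."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    profile = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        profile[emotion] += sum(1 for t in tokens if t in words)
    return profile

def neutralization(source: Counter, target: Counter) -> int:
    """Emotion hits present in the source but missing from the translation,
    a rough proxy for the 'neutralizing' strategies the study reports."""
    return sum((source - target).values())
```

Comparing `neutralization(...)` for the official translation against the model-generated one would then show which version preserves more of the source's (including negative) emotional content, mirroring the contrast the abstract draws.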
Adapting the L2 engagement scale for young language learners: Methodological considerations for age-appropriateness
Pub Date : 2025-12-22 DOI: 10.1016/j.rmal.2025.100293
Yohei Nakanishi, Osamu Takeuchi
This study aimed to adapt an L2 engagement scale—originally developed by Teravainen-Goff (2023) for secondary school students in the United Kingdom—for young language learners (YLLs) in the Japanese EFL context, with particular attention to age-appropriateness throughout the questionnaire adaptation process. The present study implemented a rigorous six-step process to adapt the scale for YLLs and assessed its validity and reliability. Three hundred ninety-nine elementary school students in a Japanese EFL context completed the adapted L2 engagement scale. The exploratory factor analysis identified four key factors of L2 engagement, including “perceived quality of engagement with peers,” “perceived quality of engagement with teachers,” “intensity of effort in learning,” and “perceived quality of engagement with teaching content.” The validity and reliability of the adapted L2 engagement scale were further confirmed through confirmatory factor analysis. This study provides a detailed account of the questionnaire adaptation process to ensure methodological rigor and transparency. Our findings not only contribute to a better understanding of YLLs’ engagement in EFL classrooms but also establish methodologically sound questionnaire-adaptation procedures for under-researched populations in the field of applied linguistics.
Citations: 0
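The reliability assessment mentioned in the abstract above can be sketched with one standard internal-consistency statistic, Cronbach's alpha. The abstract does not state which reliability index the authors computed, so this function and its toy data are illustrative only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scale scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    where k is the number of items. Values near 1 indicate high internal
    consistency; sample variances use ddof=1.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield alpha = 1, while unrelated or opposing items drive it toward (or below) zero, which is why the statistic is a common first check on a newly adapted questionnaire subscale.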
Journal: Research Methods in Applied Linguistics