
Assessing Writing: Latest Publications

Exploring the use of model texts as a feedback instrument in expository writing: EFL learners’ noticing, incorporations, and text quality
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-09-20 DOI: 10.1016/j.asw.2024.100890
Long Quoc Nguyen , Bao Trang Thi Nguyen , Hoang Yen Phuong
Model texts as a feedback instrument (MTFI) have proven effective in enhancing L2 writing, yet research in this domain has mainly focused on narrative compositions over a three-stage task: i) composing, ii) comparing, and iii) rewriting. The impact of MTFI on learners’ noticing, incorporations, and text quality in expository writing, especially in the Vietnamese context, remains underexplored. To address these gaps, this study investigated the effect of MTFI on 68 Vietnamese EFL undergraduates’ expository writing following a process-product approach. The participants were divided into a control group (CG, N = 33) and an experimental group (EG, N = 35). Both groups attended stages one and three, but only the EG compared their initial writing with a model text in stage two. The results, derived from learners’ note-taking sheets, written paragraphs, and semi-structured interviews, revealed that despite the two groups’ comparability in stage one, the EG demonstrated significantly better text quality than the CG in stage three, particularly in content, lexis, and organization. Furthermore, while the EG largely encountered lexical issues at the outset, they primarily concentrated on content-related and organizational features in the subsequent stages. Based on the findings, recommendations for future research and implications for pedagogy are discussed.
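The stage-three group comparison reported above boils down to a two-sample test on text-quality scores. A minimal sketch with Welch's t statistic on made-up data (the group sizes, scores, and rubric scale here are illustrative, not the study's data):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical stage-three text-quality scores for the two groups
eg = [72, 75, 70, 78, 74, 76, 71, 73]   # experimental group (model-text feedback)
cg = [65, 68, 63, 70, 66, 64, 67, 69]   # control group
t, df = welch_t(eg, cg)
```

A positive t favors the experimental group; the fractional degrees of freedom are the usual Welch-Satterthwaite approximation.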
Assessing Writing, Volume 62, Article 100890.
Citations: 0
Effects of writing feedback literacies on feedback engagement and writing performance: A cross-linguistic perspective
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-09-05 DOI: 10.1016/j.asw.2024.100889
Qi Lu , Xinhua Zhu , Siyu Zhu , Yuan Yao

While the educational field has made progress in comprehending student feedback literacy, its impact on feedback engagement and student writing performance remains insufficiently explored. Furthermore, the cross-linguistic perspective has not yet been introduced to the literature on student feedback literacy, even though this approach has seen increased utilization in both L1 and L2 learning research. The current study examined the relationship between L1 and L2 writing feedback literacies and how they may contribute to L2 feedback engagement and L2 writing performance. Data were collected from 231 sophomore English majors at a Chinese university. Structural equation modeling showed that students’ L1 writing feedback literacy had a positive effect on their L2 writing feedback literacy. Further, L1 writing feedback literacy exerted an indirect effect on L2 writing performance via L2 writing feedback literacy and L2 feedback engagement. These findings underscore the pivotal role of L1 writing feedback literacy in L2 development and provide empirical evidence elucidating the close relationship between student feedback literacy and feedback engagement. The study concludes with pedagogical suggestions based on the observed outcomes.

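The indirect-effect claim (L1 literacy feeding L2 literacy, which in turn feeds performance) follows the product-of-coefficients logic. A deliberately simplified sketch using single-predictor OLS slopes on synthetic standardized scores (a real SEM estimates all paths simultaneously with latent variables; every number below is invented):

```python
import statistics

def slope(x, y):
    """OLS slope of y on x (single predictor, no intercept reported)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Hypothetical standardized scores: L1 feedback literacy (x),
# L2 feedback literacy (m, the mediator), L2 writing performance (y)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
m = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8]   # m tracks x
y = [1.1, 2.3, 3.0, 4.0, 5.2, 6.1]   # y tracks m

a = slope(x, m)          # path a: L1 literacy -> L2 literacy
b = slope(m, y)          # path b: L2 literacy -> performance
indirect = a * b         # product-of-coefficients estimate of the indirect effect
```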
Assessing Writing, Volume 62, Article 100889.
Citations: 0
The impact of task duration on the scoring of independent writing responses of adult L2-English writers
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-10-30 DOI: 10.1016/j.asw.2024.100895
Ben Naismith , Yigal Attali , Geoffrey T. LaFlair
In writing assessment, there is inherently a tension between authenticity and practicality: tasks with longer durations may more closely reflect real-life writing processes but are less feasible to administer and score. What is more, given total testing time, there is necessarily a trade-off between task duration and number of tasks. Traditionally, high-stakes assessments have managed this trade-off by administering one or two writing tasks each test, allowing 20–40 minutes per task. However, research on second language (L2) English writing has not found longer task durations to significantly improve score validity or reliability. Importantly, very few studies have compared much shorter durations for writing tasks to more traditional allotments. To explore this issue, we asked adult L2-English test takers to respond to two writing prompts with either 5-minute or 20-minute time limits. Responses were then evaluated by expert human raters and an automated writing evaluation tool. Regardless of scoring method, short duration scores evidenced equally high test-retest reliability and criterion validity as long duration scores. As expected, longer task duration yielded higher scores, but regardless of duration, test takers demonstrated the entire spectrum of writing proficiency. Implications for writing assessment are discussed in relation to scoring practices and task design.
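Test-retest reliability of the kind reported above is commonly summarized as a Pearson correlation between two administrations. A small self-contained sketch (the score lists are fabricated for illustration):

```python
import math
import statistics

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical scores from the same test takers on two short-duration prompts
first  = [55, 62, 70, 48, 81, 66, 59, 74]
second = [57, 60, 73, 50, 79, 68, 61, 72]
r = pearson_r(first, second)
```

A high r means the two administrations rank the test takers similarly, which is the property being compared across the 5-minute and 20-minute conditions.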
Assessing Writing, Volume 62, Article 100895.
Citations: 0
Understanding the SSARC model of task sequencing: Assessing L2 writing development
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-09-23 DOI: 10.1016/j.asw.2024.100893
Mahmoud Abdi Tabari , Yizhou Wang , Michol Miller
This study aimed to explore the impact of task sequencing on the development of second language (L2) writing and investigate how L2 learners performed on three decision-making writing tasks completed in different orders over nine weeks. A total of 120 advanced-high EFL students were randomly assigned to one of three groups, each given a different task sequence: 1) a simple-medium-complex (SMC) sequence, 2) a complex-medium-simple (CMS) sequence, or 3) a random sequence (RDM). Essays were analyzed using measures of syntactic complexity, accuracy, lexical complexity, and fluency (CALF). Results showed that the CALF of L2 writing demonstrated longitudinal development over time in all three task sequencing groups. CALF development was not immediately apparent in the first six weeks, with most measures displaying a significant increase by the end of the ninth week. Furthermore, different task sequences resulted in varying patterns and magnitudes of CALF growth, but no specific sequence was found to be superior overall.
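CALF indices like those above are computed from surface features of learner texts. A toy sketch of two of them, using naive sentence and word splitting (real studies use parsers and T-unit segmentation; the sample sentence is invented):

```python
import re

def calf_sketch(text):
    """Crude proxies for two CALF dimensions: fluency (total word count)
    and syntactic complexity (mean sentence length in words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "fluency_words": len(words),
        "mean_sentence_length": len(words) / len(sentences),
    }

sample = "The committee approved the plan. It funds two new writing tasks next term."
indices = calf_sketch(sample)
```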
Assessing Writing, Volume 62, Article 100893.
Citations: 0
A structural equation investigation of linguistic features as indices of writing quality in assessed secondary-level EMI learners’ scientific reports
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-10-30 DOI: 10.1016/j.asw.2024.100897
Jack Pun , Wangyin Kenneth Li
While inquiry into the relationship between linguistic features and L2 writing quality has been a long-standing line of research, little scholarly attention has been drawn to the predictive value of linguistic features in assessing the writing quality of English-medium scientific report writing. This study adds to the existing literature by examining the relation of lexical and syntactic complexity to writing quality, based on 106 scientific reports composed by Hong Kong Chinese learners of English in EMI secondary schools. Natural language processing tools were employed to extract computational indices of linguistic complexity features, followed by the use of a structural equation modeling (SEM) approach to investigate their predictive power. The validity of the anticipated construct was confirmed based upon several goodness-of-fit criteria. The SEM analysis indicated that writing quality was predicted by lexical sophistication (i.e., text-based complexity: word range and academic words; psycholinguistic complexity: word familiarity and age-of-acquisition ratings), lexical diversity (i.e., MTLD and VocD), and syntactic complexity (i.e., mean length of sentence and dependent clauses per T-unit). However, the relation of lexical diversity and syntactic complexity to writing quality was mediated by lexical sophistication. Implications for scientific report writing assessment and pedagogy in EMI educational contexts are discussed.
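One of the lexical diversity indices above, MTLD, counts how many "factors" (stretches of text over which the running type-token ratio stays above a threshold, conventionally 0.72) fit into a text. A simplified one-directional sketch (the published measure averages forward and backward passes; the token list is invented):

```python
def mtld_forward(tokens, threshold=0.72):
    """Simplified one-directional MTLD: tokens divided by factor count."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1            # TTR dropped: close a factor and restart
            types, count = set(), 0
    if count:                       # partial factor for the leftover stretch
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))

tokens = ("the cat sat on the mat and the dog sat on the rug "
          "while the bird sang in the tree").split()
score = mtld_forward(tokens)
```

Higher scores mean the writer sustains lexical variety over longer stretches; a maximally repetitive text scores near the floor.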
Assessing Writing, Volume 62, Article 100897.
Citations: 0
Validating an integrated reading-into-writing scale with trained university students
IF 4.2 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-10-01 Epub Date: 2024-09-24 DOI: 10.1016/j.asw.2024.100894
Claudia Harsch , Valeriia Koval , Paraskevi (Voula) Kanistra , Ximena Delgado-Osorio
Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts, such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances on four summary and opinion tasks, which were rated by six trained student raters who were preparing to become writing tutors for first-year students. We utilized a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning. Using thematic analysis, we analyzed think-aloud protocols and retrospective and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as a valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threats.
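In a many-facet Rasch model of the kind used above, the log-odds of success are modeled additively from examinee ability, task difficulty, and rater severity, which is what lets the model separate a severe rater from a hard task. A dichotomous-case sketch (the study scored on rating scales, which add category thresholds; the logit values below are invented):

```python
import math

def mfrm_prob(ability, difficulty, severity):
    """Probability of success under a dichotomous many-facet Rasch model:
    logit = examinee ability - task difficulty - rater severity (all in logits)."""
    logit = ability - difficulty - severity
    return 1 / (1 + math.exp(-logit))

# Same examinee and task, judged by a lenient vs. a severe rater
lenient = mfrm_prob(ability=1.0, difficulty=0.0, severity=-0.5)
severe  = mfrm_prob(ability=1.0, difficulty=0.0, severity=0.5)
```

Holding ability and difficulty fixed, the severe rater's positive severity lowers the modeled probability, which is exactly the adjustment FACETS applies when producing fair scores.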
Assessing Writing, Volume 62, Article 100894 (open access).
Citations: 0
Matches and mismatches between Saudi university students' English writing feedback preferences and teachers' practices
IF 3.9 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-07-01 Epub Date: 2024-06-17 DOI: 10.1016/j.asw.2024.100863
Muhammad M.M. Abdel Latif , Zainab Alsuhaibani , Asma Alsahil

Though much research has dealt with feedback practices in L2 writing classes, few studies have investigated learner and teacher feedback perspectives from a wide angle. Drawing on an 8-dimension framework of feedback in writing classes, this study investigated the potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data were collected using a student questionnaire and a teacher questionnaire. The two surveys assessed students' preferences for, and teachers' use of, 26 writing feedback modes, strategies, and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices vary from one feedback dimension to another. The study generally indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceed the matches. The qualitative data obtained from the answers to a set of open-ended questions in both questionnaires provided information about the students' and teachers' feedback-related beliefs and reasons. The paper ends by discussing the results and their implications.

Assessing Writing, Volume 61, Article 100863.
Citations: 0
Construct representation and predictive validity of integrated writing tasks: A study on the writing component of the Duolingo English Test
IF 3.9 Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-07-01 Epub Date: 2024-05-28 DOI: 10.1016/j.asw.2024.100846
Qin Xie

This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of Duolingo English Test (DET). It also verified whether they could enhance DET’s predictive power of English academic writing in universities. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were given to a sample (N = 204) of undergraduates from Hong Kong. Each participant also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytical rubrics. Raw scores were first processed using Multi-Faceted Rasch Measurement to estimate inter- and intra-rater consistency and generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and Structural Equation Modeling were conducted (in that order). The analyses verified the writing tasks’ underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to contribute to DET’s construct representation and add moderate predictive power to the domain performance. The findings, along with their practical implications, are discussed, especially regarding the complex relations between construct representation and predictive validity.

Exploring the multi-dimensional human mind: Model-based and text-based approaches
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-07-01 Epub Date: 2024-08-12 DOI: 10.1016/j.asw.2024.100878
Min Kyu Kim, Jinho Kim, Ali Heidari

In this study, we conceptualize two approaches, model-based and text-based, grounded on mental models and discourse comprehension theories, to computerized summary analysis. We juxtapose the model-based approach with the text-based approach to explore shared knowledge dimensions and associated measures from both approaches and use them to examine changes in students' summaries over time. We used 108 cases in which we computed model-based and text-based measures for two versions of students' summaries (i.e., initial and final revisions), resulting in a total of 216 observations. We used correlations, Principal Components Analysis (PCA), and Linear Mixed-Effects models. This exploratory investigation suggested a shortlist of text-based measures, and the findings of the PCA demonstrated that both model-based and text-based measures explained the three-dimensional model (i.e., surface, structure, and semantic). Overall, model-based measures were better for tracking changes in the surface dimension, while text-based measures were descriptive of the structure dimension. Both approaches worked well for the semantic dimension. The tested text-based measures can serve as a cross-reference to evaluate students' summaries along with the model-based measures. The current study shows the potential of using multidimensional measures to provide formative feedback on students' knowledge structure and writing styles along the three dimensions.
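The PCA step above can be illustrated with a small sketch: a handful of model-based and text-based indices, generated from three latent dimensions (surface, structure, semantic), are reduced via principal components. Everything here is synthetic — the measure names, loadings, and numbers are hypothetical and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 216  # 108 cases x 2 summary versions, as in the study; values synthetic

# Three latent dimensions: surface, structure, semantic
latent = rng.normal(size=(n_obs, 3))
# Hypothetical loadings of six summary measures on those dimensions
loadings = np.array([
    [0.9, 0.1, 0.1],   # model-based surface score
    [0.1, 0.2, 0.8],   # model-based semantic score
    [0.8, 0.2, 0.1],   # text-based word overlap
    [0.1, 0.9, 0.1],   # text-based cohesion index
    [0.2, 0.8, 0.1],   # text-based paragraph-structure index
    [0.1, 0.1, 0.9],   # text-based semantic similarity
])
X = latent @ loadings.T + 0.2 * rng.normal(size=(n_obs, 6))

# PCA via SVD on standardized measures
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / (s**2).sum()
print("variance explained by first 3 components:", explained[:3].round(3))
```

With measures that genuinely load on three dimensions, the first three components absorb most of the variance — the pattern the study's PCA used to support the surface/structure/semantic model.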

Corrigendum to “Assessing metacognition-based student feedback literacy for academic writing” [Assessing Writing 59 (2024) 100811]
IF 4.2 CAS Tier 1 (Literature) Q1 EDUCATION & EDUCATIONAL RESEARCH Pub Date: 2024-07-01 Epub Date: 2024-06-24 DOI: 10.1016/j.asw.2024.100869
Mark Feng Teng, Maggie Ma