
Latest Publications in Educational Measurement: Issues and Practice

Using OpenAI GPT to Generate Reading Comprehension Items
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-01-24 | DOI: 10.1111/emip.12590
Ayfer Sayin, Mark Gierl

The purpose of this study is to introduce and evaluate a method for generating reading comprehension items using template-based automatic item generation. To begin, we describe a new model for generating reading comprehension items called the text analysis cognitive model assessing inferential skills across different reading passages. Next, the text analysis cognitive model is used to generate reading comprehension items where examinees are required to read a passage and identify the irrelevant sentence. The sentences for the generated passages were created using OpenAI GPT-3.5. Finally, the quality of the generated items was evaluated. The generated items were reviewed by three subject-matter experts. The generated items were also administered to a sample of 1,607 Grade-8 students. The correct options for the generated items produced a similar level of difficulty and yielded strong discrimination power while the incorrect options served as effective distractors. Implications of augmented intelligence for item development are discussed.
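For readers unfamiliar with the classical item statistics this abstract reports, the sketch below shows how item difficulty (proportion correct) and point-biserial discrimination are typically computed. The response matrix and functions are invented for illustration; this is not the authors' analysis code.

```python
# Minimal classical item analysis: difficulty (p-value) and point-biserial
# discrimination. Toy data; illustrative only, not the study's actual analysis.
from statistics import mean, pstdev

def item_difficulty(item):
    """Proportion of examinees answering the item correctly."""
    return mean(item)

def point_biserial(item, totals):
    """Point-biserial correlation of a 0/1 item with total scores."""
    m1 = mean(t for t, x in zip(totals, item) if x == 1)  # mean total, correct group
    m0 = mean(t for t, x in zip(totals, item) if x == 0)  # mean total, incorrect group
    p = mean(item)
    return (m1 - m0) / pstdev(totals) * (p * (1 - p)) ** 0.5

# rows = examinees, columns = items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
    [1, 1, 1],
]
totals = [sum(row) for row in responses]
item0 = [row[0] for row in responses]
print(item_difficulty(item0))                     # 0.8
print(round(point_biserial(item0, totals), 3))    # 0.559
```

A difficulty near the target range plus a sizable positive point-biserial is the pattern the abstract describes for the generated items' correct options.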

Citations: 0
Achievement and Growth on English Language Proficiency and Content Assessments for English Learners in Elementary Grades
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2024-01-10 | DOI: 10.1111/emip.12588
Heather M Buzick, Mikyung Kim Wolf, Laura Ballard

English language proficiency (ELP) assessment scores are used by states to make high-stakes decisions related to linguistic support in instruction and assessment for English learner (EL) students and for EL student reclassification. Changes to both academic content standards and ELP academic standards within the last decade have resulted in increased academic rigor and language demands. In this study, we explored the association between EL student performance over time on content (English language arts and mathematics) and ELP assessments, generally finding evidence of positive associations. Modeling the simultaneous association between changes over time in both content and ELP assessment performance contributes empirical evidence about the role of language in ELA and mathematics development and provides contextual information to serve as validity evidence for score inferences for EL students.
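As a toy illustration of the kind of association the abstract describes, one could correlate per-student gains on the two assessments. The study itself models growth jointly over time rather than correlating simple gain scores, and the gain values below are invented.

```python
# Hypothetical sketch: Pearson correlation of per-student score gains on a
# content assessment vs. an ELP assessment. Invented data; the study models
# growth jointly rather than correlating simple gains.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation using population standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

ela_gains = [5, 8, 2, 10, 4]   # year-over-year content-score gains
elp_gains = [3, 7, 1, 9, 5]    # year-over-year ELP-score gains
print(round(pearson(ela_gains, elp_gains), 2))  # positive association
```

A positive coefficient here would mirror the "positive associations" the authors report between content and ELP performance.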

Citations: 0
Issue Cover
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-12-05 | DOI: 10.1111/emip.12518
Citations: 0
ITEMS Corner Update: The Final Three Steps in the Development Process
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-12-05 | DOI: 10.1111/emip.12586
Brian C. Leventhal
Throughout 2023, I have detailed each step of the module development process for the Instructional Topics in Educational Measurement Series (ITEMS). In the first issue, I outlined the 10 steps necessary to complete a module. In the second issue, I detailed Steps 1–3, which cover outlining the content, developing the content in premade PowerPoint templates, and having the slides reviewed by the editor. In the third issue of the year, I outlined Step 4—recording the audio, Step 5—having the editor polish the module (e.g., animating the content), Step 6—building the activity, and Step 7—building interactive learning checks (i.e., selected response questions designed to check for understanding). In this issue, I elaborate on the final three steps: Step 8—external review, Step 9—building the module on the portal, and Step 10—writing the piece to be published in Educational Measurement: Issues and Practice (EM:IP). Following the in-depth explanation of each of these steps, I then introduce the newest module published to the ITEMS portal (https://www.ncme.org/ITEMSportal).

Authors may opt to have their module externally reviewed (Step 8) prior to recording audio (Step 4) or after the module has been polished (Step 5). Having the module content reviewed prior to recording audio allows for modifying content easily without having to do "double" work (e.g., rerecording audio on slides, reorganizing flow charts). However, many authors find that their bulleted notes for each slide are not sufficient for reviewers to understand the final product. Alternatively, they may opt to have their module sent out for review once it has been editorially polished. This lets reviewers watch the author's "final" product. Because the reviewers may suggest updates, I request authors record audio on each slide. Should an author choose to make a change after review, they then do not have to rerecord an entire 20-minute section of audio. Reviewers are instructed to provide constructive feedback and are given insights about the full process that authors have already worked through (i.e., the ten-step process). It is emphasized that the purpose of ITEMS is not to present novel cutting-edge research. Rather, it is a publication designed to provide instructional resources on current practices in the field.

After receiving reviewer feedback, authors are provided an opportunity to revise their module. Similar to a manuscript revision and resubmission, authors are asked to respond to each reviewer's comment, articulating how they have addressed each. This serves an additional purpose; specifically, it assists the editor in repolishing the updated module. For example, if audio is rerecorded on a slide, the editor will need to adjust animations and timing. After the editor has made final updates, the author reviews the module to give final approval. Upon receiving approval, the editor then builds the module onto the NCME website.
If you are interested in learning more about the ITEMS module development process, authoring a module, or getting involved in some other way, please contact me at [email protected].
Citations: 0
On the Cover: Tell-Tale Triangles of Subscore Value
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-12-05 | DOI: 10.1111/emip.12587
Yuan-Ling Liaw
Citations: 0
Digital Module 34: Introduction to Multilevel Measurement Modeling
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-12-05 | DOI: 10.1111/emip.12585
Mairead Shaw, Jessica K. Flake

Module Abstract

Clustered data structures are common in many areas of educational and psychological research (e.g., students clustered in schools, patients clustered by clinician). In the course of conducting research, questions are often administered to obtain scores reflecting latent constructs. Multilevel measurement models (MLMMs) allow for modeling measurement (the relationship of test items to constructs) and the relationships between variables in a clustered data structure. Modeling the two concurrently is important for accurately representing the relationships between items and constructs, and between constructs and other constructs/variables. The barrier to entry with MLMMs can be high, with many equations and less-documented software functionality. This module reviews two different frameworks for multilevel measurement modeling: (1) multilevel modeling and (2) structural equation modeling. We demonstrate the entire process in R with working code and available data, from preparing the dataset, through writing and running code, to interpreting and comparing output for the two approaches.
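Before fitting a full multilevel measurement model, analysts often check how much score variance lies between clusters. The sketch below computes the one-way intraclass correlation ICC(1) from ANOVA mean squares for balanced clusters; it is an illustrative stdlib-Python toy, not the module's own demonstrations, which use R.

```python
# Illustrative sketch: one-way intraclass correlation ICC(1) for balanced
# clusters, built from the usual ANOVA mean squares. Toy data; the module's
# actual workflow is demonstrated in R, not Python.
from statistics import mean

def icc1(groups):
    """ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), equal cluster sizes."""
    n = len(groups)                       # number of clusters
    k = len(groups[0])                    # observations per cluster
    grand = mean(x for g in groups for x in g)
    # between-cluster mean square
    msb = k * sum((mean(g) - grand) ** 2 for g in groups) / (n - 1)
    # within-cluster mean square
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# two hypothetical schools, three student scores each
print(round(icc1([[1, 2, 3], [4, 5, 6]]), 3))  # 0.806
```

A large ICC, as here, signals that ignoring the clustering would misrepresent the item-construct relationships the module is concerned with.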

Citations: 0
The 2024 EM:IP Cover Graphic/Data Visualization Competition
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-11-13 | DOI: 10.1111/emip.12583
Yuan-Ling Liaw
Citations: 0
Comparing Large-Scale Assessments in Two Proctoring Modalities with Interactive Log Data Analysis
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-11-02 | DOI: 10.1111/emip.12582
Jinnie Shin, Qi Guo, Maxim Morin

With the increased restrictions on physical distancing due to the COVID-19 pandemic, remote proctoring has emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted the significant impact of different proctoring modalities on examinees’ test experience, including factors like response-time data. However, the potential influence of these differences on test performance has remained unclear. One limitation in the current literature is the lack of a rigorous learning analytics framework to evaluate the comparability of computer-based exams delivered using various proctoring settings. To address this gap, the current study aims to introduce a machine-learning-based framework that analyzes computer-generated response-time data to investigate the association between proctoring modalities in high-stakes assessments. We demonstrated the effectiveness of this framework using empirical data collected from a large-scale high-stakes medical licensing exam conducted in Canada. By applying the machine-learning-based framework, we were able to extract examinee-specific response-time data for each proctoring modality and identify distinct time-use patterns among examinees based on their proctoring modality.
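To give a rough picture of the preprocessing a framework like this might start from, the sketch below summarizes one examinee's log records into response-time features that a downstream ML or clustering step could consume. The field names and the feature set are hypothetical; the paper's actual pipeline is not reproduced here.

```python
# Hypothetical sketch: turning one examinee's log records into response-time
# features for a downstream ML step. Field names and features are invented.
from statistics import median, quantiles

def rt_features(log_rows):
    """Median, IQR, and total of item response times (seconds) for one examinee."""
    times = sorted(row["seconds"] for row in log_rows)
    q1, _, q3 = quantiles(times, n=4)   # quartiles (default exclusive method)
    return {"median": median(times), "iqr": q3 - q1, "total": sum(times)}

event_rows = [{"item": i, "seconds": s} for i, s in enumerate([10, 20, 30, 40, 50])]
print(rt_features(event_rows))  # median=30, iqr=30.0, total=150
```

Feature vectors like these, computed per examinee and per proctoring modality, are the kind of input from which distinct time-use patterns could be identified.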

Citations: 0
Foundational Competencies in Educational Measurement
IF 2.7 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-10-17 | DOI: 10.1111/emip.12581
Terry A. Ackerman, Deborah L. Bandalos, Derek C. Briggs, Howard T. Everson, Andrew D. Ho, Susan M. Lottridge, Matthew J. Madison, Sandip Sinharay, Michael C. Rodriguez, Michael Russell, Alina A. von Davier, Stefanie A. Wind

This article presents the consensus of a National Council on Measurement in Education Presidential Task Force on Foundational Competencies in Educational Measurement. Foundational competencies are those that support future development of additional professional and disciplinary competencies. The authors develop a framework for foundational competencies in educational measurement, illustrate how educational measurement programs can help learners develop these competencies, and demonstrate how foundational competencies continue to develop in educational measurement professions. The framework introduces three foundational competency domains: Communication and Collaboration Competencies; Technical, Statistical, and Computational Competencies; and Educational Measurement Competencies. Within the Educational Measurement Competency domain, the authors identify five subdomains: Social, Cultural, Historical, and Political Context; Validity, Validation, and Fairness; Theory and Instrumentation; Precision and Generalization; and Psychometric Modeling.

Citations: 0
Digital Module 33: Fairness in Classroom Assessment: Dimensions and Tensions
IF 2.0 | CAS Tier 4 (Education) | JCR Q1, EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2023-09-06 | DOI: 10.1111/emip.12572
Amirhossein Rasooli

Perceptions of fairness are fundamental to building cooperation and trust, defusing conflicts, and gaining legitimacy in teacher-student relationships in classroom assessment. However, perceptions of unfairness in assessment can undermine students' mental well-being, increase antisocial behaviors, deepen psychological disengagement from learning, and threaten the belief in a fair society that is fundamental to engaging in civic responsibilities. Despite the crucial role of perceived fairness in assessment, students internationally report widespread experiences of unfairness. Yet only limited explicit education on promoting fairness in assessment is being delivered in graduate, preservice, and in-service training, even though explicit education appears to be the first step in building capacity to reduce unfair perceptions and their undesirable outcomes. The purpose of this module is thus to share findings drawn from theoretical and empirical research in various countries and to provide a space for further critical reflection on best practices for enhancing fairness in classroom assessment contexts.

Citations: 0