
Latest Publications in Educational Measurement: Issues and Practice

A Probabilistic Filtering Approach to Non-Effortful Responding
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-06-16 | DOI: 10.1111/emip.12567
Esther Ulitzsch, Benjamin W. Domingue, Radhika Kapoor, Klint Kanopka, Joseph A. Rios

Common response-time-based approaches for non-effortful response behavior (NRB) in educational achievement tests filter responses that are associated with response times below some threshold. These approaches are, however, limited in that they require a binary decision on whether a response is classified as stemming from NRB; thus ignoring potential classification uncertainty in resulting parameter estimates. We developed a response-time-based probabilistic filtering procedure that overcomes this limitation. The procedure is rooted in the principles of multiple imputation. Instead of creating multiple plausible replacements of missing data, however, multiple data sets are created that represent plausible filtered response data. We propose two different approaches to filtering models, originating in different research traditions and conceptualizations of response-time-based identification of NRB. The first approach uses Gaussian mixture modeling to identify a response time subcomponent stemming from NRB. Plausible filtered data sets are created based on examinees' posterior probabilities of belonging to the NRB subcomponent. The second approach defines a plausible range of response time thresholds and creates plausible filtered data sets by drawing multiple response time thresholds from the defined range. We illustrate the workings of the proposed procedure as well as differences between the proposed filtering models based on both simulated data and empirical data from PISA 2018.
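
The two approaches described above lend themselves to a compact illustration. The sketch below is a hypothetical, minimal Python rendering of the mixture-based idea only: it fits a two-component Gaussian mixture to log response times, treats the faster component as non-effortful responding (NRB), and uses each response's posterior probability of belonging to that component to draw several plausible filtered data sets, in the spirit of multiple imputation. The simulated data and all names are illustrative; this is not the authors' implementation.

```python
# Hypothetical sketch of mixture-based probabilistic filtering (illustration only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated log response times: a fast NRB component and a slower effortful one.
log_rt = np.concatenate([
    rng.normal(0.5, 0.3, size=200),   # fast, non-effortful responses
    rng.normal(2.5, 0.5, size=800),   # slower, effortful responses
])

# Fit a two-component Gaussian mixture and take the faster component as NRB.
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_rt.reshape(-1, 1))
nrb_component = int(np.argmin(gmm.means_.ravel()))
p_nrb = gmm.predict_proba(log_rt.reshape(-1, 1))[:, nrb_component]

# Draw M plausible filtered data sets: each response is dropped with probability
# equal to its posterior probability of stemming from NRB.
M = 10
filtered_data_sets = [
    np.flatnonzero(rng.random(log_rt.size) >= p_nrb)  # indices of retained responses
    for _ in range(M)
]

# Downstream (not shown): fit the measurement model to each filtered data set
# and pool the M sets of estimates, as in multiple imputation.
```

The threshold-range variant would follow the same template, except that each plausible filtered data set is produced by drawing a threshold from the defined range and discarding responses faster than that draw.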

{"title":"A Probabilistic Filtering Approach to Non-Effortful Responding","authors":"Esther Ulitzsch,&nbsp;Benjamin W. Domingue,&nbsp;Radhika Kapoor,&nbsp;Klint Kanopka,&nbsp;Joseph A. Rios","doi":"10.1111/emip.12567","DOIUrl":"10.1111/emip.12567","url":null,"abstract":"<p>Common response-time-based approaches for non-effortful response behavior (NRB) in educational achievement tests filter responses that are associated with response times below some threshold. These approaches are, however, limited in that they require a binary decision on whether a response is classified as stemming from NRB; thus ignoring potential classification uncertainty in resulting parameter estimates. We developed a response-time-based probabilistic filtering procedure that overcomes this limitation. The procedure is rooted in the principles of multiple imputation. Instead of creating multiple plausible replacements of missing data, however, multiple data sets are created that represent plausible filtered response data. We propose two different approaches to filtering models, originating in different research traditions and conceptualizations of response-time-based identification of NRB. The first approach uses Gaussian mixture modeling to identify a response time subcomponent stemming from NRB. Plausible filtered data sets are created based on examinees' posterior probabilities of belonging to the NRB subcomponent. The second approach defines a plausible range of response time thresholds and creates plausible filtered data sets by drawing multiple response time thresholds from the defined range. We illustrate the workings of the proposed procedure as well as differences between the proposed filtering models based on both simulated data and empirical data from PISA 2018.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12567","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46209020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Issue Cover
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-06-09 | DOI: 10.1111/emip.12514
{"title":"Issue Cover","authors":"","doi":"10.1111/emip.12514","DOIUrl":"https://doi.org/10.1111/emip.12514","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12514","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50143334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Visualizing Distributions Across Grades
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-06-09 | DOI: 10.1111/emip.12558
Yuan-Ling Liaw
{"title":"Visualizing Distributions Across Grades","authors":"Yuan-Ling Liaw","doi":"10.1111/emip.12558","DOIUrl":"10.1111/emip.12558","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42774382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-06-09 | DOI: 10.1111/emip.12555
James Soland

Most individuals who take, interpret, design, or score tests are aware that examinees do not always provide full effort when responding to items. However, many such individuals are not aware of how pervasive the issue is, what its consequences are, and how to address it. In this digital ITEMS module, Dr. James Soland will help fill these gaps in the knowledge base. Specifically, the module enumerates how frequently behaviors associated with low effort occur, and some of the ways they can distort inferences based on test scores. Then, the module explains some of the most common approaches for identifying low effort, and correcting for it when examining test scores. Brief discussion is also given to how these methods align with, and diverge from, those used to deal with low respondent effort in self-report contexts. Data and code are also provided such that readers can better implement some of the desired methods in their own work.
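
As a concrete, hedged illustration of the kind of low-effort identification the module discusses (not the module's own data or code), the snippet below applies a simple normative-threshold rule: a response is flagged as a likely rapid guess when its response time falls below a fixed fraction of the item's mean response time. The function name, the 10% fraction, and the simulated response times are assumptions made for this example.

```python
# Hypothetical normative-threshold flagging of rapid guesses (illustration only).
import numpy as np

def flag_rapid_guesses(response_times, fraction=0.10):
    """Flag responses faster than `fraction` of each item's mean response time.

    response_times: array of shape (n_examinees, n_items), in seconds.
    Returns a boolean array of the same shape (True = flagged as low effort).
    """
    item_means = response_times.mean(axis=0)    # typical time per item
    thresholds = fraction * item_means          # per-item normative thresholds
    return response_times < thresholds

# Example: proportion of flagged responses per examinee, which could inform
# decisions about excluding or down-weighting examinees when using test scores.
rng = np.random.default_rng(1)
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(100, 20))
prop_flagged = flag_rapid_guesses(rt).mean(axis=1)
```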

{"title":"Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores","authors":"James Soland","doi":"10.1111/emip.12555","DOIUrl":"10.1111/emip.12555","url":null,"abstract":"<p>Most individuals who take, interpret, design, or score tests are aware that examinees do not always provide full effort when responding to items. However, many such individuals are not aware of how pervasive the issue is, what its consequences are, and how to address it. In this digital ITEMS module, Dr. James Soland will help fill these gaps in the knowledge base. Specifically, the module enumerates how frequently behaviors associated with low effort occur, and some of the ways they can distort inferences based on test scores. Then, the module explains some of the most common approaches for identifying low effort, and correcting for it when examining test scores. Brief discussion is also given to how these methods align with, and diverge from, those used to deal with low respondent effort in self-report contexts. Data and code are also provided such that readers can better implement some of the desired methods in their own work.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45513786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ITEMS Corner Update: The Initial Steps in the ITEMS Development Process
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-06-09 | DOI: 10.1111/emip.12556
Brian C. Leventhal

In the previous issue of Educational Measurement: Issues and Practice (EM:IP) I outlined the ten steps to authoring and producing a digital module for the Instructional Topics in Educational Measurement Series (ITEMS). In the current piece, I detail the first three steps: Step 1—Content Outline; Step 2—Content Development; and Step 3—Draft Review. After in-depth discussion of these three steps, I introduce the newest ITEMS module.

Prior to beginning the ten-step process, ITEMS module development starts with an initial meeting between myself (as editor) and the lead author(s). During this meeting, I discuss the development process in detail, showcasing what a final product looks like from the learners’ perspective in addition to a sneak-peek behind-the-scenes at what the final product looks like from the editorial perspective. After discussing the end product, the remaining conversation focuses on the 10-step process and the user-friendly templates to guide authors. The conversation concludes after coming to an agreement of the topic and general scope for the module.

Authors then independently work through a module outline template to refine the scope and sequencing of the module (Step 1). During this step, authors are encouraged to first specify their audience before setting the learning objectives of the module. Once learning objectives are set, authors are then tasked with determining the prerequisite knowledge for learners. In the next section of the template, authors outline the content and sequencing of the 4–6 sections of the module. Each section has its own learning objectives that map to the objectives of the module. One of the sections is a learner-focused interactive activity, whether it be a demonstration of software or a case study that is relevant to the content of the other sections. Once the outline is completed, the authors receive feedback to ensure adequate sequencing, feasibility of module development (e.g., covering a reasonable amount of content), and appropriateness for the audience. This is an example of the unique nature of ITEMS module development. Unlike most other publications, ITEMS module development consists of regular communication and feedback from the editor. Once the scope and outline of content have been agreed to, the authors move on to Step 2: Content Development.

For Step 2, authors are provided a slide deck template to assist in developing content consistent with the ITEMS format and brand. Using this slide deck, authors maintain creative flexibility by choosing among many slide layouts, each preprogrammed with consistent font, sizing, and color. Authors create individual slide decks for each section of the module, embedding media (e.g., pictures/figures) wherever necessary to assist learner understanding. At this stage, authors are not expected to record audio nor are they expected to put in animations. The primary focus for the authors is the content; the rest is handled by the editorial team. Depending on the topic, a typical section runs 10 to 15 slides, with authors planning to speak for 1 to 2 minutes per slide. Authors typically request feedback after developing the content for a section to confirm an appropriate balance of text, graphics, and figures. During content development, authors have found it valuable to write detailed notes, whether as bullet points or an actual script, to aid the audio recording that comes later. Drafts of each section are then reviewed by the editorial team (Step 3), with Steps 2 and 3 becoming iterative until both the authors and the editor are satisfied with the work. Once the slide decks are complete, authors may opt for external review, or they may record audio and seek external review later in the process; the pros and cons of seeking review at each point are discussed with the editor.

In future issues of EM:IP, I will detail the remaining steps of the ITEMS module development process. The purpose of this exposition is threefold: (1) to familiarize readers, learners, and potential authors with the development process of this atypical publication; (2) to highlight the detailed behind-the-scenes work completed by module authors; and (3) to attract the interest of potential authors by showcasing a rigorous, guided development process. ITEMS modules hold incredible utility for many audiences (e.g., graduate students and faculty, clients, and professionals within and beyond educational measurement). It is through the voluntary contributions of authors, the editorial team, and reviewers that such wonderful products exist.

Finally, I am pleased to announce the forthcoming newest module, Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores, authored by Dr. James Soland.

{"title":"ITEMS Corner Update: The Initial Steps in the ITEMS Development Process","authors":"Brian C. Leventhal","doi":"10.1111/emip.12556","DOIUrl":"10.1111/emip.12556","url":null,"abstract":"<p>In the previous issue of <i>Educational Measurement: Issues and Practice</i> (<i>EM:IP</i>) I outlined the ten steps to authoring and producing a digital module for the <i>Instructional Topics in Educational Measurement Series</i> (<i>ITEMS</i>). In the current piece, I detail the first three steps: Step 1—Content Outline; Step 2—Content Development; and Step 3—Draft Review. After in-depth discussion of these three steps, I introduce the newest ITEMS module.</p><p>Prior to beginning the ten-step process, ITEMS module development starts with an initial meeting between myself (as editor) and the lead author(s). During this meeting, I discuss the development process in detail, showcasing what a final product looks like from the learners’ perspective in addition to a sneak-peek behind-the-scenes at what the final product looks like from the editorial perspective. After discussing the end product, the remaining conversation focuses on the 10-step process and the user-friendly templates to guide authors. The conversation concludes after coming to an agreement of the topic and general scope for the module.</p><p>Authors then independently work through a module outline template to refine the scope and sequencing of the module (Step 1). During this step, authors are encouraged to first specify their audience before setting the learning objectives of the module. Once learning objectives are set, authors are then tasked with determining the prerequisite knowledge for learners. In the next section of the template, authors outline the content and sequencing of the 4–6 sections of the module. Each section has its own learning objectives that map to the objectives of the module. One of the sections is a learner-focused interactive activity, whether it be a demonstration of software or a case study that is relevant to the content of the other sections. Once the outline is completed, the authors receive feedback to ensure adequate sequencing, feasibility of module development (e.g., covering a reasonable amount of content), and appropriateness for the audience. This is an example of the unique nature of <i>ITEMS</i> module development. Unlike most other publications, <i>ITEMS</i> module development consists of regular communication and feedback from the editor. Once the scope and outline of content have been agreed to, the authors move on to Step 2: Content Development.</p><p>For Step 2, authors are provided a slide deck template to assist in developing content consistent with the <i>ITEMS</i> format and brand. Using this slide deck, authors maintain creative flexibility by choosing among many slide layouts, each preprogrammed with consistent font, sizing, and color. Authors create individual slide decks for each section of the module, embedding media (e.g., pictures/figures) wherever necessary to assist learner understanding. At this stage, authors are not expected to record audio nor are they expected to put in animations. 
The primary focus for the authors i","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12556","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42654384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Role of Response Style Adjustments in Cross-Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-05-01 | DOI: 10.1111/emip.12552
Esther Ulitzsch, Oliver Lüdtke, Alexander Robitzsch

Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background questionnaire data, we investigate such an adjustment procedure and explore its impact on cross-country comparisons in contrast to customary analyses and RS adjustments that (a) leave RS unconsidered, (b) incorporate stronger assumptions on RS structure, and/or (c) only use some selected scales for RS measurement. Our findings suggest that not only the decision as to whether to adjust for RS but also how to adjust may heavily impact cross-country comparisons. This concerns both the assumptions on RS structures and the scales employed for RS measurement. Implications for RS adjustments in cross-country comparisons are derived, strongly advocating for taking model uncertainty into account.
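
For readers unfamiliar with response-style (RS) measurement, the sketch below shows, in a deliberately simplified and hypothetical form, the general idea of building person-level RS indices from items pooled across many Likert scales, here plain extreme-response and acquiescence proportions. It is not the adjustment procedure evaluated in the article, and the data, names, and 4-point coding are illustrative assumptions.

```python
# Hypothetical person-level response-style indices pooled across scales (illustration only).
import numpy as np

def response_style_indices(responses, n_categories=4):
    """Compute simple RS indices from Likert responses coded 1..n_categories.

    responses: array of shape (n_respondents, n_items), items pooled across scales.
    Returns extreme-response (ERS) and acquiescence (ARS) proportions per respondent.
    """
    lowest, highest = 1, n_categories
    ers = np.mean((responses == lowest) | (responses == highest), axis=1)
    ars = np.mean(responses > (lowest + highest) / 2, axis=1)
    return ers, ars

# Such indices could enter a measurement model as covariates; this is only one of
# many possible adjustment strategies and rests on stronger assumptions than the
# minimal-assumption procedure investigated in the article.
rng = np.random.default_rng(2)
likert = rng.integers(1, 5, size=(500, 40))  # 40 items from several 4-point scales, coded 1..4
ers, ars = response_style_indices(likert)
```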

{"title":"The Role of Response Style Adjustments in Cross-Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire","authors":"Esther Ulitzsch,&nbsp;Oliver Lüdtke,&nbsp;Alexander Robitzsch","doi":"10.1111/emip.12552","DOIUrl":"10.1111/emip.12552","url":null,"abstract":"<p>Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background questionnaire data, we investigate such an adjustment procedure and explore its impact on cross-country comparisons in contrast to customary analyses and RS adjustments that (a) leave RS unconsidered, (b) incorporate stronger assumptions on RS structure, and/or (c) only use some selected scales for RS measurement. Our findings suggest that not only the decision as to whether to adjust for RS but also how to adjust may heavily impact cross-country comparisons. This concerns both the assumptions on RS structures and the scales employed for RS measurement. Implications for RS adjustments in cross-country comparisons are derived, strongly advocating for taking model uncertainty into account.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12552","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49547140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Diving Into Students’ Transcripts: High School Course-Taking Sequences and Postsecondary Enrollment
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-04-23 | DOI: 10.1111/emip.12554
Burhan Ogut, Ruhan Circi

The purpose of this study was to explore high school course-taking sequences and their relationship to college enrollment. Specifically, we implemented sequence analysis to discover common course-taking trajectories in math, science, and English language arts using high school transcript data from a recent nationally representative survey. Through sequence clustering, we reduced the complexity of the sequences and examined representative course-taking sequences. Classification tree, random forests, and multinomial logistic regression analyses were used to explore the relationship between the course sequences students complete and their postsecondary outcomes. Results showed that distinct representative course-taking sequences can be identified for all students as well as student subgroups. More advanced and complex course-taking sequences were associated with postsecondary enrollment.
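
The pipeline summarized above (sequence representation, clustering, and outcome modeling) can be illustrated with a small, hypothetical example. The sketch below uses toy data, a plain Hamming distance with hierarchical clustering, and a logistic regression of enrollment on cluster membership; the study's actual coding scheme, distance measure, and models differ, so treat this purely as a schematic.

```python
# Hypothetical sketch: cluster course-taking sequences, then relate clusters to enrollment.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Toy data: 300 students, 4 grades, math course level coded 1 (basic) to 5 (most advanced).
sequences = np.clip(np.cumsum(rng.integers(0, 3, size=(300, 4)), axis=1) + 1, 1, 5)

# Cluster the sequences using pairwise Hamming distance and average linkage.
distances = pdist(sequences, metric="hamming")
clusters = fcluster(linkage(distances, method="average"), t=4, criterion="maxclust")

# Toy outcome: higher final course levels make postsecondary enrollment more likely.
enrolled = rng.random(300) < 1 / (1 + np.exp(-(sequences[:, -1] - 3)))

# Relate dummy-coded cluster membership to enrollment.
cluster_dummies = np.eye(4)[clusters - 1]
model = LogisticRegression().fit(cluster_dummies, enrolled)
enrollment_rate_by_cluster = model.predict_proba(np.eye(4))[:, 1]
```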

{"title":"Diving Into Students’ Transcripts: High School Course-Taking Sequences and Postsecondary Enrollment","authors":"Burhan Ogut,&nbsp;Ruhan Circi","doi":"10.1111/emip.12554","DOIUrl":"10.1111/emip.12554","url":null,"abstract":"<p>The purpose of this study was to explore high school course-taking sequences and their relationship to college enrollment. Specifically, we implemented sequence analysis to discover common course-taking trajectories in math, science, and English language arts using high school transcript data from a recent nationally representative survey. Through sequence clustering, we reduced the complexity of the sequences and examined representative course-taking sequences. Classification tree, random forests, and multinomial logistic regression analyses were used to explore the relationship between the course sequences students complete and their postsecondary outcomes. Results showed that distinct representative course-taking sequences can be identified for all students as well as student subgroups. More advanced and complex course-taking sequences were associated with postsecondary enrollment.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42997861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Validation as Evaluating Desired and Undesired Effects: Insights From Cross-Classified Mixed Effects Model
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-04-05 | DOI: 10.1111/emip.12553
Xuejun Ryan Ji, Amery D. Wu

The Cross-Classified Mixed Effects Model (CCMEM) has been demonstrated to be a flexible framework for evaluating reliability by measurement specialists. Reliability can be estimated based on the variance components of the test scores. Built upon their accomplishment, this study extends the CCMEM to be used for evaluating validity evidence. Validity is viewed as the coherence among the elements of a measurement system. As such, validity can be evaluated by the user-reasoned desired or undesired fixed and random effects. Based on the data of ePIRLS 2016 Reading Assessment, we demonstrate how to obtain evidence for reliability and validity by CCMEM. We conclude with a discussion on the practicality and benefits of this validation method.
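
To make the link between variance components and reliability concrete, the sketch below computes a generalizability-theory-style coefficient from an ANOVA decomposition of a fully crossed persons-by-items score matrix. This is a simplified stand-in for illustration, not the cross-classified mixed effects specification used in the article, and the simulated scores are assumptions.

```python
# Hypothetical variance-components reliability for a fully crossed persons x items design.
import numpy as np

def variance_component_reliability(scores):
    """scores: array of shape (n_persons, n_items), one score per person-item cell."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    # ANOVA sums of squares and mean squares.
    ss_p = n_i * np.sum((person_means - grand) ** 2)
    ss_i = n_p * np.sum((item_means - grand) ** 2)
    ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_i
    ms_p, ms_i = ss_p / (n_p - 1), ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))

    # Variance components from expected mean squares.
    var_res = ms_res                          # person-by-item interaction plus error
    var_p = max((ms_p - ms_res) / n_i, 0.0)   # person (true-score) variance
    var_i = max((ms_i - ms_res) / n_p, 0.0)   # item variance

    # Generalizability coefficient: person variance relative to person variance
    # plus residual variance averaged over the n_i items.
    return var_p / (var_p + var_res / n_i), (var_p, var_i, var_res)

rng = np.random.default_rng(4)
scores = rng.normal(0, 1, size=(200, 1)) + rng.normal(0, 0.8, size=(200, 25))
reliability, components = variance_component_reliability(scores)
```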

{"title":"Validation as Evaluating Desired and Undesired Effects: Insights From Cross-Classified Mixed Effects Model","authors":"Xuejun Ryan Ji,&nbsp;Amery D. Wu","doi":"10.1111/emip.12553","DOIUrl":"10.1111/emip.12553","url":null,"abstract":"<p>The Cross-Classified Mixed Effects Model (CCMEM) has been demonstrated to be a flexible framework for evaluating reliability by measurement specialists. Reliability can be estimated based on the variance components of the test scores. Built upon their accomplishment, this study extends the CCMEM to be used for evaluating validity evidence. Validity is viewed as the coherence among the elements of a measurement system. As such, validity can be evaluated by the user-reasoned desired or undesired fixed and random effects. Based on the data of ePIRLS 2016 Reading Assessment, we demonstrate how to obtain evidence for reliability and validity by CCMEM. We conclude with a discussion on the practicality and benefits of this validation method.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44738214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Personalizing Large-Scale Assessment in Practice
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-03-26 | DOI: 10.1111/emip.12551
Heather M. Buzick, Jodi M. Casabianca, Melissa L. Gholson

The article describes practical suggestions for measurement researchers and psychometricians to respond to calls for social responsibility in assessment. The underlying assumption is that personalizing large-scale assessment improves the chances that assessment and the use of test scores will contribute to equity in education. This article describes a spectrum of standardization and personalization in large-scale assessment. Informed by a review of existing theories, models, and frameworks in the context of current and developing technologies and with a social justice lens, we propose steps to take, as part of assessment research and development, to contribute to the science of personalizing large-scale assessment in technically defensible ways.

{"title":"Personalizing Large-Scale Assessment in Practice","authors":"Heather M. Buzick,&nbsp;Jodi M. Casabianca,&nbsp;Melissa L. Gholson","doi":"10.1111/emip.12551","DOIUrl":"10.1111/emip.12551","url":null,"abstract":"<p>The article describes practical suggestions for measurement researchers and psychometricians to respond to calls for social responsibility in assessment. The underlying assumption is that personalizing large-scale assessment improves the chances that assessment and the use of test scores will contribute to equity in education. This article describes a spectrum of standardization and personalization in large-scale assessment. Informed by a review of existing theories, models, and frameworks in the context of current and developing technologies and with a social justice lens, we propose steps to take, as part of assessment research and development, to contribute to the science of personalizing large-scale assessment in technically defensible ways.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44359584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ITEMS Corner Update: The New ITEMS Module Development Process
IF 2.0 | CAS Tier 4 (Education) | Q2 Social Sciences | Pub Date: 2023-03-26 | DOI: 10.1111/emip.12545
Brian C. Leventhal

This issue marks 1 year into my tenure as editor of Instructional Topics in Educational Measurement Series (ITEMS). I will summarize and reflect on the achievements from the past year, outline the new ITEMS module production process, and introduce the new module published in this issue of Educational Measurement: Issues and Practice (EM:IP).

Over the past year, there have been three new modules published: Unusual Things That Usually Occur in a Credentialing Testing Program (Feinberg et al., 2022), Multidimensional Item Response Theory Equating (Kim, 2022), and Validity and Educational Testing: Purposes and Uses of Educational Tests (Lewis & Sireci, 2022). Each of these modules has been a great addition to the ITEMS library, with the latter two being in the new format released in mid-2022.

Among the many benefits of the new format, modules are now more accessible on a variety of devices (e.g., desktop, phone, tablet) in both online and offline mode. The production process has also been simplified. Over the next few issues of EM:IP, I will take a deep dive into the process of designing a module for this nontraditional publication. The goal is threefold: (1) to educate readers of the behind-the-scenes process; (2) to showcase the extensive work that module development requires; and (3) to attract readers as potential authors, understanding the value of taking time to produce such a useful resource.

As noted, I will discuss these steps in more detail in the upcoming issues of EM:IP. Reconceptualizing ITEMS modules into this new form was only one of two initiatives I undertook in 2022. For the other, I worked to shift the ITEMS portal from a stand-alone website to the NCME website. As noted in the last issue of EM:IP, this has successfully been completed with evidence of several learners accessing the new ITEMS portal.

For 2023, I look forward to the production of several new and engaging ITEMS modules. I am excited to announce the first module of 2023, Digital Module 31: Testing Accommodations for Students with Disabilities, authored by Dr. Benjamin Lovett. In this module, Dr. Lovett describes common testing accommodations, explains how testing accommodations can reduce construct-irrelevant variance and increase fairness, and describes best practices along with current common problems in practice. In this five-section module, Dr. Lovett provides video versions of the content as well as an interactive activity using two case studies.

If you are interested in learning more about the ITEMS module development process, authoring a module, or being involved in some other capacity, please reach out to me at [email protected].

{"title":"ITEMS Corner Update: The New ITEMS Module Development Process","authors":"Brian C. Leventhal","doi":"10.1111/emip.12545","DOIUrl":"10.1111/emip.12545","url":null,"abstract":"<p>This issue marks 1 year into my tenure as editor of <i>Instructional Topics in Educational Measurement Series</i> (<i>ITEMS</i>). I will summarize and reflect on the achievements from the past year, outline the new ITEMS module production process, and introduce the new module published in this issue of <i>Educational Measurement: Issues and Practice</i> (<i>EM:IP</i>).</p><p>Over the past year, there have been three new modules published: Unusual Things That Usually Occur in a Credentialing Testing Program (Feinberg et al., <span>2022</span>), Multidimensional Item Response Theory Equating (Kim, <span>2022</span>), and Validity and Educational Testing: Purposes and Uses of Educational Tests (Lewis &amp; Sireci, <span>2022</span>). Each of these modules has been a great addition to the ITEMS library, with the latter two being in the new format released in mid-2022.</p><p>Among the many benefits of the new format, modules are now more accessible on a variety of devices (e.g., desktop, phone, tablet) in both online and offline mode. The production process has also been simplified. Over the next few issues of <i>EM:IP</i>, I will take a deep dive into the process of designing a module for this nontraditional publication. The goal is threefold: (1) to educate readers of the behind-the-scenes process; (2) to showcase the extensive work that module development requires; and (3) to attract readers as potential authors, understanding the value of taking time to produce such a useful resource.</p><p>As noted, I will discuss these steps in more detail in the upcoming issues of <i>EM:IP</i>. Reconceptualizing <i>ITEMS</i> modules into this new form was only one of two initiatives I undertook in 2022. For the other, I worked to shift the ITEMS portal from a stand-alone website to the NCME website. As noted in the last issue of <i>EM:IP</i>, this has successfully been completed with evidence of several learners accessing the new <i>ITEMS</i> portal.</p><p>For 2023, I look forward to the production of several new and engaging <i>ITEMS</i> modules. I am excited to announce the first module of 2023, <i>Digital Module 31: Testing Accommodations for Students with Disabilities</i>, authored by Dr. Benjamin Lovett. In this module, Dr. Lovett describes common testing accommodations, explains how testing accommodations can reduce constructive-irrelevant variance and increase fairness, and describes best practices along with current common problems in practice. In this five-section module, Dr. 
Lovett provides video versions of the content as well as an interactive activity using two case studies.</p><p>If you are interested in learning more about the <i>ITEMS</i> module development process, authoring a module, or being involved in some other capacity, please reach out to me at <span>[email protected]</span>.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12545","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47864190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1