
Educational Measurement-Issues and Practice: Latest Publications

Still Interested in Multidimensional Item Response Theory Modeling? Here Are Some Thoughts on How to Make It Work in Practice
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-12-18, DOI: 10.1111/emip.12645
Terry A. Ackerman, Richard M. Luecht
<p>Given tremendous improvements over the past three to four decades in the computational methods and computer technologies needed to estimate the parameters for higher dimensionality models (Cai, <span>2010a, 2010b</span>, <span>2017</span>), we might expect that MIRT would by now be a widely used array of models and psychometric software tools being used operationally in many educational assessment settings. Perhaps one of the few areas where MIRT has helped practitioners is in the area of understanding Differential Item Functioning (DIF) (Ackerman & Ma, <span>2024</span>; Camilli, <span>1992</span>; Shealy & Stout, <span>1993</span>). Nevertheless, the expectation has not been met nor do there seem to be many operational initiatives to change the <i>status quo</i>.</p><p>Some research psychometricians might lament the lack of large-scale applications of MIRT in the field of educational assessment. However, the simple fact is that MIRT has not lived up to its early expectations nor its potential due to several barriers. Following a discussion of test purpose and metric design issues in the next section, we will examine some of the barriers associated with these topics and provide suggestions for overcoming or completely avoiding them.</p><p>Tests developed for one purpose are rarely of much utility for another purpose. For example, professional certification and licensure tests designed to optimize pass-fail classifications are often not very useful for reporting scores across a large proficiency range—at least not unless the tests are extremely long. Summative, and most interim assessments used in K–12 education, are usually designed to produce reliable total-test scores. 
The resulting scale scores are summarized as descriptive statistical aggregations of scale scores or other functions of the scores, such as classifying students into ordered achievement levels (e.g., Below Basic, Basic, Proficient, Advanced) or modeling student growth in a subject area as part of an educational accountability system. Some commercially available online “interim” assessments provide limited progress-oriented scores and subscores from on-demand tests. However, the defensible formative utility of most interim assessments remains limited because test development and psychometric analytics follow the summative assessment test design and development paradigm: focusing on maintaining vertically aligned or equated, unidimensional score scales (e.g., a K–12 math scale).</p><p>The requisite test design and development frameworks for summative tests focus on the relationships between the item responses and the total test score scale (e.g., maximizing item-total score correlations and the conditional reliability within prioritized regions of that score scale).</p><p>Applying MIRT models to most summative or interim assessments makes little sense. The problem is that we continue to allow policymakers to make claims about score interpretations that are not supported by the test or scale design.</p>
<p>Most of the standards for K–12 tests are not multidimensional. Rather, they are taxonomies of unordered statements, many of which cannot be measured by typical test items, that only vaguely reflect the intended scope of the assessment. Some work is underway to provide “assessment standards” that reflect ordered sets of proficiency claims and associated evidence (measurement information) varying in complexity. Reported scores can be viewed as composite measures representing two or more content domains or subdomains, but they still tend to function as unidimensional scales. A unidimensional composite can be a blend of multiple subdomains or content areas, as long as the underlying, unified trait can be empirically shown to satisfy local independence under a particular IRT model.</p><p>Most MIRT research on summative and interim tests amounts to exploratory factor analysis. That is, the models can help isolate small amounts of subtle multidimensionality, and researchers can then attempt to interpret the patterns of residual covariance in some content-centered way. However, whenever we develop and select items with high item-total score correlations (e.g., point-biserial correlations), we build our tests to provide a single measurement signal, essentially a unidimensional scale. We may pretend that we can legitimately organize items into content-based strands and report subscores. However, subscore item groupings are often not statistically justified and merely yield unreliable estimates of the (essentially) unidimensional trait that the data support (Haberman & Sinharay, 2010). The point is that subscores, any subscores, on an essentially unidimensional test should not be computed or reported. Developing reliable, valid, and useful subscore profiles requires a commitment to designing and maintaining multiple scales. Instead, consider a different and perhaps more useful formative assessment purpose, one that is potentially helpful at least to teachers and parents and, more importantly, to students.</p>
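The item-screening practice described above (developing and selecting items with high item-total correlations) can be sketched numerically. This is an illustrative computation on simulated unidimensional response data, not any operational program's procedure; the `corrected_point_biserial` helper and all values are invented for the example.

```python
import numpy as np

def corrected_point_biserial(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total (point-biserial) correlations.

    responses: (n_examinees, n_items) matrix of 0/1 item scores.
    Each item is correlated with the total score *excluding* that item,
    so an item's own variance does not inflate its apparent discrimination.
    """
    n_items = responses.shape[1]
    total = responses.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = total - responses[:, j]           # total score without item j
        r[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return r

# Simulated responses from a one-dimensional Rasch-like model
rng = np.random.default_rng(0)
theta = rng.normal(size=500)                     # latent proficiency
b = np.linspace(-1.5, 1.5, 10)                   # item difficulties
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b)))  # response probabilities
x = (rng.random((500, 10)) < p).astype(int)

pb = corrected_point_biserial(x)
print(np.round(pb, 2))  # items with low values would be screened out
```

Retaining only the items with the highest values of this statistic tends, by construction, toward a single measurement signal, which is precisely the authors' point about why such tests end up unidimensional.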
<p>Although some conceptualize formative assessments as low-stakes classroom assessments, their critical value in improving instruction and changing student learning in positive ways cannot be understated. Well-designed formative assessments should be based on multiple indicators that are demonstrably sensitive to good instruction, curricular design, and student learning. They also need to be available on demand, possibly daily, and provide teachers with immediate, or at least timely, detailed, instructionally actionable information. From a test design and development perspective, this implies that formative assessments must provide useful and information-rich performance profiles for individual students, reliably identify valid student strengths to build on and weaknesses to remediate, and simultaneously monitor progress on multiple trait- or competency-based measures.</p><p>These core uses of formative assessment align well with the capabilities of MIRT modeling, which supplies many of the technical psychometric tools for building and maintaining multiple score scales. [Note: This statement extends to diagnostic classification models (DCMs), in which discrete, ordered traits or attributes replace the continuous proficiency measures assumed by most MIRT models; see, for example, Sessoms and Henson (2018). Stout et al. (2023) present a promising new approach to diagnostic DCMs.] The challenge is that adopting a MIRT model or a DCM is not, by itself, a formative assessment solution. A different test design and development paradigm is needed.</p><p>At this juncture, it seems prudent to remind ourselves that we fit psychometric models to data, not the other way around. The important issues here are therefore not about which MIRT or DCM model to use, or which statistical parameter estimator to employ; those are psychometric calibration and scaling choices. The most important issues concern the characteristics of the data, starting with how we effectively design and create items and then assemble test forms capable of meeting our intended formative assessment information needs. Our extensive experience and research-based knowledge about large-scale summative testing may not transfer to formative assessment systems. For example, we need to consider different mechanisms for evaluating item quality, calibrating items, linking or equating scales, and scoring student performance.</p><p>If we can agree on the utility of formative assessment, and on the consequent need for multiple constructs, the obvious question becomes, “Which constructs should we measure?” It is not sufficient to write test questions targeting vague content and/or cognitive specifications associated with content-based subdomains, factor-analyze the results of a large-scale field trial, and then play the “name that factor” game. Each scale needs a specific purpose supported by its design attributes and development priorities.</p><p>Figure 1 shows the four primary domains of the grade 2 Common Core State Standards (CCSS) for mathematics (CCSSO, 2010), with additional detail provided at the cluster and standard levels. Now consider this CCSS example from a formative assessment design perspective. At a minimum, we would need four score scales, one for each domain (2.OA, 2.NBT, 2.MD, and 2.G). Although the four proficiencies are probably positively correlated with one another, it seems implausible that most second-grade students would have the same educational opportunities to learn, acquire nearly identical levels of mathematics knowledge and skill, or even exhibit highly consistent patterns across the four domain-based proficiencies. Dimensionality is often related to where students are in their learning: data tend to be unidimensional before instruction begins or long after the material has been mastered, and dimensionality emerges only while students are actively challenged by learning. A well-designed formative assessment system would therefore expect to observe distinct score patterns across the four domains, reflecting students' differing profiles of strengths and weaknesses in domain-specific knowledge and skills.</p><p>Consider Figure 2. The left side of the figure depicts the intended constructs: the four ellipses are constructs, with curved connectors denoting nonzero covariances between the scales. The middle image shows the potential magnitudes of the six covariances among the traits, which would be proportional to the cosines of the angles between each pair of domain-based scales. From a measurement perspective, each trait is a factor or reference composite (Luecht & Miller, 1992; Wang, 1986) and, psychometrically, a distinct scale. Finally, the right side displays score profiles for three students. Together, the figure outlines a high-level design for the intended formative assessment scales. Figure 3 shows in more explicit detail how our test and scale design goals differ fundamentally between unidimensional (summative or interim) and formative assessments. Under the unidimensional design paradigm (Figure 3, left
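The geometric interpretation invoked for the covariances between domain-based scales, that a correlation between two trait axes corresponds to the cosine of the angle between them, can be illustrated with a small sketch. The correlation matrix below is hypothetical; the actual values in the article's figures are not given in this text.

```python
import numpy as np

# Hypothetical correlations among the four grade-2 CCSS math domain
# proficiencies (2.OA, 2.NBT, 2.MD, 2.G); the values are invented and
# do not come from the article or its figures.
domains = ["2.OA", "2.NBT", "2.MD", "2.G"]
R = np.array([
    [1.00, 0.75, 0.60, 0.50],
    [0.75, 1.00, 0.65, 0.55],
    [0.60, 0.65, 1.00, 0.45],
    [0.50, 0.55, 0.45, 1.00],
])

# In the MIRT coordinate geometry, the correlation between two trait
# axes equals the cosine of the angle between them: r = 1 means the
# scales coincide (0 degrees); r = 0 means orthogonal scales (90 degrees).
for i in range(len(domains)):
    for j in range(i + 1, len(domains)):
        angle = np.degrees(np.arccos(R[i, j]))
        print(f"{domains[i]} vs {domains[j]}: r = {R[i, j]:.2f}, "
              f"angle = {angle:.1f} degrees")
```

The loop covers the six domain pairs, matching the six covariances the text mentions; highly correlated scales sit at small angles and carry little distinct profile information, which is why nearly collinear scales undermine a subscore-profile design.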
Citations: 0
Personalizing Assessment: Dream or Nightmare?
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-12-04, DOI: 10.1111/emip.12652
Randy E. Bennett

Over our field's 100-year-plus history, standardization has been a central assumption in test theory and practice. The concept's justification turns on leveling the playing field by presenting all examinees with putatively equivalent experiences. Until relatively recently, our field has accepted that justification almost without question. In this article, I present a case for standardization's antithesis, personalization. Interestingly, personalized assessment has important precedents within the measurement community. As intriguing are some of the divergent ways in which personalization might be realized in practice. Those ways, however, suggest a host of serious issues. Despite those issues, both moral obligation and survival imperative counsel persistence in trying to personalize assessment.

Citations: 0
Measurement Reflections
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-12-03, DOI: 10.1111/emip.12654
John Fremer
Citations: 0
Using Process Data to Evaluate the Impact of Shortening Allotted Case Time in a Simulation-Based Assessment
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-27, DOI: 10.1111/emip.12656
Chunyan Liu, Monica M. Cuddy, Qiwei He, Wenli Ouyang, Cara Artman

The Computer-based Case Simulations (CCS) component of the United States Medical Licensing Examination (USMLE) Step 3 was developed to assess the decision-making and patient-management skills of physicians. Process data can provide deep insights into examinees' behavioral processes while completing CCS assessment tasks. In this paper, we utilized process data to evaluate the impact of shortening the allotted time limit by rescoring the CCS cases based on process data extracted at various timestamps representing different percentages of the original allotted case time. We found that examinees' performance, as well as the correlation between the original and newly generated scores, tended to decrease as the timestamp condition became stricter. The impact of shortening the allotted time limit was only marginally associated with case difficulty but depended strongly on case time intensity under the original time setting.
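The rescoring design described in the abstract, keeping only the actions an examinee logged before some percentage of the allotted case time, can be sketched roughly as follows. The event log, field names, and times are all hypothetical; operational USMLE CCS scoring is of course far more involved than this filter.

```python
from dataclasses import dataclass

@dataclass
class Action:
    t: float       # seconds elapsed since the case started
    order: str     # the order placed by the examinee at that moment

def actions_within(log, fraction, allotted_seconds):
    """Keep only the actions recorded before `fraction` of the allotted
    case time, emulating a rescoring under a shortened time limit."""
    cutoff = fraction * allotted_seconds
    return [a for a in log if a.t <= cutoff]

# Hypothetical event log for one examinee on a 20-minute (1,200 s) case
log = [
    Action(60, "focused history"),
    Action(300, "chest x-ray"),
    Action(700, "IV antibiotics"),
    Action(1100, "admit to ward"),
]
for frac in (1.0, 0.75, 0.5):
    kept = actions_within(log, frac, allotted_seconds=1200)
    print(f"{frac:.0%} of allotted time -> {[a.order for a in kept]}")
```

Stricter timestamp conditions truncate more of the log, so later (often score-relevant) actions drop out, which is consistent with the reported decline in performance and in the correlation with the original scores.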

Citations: 0
What Makes Measurement Important for Education?
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-25, DOI: 10.1111/emip.12646
Mark Wilson

This contribution to the Special Issue of EM:IP on the topic of The Past, Present and Future of Educational Measurement concentrates on the present and the future and hence focuses on the goal of improving education. The results of meta-analyses were examined, and it was noted that the largest effect sizes were associated with actual use of formative assessments in classroom settings—hence classroom assessment (in contrast with large-scale assessment). The paper describes micro assessment, which focuses on in-classroom forms of measurement, and then expands this assessment approach to focus on frames beyond that in terms of summative end-of-semester tests (macro). This is followed by a description of how these approaches can be combined using a construct map as the basis for developing and using assessments to span across these two levels in terms of the BEAR Assessment System (BAS). Throughout, this is exemplified using an elementary school program designed to teach students about geometry. Finally, a conclusion summarizes the discussion, and also looks to the future where a meso level of use involves end-of-unit tests.

Citations: 0
Growth across Grades and Common Item Grade Alignment in Vertical Scaling Using the Rasch Model
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-25, DOI: 10.1111/emip.12639
Sanford R. Student, Derek C. Briggs, Laurie Davis

Vertical scales are frequently developed using common item nonequivalent group linking. In this design, one can use upper-grade, lower-grade, or mixed-grade common items to estimate the linking constants that underlie the absolute measurement of growth. Using the Rasch model and a dataset from Curriculum Associates’ i-Ready Diagnostic in math in grades 3–7, we demonstrate how grade-to-grade mean differences in mathematics proficiency appear much larger when upper-grade linking items are used instead of lower-grade items, with linkings based on a mixture of items falling in between. We then consider salient properties of the three calibrated scales including invariance of the different sets of common items to student grade and item difficulty reversals. These exploratory analyses suggest that upper-grade common items in vertical scaling are more subject to threats to score comparability across grades, even though these items also tend to imply the most growth.
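The linking-constant estimation named in the abstract can be sketched for the simplest case, the mean/mean method under the Rasch model, where the constant is just the mean difficulty difference of the common items across the two grade calibrations. The numbers below are invented; the article's point is that the resulting constant (and hence the apparent growth) shifts depending on whether upper-grade, lower-grade, or mixed common items are used.

```python
import numpy as np

def mean_mean_linking_constant(b_lower, b_upper):
    """Mean/mean common-item linking under the Rasch model.

    b_lower, b_upper: difficulty estimates (logits) of the SAME common
    items from separate lower-grade and upper-grade calibrations, each
    centered on its own group. Adding the returned constant to the
    upper-grade parameters places them on the lower-grade metric; the
    size of the shift underlies the measured grade-to-grade growth.
    """
    return float(np.mean(b_lower) - np.mean(b_upper))

# Invented difficulties: the common items look hard relative to the
# lower-grade group and easy relative to the upper-grade group.
b_lower = np.array([0.40, 0.90, 1.30, 1.70])
b_upper = np.array([-0.55, -0.10, 0.35, 0.80])

c = mean_mean_linking_constant(b_lower, b_upper)
print(c)  # about 0.95 logits of apparent growth
```

If upper-grade and lower-grade common items yield different mean shifts (e.g., because the items function differently across grades), the estimated growth depends directly on which set anchors the linking, which is the comparability threat the study examines.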

Citations: 0
Educational Measurement: Models, Methods, and Theory
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-24, DOI: 10.1111/emip.12642
Lauress L. Wise, Daisy W. Rutstein

This article describes an amazing development of methods and models supporting educational measurement together with a much slower evolution of theory about how and what students learn and how educational measurement best supports that learning. Told from the perspective of someone who has lived through many of these changes, the article provides background on these developments and insights into challenges and opportunities for future development.

Citations: 0
Measurement Must Be Qualitative, then Quantitative, then Qualitative Again
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-21, DOI: 10.1111/emip.12662
Andrew D. Ho

Educational measurement is a social science that requires both qualitative and quantitative competencies. Qualitative competencies in educational measurement include developing and applying theories of learning, designing instruments, and identifying the social, cultural, historical, and political contexts of measurement. Quantitative competencies include statistical inference, computational fluency, and psychometric modeling. I review 12 commentaries authored by past presidents of the National Council on Measurement in Education (NCME) published in a special issue prompting them to reflect on the past, present, and future of educational measurement. I explain how a perspective on both qualitative and quantitative competencies yields common themes across the commentaries. These include the appeal and challenge of personalization, the necessity of contextualization, and the value of communication and collaboration. I conclude that elevation of both qualitative and quantitative competencies underlying educational measurement provides a clearer sense of how NCME can advance its mission, “to advance theory and applications of educational measurement to benefit society.”

Citations: 0
Admission Testing in Higher Education: Changing Landscape and Outcomes from Test-Optional Policies
IF 2.7, CAS Tier 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH, Pub Date: 2024-11-14, DOI: 10.1111/emip.12651
Wayne Camara

Access to admission tests was greatly restricted during the COVID-19 pandemic, resulting in widespread adoption of test-optional policies by colleges and universities. Many institutions adopted such policies on an interim or trial basis, while many others signaled that the change would be long term. Several Ivy League institutions and selective public flagship universities have returned to requiring test scores from all applicants, citing their own research indicating that diversity and the academic success of applicants are best served by including test scores in the admissions process. This paper reviews recent research on the impact of test-optional policies on applicants' score-sending behaviors and on differential outcomes in college and in score sending. Ultimately, test-optional policies are neither the panacea for diversity that proponents suggested, nor do they result in the decay of academic outcomes that opponents forecast; but they do have consequences, which colleges will need to weigh going forward.

Citations: 0
Leading ITEMS: A Retrospective on Progress and Future Goals
IF 2.7, CAS Region 4 (Education), Q1 EDUCATION & EDUCATIONAL RESEARCH. Pub Date: 2024-11-14. DOI: 10.1111/emip.12661
Brian C. Leventhal
<p>As this issue marks the conclusion of my tenure as editor of the Instructional Topics in Educational Measurement Series (ITEMS), I take this opportunity to reflect on the progress made during my term and to outline potential future directions for the publication.</p><p>First, I extend my gratitude to the National Council on Measurement in Education (NCME) and the publications committee for entrusting me with the role of editor and for their unwavering support of my vision for ITEMS. I am also deeply appreciative of Richard Feinberg, who served as associate editor throughout my tenure, and Zhongmin Cui, editor of <i>Educational Measurement: Issues and Practice</i> (<i>EM:IP</i>), for their invaluable collaboration. Additionally, I thank all the authors who contributed modules and the dedicated readership that has engaged with the content.</p><p>ITEMS stands as a distinctive publication, bridging the gap between research and education by offering learning modules on both emerging and established practice in educational measurement. I saw the primary objective of ITEMS as providing accessible learning resources to a diverse audience, including practitioners, students, partners, stakeholders, and the general public. These modules serve various purposes: practitioners may seek to research or expand their skills, students and professors may use them to complement classroom learning, partners and stakeholders may develop foundational knowledge to enhance collaboration with measurement professionals, and the public may gain insights into tests they encounter in their daily lives. Addressing the needs of such a broad audience is challenging, yet it underscores the essential role that ITEMS plays.</p><p>When I assumed the role of editor three years ago, ITEMS had recently transitioned from static articles to interactive digital modules.
My efforts focused on furthering this transformation by enhancing the engagement of digital publications and streamlining the development process. Although much of this work occurred behind the scenes, the benefits are evident to learners. The modules are now easily accessible on the NCME website, available in both digital and print formats. Newer modules include downloadable videos for offline use or course integration. Content is now accessible across multiple devices, including computers, phones, and tablets. Authors also benefit from the updated development process, which now uses familiar software such as Microsoft PowerPoint or Google Slides. Comprehensive documentation, including timelines, deliverables, and templates, supports authors throughout the development process, allowing them to focus on content creation rather than formatting and logistics.</p><p>Reflecting on my tenure, I am proud of the modules published, yet I recognize areas for improvement and future growth. Recruiting authors and maintaining content development posed significant challenges, with some modules remaining incomplete. I am hopeful that the streamlined procedures will alleviate these challenges. Moreover, although efforts were made to recruit authors from related disciplines, there remains room for improvement in this area. I envision ITEMS publishing more modules from emerging scholars, both within and beyond the traditional scope of educational measurement. As the field continues to engage with foundational competencies, ITEMS can play a key role in reinforcing, teaching, and extending them. In addition, the accessibility of ITEMS must be enhanced by following universal design principles and offering modules in multiple languages, which would broaden the publication's reach and strengthen NCME's leadership in educational measurement. Finally, I advocate for more content on culturally responsive assessment, equitable assessment practices, and assessment for social justice. These methods and frameworks are gaining traction in the field, and ITEMS can make them more accessible to graduate students and practitioners who lack guidance in these areas. Although few graduate programs address these topics, emerging scholars are keenly interested in them, and ITEMS can serve as a valuable resource. As I conclude my editorship, I look forward to the continued success and expanding reach of ITEMS.</p>
Educational Measurement: Issues and Practice, vol. 43, no. 4, 2024, p. 169. DOI: 10.1111/emip.12661. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12661
Citations: 0