Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores. James Soland. doi: 10.1111/emip.12555

Most individuals who take, interpret, design, or score tests are aware that examinees do not always provide full effort when responding to items. However, many such individuals are not aware of how pervasive the issue is, what its consequences are, and how to address it. In this digital ITEMS module, Dr. James Soland will help fill these gaps in the knowledge base. Specifically, the module enumerates how frequently behaviors associated with low effort occur and some of the ways they can distort inferences based on test scores. The module then explains some of the most common approaches for identifying low effort and for correcting for it when examining test scores. Brief discussion is also given to how these methods align with, and diverge from, those used to address low respondent effort in self-report contexts. Data and code are provided so that readers can implement the described methods in their own work.
{"title":"Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores","authors":"James Soland","doi":"10.1111/emip.12555","DOIUrl":"10.1111/emip.12555","url":null,"abstract":"<p>Most individuals who take, interpret, design, or score tests are aware that examinees do not always provide full effort when responding to items. However, many such individuals are not aware of how pervasive the issue is, what its consequences are, and how to address it. In this digital ITEMS module, Dr. James Soland will help fill these gaps in the knowledge base. Specifically, the module enumerates how frequently behaviors associated with low effort occur, and some of the ways they can distort inferences based on test scores. Then, the module explains some of the most common approaches for identifying low effort, and correcting for it when examining test scores. Brief discussion is also given to how these methods align with, and diverge from, those used to deal with low respondent effort in self-report contexts. Data and code are also provided such that readers can better implement some of the desired methods in their own work.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45513786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ITEMS Corner Update: The Initial Steps in the ITEMS Development Process. Brian C. Leventhal. doi: 10.1111/emip.12556

In the previous issue of Educational Measurement: Issues and Practice (EM:IP), I outlined the ten steps to authoring and producing a digital module for the Instructional Topics in Educational Measurement Series (ITEMS). In the current piece, I detail the first three steps: Step 1—Content Outline; Step 2—Content Development; and Step 3—Draft Review. After in-depth discussion of these three steps, I introduce the newest ITEMS module.
Prior to beginning the ten-step process, ITEMS module development starts with an initial meeting between myself (as editor) and the lead author(s). During this meeting, I discuss the development process in detail, showcasing what a final product looks like from the learners’ perspective along with a behind-the-scenes look at the final product from the editorial perspective. After discussing the end product, the remaining conversation focuses on the ten-step process and the user-friendly templates that guide authors. The conversation concludes once we have agreed on the topic and general scope of the module.
Authors then independently work through a module outline template to refine the scope and sequencing of the module (Step 1). During this step, authors are encouraged to first specify their audience before setting the learning objectives of the module. Once learning objectives are set, authors are then tasked with determining the prerequisite knowledge for learners. In the next section of the template, authors outline the content and sequencing of the 4–6 sections of the module. Each section has its own learning objectives that map to the objectives of the module. One of the sections is a learner-focused interactive activity, whether it be a demonstration of software or a case study that is relevant to the content of the other sections. Once the outline is completed, the authors receive feedback to ensure adequate sequencing, feasibility of module development (e.g., covering a reasonable amount of content), and appropriateness for the audience. This is an example of the unique nature of ITEMS module development. Unlike most other publications, ITEMS module development consists of regular communication and feedback from the editor. Once the scope and outline of content have been agreed to, the authors move on to Step 2: Content Development.
For Step 2, authors are provided a slide deck template to assist in developing content consistent with the ITEMS format and brand. Using this slide deck, authors maintain creative flexibility by choosing among many slide layouts, each preprogrammed with consistent font, sizing, and color. Authors create individual slide decks for each section of the module, embedding media (e.g., pictures/figures) wherever necessary to assist learner understanding. At this stage, authors are not expected to record audio or to add animations. The primary focus for the authors i
{"title":"ITEMS Corner Update: The Initial Steps in the ITEMS Development Process","authors":"Brian C. Leventhal","doi":"10.1111/emip.12556","DOIUrl":"10.1111/emip.12556","url":null,"abstract":"<p>In the previous issue of <i>Educational Measurement: Issues and Practice</i> (<i>EM:IP</i>) I outlined the ten steps to authoring and producing a digital module for the <i>Instructional Topics in Educational Measurement Series</i> (<i>ITEMS</i>). In the current piece, I detail the first three steps: Step 1—Content Outline; Step 2—Content Development; and Step 3—Draft Review. After in-depth discussion of these three steps, I introduce the newest ITEMS module.</p><p>Prior to beginning the ten-step process, ITEMS module development starts with an initial meeting between myself (as editor) and the lead author(s). During this meeting, I discuss the development process in detail, showcasing what a final product looks like from the learners’ perspective in addition to a sneak-peek behind-the-scenes at what the final product looks like from the editorial perspective. After discussing the end product, the remaining conversation focuses on the 10-step process and the user-friendly templates to guide authors. The conversation concludes after coming to an agreement of the topic and general scope for the module.</p><p>Authors then independently work through a module outline template to refine the scope and sequencing of the module (Step 1). During this step, authors are encouraged to first specify their audience before setting the learning objectives of the module. Once learning objectives are set, authors are then tasked with determining the prerequisite knowledge for learners. In the next section of the template, authors outline the content and sequencing of the 4–6 sections of the module. Each section has its own learning objectives that map to the objectives of the module. One of the sections is a learner-focused interactive activity, whether it be a demonstration of software or a case study that is relevant to the content of the other sections. Once the outline is completed, the authors receive feedback to ensure adequate sequencing, feasibility of module development (e.g., covering a reasonable amount of content), and appropriateness for the audience. This is an example of the unique nature of <i>ITEMS</i> module development. Unlike most other publications, <i>ITEMS</i> module development consists of regular communication and feedback from the editor. Once the scope and outline of content have been agreed to, the authors move on to Step 2: Content Development.</p><p>For Step 2, authors are provided a slide deck template to assist in developing content consistent with the <i>ITEMS</i> format and brand. Using this slide deck, authors maintain creative flexibility by choosing among many slide layouts, each preprogrammed with consistent font, sizing, and color. Authors create individual slide decks for each section of the module, embedding media (e.g., pictures/figures) wherever necessary to assist learner understanding. At this stage, authors are not expected to record audio nor are they expected to put in animations. 
The primary focus for the authors i","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12556","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42654384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of Response Style Adjustments in Cross-Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire. Esther Ulitzsch, Oliver Lüdtke, Alexander Robitzsch. doi: 10.1111/emip.12552
Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background questionnaire data, we investigate such an adjustment procedure and explore its impact on cross-country comparisons in contrast to customary analyses and RS adjustments that (a) leave RS unconsidered, (b) incorporate stronger assumptions on RS structure, and/or (c) only use some selected scales for RS measurement. Our findings suggest that not only the decision as to whether to adjust for RS but also how to adjust may heavily impact cross-country comparisons. This concerns both the assumptions on RS structures and the scales employed for RS measurement. Implications for RS adjustments in cross-country comparisons are derived, strongly advocating for taking model uncertainty into account.
{"title":"The Role of Response Style Adjustments in Cross-Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire","authors":"Esther Ulitzsch, Oliver Lüdtke, Alexander Robitzsch","doi":"10.1111/emip.12552","DOIUrl":"10.1111/emip.12552","url":null,"abstract":"<p>Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background questionnaire data, we investigate such an adjustment procedure and explore its impact on cross-country comparisons in contrast to customary analyses and RS adjustments that (a) leave RS unconsidered, (b) incorporate stronger assumptions on RS structure, and/or (c) only use some selected scales for RS measurement. Our findings suggest that not only the decision as to whether to adjust for RS but also how to adjust may heavily impact cross-country comparisons. This concerns both the assumptions on RS structures and the scales employed for RS measurement. Implications for RS adjustments in cross-country comparisons are derived, strongly advocating for taking model uncertainty into account.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12552","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49547140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diving Into Students’ Transcripts: High School Course-Taking Sequences and Postsecondary Enrollment. Burhan Ogut, Ruhan Circi. doi: 10.1111/emip.12554

The purpose of this study was to explore high school course-taking sequences and their relationship to college enrollment. Specifically, we implemented sequence analysis to discover common course-taking trajectories in math, science, and English language arts using high school transcript data from a recent nationally representative survey. Through sequence clustering, we reduced the complexity of the sequences and examined representative course-taking sequences. Classification tree, random forests, and multinomial logistic regression analyses were used to explore the relationship between the course sequences students complete and their postsecondary outcomes. Results showed that distinct representative course-taking sequences can be identified for all students as well as student subgroups. More advanced and complex course-taking sequences were associated with postsecondary enrollment.
{"title":"Diving Into Students’ Transcripts: High School Course-Taking Sequences and Postsecondary Enrollment","authors":"Burhan Ogut, Ruhan Circi","doi":"10.1111/emip.12554","DOIUrl":"10.1111/emip.12554","url":null,"abstract":"<p>The purpose of this study was to explore high school course-taking sequences and their relationship to college enrollment. Specifically, we implemented sequence analysis to discover common course-taking trajectories in math, science, and English language arts using high school transcript data from a recent nationally representative survey. Through sequence clustering, we reduced the complexity of the sequences and examined representative course-taking sequences. Classification tree, random forests, and multinomial logistic regression analyses were used to explore the relationship between the course sequences students complete and their postsecondary outcomes. Results showed that distinct representative course-taking sequences can be identified for all students as well as student subgroups. More advanced and complex course-taking sequences were associated with postsecondary enrollment.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42997861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation as Evaluating Desired and Undesired Effects: Insights From Cross-Classified Mixed Effects Model. Xuejun Ryan Ji, Amery D. Wu. doi: 10.1111/emip.12553

Measurement specialists have demonstrated the Cross-Classified Mixed Effects Model (CCMEM) to be a flexible framework for evaluating reliability: reliability can be estimated from the variance components of test scores. Building on that work, this study extends the CCMEM to the evaluation of validity evidence. Validity is viewed as the coherence among the elements of a measurement system; as such, it can be evaluated through the fixed and random effects that the user reasons to be desired or undesired. Based on data from the ePIRLS 2016 Reading Assessment, we demonstrate how to obtain evidence for reliability and validity with the CCMEM. We conclude with a discussion of the practicality and benefits of this validation method.
{"title":"Validation as Evaluating Desired and Undesired Effects: Insights From Cross-Classified Mixed Effects Model","authors":"Xuejun Ryan Ji, Amery D. Wu","doi":"10.1111/emip.12553","DOIUrl":"10.1111/emip.12553","url":null,"abstract":"<p>The Cross-Classified Mixed Effects Model (CCMEM) has been demonstrated to be a flexible framework for evaluating reliability by measurement specialists. Reliability can be estimated based on the variance components of the test scores. Built upon their accomplishment, this study extends the CCMEM to be used for evaluating validity evidence. Validity is viewed as the coherence among the elements of a measurement system. As such, validity can be evaluated by the user-reasoned desired or undesired fixed and random effects. Based on the data of ePIRLS 2016 Reading Assessment, we demonstrate how to obtain evidence for reliability and validity by CCMEM. We conclude with a discussion on the practicality and benefits of this validation method.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44738214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalizing Large-Scale Assessment in Practice. Heather M. Buzick, Jodi M. Casabianca, Melissa L. Gholson. doi: 10.1111/emip.12551
The article describes practical suggestions for measurement researchers and psychometricians to respond to calls for social responsibility in assessment. The underlying assumption is that personalizing large-scale assessment improves the chances that assessment and the use of test scores will contribute to equity in education. This article describes a spectrum of standardization and personalization in large-scale assessment. Informed by a review of existing theories, models, and frameworks in the context of current and developing technologies and with a social justice lens, we propose steps to take, as part of assessment research and development, to contribute to the science of personalizing large-scale assessment in technically defensible ways.
{"title":"Personalizing Large-Scale Assessment in Practice","authors":"Heather M. Buzick, Jodi M. Casabianca, Melissa L. Gholson","doi":"10.1111/emip.12551","DOIUrl":"10.1111/emip.12551","url":null,"abstract":"<p>The article describes practical suggestions for measurement researchers and psychometricians to respond to calls for social responsibility in assessment. The underlying assumption is that personalizing large-scale assessment improves the chances that assessment and the use of test scores will contribute to equity in education. This article describes a spectrum of standardization and personalization in large-scale assessment. Informed by a review of existing theories, models, and frameworks in the context of current and developing technologies and with a social justice lens, we propose steps to take, as part of assessment research and development, to contribute to the science of personalizing large-scale assessment in technically defensible ways.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44359584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ITEMS Corner Update: The New ITEMS Module Development Process. Brian C. Leventhal. doi: 10.1111/emip.12545

This issue marks one year into my tenure as editor of the Instructional Topics in Educational Measurement Series (ITEMS). I will summarize and reflect on the achievements of the past year, outline the new ITEMS module production process, and introduce the new module published in this issue of Educational Measurement: Issues and Practice (EM:IP).
Over the past year, there have been three new modules published: Unusual Things That Usually Occur in a Credentialing Testing Program (Feinberg et al., 2022), Multidimensional Item Response Theory Equating (Kim, 2022), and Validity and Educational Testing: Purposes and Uses of Educational Tests (Lewis & Sireci, 2022). Each of these modules has been a great addition to the ITEMS library, with the latter two being in the new format released in mid-2022.
Among the many benefits of the new format, modules are now more accessible on a variety of devices (e.g., desktop, phone, tablet) in both online and offline modes. The production process has also been simplified. Over the next few issues of EM:IP, I will take a deep dive into the process of designing a module for this nontraditional publication. The goal is threefold: (1) to educate readers about the behind-the-scenes process; (2) to showcase the extensive work that module development requires; and (3) to attract readers as potential authors who understand the value of taking the time to produce such a useful resource.
As noted, I will discuss these steps in more detail in the upcoming issues of EM:IP. Reconceptualizing ITEMS modules into this new form was only one of two initiatives I undertook in 2022. For the other, I worked to shift the ITEMS portal from a stand-alone website to the NCME website. As noted in the last issue of EM:IP, this has successfully been completed with evidence of several learners accessing the new ITEMS portal.
For 2023, I look forward to the production of several new and engaging ITEMS modules. I am excited to announce the first module of 2023, Digital Module 31: Testing Accommodations for Students with Disabilities, authored by Dr. Benjamin Lovett. In this module, Dr. Lovett describes common testing accommodations, explains how testing accommodations can reduce construct-irrelevant variance and increase fairness, and describes best practices along with common problems in current practice. In this five-section module, Dr. Lovett provides video versions of the content as well as an interactive activity using two case studies.
If you are interested in learning more about the ITEMS module development process, authoring a module, or being involved in some other capacity, please reach out to me at [email protected].
{"title":"ITEMS Corner Update: The New ITEMS Module Development Process","authors":"Brian C. Leventhal","doi":"10.1111/emip.12545","DOIUrl":"10.1111/emip.12545","url":null,"abstract":"<p>This issue marks 1 year into my tenure as editor of <i>Instructional Topics in Educational Measurement Series</i> (<i>ITEMS</i>). I will summarize and reflect on the achievements from the past year, outline the new ITEMS module production process, and introduce the new module published in this issue of <i>Educational Measurement: Issues and Practice</i> (<i>EM:IP</i>).</p><p>Over the past year, there have been three new modules published: Unusual Things That Usually Occur in a Credentialing Testing Program (Feinberg et al., <span>2022</span>), Multidimensional Item Response Theory Equating (Kim, <span>2022</span>), and Validity and Educational Testing: Purposes and Uses of Educational Tests (Lewis & Sireci, <span>2022</span>). Each of these modules has been a great addition to the ITEMS library, with the latter two being in the new format released in mid-2022.</p><p>Among the many benefits of the new format, modules are now more accessible on a variety of devices (e.g., desktop, phone, tablet) in both online and offline mode. The production process has also been simplified. Over the next few issues of <i>EM:IP</i>, I will take a deep dive into the process of designing a module for this nontraditional publication. The goal is threefold: (1) to educate readers of the behind-the-scenes process; (2) to showcase the extensive work that module development requires; and (3) to attract readers as potential authors, understanding the value of taking time to produce such a useful resource.</p><p>As noted, I will discuss these steps in more detail in the upcoming issues of <i>EM:IP</i>. Reconceptualizing <i>ITEMS</i> modules into this new form was only one of two initiatives I undertook in 2022. For the other, I worked to shift the ITEMS portal from a stand-alone website to the NCME website. As noted in the last issue of <i>EM:IP</i>, this has successfully been completed with evidence of several learners accessing the new <i>ITEMS</i> portal.</p><p>For 2023, I look forward to the production of several new and engaging <i>ITEMS</i> modules. I am excited to announce the first module of 2023, <i>Digital Module 31: Testing Accommodations for Students with Disabilities</i>, authored by Dr. Benjamin Lovett. In this module, Dr. Lovett describes common testing accommodations, explains how testing accommodations can reduce constructive-irrelevant variance and increase fairness, and describes best practices along with current common problems in practice. In this five-section module, Dr. 
Lovett provides video versions of the content as well as an interactive activity using two case studies.</p><p>If you are interested in learning more about the <i>ITEMS</i> module development process, authoring a module, or being involved in some other capacity, please reach out to me at <span>[email protected]</span>.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/emip.12545","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47864190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Cover: Key Specifications for a Large-Scale Medical Exam","authors":"Yuan-Ling Liaw","doi":"10.1111/emip.12549","DOIUrl":"10.1111/emip.12549","url":null,"abstract":"","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46263648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Module 31: Testing Accommodations for Students with Disabilities. Benjamin J. Lovett. doi: 10.1111/emip.12542

Students with disabilities often take tests under different conditions than their peers do. Testing accommodations, which involve changes to test administration that maintain test content, include extending time limits, presenting written text through auditory means, and taking a test in a private room with fewer distractions. For some students with disabilities, accommodations such as these are necessary for fair assessment; without accommodations, invalid interpretations would be made on the basis of these students’ scores. However, when misapplied, accommodations can diminish fairness, introduce new sources of construct-irrelevant variance, and lead to invalid interpretations of test scores. This module provides a psychometric framework for thinking about accommodations and then explicates an accommodations decision-making framework that includes a variety of considerations. Problems with current accommodations practices are discussed, along with potential solutions and future directions. The module is accompanied by exercises allowing participants to apply their understanding.
{"title":"Digital Module 31: Testing Accommodations for Students with Disabilities","authors":"Benjamin J. Lovett","doi":"10.1111/emip.12542","DOIUrl":"10.1111/emip.12542","url":null,"abstract":"<p>Students with disabilities often take tests under different conditions than their peers do. Testing accommodations, which involve changes to test administration that maintain test content, include extending time limits, presenting written text through auditory means, and taking a test in a private room with fewer distractions. For some students with disabilities, accommodations such as these are necessary for fair assessment; without accommodations, invalid interpretations would be made on the basis of these students’ scores. However, when misapplied, accommodations can also diminish fairness, introduce new sources of construct-irrelevant variance, and also lead to invalid interpretation of test scores. This module provides a psychometric framework for thinking about accommodations, and then explicates an accommodations decision-making framework that includes a variety of considerations. Problems with current accommodations practices are discussed, along with potential solutions and future directions. The module is accompanied by exercises allowing participants to apply their understanding.</p>","PeriodicalId":47345,"journal":{"name":"Educational Measurement-Issues and Practice","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2023-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41687435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}