Bias risks in ILSA related to non-participation: evidence from a longitudinal large-scale survey in Germany (PISA Plus)

IF 2.8 · CAS Tier 3 (Education) · JCR Q1 (Education & Educational Research) · Educational Assessment Evaluation and Accountability · Pub Date: 2023-12-06 · DOI: 10.1007/s11092-023-09422-5
Sabine Meinck, Jörg-Henrik Heine, Julia Mang, Gabriel Nagy
{"title":"Bias risks in ILSA related to non-participation: evidence from a longitudinal large-scale survey in Germany (PISA Plus)","authors":"Sabine Meinck, Jörg-Henrik Heine, Julia Mang, Gabriel Nagy","doi":"10.1007/s11092-023-09422-5","DOIUrl":null,"url":null,"abstract":"<p>This study uses evidence from a longitudinal survey (PISA Plus, Germany) to examine the potential of bias in international large-scale assessments (ILSAs). In PISA Plus, participation was mandatory at the first measurement point, but voluntary at the second measurement point. The study provides evidence for relevant selection bias regarding student competencies and background variables when participation is voluntary. Sample dropout at the second measurement point was related to characteristics such as family background, achievement in mathematics, reading and science, and other student and school demographic variables at both the student and school levels, with lower performing students and those with less favorable background characteristics having higher dropout frequencies, from which higher dropout probabilities of such students can be inferred. We further contrast the possibilities for addressing non-response through weight adjustments in longitudinal surveys with those in cross-sectional surveys. Considering our results, we evaluate and confirm the validity and appropriateness of strict participation rate requirements in ILSAs. Likely magnitudes of bias in cross-sectional studies in varying scenarios are illustrated. Accordingly, if combined participation rates drop below 70%, a difference of at least one-fifth of a standard deviation in an achievement score between non-respondents and participants leads to relevant bias. When participation drops below 50%, even a very small difference (one-tenth of a standard deviation) will cause non-negligible bias. Finally, we conclude that the stringent participation rate requirements established in most ILSAs are fully valid, reasonable, and important since they ensure a relatively low risk of biased results.</p>","PeriodicalId":46725,"journal":{"name":"Educational Assessment Evaluation and Accountability","volume":null,"pages":null},"PeriodicalIF":2.8000,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Assessment Evaluation and Accountability","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1007/s11092-023-09422-5","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

This study uses evidence from a longitudinal survey (PISA Plus, Germany) to examine the potential for bias in international large-scale assessments (ILSAs). In PISA Plus, participation was mandatory at the first measurement point but voluntary at the second. The study provides evidence of relevant selection bias in student competencies and background variables when participation is voluntary. Sample dropout at the second measurement point was related to characteristics such as family background, achievement in mathematics, reading, and science, and other demographic variables at both the student and school levels; lower-performing students and those with less favorable background characteristics dropped out more frequently, implying higher dropout probabilities for these groups. We further contrast the possibilities for addressing non-response through weight adjustments in longitudinal surveys with those in cross-sectional surveys. In light of our results, we evaluate and confirm the validity and appropriateness of strict participation rate requirements in ILSAs, and we illustrate the likely magnitudes of bias in cross-sectional studies under varying scenarios. Accordingly, if combined participation rates drop below 70%, a difference of at least one-fifth of a standard deviation in an achievement score between non-respondents and participants leads to relevant bias; when participation drops below 50%, even a very small difference (one-tenth of a standard deviation) causes non-negligible bias. Finally, we conclude that the stringent participation rate requirements established in most ILSAs are fully valid, reasonable, and important, since they ensure a relatively low risk of biased results.
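The 70% and 50% thresholds above can be made concrete with the classical deterministic expression for non-response bias in a mean: bias = (1 − participation rate) × (mean of participants − mean of non-participants). The sketch below is an illustration only; the formula, the chosen gap sizes, and the approximate PISA reporting-scale SD of 100 points are textbook assumptions used for illustration, not the paper's own computation.

```python
# Back-of-the-envelope illustration of the abstract's participation-rate thresholds,
# using the classical deterministic non-response bias formula
#   bias(respondent mean) = (1 - response_rate) * (mean_respondents - mean_nonrespondents).
# The PISA reporting scale (SD ~ 100 points) is used only to translate
# standard-deviation gaps into score points; the paper's own scenarios may differ.

def nonresponse_bias_sd(response_rate: float, gap_sd: float) -> float:
    """Bias of the respondent mean, in standard-deviation units."""
    return (1.0 - response_rate) * gap_sd

SCENARIOS = [
    # (combined participation rate, participant/non-participant gap in SD units)
    (0.70, 0.20),  # abstract: participation below ~70%, gap of one-fifth SD
    (0.50, 0.10),  # abstract: participation below ~50%, gap of one-tenth SD
]

PISA_SD = 100  # approximate SD of the PISA reporting scale (assumption)

for rate, gap in SCENARIOS:
    bias_sd = nonresponse_bias_sd(rate, gap)
    print(f"participation {rate:.0%}, gap {gap:.2f} SD "
          f"-> bias {bias_sd:.3f} SD (~{bias_sd * PISA_SD:.0f} score points)")
```

Under these assumptions, the two scenarios translate to biases of roughly 0.06 SD (about 6 score points) and 0.05 SD (about 5 score points), which is consistent with the abstract's characterization of "relevant" and "non-negligible" bias.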


Source journal
Educational Assessment Evaluation and Accountability
CiteScore: 9.40
Self-citation rate: 2.60%
Articles published: 23
About the journal: The main objective of this international journal is to advance knowledge and dissemination of research on and about assessment, evaluation and accountability of all kinds and on various levels as well as in all fields of education. The journal provides readers with an understanding of the rich contextual nature of evaluation, assessment and accountability in education. The journal is theory-oriented and methodology-based and seeks to connect research, policy making and practice. The journal publishes outstanding empirical works, peer-reviewed by eminent scholars around the world.

Aims and scope in more detail: The main objective of this international journal is to advance knowledge and dissemination of research on and about evaluation, assessment and accountability:
- of all kinds (e.g. person, programme, organisation),
- on various levels (state, regional, local),
- in all fields of education (primary, secondary, higher education/tertiary, as well as the non-school sector) and across all different life phases (e.g. adult education/andragogy/Human Resource Management/professional development).

The journal provides readers with an understanding of the rich contextual nature of evaluation, assessment and accountability in education. The journal is theory-oriented and methodology-based and seeks to connect research, policy making and practice. Therefore, the journal explores and discusses:
- theories of evaluation, assessment and accountability,
- function, role, aims and purpose of evaluation, assessment and accountability,
- impact of evaluation, assessment and accountability,
- methodology, design and methods of evaluation, assessment and accountability,
- principles, standards and quality of evaluation, assessment and accountability,
- issues of planning, coordinating, conducting and reporting of evaluation, assessment and accountability.

The journal also covers the quality of the different instruments, procedures or approaches used for evaluation, assessment and accountability. The journal only includes research findings from evaluation, assessment and accountability if the design or approach is meta-reflected in the article. The journal publishes outstanding empirical works, peer-reviewed by eminent scholars around the world.
Latest articles from this journal
- How representative is the Swedish PISA sample? A comparison of PISA and register data
- Dimensions of teachers' data literacy: A systematic review of literature from 1990 to 2021
- Examining pre-service teachers' feedback on low- and high-quality written assignments
- Legitimising capital: parent organisations and their resistance to testing in England
- Signal, error, or bias? Exploring the uses of scores from observation systems