Implementing equitable and intersectionality-aware ML in education: A practical guide

IF 6.7 · Q1, Education & Educational Research (CAS Tier 1, Education) · British Journal of Educational Technology · Pub Date: 2024-05-23 · DOI: 10.1111/bjet.13484
Mudit Mangal, Zachary A. Pardos
Published in British Journal of Educational Technology, vol. 55, no. 5, pp. 2003–2038. PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13484
Cited by: 0

Abstract

The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historic datasets, mitigating historic biases with respect to protected classes (ie, fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been little guidance for practitioners that could enhance the practical uptake of these methods. In this work, we present a practitioner-oriented, step-by-step framework, based on findings from the field, for implementing AI fairness techniques. We also present an empirical case study that applies this framework to a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of incorporating intersectionality (such as race and gender) as central institutional equity and inclusion values. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, such as adversarial learning, in enhancing fairness, particularly for intersectional categories such as race–gender and race–income.
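The adversarial learning technique mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: a logistic-regression predictor (standing in for a grade-prediction model) is trained on synthetic, historically biased data while a second logistic model, the adversary, tries to recover a protected attribute from the predictor's score; the predictor's gradient is then reversed against the adversary's loss. All variable names and the data-generating process are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic, historically biased data: feature x1 leaks the protected
# attribute a, and past outcomes y were partly driven by x1.
n = 4000
a = rng.integers(0, 2, n).astype(float)        # protected attribute
x1 = a + 0.5 * rng.normal(size=n)              # proxy feature for a
x2 = rng.normal(size=n)                        # legitimate signal
y = (0.8 * x1 + x2 + 0.3 * rng.normal(size=n) > 0.5).astype(float)
X = np.column_stack([np.ones(n), x1, x2])

def train(adversarial, lam=2.0, lr=0.1, steps=4000):
    w = np.zeros(3)                            # predictor weights
    v = np.zeros(2)                            # adversary weights (bias + score)
    for _ in range(steps):
        s = X @ w                              # predictor logit
        p = sigmoid(s)
        grad = X.T @ (p - y) / n               # task BCE gradient wrt w
        if adversarial:
            Z = np.column_stack([np.ones(n), s])
            q = sigmoid(Z @ v)
            v += lr * Z.T @ (a - q) / n        # adversary step: fit a from s
            # gradient reversal: predictor moves to *increase* adversary loss
            grad -= lam * X.T @ ((q - a) * v[1]) / n
        w -= lr * grad
    return sigmoid(X @ w)

def gap(p):
    """Mean prediction gap between the two protected groups."""
    return abs(p[a == 1].mean() - p[a == 0].mean())

p_plain = train(adversarial=False)
p_fair = train(adversarial=True)
print(f"group gap, plain: {gap(p_plain):.3f}  adversarial: {gap(p_fair):.3f}")
```

On this synthetic data the plain model reproduces the historic group gap, while the adversarially trained one shrinks it; the trade-off against task accuracy is controlled by `lam`.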

Practitioner notes

What is already known about this topic

  • AI-powered Educational Decision Support Systems (EDSS) are increasingly used in various educational contexts, such as course selection, admissions, scholarship allocation and identifying at-risk students.
  • There are known challenges with AI in education, particularly around the reinforcement of existing biases, leading to unfair outcomes.
  • The machine learning community has developed metrics and methods to measure and mitigate biases, which have been effectively applied to education as seen in the AI in education literature.

What this paper adds

  • Introduces a comprehensive technical framework for equity and inclusion, specifically for machine learning practitioners in AI education systems.
  • Presents a novel modification to the ABROCA fairness metric to better represent disparities among multiple subgroups within a protected class.
  • Empirical analysis of the effectiveness of bias-mitigating techniques, like adversarial learning, in reducing biases in intersectional classes (eg, race–gender, race–income).
  • Model reporting in the form of model cards that can foster transparent communication among developers, users and stakeholders.
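The ABROCA metric referenced above (absolute between-ROC area) quantifies unfairness as the area between the ROC curves of two subgroups. The paper's multi-subgroup modification is not reproduced here; the sketch below is the standard two-group version, with a hand-rolled ROC computation and synthetic scores so it depends only on NumPy.

```python
import numpy as np

def roc_points(y_true, scores):
    """ROC curve as (fpr, tpr) arrays, sweeping the threshold high-to-low."""
    order = np.argsort(-scores)
    y = y_true[order]
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (1 - y).sum()])
    return fpr, tpr

def abroca(y_true, scores, group):
    """Area between the ROC curves of group==0 and group==1."""
    grid = np.linspace(0.0, 1.0, 2001)
    interp = []
    for g in (0, 1):
        m = group == g
        fpr, tpr = roc_points(y_true[m], scores[m])
        # keep the highest tpr at each distinct fpr so np.interp is well defined
        u, idx = np.unique(fpr[::-1], return_index=True)
        interp.append(np.interp(grid, u, tpr[::-1][idx]))
    return np.trapz(np.abs(interp[0] - interp[1]), grid)

rng = np.random.default_rng(0)
n = 3000
y = rng.integers(0, 2, n).astype(float)
g = rng.integers(0, 2, n)
# group 0 gets informative scores; group 1 gets nearly random ones,
# so the model discriminates well for one subgroup only -> large ABROCA
scores = np.where(g == 0, y + 0.3 * rng.normal(size=n),
                  0.1 * y + rng.normal(size=n))
print(f"ABROCA = {abroca(y, scores, g):.3f}")
```

ABROCA lies in [0, 1]; 0 means identical ROC curves for the two subgroups. One common way to handle more than two subgroups (not necessarily the paper's modification) is to compare each subgroup's curve against a baseline or pooled curve and aggregate.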

Implications for practice and/or policy

  • The fairness framework can act as a systematic guide for practitioners to design equitable and inclusive AI-EDSS.
  • The fairness framework can act as a systematic guide for practitioners to make compliance with emerging AI regulations more manageable.
  • Stakeholders may become more involved in tailoring the fairness and equity model tuning process to align with their values.
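Model cards, listed under "What this paper adds", are structured summaries that travel with a trained model. The sketch below is a minimal illustration only: the schema, field names and placeholder values are invented here, not the card format used in the paper, and metric values are deliberately left as placeholders rather than fabricated numbers.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal, illustrative model-card schema for an AI-EDSS model."""
    model_name: str
    intended_use: str
    training_data: str
    protected_attributes: list
    fairness_metrics: dict = field(default_factory=dict)
    caveats: str = ""

    def to_markdown(self):
        lines = [
            f"# Model card: {self.model_name}",
            f"**Intended use.** {self.intended_use}",
            f"**Training data.** {self.training_data}",
            f"**Protected attributes.** {', '.join(self.protected_attributes)}",
            "**Fairness metrics (per subgroup comparison).**",
        ]
        lines += [f"- {k}: {v}" for k, v in self.fairness_metrics.items()]
        if self.caveats:
            lines.append(f"**Caveats.** {self.caveats}")
        return "\n\n".join(lines)

card = ModelCard(
    model_name="grade-predictor-demo",
    intended_use="Advisory grade prediction; not for high-stakes decisions.",
    training_data="Historical enrolment records (institution-specific).",
    protected_attributes=["race", "gender", "race-gender", "race-income"],
    fairness_metrics={"ABROCA (race-gender)": "<fill in from evaluation run>"},
    caveats="All metric values must come from the actual evaluation.",
)
print(card.to_markdown())
```

Rendering the card to Markdown makes it easy to publish alongside the model, supporting the transparent communication among developers, users and stakeholders described above.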


Source journal

British Journal of Educational Technology (Education & Educational Research)

CiteScore: 15.60 · Self-citation rate: 4.50% · Annual publication count: 111

About the journal: BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.