Implementing equitable and intersectionality-aware ML in education: A practical guide

Mudit Mangal, Zachary A. Pardos

British Journal of Educational Technology, 55(5), 2003–2038. DOI: 10.1111/bjet.13484. Published 2024-05-23. Publisher PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/bjet.13484
The greater the proliferation of AI in educational contexts, the more important it becomes to ensure that AI adheres to the equity and inclusion values of an educational system or institution. Given that modern AI is based on historic datasets, mitigating historic biases with respect to protected classes (ie, fairness) is an important component of this value alignment. Although extensive research has been done on AI fairness in education, there has been a lack of the kind of practitioner-oriented guidance that could enhance the practical uptake of these methods. In this work, we present a practitioner-oriented, step-by-step framework, based on findings from the field, to implement AI fairness techniques. We also present an empirical case study that applies this framework in the context of a grade prediction task using data from a large public university. Our novel findings from the case study and extended analyses underscore the importance of incorporating intersectionality (such as race and gender) as central institutional equity and inclusion values. Moreover, our research demonstrates the effectiveness of bias mitigation techniques, like adversarial learning, in enhancing fairness, particularly for intersectional categories like race–gender and race–income.
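The intersectional evaluation the abstract describes amounts to scoring a predictor separately within each combined subgroup (eg, race–gender) rather than within single protected attributes. A minimal sketch of that idea, using per-subgroup AUC on entirely hypothetical data and labels (this is not the paper's code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_auc(y_true, y_score, groups):
    """AUC of a binary predictor computed within each subgroup.

    Subgroups with only one class are skipped, since AUC is
    undefined there.
    """
    out = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) == 2:
            out[str(g)] = float(roc_auc_score(y_true[mask], y_score[mask]))
    return out

# Hypothetical intersectional labels encoding race x gender.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.3, 0.6])
groups = np.array(["A_f", "A_f", "A_m", "A_m", "B_f", "B_f", "B_m", "B_m"])

print(subgroup_auc(y_true, y_score, groups))
# → {'A_f': 1.0, 'A_m': 1.0, 'B_f': 1.0, 'B_m': 0.0}
```

A gap like the one between `B_m` and the other subgroups here is exactly the kind of disparity that per-attribute evaluation (race alone, or gender alone) can average away.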
Practitioner notes
What is already known about this topic
AI-powered Educational Decision Support Systems (EDSS) are increasingly used in various educational contexts, such as course selection, admissions, scholarship allocation and identifying at-risk students.
There are known challenges with AI in education, particularly around the reinforcement of existing biases, leading to unfair outcomes.
The machine learning community has developed metrics and methods to measure and mitigate biases, which have been effectively applied to education as seen in the AI in education literature.
What this paper adds
Introduces a comprehensive technical framework for equity and inclusion, specifically for machine learning practitioners in AI education systems.
Presents a novel modification to the ABROCA fairness metric to better represent disparities among multiple subgroups within a protected class.
Empirical analysis of the effectiveness of bias-mitigating techniques, like adversarial learning, in reducing biases in intersectional classes (eg, race–gender, race–income).
Model reporting in the form of model cards that can foster transparent communication among developers, users and stakeholders.
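For context on the ABROCA metric the paper modifies: standard two-group ABROCA (Absolute Between-ROC Area) integrates the absolute gap between two subgroups' ROC curves over the false-positive-rate axis. A minimal sketch of the standard two-group version follows; the paper's multi-subgroup modification is not reproduced here, and all data below is hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group, baseline, comparison):
    """Absolute Between-ROC Area between two subgroups: the area
    enclosed by the groups' ROC curves, integrated over FPR."""
    def interp_roc(mask):
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        grid = np.linspace(0.0, 1.0, 1001)
        # Interpolate TPR onto a common FPR grid so curves align.
        return grid, np.interp(grid, fpr, tpr)

    grid, tpr_b = interp_roc(group == baseline)
    _, tpr_c = interp_roc(group == comparison)
    return float(np.trapz(np.abs(tpr_b - tpr_c), grid))

# Toy data: group A is scored perfectly, group B anti-perfectly.
y = np.array([1, 0, 1, 0])
s = np.array([0.9, 0.1, 0.1, 0.9])
g = np.array(["A", "A", "B", "B"])

print(abroca(y, s, g, "A", "B"))  # close to 1.0: maximal disparity
print(abroca(y, s, g, "A", "A"))  # 0.0: a group vs itself
```

An ABROCA of 0 means identical ROC behaviour across the two groups; values near 1 indicate maximal disparity.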
Implications for practice and/or policy
The fairness framework can act as a systematic guide for practitioners to design equitable and inclusive AI-EDSS.
The fairness framework can act as a systematic guide for practitioners to make compliance with emerging AI regulations more manageable.
Stakeholders may become more involved in tailoring the fairness and equity model tuning process to align with their values.
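The model cards mentioned above are structured reports documenting a model's intended use, training data, and fairness evaluation for developers, users, and stakeholders. A minimal illustrative renderer follows; the field names are generic examples in the spirit of model-card reporting, not the paper's exact template:

```python
def render_model_card(card: dict) -> str:
    """Render a model-card dict as a simple Markdown report."""
    lines = [f"# Model card: {card['name']}"]
    for section, body in card.items():
        if section == "name":
            continue
        lines.append(f"\n## {section.replace('_', ' ').title()}")
        if isinstance(body, dict):
            lines += [f"- {k}: {v}" for k, v in body.items()]
        else:
            lines.append(str(body))
    return "\n".join(lines)

# Hypothetical card for an AI-EDSS grade predictor.
card = {
    "name": "grade-predictor-v1",
    "intended_use": "Early identification of at-risk students; advisory only.",
    "training_data": "Historic enrolment records from one institution.",
    "fairness_evaluation": {
        "metric": "ABROCA across race-gender subgroups",
        "mitigation": "adversarial learning",
    },
}
print(render_model_card(card))
```

Publishing such a card alongside a deployed AI-EDSS gives stakeholders a concrete artefact to review when tailoring the fairness tuning process to their values.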
About the journal:
BJET is a primary source for academics and professionals in the fields of digital educational and training technology throughout the world. The Journal is published by Wiley on behalf of The British Educational Research Association (BERA). It publishes theoretical perspectives, methodological developments and high quality empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.