{"title":"Friendship for Virtue, by Kristján Kristjánsson, Oxford University Press, 2022, 213 pp.","authors":"Dan Mamlok","doi":"10.1111/edth.70033","DOIUrl":"https://doi.org/10.1111/edth.70033","url":null,"abstract":"","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"765-770"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence","authors":"Ron Aboodi","doi":"10.1111/edth.70037","DOIUrl":"https://doi.org/10.1111/edth.70037","url":null,"abstract":"<p>As Artificial Intelligence (AI) keeps advancing, Generation Alpha and future generations are more likely to cope with situations that call for critical thinking by turning to AI and relying on its guidance without sufficient critical thinking. I defend this worry and argue that it calls for educational reforms that would be designed mainly to (a) motivate students to think critically about AI applications and the justifiability of their deployment, as well as (b) cultivate the skills, knowledge, and dispositions that will help them do so. Furthermore, I argue that these educational aims will remain important in the distant future no matter how far AI advances, even merely on outcome-based grounds (i.e., without appealing to the final value of autonomy, or authenticity, or understanding, etc.; or to any educational ideal that dictates the cultivation of critical thinking regardless of its instrumental value). For any “artificial consultant” that might emerge in the future, even with a perfect track record, it is highly improbable that we could ever justifiably rule out or assign negligible probability to the scenario that (a) it will mislead us in certain high-stakes situations, and/or that (b) human critical thinking could help reach better conclusions and prevent significantly bad outcomes.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"626-645"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70037","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence in Education: Use it, or Refuse it?","authors":"Nicholas C. Burbules","doi":"10.1111/edth.70038","DOIUrl":"https://doi.org/10.1111/edth.70038","url":null,"abstract":"<p>This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?</p><p>This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered “cheating”? Does it depend on <i>how</i> ChatGPT is used? Or do we need to reconsider what we have traditionally meant by “cheating”?<sup>3</sup></p><p>The articles in this symposium all address these kinds of “third space” questions, and move the discussion beyond either/or choices. Together, they illustrate the importance for all of us to become more knowledgeable about AI and what it can (and cannot) do.<sup>4</sup> Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence — such as the possibilities of an artificial general intelligence (AGI) or even an artificial “superintelligence” (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.</p><p>In “Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education,” Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful — for example, as a tutor — but that student reliance on it to produce educational projects jeopardizes the aim of promoting <i>understanding</i>.<sup>5</sup> Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, often what appear to be issues with uses of AI in education reveal other underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize the value and the limitations of AI as an educational resource.</p><p>In “The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence,” Ron Aboodi argues for a limitation of AI's reliability, which stands independently of non-instrumental educational aims, such as promoting understanding for its own sake.<sup>6</sup> No matter how far AI will advance, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. 
Accordingly, Abood","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"597-602"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Paradox of AI in ESL Instruction: Between Innovation and Oppression","authors":"Liat Ariel, Merav Hayak","doi":"10.1111/edth.70034","DOIUrl":"https://doi.org/10.1111/edth.70034","url":null,"abstract":"<p>This article critically examines Artificial Intelligence in Education (AIED) within English as a Second Language (ESL) contexts, arguing that current practices often deepen systemic inequality. Drawing on Iris Marion Young's <i>Five Faces of Oppression</i>, we analyze the implementation of AIED in oppressed schools, illustrating how students are tracked into the consumer track—passive users of AI technologies—while privileged students are directed into the creator track, where they learn to design and develop AI. This divide reinforces systemic inequality, depriving disadvantaged students of communicative agency and social mobility. Focusing on the Israeli context, we demonstrate how teachers and students in these schools lack the training and infrastructure to engage meaningfully with AI, resulting in its instrumental rather than transformative use. This “veil of innovation” obscures educational injustice, masking deep inequalities in access, agency, and technological fluency. We advocate for an inclusive pedagogy that integrates AI within English education as a tool for empowerment—not as a replacement for linguistic and cognitive development.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"646-660"},"PeriodicalIF":1.0,"publicationDate":"2025-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70034","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spinoza: Fiction and Manipulation in Civic Education, by Johan Dahlbeck, Springer, 2021, 90 pp.","authors":"Pascal Sévérac","doi":"10.1111/edth.70036","DOIUrl":"https://doi.org/10.1111/edth.70036","url":null,"abstract":"","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"771-774"},"PeriodicalIF":1.0,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Indoctrination and the Aims of Democratic Political Education: Challenges and Answers","authors":"Antti Moilanen, Rauno Huttunen","doi":"10.1111/edth.70032","DOIUrl":"https://doi.org/10.1111/edth.70032","url":null,"abstract":"<p>In this theoretical article, we analyze indoctrination in relation to the aims of democratic political education using a theory of indoctrination which is based on the work of Jürgen Habermas. In particular, we examine how the challenge of indoctrination is connected to the goals of democratic political education and how this issue can be avoided. We reconstruct a Habermasian concept of indoctrination and criteria for this type of teaching. Moreover, we describe central controversies in German didactic theories of political education and elucidate the theoretical premises of these theories. Lastly, we construct an account of the challenges facing democratic political education and provide solutions to these hurdles by conceptualizing how the aims of political education can be pursued as indoctrination, as well as critically of indoctrination. We find that democratic political education involves the challenges of indoctrination, but these can be avoided by teaching in a self-reflective, controversial, and dialogic manner.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 5","pages":"823-847"},"PeriodicalIF":0.9,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145012951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Signature of Attention: Historical Ambiguities and Elisions in Contemporary Psychological Framings of Attending","authors":"Antti Saari, Bernadette M. Baker","doi":"10.1111/edth.70031","DOIUrl":"https://doi.org/10.1111/edth.70031","url":null,"abstract":"<p>In contemporary contexts of digitalization, proliferating media, and generative AI, various “life hacks” are regularly recommended to disconnect and resist distraction, ranging from meditation to getting back to nature to unplugging. This paper traces contemporary concerns over “the attention crisis” into a longer signature — the frequently elided field of signification today referred to as “spiritual,” a signature which links attention to theories of deep personal transformation and technologies of the self. First, we examine historiographical issues arising in studies related to the contemporary attention crisis, exposing the challenges of attending to attending. Second, we delineate how European-based Christian monasticism developed practices for disciplining “attention” in new institutional settings. We argue that this process was simultaneously bound to projections of Othering and to the cultivation of critical attitudes. In particular, we delineate how these medieval forms of Othering (in both “spiritualist” and “demographic” terms) were involved in practices of vigilance and attending that became indelibly etched in Christian empire-building through governing souls and violent persecutions. Tracing these genealogical trajectories retrieves recent elisions of the complexities in problematizing attention. We suggest that contemporary ways of thinking about and acting on an “attention crisis” in education are still marked by signatures of spirituality and their allied binaries, Othering logics, and ambiguities.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 5","pages":"936-961"},"PeriodicalIF":0.9,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70031","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145012804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems","authors":"Nicolas J. Tanchuk","doi":"10.1111/edth.70030","DOIUrl":"https://doi.org/10.1111/edth.70030","url":null,"abstract":"<p>Artificial intelligence companies and researchers are currently working to create Artificial Superintelligence (ASI): AI systems that significantly exceed human problem-solving speed, power, and precision across the full range of human solvable problems. Some have claimed that achieving ASI — for better or worse — would be the most significant event in human history and the last problem humanity would need to solve. In this essay Nicolas Tanchuk argues that current AI literacy frameworks and educational practices are inadequate for equipping the democratic public to deliberate about ASI design and to assess the existential risks of such technologies. He proposes that a systematic educational effort toward what he calls “Deep ASI Literacy” is needed to democratically evaluate possible ASI futures. Deep ASI Literacy integrates traditional AI literacy approaches with a deeper analysis of the axiological, epistemic, and ontological questions that are endemic to defining and risk-assessing pathways to ASI. Tanchuk concludes by recommending research aimed at identifying the assets and needs of educators across educational systems to advance Deep ASI Literacy.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"739-764"},"PeriodicalIF":1.0,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Educating AI: A Case against Non-originary Anthropomorphism","authors":"Alexander M. Sidorkin","doi":"10.1111/edth.70027","DOIUrl":"https://doi.org/10.1111/edth.70027","url":null,"abstract":"<p>The debate over halting artificial intelligence (AI) development stems from fears of malicious exploitation and potential emergence of destructive autonomous AI. While acknowledging the former concern, this paper argues the latter is exaggerated. True AI autonomy requires education inherently tied to ethics, making fully autonomous AI potentially safer than current semi-intelligent, enslaved versions. The paper introduces “non-originary anthropomorphism”—mistakenly viewing AI as resembling an individual human rather than humanity's collective culture. This error leads to overestimating AI's potential for malevolence. Unlike humans, AI lacks bodily desires driving aggression or domination. Additionally, AI's evolution cultivates knowledge-seeking behaviors that make human collaboration valuable. Three key arguments support benevolent autonomous AI: ethics being pragmatically inseparable from learning; absence of somatic roots for malevolence; and pragmatic value humans provide as diverse data sources. Rather than halting AI development, accelerating creation of fully autonomous, ethical AI while preventing monopolistic control through diverse ecosystems represents the optimal approach.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"720-738"},"PeriodicalIF":1.0,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Algorithmic Fairness and Educational Justice","authors":"Aaron Wolf","doi":"10.1111/edth.70029","DOIUrl":"https://doi.org/10.1111/edth.70029","url":null,"abstract":"<p>Much has been written about how to improve the fairness of AI tools for decision-making but less has been said about how to approach this new field from the perspective of philosophy of education. My goal in this paper is to bring together criteria from the general algorithmic fairness literature with prominent values of justice defended by philosophers of education. Some kinds of fairness criteria appear better suited than others for realizing these values. Considering these criteria for cases of automated decision-making in education reveals that when the aim of justice is equal respect and belonging, this is best served by using statistical definitions of fairness to constrain decision-making. By contrast, distributive aims of justice are best promoted by thinking of fairness in terms of the intellectual virtues of human decision-makers who use algorithmic tools.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"661-681"},"PeriodicalIF":1.0,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}