
Latest Publications in EDUCATIONAL THEORY

Friendship for Virtue, by Kristján Kristjánsson, Oxford University Press, 2022, 213 pp.
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-30 | DOI: 10.1111/edth.70033
Dan Mamlok
Vol. 75, Issue 4, pp. 765-770. Citations: 0
The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-30 | DOI: 10.1111/edth.70037
Ron Aboodi

As Artificial Intelligence (AI) keeps advancing, Generation Alpha and future generations are more likely to cope with situations that call for critical thinking by turning to AI and relying on its guidance without sufficient critical thinking. I defend this worry and argue that it calls for educational reforms that would be designed mainly to (a) motivate students to think critically about AI applications and the justifiability of their deployment, as well as (b) cultivate the skills, knowledge, and dispositions that will help them do so. Furthermore, I argue that these educational aims will remain important in the distant future no matter how far AI advances, even merely on outcome-based grounds (i.e., without appealing to the final value of autonomy, or authenticity, or understanding, etc.; or to any educational ideal that dictates the cultivation of critical thinking regardless of its instrumental value). For any “artificial consultant” that might emerge in the future, even with a perfect track record, it is highly improbable that we could ever justifiably rule out or assign negligible probability to the scenario that (a) it will mislead us in certain high-stakes situations, and/or that (b) human critical thinking could help reach better conclusions and prevent significantly bad outcomes.

Vol. 75, Issue 4, pp. 626-645. Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70037. Citations: 0
Artificial Intelligence in Education: Use it, or Refuse it?
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-30 | DOI: 10.1111/edth.70038
Nicholas C. Burbules
This symposium revolves around two shared questions: First, how should educators view artificial intelligence (AI) as an educational resource, and what contributions can philosophy of education make toward thinking through these possibilities? Second, where is the future of AI foreseeably headed, and what new challenges will confront us in the (near) future?

This is a task for philosophy of education: to identify, and perhaps in some cases reformulate, the aims and objectives of education to fit this changing context. It also involves reasserting and defending what cannot be accommodated by AI, even as other aims and objectives must be reexamined in light of AI. For example, is using ChatGPT to produce a student paper considered "cheating"? Does it depend on how ChatGPT is used? Or do we need to reconsider what we have traditionally meant by "cheating"?

The articles in this symposium all address these kinds of "third space" questions, and move the discussion beyond either/or choices. Together, they illustrate the importance for all of us to become more knowledgeable about AI and what it can (and cannot) do. Several focus on ChatGPT and similar generative AI programs that model or mimic human productive activities; others address much broader issues about the future of artificial intelligence, such as the possibilities of an artificial general intelligence (AGI) or even an artificial "superintelligence" (ASI). These articles were originally presented as part of an Ed Theory/PES Preconference Workshop at the 2024 meeting of the Philosophy of Education Society; after those detailed discussions and feedback, the articles were revised further as part of this symposium.

In "Artificial Intelligence on Campus: Revisiting Understanding as an Aim of Higher Education," Jamie Herman and Henry Lara-Steidel argue that ChatGPT can be useful, for example as a tutor, but that student reliance on it to produce educational projects jeopardizes the aim of promoting understanding. Our assignments and assessment strategies, they argue, emphasize knowledge over understanding. As with other articles in this symposium, what appear to be issues with uses of AI in education often reveal other underlying errors in our educational thinking. Reasserting the importance of understanding as an educational goal, and assessing for understanding, is a broader objective that helps us recognize the value and the limitations of AI as an educational resource.

In "The Worrisome Potential of Outsourcing Critical Thinking to Artificial Intelligence," Ron Aboodi argues for a limitation of AI's reliability that stands independently of non-instrumental educational aims, such as promoting understanding for its own sake. No matter how far AI advances, reliance on even the best AI tools without sufficient critical thinking may lead us astray and cause significantly bad outcomes. Accordingly, Aboodi advocates educational reforms designed to motivate and help students think critically about AI applications. We regularly see examples of large language models (LLMs) asserting untrue information (for instance, a recent U.S. government report on public health produced with AI was found to include nonexistent studies and to seriously misinterpret others). Aboodi argues that asking students to critically evaluate misleading or inaccurate AI-generated responses is itself a valuable critical thinking activity, and that incorporating such activities into the curriculum is urgent, since current and future generations are increasingly likely to "outsource" their critical thinking to AI.

In "The Paradox of AI in ESL Instruction: Between Innovation and Oppression," Liat Ariel and Merav Hayak explore the use of ChatGPT and similar programs in teaching English as a Second Language. They distinguish programs in which students learn to use AI to create or generate texts from programs in which students interact with AI merely as consumers; this difference yields a two-tier tracking system that produces inequality in students' learning opportunities. Ariel and Hayak analyze the effects of this tracking through Iris Young's "five faces of oppression": exploitation, marginalization, powerlessness, violence, and cultural imperialism. The paradox is that recognizing these unjust effects argues for incorporating, rather than banning, programs like ChatGPT.

In "Algorithmic Fairness and Educational Justice," Aaron Wolf examines automated decision-making applications of AI in education, for example in assisting school admissions. Because this is a data-intensive operation, the statistical evidence it generates provides a basis for assessing what he calls "algorithmic fairness," which has two normative dimensions: expressive value, the attitudes expressed in social practices, and allocative value, the actual outcomes and effects of those practices. He discusses the well-known example of evaluations of the COMPAS program, used in bail, sentencing, and parole decisions, which was found to be systematically racially biased. This more quantitative approach forms an interesting contrast with Ariel and Hayak's critique.

In "The Educational Significance of AI: Peirce, Reasoning, and the Pragmatic Maxim," Kenneth Driggers and Deron Boyles draw on the pragmatism of C. S. Peirce to propose a way of thinking about where and how AI can be educationally productive. From a pragmatist standpoint, they argue, there is nothing inherently wrong with a synthetic intelligence; all human intelligence is an imperfect, fallible attempt to make sense of experience. What matters is how we relate our concepts and theories to experience, wherever they come from. Here Peirce's "pragmatic maxim" is helpful: "the entire intellectual purport of any symbol consists in the total of all general modes of rational conduct which, conditionally upon all the possible different circumstances and desires, would ensue upon the acceptance of the symbol." Driggers and Boyles use Peirce's pragmatism to develop criteria for the educationally productive use of programs like ChatGPT and other AI.

In "Frankenstein, Émile, ChatGPT: Educating AI between Natural Learning and Artificial Monsters," Gideon Dishon examines the uses of "natural" and "artificial" in describing what we call "artificial intelligence." Although the distinction appears descriptive, Dishon shows that it also carries a host of normative judgments. He explores these terms in the context of three textual examples: Rousseau's classic Émile; Mary Shelley's Frankenstein; and Kevin Roose's 2023 account of his conversation with the AI agent in Bing. Against these backgrounds, he concludes that in the context of human learning, development, and interaction, the relation between the natural and the artificial is best viewed as dialectical rather than dichotomous.

In "Educating AI: A Case against Non-originary Anthropomorphism," Alexander Sidorkin offers perhaps the most optimistic account of AI in education in this symposium. He addresses two recurring worries about AI: its capacity to spread misinformation, and its potential to (one day) develop into a conscious, autonomous, self-interested entity. Sidorkin argues that the latter worry is exaggerated; we should be more concerned with the risks of what he calls the currently "enslaved" AI. Indeed, he argues, a fully autonomous AI would have to incorporate ethics as part of its overall orientation. Though written very differently, this article and Dishon's set up an interesting comparison and contrast.

In "Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems," Nicolas Tanchuk looks ahead to the development of superintelligent systems: AI that actually exceeds human intelligence. Such a development would bring many unprecedented challenges, and current approaches to AI literacy will prove insufficient for them. Instead, Tanchuk calls for what he terms "Deep ASI literacy," an approach that thinks seriously about our terminology (is superintelligence simply intelligence, only more of it, or a genuinely distinct and emergent entity?); our views of knowledge (is it even possible for human intelligence to understand and evaluate the knowledge claims of a machine superintelligence?); and our ethics (would a superintelligence have identity or rights?). Tanchuk argues that it is crucial to have these discussions now, before superintelligence becomes a reality.

It is astonishing to see how quickly the AI tsunami has swept over us. ChatGPT was released in 2022; before that, few people outside the technology field knew what "generative AI" or a "large language model" was. Suddenly educators began to realize what a powerful resource it is for producing text, and students were already using it to complete their assignments. We debated cheating and plagiarism, with many proposing to ban such programs, debates that sometimes seemed nostalgic and out of touch. As ChatGPT and similar programs have improved, they have begun to look like a valuable
Vol. 75, Issue 4, pp. 597-602. Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70038. Citations: 0
The Paradox of AI in ESL Instruction: Between Innovation and Oppression
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-30 | DOI: 10.1111/edth.70034
Liat Ariel, Merav Hayak

This article critically examines Artificial Intelligence in Education (AIED) within English as a Second Language (ESL) contexts, arguing that current practices often deepen systemic inequality. Drawing on Iris Marion Young's Five Faces of Oppression, we analyze the implementation of AIED in oppressed schools, illustrating how students are tracked into the consumer track—passive users of AI technologies—while privileged students are directed into the creator track, where they learn to design and develop AI. This divide reinforces systemic inequality, depriving disadvantaged students of communicative agency and social mobility. Focusing on the Israeli context, we demonstrate how teachers and students in these schools lack the training and infrastructure to engage meaningfully with AI, resulting in its instrumental rather than transformative use. This “veil of innovation” obscures educational injustice, masking deep inequalities in access, agency, and technological fluency. We advocate for an inclusive pedagogy that integrates AI within English education as a tool for empowerment—not as a replacement for linguistic and cognitive development.

Vol. 75, Issue 4, pp. 646-660. Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70034. Citations: 0
Spinoza: Fiction and Manipulation in Civic Education, by Johan Dahlbeck, Springer, 2021, 90 pp.
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-26 | DOI: 10.1111/edth.70036
Pascal Sévérac
Vol. 75, Issue 4, pp. 771-774. Citations: 0
Indoctrination and the Aims of Democratic Political Education: Challenges and Answers
IF 0.9 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-12 | DOI: 10.1111/edth.70032
Antti Moilanen, Rauno Huttunen

In this theoretical article, we analyze indoctrination in relation to the aims of democratic political education, using a theory of indoctrination based on the work of Jürgen Habermas. In particular, we examine how the challenge of indoctrination is connected to the goals of democratic political education and how this problem can be avoided. We reconstruct a Habermasian concept of indoctrination and criteria for identifying this type of teaching. Moreover, we describe central controversies in German didactic theories of political education and elucidate the theoretical premises of these theories. Lastly, we give an account of the challenges facing democratic political education and offer solutions to these hurdles by conceptualizing how the aims of political education can be pursued in indoctrinating ways as well as in ways critical of indoctrination. We find that democratic political education involves the challenges of indoctrination, but that these can be avoided by teaching in a self-reflective, controversial, and dialogic manner.

Vol. 75, Issue 5, pp. 823-847. Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70032. Citations: 0
Signature of Attention: Historical Ambiguities and Elisions in Contemporary Psychological Framings of Attending
IF 0.9 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-09 | DOI: 10.1111/edth.70031
Antti Saari, Bernadette M. Baker

In contemporary contexts of digitalization, proliferating media, and generative AI, various “life hacks” are regularly recommended to disconnect and resist distraction, ranging from meditation to getting back to nature to unplugging. This paper traces contemporary concerns over “the attention crisis” into a longer signature — the frequently elided field of signification today referred to as “spiritual,” a signature which links attention to theories of deep personal transformation and technologies of the self. First, we examine historiographical issues arising in studies related to the contemporary attention crisis, exposing the challenges of attending to attending. Second, we delineate how European-based Christian monasticism developed practices for disciplining “attention” in new institutional settings. We argue that this process was simultaneously bound to projections of Othering and to the cultivation of critical attitudes. In particular, we delineate how these medieval forms of Othering (in both “spiritualist” and “demographic” terms) were involved in practices of vigilance and attending that became indelibly etched in Christian empire-building through governing souls and violent persecutions. Tracing these genealogical trajectories retrieves recent elisions of the complexities in problematizing attention. We suggest that contemporary ways of thinking about and acting on an “attention crisis” in education are still marked by signatures of spirituality and their allied binaries, Othering logics, and ambiguities.

Vol. 75, Issue 5, pp. 936-961. Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70031. Citations: 0
Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems
IF 1.0 | Q3 | EDUCATION & EDUCATIONAL RESEARCH | Pub Date: 2025-06-08 | DOI: 10.1111/edth.70030
Nicolas J. Tanchuk

Artificial intelligence companies and researchers are currently working to create Artificial Superintelligence (ASI): AI systems that significantly exceed human problem-solving speed, power, and precision across the full range of human solvable problems. Some have claimed that achieving ASI — for better or worse — would be the most significant event in human history and the last problem humanity would need to solve. In this essay Nicolas Tanchuk argues that current AI literacy frameworks and educational practices are inadequate for equipping the democratic public to deliberate about ASI design and to assess the existential risks of such technologies. He proposes that a systematic educational effort toward what he calls “Deep ASI Literacy” is needed to democratically evaluate possible ASI futures. Deep ASI Literacy integrates traditional AI literacy approaches with a deeper analysis of the axiological, epistemic, and ontological questions that are endemic to defining and risk-assessing pathways to ASI. Tanchuk concludes by recommending research aimed at identifying the assets and needs of educators across educational systems to advance Deep ASI Literacy.

{"title":"Deep ASI Literacy: Educating for Alignment with Artificial Super Intelligent Systems","authors":"Nicolas J. Tanchuk","doi":"10.1111/edth.70030","DOIUrl":"https://doi.org/10.1111/edth.70030","url":null,"abstract":"<p>Artificial intelligence companies and researchers are currently working to create Artificial Superintelligence (ASI): AI systems that significantly exceed human problem-solving speed, power, and precision across the full range of human solvable problems. Some have claimed that achieving ASI — for better or worse — would be the most significant event in human history and the last problem humanity would need to solve. In this essay Nicolas Tanchuk argues that current AI literacy frameworks and educational practices are inadequate for equipping the democratic public to deliberate about ASI design and to assess the existential risks of such technologies. He proposes that a systematic educational effort toward what he calls “Deep ASI Literacy” is needed to democratically evaluate possible ASI futures. Deep ASI Literacy integrates traditional AI literacy approaches with a deeper analysis of the axiological, epistemic, and ontological questions that are endemic to defining and risk-assessing pathways to ASI. 
Tanchuk concludes by recommending research aimed at identifying the assets and needs of educators across educational systems to advance Deep ASI Literacy.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"739-764"},"PeriodicalIF":1.0,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/edth.70030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144680992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Educating AI: A Case against Non-originary Anthropomorphism
IF 1 Q3 EDUCATION & EDUCATIONAL RESEARCH Pub Date : 2025-05-31 DOI: 10.1111/edth.70027
Alexander M. Sidorkin

The debate over halting artificial intelligence (AI) development stems from fears of malicious exploitation and potential emergence of destructive autonomous AI. While acknowledging the former concern, this paper argues the latter is exaggerated. True AI autonomy requires education inherently tied to ethics, making fully autonomous AI potentially safer than current semi-intelligent, enslaved versions. The paper introduces “non-originary anthropomorphism”—mistakenly viewing AI as resembling an individual human rather than humanity's collective culture. This error leads to overestimating AI's potential for malevolence. Unlike humans, AI lacks bodily desires driving aggression or domination. Additionally, AI's evolution cultivates knowledge-seeking behaviors that make human collaboration valuable. Three key arguments support benevolent autonomous AI: ethics being pragmatically inseparable from learning; absence of somatic roots for malevolence; and pragmatic value humans provide as diverse data sources. Rather than halting AI development, accelerating creation of fully autonomous, ethical AI while preventing monopolistic control through diverse ecosystems represents the optimal approach.

{"title":"Educating AI: A Case against Non-originary Anthropomorphism","authors":"Alexander M. Sidorkin","doi":"10.1111/edth.70027","DOIUrl":"https://doi.org/10.1111/edth.70027","url":null,"abstract":"<p>The debate over halting artificial intelligence (AI) development stems from fears of malicious exploitation and potential emergence of destructive autonomous AI. While acknowledging the former concern, this paper argues the latter is exaggerated. True AI autonomy requires education inherently tied to ethics, making fully autonomous AI potentially safer than current semi-intelligent, enslaved versions. The paper introduces “non-originary anthropomorphism”—mistakenly viewing AI as resembling an individual human rather than humanity's collective culture. This error leads to overestimating AI's potential for malevolence. Unlike humans, AI lacks bodily desires driving aggression or domination. Additionally, AI's evolution cultivates knowledge-seeking behaviors that make human collaboration valuable. Three key arguments support benevolent autonomous AI: ethics being pragmatically inseparable from learning; absence of somatic roots for malevolence; and pragmatic value humans provide as diverse data sources. Rather than halting AI development, accelerating creation of fully autonomous, ethical AI while preventing monopolistic control through diverse ecosystems represents the optimal approach.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"720-738"},"PeriodicalIF":1.0,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Algorithmic Fairness and Educational Justice
IF 1 Q3 EDUCATION & EDUCATIONAL RESEARCH Pub Date : 2025-05-30 DOI: 10.1111/edth.70029
Aaron Wolf

Much has been written about how to improve the fairness of AI tools for decision-making, but less has been said about how to approach this new field from the perspective of philosophy of education. My goal in this paper is to bring together criteria from the general algorithmic fairness literature with prominent values of justice defended by philosophers of education. Some kinds of fairness criteria appear better suited than others for realizing these values. Considering these criteria in cases of automated decision-making in education reveals that when the aim of justice is equal respect and belonging, it is best served by using statistical definitions of fairness to constrain decision-making. By contrast, distributive aims of justice are best promoted by thinking of fairness in terms of the intellectual virtues of the human decision-makers who use algorithmic tools.
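The "statistical definitions of fairness" the abstract refers to can be made concrete with a small sketch. Demographic parity, one widely used statistical criterion, holds when the rate of positive decisions is (nearly) equal across groups; the group labels, toy decision data, and tolerance below are illustrative assumptions, not drawn from the article itself.

```python
# A minimal sketch of one statistical fairness criterion, demographic parity:
# a decision rule satisfies it when each group receives positive decisions
# at (approximately) the same rate. All data here are hypothetical.

def positive_rate(decisions, groups, group):
    """Fraction of positive decisions (1s) received by members of one group."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == group]
    return sum(member_decisions) / len(member_decisions)

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any two groups."""
    rates = [positive_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy admissions decisions (1 = admit) for two groups, "a" and "b".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
# An automated decision rule could be constrained to keep this gap
# below a chosen tolerance before its outputs are acted on.
```

Constraining decisions by such a gap is one way a statistical definition can act as a hard check on an automated system, as opposed to relying on the judgment of the humans operating it.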

{"title":"Algorithmic Fairness and Educational Justice","authors":"Aaron Wolf","doi":"10.1111/edth.70029","DOIUrl":"https://doi.org/10.1111/edth.70029","url":null,"abstract":"<p>Much has been written about how to improve the fairness of AI tools for decision-making but less has been said about how to approach this new field from the perspective of philosophy of education. My goal in this paper is to bring together criteria from the general algorithmic fairness literature with prominent values of justice defended by philosophers of education. Some kinds of fairness criteria appear better suited than others for realizing these values. Considering these criteria for cases of automated decision-making in education reveals that when the aim of justice is equal respect and belonging, this is best served by using statistical definitions of fairness to constrain decision-making. By contrast, distributive aims of justice are best promoted by thinking of fairness in terms of the intellectual virtues of human decision-makers who use algorithmic tools.</p>","PeriodicalId":47134,"journal":{"name":"EDUCATIONAL THEORY","volume":"75 4","pages":"661-681"},"PeriodicalIF":1.0,"publicationDate":"2025-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144681578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0