Can we disrupt the momentum of the AI colonization of science education?

Journal of Research in Science Teaching · Impact Factor 3.6 · CAS Tier 1 (Education) · Q1 Education & Educational Research · Pub Date: 2024-05-27 · DOI: 10.1002/tea.21961
Lucy Avraamidou
{"title":"Can we disrupt the momentum of the AI colonization of science education?","authors":"Lucy Avraamidou","doi":"10.1002/tea.21961","DOIUrl":null,"url":null,"abstract":"<p>Not a day goes by that an advertisement for a new generative AI tool that promises to revolutionize scientific research and education does not make its way to the news. Generative AI as a silver bullet. AI tools are used to extract data (usually without consent), replace research participants, read papers, summarize papers, write papers, design lesson plans, manage students, and assess students, just to name a few. Generative AI technologies are creating a techno-utopia and a new world order.</p><p>The scientific community has increasingly been utilizing AI tools to improve research, namely, maximizing productivity by attempting to overcome human shortcomings (Messeri &amp; Crockett, <span>2023</span>). For example, AI tools can enhance scientific research by enabling fast collection and analysis of large data sets. This, however, is not without a cost, as it poses a potential threat to scientific research related to the AI algorithmic monoculture (i.e., choices and preferences are homogeneous, as, all of us enjoy the same kind of music, clothes, or films) in the face of algorithmic curation (Kleinberg &amp; Raghavan, <span>2021</span>). Can we, hence, ever imagine reverting to monocultural scientific research despite evidence of the value of diversity and plurality of voices and knowledge? The same question applies to education. Even though AI technologies have the potential to innovate teaching they also bring risks and challenges associated with digital monoculturalism as well as ethical, inclusive, and equitable use of AI (UNESCO, <span>2023</span>).</p><p>Educational institutions are buying into generative AI promises and hallucinations (Alkaissi &amp; McFarlane, <span>2023</span>) and frantically trying to catch up with a mass production of AI tools. National funding agencies in different parts of the world are allocating financial support for research projects utilizing AI tools in science and education (e.g., <i>New Horizon Europe Funding for Data, Computing, and AI Technologies</i>). Several (science) education journals have dedicated special issues to an examination of the potential of AI for teaching and learning. Researchers in science education are shifting their interests toward AI to engage with “hot” research in this new world order created by the AI industry.</p><p>The problem with this new world order is that it repeats patterns of colonial history through exploitation and extraction of resources to enrich the wealthy and powerful at the great expense of the poor (Hao, <span>2022</span>). There exists a wealth of evidence pointing to how AI has exploited marginalized communities for the development of large language models, for example, ChatGPT (Perrigo, <span>2023</span>). Several studies have shed light on issues related to ethics, biases, and racial and gender stereotypes. For example, descriptions of people images through tagging (i.e., Google Cloud Vision API), personalized news feeds (i.e., Google Search, Amazon Cloud Search), virtual assistants (i.e., Siri), and large language models (i.e., ChatGPT) reflect human biases and reinforce social stereotypes: <i>Physicists are white and male</i>, <i>Black people are athletic, Asian women are beautiful, Black women are hypersexualized</i> (Kyriakou et al., <span>2019</span>; Noble, <span>2013</span>; Otterbacher et al., <span>2017</span>). 
Moreover, research findings showed that online social networks and information networks (e.g., <i>Who to Follow</i>) that rely on algorithms and used for different purposes (e.g. networking, hiring, and promotion procedures) perpetuate inequities and further discriminate against minorities (Espín-Noboa et al., <span>2022</span>).</p><p>Other studies provided evidence of the large environmental impact of AI technologies, which include energy used from both the training of models and the actual use (De Vries, <span>2023</span>; Luccioni et al., <span>2023</span>). For example, the carbon footprint of an AI prompt is 4–5 times higher than a search engine query. More concretely, if the typical 9-billion daily Google searches were instead AI chatbot queries it would require as much power to run a country with a population of about 5 million people. Another indicative example of the energy consumption of AI tools is that generating only one image using an AI model takes as much energy as fully charging a smartphone (Luccioni et al., <span>2023</span>).</p><p>These are crucial socio-scientific issues that the science education community ought to engage with through a critical approach to AI. Instead, science education is currently operating at the service of the generative and predictive AI industry, at least in the Global North, and remains largely disengaged with issues related to digital monoculturalism, algorithmic biases, ethics, and exploitation of both human and natural resources by the AI industry. Essentially, what this means is that the AI industry is currently shaping the future of science education.</p><p>In a systematic literature review examining the use of AI in school science in the period between 2010 and 2021, we found that AI applications have been used mostly to automate existing educational practices, for example, reducing workload and automatizing feedback (Heeg &amp; Avraamidou, <span>2023</span>). Another finding of our review is that the majority of the studies reviewed were atheoretical and lacked criticality. In identifying gaps in the existing knowledge base, we found that those cut across epistemic and sociocultural domains of science learning. Research studies examining the use of AI tools in education have focused largely on cognitive goals and have remained largely disengaged with goals connected to the nature of scientific knowledge, the social nature of both scientific research and learning as well as goals related to learners' socio-emotional development.</p><p>For example, Intelligent Tutoring Systems with their focus on the cognitive needs of students, often leave unaddressed the critical challenge of supporting the need for social relationships and autonomy that are essential to learning, engaged behavior, and well-being (Collie, <span>2020</span>). For this to be happening in the post-pandemic world is at least a paradox. Because, if there is one thing that the multiple lockdowns and campus closures taught us, it is that we cannot exist without embodied affairs with other people, no matter how many machines we have at our disposal. We are not only social, but we are also relational beings. 
We live our lives not only through social interactions but also through relationships with others in social ecologies (Wenger, <span>1998</span>) where both embodiment and emotions are central (Avraamidou, <span>2020</span>).</p><p>The multiple forms of knowledge produced through social relations and how those intertwine with learners' and teachers' subjectivities, identities, values, and cultures while inherent to learning are absent from AI-driven tools, whether those are virtual tutors, chatbots, automated assessment tools, or learning analytics. Instead, the vast majority of AI systems follow a convenience-food approach to learning that promotes fast bite-sized learning over slow learning and prioritizes the use of specific learning paths for the purpose of achieving prescribed goals. Education is confused with training and students with machines that operate through an input–output process. This is reflected in tools that track the progress of students and provide analytics on their performance, engagement, and behavior to create either the “ideal” learning path or a personalized path toward an “ideal” prescribed outcome (Paolucci et al., <span>2024</span>).</p><p>This is how generative AI might promote both the dehumanization of learning and standardization of thinking instead of a celebration of the <i>infinite ways of becoming</i> a science learner (Avraamidou, <span>2020</span>). Why? Because the Anglo-American AI industry is leading an unsolicited science education reform that lacks vision, it is a-theoretical, it is de-contextualized, it remains largely oblivious to research about how people learn, it is disconnected from social and political tasks of resistance, and it has profit instead of the learner at its center.</p><p>A feminist AI will provide the frames and tools to prioritize algorithmic literacy and an understanding of how AI perpetuates biases, racism, and existing systems of oppression. Such an approach might also serve as a springboard for designing socially just AI-driven curricula that place <i>all</i> learners' identities, subjectivities, values, and cultures at the forefront.</p><p>Science education does not need an AI utopia driven by corporate, neoliberal, and eugenics paradigms (Gebru &amp; Torres, <span>2024</span>), designed through pedagogies for the economy and a disimagination machine (Giroux, <span>2014</span>). What the field needs is a human-centered feminist AI vision framed within relationality, embodiment, and resistance, pedagogies of care, affect, and cultural sustainability to curate educational spaces where humanization of science learning and social transformation that transcend the algorithm can happen.</p><p>Can we disrupt the momentum of the AI colonization of science education? 
Yes, we can—once we step outside of corporate and capitalist visions of science education and imagine more sustainable and socially just futures.</p>","PeriodicalId":48369,"journal":{"name":"Journal of Research in Science Teaching","volume":"61 10","pages":"2570-2574"},"PeriodicalIF":3.6000,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21961","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Science Teaching","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/tea.21961","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
引用次数: 0

Abstract

Not a day goes by without an advertisement for a new generative AI tool that promises to revolutionize scientific research and education making its way into the news. Generative AI as a silver bullet. AI tools are used to extract data (usually without consent), replace research participants, read papers, summarize papers, write papers, design lesson plans, manage students, and assess students, to name just a few uses. Generative AI technologies are creating a techno-utopia and a new world order.

The scientific community has increasingly been utilizing AI tools to improve research, namely, to maximize productivity by attempting to overcome human shortcomings (Messeri & Crockett, 2023). For example, AI tools can enhance scientific research by enabling the fast collection and analysis of large data sets. This, however, is not without cost, as it poses a potential threat to scientific research: AI algorithmic monoculture (i.e., choices and preferences becoming homogeneous, as when all of us enjoy the same kind of music, clothes, or films) in the face of algorithmic curation (Kleinberg & Raghavan, 2021). Can we, then, ever imagine reverting to monocultural scientific research despite the evidence of the value of diversity and plurality of voices and knowledge? The same question applies to education. Even though AI technologies have the potential to innovate teaching, they also bring risks and challenges associated with digital monoculturalism as well as with the ethical, inclusive, and equitable use of AI (UNESCO, 2023).
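
To make the monoculture worry concrete, the toy simulation below (an illustrative sketch, not the formal model in Kleinberg & Raghavan, 2021) shows what happens when many hypothetical curators all rank items with one shared noisy algorithm: collectively they surface only the same handful of items, whereas independent judgments, each noisy in its own way, surface a far more diverse set.

```python
# Toy illustration of algorithmic monoculture: when every curator ranks
# with the same shared algorithm, everyone surfaces the same items;
# independent judgments preserve diversity. Illustrative only.
import random

random.seed(0)
ITEMS = list(range(1000))   # e.g., papers, songs, or candidates
CURATORS = 50               # independent platforms or reviewers
TOP_K = 10

def noisy_score(item, noise=0.3):
    """True quality plus evaluator-specific noise."""
    true_quality = item / len(ITEMS)
    return true_quality + random.gauss(0, noise)

def distinct_selections(shared_algorithm):
    chosen = set()
    # With a shared algorithm, every curator applies one set of noisy
    # scores; with independent judgment, each curator draws fresh noise.
    shared = {i: noisy_score(i) for i in ITEMS}
    for _ in range(CURATORS):
        scores = shared if shared_algorithm else {i: noisy_score(i) for i in ITEMS}
        ranked = sorted(ITEMS, key=lambda i: scores[i], reverse=True)
        chosen.update(ranked[:TOP_K])
    return len(chosen)

print("distinct items surfaced, shared algorithm:   ", distinct_selections(True))   # == TOP_K
print("distinct items surfaced, independent curators:", distinct_selections(False))  # far larger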

Educational institutions are buying into generative AI promises and hallucinations (Alkaissi & McFarlane, 2023) and frantically trying to keep up with the mass production of AI tools. National funding agencies in different parts of the world are allocating financial support to research projects utilizing AI tools in science and education (e.g., New Horizon Europe Funding for Data, Computing, and AI Technologies). Several (science) education journals have dedicated special issues to an examination of the potential of AI for teaching and learning. Researchers in science education are shifting their interests toward AI to engage with “hot” research in this new world order created by the AI industry.

The problem with this new world order is that it repeats patterns of colonial history: exploitation and extraction of resources to enrich the wealthy and powerful at the great expense of the poor (Hao, 2022). There exists a wealth of evidence pointing to how the AI industry has exploited marginalized communities in the development of large language models such as ChatGPT (Perrigo, 2023). Several studies have shed light on issues related to ethics, biases, and racial and gender stereotypes. For example, descriptions of images of people through tagging (e.g., Google Cloud Vision API), personalized news feeds (e.g., Google Search, Amazon CloudSearch), virtual assistants (e.g., Siri), and large language models (e.g., ChatGPT) reflect human biases and reinforce social stereotypes: Physicists are white and male, Black people are athletic, Asian women are beautiful, Black women are hypersexualized (Kyriakou et al., 2019; Noble, 2013; Otterbacher et al., 2017). Moreover, research findings have shown that online social and information networks (e.g., Who to Follow) that rely on algorithms and are used for different purposes (e.g., networking, hiring, and promotion procedures) perpetuate inequities and further discriminate against minorities (Espín-Noboa et al., 2022).

Other studies have provided evidence of the large environmental impact of AI technologies, which includes the energy used both for training models and for their actual use (De Vries, 2023; Luccioni et al., 2023). For example, the carbon footprint of an AI prompt is estimated to be 4–5 times higher than that of a search engine query. More concretely, if the typical 9 billion daily Google searches were instead AI chatbot queries, they would require as much power as running a country with a population of about 5 million people. Another indicative example of the energy consumption of AI tools is that generating a single image with an AI model takes as much energy as fully charging a smartphone (Luccioni et al., 2023).
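
The scale behind these claims is easier to grasp with some back-of-envelope arithmetic. The sketch below uses assumed per-query figures in the ranges reported by De Vries (2023), roughly 0.3 Wh for a conventional search and about 3 Wh for a ChatGPT-style request; these are estimates for illustration, not measurements, and the carbon ratio additionally depends on the electricity mix.

```python
# Back-of-envelope arithmetic behind the scale comparison above.
# Per-query figures are assumptions drawn from published estimates
# (De Vries, 2023); they illustrate orders of magnitude, not measurements.

DAILY_SEARCHES = 9e9        # typical number of daily Google searches
WH_PER_SEARCH = 0.3         # assumed energy of a conventional search (Wh)
WH_PER_AI_QUERY = 3.0       # assumed energy of an LLM chatbot query (Wh)

print(f"energy ratio: ~{WH_PER_AI_QUERY / WH_PER_SEARCH:.0f}x per query")

# Annual electricity if every search ran as an AI query, in terawatt-hours.
twh_per_year = DAILY_SEARCHES * WH_PER_AI_QUERY * 365 / 1e12
print(f"annual consumption: ~{twh_per_year:.0f} TWh")  # ~10 TWh at these assumptions

# For scale: Ireland, a country of about 5 million people, uses roughly
# 30 TWh of electricity per year; hardware-based scenarios in De Vries
# (2023) put fully AI-powered search in that range.
```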

These are crucial socio-scientific issues that the science education community ought to engage with through a critical approach to AI. Instead, science education is currently operating in the service of the generative and predictive AI industry, at least in the Global North, and remains largely disengaged from issues related to digital monoculturalism, algorithmic biases, ethics, and the exploitation of both human and natural resources by the AI industry. Essentially, what this means is that the AI industry is currently shaping the future of science education.

In a systematic literature review examining the use of AI in school science between 2010 and 2021, we found that AI applications have been used mostly to automate existing educational practices, for example, by reducing workload and automating feedback (Heeg & Avraamidou, 2023). Another finding of our review is that the majority of the studies reviewed were atheoretical and lacked criticality. In identifying gaps in the existing knowledge base, we found that these cut across the epistemic and sociocultural domains of science learning. Research studies examining the use of AI tools in education have focused largely on cognitive goals and have remained largely disengaged from goals connected to the nature of scientific knowledge, the social nature of both scientific research and learning, and learners' socio-emotional development.

For example, Intelligent Tutoring Systems, with their focus on the cognitive needs of students, often leave unaddressed the critical challenge of supporting the need for social relationships and autonomy that are essential to learning, engaged behavior, and well-being (Collie, 2020). That this is happening in the post-pandemic world is, at the very least, a paradox. Because, if there is one thing that the multiple lockdowns and campus closures taught us, it is that we cannot exist without embodied encounters with other people, no matter how many machines we have at our disposal. We are not only social but also relational beings. We live our lives not only through social interactions but also through relationships with others in social ecologies (Wenger, 1998) where both embodiment and emotions are central (Avraamidou, 2020).

The multiple forms of knowledge produced through social relations, and how those intertwine with learners' and teachers' subjectivities, identities, values, and cultures, while inherent to learning, are absent from AI-driven tools, whether those are virtual tutors, chatbots, automated assessment tools, or learning analytics. Instead, the vast majority of AI systems follow a convenience-food approach to learning that promotes fast, bite-sized learning over slow learning and prioritizes the use of specific learning paths for the purpose of achieving prescribed goals. Education is confused with training, and students with machines that operate through an input–output process. This is reflected in tools that track the progress of students and provide analytics on their performance, engagement, and behavior to create either the “ideal” learning path or a personalized path toward an “ideal” prescribed outcome (Paolucci et al., 2024).

This is how generative AI might promote both the dehumanization of learning and the standardization of thinking instead of a celebration of the infinite ways of becoming a science learner (Avraamidou, 2020). Why? Because the Anglo-American AI industry is leading an unsolicited science education reform that lacks vision, is atheoretical, is decontextualized, remains largely oblivious to research about how people learn, is disconnected from the social and political tasks of resistance, and has profit instead of the learner at its center.

A feminist AI will provide the frames and tools to prioritize algorithmic literacy and an understanding of how AI perpetuates biases, racism, and existing systems of oppression. Such an approach might also serve as a springboard for designing socially just AI-driven curricula that place all learners' identities, subjectivities, values, and cultures at the forefront.

Science education does not need an AI utopia driven by corporate, neoliberal, and eugenics paradigms (Gebru & Torres, 2024), designed through pedagogies for the economy and a disimagination machine (Giroux, 2014). What the field needs is a human-centered feminist AI vision, framed within relationality, embodiment, and resistance, and within pedagogies of care, affect, and cultural sustainability, to curate educational spaces where the humanization of science learning and social transformation that transcend the algorithm can happen.

Can we disrupt the momentum of the AI colonization of science education? Yes, we can—once we step outside of corporate and capitalist visions of science education and imagine more sustainable and socially just futures.
