{"title":"Can we disrupt the momentum of the AI colonization of science education?","authors":"Lucy Avraamidou","doi":"10.1002/tea.21961","DOIUrl":null,"url":null,"abstract":"<p>Not a day goes by that an advertisement for a new generative AI tool that promises to revolutionize scientific research and education does not make its way to the news. Generative AI as a silver bullet. AI tools are used to extract data (usually without consent), replace research participants, read papers, summarize papers, write papers, design lesson plans, manage students, and assess students, just to name a few. Generative AI technologies are creating a techno-utopia and a new world order.</p><p>The scientific community has increasingly been utilizing AI tools to improve research, namely, maximizing productivity by attempting to overcome human shortcomings (Messeri & Crockett, <span>2023</span>). For example, AI tools can enhance scientific research by enabling fast collection and analysis of large data sets. This, however, is not without a cost, as it poses a potential threat to scientific research related to the AI algorithmic monoculture (i.e., choices and preferences are homogeneous, as, all of us enjoy the same kind of music, clothes, or films) in the face of algorithmic curation (Kleinberg & Raghavan, <span>2021</span>). Can we, hence, ever imagine reverting to monocultural scientific research despite evidence of the value of diversity and plurality of voices and knowledge? The same question applies to education. Even though AI technologies have the potential to innovate teaching they also bring risks and challenges associated with digital monoculturalism as well as ethical, inclusive, and equitable use of AI (UNESCO, <span>2023</span>).</p><p>Educational institutions are buying into generative AI promises and hallucinations (Alkaissi & McFarlane, <span>2023</span>) and frantically trying to catch up with a mass production of AI tools. National funding agencies in different parts of the world are allocating financial support for research projects utilizing AI tools in science and education (e.g., <i>New Horizon Europe Funding for Data, Computing, and AI Technologies</i>). Several (science) education journals have dedicated special issues to an examination of the potential of AI for teaching and learning. Researchers in science education are shifting their interests toward AI to engage with “hot” research in this new world order created by the AI industry.</p><p>The problem with this new world order is that it repeats patterns of colonial history through exploitation and extraction of resources to enrich the wealthy and powerful at the great expense of the poor (Hao, <span>2022</span>). There exists a wealth of evidence pointing to how AI has exploited marginalized communities for the development of large language models, for example, ChatGPT (Perrigo, <span>2023</span>). Several studies have shed light on issues related to ethics, biases, and racial and gender stereotypes. For example, descriptions of people images through tagging (i.e., Google Cloud Vision API), personalized news feeds (i.e., Google Search, Amazon Cloud Search), virtual assistants (i.e., Siri), and large language models (i.e., ChatGPT) reflect human biases and reinforce social stereotypes: <i>Physicists are white and male</i>, <i>Black people are athletic, Asian women are beautiful, Black women are hypersexualized</i> (Kyriakou et al., <span>2019</span>; Noble, <span>2013</span>; Otterbacher et al., <span>2017</span>). 
Moreover, research findings showed that online social networks and information networks (e.g., <i>Who to Follow</i>) that rely on algorithms and used for different purposes (e.g. networking, hiring, and promotion procedures) perpetuate inequities and further discriminate against minorities (Espín-Noboa et al., <span>2022</span>).</p><p>Other studies provided evidence of the large environmental impact of AI technologies, which include energy used from both the training of models and the actual use (De Vries, <span>2023</span>; Luccioni et al., <span>2023</span>). For example, the carbon footprint of an AI prompt is 4–5 times higher than a search engine query. More concretely, if the typical 9-billion daily Google searches were instead AI chatbot queries it would require as much power to run a country with a population of about 5 million people. Another indicative example of the energy consumption of AI tools is that generating only one image using an AI model takes as much energy as fully charging a smartphone (Luccioni et al., <span>2023</span>).</p><p>These are crucial socio-scientific issues that the science education community ought to engage with through a critical approach to AI. Instead, science education is currently operating at the service of the generative and predictive AI industry, at least in the Global North, and remains largely disengaged with issues related to digital monoculturalism, algorithmic biases, ethics, and exploitation of both human and natural resources by the AI industry. Essentially, what this means is that the AI industry is currently shaping the future of science education.</p><p>In a systematic literature review examining the use of AI in school science in the period between 2010 and 2021, we found that AI applications have been used mostly to automate existing educational practices, for example, reducing workload and automatizing feedback (Heeg & Avraamidou, <span>2023</span>). Another finding of our review is that the majority of the studies reviewed were atheoretical and lacked criticality. In identifying gaps in the existing knowledge base, we found that those cut across epistemic and sociocultural domains of science learning. Research studies examining the use of AI tools in education have focused largely on cognitive goals and have remained largely disengaged with goals connected to the nature of scientific knowledge, the social nature of both scientific research and learning as well as goals related to learners' socio-emotional development.</p><p>For example, Intelligent Tutoring Systems with their focus on the cognitive needs of students, often leave unaddressed the critical challenge of supporting the need for social relationships and autonomy that are essential to learning, engaged behavior, and well-being (Collie, <span>2020</span>). For this to be happening in the post-pandemic world is at least a paradox. Because, if there is one thing that the multiple lockdowns and campus closures taught us, it is that we cannot exist without embodied affairs with other people, no matter how many machines we have at our disposal. We are not only social, but we are also relational beings. 
We live our lives not only through social interactions but also through relationships with others in social ecologies (Wenger, <span>1998</span>) where both embodiment and emotions are central (Avraamidou, <span>2020</span>).</p><p>The multiple forms of knowledge produced through social relations and how those intertwine with learners' and teachers' subjectivities, identities, values, and cultures while inherent to learning are absent from AI-driven tools, whether those are virtual tutors, chatbots, automated assessment tools, or learning analytics. Instead, the vast majority of AI systems follow a convenience-food approach to learning that promotes fast bite-sized learning over slow learning and prioritizes the use of specific learning paths for the purpose of achieving prescribed goals. Education is confused with training and students with machines that operate through an input–output process. This is reflected in tools that track the progress of students and provide analytics on their performance, engagement, and behavior to create either the “ideal” learning path or a personalized path toward an “ideal” prescribed outcome (Paolucci et al., <span>2024</span>).</p><p>This is how generative AI might promote both the dehumanization of learning and standardization of thinking instead of a celebration of the <i>infinite ways of becoming</i> a science learner (Avraamidou, <span>2020</span>). Why? Because the Anglo-American AI industry is leading an unsolicited science education reform that lacks vision, it is a-theoretical, it is de-contextualized, it remains largely oblivious to research about how people learn, it is disconnected from social and political tasks of resistance, and it has profit instead of the learner at its center.</p><p>A feminist AI will provide the frames and tools to prioritize algorithmic literacy and an understanding of how AI perpetuates biases, racism, and existing systems of oppression. Such an approach might also serve as a springboard for designing socially just AI-driven curricula that place <i>all</i> learners' identities, subjectivities, values, and cultures at the forefront.</p><p>Science education does not need an AI utopia driven by corporate, neoliberal, and eugenics paradigms (Gebru & Torres, <span>2024</span>), designed through pedagogies for the economy and a disimagination machine (Giroux, <span>2014</span>). What the field needs is a human-centered feminist AI vision framed within relationality, embodiment, and resistance, pedagogies of care, affect, and cultural sustainability to curate educational spaces where humanization of science learning and social transformation that transcend the algorithm can happen.</p><p>Can we disrupt the momentum of the AI colonization of science education? 
Yes, we can—once we step outside of corporate and capitalist visions of science education and imagine more sustainable and socially just futures.</p>","PeriodicalId":48369,"journal":{"name":"Journal of Research in Science Teaching","volume":"61 10","pages":"2570-2574"},"PeriodicalIF":3.6000,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21961","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Science Teaching","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/tea.21961","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Abstract
Not a day goes by without an advertisement for a new generative AI tool that promises to revolutionize scientific research and education making its way into the news. Generative AI as a silver bullet. AI tools are used to extract data (usually without consent), replace research participants, read papers, summarize papers, write papers, design lesson plans, manage students, and assess students, to name just a few uses. Generative AI technologies are creating a techno-utopia and a new world order.
The scientific community has increasingly been utilizing AI tools to improve research, namely, maximizing productivity by attempting to overcome human shortcomings (Messeri & Crockett, 2023). For example, AI tools can enhance scientific research by enabling fast collection and analysis of large data sets. This, however, is not without cost, as it poses a potential threat to scientific research: an AI algorithmic monoculture (i.e., choices and preferences become homogeneous, as when all of us enjoy the same kind of music, clothes, or films) emerging in the face of algorithmic curation (Kleinberg & Raghavan, 2021). Can we, then, imagine scientific research reverting to monoculture despite evidence of the value of diversity and plurality of voices and knowledge? The same question applies to education. Even though AI technologies have the potential to innovate teaching, they also bring risks and challenges associated with digital monoculturalism as well as with the ethical, inclusive, and equitable use of AI (UNESCO, 2023).
Educational institutions are buying into generative AI promises and hallucinations (Alkaissi & McFarlane, 2023) and frantically trying to catch up with the mass production of AI tools. National funding agencies in different parts of the world are allocating financial support for research projects utilizing AI tools in science and education (e.g., New Horizon Europe Funding for Data, Computing, and AI Technologies). Several (science) education journals have dedicated special issues to an examination of the potential of AI for teaching and learning. Researchers in science education are shifting their interests toward AI to engage with "hot" research in this new world order created by the AI industry.
The problem with this new world order is that it repeats patterns of colonial history through exploitation and extraction of resources to enrich the wealthy and powerful at the great expense of the poor (Hao, 2022). There exists a wealth of evidence pointing to how AI has exploited marginalized communities for the development of large language models, for example, ChatGPT (Perrigo, 2023). Several studies have shed light on issues related to ethics, biases, and racial and gender stereotypes. For example, descriptions of images of people through tagging (e.g., Google Cloud Vision API), personalized news feeds (e.g., Google Search, Amazon CloudSearch), virtual assistants (e.g., Siri), and large language models (e.g., ChatGPT) reflect human biases and reinforce social stereotypes: Physicists are white and male, Black people are athletic, Asian women are beautiful, Black women are hypersexualized (Kyriakou et al., 2019; Noble, 2013; Otterbacher et al., 2017). Moreover, research findings showed that online social networks and information networks (e.g., Who to Follow) that rely on algorithms and are used for different purposes (e.g., networking, hiring, and promotion procedures) perpetuate inequities and further discriminate against minorities (Espín-Noboa et al., 2022).
Other studies have provided evidence of the large environmental impact of AI technologies, which includes the energy used both in training models and in their actual use (De Vries, 2023; Luccioni et al., 2023). For example, the carbon footprint of an AI prompt is 4–5 times higher than that of a search engine query. More concretely, if the typical 9 billion daily Google searches were instead AI chatbot queries, they would require as much power as is needed to run a country with a population of about 5 million people. Another indicative example of the energy consumption of AI tools is that generating a single image with an AI model takes as much energy as fully charging a smartphone (Luccioni et al., 2023).
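To see the scale behind this comparison, a back-of-the-envelope calculation helps. It is illustrative only: the figure of roughly 3 Wh per chatbot query is an assumption in the range discussed by De Vries (2023), applied to the 9 billion daily searches cited above:

$$
9 \times 10^{9}\ \tfrac{\text{queries}}{\text{day}} \times 3\ \tfrac{\text{Wh}}{\text{query}} \approx 27\ \tfrac{\text{GWh}}{\text{day}} \approx 10\ \tfrac{\text{TWh}}{\text{year}}
$$

Once server and datacenter overheads are added, published estimates of this kind grow to roughly 30 TWh per year, on the order of the annual electricity consumption of a country of about 5 million people, such as Ireland.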
These are crucial socio-scientific issues that the science education community ought to engage with through a critical approach to AI. Instead, science education is currently operating at the service of the generative and predictive AI industry, at least in the Global North, and remains largely disengaged from issues related to digital monoculturalism, algorithmic biases, ethics, and the exploitation of both human and natural resources by the AI industry. Essentially, what this means is that the AI industry is currently shaping the future of science education.
In a systematic literature review examining the use of AI in school science between 2010 and 2021, we found that AI applications have been used mostly to automate existing educational practices, for example, reducing workload and automating feedback (Heeg & Avraamidou, 2023). Another finding of our review is that the majority of the studies reviewed were atheoretical and lacked criticality. In identifying gaps in the existing knowledge base, we found that they cut across epistemic and sociocultural domains of science learning. Research studies examining the use of AI tools in education have focused largely on cognitive goals and have remained largely disengaged from goals connected to the nature of scientific knowledge, the social nature of both scientific research and learning, as well as goals related to learners' socio-emotional development.
For example, Intelligent Tutoring Systems, with their focus on the cognitive needs of students, often leave unaddressed the critical challenge of supporting the need for social relationships and autonomy that are essential to learning, engaged behavior, and well-being (Collie, 2020). For this to be happening in the post-pandemic world is at least a paradox, because if there is one thing that the multiple lockdowns and campus closures taught us, it is that we cannot exist without embodied encounters with other people, no matter how many machines we have at our disposal. We are not only social but also relational beings. We live our lives not only through social interactions but also through relationships with others in social ecologies (Wenger, 1998) where both embodiment and emotions are central (Avraamidou, 2020).
The multiple forms of knowledge produced through social relations, and the ways those intertwine with learners' and teachers' subjectivities, identities, values, and cultures, are inherent to learning yet absent from AI-driven tools, whether those are virtual tutors, chatbots, automated assessment tools, or learning analytics. Instead, the vast majority of AI systems follow a convenience-food approach to learning that promotes fast, bite-sized learning over slow learning and prioritizes the use of specific learning paths for the purpose of achieving prescribed goals. Education is confused with training, and students with machines that operate through an input–output process. This is reflected in tools that track the progress of students and provide analytics on their performance, engagement, and behavior to create either the "ideal" learning path or a personalized path toward an "ideal" prescribed outcome (Paolucci et al., 2024).
This is how generative AI might promote both the dehumanization of learning and the standardization of thinking instead of a celebration of the infinite ways of becoming a science learner (Avraamidou, 2020). Why? Because the Anglo-American AI industry is leading an unsolicited science education reform that lacks vision, is atheoretical and decontextualized, remains largely oblivious to research about how people learn, is disconnected from social and political tasks of resistance, and has profit instead of the learner at its center.
A feminist AI would provide the frames and tools to prioritize algorithmic literacy and an understanding of how AI perpetuates biases, racism, and existing systems of oppression. Such an approach might also serve as a springboard for designing socially just AI-driven curricula that place all learners' identities, subjectivities, values, and cultures at the forefront.
Science education does not need an AI utopia driven by corporate, neoliberal, and eugenics paradigms (Gebru & Torres, 2024), designed through pedagogies for the economy and a disimagination machine (Giroux, 2014). What the field needs is a human-centered feminist AI vision framed within relationality, embodiment, and resistance, and within pedagogies of care, affect, and cultural sustainability, to curate educational spaces where the humanization of science learning and a social transformation that transcend the algorithm can happen.
Can we disrupt the momentum of the AI colonization of science education? Yes, we can—once we step outside of corporate and capitalist visions of science education and imagine more sustainable and socially just futures.
About the Journal
Journal of Research in Science Teaching, the official journal of NARST: A Worldwide Organization for Improving Science Teaching and Learning Through Research, publishes reports for science education researchers and practitioners on issues of science teaching and learning and science education policy. Scholarly manuscripts within the domain of the Journal of Research in Science Teaching include, but are not limited to, investigations employing qualitative, ethnographic, historical, survey, philosophical, case study research, quantitative, experimental, quasi-experimental, data mining, and data analytics approaches; position papers; policy perspectives; critical reviews of the literature; and comments and criticism.