Artificial intelligence and the Journal of Research in Science Teaching

IF 3.6 · Tier 1 (Education) · Q1 EDUCATION & EDUCATIONAL RESEARCH · Journal of Research in Science Teaching, 61(4), 739-743 · Pub Date: 2024-02-23 · DOI: 10.1002/tea.21933
Troy D. Sadler, Felicia Moore Mensah, Jonathan Tam
{"title":"人工智能与《科学教学研究》杂志","authors":"Troy D. Sadler,&nbsp;Felicia Moore Mensah,&nbsp;Jonathan Tam","doi":"10.1002/tea.21933","DOIUrl":null,"url":null,"abstract":"<p>Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the <i>Journal of Research in Science Teaching</i> (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.</p><p>Establishing foundations for an AI revolution has been in progress since the mid-twentieth century (Adamopoulou &amp; Moussiades, <span>2020</span>), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. This tool along with other large language models (LLM) such as Google Bard, and Microsoft's Copilot, provide platforms that are easy to use and can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these <i>generative</i> AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., <span>2023</span>). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners, it will affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and publication processes.</p><p>Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes in data, grammar, and spelling check software embedded in word processors and online (i.e., Grammarly), and reference managers to find and cite literature. Technologies such as these examples are ubiquitous across our field, and new generative AI presents another set of tools that researchers might leverage for the sharing of their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in technological capacity for producing research publications. The users of software for data analysis, reference management, and grammar checks exert levels of control and supervision over these technologies, which is not the case when using an LLM. There is a much greater degree of opaqueness and uncertainty when it comes to generating content with an LLM as compared to generating regression coefficients with data analysis software. 
Given these distinctions between AI and other technologies used by researchers, we think AI presents a unique challenge for academic publishing and therefore warrants the additional attention called for in this editorial.</p><p>In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., <span>2022</span>). The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., <span>2023</span>).</p><p>Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai &amp; Nehm, <span>2023</span>, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist brainstorming processes. (In the interest of full disclosure, as we thought about what to claim that AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses were helpful to jump-start our thinking, but we created the final list shared above.)</p><p>We also think that it is critically important for users of AI to be aware of its limitations and problems. Some of those limitations and problems include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI is biased by the data corpus that it reviews. Models trained on biased data sets produce biased results including the propagation of gender stereotypes and racial discrimination (Heaven, <span>2023</span>). 
These platforms can also produce inaccurate results—the output can be outdated, factually inaccurate, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the products that it creates, and when asked specifically to do so, may create fictitious references (Stokel-Walker &amp; Van Noorden, <span>2023</span>). Over time, the models will improve, and the users of this technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI as well as those consuming AI-generated content to be aware of these issues.</p><p>Given both the challenges and potential associated with AI, we are not in favor of the use of generative AI to produce text for writing manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think that it would be inappropriate for a research team to use AI to generate the full text for a JRST manuscript. At this moment, we do not think that it would even be possible to do this in a way that yields a product that meets the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and that some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism of the role of AI in publishing scholarship generally, we see this hypothetical case as one of likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.</p><p>In terms of guidelines for the journal regarding AI, transparency is our key principle. When authors choose to use AI in their research and creation of manuscripts to be considered in JRST, they should openly disclose what AI tools were used and how they were used. Authors should make it clear at the time of submission what, if any, text, or other content (e.g., images or data displays) included in the manuscript was the product of an AI tool. These disclosures should be made in a manuscript's Methods section, when AI use relates to the design, enactment, or analysis of the research, or in an acknowledgments section. Ultimately, the authors are responsible for the information presented in their manuscripts. This includes accuracy of the information, proper citation of sources, and insurance of academic integrity. The editors, associate editors, and reviewers of JRST will consider AI declarations as a part of the process for publication decisions.</p><p>Whereas the use of AI tools for the preparation of manuscripts should be clearly acknowledged, these tools cannot be included as coauthors in JRST. Authorship carries with it responsibilities related to integrity, accuracy, and agreement to the journal's terms of use. AI cannot assume these responsibilities and, therefore, should not be listed as an author for JRST manuscripts. 
Human authors who submit a manuscript to JRST are responsible for all of the content presented in their manuscript regardless of the ways AI might have been used to support the process of generating the research or preparing the manuscript. The guidelines that we have outlined for JRST regarding author responsibilities, use and declaration of AI, and authorship are consistent with Wiley's guidelines for research integrity and publishing ethics. Wiley, the Publisher of JRST, includes an explicit statement on AI-generated content in their statement on ethics (https://authorservices.wiley.com/ethics-guidelines/index.html). The guidelines we share are also consistent with the Committee on Publication Ethics (COPE) position statement on AI tools (https://publicationethics.org/) and align with prevailing trends among academic publishers and journals (e.g., Flanagin et al., <span>2023</span>).</p><p>Of course, there is potential for employing AI in publication processes that go beyond conducting research and preparing manuscripts. For example, JRST regularly uses software to detect how similar newly accepted manuscripts are to previously published reports. In this case, we use a form of AI to guard against plagiarism. However, at this time, JRST does not approve of the use of generative AI in the review of manuscripts or the determination of publication decisions. Furthermore, reviewers should not upload any content from submitted manuscripts to generative AI tools. Uploading manuscripts to an AI model violates the confidentiality assumed in the JRST review process. The editorial team sends manuscripts to reviewers to read and provide feedback based on their expertise, and we expect the feedback provided to be the product of the expert reviewers and not AI. We think that reviewing and making publication decisions on science education research manuscripts requires specialized knowledge and that current AI tools cannot complete these tasks well nor do they currently have the capacity to do so.</p><p>AI holds exciting potential for many dimensions of modern life; and research, education, and publishing are certainly some of the areas that might be dramatically impacted. Just as it is exciting to consider the possibilities of AI, there are ample reasons for concern. As the editors of JRST, we think it is important for the journal to present clear guidelines for the use of AI in JRST publications and review processes. In this editorial, we have attempted to outline such a set of guidelines. As AI technologies change, these guidelines will need to be reviewed and when appropriate revised; but for now, we hope that these guidelines provide help for researchers and authors trying to navigate the current environment for science education research in which AI is clearly a part.</p><p>In addition to presenting guidelines for AI use in JRST, we hope this editorial contributes to a burgeoning conversation in the science education community about AI more generally. As nearly all commentators about AI have suggested, AI is potentially transformative, but there are many uncertainties about how we should use AI and what problems could be generated through that use. AI is already an important part of science learning environments and a tool being used in many different ways by learners and teachers (e.g., Cross, <span>2023</span>). 
While there are certainly some science education researchers responding to the AI revolution (e.g., Antonenko &amp; Bramowitz, <span>2023</span>), we think, that as a whole, the science education research community is not as far along as it needs to be in terms of understanding, theorizing, and studying the intersections of AI and science education.</p><p>To help advance this discourse, we invite scholars to submit their research related to AI in science education to JRST. Authors of empirical manuscripts, literature reviews, or explorations of theory related to the use of AI in science education are invited to submit manuscripts to the journal. In addition, we are very interested in hosting a series of commentaries that advance positions regarding what AI technologies are being used in science education, how AI should be used (or not used) to support science learning and teaching, the pitfalls and potential of AI in our field, how the field should respond to developments in AI, and so forth. Commentaries are much shorter than full article submissions (1000–2000 words) and are reviewed by the editorial team as opposed to the full review process used for other types of manuscripts. We invite scholars to send inquiries regarding the appropriateness of particular themes or purposes of potential commentaries to the JRST editors via email: <span>[email protected]</span>. Commentaries related to AI (or other topics) should be submitted through the journal's online submission platform (https://mc.manuscriptcentral.com/jrst) as a “Comment” (when asked to select article type). We look forward to conversations in the pages of JRST that can help shape the future of science education and science education research and the role of AI in that future.</p>","PeriodicalId":48369,"journal":{"name":"Journal of Research in Science Teaching","volume":"61 4","pages":"739-743"},"PeriodicalIF":3.6000,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21933","citationCount":"0","resultStr":"{\"title\":\"Artificial intelligence and the Journal of Research in Science Teaching\",\"authors\":\"Troy D. Sadler,&nbsp;Felicia Moore Mensah,&nbsp;Jonathan Tam\",\"doi\":\"10.1002/tea.21933\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the <i>Journal of Research in Science Teaching</i> (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.</p><p>Establishing foundations for an AI revolution has been in progress since the mid-twentieth century (Adamopoulou &amp; Moussiades, <span>2020</span>), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. 
This tool along with other large language models (LLM) such as Google Bard, and Microsoft's Copilot, provide platforms that are easy to use and can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these <i>generative</i> AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., <span>2023</span>). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners, it will affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and publication processes.</p><p>Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes in data, grammar, and spelling check software embedded in word processors and online (i.e., Grammarly), and reference managers to find and cite literature. Technologies such as these examples are ubiquitous across our field, and new generative AI presents another set of tools that researchers might leverage for the sharing of their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in technological capacity for producing research publications. The users of software for data analysis, reference management, and grammar checks exert levels of control and supervision over these technologies, which is not the case when using an LLM. There is a much greater degree of opaqueness and uncertainty when it comes to generating content with an LLM as compared to generating regression coefficients with data analysis software. Given these distinctions between AI and other technologies used by researchers, we think AI presents a unique challenge for academic publishing and therefore warrants the additional attention called for in this editorial.</p><p>In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., <span>2022</span>). 
The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., <span>2023</span>).</p><p>Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai &amp; Nehm, <span>2023</span>, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist brainstorming processes. (In the interest of full disclosure, as we thought about what to claim that AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses were helpful to jump-start our thinking, but we created the final list shared above.)</p><p>We also think that it is critically important for users of AI to be aware of its limitations and problems. Some of those limitations and problems include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI is biased by the data corpus that it reviews. Models trained on biased data sets produce biased results including the propagation of gender stereotypes and racial discrimination (Heaven, <span>2023</span>). These platforms can also produce inaccurate results—the output can be outdated, factually inaccurate, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the products that it creates, and when asked specifically to do so, may create fictitious references (Stokel-Walker &amp; Van Noorden, <span>2023</span>). Over time, the models will improve, and the users of this technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI as well as those consuming AI-generated content to be aware of these issues.</p><p>Given both the challenges and potential associated with AI, we are not in favor of the use of generative AI to produce text for writing manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think that it would be inappropriate for a research team to use AI to generate the full text for a JRST manuscript. 
At this moment, we do not think that it would even be possible to do this in a way that yields a product that meets the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and that some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism of the role of AI in publishing scholarship generally, we see this hypothetical case as one of likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.</p><p>In terms of guidelines for the journal regarding AI, transparency is our key principle. When authors choose to use AI in their research and creation of manuscripts to be considered in JRST, they should openly disclose what AI tools were used and how they were used. Authors should make it clear at the time of submission what, if any, text, or other content (e.g., images or data displays) included in the manuscript was the product of an AI tool. These disclosures should be made in a manuscript's Methods section, when AI use relates to the design, enactment, or analysis of the research, or in an acknowledgments section. Ultimately, the authors are responsible for the information presented in their manuscripts. This includes accuracy of the information, proper citation of sources, and insurance of academic integrity. The editors, associate editors, and reviewers of JRST will consider AI declarations as a part of the process for publication decisions.</p><p>Whereas the use of AI tools for the preparation of manuscripts should be clearly acknowledged, these tools cannot be included as coauthors in JRST. Authorship carries with it responsibilities related to integrity, accuracy, and agreement to the journal's terms of use. AI cannot assume these responsibilities and, therefore, should not be listed as an author for JRST manuscripts. Human authors who submit a manuscript to JRST are responsible for all of the content presented in their manuscript regardless of the ways AI might have been used to support the process of generating the research or preparing the manuscript. The guidelines that we have outlined for JRST regarding author responsibilities, use and declaration of AI, and authorship are consistent with Wiley's guidelines for research integrity and publishing ethics. Wiley, the Publisher of JRST, includes an explicit statement on AI-generated content in their statement on ethics (https://authorservices.wiley.com/ethics-guidelines/index.html). The guidelines we share are also consistent with the Committee on Publication Ethics (COPE) position statement on AI tools (https://publicationethics.org/) and align with prevailing trends among academic publishers and journals (e.g., Flanagin et al., <span>2023</span>).</p><p>Of course, there is potential for employing AI in publication processes that go beyond conducting research and preparing manuscripts. For example, JRST regularly uses software to detect how similar newly accepted manuscripts are to previously published reports. In this case, we use a form of AI to guard against plagiarism. 
However, at this time, JRST does not approve of the use of generative AI in the review of manuscripts or the determination of publication decisions. Furthermore, reviewers should not upload any content from submitted manuscripts to generative AI tools. Uploading manuscripts to an AI model violates the confidentiality assumed in the JRST review process. The editorial team sends manuscripts to reviewers to read and provide feedback based on their expertise, and we expect the feedback provided to be the product of the expert reviewers and not AI. We think that reviewing and making publication decisions on science education research manuscripts requires specialized knowledge and that current AI tools cannot complete these tasks well nor do they currently have the capacity to do so.</p><p>AI holds exciting potential for many dimensions of modern life; and research, education, and publishing are certainly some of the areas that might be dramatically impacted. Just as it is exciting to consider the possibilities of AI, there are ample reasons for concern. As the editors of JRST, we think it is important for the journal to present clear guidelines for the use of AI in JRST publications and review processes. In this editorial, we have attempted to outline such a set of guidelines. As AI technologies change, these guidelines will need to be reviewed and when appropriate revised; but for now, we hope that these guidelines provide help for researchers and authors trying to navigate the current environment for science education research in which AI is clearly a part.</p><p>In addition to presenting guidelines for AI use in JRST, we hope this editorial contributes to a burgeoning conversation in the science education community about AI more generally. As nearly all commentators about AI have suggested, AI is potentially transformative, but there are many uncertainties about how we should use AI and what problems could be generated through that use. AI is already an important part of science learning environments and a tool being used in many different ways by learners and teachers (e.g., Cross, <span>2023</span>). While there are certainly some science education researchers responding to the AI revolution (e.g., Antonenko &amp; Bramowitz, <span>2023</span>), we think, that as a whole, the science education research community is not as far along as it needs to be in terms of understanding, theorizing, and studying the intersections of AI and science education.</p><p>To help advance this discourse, we invite scholars to submit their research related to AI in science education to JRST. Authors of empirical manuscripts, literature reviews, or explorations of theory related to the use of AI in science education are invited to submit manuscripts to the journal. In addition, we are very interested in hosting a series of commentaries that advance positions regarding what AI technologies are being used in science education, how AI should be used (or not used) to support science learning and teaching, the pitfalls and potential of AI in our field, how the field should respond to developments in AI, and so forth. Commentaries are much shorter than full article submissions (1000–2000 words) and are reviewed by the editorial team as opposed to the full review process used for other types of manuscripts. We invite scholars to send inquiries regarding the appropriateness of particular themes or purposes of potential commentaries to the JRST editors via email: <span>[email protected]</span>. 
Commentaries related to AI (or other topics) should be submitted through the journal's online submission platform (https://mc.manuscriptcentral.com/jrst) as a “Comment” (when asked to select article type). We look forward to conversations in the pages of JRST that can help shape the future of science education and science education research and the role of AI in that future.</p>\",\"PeriodicalId\":48369,\"journal\":{\"name\":\"Journal of Research in Science Teaching\",\"volume\":\"61 4\",\"pages\":\"739-743\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2024-02-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/tea.21933\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Research in Science Teaching\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/tea.21933\",\"RegionNum\":1,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Research in Science Teaching","FirstCategoryId":"95","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/tea.21933","RegionNum":1,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}

Artificial Intelligence (AI) is a transformative technology that promises to impact many aspects of society including research, education, and publishing. We, the editors of the Journal of Research in Science Teaching (JRST), think that the journal has a responsibility to contribute to the ongoing dialogues about the use of AI in research and publishing with particular attention to the field of science education. We use this editorial to share our current ideas about the opportunities and challenges associated with AI in science education research and to sketch out new journal guidelines related to the use of AI for the production of JRST articles. We also extend an invitation to scholars to submit research articles and commentaries that advance the field's understanding of the intersections of AI and science education.

The foundations for an AI revolution have been under construction since the mid-twentieth century (Adamopoulou & Moussiades, 2020), and a giant step in public engagement with AI was taken in November 2022 when OpenAI released ChatGPT. This tool, along with other large language models (LLMs) such as Google Bard and Microsoft's Copilot, provides an easy-to-use platform that can generate content such as text, images, computer code, audio, and video. It has quickly become apparent that these generative AI tools have the potential to change education in substantial ways. There is already evidence that students and teachers are actively using AI in ways that will push the field of education to reconsider what it means to construct learning artifacts, how to assess the work of learners, and the nature of learner-technology interactions (e.g., Prather et al., 2023). Of course, generative AI will not just impact the work of students, teachers, and other educational practitioners; it will also affect how research is conducted and reported. As journal editors, we are particularly interested in the use of AI in the sharing of research and in publication processes.

Across the field of education research, and science education research more specifically, scholars use a host of technologies to support their work. For example, researchers regularly use statistical packages to derive quantitative patterns in data, qualitative software to organize and represent coded themes in data, grammar- and spelling-check software embedded in word processors and available online (e.g., Grammarly), and reference managers to find and cite literature. Such technologies are ubiquitous across our field, and generative AI presents another set of tools that researchers might leverage in sharing their scholarship. However, the now widely available LLMs seem, to us, to represent a fundamental shift in the technological capacity for producing research publications. Users of software for data analysis, reference management, and grammar checking exert a level of control and supervision over these technologies that is absent when using an LLM. There is a much greater degree of opacity and uncertainty in generating content with an LLM than in generating regression coefficients with data analysis software. Given these distinctions between AI and other technologies used by researchers, we think AI presents a unique challenge for academic publishing and therefore warrants the additional attention called for in this editorial.
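
To make the contrast concrete, here is a minimal sketch of our own (not drawn from any particular study, and using the OpenAI Python client purely as a stand-in for any LLM API): fitting a regression is fully specified and reproducible by the researcher, whereas prompting an LLM is neither.

```python
# A minimal sketch of the contrast described above (our illustration).
# Fitting a regression: the researcher specifies the data, the method,
# and the seed, so the same inputs always produce the same coefficients.
import numpy as np

rng = np.random.default_rng(seed=42)           # fixed seed: reproducible
x = rng.uniform(0, 10, size=100)
y = 2.5 * x + 1.0 + rng.normal(0, 1, size=100)
slope, intercept = np.polyfit(x, y, deg=1)     # ordinary least squares
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}")

# Generating text with an LLM: the same prompt can return different text
# on every call, and the sources and weighting behind the output are not
# inspectable by the user. (Hypothetical call, shown only for contrast;
# running it requires an API key.)
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4",
#       messages=[{"role": "user",
#                  "content": "Summarize research on inquiry teaching."}],
#   )
#   print(reply.choices[0].message.content)    # opaque, non-deterministic
```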

In considering the role of AI in publishing research, we think it is important to highlight two fundamental tensions. First, the research enterprise is about the creation of new knowledge. Researchers conduct and write about studies and other forms of scholarship as a means of generating new ideas and insights about the foci of their inquiries. We argue that AI, at least the LLMs that are currently prevalent, cannot achieve the goal of trustworthy knowledge creation. LLMs necessarily work from existing source material—they can repeat, reword, and summarize what already exists, but they do not create new knowledge. AI can be generative in the sense that it can generate content such as text, but AI is not generative from a research perspective. Second, an important hallmark of science and research is a commitment to openness and transparency. The set of social practices employed by research communities is a fundamental dimension of science itself, and open sharing and critique of methods, findings, and interpretations are some of these critical social practices (Osborne et al., 2022). The processes underlying generative AI tools in common use are not open or transparent. It is not always clear what the sources for AI generation are, how the sources are being analyzed, or why some ideas are highlighted and others are not. The phenomenon of AI hallucination, wherein an LLM generates false information based on patterns that do not exist in the source material, provides evidence of this problem. Why AI tools create content that is false or misleading is not fully understood and reflects an underlying degree of uncertainty (Athaluri et al., 2023).

Despite these concerns, we are not arguing that AI has no place in conducting and publishing research. As the authors of a recent JRST commentary suggest, “the [AI] train… has indeed left the station” (Zhai & Nehm, 2023, p. 1395). Although this statement was written specifically in response to AI's role in formative assessment, its point about the inevitability of AI extends to other aspects of our field, including publishing. We can imagine ways in which AI might be used (and is already being used) responsibly for conducting research and preparing manuscripts. For example, AI can help researchers review existing literature, generate code for analyzing data, create outlines for organizing manuscripts, and assist brainstorming processes. (In the interest of full disclosure, as we considered what to claim AI could do for researchers, we posed the following questions to ChatGPT: “How can generative AI be used responsibly for conducting research and publishing?” and “What things can AI do for researchers trying to publish their work?” Some of the responses were helpful in jump-starting our thinking, but the final list shared above is our own.)

We also think that it is critically important for users of AI to be aware of its limitations and problems. Some of those limitations and problems include bias, inaccuracy, and, as we highlighted above, limited transparency. Generative AI inherits the biases of the data corpus on which it is trained. Models trained on biased data sets produce biased results, including the propagation of gender stereotypes and racial discrimination (Heaven, 2023). These platforms can also produce inaccurate results: the output can be outdated, factually inaccurate, and occasionally nonsensical. In addition, generative AI tends not to provide citations for the products that it creates, and when asked specifically to do so, may create fictitious references (Stokel-Walker & Van Noorden, 2023). Over time, the models will improve, and the users of this technology will get better at using it. However, these concerns will not simply go away, and it is essential for scholars using generative AI, as well as those consuming AI-generated content, to be aware of these issues.

Given both the challenges and potential associated with AI, we are not in favor of the use of generative AI to produce text for writing manuscripts. However, as stewards of JRST, we recognize that AI technologies are rapidly evolving, as are the ways in which science education scholars use them, and setting overly restrictive guidelines regarding the use of AI for JRST publications could be detrimental to the journal and the JRST community. We think that it would be inappropriate for a research team to use AI to generate the full text of a JRST manuscript. At this moment, we do not think it would even be possible to do so in a way that yields a product meeting the standards for JRST publication. However, we can also imagine circumstances in which a team employs AI in a manner consistent with the uses we presented above, and some aspect of the AI-generated content ends up in the manuscript. Despite our acknowledged skepticism about the role of AI in publishing scholarship generally, we see this hypothetical case as one of what are likely numerous situations in which AI-generated content is quite appropriately included in a JRST article. In all situations in which authors employ AI, they should thoroughly review and edit the AI-generated content to check for accuracy and to ensure that ethical standards for research, including proper attribution of sources and the avoidance of plagiarism, are met.

In terms of guidelines for the journal regarding AI, transparency is our key principle. When authors choose to use AI in their research and in the creation of manuscripts to be considered by JRST, they should openly disclose what AI tools were used and how they were used. Authors should make clear at the time of submission what text or other content (e.g., images or data displays) included in the manuscript, if any, was the product of an AI tool. These disclosures should be made in a manuscript's Methods section when AI use relates to the design, enactment, or analysis of the research, or otherwise in an acknowledgments section. Ultimately, the authors are responsible for the information presented in their manuscripts. This includes the accuracy of the information, proper citation of sources, and assurance of academic integrity. The editors, associate editors, and reviewers of JRST will consider AI declarations as a part of the process for publication decisions.
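
For illustration only, here is the kind of acknowledgments-section disclosure that would be consistent with these guidelines; the wording is hypothetical, not a required template: “The authors used ChatGPT (OpenAI) to suggest an initial outline for the Discussion section and to check grammar. All AI-suggested material was reviewed, verified against cited sources, and revised by the authors, who take full responsibility for the final content.”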

Whereas the use of AI tools for the preparation of manuscripts should be clearly acknowledged, these tools cannot be included as coauthors in JRST. Authorship carries with it responsibilities related to integrity, accuracy, and agreement to the journal's terms of use. AI cannot assume these responsibilities and, therefore, should not be listed as an author for JRST manuscripts. Human authors who submit a manuscript to JRST are responsible for all of the content presented in their manuscript, regardless of the ways AI might have been used to support the process of generating the research or preparing the manuscript. The guidelines that we have outlined for JRST regarding author responsibilities, use and declaration of AI, and authorship are consistent with Wiley's guidelines for research integrity and publishing ethics. Wiley, the publisher of JRST, includes an explicit statement on AI-generated content in its statement on ethics (https://authorservices.wiley.com/ethics-guidelines/index.html). The guidelines we share are also consistent with the Committee on Publication Ethics (COPE) position statement on AI tools (https://publicationethics.org/) and align with prevailing trends among academic publishers and journals (e.g., Flanagin et al., 2023).

Of course, there is potential for employing AI in publication processes beyond conducting research and preparing manuscripts. For example, JRST regularly uses software to detect how similar newly accepted manuscripts are to previously published reports. In this case, we use a form of AI to guard against plagiarism. However, at this time, JRST does not approve of the use of generative AI in the review of manuscripts or in the determination of publication decisions. Furthermore, reviewers should not upload any content from submitted manuscripts to generative AI tools. Uploading manuscripts to an AI model violates the confidentiality assumed in the JRST review process. The editorial team sends manuscripts to reviewers to read and provide feedback based on their expertise, and we expect the feedback provided to be the product of the expert reviewers and not AI. We think that reviewing science education research manuscripts and making publication decisions about them require specialized knowledge, and current AI tools do not have the capacity to complete these tasks well.
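
As a toy sketch of the idea behind such similarity screening (our illustration, assuming scikit-learn; it is not the detection software JRST actually uses), one can represent documents as TF-IDF vectors and flag pairs with high cosine similarity:

```python
# A toy sketch of text-similarity screening (our illustration; not the
# commercial detection software JRST uses). The idea: represent documents
# as TF-IDF vectors and flag pairs whose cosine similarity is high.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

new_manuscript = "Students used generative AI tools to draft laboratory reports."
published = [
    "Learners employed generative AI to compose their lab reports.",
    "This study examines argumentation in middle school chemistry.",
]

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform([new_manuscript] + published)
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

for score, text in zip(scores, published):
    flag = "flag for review" if score > 0.5 else "ok"  # illustrative threshold
    print(f"{score:.2f}  {flag}: {text[:45]}")
```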

AI holds exciting potential for many dimensions of modern life, and research, education, and publishing are certainly among the areas that might be dramatically impacted. As exciting as it is to consider the possibilities of AI, there are also ample reasons for concern. As the editors of JRST, we think it is important for the journal to present clear guidelines for the use of AI in JRST publications and review processes. In this editorial, we have attempted to outline such a set of guidelines. As AI technologies change, these guidelines will need to be reviewed and, when appropriate, revised; but for now, we hope that these guidelines help researchers and authors navigate the current environment for science education research, in which AI is clearly a part.

In addition to presenting guidelines for AI use in JRST, we hope this editorial contributes to a burgeoning conversation in the science education community about AI more generally. As nearly all commentators on AI have suggested, AI is potentially transformative, but there are many uncertainties about how we should use AI and what problems could be generated through that use. AI is already an important part of science learning environments and a tool being used in many different ways by learners and teachers (e.g., Cross, 2023). While there are certainly some science education researchers responding to the AI revolution (e.g., Antonenko & Bramowitz, 2023), we think that, as a whole, the science education research community is not as far along as it needs to be in terms of understanding, theorizing, and studying the intersections of AI and science education.

To help advance this discourse, we invite scholars to submit their research related to AI in science education to JRST. Authors of empirical manuscripts, literature reviews, or explorations of theory related to the use of AI in science education are invited to submit manuscripts to the journal. In addition, we are very interested in hosting a series of commentaries that advance positions regarding what AI technologies are being used in science education, how AI should be used (or not used) to support science learning and teaching, the pitfalls and potential of AI in our field, how the field should respond to developments in AI, and so forth. Commentaries are much shorter than full article submissions (1000–2000 words) and are reviewed by the editorial team as opposed to the full review process used for other types of manuscripts. We invite scholars to send inquiries regarding the appropriateness of particular themes or purposes of potential commentaries to the JRST editors via email: [email protected]. Commentaries related to AI (or other topics) should be submitted through the journal's online submission platform (https://mc.manuscriptcentral.com/jrst) as a “Comment” (when asked to select article type). We look forward to conversations in the pages of JRST that can help shape the future of science education and science education research and the role of AI in that future.
