Applying ChatGPT and AI-Powered Tools to Accelerate Evidence Reviews
Kien Nguyen-Trung, Alexander K. Saeri, Stefan Kaufman
Human Behavior and Emerging Technologies, vol. 2024, no. 1 (published 2024-12-06)
DOI: 10.1155/2024/8815424
https://onlinelibrary.wiley.com/doi/10.1155/2024/8815424
Abstract
Artificial intelligence (AI) tools have been used to improve the productivity of evidence review and synthesis since at least 2016, with EPPI-Reviewer and Abstrackr being two prominent examples. However, since the release of ChatGPT by OpenAI in late 2022, the use of generative AI for research, especially for text-based data analysis, has exploded. In this article, we used a critical reflection approach to document and evaluate the capacity of different generative AI tools such as ChatGPT, GPT for Google Sheets and Docs, Casper AI, and ChatPDF to assist in the early stages of a rapid evidence review process. Our results demonstrate that these tools can boost research productivity in formulating search strings and screening literature, but they have some notable weaknesses, including producing inconsistent results and occasional errors. We recommend that researchers exercise caution when using generative AI technologies by designing a thorough research strategy and review protocol to ensure effective monitoring and quality control.
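To make the screening step described in the abstract concrete, below is a minimal, hypothetical Python sketch of how a generative AI model could be asked to pre-screen a title and abstract against reviewer-defined inclusion criteria. The criteria, model name, and prompt wording are illustrative assumptions and are not taken from the article; any model output would still require the human monitoring and quality control the authors recommend.

```python
# Hypothetical sketch: LLM-assisted title/abstract pre-screening for a rapid
# evidence review. Criteria, model, and prompt wording are assumptions.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Illustrative inclusion criteria (not from the article)
INCLUSION_CRITERIA = """\
1. Empirical study published in 2016 or later.
2. Evaluates a behaviour change intervention.
3. Reports quantitative outcomes."""

def screen_record(title: str, abstract: str) -> str:
    """Ask the model for an INCLUDE/EXCLUDE/UNSURE judgement with a brief reason."""
    prompt = (
        "You are assisting a rapid evidence review.\n"
        f"Inclusion criteria:\n{INCLUSION_CRITERIA}\n\n"
        f"Title: {title}\nAbstract: {abstract}\n\n"
        "Answer with INCLUDE, EXCLUDE, or UNSURE, then a one-sentence reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whichever is available
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduces, but does not eliminate, the inconsistency the authors note
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(screen_record(
        "Nudging recycling behaviour in offices",
        "A field experiment testing prompts to increase workplace recycling rates.",
    ))
```

In practice, such output would be treated as a first-pass suggestion only, with a human reviewer verifying each decision, consistent with the caution urged in the abstract.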
About the journal:
Human Behavior and Emerging Technologies is an interdisciplinary journal dedicated to publishing high-impact research that enhances understanding of the complex interactions between diverse human behavior and emerging digital technologies.