Using artificial intelligence for systematic review: the example of elicit.

BMC Medical Research Methodology | IF 3.9 | JCR Q1, Health Care Sciences & Services | CAS Tier 3 (Medicine) | Pub Date: 2025-03-18 | DOI: 10.1186/s12874-025-02528-y
Nathan Bernard, Yoshimasa Sagawa, Nathalie Bier, Thomas Lihoreau, Lionel Pazart, Thomas Tannou
Citations: 0

Abstract

Background: Artificial intelligence (AI) tools are increasingly being used to assist researchers with various research tasks, particularly in the systematic review process. Elicit is one such tool that can generate a summary of the question asked, setting it apart from other AI tools. The aim of this study is to determine whether AI-assisted research using Elicit adds value to the systematic review process compared to traditional screening methods.

Methods: We compared the results of an umbrella review conducted independently of AI with the results of an AI-based search using the same criteria. Elicit's contribution was assessed against three criteria: repeatability, reliability, and accuracy. For repeatability, the search process was repeated three times on Elicit (trial 1, trial 2, trial 3). For accuracy, the articles retrieved with Elicit were screened using the same inclusion criteria as the umbrella review. Reliability was assessed by comparing the publications identified by the Elicit search with those identified by the AI-independent search.
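The reliability comparison described above amounts to a set comparison of the two search results. A minimal sketch, assuming articles are deduplicated by DOI (the DOIs and the helper name `compare_searches` below are hypothetical, not from the study):

```python
def compare_searches(elicit_dois: set[str], umbrella_dois: set[str]) -> dict[str, int]:
    """Count articles found by both searches and by each search alone."""
    return {
        "common": len(elicit_dois & umbrella_dois),          # in both searches
        "elicit_only": len(elicit_dois - umbrella_dois),     # Elicit only
        "umbrella_only": len(umbrella_dois - elicit_dois),   # manual search only
    }

# Hypothetical example with placeholder DOIs:
elicit = {"10.1000/a", "10.1000/b", "10.1000/c"}
umbrella = {"10.1000/b", "10.1000/c", "10.1000/d"}
print(compare_searches(elicit, umbrella))
# {'common': 2, 'elicit_only': 1, 'umbrella_only': 1}
```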

Results: The repeatability test found 246, 169, and 172 results for trials 1, 2, and 3, respectively. Concerning accuracy, 6 articles were included at the conclusion of the selection process. Regarding reliability, the comparison revealed 3 articles common to both searches, 3 identified exclusively by Elicit, and 17 identified exclusively by the AI-independent umbrella review search.

Conclusion: Our findings suggest that AI research assistants, like Elicit, can serve as valuable complementary tools for researchers when designing or writing systematic reviews. However, AI tools have several limitations and should be used with caution. When using AI tools, certain principles must be followed to maintain methodological rigour and integrity. Improving the performance of AI tools such as Elicit and contributing to the development of guidelines for their use during the systematic review process will enhance their effectiveness.
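From the counts reported in the Results (3 common, 3 Elicit-only, 17 umbrella-review-only), the degree of overlap between the two final article sets follows directly. A short arithmetic sketch, not taken from the article, where "recall" is used loosely for the share of manual-search articles that Elicit also recovered:

```python
common, elicit_only, umbrella_only = 3, 3, 17

umbrella_total = common + umbrella_only   # 20 articles from the manual search
elicit_total = common + elicit_only       # 6 articles retained from Elicit

# Share of manual umbrella-review articles that Elicit also found:
recall_vs_umbrella = common / umbrella_total   # 3/20 = 0.15

# Jaccard overlap of the two final article sets:
jaccard = common / (common + elicit_only + umbrella_only)   # 3/23 ≈ 0.130

print(f"recall vs umbrella: {recall_vs_umbrella:.2f}, Jaccard: {jaccard:.3f}")
```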

Source journal

BMC Medical Research Methodology (Medicine, Health Care Sciences & Services)
CiteScore: 6.50
Self-citation rate: 2.50%
Articles per year: 298
Review time: 3-8 weeks
Journal description: BMC Medical Research Methodology is an open access journal publishing original peer-reviewed research articles on methodological approaches to healthcare research. Articles on the methodology of epidemiological research, clinical trials, and meta-analysis/systematic review are particularly encouraged, as are empirical studies of the associations between choice of methodology and study outcomes. BMC Medical Research Methodology does not aim to publish articles describing scientific methods or techniques: these should be directed to the BMC journal covering the relevant biomedical subject area.