{"title":"Automated Review Generation Method Based on Large Language Models","authors":"Shican Wu, Xiao Ma, Dehui Luo, Lulu Li, Xiangcheng Shi, Xin Chang, Xiaoyun Lin, Ran Luo, Chunlei Pei, Zhi-Jian Zhao, Jinlong Gong","doi":"arxiv-2407.20906","DOIUrl":null,"url":null,"abstract":"Literature research, vital for scientific advancement, is overwhelmed by the\nvast ocean of available information. Addressing this, we propose an automated\nreview generation method based on Large Language Models (LLMs) to streamline\nliterature processing and reduce cognitive load. In case study on propane\ndehydrogenation (PDH) catalysts, our method swiftly generated comprehensive\nreviews from 343 articles, averaging seconds per article per LLM account.\nExtended analysis of 1041 articles provided deep insights into catalysts'\ncomposition, structure, and performance. Recognizing LLMs' hallucinations, we\nemployed a multi-layered quality control strategy, ensuring our method's\nreliability and effective hallucination mitigation. Expert verification\nconfirms the accuracy and citation integrity of generated reviews,\ndemonstrating LLM hallucination risks reduced to below 0.5% with over 95%\nconfidence. Released Windows application enables one-click review generation,\naiding researchers in tracking advancements and recommending literature. This\napproach showcases LLMs' role in enhancing scientific research productivity and\nsets the stage for further exploration.","PeriodicalId":501065,"journal":{"name":"arXiv - PHYS - Data Analysis, Statistics and Probability","volume":"50 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - PHYS - Data Analysis, Statistics and Probability","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.20906","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Literature research, vital for scientific advancement, is overwhelmed by the vast ocean of available information. To address this, we propose an automated review generation method based on Large Language Models (LLMs) to streamline literature processing and reduce cognitive load. In a case study on propane dehydrogenation (PDH) catalysts, our method swiftly generated comprehensive reviews from 343 articles, averaging seconds per article per LLM account. An extended analysis of 1041 articles provided deep insights into the catalysts' composition, structure, and performance. Recognizing LLMs' tendency to hallucinate, we employed a multi-layered quality control strategy to ensure the method's reliability and effective hallucination mitigation. Expert verification confirms the accuracy and citation integrity of the generated reviews, demonstrating that LLM hallucination risks are reduced to below 0.5% with over 95% confidence. A released Windows application enables one-click review generation, aiding researchers in tracking advancements and recommending literature. This approach showcases the role of LLMs in enhancing scientific research productivity and sets the stage for further exploration.
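
Note on the confidence statement: a "below 0.5% with over 95% confidence" hallucination risk is the kind of bound obtainable from an expert spot-check via a one-sided binomial (Clopper-Pearson) upper limit. The sketch below is not the authors' verification code; the function name and the example numbers (600 checked statements, 0 errors found) are illustrative assumptions only.

    # Minimal sketch (assumed, not from the paper): one-sided Clopper-Pearson
    # upper bound on the true error (hallucination) rate, given that k_errors
    # were found among n_checked expert-verified statements.
    from scipy.stats import beta

    def upper_bound(k_errors: int, n_checked: int, confidence: float = 0.95) -> float:
        """Upper limit of the exact one-sided binomial confidence interval."""
        if k_errors >= n_checked:
            return 1.0
        return beta.ppf(confidence, k_errors + 1, n_checked - k_errors)

    # Hypothetical example: zero hallucinations found in 600 verified statements.
    # With k = 0 the bound is roughly 3 / n (the "rule of three"), so ~600
    # error-free checks are enough to claim a rate below 0.5% at 95% confidence.
    print(f"{upper_bound(0, 600):.4f}")  # ~0.0050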