[Large language models in science].

Urologie · IF 0.5 · JCR Q4 (Urology & Nephrology) · CAS Zone 4 (Medicine) · Pub Date: 2024-09-01 · Epub Date: 2024-07-24 · DOI: 10.1007/s00120-024-02396-2
Karl-Friedrich Kowalewski, Severin Rodler
{"title":"[科学中的大型语言模型]。","authors":"Karl-Friedrich Kowalewski, Severin Rodler","doi":"10.1007/s00120-024-02396-2","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>Large language models (LLMs) are gaining popularity due to their ability to communicate in a human-like manner. Their potential for science, including urology, is increasingly recognized. However, unresolved concerns regarding transparency, accountability, and the accuracy of LLM results still exist.</p><p><strong>Research question: </strong>This review examines the ethical, technical, and practical challenges as well as the potential applications of LLMs in urology and science.</p><p><strong>Materials and methods: </strong>A selective literature review was conducted to analyze current findings and developments in the field of LLMs. The review considered studies on technical aspects, ethical considerations, and practical applications in research and practice.</p><p><strong>Results: </strong>LLMs, such as GPT from OpenAI and Gemini from Google, show great potential for processing and analyzing text data. Applications in urology include creating patient information and supporting administrative tasks. However, for purely clinical and scientific questions, the methods do not yet seem mature. Currently, concerns about ethical issues and the accuracy of results persist.</p><p><strong>Conclusion: </strong>LLMs have the potential to support research and practice through efficient data processing and information provision. Despite their advantages, ethical concerns and technical challenges must be addressed to ensure responsible and trustworthy use. Increased implementation could reduce the workload of urologists and improve communication with patients.</p>","PeriodicalId":29782,"journal":{"name":"Urologie","volume":" ","pages":"860-866"},"PeriodicalIF":0.5000,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"[Large language models in science].\",\"authors\":\"Karl-Friedrich Kowalewski, Severin Rodler\",\"doi\":\"10.1007/s00120-024-02396-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objective: </strong>Large language models (LLMs) are gaining popularity due to their ability to communicate in a human-like manner. Their potential for science, including urology, is increasingly recognized. However, unresolved concerns regarding transparency, accountability, and the accuracy of LLM results still exist.</p><p><strong>Research question: </strong>This review examines the ethical, technical, and practical challenges as well as the potential applications of LLMs in urology and science.</p><p><strong>Materials and methods: </strong>A selective literature review was conducted to analyze current findings and developments in the field of LLMs. The review considered studies on technical aspects, ethical considerations, and practical applications in research and practice.</p><p><strong>Results: </strong>LLMs, such as GPT from OpenAI and Gemini from Google, show great potential for processing and analyzing text data. Applications in urology include creating patient information and supporting administrative tasks. However, for purely clinical and scientific questions, the methods do not yet seem mature. 
Currently, concerns about ethical issues and the accuracy of results persist.</p><p><strong>Conclusion: </strong>LLMs have the potential to support research and practice through efficient data processing and information provision. Despite their advantages, ethical concerns and technical challenges must be addressed to ensure responsible and trustworthy use. Increased implementation could reduce the workload of urologists and improve communication with patients.</p>\",\"PeriodicalId\":29782,\"journal\":{\"name\":\"Urologie\",\"volume\":\" \",\"pages\":\"860-866\"},\"PeriodicalIF\":0.5000,\"publicationDate\":\"2024-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Urologie\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/s00120-024-02396-2\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/7/24 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q4\",\"JCRName\":\"UROLOGY & NEPHROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Urologie","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s00120-024-02396-2","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/24 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"UROLOGY & NEPHROLOGY","Score":null,"Total":0}
Citations: 0

Abstract

Objective: Large language models (LLMs) are gaining popularity due to their ability to communicate in a human-like manner. Their potential for science, including urology, is increasingly recognized. However, concerns regarding transparency, accountability, and the accuracy of LLM outputs remain unresolved.

Research question: This review examines the ethical, technical, and practical challenges as well as the potential applications of LLMs in urology and science.

Materials and methods: A selective literature review was conducted to analyze current findings and developments in the field of LLMs. The review considered studies on technical aspects, ethical considerations, and practical applications in research and practice.

Results: LLMs such as GPT from OpenAI and Gemini from Google show great potential for processing and analyzing text data. Applications in urology include drafting patient information and supporting administrative tasks. For purely clinical and scientific questions, however, the methods do not yet appear mature, and concerns about ethical issues and the accuracy of results persist.
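As an illustration of the patient-information use case mentioned above, the sketch below shows how an LLM API could be prompted to draft plain-language material for patients. It is a minimal sketch, not taken from the article: it assumes the OpenAI Python SDK with an API key in the environment, and the model name, system prompt, and helper function are illustrative choices only.

```python
# Minimal sketch (illustrative, not from the article): drafting patient information with an LLM.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def draft_patient_information(topic: str) -> str:
    """Request a plain-language patient information draft on a urological topic."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You draft plain-language patient information for a urology clinic. "
                    "Do not give individual medical advice."
                ),
            },
            {
                "role": "user",
                "content": f"Write a short patient information sheet about: {topic}",
            },
        ],
        temperature=0.3,  # lower temperature for more consistent wording
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Any generated draft would still require review by a clinician before use,
    # in line with the accuracy and accountability concerns raised in the review.
    print(draft_patient_information("preparing for a flexible cystoscopy"))
```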

Conclusion: LLMs have the potential to support research and practice through efficient data processing and information provision. Despite their advantages, ethical concerns and technical challenges must be addressed to ensure responsible and trustworthy use. Increased implementation could reduce the workload of urologists and improve communication with patients.
