
medRxiv - Health Informatics: Latest Publications

A Framework to Assess Clinical Safety and Hallucination Rates of LLMs for Medical Text Summarisation
Pub Date: 2024-09-13 DOI: 10.1101/2024.09.12.24313556
Elham Asgari, Nina Montana-Brown, Magda Dubois, Saleh Khalil, Jasmine Balloch, Dominic Pimenta
The integration of large language models (LLMs) into healthcare settings holds great promise for improving clinical workflow efficiency and enhancing patient care, with the potential to automate tasks such as text summarisation during consultations. The fidelity between LLM outputs and ground truth information is therefore paramount in healthcare, as errors in medical summary generation can cause miscommunication between patients and clinicians, leading to incorrect diagnosis and treatment decisions and compromising patient safety. LLMs are well known to produce a variety of errors. Currently, there is no established clinical framework for assessing the safety and accuracy of LLM-generated medical text. We have developed a new approach to: a) categorise LLM errors within the clinical documentation context, b) establish clinical safety metrics for the live usage phase, and c) suggest a framework, named CREOLA, for assessing the safety risk of errors. We present clinical error metrics over 18 different LLM experimental configurations for the clinical note generation task, comprising 12,999 clinician-annotated sentences. We illustrate the utility of our platform CREOLA for iterating over LLM architectures with two experiments. Overall, we find that our best-performing experiments outperform previously reported model error rates in the note generation literature and additionally outperform human annotators. Our suggested framework can be used to assess the accuracy and safety of LLM output in the clinical context.
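The abstract does not describe CREOLA's implementation; as a minimal sketch of the kind of sentence-level aggregation such a framework implies, the snippet below computes per-configuration hallucination and major-error rates from clinician annotations. The schema (`AnnotatedSentence`, the label names, the configuration IDs) is hypothetical, not taken from the paper.

```python
from collections import Counter
from dataclasses import dataclass
from math import sqrt

@dataclass
class AnnotatedSentence:
    """One clinician-annotated sentence from a generated note (hypothetical schema)."""
    config_id: str                 # which LLM experimental configuration produced it
    label: str                     # e.g. "correct", "hallucination", "omission"
    clinically_significant: bool   # clinician-judged safety relevance

def error_rates(sentences):
    """Aggregate sentence-level annotations into per-configuration error metrics."""
    by_config = {}
    for s in sentences:
        by_config.setdefault(s.config_id, []).append(s)
    report = {}
    for config, items in by_config.items():
        n = len(items)
        counts = Counter(s.label for s in items)
        halluc = counts.get("hallucination", 0)
        major = sum(1 for s in items if s.label != "correct" and s.clinically_significant)
        p = halluc / n
        se = sqrt(p * (1 - p) / n)   # normal-approximation standard error
        report[config] = {
            "n_sentences": n,
            "hallucination_rate": p,
            "hallucination_rate_95ci": (max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)),
            "major_error_rate": major / n,
        }
    return report

# Toy example with two hypothetical configurations
demo = [
    AnnotatedSentence("gpt4_prompt_a", "correct", False),
    AnnotatedSentence("gpt4_prompt_a", "hallucination", True),
    AnnotatedSentence("gpt4_prompt_b", "omission", False),
    AnnotatedSentence("gpt4_prompt_b", "correct", False),
]
print(error_rates(demo))
```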
Citations: 0
Optimizing Large Language Models for Discharge Prediction: Best Practices in Leveraging Electronic Health Record Audit Logs
Pub Date: 2024-09-13 DOI: 10.1101/2024.09.12.24313594
Xinmeng Zhang, Chao Yan, Yuyang Yang, Zhuohang Li, Yubo Feng, Bradley A. Malin, You Chen
Electronic Health Record (EHR) audit log data are increasingly utilized for clinical tasks, from workflow modeling to predictive analyses of discharge events, adverse kidney outcomes, and hospital readmissions. These data encapsulate user-EHR interactions, reflecting both healthcare professionals' behavior and patients' health statuses. To harness this temporal information effectively, this study explores the application of Large Language Models (LLMs) in leveraging audit log data for clinical prediction tasks, specifically focusing on discharge predictions. Utilizing a year's worth of EHR data from Vanderbilt University Medical Center, we fine-tuned LLMs on 10,000 randomly selected training examples. Our findings reveal that LLaMA-2 70B, with an AUROC of 0.80 [0.77-0.82], outperforms both zero-shot GPT-4 128K (AUROC of 0.68 [0.65-0.71]) and DeBERTa (AUROC of 0.78 [0.75-0.82]). Among various serialization methods, the first-occurrence approach — wherein only the initial appearance of each event in a sequence is retained — shows superior performance. Furthermore, for the fine-tuned LLaMA-2 70B, logit outputs yield a higher AUROC of 0.80 [0.77-0.82] than text outputs (AUROC of 0.69 [0.67-0.72]). This study underscores the potential of fine-tuned LLMs, particularly when combined with strategic sequence serialization, in advancing clinical prediction tasks.
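The first-occurrence serialization described above is simple to state in code; the sketch below is a generic illustration (the event names are made up), not the authors' preprocessing pipeline.

```python
def first_occurrence_serialize(events):
    """Keep only the first appearance of each event type, preserving order.

    `events` is an ordered list of audit-log action names for one patient,
    e.g. ["open_chart", "review_labs", "open_chart", "order_meds", "review_labs"].
    """
    seen = set()
    sequence = []
    for action in events:
        if action not in seen:
            seen.add(action)
            sequence.append(action)
    return " ".join(sequence)   # space-joined string fed to the LLM tokenizer

events = ["open_chart", "review_labs", "open_chart", "order_meds", "review_labs"]
print(first_occurrence_serialize(events))   # "open_chart review_labs order_meds"
```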
Citations: 0
Enhancing Dietary Supplement Question Answer via Retrieval-Augmented Generation (RAG) with LLM
Pub Date: 2024-09-12 DOI: 10.1101/2024.09.11.24313513
Yu Hou, Rui Zhang
Objective: To enhance the accuracy and reliability of dietary supplement (DS) question answering by integrating a novel Retrieval-Augmented Generation (RAG) LLM system with an updated and integrated DS knowledge base, and by providing a user-friendly interface. Materials and Methods: We developed iDISK2.0 by integrating updated data from multiple trusted sources, including NMCD, MSKCC, DSLD, and NHPD, and applied advanced integration strategies to reduce noise. We then paired iDISK2.0 with a RAG system, leveraging the strengths of large language models (LLMs) and a biomedical knowledge graph (BKG) to address the hallucination issues inherent in standalone LLMs. The system enhances answer generation by using LLMs (GPT-4.0) to retrieve contextually relevant subgraphs from the BKG based on identified entities in the query. A user-friendly interface was built to facilitate easy access to DS knowledge through conversational text inputs. Results: iDISK2.0 encompasses 174,317 entities across seven types, six types of relationships, and 471,063 attributes. The iDISK2.0-RAG system significantly improved the accuracy of DS-related information retrieval. Our evaluations showed that the system achieved over 95% accuracy in answering True/False and multiple-choice questions, outperforming standalone LLMs. Additionally, the user-friendly interface enabled efficient interaction, allowing users to input free-form text queries and receive accurate, contextually relevant responses. The integration process minimized data noise and ensured the most up-to-date and comprehensive DS information was available to users. Conclusion: The integration of iDISK2.0 with a RAG system effectively addresses the limitations of LLMs, providing a robust solution for accurate DS information retrieval. This study underscores the importance of combining structured knowledge graphs with advanced language models to enhance the precision and reliability of information retrieval systems, ultimately supporting better-informed decisions in DS-related research and healthcare.
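As an illustration of the retrieval step the abstract describes (entity-anchored subgraph lookup feeding an LLM prompt), here is a minimal sketch with a toy triple list and a placeholder prompt builder. The triples, relation names, and prompt wording are invented, and the entity-recognition and LLM-call steps are deliberately left out.

```python
# A minimal sketch of knowledge-graph-grounded RAG, assuming a hypothetical list of
# (subject, relation, object) triples; not the actual iDISK2.0 code or schema.
KG_TRIPLES = [
    ("St. John's Wort", "interacts_with", "sertraline"),
    ("St. John's Wort", "has_use", "mild depression"),
    ("Ginkgo biloba", "interacts_with", "warfarin"),
]

def retrieve_subgraph(entities, triples):
    """Return all triples that touch any recognized entity in the query."""
    ents = {e.lower() for e in entities}
    return [t for t in triples if t[0].lower() in ents or t[2].lower() in ents]

def build_prompt(question, entities):
    """Assemble retrieved facts into a grounded prompt for an LLM."""
    facts = retrieve_subgraph(entities, KG_TRIPLES)
    context = "\n".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return (
        "Answer using only the facts below.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_prompt(
    "Can I take St. John's Wort with sertraline?",
    entities=["St. John's Wort", "sertraline"],   # entity recognition is a placeholder here
)
print(prompt)   # this prompt would then be passed to an LLM client of choice
```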
Citations: 0
Performance evaluation of an under-mattress sleep sensor versus polysomnography in >400 nights with healthy and unhealthy sleep
Pub Date: 2024-09-11 DOI: 10.1101/2024.09.09.24312921
Jack Manners, Eva Kemps, Bastien Lechat, Peter Catcheside, Danny Eckert, Hannah Scott
Consumer sleep trackers provide useful insight into sleep. However, large-scale performance evaluation studies are needed to properly understand sleep tracker accuracy. This study evaluated the performance of an under-mattress sensor in estimating sleep and wake versus polysomnography in a large sample, including individuals with and without sleep disorders and during day versus night sleep opportunities, across multiple in-laboratory studies. 183 participants (51/49% male/female, mean[SD] age=45[18] years) attended the sleep laboratory for a research study including simultaneous polysomnography and under-mattress sensor (Withings Sleep Analyzer [WSA]) recordings. Epoch-by-epoch analyses determined accuracy, sensitivity, and specificity of the WSA versus polysomnography. Bland-Altman plots examined bias in sleep duration, efficiency, onset latency, and wake after sleep onset. Overall WSA sleep-wake classification accuracy was 83%, sensitivity 95%, and specificity 37%. The WSA significantly overestimated total sleep time (48[81] minutes), sleep efficiency (9[15]%), and sleep onset latency (6[26] minutes), and underestimated wake after sleep onset (54[78] minutes). Accuracy and specificity were higher for night versus daytime sleep opportunities in healthy individuals (89% and 47% versus 82% and 26%, respectively, p<0.05). Accuracy and sensitivity were also higher for healthy individuals (89% and 97%) versus those with sleep disorders (81% and 91%, p<0.05). WSA performance is comparable to other consumer sleep trackers, with high sensitivity but poor specificity compared to polysomnography. WSA performance was reasonably stable, but more variable during daytime sleep opportunities and in people with a sleep disorder. Contactless, under-mattress sleep sensors show promise for accurate sleep monitoring, noting the tendency to overestimate sleep, particularly where wake time is high.
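Epoch-by-epoch accuracy, sensitivity, and specificity as reported above are standard confusion-matrix quantities computed over aligned 30-second epochs; the following is a generic sketch of that computation with toy data, not the study's analysis code.

```python
import numpy as np

def epoch_metrics(psg, device):
    """Epoch-by-epoch agreement between PSG and a device (1 = sleep, 0 = wake)."""
    psg, device = np.asarray(psg), np.asarray(device)
    tp = np.sum((psg == 1) & (device == 1))   # sleep scored as sleep
    tn = np.sum((psg == 0) & (device == 0))   # wake scored as wake
    fp = np.sum((psg == 0) & (device == 1))   # wake scored as sleep
    fn = np.sum((psg == 1) & (device == 0))   # sleep scored as wake
    return {
        "accuracy": (tp + tn) / psg.size,
        "sensitivity": tp / (tp + fn),        # ability to detect sleep
        "specificity": tn / (tn + fp),        # ability to detect wake
    }

# Toy example: 10 aligned 30-second epochs
psg    = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
device = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]
print(epoch_metrics(psg, device))
```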
Citations: 0
Universal coordinate on wave-shape manifold of cardiovascular waveform signal for dynamic quantification and cross-subject comparison
Pub Date: 2024-09-11 DOI: 10.1101/2024.09.09.24313272
Hau-Tieng Wu, Ruey-Hsing Chou, Shen-Chih Wang, Cheng-Hsi Chang, Yu-Ting Lin
Objective: Quantifying physiological dynamics from nonstationary time series for clinical decision-making is challenging, especially when comparing data across different subjects. We propose a solution and validate it using two real-world surgical databases, focusing on underutilized arterial blood pressure (ABP) signals. Method: We apply a manifold learning algorithm, Dynamic Diffusion Maps (DDMap), combined with the novel Universal Coordinate (UC) algorithm, to quantify dynamics from nonstationary time series. The method is demonstrated on the ABP signal and validated with liver transplant and cardiovascular surgery databases, both containing clinical outcomes. Sensitivity analyses were conducted to assess robustness and identify optimal parameters. Results: The UC application is validated by significant correlations between the derived index and clinical outcomes. Sensitivity analyses confirm the algorithm's stability and help optimize parameters. Conclusions: DDMap combined with UC enables dynamic quantification of ABP signals and comparison across subjects. This technique repurposes typically discarded ABP signals in the operating room, with potential applications to other nonstationary biomedical signals in both hospital and homecare settings.
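The paper's DDMap and UC algorithms are not specified in the abstract; as a rough illustration of the underlying manifold-learning idea, the sketch below computes a plain diffusion-map embedding of waveform segments with NumPy. The bandwidth heuristic and toy data are assumptions, and the sketch does not implement the dynamic or universal-coordinate extensions.

```python
import numpy as np

def diffusion_map(X, epsilon=None, n_components=2):
    """Generic diffusion-map embedding of row vectors in X (e.g. ABP pulse shapes)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    if epsilon is None:
        epsilon = np.median(d2[d2 > 0])        # common bandwidth heuristic
    K = np.exp(-d2 / epsilon)                  # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)       # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector; scale coordinates by eigenvalues.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]

# Toy demo: 200 noisy pulse-like segments of 50 samples each
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
X = np.array([np.sin(2 * np.pi * (t + rng.uniform(0, 0.1))) + 0.05 * rng.normal(size=50)
              for _ in range(200)])
embedding = diffusion_map(X)
print(embedding.shape)   # (200, 2)
```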
Citations: 0
ROBI: a Robust and Optimized Biomarker Identifier to increase the likelihood of discovering relevant radiomic features.
Pub Date: 2024-09-10 DOI: 10.1101/2024.09.09.24313059
Louis Rebaud, Nicolo Capobianco, Clementine Sarkozy, Anne-Segolene Cottereau, Laetitia Vercellino, Olivier Casasnovas, Catherine Thieblemont, Bruce Spottiswoode, Irene Buvat
Objectives: The Robust and Optimized Biomarker Identifier (ROBI) feature selection pipeline is introduced to improve the identification of informative biomarkers that encode information not already captured by existing features. It aims to maximize the number of discoveries while minimizing and accurately estimating the number of false positives (FP), with an adjustable selection stringency. Methods: 500 synthetic datasets and retrospective data of 378 Diffuse Large B Cell Lymphoma (DLBCL) patients were used for validation. On the DLBCL data, two established radiomic biomarkers, TMTV and Dmax, were measured from the 18F-FDG PET/CT scans, and 10,000 random ones were generated. Selection was performed and verified on each dataset. The efficacy of ROBI was compared to methods controlling for multiple testing and to a Cox model with Elasticnet penalty. Results: On synthetic datasets, ROBI selected significantly more true positives (TP) than FP (p < 0.001), and for 99.3% of datasets, the number of FP was within the estimated 95% confidence interval. ROBI significantly increased the number of TP compared to usual feature selection methods (p < 0.001). On retrospective data, ROBI selected the two established biomarkers and one random biomarker, with an estimated 95% chance of selecting 0 or 1 FP and a probability of 0.0014 of selecting only FP. Bonferroni correction selected no feature, and Elasticnet selected 101 spurious features and discarded TMTV. Conclusion: ROBI selected relevant biomarkers while effectively controlling for FP, outperforming conventional selection methods. This underscores its potential as a valuable asset for biomarker discovery.
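One common way to estimate false positives during feature selection is to spike in random decoy features and see how many pass the same threshold. The sketch below illustrates that generic idea with a simple correlation filter; it is not the ROBI pipeline itself, whose actual scoring and stringency control are not described in the abstract.

```python
import numpy as np
from scipy.stats import pearsonr

def select_with_fp_estimate(X_real, y, n_random=1000, alpha=0.01, seed=0):
    """Select features whose correlation p-value with the outcome passes a threshold,
    and estimate expected false positives from randomly generated decoy features."""
    rng = np.random.default_rng(seed)
    decoys = rng.normal(size=(len(y), n_random))

    def pvals(M):
        out = []
        for j in range(M.shape[1]):
            _, p = pearsonr(M[:, j], y)
            out.append(p)
        return np.array(out)

    p_real, p_decoy = pvals(X_real), pvals(decoys)
    selected = np.where(p_real < alpha)[0]
    # Expected FPs among real features, scaled from the decoy pass rate.
    expected_fp = (p_decoy < alpha).mean() * X_real.shape[1]
    return selected, expected_fp

# Toy example: 3 informative features and 47 noise features
rng = np.random.default_rng(1)
X = rng.normal(size=(150, 50))
y = X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(scale=1.0, size=150)
sel, fp = select_with_fp_estimate(X, y)
print("selected:", sel, "| expected false positives ~", round(float(fp), 2))
```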
Citations: 0
Predicting survival time for critically ill patients with heart failure using conformalized survival analysis
Pub Date: 2024-09-08 DOI: 10.1101/2024.09.07.24313245
Xiaomeng Wang, Zhimei Ren, Jiancheng Ye
Heart failure (HF) is a critical public health issue, particularly for critically ill patients in intensive care units (ICUs). Predicting survival outcomes in critically ill patients is a difficult yet crucially important task for timely treatment. This study utilizes a novel approach, conformalized survival analysis (CSA), designed to construct high-confidence lower bounds on the survival time of critically ill HF patients. Utilizing data from the MIMIC-IV dataset, this work demonstrates that CSA outperforms traditional survival models, such as the Cox proportional hazards model and the Accelerated Failure Time (AFT) model, particularly in providing reliable, interpretable, and individualized predictions. By applying CSA to a large, real-world dataset, the study highlights its potential to improve decision-making in critical care, offering a more nuanced and accurate tool for prognostication in a setting where precise predictions and guaranteed uncertainty quantification can significantly influence patient outcomes.
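For intuition about what a conformalized lower bound on survival time looks like, here is a split-conformal sketch that assumes fully observed (uncensored) calibration times; the published CSA method additionally reweights for censoring, which is omitted here, and the regression model choice is arbitrary.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def conformal_lower_bound(X_train, t_train, X_calib, t_calib, X_new, alpha=0.1):
    """Split-conformal lower bound on survival time.

    Simplifying assumption for illustration: calibration survival times are fully
    observed (no censoring). Censoring reweighting, as used in CSA, is omitted.
    """
    model = GradientBoostingRegressor().fit(X_train, t_train)
    # Nonconformity score: how much the model over-predicts on calibration data.
    scores = model.predict(X_calib) - t_calib
    n = len(t_calib)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(scores, level, method="higher")
    return model.predict(X_new) - q   # holds with roughly (1 - alpha) coverage

# Toy demo with synthetic positive "survival times"
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
t = np.exp(1 + X[:, 0] + 0.3 * rng.normal(size=600))
lb = conformal_lower_bound(X[:300], t[:300], X[300:550], t[300:550], X[550:])
print("empirical coverage on held-out set:", np.mean(t[550:] >= lb))
```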
Citations: 0
Knowledge mobilization with and for equity-deserving communities invested in research: A scoping review protocol
Pub Date: 2024-09-07 DOI: 10.1101/2024.09.06.24313221
Ramy Barhouche, Samson Tse, Fiona Inglis, Debbie Chaves, Erin Allison, Tina Colaco, Melody E. Morton Ninomiya
The practice of putting research into action is known by various names, depending on disciplinary norms. Knowledge mobilization, translation, and transfer (collectively referred to as K*) are three common terminologies used in research literature. Knowledge-to-action opportunities and gaps in academic research often remain obscure to non-academic researchers in communities, policy and decision makers, and practitioners who could benefit from up-to-date information on health and wellbeing. Academic research training, funding, and performance metrics rarely prioritize or address non-academic community needs from research. We propose to conduct a scoping review on reported K* in community-driven research contexts, examining the governance, processes, methods, and benefits of K*, and mapping who, what, where, and when K* terminology is used. This protocol paper outlines our approach to gathering, screening, analyzing, and reporting on available published literature from four databases.
Citations: 0
CODE - XAI: Construing and Deciphering Treatment Effects via Explainable AI using Real-world Data
Pub Date: 2024-09-06 DOI: 10.1101/2024.09.04.24312866
Mingyu Lu, Ian Covert, Nathan J. White, Su-In Lee
Determining which features drive the treatment effect for individual patients has long been a complex and critical question in clinical decision-making. Evidence from randomized controlled trials (RCTs) is the gold standard for guiding treatment decisions. However, individual patient differences often complicate the application of RCT findings, leading to imperfect treatment options. Traditional subgroup analyses fall short due to data dimensionality, type, and study design. To overcome these limitations, we propose CODE-XAI, a framework that interprets Conditional Average Treatment Effect (CATE) models using Explainable AI (XAI) to perform feature discovery. CODE-XAI provides feature attribution at the individual subject level, enhancing our understanding of treatment responses. We benchmark these XAI methods using semi-synthetic data and RCTs, demonstrating their effectiveness in uncovering feature contributions and enabling cross-cohort analysis, advancing precision medicine and scientific discovery.
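The abstract does not specify which CATE estimator or XAI method CODE-XAI uses; as a generic stand-in, the sketch below estimates CATE with a T-learner and attributes it with permutation importance on a surrogate model, using synthetic RCT-like data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# Synthetic RCT-like data: the treatment effect is driven mainly by feature 0.
rng = np.random.default_rng(0)
n, p = 2000, 6
X = rng.normal(size=(n, p))
treat = rng.integers(0, 2, size=n)
y = X[:, 1] + treat * (1.5 * X[:, 0]) + rng.normal(scale=0.5, size=n)

# T-learner: separate outcome models for the treated and control arms.
m1 = GradientBoostingRegressor().fit(X[treat == 1], y[treat == 1])
m0 = GradientBoostingRegressor().fit(X[treat == 0], y[treat == 0])
cate = m1.predict(X) - m0.predict(X)   # estimated individual treatment effects

# Attribute the CATE estimates: fit a surrogate model to the CATE and ask which
# features it relies on (permutation importance as a simple XAI stand-in).
surrogate = GradientBoostingRegressor().fit(X, cate)
imp = permutation_importance(surrogate, X, cate, n_repeats=10, random_state=0)
for j in np.argsort(-imp.importances_mean):
    print(f"feature {j}: importance {imp.importances_mean[j]:.3f}")
```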
Citations: 0
Digital risk score sensitively identifies presence of α-synuclein aggregation or dopaminergic deficit
Pub Date: 2024-09-06 DOI: 10.1101/2024.09.05.24313156
Ann-Kathrin Schalkamp, Kathryn J Peall, Neil A Harrison, Valentina Escott-Price, Payam Barnaghi, Cynthia Sandor
Background: Use of digital sensors to passively collect long-term data offers a step change in our ability to screen for early signs of disease in the general population. Smartwatch data has been shown to identify Parkinson’s disease (PD) several years before the clinical diagnosis; however, it has not been evaluated in comparison to biological and pathological markers such as dopaminergic imaging (DaTscan) or cerebrospinal fluid (CSF) alpha-synuclein seed amplification assay (SAA) in an at-risk cohort.
Citations: 0