Latest Publications from JMIR AI

Enhancing Clinical Relevance of Pretrained Language Models Through Integration of External Knowledge: Case Study on Cardiovascular Diagnosis From Electronic Health Records.
Pub Date : 2024-08-06 DOI: 10.2196/56932
Qiuhao Lu, Andrew Wen, Thien Nguyen, Hongfang Liu

Background: Despite their growing use in health care, pretrained language models (PLMs) often lack clinical relevance due to insufficient domain expertise and poor interpretability. A key strategy to overcome these challenges is integrating external knowledge into PLMs, enhancing their adaptability and clinical usefulness. Current biomedical knowledge graphs like UMLS (Unified Medical Language System), SNOMED CT (Systematized Nomenclature of Medicine Clinical Terms), and HPO (Human Phenotype Ontology), while comprehensive, fail to effectively connect general biomedical knowledge with physician insights. There is an equally important need for a model that integrates diverse knowledge in a way that is both unified and compartmentalized. This approach not only addresses the heterogeneous nature of domain knowledge but also recognizes the unique data and knowledge repositories of individual health care institutions, necessitating careful and respectful management of proprietary information.

Objective: This study aimed to enhance the clinical relevance and interpretability of PLMs by integrating external knowledge in a manner that respects the diversity and proprietary nature of health care data. We hypothesize that domain knowledge, when captured and distributed as stand-alone modules, can be effectively reintegrated into PLMs to significantly improve their adaptability and utility in clinical settings.

Methods: We demonstrate that through adapters, small and lightweight neural networks that enable the integration of extra information without full model fine-tuning, we can inject diverse sources of external domain knowledge into language models and improve the overall performance with an increased level of interpretability. As a practical application of this methodology, we introduce a novel task, structured as a case study, that endeavors to capture physician knowledge in assigning cardiovascular diagnoses from clinical narratives, where we extract diagnosis-comment pairs from electronic health records (EHRs) and cast the problem as text classification.
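
To make the adapter mechanism concrete, here is a minimal PyTorch sketch of the kind of bottleneck adapter such a setup typically inserts into a frozen transformer layer; the dimensions, placement, and module name are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small residual bottleneck added after a frozen transformer sublayer.

    Only the adapter parameters are trained; the pretrained language model
    weights stay frozen, which is what makes knowledge modules portable
    across institutions.
    """

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen PLM representation intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


if __name__ == "__main__":
    adapter = BottleneckAdapter()
    activations = torch.randn(2, 16, 768)  # (batch, sequence, hidden)
    print(adapter(activations).shape)      # torch.Size([2, 16, 768])
```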

Results: The study demonstrates that integrating domain knowledge into PLMs significantly improves their performance. While improvements with ClinicalBERT are more modest, likely due to its pretraining on clinical texts, BERT (bidirectional encoder representations from transformers) equipped with knowledge adapters surprisingly matches or exceeds ClinicalBERT in several metrics. This underscores the effectiveness of knowledge adapters and highlights their potential in settings with strict data privacy constraints. This approach also increases the level of interpretability of these models in a clinical context, which enhances our ability to precisely identify and apply the most relevant domain knowledge for specific tasks, thereby optimizing the model's performance and tailoring it to meet specific clinical needs.

Conclusions: This study lays the groundwork for creating health knowledge graphs infused with physician knowledge, marking a significant step forward for PLMs in health care. Notably, the model balances the comprehensiveness and selectivity of knowledge, addressing the heterogeneity of medical knowledge and the privacy requirements of individual health care institutions.

{"title":"Enhancing Clinical Relevance of Pretrained Language Models Through Integration of External Knowledge: Case Study on Cardiovascular Diagnosis From Electronic Health Records.","authors":"Qiuhao Lu, Andrew Wen, Thien Nguyen, Hongfang Liu","doi":"10.2196/56932","DOIUrl":"10.2196/56932","url":null,"abstract":"<p><strong>Background: </strong>Despite their growing use in health care, pretrained language models (PLMs) often lack clinical relevance due to insufficient domain expertise and poor interpretability. A key strategy to overcome these challenges is integrating external knowledge into PLMs, enhancing their adaptability and clinical usefulness. Current biomedical knowledge graphs like UMLS (Unified Medical Language System), SNOMED CT (Systematized Medical Nomenclature for Medicine-Clinical Terminology), and HPO (Human Phenotype Ontology), while comprehensive, fail to effectively connect general biomedical knowledge with physician insights. There is an equally important need for a model that integrates diverse knowledge in a way that is both unified and compartmentalized. This approach not only addresses the heterogeneous nature of domain knowledge but also recognizes the unique data and knowledge repositories of individual health care institutions, necessitating careful and respectful management of proprietary information.</p><p><strong>Objective: </strong>This study aimed to enhance the clinical relevance and interpretability of PLMs by integrating external knowledge in a manner that respects the diversity and proprietary nature of health care data. We hypothesize that domain knowledge, when captured and distributed as stand-alone modules, can be effectively reintegrated into PLMs to significantly improve their adaptability and utility in clinical settings.</p><p><strong>Methods: </strong>We demonstrate that through adapters, small and lightweight neural networks that enable the integration of extra information without full model fine-tuning, we can inject diverse sources of external domain knowledge into language models and improve the overall performance with an increased level of interpretability. As a practical application of this methodology, we introduce a novel task, structured as a case study, that endeavors to capture physician knowledge in assigning cardiovascular diagnoses from clinical narratives, where we extract diagnosis-comment pairs from electronic health records (EHRs) and cast the problem as text classification.</p><p><strong>Results: </strong>The study demonstrates that integrating domain knowledge into PLMs significantly improves their performance. While improvements with ClinicalBERT are more modest, likely due to its pretraining on clinical texts, BERT (bidirectional encoder representations from transformer) equipped with knowledge adapters surprisingly matches or exceeds ClinicalBERT in several metrics. This underscores the effectiveness of knowledge adapters and highlights their potential in settings with strict data privacy constraints. 
This approach also increases the level of interpretability of these models in a clinical context, which enhances our ability to precisely identify and apply the most relevant domain knowledge for specific tasks, thereby optimizing the model's performance and tailoring it to meet specific c","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e56932"},"PeriodicalIF":0.0,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11336492/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparing the Efficacy and Efficiency of Human and Generative AI: Qualitative Thematic Analyses.
Pub Date : 2024-08-02 DOI: 10.2196/54482
Maximo R Prescott, Samantha Yeager, Lillian Ham, Carlos D Rivera Saldana, Vanessa Serrano, Joey Narez, Dafna Paltin, Jorge Delgado, David J Moore, Jessica Montoya

Background: Qualitative methods are incredibly beneficial to the dissemination and implementation of new digital health interventions; however, these methods can be time intensive and slow down dissemination when timely knowledge from the data sources is needed in ever-changing health systems. Recent advancements in generative artificial intelligence (GenAI) and their underlying large language models (LLMs) may provide a promising opportunity to expedite the qualitative analysis of textual data, but their efficacy and reliability remain unknown.

Objective: The primary objectives of our study were to evaluate the consistency in themes, reliability of coding, and time needed for inductive and deductive thematic analyses between GenAI (ie, ChatGPT and Bard) and human coders.

Methods: The qualitative data for this study consisted of 40 brief SMS text message reminder prompts used in a digital health intervention for promoting antiretroviral medication adherence among people with HIV who use methamphetamine. Inductive and deductive thematic analyses of these SMS text messages were conducted by 2 independent teams of human coders. An independent human analyst conducted analyses following both approaches using ChatGPT and Bard. The consistency in themes (or the extent to which the themes were the same) and reliability (or agreement in coding of themes) between methods were compared.

Results: The themes generated by GenAI (both ChatGPT and Bard) were consistent with 71% (5/7) of the themes identified by human analysts following inductive thematic analysis. The consistency in themes was lower between humans and GenAI following a deductive thematic analysis procedure (ChatGPT: 6/12, 50%; Bard: 7/12, 58%). The percentage agreement (or intercoder reliability) for these congruent themes between human coders and GenAI ranged from fair to moderate (ChatGPT, inductive: 31/66, 47%; ChatGPT, deductive: 22/59, 37%; Bard, inductive: 20/54, 37%; Bard, deductive: 21/58, 36%). In general, ChatGPT and Bard performed similarly to each other across both types of qualitative analyses in terms of consistency of themes (inductive: 6/6, 100%; deductive: 5/6, 83%) and reliability of coding (inductive: 23/62, 37%; deductive: 22/47, 47%). On average, GenAI required significantly less overall time than human coders when conducting qualitative analysis (20, SD 3.5 min vs 567, SD 106.5 min).

Conclusions: The promising consistency in the themes generated by human coders and GenAI suggests that these technologies hold promise in reducing the resource intensiveness of qualitative thematic analysis; however, the relatively lower reliability in coding between them suggests that hybrid approaches are necessary. Human coders appeared to be better than GenAI at identifying nuanced and interpretative themes. Future studies should consider how these powerful technologies can be best used in collaboration with human coders to enhance the efficiency of qualitative research in hybrid approaches while reducing the potential ethical risks they may introduce.
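
To make the reliability figures concrete, the sketch below computes per-theme percentage agreement between two coders from binary theme codings; the theme names and codings are hypothetical and are not drawn from the study data.

```python
from typing import Dict, List


def percentage_agreement(coder_a: Dict[str, List[int]],
                         coder_b: Dict[str, List[int]]) -> Dict[str, float]:
    """Share of messages on which two coders agree, per theme.

    Each value is a list of 0/1 flags (theme absent/present) over the
    same ordered set of SMS prompts.
    """
    agreement = {}
    for theme in coder_a:
        a, b = coder_a[theme], coder_b[theme]
        agreement[theme] = sum(x == y for x, y in zip(a, b)) / len(a)
    return agreement


# Hypothetical codings of 8 SMS prompts for two made-up themes.
human = {"encouragement": [1, 1, 0, 1, 0, 1, 1, 0],
         "reminder":      [1, 0, 0, 1, 1, 1, 0, 0]}
genai = {"encouragement": [1, 0, 0, 1, 0, 1, 1, 1],
         "reminder":      [1, 0, 1, 1, 1, 0, 0, 0]}
print(percentage_agreement(human, genai))  # {'encouragement': 0.75, 'reminder': 0.75}
```
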
{"title":"Comparing the Efficacy and Efficiency of Human and Generative AI: Qualitative Thematic Analyses.","authors":"Maximo R Prescott, Samantha Yeager, Lillian Ham, Carlos D Rivera Saldana, Vanessa Serrano, Joey Narez, Dafna Paltin, Jorge Delgado, David J Moore, Jessica Montoya","doi":"10.2196/54482","DOIUrl":"10.2196/54482","url":null,"abstract":"&lt;p&gt;&lt;strong&gt;Background: &lt;/strong&gt;Qualitative methods are incredibly beneficial to the dissemination and implementation of new digital health interventions; however, these methods can be time intensive and slow down dissemination when timely knowledge from the data sources is needed in ever-changing health systems. Recent advancements in generative artificial intelligence (GenAI) and their underlying large language models (LLMs) may provide a promising opportunity to expedite the qualitative analysis of textual data, but their efficacy and reliability remain unknown.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Objective: &lt;/strong&gt;The primary objectives of our study were to evaluate the consistency in themes, reliability of coding, and time needed for inductive and deductive thematic analyses between GenAI (ie, ChatGPT and Bard) and human coders.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Methods: &lt;/strong&gt;The qualitative data for this study consisted of 40 brief SMS text message reminder prompts used in a digital health intervention for promoting antiretroviral medication adherence among people with HIV who use methamphetamine. Inductive and deductive thematic analyses of these SMS text messages were conducted by 2 independent teams of human coders. An independent human analyst conducted analyses following both approaches using ChatGPT and Bard. The consistency in themes (or the extent to which the themes were the same) and reliability (or agreement in coding of themes) between methods were compared.&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Results: &lt;/strong&gt;The themes generated by GenAI (both ChatGPT and Bard) were consistent with 71% (5/7) of the themes identified by human analysts following inductive thematic analysis. The consistency in themes was lower between humans and GenAI following a deductive thematic analysis procedure (ChatGPT: 6/12, 50%; Bard: 7/12, 58%). The percentage agreement (or intercoder reliability) for these congruent themes between human coders and GenAI ranged from fair to moderate (ChatGPT, inductive: 31/66, 47%; ChatGPT, deductive: 22/59, 37%; Bard, inductive: 20/54, 37%; Bard, deductive: 21/58, 36%). In general, ChatGPT and Bard performed similarly to each other across both types of qualitative analyses in terms of consistency of themes (inductive: 6/6, 100%; deductive: 5/6, 83%) and reliability of coding (inductive: 23/62, 37%; deductive: 22/47, 47%). On average, GenAI required significantly less overall time than human coders when conducting qualitative analysis (20, SD 3.5 min vs 567, SD 106.5 min).&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Conclusions: &lt;/strong&gt;The promising consistency in the themes generated by human coders and GenAI suggests that these technologies hold promise in reducing the resource intensiveness of qualitative thematic analysis; however, the relatively lower reliability in coding between them suggests that hybrid approaches are necessary. Human coders appeared to be better than GenAI at identifying nuanced and interpretative themes. 
Future studies should consider how these powerful technologies can be bes","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e54482"},"PeriodicalIF":0.0,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329846/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141879884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting Workers' Stress: Application of a High-Performance Algorithm Using Working-Style Characteristics.
Pub Date : 2024-08-02 DOI: 10.2196/55840
Hiroki Iwamoto, Saki Nakano, Ryotaro Tajima, Ryo Kiguchi, Yuki Yoshida, Yoshitake Kitanishi, Yasunori Aoki

Background: Work characteristics, such as teleworking rate, have been studied in relation to stress. However, the use of work-related data to improve a high-performance stress prediction model that suits an individual's lifestyle has not been evaluated.

Objective: This study aims to develop a novel, high-performance algorithm to predict an employee's stress among a group of employees with similar working characteristics.

Methods: This prospective observational study evaluated participants' responses to web‑based questionnaires, including attendance records and data collected using a wearable device. Data spanning 12 weeks (between January 17, 2022, and April 10, 2022) were collected from 194 Shionogi Group employees. Participants wore the Fitbit Charge 4 wearable device, which collected data on daily sleep, activity, and heart rate. Daily work shift data included details of working hours. Weekly questionnaire responses included the K6 questionnaire for depression/anxiety, a behavioral questionnaire, and the number of days lunch was missed. The proposed prediction model used a neighborhood cluster (N=20) with working-style characteristics similar to those of the prediction target person. Data from the previous week predicted stress levels the following week. Three models were compared by selecting appropriate training data: (1) single model, (2) proposed method 1, and (3) proposed method 2. Shapley Additive Explanations (SHAP) were calculated for the top 10 extracted features from the Extreme Gradient Boosting (XGBoost) model to evaluate the amount and contribution direction categorized by teleworking rates (mean): low: <0.2 (more than 4 days/week in office), middle: 0.2 to <0.6 (2 to 4 days/week in office), and high: ≥0.6 (less than 2 days/week in office).
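
As a rough illustration of this pipeline, the sketch below selects a working-style neighborhood with scikit-learn, fits an XGBoost classifier on it, and computes SHAP contributions; the synthetic data, feature meanings, and hyperparameters are placeholder assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(0)

# Hypothetical person-week features (sleep, activity, heart rate, work) and
# a binary label for high stress (K6 above a cutoff) in the following week.
X = rng.normal(size=(200, 6))
y = (rng.random(200) > 0.5).astype(int)
work_style = X[:, :2]  # stand-in for working-style traits, e.g., teleworking rate

# Neighborhood cluster: the 20 workers whose working style is closest to the
# prediction target, used as the training set for that target.
target = 0
nn = NearestNeighbors(n_neighbors=20).fit(work_style)
_, idx = nn.kneighbors(work_style[target:target + 1])
neighbors = idx[0]

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X[neighbors], y[neighbors])

# SHAP values quantify how much each feature pushes the stress prediction up
# or down, which is how the feature contributions in the Results are read.
shap_values = shap.TreeExplainer(model).shap_values(X[neighbors])
print(shap_values.shape)  # (20, 6): one contribution per neighbor and feature
```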

Results: Data from 190 participants were used, with a teleworking rate ranging from 0% to 79%. The area under the curve (AUC) of the proposed method 2 was 0.84 (true positive vs false positive: 0.77 vs 0.26). Among participants with low teleworking rates, most features extracted were related to sleep, followed by activity and work. Among participants with high teleworking rates, most features were related to activity, followed by sleep and work. SHAP analysis showed that for participants with high teleworking rates, skipping lunch, working more/less than scheduled, higher fluctuations in heart rate, and lower mean sleep duration contributed to stress. In participants with low teleworking rates, coming too early or late to work (before/after 9 AM), a higher/lower than mean heart rate, lower fluctuations in heart rate, and burning more/fewer calories than normal contributed to stress.

Conclusions: Forming a neighborhood cluster with similar working styles based on teleworking rates and using it as training data improved the prediction performance. The validity of the neighborhood cluster approach was indicated by the differences in contributing features, and in their direction of contribution, across teleworking levels.

Trial Registration: UMIN UMIN000046394; https://www.umin.ac.jp/ctr/index.htm

{"title":"Predicting Workers' Stress: Application of a High-Performance Algorithm Using Working-Style Characteristics.","authors":"Hiroki Iwamoto, Saki Nakano, Ryotaro Tajima, Ryo Kiguchi, Yuki Yoshida, Yoshitake Kitanishi, Yasunori Aoki","doi":"10.2196/55840","DOIUrl":"10.2196/55840","url":null,"abstract":"<p><strong>Background: </strong>Work characteristics, such as teleworking rate, have been studied in relation to stress. However, the use of work-related data to improve a high-performance stress prediction model that suits an individual's lifestyle has not been evaluated.</p><p><strong>Objective: </strong>This study aims to develop a novel, high-performance algorithm to predict an employee's stress among a group of employees with similar working characteristics.</p><p><strong>Methods: </strong>This prospective observational study evaluated participants' responses to web‑based questionnaires, including attendance records and data collected using a wearable device. Data spanning 12 weeks (between January 17, 2022, and April 10, 2022) were collected from 194 Shionogi Group employees. Participants wore the Fitbit Charge 4 wearable device, which collected data on daily sleep, activity, and heart rate. Daily work shift data included details of working hours. Weekly questionnaire responses included the K6 questionnaire for depression/anxiety, a behavioral questionnaire, and the number of days lunch was missed. The proposed prediction model used a neighborhood cluster (N=20) with working-style characteristics similar to those of the prediction target person. Data from the previous week predicted stress levels the following week. Three models were compared by selecting appropriate training data: (1) single model, (2) proposed method 1, and (3) proposed method 2. Shapley Additive Explanations (SHAP) were calculated for the top 10 extracted features from the Extreme Gradient Boosting (XGBoost) model to evaluate the amount and contribution direction categorized by teleworking rates (mean): low: <0.2 (more than 4 days/week in office), middle: 0.2 to <0.6 (2 to 4 days/week in office), and high: ≥0.6 (less than 2 days/week in office).</p><p><strong>Results: </strong>Data from 190 participants were used, with a teleworking rate ranging from 0% to 79%. The area under the curve (AUC) of the proposed method 2 was 0.84 (true positive vs false positive: 0.77 vs 0.26). Among participants with low teleworking rates, most features extracted were related to sleep, followed by activity and work. Among participants with high teleworking rates, most features were related to activity, followed by sleep and work. SHAP analysis showed that for participants with high teleworking rates, skipping lunch, working more/less than scheduled, higher fluctuations in heart rate, and lower mean sleep duration contributed to stress. In participants with low teleworking rates, coming too early or late to work (before/after 9 AM), a higher/lower than mean heart rate, lower fluctuations in heart rate, and burning more/fewer calories than normal contributed to stress.</p><p><strong>Conclusions: </strong>Forming a neighborhood cluster with similar working styles based on teleworking rates and using it as training data improved the prediction performance. 
The validity of the neighborhood cluste","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e55840"},"PeriodicalIF":0.0,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11329844/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141876895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Regulatory Frameworks for AI-Enabled Medical Device Software in China: Comparative Analysis and Review of Implications for Global Manufacturer.
Pub Date : 2024-07-29 DOI: 10.2196/46871
Yu Han, Aaron Ceross, Jeroen Bergmann

The China State Council released the New Generation Artificial Intelligence (AI) Development Plan, outlining China's ambitious aspiration to assume global leadership in AI by the year 2030. This initiative underscores the extensive applicability of AI across diverse domains, including manufacturing, law, and medicine. With China establishing itself as a major producer and consumer of medical devices, there has been a notable increase in software registrations. This study examines the proliferation of health care-related software development within China. This work presents an overview of the Chinese regulatory framework for medical device software. The analysis covers both software as a medical device and software in a medical device. A comparative approach is employed to examine the regulations governing medical devices with AI and machine learning in China, the United States, and Europe. The study highlights the significant proliferation of health care-related software development within China, which has led to an increased demand for comprehensive regulatory guidance, particularly for international manufacturers. The comparative analysis reveals distinct regulatory frameworks and requirements across the three regions. This paper provides a useful outline of the current state of regulations for medical software in China and identifies the regulatory challenges posed by the rapid advancements in AI and machine learning technologies. Understanding these challenges is crucial for international manufacturers and stakeholders aiming to navigate the complex regulatory landscape.

{"title":"Regulatory Frameworks for AI-Enabled Medical Device Software in China: Comparative Analysis and Review of Implications for Global Manufacturer.","authors":"Yu Han, Aaron Ceross, Jeroen Bergmann","doi":"10.2196/46871","DOIUrl":"10.2196/46871","url":null,"abstract":"<p><p>The China State Council released the new generation artificial intelligence (AI) development plan, outlining China's ambitious aspiration to assume global leadership in AI by the year 2030. This initiative underscores the extensive applicability of AI across diverse domains, including manufacturing, law, and medicine. With China establishing itself as a major producer and consumer of medical devices, there has been a notable increase in software registrations. This study aims to study the proliferation of health care-related software development within China. This work presents an overview of the Chinese regulatory framework for medical device software. The analysis covers both software as a medical device and software in a medical device. A comparative approach is employed to examine the regulations governing medical devices with AI and machine learning in China, the United States, and Europe. The study highlights the significant proliferation of health care-related software development within China, which has led to an increased demand for comprehensive regulatory guidance, particularly for international manufacturers. The comparative analysis reveals distinct regulatory frameworks and requirements across the three regions. This paper provides a useful outline of the current state of regulations for medical software in China and identifies the regulatory challenges posed by the rapid advancements in AI and machine learning technologies. Understanding these challenges is crucial for international manufacturers and stakeholders aiming to navigate the complex regulatory landscape.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e46871"},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319888/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141790220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimizing Clinical Trial Eligibility Design Using Natural Language Processing Models and Real-World Data: Algorithm Development and Validation.
Pub Date : 2024-07-29 DOI: 10.2196/50800
Kyeryoung Lee, Zongzhi Liu, Yun Mai, Tomi Jun, Meng Ma, Tongyu Wang, Lei Ai, Ediz Calay, William Oh, Gustavo Stolovitzky, Eric Schadt, Xiaoyan Wang

Background: Clinical trials are vital for developing new therapies but can also delay drug development. Efficient trial data management, optimized trial protocol, and accurate patient identification are critical for reducing trial timelines. Natural language processing (NLP) has the potential to achieve these objectives.

Objective: This study aims to assess the feasibility of using data-driven approaches to optimize clinical trial protocol design and identify eligible patients. This involves creating a comprehensive eligibility criteria knowledge base integrated within electronic health records using deep learning-based NLP techniques.

Methods: We obtained data of 3281 industry-sponsored phase 2 or 3 interventional clinical trials recruiting patients with non-small cell lung cancer, prostate cancer, breast cancer, multiple myeloma, ulcerative colitis, and Crohn disease from ClinicalTrials.gov, spanning the period between 2013 and 2020. A customized bidirectional long short-term memory- and conditional random field-based NLP pipeline was used to extract all eligibility criteria attributes and convert hypernym concepts into computable hyponyms along with their corresponding values. To illustrate the simulation of clinical trial design for optimization purposes, we selected a subset of patients with non-small cell lung cancer (n=2775), curated from the Mount Sinai Health System, as a pilot study.
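
For orientation, the following is a minimal PyTorch sketch of a bidirectional LSTM token tagger of the kind such pipelines build on; the CRF transition layer is omitted for brevity, and the vocabulary size and BIO tag set are hypothetical.

```python
import torch
import torch.nn as nn


class BiLstmTagger(nn.Module):
    """Bidirectional LSTM tagger for eligibility-criterion entities.

    A simplified stand-in for a BiLSTM-CRF pipeline: the CRF layer is
    replaced here by a per-token classification over BIO tags.
    """

    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        states, _ = self.lstm(self.embed(token_ids))
        return self.tag_head(states)  # (batch, seq_len, num_tags) logits


if __name__ == "__main__":
    # Hypothetical BIO tag set for criteria attributes such as condition,
    # biomarker, value, and temporal qualifier.
    tagger = BiLstmTagger(vocab_size=5000, num_tags=9)
    tokens = torch.randint(1, 5000, (4, 25))    # 4 criteria, 25 tokens each
    print(tagger(tokens).argmax(dim=-1).shape)  # torch.Size([4, 25])
```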

Results: We manually annotated the clinical trial eligibility corpus (485/3281, 14.78% trials) and constructed an eligibility criteria-specific ontology. Our customized NLP pipeline, developed based on the eligibility criteria-specific ontology that we created through manual annotation, achieved high precision (0.91, range 0.67-1.00) and recall (0.79, range 0.50-1) scores, as well as a high F1-score (0.83, range 0.67-1), enabling the efficient extraction of granular criteria entities and relevant attributes from 3281 clinical trials. A standardized eligibility criteria knowledge base, compatible with electronic health records, was developed by transforming hypernym concepts into machine-interpretable hyponyms along with their corresponding values. In addition, an interface prototype demonstrated the practicality of leveraging real-world data for optimizing clinical trial protocols and identifying eligible patients.
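
The entity-level scores above follow the usual precision, recall, and F1 definitions; the short sketch below shows the arithmetic on a hypothetical criterion, with made-up gold and predicted entities.

```python
def entity_prf(gold: set, predicted: set) -> tuple:
    """Entity-level precision, recall, and F1 with exact-match scoring.

    Entities are (text span, attribute type) pairs for one criterion.
    """
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# Hypothetical criterion: "ECOG performance status 0-1 and no prior chemotherapy"
gold = {("ECOG performance status", "Score"), ("0-1", "Value"),
        ("prior chemotherapy", "Treatment")}
predicted = {("ECOG performance status", "Score"), ("0-1", "Value"),
             ("chemotherapy", "Treatment")}
print(entity_prf(gold, predicted))  # roughly (0.67, 0.67, 0.67) under exact matching
```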

Conclusions: Our customized NLP pipeline successfully generated a standardized eligibility criteria knowledge base by transforming hypernym criteria into machine-readable hyponyms along with their corresponding values. A prototype interface integrating real-world patient information allows us to assess the impact of each eligibility criterion on the number of patients eligible for the trial. Leveraging NLP and real-world data in a data-driven approach holds promise for streamlining the overall clinical trial process, optimizing procedures, and improving the efficiency of patient identification.

{"title":"Optimizing Clinical Trial Eligibility Design Using Natural Language Processing Models and Real-World Data: Algorithm Development and Validation.","authors":"Kyeryoung Lee, Zongzhi Liu, Yun Mai, Tomi Jun, Meng Ma, Tongyu Wang, Lei Ai, Ediz Calay, William Oh, Gustavo Stolovitzky, Eric Schadt, Xiaoyan Wang","doi":"10.2196/50800","DOIUrl":"10.2196/50800","url":null,"abstract":"<p><strong>Background: </strong>Clinical trials are vital for developing new therapies but can also delay drug development. Efficient trial data management, optimized trial protocol, and accurate patient identification are critical for reducing trial timelines. Natural language processing (NLP) has the potential to achieve these objectives.</p><p><strong>Objective: </strong>This study aims to assess the feasibility of using data-driven approaches to optimize clinical trial protocol design and identify eligible patients. This involves creating a comprehensive eligibility criteria knowledge base integrated within electronic health records using deep learning-based NLP techniques.</p><p><strong>Methods: </strong>We obtained data of 3281 industry-sponsored phase 2 or 3 interventional clinical trials recruiting patients with non-small cell lung cancer, prostate cancer, breast cancer, multiple myeloma, ulcerative colitis, and Crohn disease from ClinicalTrials.gov, spanning the period between 2013 and 2020. A customized bidirectional long short-term memory- and conditional random field-based NLP pipeline was used to extract all eligibility criteria attributes and convert hypernym concepts into computable hyponyms along with their corresponding values. To illustrate the simulation of clinical trial design for optimization purposes, we selected a subset of patients with non-small cell lung cancer (n=2775), curated from the Mount Sinai Health System, as a pilot study.</p><p><strong>Results: </strong>We manually annotated the clinical trial eligibility corpus (485/3281, 14.78% trials) and constructed an eligibility criteria-specific ontology. Our customized NLP pipeline, developed based on the eligibility criteria-specific ontology that we created through manual annotation, achieved high precision (0.91, range 0.67-1.00) and recall (0.79, range 0.50-1) scores, as well as a high F<sub>1</sub>-score (0.83, range 0.67-1), enabling the efficient extraction of granular criteria entities and relevant attributes from 3281 clinical trials. A standardized eligibility criteria knowledge base, compatible with electronic health records, was developed by transforming hypernym concepts into machine-interpretable hyponyms along with their corresponding values. In addition, an interface prototype demonstrated the practicality of leveraging real-world data for optimizing clinical trial protocols and identifying eligible patients.</p><p><strong>Conclusions: </strong>Our customized NLP pipeline successfully generated a standardized eligibility criteria knowledge base by transforming hypernym criteria into machine-readable hyponyms along with their corresponding values. A prototype interface integrating real-world patient information allows us to assess the impact of each eligibility criterion on the number of patients eligible for the trial. 
Leveraging NLP and real-world data in a data-driven approach holds promise for streamlining the overall clinical trial process, optimizi","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e50800"},"PeriodicalIF":0.0,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11319878/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141790219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Use of Deep Neural Networks to Predict Obesity With Short Audio Recordings: Development and Usability Study.
Pub Date : 2024-07-25 DOI: 10.2196/54885
Jingyi Huang, Peiqi Guo, Sheng Zhang, Mengmeng Ji, Ruopeng An

Background: The escalating global prevalence of obesity has necessitated the exploration of novel diagnostic approaches. Recent scientific inquiries have indicated potential alterations in voice characteristics associated with obesity, suggesting the feasibility of using voice as a noninvasive biomarker for obesity detection.

Objective: This study aims to use deep neural networks to predict obesity status through the analysis of short audio recordings, investigating the relationship between vocal characteristics and obesity.

Methods: A pilot study was conducted with 696 participants, using self-reported BMI to classify individuals into obesity and nonobesity groups. Audio recordings of participants reading a short script were transformed into spectrograms and analyzed using an adapted YOLOv8 model (Ultralytics). The model performance was evaluated using accuracy, recall, precision, and F1-scores.
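
A minimal sketch of this audio-to-image workflow is shown below, assuming librosa for the log-mel spectrogram and the Ultralytics YOLOv8 classification interface; the file names, sampling rate, and pretrained checkpoint are illustrative assumptions, and this is not the study's training code.

```python
import numpy as np
import librosa
import matplotlib.pyplot as plt
from ultralytics import YOLO

# Turn one short recording into a log-mel spectrogram image, the input
# representation the classifier sees (file names are placeholders).
audio, sr = librosa.load("reading_sample.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)
plt.imsave("reading_sample.png", mel_db, origin="lower", cmap="magma")

# Start from the YOLOv8 classification checkpoint; in practice the model
# would first be fine-tuned on a folder of labeled spectrograms, e.g.
# model.train(data="spectrogram_dataset", epochs=20, imgsz=224).
model = YOLO("yolov8n-cls.pt")
prediction = model("reading_sample.png")[0]
print(prediction.names[prediction.probs.top1], float(prediction.probs.top1conf))
```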

Results: The adapted YOLOv8 model demonstrated a global accuracy of 0.70 and a macro F1-score of 0.65. It was more effective in identifying nonobesity (F1-score of 0.77) than obesity (F1-score of 0.53). This moderate level of accuracy highlights the potential and challenges in using vocal biomarkers for obesity detection.

Conclusions: While the study shows promise in the field of voice-based medical diagnostics for obesity, it faces limitations such as reliance on self-reported BMI data and a small, homogenous sample size. These factors, coupled with variability in recording quality, necessitate further research with more robust methodologies and diverse samples to enhance the validity of this novel approach. The findings lay a foundational step for future investigations in using voice as a noninvasive biomarker for obesity detection.

{"title":"Use of Deep Neural Networks to Predict Obesity With Short Audio Recordings: Development and Usability Study.","authors":"Jingyi Huang, Peiqi Guo, Sheng Zhang, Mengmeng Ji, Ruopeng An","doi":"10.2196/54885","DOIUrl":"10.2196/54885","url":null,"abstract":"<p><strong>Background: </strong>The escalating global prevalence of obesity has necessitated the exploration of novel diagnostic approaches. Recent scientific inquiries have indicated potential alterations in voice characteristics associated with obesity, suggesting the feasibility of using voice as a noninvasive biomarker for obesity detection.</p><p><strong>Objective: </strong>This study aims to use deep neural networks to predict obesity status through the analysis of short audio recordings, investigating the relationship between vocal characteristics and obesity.</p><p><strong>Methods: </strong>A pilot study was conducted with 696 participants, using self-reported BMI to classify individuals into obesity and nonobesity groups. Audio recordings of participants reading a short script were transformed into spectrograms and analyzed using an adapted YOLOv8 model (Ultralytics). The model performance was evaluated using accuracy, recall, precision, and F<sub>1</sub>-scores.</p><p><strong>Results: </strong>The adapted YOLOv8 model demonstrated a global accuracy of 0.70 and a macro F<sub>1</sub>-score of 0.65. It was more effective in identifying nonobesity (F<sub>1</sub>-score of 0.77) than obesity (F<sub>1</sub>-score of 0.53). This moderate level of accuracy highlights the potential and challenges in using vocal biomarkers for obesity detection.</p><p><strong>Conclusions: </strong>While the study shows promise in the field of voice-based medical diagnostics for obesity, it faces limitations such as reliance on self-reported BMI data and a small, homogenous sample size. These factors, coupled with variability in recording quality, necessitate further research with more robust methodologies and diverse samples to enhance the validity of this novel approach. The findings lay a foundational step for future investigations in using voice as a noninvasive biomarker for obesity detection.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e54885"},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11310637/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141763047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Type 2 Diabetes Treatment Decisions With Interpretable Machine Learning Models for Predicting Hemoglobin A1c Changes: Machine Learning Model Development.
Pub Date : 2024-07-18 DOI: 10.2196/56700
Hisashi Kurasawa, Kayo Waki, Tomohisa Seki, Akihiro Chiba, Akinori Fujino, Katsuyoshi Hayashi, Eri Nakahara, Tsuneyuki Haga, Takashi Noguchi, Kazuhiko Ohe

Background: Type 2 diabetes (T2D) is a significant global health challenge. Physicians need to assess whether future glycemic control will be poor on the current trajectory of usual care and usual-care treatment intensifications so that they can consider taking extra treatment measures to prevent poor outcomes. Predicting poor glycemic control from trends in hemoglobin A1c (HbA1c) levels is difficult due to the influence of seasonal fluctuations and other factors.

Objective: We sought to develop a model that accurately predicts poor glycemic control among patients with T2D receiving usual care.

Methods: Our machine learning model predicts poor glycemic control (HbA1c≥8%) using the transformer architecture, incorporating an attention mechanism to process irregularly spaced HbA1c time series and quantify temporal relationships of past HbA1c levels at each time point. We assessed the model using HbA1c levels from 7787 patients with T2D seeing specialist physicians at the University of Tokyo Hospital. The training data include instances of poor glycemic control occurring during usual care with usual-care treatment intensifications. We compared prediction accuracy, assessed with the area under the receiver operating characteristic curve, the area under the precision-recall curve, and the accuracy rate, to that of LightGBM.
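
The following is a minimal PyTorch sketch of one way to apply attention to an irregularly spaced HbA1c series by encoding each visit as a (value, days-since-previous-visit) pair; the layer sizes and input encoding are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn


class HbA1cTransformer(nn.Module):
    """Attention over an irregularly spaced HbA1c history.

    Each visit is encoded as (HbA1c value, days since the previous visit);
    the elapsed-time feature stands in for the positional information that a
    regularly sampled series would carry implicitly.
    """

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(2, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, n_visits, 2) -> probability of poor control next.
        encoded = self.encoder(self.input_proj(visits))
        return torch.sigmoid(self.head(encoded[:, -1]))  # last-visit token


if __name__ == "__main__":
    model = HbA1cTransformer()
    history = torch.tensor([[[7.1, 0.0], [7.6, 90.0], [8.2, 35.0]]])
    print(model(history))  # probability of HbA1c >= 8% (untrained weights)
```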

Results: The area under the receiver operating characteristic curve, the area under the precision-recall curve, and the accuracy rate (95% confidence limits) of the proposed model were 0.925 (95% CI 0.923-0.928), 0.864 (95% CI 0.852-0.875), and 0.864 (95% CI 0.86-0.869), respectively. The proposed model achieved high prediction accuracy comparable to or surpassing LightGBM's performance. The model prioritized the most recent HbA1c levels for predictions. Older HbA1c levels in patients with poor glycemic control were slightly more influential in predictions compared to patients with good glycemic control.

Conclusions: The proposed model accurately predicts poor glycemic control for patients with T2D receiving usual care, including patients receiving usual-care treatment intensifications, allowing physicians to identify cases warranting extraordinary treatment intensifications. If used by a nonspecialist, the model's indication of likely future poor glycemic control may warrant a referral to a specialist. Future efforts could incorporate diverse and large-scale clinical data for improved accuracy.

{"title":"Enhancing Type 2 Diabetes Treatment Decisions With Interpretable Machine Learning Models for Predicting Hemoglobin A1c Changes: Machine Learning Model Development.","authors":"Hisashi Kurasawa, Kayo Waki, Tomohisa Seki, Akihiro Chiba, Akinori Fujino, Katsuyoshi Hayashi, Eri Nakahara, Tsuneyuki Haga, Takashi Noguchi, Kazuhiko Ohe","doi":"10.2196/56700","DOIUrl":"10.2196/56700","url":null,"abstract":"<p><strong>Background: </strong>Type 2 diabetes (T2D) is a significant global health challenge. Physicians need to assess whether future glycemic control will be poor on the current trajectory of usual care and usual-care treatment intensifications so that they can consider taking extra treatment measures to prevent poor outcomes. Predicting poor glycemic control from trends in hemoglobin A<sub>1c</sub> (HbA<sub>1c</sub>) levels is difficult due to the influence of seasonal fluctuations and other factors.</p><p><strong>Objective: </strong>We sought to develop a model that accurately predicts poor glycemic control among patients with T2D receiving usual care.</p><p><strong>Methods: </strong>Our machine learning model predicts poor glycemic control (HbA<sub>1c</sub>≥8%) using the transformer architecture, incorporating an attention mechanism to process irregularly spaced HbA<sub>1c</sub> time series and quantify temporal relationships of past HbA<sub>1c</sub> levels at each time point. We assessed the model using HbA<sub>1c</sub> levels from 7787 patients with T2D seeing specialist physicians at the University of Tokyo Hospital. The training data include instances of poor glycemic control occurring during usual care with usual-care treatment intensifications. We compared prediction accuracy, assessed with the area under the receiver operating characteristic curve, the area under the precision-recall curve, and the accuracy rate, to that of LightGBM.</p><p><strong>Results: </strong>The area under the receiver operating characteristic curve, the area under the precision-recall curve, and the accuracy rate (95% confidence limits) of the proposed model were 0.925 (95% CI 0.923-0.928), 0.864 (95% CI 0.852-0.875), and 0.864 (95% CI 0.86-0.869), respectively. The proposed model achieved high prediction accuracy comparable to or surpassing LightGBM's performance. The model prioritized the most recent HbA<sub>1c</sub> levels for predictions. Older HbA<sub>1c</sub> levels in patients with poor glycemic control were slightly more influential in predictions compared to patients with good glycemic control.</p><p><strong>Conclusions: </strong>The proposed model accurately predicts poor glycemic control for patients with T2D receiving usual care, including patients receiving usual-care treatment intensifications, allowing physicians to identify cases warranting extraordinary treatment intensifications. If used by a nonspecialist, the model's indication of likely future poor glycemic control may warrant a referral to a specialist. 
Future efforts could incorporate diverse and large-scale clinical data for improved accuracy.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e56700"},"PeriodicalIF":0.0,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294778/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141636021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multiscale Bowel Sound Event Spotting in Highly Imbalanced Wearable Monitoring Data: Algorithm Development and Validation Study.
Pub Date : 2024-07-10 DOI: 10.2196/51118
Annalisa Baronetto, Luisa Graf, Sarah Fischer, Markus F Neurath, Oliver Amft

Background: Abdominal auscultation (ie, listening to bowel sounds [BSs]) can be used to analyze digestion. Automated retrieval of BSs would be beneficial for assessing gastrointestinal disorders noninvasively.

Objective: This study aims to develop a multiscale spotting model to detect BSs in continuous audio data from a wearable monitoring system.

Methods: We designed a spotting model based on the Efficient-U-Net (EffUNet) architecture to analyze 10-second audio segments at a time and spot BSs with a temporal resolution of 25 ms. Evaluation data were collected across different digestive phases from 18 healthy participants and 9 patients with inflammatory bowel disease (IBD). Audio data were recorded in a daytime setting with a smart T-Shirt that embeds digital microphones. The data set was annotated by independent raters with substantial agreement (Cohen κ between 0.70 and 0.75), resulting in 136 hours of labeled data. In total, 11,482 BSs were analyzed, with a BS duration ranging between 18 ms and 6.3 seconds. The share of BSs in the data set (BS ratio) was 0.0089. We analyzed the performance depending on noise level, BS duration, and BS event rate. We also report spotting timing errors.
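
To illustrate what spotting at a 25 ms temporal resolution involves, the sketch below merges consecutive above-threshold frame probabilities into event intervals; the threshold and scores are made up, and this is not the post-processing the authors report.

```python
from typing import List, Tuple


def frames_to_events(probs: List[float], threshold: float = 0.5,
                     hop_s: float = 0.025) -> List[Tuple[float, float]]:
    """Merge consecutive above-threshold 25 ms frames into BS events.

    `probs` are frame-level bowel-sound probabilities for one segment, in
    temporal order; returned tuples are (onset_s, offset_s).
    """
    events, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i * hop_s                  # event onset
        elif p < threshold and start is not None:
            events.append((start, i * hop_s))  # event offset
            start = None
    if start is not None:                      # event runs to the segment end
        events.append((start, len(probs) * hop_s))
    return events


# Hypothetical scores for twelve frames (300 ms of audio).
scores = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3, 0.1, 0.6, 0.9, 0.2, 0.1, 0.1]
print(frames_to_events(scores))  # [(0.05, 0.125), (0.175, 0.225)]
```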

Results: Leave-one-participant-out cross-validation of BS event spotting yielded a median F1-score of 0.73 for both healthy volunteers and patients with IBD. EffUNet detected BSs under different noise conditions with 0.73 recall and 0.72 precision. In particular, for a signal-to-noise ratio over 4 dB, more than 83% of BSs were recognized, with precision of 0.77 or more. EffUNet recall dropped below 0.60 for BS duration of 1.5 seconds or less. At a BS ratio greater than 0.05, the precision of our model was over 0.83. For both healthy participants and patients with IBD, insertion and deletion timing errors were the largest, with a total of 15.54 minutes of insertion errors and 13.08 minutes of deletion errors over the total audio data set. On our data set, EffUNet outperformed existing BS spotting models that provide similar temporal resolution.

Conclusions: The EffUNet spotter is robust against background noise and can retrieve BSs with varying duration. EffUNet outperforms previous BS detection approaches in unmodified audio data, containing highly sparse BS events.

{"title":"Multiscale Bowel Sound Event Spotting in Highly Imbalanced Wearable Monitoring Data: Algorithm Development and Validation Study.","authors":"Annalisa Baronetto, Luisa Graf, Sarah Fischer, Markus F Neurath, Oliver Amft","doi":"10.2196/51118","DOIUrl":"10.2196/51118","url":null,"abstract":"<p><strong>Background: </strong>Abdominal auscultation (i.e., listening to bowel sounds (BSs)) can be used to analyze digestion. An automated retrieval of BS would be beneficial to assess gastrointestinal disorders noninvasively.</p><p><strong>Objective: </strong>This study aims to develop a multiscale spotting model to detect BSs in continuous audio data from a wearable monitoring system.</p><p><strong>Methods: </strong>We designed a spotting model based on the Efficient-U-Net (EffUNet) architecture to analyze 10-second audio segments at a time and spot BSs with a temporal resolution of 25 ms. Evaluation data were collected across different digestive phases from 18 healthy participants and 9 patients with inflammatory bowel disease (IBD). Audio data were recorded in a daytime setting with a smart T-Shirt that embeds digital microphones. The data set was annotated by independent raters with substantial agreement (Cohen κ between 0.70 and 0.75), resulting in 136 hours of labeled data. In total, 11,482 BSs were analyzed, with a BS duration ranging between 18 ms and 6.3 seconds. The share of BSs in the data set (BS ratio) was 0.0089. We analyzed the performance depending on noise level, BS duration, and BS event rate. We also report spotting timing errors.</p><p><strong>Results: </strong>Leave-one-participant-out cross-validation of BS event spotting yielded a median F<sub>1</sub>-score of 0.73 for both healthy volunteers and patients with IBD. EffUNet detected BSs under different noise conditions with 0.73 recall and 0.72 precision. In particular, for a signal-to-noise ratio over 4 dB, more than 83% of BSs were recognized, with precision of 0.77 or more. EffUNet recall dropped below 0.60 for BS duration of 1.5 seconds or less. At a BS ratio greater than 0.05, the precision of our model was over 0.83. For both healthy participants and patients with IBD, insertion and deletion timing errors were the largest, with a total of 15.54 minutes of insertion errors and 13.08 minutes of deletion errors over the total audio data set. On our data set, EffUNet outperformed existing BS spotting models that provide similar temporal resolution.</p><p><strong>Conclusions: </strong>The EffUNet spotter is robust against background noise and can retrieve BSs with varying duration. EffUNet outperforms previous BS detection approaches in unmodified audio data, containing highly sparse BS events.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e51118"},"PeriodicalIF":0.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11269970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
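
Event-level scores such as the median F1 reported above come from matching predicted events against annotated ones across leave-one-participant-out folds. The following sketch uses a simple overlap criterion; this matching rule is an assumption for illustration and may differ from the paper's exact scoring protocol.

def overlaps(a, b):
    # True if two (onset, offset) intervals, in seconds, overlap.
    return a[0] < b[1] and b[0] < a[1]

def event_scores(predicted, reference):
    # A predicted event is a true positive if it overlaps any reference event;
    # a reference event counts as missed (deletion) if no prediction overlaps it.
    tp = sum(any(overlaps(p, r) for r in reference) for p in predicted)
    missed = sum(not any(overlaps(r, p) for p in predicted) for r in reference)
    precision = tp / len(predicted) if predicted else 0.0
    recall = (len(reference) - missed) / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: two annotated BS events, one correct detection and one false alarm.
reference = [(3.0, 3.5), (7.2, 7.3)]
predicted = [(3.05, 3.4), (9.0, 9.1)]
print(event_scores(predicted, reference))  # (0.5, 0.5, 0.5)
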
Correction: Feasibility of Multimodal Artificial Intelligence Using GPT-4 Vision for the Classification of Middle Ear Disease: Qualitative Study and Validation.
Pub Date : 2024-07-09 DOI: 10.2196/62990
Masao Noda, Hidekane Yoshimura, Takuya Okubo, Ryota Koshu, Yuki Uchiyama, Akihiro Nomura, Makoto Ito, Yutaka Takumi

[This corrects the article DOI: 10.2196/58342.].

{"title":"Correction: Feasibility of Multimodal Artificial Intelligence Using GPT-4 Vision for the Classification of Middle Ear Disease: Qualitative Study and Validation.","authors":"Masao Noda, Hidekane Yoshimura, Takuya Okubo, Ryota Koshu, Yuki Uchiyama, Akihiro Nomura, Makoto Ito, Yutaka Takumi","doi":"10.2196/62990","DOIUrl":"10.2196/62990","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.2196/58342.].</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e62990"},"PeriodicalIF":0.0,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11267114/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmenting Telepostpartum Care With Vision-Based Detection of Breastfeeding-Related Conditions: Algorithm Development and Validation.
Pub Date : 2024-06-24 DOI: 10.2196/54798
Jessica De Souza, Varun Kumar Viswanath, Jessica Maria Echterhoff, Kristina Chamberlain, Edward Jay Wang

Background: Breastfeeding benefits both the mother and infant and is a topic of attention in public health. After childbirth, untreated medical conditions or lack of support lead many mothers to discontinue breastfeeding. For instance, nipple damage and mastitis affect 80% and 20% of US mothers, respectively. Lactation consultants (LCs) help mothers with breastfeeding, providing in-person, remote, and hybrid lactation support. LCs guide, encourage, and find ways for mothers to have a better breastfeeding experience. Current telehealth services help mothers seek breastfeeding support from LCs, with images helping consultants identify and address many issues. Because of the disproportionate ratio of LCs to mothers in need, these professionals are often overloaded and burned out.

Objective: This study aims to investigate the effectiveness of 5 distinct convolutional neural networks in detecting healthy lactating breasts and 6 breastfeeding-related issues using only red, green, and blue (RGB) images. Our goal was to assess the applicability of this algorithm as an auxiliary resource for LCs to identify painful breast conditions quickly, better manage their patients through triage, respond promptly to patient needs, and enhance the overall experience and care for breastfeeding mothers.

Methods: We evaluated the potential of 5 classification models to detect breastfeeding-related conditions using 1078 breast and nipple images gathered from web-based and physical educational resources. We used the convolutional neural networks ResNet50, the Visual Geometry Group model with 16 layers (VGG16), InceptionV3, EfficientNetV2, and DenseNet169 to classify the images across 7 classes: healthy, abscess, mastitis, nipple blebs, dermatosis, engorgement, and nipple damage caused by improper feeding or misuse of breast pumps. We also evaluated the models' ability to distinguish between healthy and unhealthy images. We present an analysis of the classification challenges, identifying image traits that may confound the detection model.
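
As a concrete illustration of this kind of setup, the sketch below adapts one of the named backbones (DenseNet169, via torchvision) to the 7-class task. It is a minimal sketch, not the authors' code: class names, image size, augmentation choices, and hyperparameters are placeholders inferred from the abstract.

import torch
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["healthy", "abscess", "mastitis", "nipple_blebs",
           "dermatosis", "engorgement", "nipple_damage"]

# Backbone: in practice, ImageNet-pretrained weights would be loaded for transfer learning;
# weights=None keeps this sketch self-contained and offline.
model = models.densenet169(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, len(CLASSES))

# Typical RGB preprocessing with light augmentation (applied inside a Dataset/DataLoader
# in a full pipeline; shown here only to indicate the kind of augmentation used).
train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")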

Results: For multiclass classification, the best model achieved an average area under the receiver operating characteristic curve of 0.93 across all conditions after data augmentation. For binary classification, the best model achieved an average area under the curve of 0.96 across all conditions after data augmentation. Several factors contributed to the misclassification of images, including similar visual features in conditions that precede other conditions (such as the mastitis spectrum disorder), partially covered breasts or nipples, and images depicting multiple conditions in the same breast.
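
For reference, the multiclass figure corresponds to a macro-averaged one-vs-rest area under the ROC curve, and the binary figure to a standard healthy-versus-unhealthy AUC. The sketch below reproduces that calculation with scikit-learn on synthetic predictions; the choice of the "healthy" class index and the fake score construction are assumptions for illustration only.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_samples, n_classes = 200, 7
y_true = rng.integers(0, n_classes, size=n_samples)

# Fake softmax scores, mildly biased toward the true class so the AUC exceeds 0.5.
scores = rng.random((n_samples, n_classes))
scores[np.arange(n_samples), y_true] += 1.0
scores = scores / scores.sum(axis=1, keepdims=True)

# Multiclass: macro-averaged one-vs-rest area under the ROC curve.
multiclass_auc = roc_auc_score(y_true, scores, multi_class="ovr", average="macro")

# Binary healthy-vs-unhealthy view (class 0 assumed to be "healthy").
binary_true = (y_true != 0).astype(int)
binary_score = 1.0 - scores[:, 0]
binary_auc = roc_auc_score(binary_true, binary_score)

print(f"multiclass macro OvR AUC: {multiclass_auc:.2f}")
print(f"binary healthy-vs-unhealthy AUC: {binary_auc:.2f}")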

Conclusions: This vision-based automated detection technique offers an opportunity to enhance postpartum care for mothers and can potentially help alleviate the workload of LCs by expediting decision-making processes.

{"title":"Augmenting Telepostpartum Care With Vision-Based Detection of Breastfeeding-Related Conditions: Algorithm Development and Validation.","authors":"Jessica De Souza, Varun Kumar Viswanath, Jessica Maria Echterhoff, Kristina Chamberlain, Edward Jay Wang","doi":"10.2196/54798","DOIUrl":"10.2196/54798","url":null,"abstract":"<p><strong>Background: </strong>Breastfeeding benefits both the mother and infant and is a topic of attention in public health. After childbirth, untreated medical conditions or lack of support lead many mothers to discontinue breastfeeding. For instance, nipple damage and mastitis affect 80% and 20% of US mothers, respectively. Lactation consultants (LCs) help mothers with breastfeeding, providing in-person, remote, and hybrid lactation support. LCs guide, encourage, and find ways for mothers to have a better experience breastfeeding. Current telehealth services help mothers seek LCs for breastfeeding support, where images help them identify and address many issues. Due to the disproportional ratio of LCs and mothers in need, these professionals are often overloaded and burned out.</p><p><strong>Objective: </strong>This study aims to investigate the effectiveness of 5 distinct convolutional neural networks in detecting healthy lactating breasts and 6 breastfeeding-related issues by only using red, green, and blue images. Our goal was to assess the applicability of this algorithm as an auxiliary resource for LCs to identify painful breast conditions quickly, better manage their patients through triage, respond promptly to patient needs, and enhance the overall experience and care for breastfeeding mothers.</p><p><strong>Methods: </strong>We evaluated the potential for 5 classification models to detect breastfeeding-related conditions using 1078 breast and nipple images gathered from web-based and physical educational resources. We used the convolutional neural networks Resnet50, Visual Geometry Group model with 16 layers (VGG16), InceptionV3, EfficientNetV2, and DenseNet169 to classify the images across 7 classes: healthy, abscess, mastitis, nipple blebs, dermatosis, engorgement, and nipple damage by improper feeding or misuse of breast pumps. We also evaluated the models' ability to distinguish between healthy and unhealthy images. We present an analysis of the classification challenges, identifying image traits that may confound the detection model.</p><p><strong>Results: </strong>The best model achieves an average area under the receiver operating characteristic curve of 0.93 for all conditions after data augmentation for multiclass classification. For binary classification, we achieved, with the best model, an average area under the curve of 0.96 for all conditions after data augmentation. 
Several factors contributed to the misclassification of images, including similar visual features in the conditions that precede other conditions (such as the mastitis spectrum disorder), partially covered breasts or nipples, and images depicting multiple conditions in the same breast.</p><p><strong>Conclusions: </strong>This vision-based automated detection technique offers an opportunity to enhance postpartum care for mothers and can potentially help alleviate the workload of LCs by expediting decision-making processes.</p>","PeriodicalId":73551,"journal":{"name":"JMIR AI","volume":"3 ","pages":"e54798"},"PeriodicalIF":0.0,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11231616/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141447712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}