
Latest articles in Healthcare analytics (New York, N.Y.)

An analytical evaluation of imputation methods for enhancing cardiac care data integrity
Pub Date: 2026-01-23 DOI: 10.1016/j.health.2026.100452
Rajarajeswari Ganesan , Carlijn M.A. Buck , Chang Sun , Marcel van’t Veer , Lukas R.C. Dekker , Frans N. van de Vosse , Wouter Huberts
Electronic Health Records (EHRs) comprise digitally stored patient and population health data. Unfortunately, EHRs are often far from complete, and the absence of expected values in these records is referred to as missingness. Missingness in EHRs hinders the use of Machine Learning (ML) for data mining and for developing decision support applications. It also limits EHRs’ reusability for retrospective clinical studies. In fact, missingness adversely affects the accuracy and reliability of both ML models and clinical studies. Imputation is an effective approach to dealing with missing values and improving that reliability. However, previous imputation studies are spread across different healthcare datasets and are not universally applicable. In addition, there is a lack of studies focusing on the rationale for imputing healthcare datasets, and the quality of imputation methods is often assessed without considering the medical interpretation. In this study, we therefore aim to characterize the impact of different imputation methods on the accuracy of cardiac EHRs, from both an ML and a medical perspective. Two cardiac EHR datasets with missing values for cardiovascular diseases (CVDs) are used. Multiple imputation methods (mean, median, K-nearest neighbor, and several variants of iterative imputation) are considered. From an ML perspective, post-imputation effects are assessed by quantifying the ML models’ capability to classify CVDs. The distribution of clinically interesting variables is evaluated for clinical comprehension. Our study shows that the information carried by missingness and the magnitude of variable missingness are the key factors in selecting imputation methods for diverse EHR-based applications.
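As a rough illustration of the simplest imputers the abstract lists (mean, median, and K-nearest neighbor), the sketch below fills missing values in toy numeric data. It is a minimal pure-Python sketch on hypothetical data, not the authors' pipeline; the KNN variant assumes the non-target columns are complete.

```python
import math
import statistics

def mean_impute(col):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in col if v is not None]
    fill = statistics.mean(observed)
    return [fill if v is None else v for v in col]

def median_impute(col):
    """Replace None entries with the median of the observed values."""
    observed = [v for v in col if v is not None]
    fill = statistics.median(observed)
    return [fill if v is None else v for v in col]

def knn_impute(rows, target_idx, k=2):
    """Fill missing values in column target_idx with the mean of that
    column over the k rows closest in the remaining columns
    (assumed complete here)."""
    out = [list(r) for r in rows]
    for i, row in enumerate(rows):
        if row[target_idx] is not None:
            continue
        def dist(other):
            return math.dist(
                [v for j, v in enumerate(row) if j != target_idx],
                [v for j, v in enumerate(other) if j != target_idx],
            )
        donors = [r for r in rows if r[target_idx] is not None]
        donors.sort(key=dist)
        out[i][target_idx] = statistics.mean(r[target_idx] for r in donors[:k])
    return out
```

Iterative imputation, the remaining family the abstract mentions, would instead repeatedly regress each incomplete column on the others until the fills stabilize.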
Citations: 0
A Bayesian framework for enhancing health data accuracy in pooled cross-sectional analysis
Pub Date: 2026-01-20 DOI: 10.1016/j.health.2026.100448
Romuald Daniel Boy-ngbogbele , Oscar Ngesa , Thomas Mageto , Célestin C. Kokonendji
The analysis of pooled cross-sectional data plays a vital role in various disciplines, including epidemiology, economics, and the social sciences, by enabling the identification of trends and patterns over time. This study develops statistical models specifically designed to analyze pooled cross-sectional data while accounting for measurement error, with a particular focus on estimating the prevalence of malnutrition among children under five years of age in Cameroon. Measurement error is a persistent issue in surveys, especially in resource-limited settings where data collection accuracy may be compromised. To address this, the research employs logistic regression within a Bayesian framework to reduce the impact of measurement error on malnutrition prevalence estimates, thereby providing more reliable information for policymakers and public health professionals. Through both simulation studies and application to real-world data from Cameroon, the study demonstrates the effectiveness of the proposed models in improving the accuracy and precision of estimates, offering deeper insights into childhood malnutrition in the country. This work advances statistical methodologies for survey data analysis by providing robust tools to address measurement error and support evidence-based interventions to combat malnutrition in Cameroon and similar contexts worldwide.
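The paper itself fits Bayesian logistic regression; as a lighter-weight illustration of why measurement error distorts prevalence estimates, the sketch below implements the classical Rogan–Gladen correction, which adjusts an observed prevalence for a diagnostic instrument's imperfect sensitivity and specificity. This is my choice of example for the general idea, not the authors' model.

```python
def rogan_gladen(observed_prev, sensitivity, specificity):
    """Correct an observed prevalence for imperfect test
    sensitivity/specificity; result clipped to [0, 1]."""
    denom = sensitivity + specificity - 1.0
    if denom <= 0:
        raise ValueError("test must be informative (sens + spec > 1)")
    corrected = (observed_prev + specificity - 1.0) / denom
    return min(1.0, max(0.0, corrected))
```

For example, an observed rate of 14% under a test with 90% sensitivity and 95% specificity corresponds to a corrected prevalence of (0.14 + 0.95 - 1) / 0.85, roughly 10.6%; a Bayesian treatment would additionally propagate uncertainty in the sensitivity and specificity themselves.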
Citations: 0
An automated analytics approach for diabetic retinopathy detection with ensemble deep learning models in healthcare
Pub Date: 2026-01-19 DOI: 10.1016/j.health.2026.100449
Md Saykot Khandakar, Md Samsuddoha, Sohely Jahan, Rahat Hossain Faisal
Diabetic Retinopathy (DR) is a leading complication of prolonged diabetes that poses a significant threat to vision and may lead to permanent blindness. Early identification and timely intervention are crucial to preventing disease progression. Traditionally, DR diagnosis relies on expert ophthalmologists examining retinal fundus images, which is time-consuming and resource-intensive. Deep learning techniques, particularly in medical imaging, have demonstrated remarkable performance in the automated detection and classification of DR. This study proposes an ensemble-based deep learning framework using feature-level fusion stacking, which integrates four complementary convolutional neural networks into an ensemble named ReXInDen and three into an ensemble named ReXDen for automated DR detection from retinal fundus images. These frameworks extract high-level features from each backbone, concatenate them into a unified representation, and classify with a feedforward neural network. Three datasets were used to validate the models, including a region-specific dataset collected from Bangladeshi medical sources. The proposed ReXInDen model achieved accuracies of 98.27% and 98.69% on Dataset 1 and Dataset 2, respectively, while ReXDen achieved the highest accuracy, 99.05%, on Dataset 3. These results indicate a substantial improvement over the individual models and demonstrate the potential of the ensemble approach to support early-stage DR detection. Moreover, these models show promise for integration into automated DR screening tools that can help reduce the global burden of diabetic vision loss.
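The fusion step the abstract describes (per-backbone features concatenated into one representation, then classified by a feedforward layer) can be sketched generically. This is a minimal stand-in with made-up weights, not the ReXInDen/ReXDen architecture itself:

```python
import math

def fuse_features(*feature_vectors):
    """Feature-level fusion: concatenate per-backbone feature vectors
    into one unified representation."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

def feedforward_softmax(features, weights, biases):
    """A single linear layer followed by softmax over the fused vector
    (numerically stabilized by subtracting the max logit)."""
    logits = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(weights, biases)]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In the paper's setting the inputs to `fuse_features` would be the high-level feature maps of each CNN backbone (pooled to vectors), and the classifier head would typically have more than one layer.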
Citations: 0
A frequency-driven quantum and graph-based method for robust brain tumor analysis
Pub Date: 2026-01-16 DOI: 10.1016/j.health.2026.100451
Ripon Kumar Debnath, Al Musabbir, Md. Motaharul Islam
Brain tumor segmentation remains a significant challenge in medical image analytics due to the limited ability of current models to detect small lesions, capture spectral information, and represent anatomical context effectively. This study introduces the Frequency-Quantum-Graph Network (FQG-Net), an analytical framework that integrates quantum computing principles, adaptive frequency-domain processing, and graph-based contextual learning to enhance segmentation precision. The model employs quantum entanglement and superposition effects to enrich feature representation, an adaptive frequency enhancement mechanism to amplify tumor-specific spectral characteristics, and a graph neural contextual memory to preserve spatial and anatomical relationships. Multimodal MRI data are processed through selective quantum residual blocks that dynamically activate network components based on analytical requirements, ensuring both efficiency and stability. Empirical evaluations across multiple benchmark datasets demonstrate that FQG-Net delivers consistent improvements over state-of-the-art segmentation models, achieving higher accuracy, stronger generalization across datasets, and superior performance in detecting small and heterogeneous tumor regions. These findings highlight the analytical strength of quantum-enhanced deep learning and its potential to advance precision diagnostics in healthcare imaging.
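The "selective residual blocks that dynamically activate network components" can be read as gated residual connections. The toy sketch below shows one generic interpretation of that idea (my reading, not FQG-Net's actual block): the block's transform is applied only when a data-dependent gate fires.

```python
def selective_residual(x, transform, gate_fn):
    """Gated residual block: out = x + g(x) * f(x), where the gate
    g(x) in {0, 1} decides whether the transform contributes."""
    g = gate_fn(x)
    return [xi + g * fi for xi, fi in zip(x, transform(x))]
```

With the gate closed the block is an identity (cheap to evaluate); with it open the block adds its learned transform, which is how such designs trade accuracy for efficiency.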
Citations: 0
An analytics framework for graph-based anomaly detection in healthcare time series
Pub Date: 2026-01-16 DOI: 10.1016/j.health.2026.100447
Emerson Yoshiaki Okano , Daniel Aloise , Mariá C.V. Nascimento
Anomaly detection in time series plays a vital role in diverse domains such as healthcare, finance, and industrial monitoring, where identifying deviations from normal behavior can signal critical events. While traditional methods often focus on univariate time series and assume fixed temporal dynamics, real-world systems are typically multivariate and characterized by complex interdependencies. Ignoring these relationships can lead to suboptimal detection of system-level anomalies. This paper proposes a novel graph-based framework for multivariate time series anomaly detection that explicitly captures temporal patterns and structural relationships among variables. Individual univariate time series are first transformed into Horizontal Visibility Graphs (HVGs), which are then combined into multiplex networks to preserve inter-layer interactions. Additionally, we construct feature-based similarity graphs derived from statistical properties of the series to model inter-series relations. Anomalies are identified by comparing the neighborhood structure of each series against a historical reference set, enabling the detection of subtle and coordinated deviations. Computational experiments on real-world healthcare data illustrate the behavior and practical relevance of the proposed approach in capturing complex anomalies, offering a robust and interpretable alternative to traditional techniques.
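The first step of the pipeline, mapping a univariate series to a Horizontal Visibility Graph, follows a standard rule: samples i and j are connected when every intermediate sample lies strictly below min(y_i, y_j). A minimal O(n²) sketch (the multiplex combination of per-variable HVGs described in the abstract is not shown):

```python
def horizontal_visibility_graph(series):
    """Return the HVG edge set: (i, j) with i < j is an edge when every
    intermediate sample is strictly below min(series[i], series[j]).
    Adjacent samples are always connected (empty intermediate range)."""
    n = len(series)
    edges = set()
    for i in range(n - 1):
        for j in range(i + 1, n):
            bar = min(series[i], series[j])
            if all(series[k] < bar for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

For the series [3, 1, 2, 4], the sample at index 1 cannot "see" index 3 because the value 2 at index 2 blocks the horizontal line between them, while every other pair is connected.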
Citations: 0
A comparative analysis of predictive analytics approaches to uncovering subtypes of acute inflammation using machine learning
Pub Date: 2025-12-30 DOI: 10.1016/j.health.2025.100446
Roopashri Shetty, Aditi Shrivastava, Shwetha Rai, Geetha M.
Early prediction of acute cystitis and acute pyelonephritis plays a critical role in improving patient outcomes. This study develops predictive analytics models for these conditions using a pre-processed Acute Inflammation Dataset and four classification algorithms: Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF). In addition, two clustering techniques, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and K-Means, are employed to uncover latent structures within the data. Both random sampling and stratified sampling are applied to ensure balanced data representation across clinical classes. The performance of the classification models is evaluated using accuracy, precision, recall, and the F1-score, while clustering performance is assessed using the Silhouette score. The results show that stratified sampling improves the performance of the DT, SVM, and LR classifiers, whereas the RF classifier achieves optimal performance under random sampling. Clustering analysis identifies two disease subclasses, with DBSCAN achieving a maximum Silhouette score of 1.0 for MinPts = 5 and epsilon values of 0.5, 1, and 2 using both Euclidean and Manhattan distance metrics. The K-Means algorithm achieves its best performance with a Silhouette score of 0.67 for K = 5 using the Minkowski distance metric. Overall, the findings demonstrate the effectiveness of machine learning and data mining techniques in enhancing diagnostic modeling and clinical decision-making for acute inflammatory conditions, contributing to more timely and accurate patient care.
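The Silhouette score used to assess the clusterings is defined per point as s = (b - a) / max(a, b), where a is the mean distance to the point's own cluster and b the smallest mean distance to any other cluster. A minimal sketch for 1-D points with hypothetical data (not the study's datasets or distance settings):

```python
def silhouette_score(points, labels):
    """Mean silhouette over all points; assumes every cluster has at
    least two members so the intra-cluster mean 'a' is defined."""
    n = len(points)
    scores = []
    for i in range(n):
        same = [abs(points[i] - points[j]) for j in range(n)
                if j != i and labels[j] == labels[i]]
        a = sum(same) / len(same)          # mean intra-cluster distance
        b = min(                           # nearest other-cluster mean
            sum(abs(points[i] - points[j]) for j in range(n) if labels[j] == l)
            / sum(1 for j in range(n) if labels[j] == l)
            for l in set(labels) if l != labels[i]
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Scores near 1 indicate tight, well-separated clusters (as reported for the best DBSCAN configuration), while scores near 0 indicate overlapping clusters.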
Citations: 0
An intelligent machine learning approach for predicting and explaining brain injury severity
Pub Date: 2025-12-26 DOI: 10.1016/j.health.2025.100445
Hoang Bach Nguyen , Quang Tung Pham , Sinh Huy Nguyen , Chi Thanh Nguyen , Thanh Hai Tran , Hai Vu
Traumatic brain injury (TBI) requires timely and reliable severity assessment to support critical clinical decision-making. This study proposes an interpretable machine learning framework for TBI severity prediction using two datasets: the public HPTBI dataset and a newly developed 103_TBI dataset comprising 504 patients. After data preprocessing and feature selection, ensemble learning models, particularly Random Forest and XGBoost, achieved accuracies exceeding 94%. To enhance transparency and clinical trust, we introduce a dual-layer interpretability strategy that integrates post-hoc explanation techniques (SHAP, LIME, PFI, PDP, and counterfactual analysis) with a knowledge-graph-based evaluation of feature interactions. The attribution methods show high agreement (correlation > 0.91) and consistently identify key clinical predictors such as the Glasgow Coma Scale (GCS), midline shift, and pulse rate. These insights align closely with expert judgment, supporting the clinical credibility of the model explanations. Additionally, the knowledge graph reveals multivariate relationships critical to outcome determination. By integrating predictive models with clinical interpretability techniques, the proposed framework offers reliable clinical support to assist neurotrauma triage and expert validation. This work therefore demonstrates the potential of integrating explainable AI with domain knowledge to advance TBI severity prediction.
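Of the post-hoc techniques listed, permutation feature importance (PFI) is the simplest to sketch: shuffle one feature column and measure the resulting accuracy drop. The model, data, and labels below are hypothetical stand-ins, not the study's classifiers or clinical features.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """PFI: baseline accuracy minus accuracy after shuffling one
    feature column across rows (a drop of 0 means the model never
    used that feature)."""
    base = accuracy(model, X, y)
    rng = random.Random(seed)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, column)]
    return base - accuracy(model, X_perm, y)

# Hypothetical model: predicts class 1 iff feature 0 exceeds 0.5,
# ignoring feature 1 entirely.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.9, 0.2], [0.2, 0.8], [0.8, 0.1]]
y = [0, 1, 0, 1]
```

In practice the shuffle is repeated over many seeds and the drops averaged, since a single permutation can be unrepresentative.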
An intelligent machine learning approach for predicting and explaining brain injury severity
Pub Date : 2025-12-26 DOI: 10.1016/j.health.2025.100445
Hoang Bach Nguyen , Quang Tung Pham , Sinh Huy Nguyen , Chi Thanh Nguyen , Thanh Hai Tran , Hai Vu
Traumatic brain injury (TBI) requires timely and reliable severity assessment to support critical clinical decision-making. This study proposes an interpretable machine learning framework for TBI severity prediction using two datasets: the public HPTBI dataset and a newly developed 103_TBI dataset comprising 504 patients. After data preprocessing and feature selection, ensemble learning models, particularly Random Forest and XGBoost, achieved accuracies exceeding 94%. To enhance transparency and clinical trust, we introduce a dual-layer interpretability strategy that integrates post-hoc explanation techniques (SHAP, LIME, PFI, PDP, and counterfactual analysis) with a knowledge-graph-based evaluation of feature interactions. The attribution methods show high agreement (correlation > 0.91) and consistently identify key clinical predictors such as the Glasgow Coma Scale (GCS), midline shift, and pulse rate. These insights align closely with expert judgment, supporting the clinical credibility of the model explanations. Additionally, the knowledge graph reveals multivariate relationships critical to outcome determination. By integrating predictive models with clinical interpretability techniques, the proposed framework offers reliable clinical support to assist neurotrauma triage and expert validation. This work therefore demonstrates the potential of integrating explainable AI with domain knowledge to advance TBI severity prediction.
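The dual-layer interpretability result above rests on how strongly independent attribution methods agree. A minimal sketch of that agreement check, computing a plain Pearson correlation between two feature-attribution vectors; all scores below are hypothetical illustrations, not values from the paper:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation between two equal-length attribution vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical mean |SHAP| and permutation-importance scores for
# [GCS, midline shift, pulse rate, age, gender] -- illustrative only.
shap_scores = [0.42, 0.31, 0.18, 0.06, 0.03]
pfi_scores  = [0.39, 0.33, 0.15, 0.08, 0.05]

agreement = pearson(shap_scores, pfi_scores)
print(f"attribution agreement: {agreement:.2f}")
```

When two methods rank and weight features this similarly, the correlation lands above the 0.91 threshold the abstract reports; a low value would instead flag disagreement worth investigating before trusting either explanation.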
Citations: 0
An analytics-based framework for early detection of cervical cancer using predictive modeling
Pub Date : 2025-12-12 DOI: 10.1016/j.health.2025.100442
Wirapong Chansanam , Kittichai Nilubol , Pichayada Suphajaroonshab , Chunqiu Li
This study aims to develop and evaluate advanced machine learning (ML) models for accurate and scalable early detection of cervical cancer, addressing critical limitations in current diagnostic practices. Leveraging exploratory data analysis (EDA), rigorous data preprocessing, and multiple ML techniques (Random Forest, ANN, SVM, XGBoost, and ensemble models), we systematically analyzed a comprehensive dataset from the UCI repository comprising demographic, clinical, and behavioral features. Results indicated that the Random Forest model achieved the highest performance, with an accuracy of 98.4%, a sensitivity of 99.3%, and a specificity of 97.6%, substantially surpassing the other evaluated models. Despite limitations related to dataset homogeneity and potential biases introduced by synthetic oversampling methods, these findings represent significant methodological and practical advancements. By offering an interpretable and robust diagnostic tool, the study contributes significantly to improving cervical cancer detection, particularly benefiting low-resource clinical environments where effective, scalable screening methods are urgently needed. The proposed framework, developed and evaluated solely on the UCI tabular cervical cancer dataset, achieved high discriminative performance with the Random Forest model (accuracy = 98.4%, sensitivity = 99.3%, specificity = 97.6%). A previously published imaging-based ResNet-50 model (AUC = 0.97) is referenced for contextual comparison only and was not part of our experimental work. However, deployment in resource-constrained environments will require further optimization and cost-efficiency analyses to confirm feasibility.
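The three headline metrics above follow directly from confusion-matrix counts. A minimal sketch of those definitions; the counts below are hypothetical, chosen only to land near the reported values, and are not from the study:

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,       # all correct / all cases
        "sensitivity": tp / (tp + fn),       # cancers correctly flagged
        "specificity": tn / (tn + fp),       # healthy correctly cleared
    }

# Hypothetical counts on a balanced 250/250 test split -- illustrative only.
m = screening_metrics(tp=248, fn=2, tn=244, fp=6)
print(m)
```

For a screening task, sensitivity is usually the metric to protect: each false negative is a missed cancer, whereas a false positive costs only a follow-up test.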
Citations: 0
An ensemble learning approach for predicting hospital stay in transplant patients
Pub Date : 2025-12-12 DOI: 10.1016/j.health.2025.100444
Zahra Gharibi
The rising incidence of heart and lung failure has increased the demand for effective transplant management strategies. Predicting Hospital Length of Stay (HLOS) is essential for reducing cost variability, optimizing resource utilization, and supporting patient recovery. This study uses data from the United Network for Organ Sharing (UNOS) to develop and validate an Ensemble Meta Stacked (EMS) model for predicting hospitalization duration after heart and lung transplantation. Expert-informed feature engineering incorporates donor and recipient compatibility measures, while a hybrid two-stage feature selection process combines expert evaluation with the Boruta algorithm to identify key predictors across demographic, clinical, behavioral, and geographical domains. Twelve predictive models are developed, including five base learners for each organ type and an EMS model that integrates their outputs through a Random Forest (RF) meta learner. Among the base learners, RF achieves the highest accuracy, but the EMS consistently outperforms all individual models. Sensitivity analysis confirms the robustness of model performance under different feature sources and scaling procedures, while paired statistical tests confirm that the improvement in predictive accuracy of EMS compared to the base learners is not due to random variation. The study also links predictive metrics to stakeholder priorities: policymakers and payers benefit from stable forecasts that control financial variability, hospital administrators rely on consistent prediction accuracy for capacity planning and resource allocation, and clinicians depend on bias-related metrics to guide safer discharge decisions. The EMS framework advances data-driven management in transplantation, supporting more efficient, equitable, and clinically responsible care.
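The two-level design described above can be sketched as: each base model produces a length-of-stay estimate, and those estimates become the input to a meta-learner. The toy learners and the averaging meta-learner below are stand-ins invented for illustration; the paper uses trained base models and a Random Forest meta-learner:

```python
# Minimal sketch of two-level stacking: level-1 predictions from the base
# models form the feature vector consumed by the level-2 meta-learner.

def stack_predict(base_models, meta_learner, x):
    """Collect each base model's HLOS estimate, then combine them."""
    level1 = [model(x) for model in base_models]
    return meta_learner(level1)

# Toy base learners (hypothetical; real ones would be trained RF, XGBoost, ...)
base = [
    lambda x: 10 + 0.5 * x["risk"],   # stand-in for base learner A
    lambda x: 12 + 0.3 * x["risk"],   # stand-in for base learner B
    lambda x: 11 + 0.4 * x["risk"],   # stand-in for base learner C
]
meta = lambda preds: sum(preds) / len(preds)  # stand-in for the RF meta-learner

est = stack_predict(base, meta, {"risk": 10})
print(f"predicted length of stay: {est:.1f} days")
```

In a real pipeline the meta-learner is trained on out-of-fold predictions of the base models, so it learns when to trust which learner rather than weighting them equally as this sketch does.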
Citations: 0
An unsupervised machine learning approach for defining surge levels in emergency medical services
Pub Date : 2025-12-08 DOI: 10.1016/j.health.2025.100443
Qixuan Zhao , Adair Collins , Judah Goldstein , Onur Pakkanlilar , Peter Vanberkel
A surge period occurs when demand significantly exceeds available capacity, creating operational strain in emergency medical services (EMS) and leading to measurable declines in system performance. Although surge levels are a critical metric for EMS operations, no established method exists for their objective definition. This study introduces a genetic algorithm-based unsupervised clustering model designed to define surge levels using EMS operational data. Unlike the National Emergency Department Overcrowding Scale, which depends on subjective assessments, the proposed approach objectively categorizes surge levels and supports regional customization through hyperparameter tuning and feature selection. The model's adaptability allows healthcare leaders to determine the desired number of surge-level categories and tailor the feature set to local operational needs. A case study in Nova Scotia, Canada, demonstrates the model's effectiveness, accurately identifying 88.96% of busy periods with recall and precision of 96.49% and 78.57%, respectively. These results indicate that the approach provides a robust and flexible tool for defining surge levels, enabling data-driven decision-making in EMS system management.
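As a rough illustration of the genetic-algorithm clustering idea (not the paper's actual model), the sketch below evolves two cut-points that partition hourly call volumes into three surge levels, scoring candidates by negative within-level variance. The volume data, fitness function, and GA settings are all invented for the example:

```python
import random

random.seed(0)  # deterministic run for the sketch

# Hourly call volumes (hypothetical). We evolve two cut-points splitting
# volumes into three "surge levels", minimising within-level variance.
volumes = [3, 4, 5, 4, 12, 13, 11, 25, 27, 26, 4, 12]

def fitness(cuts):
    lo, hi = sorted(cuts)
    levels = [[v for v in volumes if v <= lo],
              [v for v in volumes if lo < v <= hi],
              [v for v in volumes if v > hi]]
    if any(not lv for lv in levels):
        return float("-inf")  # every surge level must be occupied
    sse = 0.0
    for lv in levels:
        mean = sum(lv) / len(lv)
        sse += sum((v - mean) ** 2 for v in lv)
    return -sse  # lower within-level variance -> higher fitness

def evolve(generations=60, pop_size=20):
    pop = [(random.uniform(1, 30), random.uniform(1, 30))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = [(a + random.gauss(0, 1), b + random.gauss(0, 1))
                    for a, b in parents]      # mutation-only offspring
        pop = parents + children
    return sorted(max(pop, key=fitness))

lo, hi = evolve()
print(f"surge thresholds: normal <= {lo:.1f} < elevated <= {hi:.1f} < surge")
```

A production version would add crossover, tune mutation rates, and use a richer multivariate fitness over operational features, but the loop above captures the core select-mutate-evaluate cycle.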
Citations: 0
Journal
Healthcare analytics (New York, N.Y.)