
Latest publications in Computer Methods and Programs in Biomedicine

Towards clinical prediction with transparency: An explainable AI approach to survival modelling in residential aged care
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-15 | DOI: 10.1016/j.cmpb.2025.108653
Teo Susnjak, Elise Griffin

Background and Objective:

Scalable, flexible and highly interpretable tools for predicting mortality in residential aged care facilities, needed to inform and optimize palliative care decisions, do not exist. This study is the first and most comprehensive work applying machine learning to address this need, while seeking to offer a transformative approach to integrating AI into palliative care decision-making. The objective is to predict survival in elderly individuals six months post-admission to residential aged care facilities, with patient-level interpretability that provides transparency and supports clinical decision-making around palliative care options.

Methods:

Data from 11,944 residents across 40 facilities, with a novel combination of 18 features, were used to develop predictive models, comparing standard approaches such as Cox Proportional Hazards, Ridge and Lasso Regression with the machine learning algorithms Gradient Boosting (GB) and Random Survival Forest. Model calibration was performed, together with ROC analysis and a suite of evaluation metrics, to analyze the results. Explainable AI (XAI) tools were used to demonstrate both cohort-level and patient-level model interpretability, enabling transparency in the clinical usage of the models. TRIPOD reporting guidelines were followed, with model parameters and code provided publicly.

Results:

GB was the top performer with a Dynamic AUROC of 0.746 and a Concordance Index of 0.716 for six-month survival prediction. Explainable AI tools provided insights into key features such as comorbidities, cognitive impairment, and nutritional status, revealing their impact on survival outcomes and interactions that inform clinical decision-making. The calibrated model showed near-optimal performance with adjustable clinically relevant thresholds. The integration of XAI tools proved effective in enhancing the transparency and trustworthiness of predictions, offering actionable insights that support informed and ethically responsible end-of-life (EoL) care decisions in aged care settings.

Conclusion:

This study successfully applied machine learning to create viable survival models for aged care residents, demonstrating their usability in clinical settings via a suite of interpretable tools. The findings support the introduction of machine learning with explainable AI tools into clinical trials in geriatric medicine for mortality prediction, to enhance the quality of EoL care and inform discussions regarding palliative care.
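For readers unfamiliar with the Concordance Index reported in the Results, a minimal sketch of Harrell's C in plain Python follows. This is an illustration of the metric only, not the authors' published implementation (their code accompanies the paper):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C: among comparable pairs (one subject's event precedes
    the other's follow-up time), the fraction where the model assigns
    the higher risk to the subject who died earlier."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=bool)
    risk = np.asarray(risk, dtype=float)
    concordant = tied = comparable = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(len(time)):
            if time[j] > time[i]:  # subject j outlived subject i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1  # ties count half, by convention
    return (concordant + 0.5 * tied) / comparable

# A model that ranks earlier deaths as higher risk is perfectly concordant:
print(concordance_index([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # 1.0
```

A C-index of 0.716, as reported for the GB model, means the model correctly orders roughly 72% of comparable patient pairs.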
Citations: 0
DuDo-RAC: Dual-domain optimization for ring artifact correction in photon counting CT
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-13 | DOI: 10.1016/j.cmpb.2025.108636
Shengqi Kan, Chenlong Ren, Ze Liu, Yuchen Lu, Shouhua Luo, Xu Ji, Yang Chen

Background and objective:

Due to the inconsistent response of photon counting detector (PCD) pixels to X-rays, low-frequency ring artifacts are prominent in reconstructed CT images. Traditional CT ring artifact correction methods are ineffective against low-frequency ring artifacts. Although pixel-wise polynomial correction can address them, residual artifacts may remain because of inaccuracies in coefficient measurement and the inability of polynomial functions to perfectly model the relationship between material thickness and post-log raw data. To resolve these problems, this work proposes a high- and low-frequency ring artifact correction method based on dual-domain optimization (DuDo-RAC).

Methods:

This method is independent of spectral information and training data, making it suitable for various energy thresholds. Its principle is to model the inconsistent response as pixel-wise polynomial functions, with the coefficients for each pixel being determined via a dual-domain optimization framework. Since ring artifacts manifest as stripes after polar transformations, smoothing operations are utilized to further weaken the residual ring artifacts after the pre-correction process. Furthermore, a multi-resolution gradient loss function is designed to iteratively optimize the polynomial correction coefficients for a better assessment of ring removal performance.

Results:

The results have demonstrated that the proposed method can effectively correct the high and low frequency ring artifacts in PCD-CT images while preserving the image structure and details.

Conclusion:

The DuDo-RAC method proposed in this study achieves effective ring artifact correction in PCD-CT images.
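The polar-transform intuition in the Methods — rings become stripes, which smoothing can suppress — can be illustrated with a generic stripe-removal step. This is a simplified sketch of that general idea under our own assumptions, not the DuDo-RAC algorithm itself:

```python
import numpy as np

def remove_ring_stripes(polar_img, radial_window=5):
    """In polar coordinates (rows = radius, cols = angle), a ring artifact
    is a near-constant additive offset along a row. Estimate it as the
    deviation of each row's median from a radially smoothed profile,
    then subtract it from the image."""
    row_med = np.median(polar_img, axis=1)              # per-radius level
    padded = np.pad(row_med, radial_window // 2, mode="edge")
    kernel = np.ones(radial_window) / radial_window
    smooth = np.convolve(padded, kernel, mode="valid")  # true radial trend
    stripe = row_med - smooth                           # ring component
    return polar_img - stripe[:, None]
```

Mapping the corrected polar image back to Cartesian coordinates then yields a ring-suppressed reconstruction; DuDo-RAC goes further by optimizing pixel-wise polynomial coefficients in a dual-domain framework.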
Citations: 0
pFedBCC: Personalizing Federated multi-target domain adaptive segmentation via Bi-pole Collaborative Calibration
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-12 | DOI: 10.1016/j.cmpb.2025.108635
Huaqi Zhang, Pengyu Wang, Jie Liu, Jing Qin

Background and Objective:

Multi-target domain adaptation (MTDA) is a well-established technology for unsupervised segmentation. It can significantly reduce the workload of large-scale data annotation, but it assumes that data from each domain can be freely accessed. However, data privacy concerns limit its deployment in real-world medical settings. To address this problem, federated learning (FL) offers a paradigm for handling private cross-institution data.

Methods:

This paper makes the first attempt to apply FedMTDA to medical image segmentation by proposing a personalized Federated Bi-pole Collaborative Calibration (pFedBCC) framework, which leverages unannotated private client data and a public source-domain model to learn a global model at the central server for unsupervised multi-type immunohistochemistry (IHC) image segmentation. Concretely, pFedBCC tackles two significant challenges in FedMTDA, client-side prediction drift and server-side aggregation drift, via Semantic-affinity-driven Personalized Label Calibration (SPLC) and Source-knowledge-oriented Consistent Gradient Calibration (SCGC). To alleviate local prediction drift, SPLC personalizes a cross-domain graph reasoning module for each client, which achieves semantic-affinity alignment between high-level source- and target-domain features to produce pseudo labels that are semantically consistent with source-domain labels to guide client training. To further alleviate global aggregation drift, SCGC develops a new conflict-gradient clipping scheme, which takes the source-domain gradient as guidance to ensure that all clients update with similar gradient directions and magnitudes, thereby improving the generalization of the global model.

Results:

pFedBCC is evaluated on private and public IHC benchmarks, including the proposed MT-IHC dataset and the panCK, BCData, DLBC-Morph and LYON19 datasets. Overall, pFedBCC achieves the best performance, with 88.8% PA on MT-IHC and 88.4% PA on the LYON19 dataset.

Conclusions:

The proposed pFedBCC performs better than all comparison methods. The ablation study also confirms the contributions of SPLC and SCGC to unsupervised multi-type IHC image segmentation. This paper constructs an MT-IHC dataset containing more than 19,000 IHC images of 10 types (CgA, CK, Syn, CD, Ki67, P40, P53, EMA, TdT and BCL). Extensive experiments on the MT-IHC and public IHC datasets confirm that pFedBCC outperforms existing FL and DA methods.
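The conflict-gradient idea in SCGC — keeping client updates aligned with the source-domain gradient in both direction and magnitude — can be sketched generically. The following is a PCGrad-style illustration under our own assumptions, not the authors' exact SCGC scheme:

```python
import numpy as np

def calibrate_gradient(g_client, g_source):
    """If the client gradient conflicts with the source-domain gradient
    (negative dot product), project out the conflicting component, then
    rescale the result to the source gradient's magnitude so all clients
    update with similar directions and step sizes."""
    g_client = np.asarray(g_client, dtype=float)
    g_source = np.asarray(g_source, dtype=float)
    dot = g_client @ g_source
    if dot < 0:  # conflicting direction: remove the component along g_source
        g_client = g_client - (dot / (g_source @ g_source)) * g_source
    norm = np.linalg.norm(g_client)
    if norm > 0:  # match the source gradient's magnitude
        g_client = g_client * (np.linalg.norm(g_source) / norm)
    return g_client
```

After calibration, every client gradient has a non-negative inner product with the source gradient, which is the alignment property the abstract attributes to SCGC.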
Citations: 0
Neuradicon: Operational representation learning of neuroimaging reports
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-12 | DOI: 10.1016/j.cmpb.2025.108638
Henry Watkins, Robert Gray, Adam Julius, Yee-Haur Mah, James Teo, Walter H.L. Pinaya, Paul Wright, Ashwani Jha, Holger Engleitner, Jorge Cardoso, Sebastien Ourselin, Geraint Rees, Rolf Jaeger, Parashkev Nachev

Background and Objective:

Radiological reports typically summarize the content and interpretation of imaging studies in unstructured form that precludes quantitative analysis. This limits the monitoring of radiological services to throughput undifferentiated by content, impeding specific, targeted operational optimization. Here we present Neuradicon, a natural language processing (NLP) framework for quantitative analysis of neuroradiological reports.

Methods:

Our framework is a hybrid of rule-based and machine-learning models to represent neurological reports in succinct, quantitative form optimally suited to operational guidance. These include probabilistic models for text classification and tagging tasks, alongside auto-encoders for learning latent representations and statistical mapping of the latent space.

Results:

We demonstrate the application of Neuradicon to operational phenotyping of a corpus of 336,569 reports, and report excellent generalizability across time and two independent healthcare institutions. In particular, we report pathology classification metrics with f1-scores of 0.96 on prospective data, and semantic means of interrogating the phenotypes surfaced via latent space representations.

Conclusion:

Neuradicon allows the segmentation, analysis, classification, representation and interrogation of the structure and content of neuroradiological reports. It offers a blueprint for the extraction of rich, quantitative, actionable signals from unstructured text data in an operational context.
Citations: 0
Reliability of characterising coronary artery flow with the flow-split outflow strategy: Comparison against the multiscale approach
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-12 | DOI: 10.1016/j.cmpb.2025.108669
Mingzi Zhang, Hamed Keramati, Ramtin Gharleghi, Susann Beier

Background

In computational modelling of coronary haemodynamics, imposing patient-specific flow conditions is paramount, yet often impractical due to resource and time constraints, limiting the ability to perform large numbers of simulations, particularly for diseased cases.

Objective

To compare coronary haemodynamics quantified using a simplified flow-split strategy with varying exponents against the clinically verified but computationally intensive multiscale simulations under both resting and hyperaemic conditions in arteries with varying degrees of stenosis.

Methods

Six patient-specific left coronary artery trees were segmented and reconstructed, including three with severe (>70 %) and three with mild (<50 %) focal stenoses. Simulations were performed for the entire coronary tree to account for the flow-limiting effects from epicardial artery stenoses. Both a 0D-3D coupled multiscale model and a flow-split approach with four different exponents (2.0, 2.27, 2.33, and 3.0) were used. The resulting prominent haemodynamic metrics were statistically compared between the two methods.

Results

Flow-split and multiscale simulations did not differ significantly under resting conditions, regardless of stenosis severity. However, under hyperaemic conditions, the flow-split method significantly overestimated the time-averaged wall shear stress by up to 16.8 Pa (p = 0.031) and underestimated the fractional flow reserve by 0.327 (p = 0.043), with larger discrepancies observed in severe stenoses than in mild ones. Varying the exponent from 2.0 to 3.0 within the flow-split methods did not significantly affect the haemodynamic results (p > 0.141).

Conclusions

Flow-split strategies with exponents between 2.0 and 3.0 are appropriate for modelling stenosed coronaries under resting conditions. Multiscale simulations are recommended for accurate modelling of hyperaemic conditions, especially in severely stenosed arteries.
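The flow-split outflow strategy assessed above assigns each outlet a fraction of the inflow proportional to a power of its diameter. A minimal sketch of that rule (the diameters and flow values below are illustrative, not patient data):

```python
def flow_split(total_flow, outlet_diameters, exponent=2.27):
    """Distribute the total coronary inflow among outlets in proportion
    to diameter**exponent (Murray-type scaling; the study compares
    exponents 2.0, 2.27, 2.33 and 3.0)."""
    weights = [d ** exponent for d in outlet_diameters]
    total_weight = sum(weights)
    return [total_flow * w / total_weight for w in weights]
```

By construction the outlet flows always sum to the inflow; the choice of exponent only redistributes flow between large and small branches, which is why varying it between 2.0 and 3.0 had little effect at rest.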
Citations: 0
Vertical federated learning based on data subset representation for healthcare application
IF 4.9 | CAS Zone 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-12 | DOI: 10.1016/j.cmpb.2025.108623
Yukun Shi, Jilin Zhang, Meiting Xue, Yan Zeng, Gangyong Jia, Qihong Yu, Miaoqi Li

Background and Objective

Artificial intelligence is increasingly essential for disease classification and clinical diagnosis in healthcare. Given the strict privacy requirements of healthcare data, Vertical Federated Learning (VFL) has been introduced. VFL allows multiple hospitals to collaboratively train models on vertically partitioned data, where each hospital holds only part of each patient's features, thus maintaining patient confidentiality. However, VFL applications in healthcare scenarios with few samples and labels are challenging, because existing methods depend heavily on labeled samples and do not consider the intrinsic connections among the data across hospitals.

Methods

This paper proposes FedRL, a representation-based VFL method that enhances downstream-task performance by using aligned data for federated representation pretraining. The proposed method splits the local data into subsets with identical feature dimensions, exploits the relationships among these subsets, constructs a bespoke loss function, and collaboratively trains a representation model on these subsets across all participating hospitals. This model captures latent representations of the global data, which are then applied to the downstream classification tasks.

Results and Conclusion

The proposed FedRL method was validated through experiments on three healthcare datasets. The results demonstrate that the proposed method outperforms several existing methods across three performance metrics.
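The subset construction described in the Methods — splitting local features into groups of equal dimension so they can be related during representation pretraining — might be sketched as follows. This is a hypothetical helper under our own assumptions, not the authors' FedRL code:

```python
import numpy as np

def split_feature_subsets(X, n_subsets, seed=0):
    """Partition a client's local feature columns into n_subsets groups
    of equal dimensionality. Each subset then sees the same samples
    through a different feature view, which the pretraining loss can
    exploit to align their representations."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    assert n_features % n_subsets == 0, "feature count must divide evenly"
    cols = rng.permutation(n_features)  # random, reproducible assignment
    return [X[:, np.sort(cols[i::n_subsets])] for i in range(n_subsets)]
```

Each hospital would apply this split to its own vertical slice of the data before the collaborative representation model is trained.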
Citations: 0
LMTTM-VMI: Linked Memory Token Turing Machine for 3D volumetric medical image classification
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2025-02-11 DOI: 10.1016/j.cmpb.2025.108640
Hongkai Wei , Yang Yang , Shijie Sun , Mingtao Feng , Rong Wang , Xianfeng Han
Biomedical imaging is vital for the diagnosis and treatment of various medical conditions, yet the effective integration of deep learning technologies into this field presents challenges. Traditional methods often struggle to efficiently capture the spatial characteristics and intricate structural features of 3D volumetric medical images, limiting memory utilization and model adaptability. To address this, we introduce a Linked Memory Token Turing Machine (LMTTM), which utilizes external linked memory to efficiently process spatial dependencies and structural complexities within 3D volumetric medical images, aiding in accurate diagnoses. LMTTM can efficiently record the features of 3D volumetric medical images in an external linked memory module, enhancing complex image classification through improved feature accumulation and reasoning capabilities. Our experiments on six 3D volumetric medical image datasets from the MedMNIST v2 demonstrate that our proposed LMTTM model achieves average ACC of 82.4%, attaining state-of-the-art (SOTA) performance. Moreover, ablation studies confirmed that the Linked Memory outperforms its predecessor, TTM’s original Memory, by up to 5.7%, highlighting LMTTM’s effectiveness in 3D volumetric medical image classification and its potential to assist healthcare professionals in diagnosis and treatment planning. The code is released at https://github.com/hongkai-wei/LMTTM-VMI.
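The idea of an external linked memory that accumulates image features can be illustrated with a toy sketch: each slot stores a feature vector plus a link to the previously written slot, and reads are attention-weighted sums over all slots. This is a heavily simplified illustration under assumed mechanics, not the LMTTM architecture itself; the class and method names are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class LinkedMemory:
    """Toy external linked memory: each slot holds a feature vector and the
    index of the previously written slot, so reads can follow write order."""

    def __init__(self, dim):
        self.dim = dim
        self.slots = []  # list of (vector, index of previous slot or None)

    def write(self, v):
        prev = len(self.slots) - 1 if self.slots else None
        self.slots.append((np.asarray(v, float), prev))

    def read(self, query):
        """Attention-weighted read over all stored slots."""
        if not self.slots:
            return np.zeros(self.dim)
        M = np.stack([v for v, _ in self.slots])   # (n_slots, dim)
        w = softmax(M @ np.asarray(query, float))  # similarity weights
        return w @ M

mem = LinkedMemory(dim=3)
mem.write([1.0, 0.0, 0.0])   # e.g. features of one 3D volume slice
mem.write([0.0, 1.0, 0.0])
r = mem.read([1.0, 0.0, 0.0])  # read attends more to the first slot
```

A classifier head would consume such reads; the linking is what distinguishes this from the flat memory of the original TTM.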
{"title":"LMTTM-VMI: Linked Memory Token Turing Machine for 3D volumetric medical image classification","authors":"Hongkai Wei ,&nbsp;Yang Yang ,&nbsp;Shijie Sun ,&nbsp;Mingtao Feng ,&nbsp;Rong Wang ,&nbsp;Xianfeng Han","doi":"10.1016/j.cmpb.2025.108640","DOIUrl":"10.1016/j.cmpb.2025.108640","url":null,"abstract":"<div><div>Biomedical imaging is vital for the diagnosis and treatment of various medical conditions, yet the effective integration of deep learning technologies into this field presents challenges. Traditional methods often struggle to efficiently capture the spatial characteristics and intricate structural features of 3D volumetric medical images, limiting memory utilization and model adaptability. To address this, we introduce a Linked Memory Token Turing Machine (LMTTM), which utilizes external linked memory to efficiently process spatial dependencies and structural complexities within 3D volumetric medical images, aiding in accurate diagnoses. LMTTM can efficiently record the features of 3D volumetric medical images in an external linked memory module, enhancing complex image classification through improved feature accumulation and reasoning capabilities. Our experiments on six 3D volumetric medical image datasets from the MedMNIST v2 demonstrate that our proposed LMTTM model achieves average ACC of 82.4%, attaining state-of-the-art (SOTA) performance. Moreover, ablation studies confirmed that the Linked Memory outperforms its predecessor, TTM’s original Memory, by up to 5.7%, highlighting LMTTM’s effectiveness in 3D volumetric medical image classification and its potential to assist healthcare professionals in diagnosis and treatment planning. 
The code is released at <span><span>https://github.com/hongkai-wei/LMTTM-VMI</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"262 ","pages":"Article 108640"},"PeriodicalIF":4.9,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An accurate and trustworthy deep learning approach for bladder tumor segmentation with uncertainty estimation
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2025-02-10 DOI: 10.1016/j.cmpb.2025.108645
Jie Xu , Haixin Wang , Min Lu , Hai Bi , Deng Li , Zixuan Xue , Qi Zhang
Background and Objective: Although deep learning-based intelligent diagnosis of bladder cancer has achieved excellent performance, the reliability of the predictions produced by neural networks often cannot be evaluated. This study aims to develop a trustworthy AI-based tumor segmentation model that not only outputs predicted results but also provides confidence information about the predictions.
Methods: This paper proposes a novel model for bladder tumor segmentation with uncertainty estimation (BSU), which not only effectively segments the lesion area but also yields an uncertainty map showing the confidence of the segmentation results. In contrast to previous uncertainty estimation approaches, we use test-time augmentation (TTA) and test-time dropout (TTD) to estimate aleatoric and epistemic uncertainty on both internal and external datasets, exploring the effects of both uncertainties on different datasets.
Results: Our BSU model achieved Dice coefficients of 0.766 and 0.848 on the internal and external cystoscopy datasets, respectively, along with accuracies of 0.950 and 0.954. Compared with state-of-the-art methods, our BSU model demonstrated superior performance, which was further validated by the statistical significance of t-tests at the conventional level. Clinical experiments verified the practical value of uncertainty estimation in real-world bladder cancer diagnostics.
Conclusions: The proposed BSU model can visualize the confidence of the segmentation results, serving as a valuable aid for urologists in enhancing both the precision and efficiency of bladder cancer diagnosis in clinical practice.
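The TTA/TTD recipe described in Methods can be sketched generically: run many stochastic forward passes (dropout kept on, inputs perturbed), then take the mean as the prediction and the variance as the uncertainty map. The "network" below is a trivial placeholder, so this is only a sketch of the estimation loop, not of BSU itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x, dropout_p=0.3):
    """Stand-in for one network pass with dropout active at test time (TTD).
    A fixed linear map with random unit-dropping plays the network's role."""
    W = np.eye(x.size)                       # placeholder "weights"
    mask = rng.random(x.size) >= dropout_p   # dropout mask
    return (W @ x) * mask / (1 - dropout_p)  # inverted-dropout scaling

def augment(x):
    """Stand-in for one test-time augmentation (TTA): small input noise."""
    return x + rng.normal(scale=0.05, size=x.shape)

x = np.array([0.9, 0.1, 0.5])  # e.g. per-pixel tumor scores
passes = np.stack([stochastic_forward(augment(x)) for _ in range(200)])
mean_pred = passes.mean(axis=0)    # final prediction
uncertainty = passes.var(axis=0)   # per-output uncertainty map
```

In the paper's framing, variance under TTA reflects aleatoric (data) uncertainty and variance under TTD reflects epistemic (model) uncertainty; the loop above simply combines both sources.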
{"title":"An accurate and trustworthy deep learning approach for bladder tumor segmentation with uncertainty estimation","authors":"Jie Xu ,&nbsp;Haixin Wang ,&nbsp;Min Lu ,&nbsp;Hai Bi ,&nbsp;Deng Li ,&nbsp;Zixuan Xue ,&nbsp;Qi Zhang","doi":"10.1016/j.cmpb.2025.108645","DOIUrl":"10.1016/j.cmpb.2025.108645","url":null,"abstract":"<div><div><em>Background and Objective:</em> Although deep learning-based intelligent diagnosis of bladder cancer has achieved excellent performance, the reliability of neural network predicted results may not be evaluated. This study aims to explore a trustworthy AI-based tumor segmentation model, which not only outputs predicted results but also provides confidence information about the predictions.</div><div><em>Methods:</em> This paper proposes a novel model for bladder tumor segmentation with uncertainty estimation (BSU), which is not merely able to effectively segment the lesion area but also yields an uncertainty map showing the confidence information of the segmentation results. In contrast to previous uncertainty estimation, we utilize test time augmentation (TTA) and test time dropout (TTD) to estimate aleatoric uncertainty and epistemic uncertainty in both internal and external datasets to explore the effects of both uncertainties on different datasets.</div><div><em>Results:</em> Our BSU model achieved the Dice coefficients of 0.766 and 0.848 on internal and external cystoscopy datasets, respectively, along with accuracy of 0.950 and 0.954. Compared to the state-of-the-art methods, our BSU model demonstrated superior performance, which was further validated by the statistically significance of the t-tests at the conventional level. 
Clinical experiments verified the practical value of uncertainty estimation in real-world bladder cancer diagnostics.</div><div><em>Conclusions:</em> The proposed BSU model is able to visualize the confidence of the segmentation results, serving as a valuable addition for assisting urologists in enhancing both the precision and efficiency of bladder cancer diagnoses in clinical practice.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"263 ","pages":"Article 108645"},"PeriodicalIF":4.9,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143418845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving clinical decision making by creating surrogate models from health technology assessment models: A case study on Type 1 Diabetes Melitus
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2025-02-10 DOI: 10.1016/j.cmpb.2025.108646
Rafael Arnay del Arco, Iván Castilla Rodríguez, Marco A. Cabrera Hernández

Background and Objective:

Computerized clinical decision support systems (CDSS) that incorporate the latest scientific evidence are essential for enhancing patient care quality. Such systems typically rely on a model to accurately represent the knowledge required to assist clinicians. Although complex and computationally demanding simulation models are common in this field, such models limit the potential applications of CDSSs, both in real-time applications and in simulation-in-the-loop optimization tools. This paper presents a case study on Type 1 Diabetes Mellitus (T1DM) to demonstrate the development of surrogate models from health technology assessment models, with the aim of enhancing the potential of CDSSs.

Methods:

The paper details the process of developing machine learning (ML) based surrogate models, including the generation of a dataset for training and testing, and the comparison of different ML techniques. A number of distinct groupings of comorbidities were utilized in the creation of models, which were trained to predict confidence intervals for the time to develop each complication.

Results:

The results of the intersection over union (IoU) analysis between the simulation model output and the surrogate models output for the comorbidities under study were greater than 0.9.

Conclusion:

The study concludes that ML-based surrogate models are a viable solution for real-time clinical decision-making, offering a substantial speedup in execution time compared to traditional simulation models.
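The IoU metric reported in Results, applied to confidence intervals for time-to-complication, reduces to intersection over union of two 1-D intervals. A minimal sketch (the function name and example values are illustrative, not from the paper):

```python
def interval_iou(a, b):
    """Intersection over union of two 1-D confidence intervals (lo, hi)."""
    lo_a, hi_a = a
    lo_b, hi_b = b
    inter = max(0.0, min(hi_a, hi_b) - max(lo_a, lo_b))
    union = (hi_a - lo_a) + (hi_b - lo_b) - inter
    return inter / union if union > 0 else 0.0

# Simulation model predicts 10-20 years to a complication; surrogate 11-21.
print(interval_iou((10.0, 20.0), (11.0, 21.0)))  # 9/11 ≈ 0.818
```

An IoU above 0.9, as reported, means the surrogate's interval almost coincides with the simulation model's while being far cheaper to evaluate.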
{"title":"Improving clinical decision making by creating surrogate models from health technology assessment models: A case study on Type 1 Diabetes Melitus","authors":"Rafael Arnay del Arco,&nbsp;Iván Castilla Rodríguez,&nbsp;Marco A. Cabrera Hernández","doi":"10.1016/j.cmpb.2025.108646","DOIUrl":"10.1016/j.cmpb.2025.108646","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Computerized clinical decision support systems (CDSS) that incorporate the latest scientific evidence are essential for enhancing patient care quality. Such systems typically rely on some kind of model to accurately represent the knowledge required to assess the clinicians. Although the use of complex and computationally demanding simulation models is common in this field, such models limit the potential applications of CDSSs, both in real-time applications and in simulation-in-the-loop optimization tools. This paper presents a case study on Type 1 Diabetes Mellitus (T1DM) to demonstrate the development of surrogate models from health technology assessment models, with the aim of enhancing the potential of CDSSs.</div></div><div><h3>Methods:</h3><div>The paper details the process of developing machine learning (ML) based surrogate models, including the generation of a dataset for training and testing, and the comparison of different ML techniques. 
A number of distinct groupings of comorbidities were utilized in the creation of models, which were trained to predict confidence intervals for the time to develop each complication.</div></div><div><h3>Results:</h3><div>The results of the intersection over union (IoU) analysis between the simulation model output and the surrogate models output for the comorbidities under study were greater than 0.9.</div></div><div><h3>Conclusion:</h3><div>The study concludes that ML-based surrogate models are a viable solution for real-time clinical decision-making, offering a substantial speedup in execution time compared to traditional simulation models.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"262 ","pages":"Article 108646"},"PeriodicalIF":4.9,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Methods for estimating resting energy expenditure in intensive care patients: A comparative study of predictive equations with machine learning and deep learning approaches
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2025-02-09 DOI: 10.1016/j.cmpb.2025.108657
Christopher Yew Shuen Ang , Mohd Basri Mat Nor , Nur Sazwi Nordin , Thant Zin Kyi , Ailin Razali , Yeong Shiong Chiew

Background

Accurate estimation of resting energy expenditure (REE) is critical for guiding nutritional therapy in critically ill patients. While indirect calorimetry (IC) is the gold standard for REE measurement, it is not routinely feasible in clinical settings due to its complexity and cost. Predictive equations (PEs) offer a simpler alternative but are often inaccurate in critically ill populations. While recent advancements in machine learning (ML) and deep learning (DL) offer potential for improving REE estimation by capturing complex relationships between physiological variables, these approaches have not yet been widely applied or validated in critically ill populations.

Methodology

This prospective study compared the performance of nine commonly used PEs, including the Harris-Benedict (H-B1919), Penn State, and TAH equations, with ML models (XGBoost, Random Forest Regressor [RFR], Support Vector Regression), and DL models (Convolutional Neural Networks [CNN]) in estimating REE in critically ill patients. A dataset of 300 IC measurements from an intensive care unit (ICU) was used, with REE measured by both IC and PEs. The ML/DL models were trained using a combination of static (i.e., age, height, body weight) and dynamic (i.e., minute ventilation, body temperature) variables. A five-fold cross validation was performed to assess the model prediction performance using the root mean square error (RMSE) metric.

Results

Of the PEs analysed, H-B1919 yielded the lowest RMSE at 362 calories. However, the XGBoost and RFR models significantly outperformed all PEs, achieving RMSE values of 199 and 200 calories, respectively. The CNN model demonstrated the poorest performance among ML models, with an RMSE of 250 calories. The inclusion of additional categorical variables such as body mass index (BMI) and body temperature classes slightly reduced RMSE across ML and DL models. Despite data augmentation and imputation techniques, no significant improvements in model performance were observed.

Conclusion

ML models, particularly XGBoost and RFR, provide more accurate REE estimations than traditional PEs, highlighting their potential to better capture the complex, non-linear relationships between physiological variables and REE. These models offer a promising alternative for guiding nutritional therapy in clinical settings, though further validation on independent datasets and across diverse patient populations is warranted.
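The evaluation protocol above (five-fold cross-validation scored by RMSE) can be sketched without the study's data or models. The least-squares fit below is only a stand-in where the study's XGBoost/RFR/SVR/CNN models would go, and the synthetic 300-row dataset merely mimics the shape of the IC cohort; the 50-calorie noise level is an assumption of the example.

```python
import numpy as np

def kfold_rmse(X, y, k=5, seed=0):
    """k-fold cross-validated RMSE for a least-squares baseline model.
    The study's models would replace the lstsq fit/predict lines; this
    sketch only shows the evaluation loop."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    rmses = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ coef
        rmses.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(rmses))

# Synthetic stand-in for the 300 IC measurements (features -> REE calories).
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(300, 4)), np.ones((300, 1))])  # + intercept
y = X @ np.array([30.0, -12.0, 8.0, 5.0, 1500.0]) + rng.normal(scale=50, size=300)
print(round(kfold_rmse(X, y), 1))  # RMSE near the 50-calorie noise floor
```

Averaging RMSE over the held-out folds, as here, is what makes the reported 199-calorie (XGBoost) versus 362-calorie (H-B1919) comparison an out-of-sample one.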
{"title":"Methods for estimating resting energy expenditure in intensive care patients: A comparative study of predictive equations with machine learning and deep learning approaches","authors":"Christopher Yew Shuen Ang ,&nbsp;Mohd Basri Mat Nor ,&nbsp;Nur Sazwi Nordin ,&nbsp;Thant Zin Kyi ,&nbsp;Ailin Razali ,&nbsp;Yeong Shiong Chiew","doi":"10.1016/j.cmpb.2025.108657","DOIUrl":"10.1016/j.cmpb.2025.108657","url":null,"abstract":"<div><h3>Background</h3><div>Accurate estimation of resting energy expenditure (REE) is critical for guiding nutritional therapy in critically ill patients. While indirect calorimetry (IC) is the gold standard for REE measurement, it is not routinely feasible in clinical settings due to its complexity and cost. Predictive equations (PEs) offer a simpler alternative but are often inaccurate in critically ill populations. While recent advancements in machine learning (ML) and deep learning (DL) offer potential for improving REE estimation by capturing complex relationships between physiological variables, these approaches have not yet been widely applied or validated in critically ill populations.</div></div><div><h3>Methodology</h3><div>This prospective study compared the performance of nine commonly used PEs, including the Harris-Benedict (H-B1919), Penn State, and TAH equations, with ML models (XGBoost, Random Forest Regressor [RFR], Support Vector Regression), and DL models (Convolutional Neural Networks [CNN]) in estimating REE in critically ill patients. A dataset of 300 IC measurements from an intensive care unit (ICU) was used, with REE measured by both IC and PEs. The ML/DL models were trained using a combination of static (i.e., age, height, body weight) and dynamic (i.e., minute ventilation, body temperature) variables. 
A five-fold cross validation was performed to assess the model prediction performance using the root mean square error (RMSE) metric.</div></div><div><h3>Results</h3><div>Of the PEs analysed, H-B1919 yielded the lowest RMSE at 362 calories. However, the XGBoost and RFR models significantly outperformed all PEs, achieving RMSE values of 199 and 200 calories, respectively. The CNN model demonstrated the poorest performance among ML models, with an RMSE of 250 calories. The inclusion of additional categorical variables such as body mass index (BMI) and body temperature classes slightly reduced RMSE across ML and DL models. Despite data augmentation and imputation techniques, no significant improvements in model performance were observed.</div></div><div><h3>Conclusion</h3><div>ML models, particularly XGBoost and RFR, provide more accurate REE estimations than traditional PEs, highlighting their potential to better capture the complex, non-linear relationships between physiological variables and REE. These models offer a promising alternative for guiding nutritional therapy in clinical settings, though further validation on independent datasets and across diverse patient populations is warranted.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"262 ","pages":"Article 108657"},"PeriodicalIF":4.9,"publicationDate":"2025-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143402668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0