
Latest publications in Computers in biology and medicine

Stacking based ensemble learning framework for identification of nitrotyrosine sites.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-03 | DOI: 10.1016/j.compbiomed.2024.109200
Aiman Parvez, Syed Danish Ali, Hilal Tayara, Kil To Chong

Protein nitrotyrosine is an essential post-translational modification that results from the nitration of tyrosine amino acid residues. This modification is known to be associated with the regulation and characterization of several biological functions and diseases. Therefore, accurate identification of nitrotyrosine sites plays a significant role in elucidating the associated biological processes. In this regard, we report an accurate computational tool, iNTyro-Stack, for the identification of protein nitrotyrosine sites. iNTyro-Stack is a machine-learning model based on a stacking algorithm. The base classifiers in the stack are selected based on the highest performance. The feature map employed is a linear combination of amino acid composition encoding schemes, namely the composition of k-spaced amino acid pairs and tri-peptide composition. Recursive feature elimination is used to select the most informative features. The performance of the proposed method is evaluated using k-fold cross-validation and independent testing. iNTyro-Stack achieved an accuracy of 86.3% and a Matthews correlation coefficient (MCC) of 72.6% in cross-validation. Its generalization capability was further validated on an imbalanced independent test set, where it attained an accuracy of 69.32%. iNTyro-Stack outperforms existing state-of-the-art methods across both evaluation settings. A GitHub repository has been created to reproduce the method and results of iNTyro-Stack, accessible at https://github.com/waleed551/iNTyro-Stack/.
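The abstract outlines a standard pipeline: sequence-derived composition features, recursive feature elimination, and a stacked ensemble evaluated by k-fold cross-validation. The sketch below illustrates that kind of pipeline with scikit-learn on synthetic placeholder data; the base learners, feature dimensions, and number of retained features are assumptions, not the authors' configuration.

```python
# Minimal sketch of a stacking ensemble with recursive feature elimination,
# assuming pre-computed sequence features (e.g., CKSAAP and tri-peptide
# composition vectors); data and base learners are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 300))           # placeholder feature map (CKSAAP + TPC)
y = rng.integers(0, 2, size=200)     # placeholder nitrotyrosine labels

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))

# RFE keeps the most informative features before the stacked classifier.
model = make_pipeline(
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=100),
    stack,
)
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("5-fold CV accuracy:", scores.mean())
```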

{"title":"Stacking based ensemble learning framework for identification of nitrotyrosine sites.","authors":"Aiman Parvez, Syed Danish Ali, Hilal Tayara, Kil To Chong","doi":"10.1016/j.compbiomed.2024.109200","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109200","url":null,"abstract":"<p><p>Protein nitrotyrosine is an essential post-translational modification that results from the nitration of tyrosine amino acid residues. This modification is known to be associated with the regulation and characterization of several biological functions and diseases. Therefore, accurate identification of nitrotyrosine sites plays a significant role in the elucidating progress of associated biological signs. In this regard, we reported an accurate computational tool known as iNTyro-Stack for the identification of protein nitrotyrosine sites. iNTyro-Stack is a machine-learning model based on a stacking algorithm. The base classifiers in stacking are selected based on the highest performance. The feature map employed is a linear combination of the amino composition encoding schemes, including the composition of k-spaced amino acid pairs and tri-peptide composition. The recursive feature elimination technique is used for significant feature selection. The performance of the proposed method is evaluated using k-fold cross-validation and independent testing approaches. iNTyro-Stack achieved an accuracy of 86.3% and a Matthews correlation coefficient (MCC) of 72.6% in cross-validation. Its generalization capability was further validated on an imbalanced independent test set, where it attained an accuracy of 69.32%. iNTyro-Stack outperforms existing state-of-the-art methods across both evaluation techniques. The github repository is create to reproduce the method and results of iNTyro-Stack, accessible on: https://github.com/waleed551/iNTyro-Stack/.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Two-stage deep learning framework for occlusal crown depth image generation.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-03 | DOI: 10.1016/j.compbiomed.2024.109220
Junghyun Roh, Junhwi Kim, Jimin Lee

The generation of depth images of occlusal dental crowns is complicated by the need for customization in each case. To decrease the workload of skilled dental technicians, various computer vision models have been used to generate realistic occlusal crown depth images with well-defined crown surface structures that can ultimately be reconstructed into three-dimensional crowns and used directly in patient treatment. However, it has remained difficult for computer vision models to generate images of dental crown structures in a fluid position. In this paper, we propose a two-stage model for generating depth images of occlusal crowns in diverse positions. The model is divided into two parts, segmentation and inpainting, to achieve accuracy in both shape and surface structure. The segmentation network focuses on the position and size of the crowns, which allows the model to adapt to diverse targets. The GAN-based inpainting network generates the curved structures of the crown surfaces from the target jaw image and a binary mask produced by the segmentation network. The performance of the model is evaluated via quantitative metrics for area detection and pixel values. Compared to the baseline model, the proposed method reduced the MSE score from 0.007001 to 0.002618 and increased the DICE score from 0.9333 to 0.9648. This indicates that adding the segmentation network improved the binary mask, while the inpainting network improved the internal structure. The results also demonstrate an improved ability of the proposed model to restore realistic details compared to other models.
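The two reported evaluation metrics are per-image MSE on the generated depth map and the Dice coefficient on the binary crown mask. The sketch below shows how these are typically computed; the array shapes and the 0.5 threshold are illustrative assumptions, not the paper's evaluation code.

```python
# Minimal sketch of the two reported metrics: MSE on depth images and the
# Dice coefficient on binary masks. Inputs are synthetic placeholders.
import numpy as np

def mse(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error between predicted and reference depth images."""
    return float(np.mean((pred - target) ** 2))

def dice(pred_mask: np.ndarray, target_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred_mask, target_mask).sum()
    return float((2.0 * inter + eps) / (pred_mask.sum() + target_mask.sum() + eps))

pred_depth = np.random.rand(256, 256)   # placeholder generated depth image
gt_depth = np.random.rand(256, 256)     # placeholder ground-truth depth image
print("MSE :", mse(pred_depth, gt_depth))
print("Dice:", dice(pred_depth > 0.5, gt_depth > 0.5))
```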

{"title":"Two-stage deep learning framework for occlusal crown depth image generation.","authors":"Junghyun Roh, Junhwi Kim, Jimin Lee","doi":"10.1016/j.compbiomed.2024.109220","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109220","url":null,"abstract":"<p><p>The generation of depth images of occlusal dental crowns is complicated by the need for customization in each case. To decrease the workload of skilled dental technicians, various computer vision models have been used to generate realistic occlusal crown depth images with definite crown surface structures that can ultimately be reconstructed to three-dimensional crowns and directly used in patient treatment. However, it has remained difficult to generate images of the structure of dental crowns in a fluid position using computer vision models. In this paper, we propose a two-stage model for generating depth images of occlusal crowns in diverse positions. The model is divided into two parts: segmentation and inpainting to obtain both shape and surface structure accuracy. The segmentation network focuses on the position and size of the crowns, which allows the model to adapt to diverse targets. The inpainting network based on a GAN generates curved structures of the crown surfaces based on the target jaw image and a binary mask made by the segmentation network. The performance of the model is evaluated via quantitative metrics for the area detection and pixel-value metrics. Compared to the baseline model, the proposed method reduced the MSE score from 0.007001 to 0.002618 and increased DICE score from 0.9333 to 0.9648. It indicates that the model showed better performance in terms of the binary mask from the addition of the segmentation network and the internal structure through the use of inpainting networks. Also, the results demonstrated an improved ability of the proposed model to restore realistic details compared to other models.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Shuffled ECA-Net for stress detection from multimodal wearable sensor data.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-03 | DOI: 10.1016/j.compbiomed.2024.109217
Namho Kim, Seongjae Lee, Junho Kim, So Yoon Choi, Sung-Min Park

Background: Recently, stress has been recognized as a key factor in the emergence of individual and social issues. Numerous attempts have been made to develop sensor-augmented psychological stress detection techniques, although existing methods are often impractical or overly subjective. To overcome these limitations, we acquired a dataset utilizing both wireless wearable multimodal sensors and salivary cortisol tests for supervised learning. We also developed a novel deep neural network (DNN) model that maximizes the benefits of sensor fusion.

Method: We devised a DNN involving a shuffled efficient channel attention (ECA) module called a shuffled ECA-Net, which achieves advanced feature-level sensor fusion by considering inter-modality relationships. Through an experiment involving salivary cortisol tests on 26 participants, we acquired multiple bio-signals including electrocardiograms, respiratory waveforms, and electrogastrograms in both relaxed and stressed mental states. A training dataset was generated from the obtained data. Using the dataset, our proposed model was optimized and evaluated ten times through five-fold cross-validation, while varying the random seed.

Results: Our proposed model achieved acceptable performance in stress detection, showing 0.916 accuracy, 0.917 sensitivity, 0.916 specificity, 0.914 F1-score, and 0.964 area under the receiver operating characteristic curve (AUROC). Furthermore, we demonstrated that combining multiple bio-signals with a shuffled ECA module can more accurately detect psychological stress.

Conclusions: We believe that our proposed model, coupled with the evidence for the viability of multimodal sensor fusion and a shuffled ECA-Net, would significantly contribute to the resolution of stress-related issues.
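The Methods describe a shuffled efficient channel attention (ECA) module for feature-level fusion of multimodal bio-signals. The sketch below shows one plausible reading of such a block in PyTorch, combining a ShuffleNet-style channel shuffle with standard ECA channel attention; the group count, kernel size, and tensor layout are assumptions, and the paper's exact architecture is not reproduced.

```python
# Minimal PyTorch sketch of a "shuffled ECA" block: channel shuffle followed by
# ECA-style channel attention on 1-D bio-signal features. All sizes are placeholders.
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    b, c, t = x.shape                                   # (batch, channels, time)
    x = x.view(b, groups, c // groups, t).transpose(1, 2).contiguous()
    return x.view(b, c, t)

class ShuffledECA(nn.Module):
    def __init__(self, groups: int = 4, kernel_size: int = 3):
        super().__init__()
        self.groups = groups
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = channel_shuffle(x, self.groups)                 # mix channels across modalities
        w = self.pool(x)                                    # (b, c, 1) channel descriptors
        w = self.conv(w.transpose(1, 2)).transpose(1, 2)    # 1-D conv across channels
        return x * self.sigmoid(w)                          # re-weight channels

feats = torch.randn(8, 64, 128)   # placeholder fused ECG/respiration/EGG features
print(ShuffledECA()(feats).shape)
```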

{"title":"Shuffled ECA-Net for stress detection from multimodal wearable sensor data.","authors":"Namho Kim, Seongjae Lee, Junho Kim, So Yoon Choi, Sung-Min Park","doi":"10.1016/j.compbiomed.2024.109217","DOIUrl":"https://doi.org/10.1016/j.compbiomed.2024.109217","url":null,"abstract":"<p><strong>Background: </strong>Recently, stress has been recognized as a key factor in the emergence of individual and social issues. Numerous attempts have been made to develop sensor-augmented psychological stress detection techniques, although existing methods are often impractical or overly subjective. To overcome these limitations, we acquired a dataset utilizing both wireless wearable multimodal sensors and salivary cortisol tests for supervised learning. We also developed a novel deep neural network (DNN) model that maximizes the benefits of sensor fusion.</p><p><strong>Method: </strong>We devised a DNN involving a shuffled efficient channel attention (ECA) module called a shuffled ECA-Net, which achieves advanced feature-level sensor fusion by considering inter-modality relationships. Through an experiment involving salivary cortisol tests on 26 participants, we acquired multiple bio-signals including electrocardiograms, respiratory waveforms, and electrogastrograms in both relaxed and stressed mental states. A training dataset was generated from the obtained data. Using the dataset, our proposed model was optimized and evaluated ten times through five-fold cross-validation, while varying a random seed.</p><p><strong>Results: </strong>Our proposed model achieved acceptable performance in stress detection, showing 0.916 accuracy, 0.917 sensitivity, 0.916 specificity, 0.914 F1-score, and 0.964 area under the receiver operating characteristic curve (AUROC). Furthermore, we demonstrated that combining multiple bio-signals with a shuffled ECA module can more accurately detect psychological stress.</p><p><strong>Conclusions: </strong>We believe that our proposed model, coupled with the evidence for the viability of multimodal sensor fusion and a shuffled ECA-Net, would significantly contribute to the resolution of stress-related issues.</p>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lightweight medical image segmentation network with multi-scale feature-guided fusion.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-03 | DOI: 10.1016/j.compbiomed.2024.109204
Zhiqin Zhu, Kun Yu, Guanqiu Qi, Baisen Cong, Yuanyuan Li, Zexin Li, Xinbo Gao

In the field of computer-aided medical diagnosis, it is crucial to adapt medical image segmentation to limited computing resources. There is tremendous value in developing accurate, real-time vision processing models that require minimal computational resources. When building lightweight models, there is always a trade-off between computational cost and segmentation performance, and performance often suffers when models are deployed in scenarios constrained by computation, memory, or storage. This remains an ongoing challenge. This paper proposes a lightweight network for medical image segmentation. It introduces a lightweight transformer, proposes a simplified core feature extraction network to capture more semantic information, and builds a multi-scale feature interaction guidance framework. The fusion module embedded in this framework is designed to address spatial and channel complexities. Through the multi-scale feature interaction guidance framework and the fusion module, the proposed network achieves robust semantic information extraction from low-resolution feature maps and rich spatial information retrieval from high-resolution feature maps while maintaining segmentation performance. This significantly reduces the parameters required to maintain deep features within the network, resulting in faster inference and reduced floating-point operations (FLOPs) and parameter counts. Experimental results on the ISIC2017 and ISIC2018 datasets confirm the effectiveness of the proposed network in medical image segmentation tasks. For instance, on the ISIC2017 dataset, the proposed network achieved a segmentation accuracy of 82.33% mIoU and a speed of 71.26 FPS on 256 × 256 images using a GeForce GTX 3090 GPU. Furthermore, the proposed network is extremely lightweight, containing only 0.524M parameters. The corresponding source codes are available at https://github.com/CurbUni/LMIS-lightweight-network.
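The abstract's lightweight-model evidence is parameter count (0.524M) and inference speed (FPS) on 256 × 256 inputs. The sketch below shows how these two figures are commonly measured in PyTorch, using a tiny stand-in model rather than the proposed network; the loop count and input size are illustrative assumptions.

```python
# Minimal sketch of reporting parameter count (in millions) and FPS for a
# segmentation network; the stand-in model below is NOT the paper's architecture.
import time
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny stand-in for a lightweight network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
params_m = sum(p.numel() for p in model.parameters()) / 1e6
print(f"Parameters: {params_m:.3f}M")

x = torch.randn(1, 3, 256, 256)              # 256 x 256 input as in the abstract
model.eval()
with torch.no_grad():
    start = time.time()
    for _ in range(50):                      # average over repeated forward passes
        model(x)
print(f"FPS: {50 / (time.time() - start):.1f}")
```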

Citations: 0
Portable noninvasive technologies for early breast cancer detection: A systematic review.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109219
Shadrack O Aboagye, John A Hunt, Graham Ball, Yang Wei

Breast cancer remains a leading cause of cancer mortality worldwide, and early detection is crucial for improving outcomes. This systematic review evaluates recent advances in portable non-invasive technologies for early breast cancer detection, assessing their methods, performance, and potential for clinical implementation. A comprehensive literature search was conducted across major databases for relevant studies published between 2015 and 2024. Data on technology types, detection methods, and diagnostic performance were extracted and synthesized from 41 included studies. The review examined microwave imaging, electrical impedance tomography (EIT), thermography, bioimpedance spectroscopy (BIS), and pressure-sensing technologies. Microwave imaging and EIT showed the most promise, with some studies reporting sensitivities and specificities over 90%. However, most technologies are still in early stages of development with limited large-scale clinical validation. These innovations could complement existing gold standards, potentially improving screening rates and outcomes, especially in underserved populations, while decreasing screening waiting times in developed countries. Further research is therefore needed to validate their clinical efficacy, address implementation challenges, and assess their impact on patient outcomes before widespread adoption can be recommended.

Citations: 0
On-site burn severity assessment using smartphone-captured color burn wound images.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109171
Xiayu Xu, Qilong Bu, Jingmeng Xie, Hang Li, Feng Xu, Jing Li

Accurate assessment of burn severity is crucial for the management of burn injuries. Currently, clinicians mainly rely on visual inspection to assess burns, an approach characterized by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform that uses color burn wound images for automatic burn severity assessment. To do this, we propose a novel joint-task deep learning model capable of simultaneously segmenting both burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application is developed to facilitate fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12% and 85.36%, respectively. The R² for %TBSA assessment is 0.9136. The source codes for the joint-task framework and the application are released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). The proposed platform holds the potential to be widely used in clinical settings to facilitate fast and precise burn assessment.
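Since the joint-task model exists to support %TBSA estimation, a pixel-level approximation is sketched below: the burned fraction is the ratio of burn-mask pixels (within the body mask) to body-mask pixels. The synthetic masks and the simple ratio are illustrative assumptions; the paper's %TBSA calculation may weight body parts differently.

```python
# Minimal sketch of a pixel-level %TBSA approximation from the two segmentation
# outputs (burn-region mask and body-part mask). Masks here are synthetic placeholders.
import numpy as np

body_mask = np.zeros((512, 512), dtype=bool)
body_mask[100:400, 150:350] = True           # placeholder body-part segmentation
burn_mask = np.zeros_like(body_mask)
burn_mask[200:260, 200:280] = True           # placeholder burn-region segmentation

burned_pixels = np.logical_and(burn_mask, body_mask).sum()
tbsa_percent = 100.0 * burned_pixels / body_mask.sum()
print(f"Estimated %TBSA: {tbsa_percent:.2f}%")
```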

Citations: 0
Pan-cancer characterization of cellular senescence reveals its inter-tumor heterogeneity associated with the tumor microenvironment and prognosis.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109196
Kang Li, Chen Guo, Rufeng Li, Yufei Yao, Min Qiang, Yuanyuan Chen, Kangsheng Tu, Yungang Xu

Cellular senescence (CS) is characterized by irreversible cell cycle arrest and plays a key role in aging and diseases such as cancer. Recent years have witnessed burgeoning exploration of the intricate relationship between CS and cancer, with CS recognized as either a suppressing or a promoting factor and officially acknowledged as one of the 14 cancer hallmarks. However, a comprehensive characterization of how this relationship diverges across cancer types, and of its involvement in the many facets of tumor development, is still lacking. Here we systematically assessed the cellular senescence of over 10,000 tumor samples from 33 cancer types, starting by defining a set of cancer-associated CS signatures and deriving a quantitative metric representing CS status, called the CS score. We then investigated CS heterogeneity and its intricate relationship with prognosis, immune infiltration, and therapeutic responses across different cancers. As a result, cellular senescence demonstrated two distinct prognostic groups: a protective group with eleven cancers, such as LIHC, and a risky group with four cancers, including STAD. Subsequent in-depth investigations of these two groups unveiled the potential molecular and cellular mechanisms underlying the distinct effects of cellular senescence, involving the divergent activation of specific pathways and variances in immune cell infiltration. These results were further supported by the disparate associations of CS status with responses to immuno- and chemo-therapies observed between the two groups. Overall, our study offers a deeper understanding of the inter-tumor heterogeneity of cellular senescence associated with the tumor microenvironment and cancer prognosis.
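The study derives a quantitative CS score from cancer-associated senescence signatures, but the abstract does not specify the scoring scheme. The sketch below shows one common signature-scoring approach, the mean per-gene z-score across a gene set, on placeholder expression data; the gene names, data, and scoring rule are illustrative assumptions, not the authors' definition of the CS score.

```python
# Minimal sketch of a signature score: average z-scored expression of a gene set
# per tumor sample. Expression values and signature genes are placeholders.
import numpy as np
import pandas as pd

expr = pd.DataFrame(                                    # genes x samples, placeholder values
    np.random.lognormal(mean=2.0, sigma=0.5, size=(5, 4)),
    index=["CDKN1A", "CDKN2A", "GLB1", "SERPINE1", "IL6"],
    columns=[f"sample_{i}" for i in range(4)],
)
cs_signature = ["CDKN1A", "CDKN2A", "SERPINE1"]         # illustrative senescence genes

z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)  # per-gene z-score
cs_score = z.loc[cs_signature].mean(axis=0)             # per-sample signature score
print(cs_score)
```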

Citations: 0
A joint analysis proposal of nonlinear longitudinal and time-to-event right-, interval-censored data for modeling pregnancy miscarriage.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109186
Rolando de la Cruz, Marc Lavielle, Cristian Meza, Vicente Núñez-Antón

Pregnancies achieved by in-vitro fertilization (IVF) are associated with adverse first-trimester outcomes compared with spontaneously achieved pregnancies. The human chorionic gonadotrophin β subunit (β-HCG) is a well-known biomarker for the diagnosis and monitoring of pregnancy after IVF. Low levels of β-HCG during this period are related to miscarriage, ectopic pregnancy, and IVF procedure failures. Longitudinal profiles of β-HCG can be used to distinguish between normal and abnormal pregnancies and to assist and guide the clinician in better management and monitoring of post-IVF pregnancies. Therefore, assessing the association between longitudinally measured β-HCG serum concentration and time to early miscarriage is of crucial interest to clinicians. A common joint modeling approach is to use the longitudinal β-HCG trajectory to determine the risk of miscarriage. This work was motivated by a follow-up study of normal and abnormal pregnancies in Santiago, Chile, in which β-HCG serum concentrations were measured in 173 young women at gestational ages of 9-86 days. Some women experienced a miscarriage, and their exact event times were unknown, yielding interval-censored data, with the event occurring between the time of the last observed measurement and ten days later. In contrast, for women in the normal pregnancy group, that is, those carrying a pregnancy to full term, right-censored data are observed. Estimation procedures are based on the Stochastic Approximation of the Expectation-Maximization (SAEM) algorithm.
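The abstract distinguishes interval-censored miscarriages (event known only to fall between the last measurement and ten days later) from right-censored full-term pregnancies. The sketch below illustrates how the two censoring schemes enter a survival likelihood, using a Weibull survival function as a stand-in for the joint model; the distribution and parameter values are illustrative assumptions, not the fitted model.

```python
# Minimal sketch of censored likelihood contributions: interval-censored events
# contribute S(L) - S(R); right-censored observations contribute S(C).
import numpy as np

def weibull_survival(t, shape=1.5, scale=60.0):
    """S(t) for an illustrative Weibull time-to-miscarriage model (days)."""
    return np.exp(-(np.asarray(t) / scale) ** shape)

# Interval-censored: event occurred between the last visit L and L + 10 days.
L, R = 40.0, 50.0
lik_interval = weibull_survival(L) - weibull_survival(R)

# Right-censored: pregnancy observed event-free up to gestational day C.
C = 86.0
lik_right = weibull_survival(C)

print(f"interval-censored contribution: {lik_interval:.4f}")
print(f"right-censored contribution:    {lik_right:.4f}")
```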

Citations: 0
MediAlbertina: An European Portuguese medical language model.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109233
Miguel Nunes, João Boné, João C Ferreira, Pedro Chaves, Luis B Elvas

Background: Patient medical information often exists in unstructured text containing abbreviations and acronyms deemed essential to conserve time and space but posing challenges for automated interpretation. Leveraging the efficacy of Transformers in natural language processing, our objective was to use the knowledge acquired by a language model and continue its pre-training to develop a European Portuguese (PT-PT) healthcare-domain language model.

Methods: After carrying out a filtering process, Albertina PT-PT 900M was selected as our base language model, and we continued its pre-training using more than 2.6 million electronic medical records from Portugal's largest public hospital. MediAlbertina 900M has been created through domain adaptation on this data using masked language modelling.

Results: The comparison with our baseline was based both on perplexity, which decreased from about 20 to 1.6, and on the fine-tuning and evaluation of information extraction models such as Named Entity Recognition and Assertion Status. MediAlbertina PT-PT outperformed Albertina PT-PT on both tasks by 4-6% in recall and F1-score.

Conclusions: This study contributes the first publicly available medical language model trained with PT-PT data. It underscores the efficacy of domain adaptation and offers a contribution to the scientific community in overcoming the obstacles of non-English languages. With MediAlbertina, further steps can be taken to assist physicians, such as creating decision support systems or building medical timelines for patient profiling, by fine-tuning it for PT-PT medical tasks.
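The Methods describe continued pre-training of Albertina PT-PT 900M on clinical text with masked language modelling, and the Results report perplexity. The sketch below shows what such domain adaptation typically looks like with the Hugging Face transformers and datasets libraries; the model identifier, data file, and training settings are placeholder assumptions, not the authors' setup.

```python
# Minimal sketch of continued pre-training with masked language modelling (MLM)
# on domain text, with perplexity computed as exp(evaluation loss).
import math
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "PORTULAN/albertina-900m-portuguese-ptpt-encoder"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForMaskedLM.from_pretrained(base_model)

# "clinical_notes.txt" is a placeholder for the de-identified domain corpus.
dataset = load_dataset("text", data_files={"train": "clinical_notes.txt"})["train"]
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medialbertina", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
print("perplexity:", math.exp(trainer.evaluate(tokenized)["eval_loss"]))
```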

Citations: 0
Towards dental diagnostic systems: Synergizing wavelet transform with generative adversarial networks for enhanced image data fusion.
IF 7 | Zone 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-10-02 | DOI: 10.1016/j.compbiomed.2024.109241
Abdullah A Al-Haddad, Luttfi A Al-Haddad, Sinan A Al-Haddad, Alaa Abdulhady Jaber, Zeashan Hameed Khan, Hafiz Zia Ur Rehman

Precision diagnostics in pediatric dentistry is shifting towards ensuring early detection of dental diseases, a critical factor in safeguarding the oral health of the younger population. In this study, an innovative approach is introduced wherein the Discrete Wavelet Transform (DWT) and Generative Adversarial Networks (GANs) are synergized within an Image Data Fusion (IDF) framework to enhance the accuracy of dental disease diagnosis in dental diagnostic systems. Dental panoramic radiographs from pediatric patients were utilized to demonstrate how the integration of DWT and GANs can significantly improve the informativeness of dental images. In the IDF process, the original images, GAN-augmented images, and wavelet-transformed images are combined to create a comprehensive dataset. DWT was employed to decompose images into frequency components and enhance the visibility of subtle pathological features. Simultaneously, GANs were used to augment the dataset with high-quality synthetic radiographic images indistinguishable from real ones, providing robust training data. These integrated images are then fed into an Artificial Neural Network (ANN) for the classification of dental diseases. The use of the ANN in this context demonstrates the system's robustness, culminating in an accuracy of 0.897, precision of 0.905, recall of 0.897, and specificity of 0.968. Additionally, this study explores the feasibility of embedding the diagnostic system into dental X-ray scanners by leveraging lightweight models and cloud-based solutions to minimize resource constraints. Such integration is posited to revolutionize dental care by providing real-time, accurate disease detection capabilities, significantly reducing diagnostic delays and enhancing treatment outcomes.
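The abstract combines DWT decomposition with GAN-augmented images inside an image data fusion step. The sketch below illustrates only the wavelet side of that idea with PyWavelets: a 2-D DWT of two radiographs and a simple coefficient-level fusion. The fusion rule (average approximations, keep the stronger detail coefficients) and the synthetic inputs are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of wavelet-domain image fusion: decompose two images with a
# 2-D DWT, fuse approximation and detail coefficients, and reconstruct.
import numpy as np
import pywt

original = np.random.rand(256, 256)      # placeholder panoramic radiograph
synthetic = np.random.rand(256, 256)     # placeholder GAN-augmented radiograph

cA1, (cH1, cV1, cD1) = pywt.dwt2(original, "haar")
cA2, (cH2, cV2, cD2) = pywt.dwt2(synthetic, "haar")

def fuse_detail(a, b):
    """Keep the coefficient with the larger magnitude (stronger edge)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

fused_cA = (cA1 + cA2) / 2.0             # average low-frequency content
fused = pywt.idwt2(
    (fused_cA, (fuse_detail(cH1, cH2), fuse_detail(cV1, cV2), fuse_detail(cD1, cD2))),
    "haar",
)
print(fused.shape)
```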

Citations: 0