
Latest publications from Journal of X-Ray Science and Technology

Radiomics meets transformers: A novel approach to tumor segmentation and classification in mammography for breast cancer.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-07-29 DOI: 10.1177/08953996251351624 Pages: 1039-1058
Mohamed J Saadh, Qusay Mohammed Hussain, Rafid Jihad Albadr, Hardik Doshi, M M Rekha, Mayank Kundlas, Amrita Pal, Jasur Rizaev, Waam Mohammed Taher, Mariem Alwan, Mahmod Jasem Jawad, Ali M Ali Al-Nuaimi, Bagher Farhood

Objective: This study aimed to develop a robust framework for breast cancer diagnosis by integrating advanced segmentation and classification approaches. Transformer-based and U-Net segmentation models were combined with radiomic feature extraction and machine learning classifiers to improve segmentation precision and classification accuracy in mammographic images.
Materials and Methods: A multi-center dataset of 8000 mammograms (4200 normal, 3800 abnormal) was used. Segmentation was performed using Transformer-based and U-Net models, evaluated through the Dice Coefficient (DSC), Intersection over Union (IoU), Hausdorff Distance (HD95), and pixel-wise accuracy. Radiomic features were extracted from the segmented masks, with Recursive Feature Elimination (RFE) and Analysis of Variance (ANOVA) employed to select significant features. Classifiers including Logistic Regression, XGBoost, CatBoost, and a Stacking Ensemble model were applied to classify tumors as benign or malignant. Classification performance was assessed using accuracy, sensitivity, F1 score, and AUC-ROC. SHAP analysis validated feature importance, and Q-value heatmaps evaluated statistical significance.
Results: The Transformer-based model achieved superior segmentation results, with DSC (0.94 ± 0.01 training, 0.92 ± 0.02 test), IoU (0.91 ± 0.01 training, 0.89 ± 0.02 test), HD95 (3.0 ± 0.3 mm training, 3.3 ± 0.4 mm test), and pixel-wise accuracy (0.96 ± 0.01 training, 0.94 ± 0.02 test), consistently outperforming U-Net across all metrics. For classification, Transformer-segmented features with the Stacking Ensemble achieved the highest test results: 93% accuracy, 92% sensitivity, 93% F1 score, and 95% AUC. U-Net-segmented features yielded lower metrics, with a best test accuracy of 84%. SHAP analysis confirmed the importance of features such as Gray-Level Non-Uniformity and Zone Entropy.
Conclusion: This study demonstrates the superiority of Transformer-based segmentation integrated with radiomic feature selection and robust classification models. The framework provides a precise and interpretable solution for breast cancer diagnosis, with potential scalability to 3D imaging and multimodal datasets.
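The DSC and IoU segmentation metrics reported above can be computed directly from binary masks; a minimal numpy sketch (the toy 4 × 4 masks are illustrative, not the study's data):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou(pred, target):
    """Intersection over Union: |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

# Toy masks: 4 predicted pixels, 3 ground-truth pixels, 3 overlapping
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 3))  # 2*3/(4+3) -> 0.857
print(iou(pred, gt))                          # 3/4      -> 0.75
```

Note that Dice and IoU are monotonically related (DSC = 2·IoU/(1+IoU)), which is why the two metrics rank the models identically.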

Citations: 0
PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-08-28 DOI: 10.1177/08953996251367203 Pages: 1081-1092
Tianming Du, Chen Li, Marcin Grzegozek, Xinyu Huang, Md Rahaman, Xinghao Wang, Hongzan Sun

Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy.
Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent 18F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response.
Results: Using Subset-I, the segmentation model achieved optimal performance at the 94th epoch, with an IoU of 0.746 on the validation set; manual evaluation confirmed accurate tumor localization, and sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features, a predictive model for immunotherapy efficacy was constructed and evaluated on Subset-III, where the SVM-based radiomic model achieved the best predictive performance with an AUC of 0.935.
Conclusion: We validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions; that texture features extracted from 18F-FDG PET/CT are significantly associated with PD-L1 expression; and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
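The AUC figure quoted for the SVM-based radiomic model can be computed framework-free via the rank (Mann-Whitney) formulation; a minimal sketch with made-up labels and scores (not the study's data):

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive case outranks a randomly chosen negative case."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]                 # hypothetical responder labels
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]     # hypothetical model scores
print(auc_score(y, s))                 # 8 of 9 pos/neg pairs ranked correctly
```

This pairwise definition is exactly the area under the ROC curve, so it matches what a plotting-based implementation would report.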

Citations: 0
Corrigendum to "Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM)".
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-07-20 DOI: 10.1177/08953996251358389 Pages: 1128
Citations: 0
Promptable segmentation of CT lung lesions based on improved U-Net and Segment Anything model (SAM).
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-05-11 DOI: 10.1177/08953996251333364 Pages: 1015-1026
Wensong Yan, Yunhua Xu, Shiju Yan

Background: Computed tomography (CT) is widely used in the clinical diagnosis of lung diseases. Automatic segmentation of lesions in CT images aids the development of intelligent lung disease diagnosis.
Objective: This study aims to address imprecise segmentation in CT images caused by the blurred detail features of lesions, which are easily confused with surrounding tissues.
Methods: We propose a promptable segmentation method based on an improved U-Net and the Segment Anything Model (SAM) to improve the segmentation accuracy of lung lesions in CT images. The improved U-Net incorporates a multi-scale attention module based on the ECA (Efficient Channel Attention) channel attention mechanism, to improve recognition of detailed feature information at lesion edges, and a promptable clipping module that incorporates physicians' prior knowledge into the model to reduce background interference. SAM has a strong ability to recognize lesions, pulmonary atelectasis, and organs; we combine the two to improve overall segmentation performance.
Results: On the LUNA16 dataset and a lung CT dataset provided by the Shanghai Chest Hospital, the proposed method achieves Dice coefficients of 80.12% and 92.06%, and positive predictive values of 81.25% and 91.91%, respectively, outperforming most existing mainstream segmentation methods.
Conclusion: The proposed method can improve the segmentation accuracy of lung lesions in CT images, raise the automation level of existing computer-aided diagnostic systems, and provide more effective assistance to radiologists in clinical practice.
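As a rough illustration of the ECA-style channel gating described above, the following numpy-only sketch applies global average pooling, a local 1-D convolution across channels, and a sigmoid gate; the uniform kernel and feature-map sizes are placeholders, not the paper's trained weights:

```python
import numpy as np

def eca_attention(feat, k=3):
    """ECA-style gating sketch on a (C, H, W) feature map:
    global average pool -> 1-D conv across channels -> sigmoid reweighting."""
    squeeze = feat.mean(axis=(1, 2))                   # (C,) channel descriptor
    kernel = np.ones(k) / k                            # illustrative (untrained) weights
    mixed = np.convolve(squeeze, kernel, mode="same")  # local cross-channel interaction
    gate = 1.0 / (1.0 + np.exp(-mixed))                # sigmoid gate per channel
    return feat * gate[:, None, None]                  # rescale each channel

feat = np.random.default_rng(0).random((8, 16, 16))    # hypothetical feature map
out = eca_attention(feat)
print(out.shape)  # (8, 16, 16): same shape, channels reweighted
```

The point of ECA is that the 1-D convolution captures local cross-channel interaction with only k weights, instead of the two fully connected layers used in SE-style attention.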

Citations: 0
Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-07-28 DOI: 10.1177/08953996251361041 Pages: 1027-1038
Hong Kyu Lee, Hyoung Soo Kim, Sung Gyun Kim, Jae Yong Park

Objective: Detecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs.
Methods: The proposed method combines CenterNet with HRNet v2 for precise fracture region identification, and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset of 1006 chest radiographs from a tertiary hospital in Korea was used, split 7:2:1 for training, validation, and testing.
Results: The rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Rib segmentation achieved a Dice score of 0.86, demonstrating accurate delineation of rib structures. Visual assessment further highlighted the model's ability to pinpoint fractures and segment ribs accurately.
Conclusion: This approach holds promise for improving rib fracture detection and rib segmentation, offering potential clinical benefits for more efficient and accurate diagnosis in medical image analysis.
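Detection sensitivity of the kind reported above is typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold; a minimal sketch with hypothetical boxes (the IoU ≥ 0.5 matching rule is a common convention, not necessarily the authors' exact protocol):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def detection_sensitivity(preds, gts, thr=0.5):
    """Fraction of ground-truth fractures matched by at least one prediction."""
    hits = sum(any(box_iou(p, g) >= thr for p in preds) for g in gts)
    return hits / len(gts)

gts = [(10, 10, 20, 20), (40, 40, 60, 60)]   # two hypothetical fractures
preds = [(11, 11, 21, 21)]                   # one detection, near the first
print(detection_sensitivity(preds, gts))     # 0.5: one of two fractures found
```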

Citations: 0
Anatomy-aware transformer-based model for precise rectal cancer detection and localization in MRI scans.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-08-25 DOI: 10.1177/08953996251370580 Pages: 1059-1070
Shanshan Li, Yu Zhang, Yao Hong, Wei Yuan, Jihong Sun

Rectal cancer is a major cause of cancer-related mortality, requiring accurate diagnosis via MRI scans. However, detecting rectal cancer in MRI scans is challenging due to image complexity and the need for precise localization. While transformer-based object detection has excelled in natural images, applying these models to medical data is hindered by limited medical imaging resources. To address this, we propose the Spatially Prioritized Detection Transformer (SP DETR), which incorporates a Spatially Prioritized (SP) Decoder to constrain anchor boxes to regions of interest (ROI) based on anatomical maps, focusing the model on areas most likely to contain cancer. Additionally, the SP cross-attention mechanism refines the learning of anchor box offsets. To improve small cancer detection, we introduce the Global Context-Guided Feature Fusion Module (GCGFF), leveraging a transformer encoder for global context and a Globally-Guided Semantic Fusion Block (GGSF) to enhance high-level semantic features. Experimental results show that our model significantly improves detection accuracy, especially for small rectal cancers, demonstrating the effectiveness of integrating anatomical priors with transformer-based models for clinical applications.
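The SP Decoder's core idea, constraining anchor boxes to an anatomical region of interest, can be sketched as a simple mask test on anchor centers; this is a simplified reading of the paper, and the mask, boxes, and center-based rule below are illustrative:

```python
import numpy as np

def constrain_anchors_to_roi(anchors, roi_mask):
    """Keep only anchors whose center lies inside the anatomical ROI.
    anchors: (N, 4) array of (x1, y1, x2, y2); roi_mask: (H, W) binary map."""
    cx = ((anchors[:, 0] + anchors[:, 2]) / 2).astype(int)
    cy = ((anchors[:, 1] + anchors[:, 3]) / 2).astype(int)
    keep = roi_mask[cy, cx].astype(bool)   # index mask as (row, col) = (y, x)
    return anchors[keep]

roi = np.zeros((32, 32), dtype=int)
roi[8:24, 8:24] = 1                        # hypothetical ROI from an anatomical map
anchors = np.array([[10, 10, 14, 14],      # center (12, 12) -> inside ROI
                    [0, 0, 4, 4]])         # center (2, 2)   -> outside ROI
print(constrain_anchors_to_roi(anchors, roi))
```

Restricting the anchor set this way focuses the decoder's queries (and its cross-attention budget) on regions that can plausibly contain tumor, which is the motivation the abstract gives.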

Citations: 0
A novel scatter correction method for energy-resolving photon-counting detector based CBCT imaging.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-09-25 DOI: 10.1177/08953996251351618 Pages: 1093-1103
Xin Zhang, Heran Wang, Yuhang Tan, Jiongtao Zhu, Hairong Zheng, Dong Liang, Yongshuai Ge

Background: To generate high-quality CT images with an energy-resolving photon-counting detector (PCD) based cone-beam CT (CBCT) system, it is essential to mitigate scatter shading artifacts.
Objective: To explore the capability of an energy-modulated scatter correction method, named e-Grid, to remove scatter shading artifacts in energy-resolving PCD CBCT imaging.
Methods: In the e-Grid method, a linear approximation is assumed between the high-energy primary/scatter signals and the low-energy primary/scatter signals acquired from the two energy windows of a PCD. Calibration experiments were conducted to determine the parameters of this signal model, and physical validation experiments with head and abdominal phantoms were performed on a PCD CBCT imaging benchtop system.
Results: The e-Grid method significantly reduced scatter cupping artifacts in both low-energy and high-energy PCD CBCT imaging for objects of varying dimensions; quantitatively, it reduced scatter artifacts by more than 70% in both low-energy and high-energy PCD CBCT images.
Conclusions: The e-Grid scatter correction method shows great potential for reducing scatter shading artifacts in energy-resolving PCD CBCT imaging.
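The stated linear assumption between the two energy windows implies a 2 × 2 system that separates primary from scatter once the calibration ratios are known. The toy sketch below is an illustrative reading of that assumption, not the published e-Grid algorithm; the ratios a and b are hypothetical calibration outputs:

```python
# Hypothetical calibrated high-/low-energy ratios (illustrative values only)
a = 0.8   # primary ratio:  P_H ≈ a * P_L
b = 0.5   # scatter ratio:  S_H ≈ b * S_L

def split_primary_scatter(t_low, t_high):
    """Recover low-energy primary/scatter from the two window totals,
    assuming T_L = P_L + S_L and T_H = a*P_L + b*S_L (2x2 linear system)."""
    p_low = (t_high - b * t_low) / (a - b)
    s_low = t_low - p_low
    return p_low, s_low

# Synthetic round trip: build totals from known components, then invert
p_true, s_true = 100.0, 30.0
t_low = p_true + s_true            # 130.0
t_high = a * p_true + b * s_true   # 95.0
print(split_primary_scatter(t_low, t_high))  # recovers (100.0, 30.0)
```

The inversion is well conditioned only when a and b differ appreciably, i.e., when primary and scatter respond differently to the energy-window split, which is the physical premise of an energy-modulated correction.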

Citations: 0
Generalization of parallel ghost imaging based on laboratory X-ray source.
IF 1.4 Tier 3 (Medicine) Q3 INSTRUMENTS & INSTRUMENTATION Pub Date: 2025-11-01 Epub Date: 2025-08-25 DOI: 10.1177/08953996251367214
Nixi Zhao, Junxiong Fang, Jie Tang, Changzhe Zhao, Jianwen Wu, Han Guo, Haipeng Zhang, Tiqiao Xiao

Ghost imaging is an imaging technique that reconstructs an image by measuring the intensity correlation function between the reference arm and the object arm. In parallel ghost imaging, each pixel of a position-sensitive detector is treated as an independent bucket detector, so a single measurement acquires hundreds or thousands of ghost-imaging subsystems in parallel, realizing high-resolution imaging with extremely low measurement counts. Relying on synchrotron radiation, we previously achieved X-ray parallel ghost imaging with high pixel resolution, low dose, and an ultra-large field of view. However, this dependence on synchrotron radiation sets an extremely high threshold for the dissemination and application of the technique. In this work, we moved away from the synchrotron radiation facility and completed pipeline-style acquisition for parallel ghost imaging using simple, inexpensive equipment, in a way that is readily reproducible by others. Using a laboratory X-ray source, we achieved ghost imaging with an effective pixel size of 8.03 μm, an image size of 2880 × 2280, and as few as 10 measurements (a sampling rate of 0.62%). The setup can be realized with only minor modifications to any industrial CT device, at a total experimental cost of only $40, demonstrating great universality. We put forward a comprehensive framework for the practical application of parallel ghost imaging, an essential prerequisite for bringing the technique into commercial and practical arenas.
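The intensity-correlation reconstruction underlying ghost imaging fits in a few lines. The sketch below is a generic second-order correlation, G = ⟨B·I⟩ − ⟨B⟩⟨I⟩, run on simulated random illumination patterns; it is an illustration under simplified assumptions, not the paper's parallel scheme, which additionally treats every detector pixel as its own bucket.

```python
import numpy as np

rng = np.random.default_rng(0)

def ghost_reconstruct(patterns, buckets):
    """Second-order correlation reconstruction: G = <B*I> - <B><I>,
    averaged over all measurements."""
    patterns = np.asarray(patterns, dtype=float)   # shape (n, h, w)
    buckets = np.asarray(buckets, dtype=float)     # shape (n,)
    corr = np.tensordot(buckets, patterns, axes=1) / len(buckets)
    return corr - buckets.mean() * patterns.mean(axis=0)

# Toy object and random illumination patterns.
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0
patterns = rng.random((2000, 8, 8))
buckets = (patterns * obj).sum(axis=(1, 2))  # bucket = total transmitted intensity
g = ghost_reconstruct(patterns, buckets)
```

With enough random patterns, G converges to the object's transmission map scaled by the pattern variance; the object region stands out clearly against the background even at modest measurement counts.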

{"title":"Generalization of parallel ghost imaging based on laboratory X-ray source.","authors":"Nixi Zhao, Junxiong Fang, Jie Tang, Changzhe Zhao, Jianwen Wu, Han Guo, Haipeng Zhang, Tiqiao Xiao","doi":"10.1177/08953996251367214","DOIUrl":"10.1177/08953996251367214","url":null,"abstract":"<p><p>Ghost imaging is an imaging technique that achieves image reconstruction by measuring the intensity correlation function between the reference arm and the object arm. In parallel ghost imaging, each pixel of a position-sensitive detector is further regarded as a bucket detector, enabling the parallel acquisition of hundreds or thousands of ghost imaging subsystems in a single measurement, thus realizing high-resolution imaging with extremely low measurement counts. Relying on synchrotron radiation, we have achieved X-ray parallel ghost imaging with high pixel resolution, low dose, and ultra-large field of view. However, the dependence of X-ray parallel ghost imaging on synchrotron radiation has set extremely high thresholds for the dissemination and application of this technology. In this work, we broke away from synchrotron radiation facility and completed the pipeline-style acquisition of parallel ghost imaging using rough and inexpensive equipment in the most reproducible way for others. Eventually, we achieved ghost imaging with an effective pixel size of 8.03 μm, an image size of 2880 × 2280, and a minimum of 10 measurement numbers (a sampling rate of 0.62%) using a laboratory X-ray light source. It can be achieved merely by making minor modifications to any industrial CT device. With a total experimental cost of only $40, this work demonstrates great universality. 
We have put forward a comprehensive framework for the practical application of parallel ghost imaging, which is an essential prerequisite for the generalization of parallel ghost imaging to enter the commercial and practical arenas.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1071-1080"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Adversarial consistency-based semi-supervised pneumonia segmentation using dual multiscale feature selection and fusion mean teacher model and triple-attention dynamic convolution in chest CTs.
IF 1.4, CAS Tier 3 (Medicine), Q3 INSTRUMENTS & INSTRUMENTATION. Pub Date: 2025-11-01. Epub Date: 2025-09-15. DOI: 10.1177/08953996251367210
Yu Gu, Jianning Zang, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Ying Zhao, Dahua Yu, Siyuan Tang, Qun He

Recently, semi-supervised learning has shown significant potential in medical image segmentation. However, most existing methods fail to establish connections among diverse sample data. Moreover, segmentation networks that rely on fixed parameters can impede model training and even amplify the risk of overfitting. To address these challenges, this paper proposes an adversarial consistency-based semi-supervised segmentation method built on a dual multiscale mean teacher model. First, by designing a discriminator network with adaptive feature selection and training it alternately with the segmentation network, the method enhances the segmentation network's ability to transfer knowledge from the limited labeled data to the unlabeled data. The discriminator evaluates the quality of the segmentation results for both labeled and unlabeled data, while guiding the network to learn consistent segmentation performance throughout training. Second, we design a triple-attention dynamic convolution (TADC) module, which allows the convolution kernel parameters to be adjusted flexibly according to the input data; this improves the feature representation capability of the network and helps reduce the risk of overfitting. Finally, we propose a novel feature selection and fusion module (FSFM) within the segmentation network, which dynamically selects and integrates important features to enhance the saliency of key information and improve overall model performance. Applied to the MosMedData dataset, the segmentation network outperforms the baseline model, improving Dice, Jaccard, and NSD scores for pneumonia lesion segmentation by 3.83%, 3.97%, and 3.14%, respectively. The proposed method also outperforms state-of-the-art segmentation networks and demonstrates superior potential for segmenting pneumonia lesions, as evidenced by extensive experiments on the MosMedData and COVID-19-P20 datasets.
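The mean-teacher backbone of such semi-supervised setups is compact: the teacher is an exponential moving average (EMA) of the student's weights, and unlabeled data contributes a consistency penalty between the two models' predictions. The sketch below illustrates just those two pieces on plain NumPy arrays; the parameter shapes, decay value, and mean-squared consistency term are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    """Mean-teacher update: teacher <- decay*teacher + (1-decay)*student,
    applied to every parameter array (no gradients flow to the teacher)."""
    return {k: decay * teacher[k] + (1.0 - decay) * student[k] for k in teacher}

def consistency_loss(p_student, p_teacher):
    """Mean squared difference between student and teacher predictions on
    the same unlabeled input (one common choice of consistency term)."""
    return float(np.mean((np.asarray(p_student) - np.asarray(p_teacher)) ** 2))

# Toy parameters: after one update the teacher moves 1% toward the student.
teacher = {"w": np.zeros(3)}
student = {"w": np.ones(3)}
teacher = ema_update(teacher, student, decay=0.99)
```

In training, `ema_update` runs after every student optimizer step, and `consistency_loss` on unlabeled batches is added to the supervised loss; the adversarial discriminator described above would supply a further loss term on top of this skeleton.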

{"title":"Adversarial consistency-based semi-supervised pneumonia segmentation using dual multiscale feature selection and fusion mean teacher model and triple-attention dynamic convolution in chest CTs.","authors":"Yu Gu, Jianning Zang, Lidong Yang, Baohua Zhang, Jing Wang, Xiaoqi Lu, Jianjun Li, Xin Liu, Ying Zhao, Dahua Yu, Siyuan Tang, Qun He","doi":"10.1177/08953996251367210","DOIUrl":"10.1177/08953996251367210","url":null,"abstract":"<p><p>Recently, semi-supervised learning has demonstrated significant potential in the field of medical image segmentation. However, the majority of the methods fail to establish connections among diverse sample data. Moreover, segmentation networks that utilize fixed parameters can impede model training and even amplify the risk of overfitting. To address these challenges, this paper proposes an adversarial consistency-based semi-supervised segmentation method, leveraging a dual multiscale mean teacher model. First, by designing a discriminator network with adaptive feature selection and training it alternately with the segmentation network, the method enhances the segmentation network's ability to transfer knowledge from the limited labeled data to the unlabeled data. The discriminator evaluates the quality of the segmentation network's results for both labeled and unlabeled data, while simultaneously guiding the network to learn consistency in segmentation performance throughout the training process. Second, we design a Triple-attention dynamic convolutional (TADC) module, which allows the convolution kernel parameters to be adjusted flexibly according to different input data. This improves the feature representation capability of the network model and helps reduce the risk of overfitting. 
Finally, we propose a novel feature selection and fusion module (FSFM) within the segmentation network, which dynamically selects and integrates important features to enhance the saliency of key information, improving the overall performance of the model. The proposed adversarial consistency-based semi-supervised segmentation method is applied to the MosMedData dataset. The results demonstrate that the segmentation network outperforms the baseline model, achieving improvements of 3.83%, 3.97%, 3.14% in terms of Dice, Jaccard, and NSD scores, respectively, for the segmentation of pneumonia lesions. The proposed segmentation method outperforms state-of-the-art segmentation networks and demonstrates superior potential for segmenting pneumonia lesions, as evidenced by extensive experiments conducted on the MosMedData and COVID-19-P20 datasets.</p>","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"1104-1127"},"PeriodicalIF":1.4,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Retraction: Investigations on coronary artery plaque detection and subclassification using machine learning classifier.
IF 1.4, CAS Tier 3 (Medicine), Q3 INSTRUMENTS & INSTRUMENTATION. Pub Date: 2025-10-21. DOI: 10.1177/08953996251386435
{"title":"Retraction: Investigations on coronary artery plaque detection and subclassification using machine learning classifier.","authors":"","doi":"10.1177/08953996251386435","DOIUrl":"10.1177/08953996251386435","url":null,"abstract":"","PeriodicalId":49948,"journal":{"name":"Journal of X-Ray Science and Technology","volume":" ","pages":"8953996251386435"},"PeriodicalIF":1.4,"publicationDate":"2025-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145349603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Journal: Journal of X-Ray Science and Technology