BMC Medical Imaging: Latest Publications

Desmoplastic Small Round Cell Tumor: a study of CT, MRI, PET/CT multimodal imaging features and their correlations with pathology.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01500-4
Kaiwei Xu, Yi Chen, Wenqi Shen, Fan Liu, Ruoyu Wu, Jiajing Ni, Linwei Wang, Chunqu Chen, Lubin Zhu, Weijian Zhou, Jian Zhang, Changjing Zuo, Jianhua Wang

Purpose: To explore the computed tomography (CT), magnetic resonance imaging (MRI), and fluorodeoxyglucose positron emission tomography (FDG-PET)/CT multimodal imaging characteristics of desmoplastic small round cell tumor (DSRCT) in order to improve diagnostic proficiency for this condition.

Methods: A retrospective analysis was performed on clinical data and multimodal imaging manifestations (CT, MRI, FDG-PET/CT) of eight cases of DSRCT. These findings were systematically compared with pathological results to succinctly summarize imaging features and elucidate their associations with both clinical and pathological characteristics.

Results: All eight cases in this cohort exhibited abdominal-pelvic masses, comprising six solitary masses and two instances of multiple nodules. Except for one case located in the left kidney, the remaining cases lacked a clear organ of origin. On plain images, seven cases exhibited patchy areas of low density within the masses, and four cases showed intratumoral calcification. Post-contrast imaging displayed mild-to-moderate, uneven enhancement; larger masses showed central patchy areas without significant enhancement. In the four MRI examinations, T1-weighted images exhibited uneven low signal intensity, while T2-weighted images demonstrated uneven high signal intensity. Imaging revealed four cases of liver metastasis, four cases of ascites, seven cases of lymph node metastasis, three cases of diffuse peritoneal thickening, and one case of left ureter invasion with obstruction. In the FDG-PET/CT examinations of seven cases, multiple abnormal FDG accumulations were observed in the abdominal cavity, retroperitoneum, pelvis, and liver. One postoperative case revealed a new metastatic focus near the hepatic flexure of the colon. The maximum standardized uptake values (SUVmax) of all lesions ranged from 6.62 to 11.15.
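
For readers unfamiliar with how SUVmax figures such as the 6.62-11.15 range above are obtained, the following is a minimal sketch of the body-weight-normalized SUV computation. The function, the toy voxel values, and the dose/weight numbers are illustrative assumptions, not data from this study.

```python
import numpy as np

def suv_max(activity_bq_per_ml, injected_dose_bq, body_weight_g, lesion_mask):
    """Body-weight-normalized SUVmax over a lesion mask.

    SUV_bw = decay-corrected activity concentration (Bq/mL, ~Bq/g for soft
    tissue) / (injected dose (Bq) / body weight (g)).
    """
    suv = activity_bq_per_ml / (injected_dose_bq / body_weight_g)
    return float(suv[lesion_mask.astype(bool)].max())

# Toy example: a two-voxel "lesion" inside a tiny volume.
vol = np.array([1000.0, 40000.0, 26000.0, 500.0])  # Bq/mL, decay-corrected
mask = np.array([0, 1, 1, 0])
print(suv_max(vol, injected_dose_bq=370e6, body_weight_g=70000.0))  # ~7.57
```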

Conclusions: DSRCT occurs most commonly in young men, and imaging typically shows multiple lesions with no clear organ of origin. Other common findings include intratumoral calcification, liver metastasis, ascites, peritoneal metastasis, and retroperitoneal lymph node enlargement. The combined use of CT, MRI, and FDG-PET/CT can improve the diagnostic accuracy and treatment evaluation of DSRCT. Nevertheless, definitive diagnosis remains contingent upon pathological examination.

MHAGuideNet: a 3D pre-trained guidance model for Alzheimer's Disease diagnosis using 2D multi-planar sMRI images.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01520-0
Yuanbi Nie, Qiushi Cui, Wenyuan Li, Yang Lü, Tianqing Deng

Background: Alzheimer's Disease is a neurodegenerative condition leading to irreversible and progressive brain damage, with possible features such as structural atrophy. Effective precision diagnosis is crucial for slowing disease progression and reducing incidence and morbidity. Traditional computer-aided diagnostic methods using structural MRI data often focus on capturing such features but face challenges: 3D image analysis is prone to overfitting, while 2D slices capture features insufficiently, potentially missing multi-planar information and the complementary nature of features across different orientations.

Methods: The study introduces MHAGuideNet, a classification method incorporating a guidance network utilizing multi-head attention. The model utilizes a pre-trained 3D convolutional neural network to direct the feature extraction of multi-planar 2D slices, specifically targeting the detection of features like structural atrophy. Additionally, a hybrid 2D slice-level network combining 2D CNN and 2D Swin Transformer is employed to capture the interrelations between the atrophy in different brain structures associated with Alzheimer's Disease.
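
The abstract does not spell out how the 3D network's guidance enters the 2D branch, so the following PyTorch sketch shows one plausible reading: cross-attention in which multi-planar 2D slice tokens query pooled 3D feature tokens. The module name, dimensions, and residual wiring are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GuidanceAttention(nn.Module):
    """Hypothetical sketch: tokens from a pre-trained 3D CNN act as keys/values
    that guide 2D multi-planar slice tokens via multi-head cross-attention."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat2d, feat3d):
        # feat2d: (B, N2d, dim) tokens from the 2D slice-level branch
        # feat3d: (B, N3d, dim) tokens pooled from the frozen 3D CNN
        guided, _ = self.attn(query=feat2d, key=feat3d, value=feat3d)
        return self.norm(feat2d + guided)  # residual keeps the 2D stream intact

f2d = torch.randn(2, 3 * 49, 256)  # e.g., 3 planes x 7x7 spatial tokens
f3d = torch.randn(2, 64, 256)      # e.g., 4x4x4 pooled volume tokens
print(GuidanceAttention()(f2d, f3d).shape)  # torch.Size([2, 147, 256])
```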

Results: The proposed MHAGuideNet is tested using two datasets: the ADNI and OASIS datasets. The model achieves an accuracy of 97.58%, specificity of 99.89%, F1 score of 93.98%, and AUC of 99.31% on the ADNI test dataset, demonstrating superior performance in distinguishing between Alzheimer's Disease and cognitively normal subjects. Furthermore, testing on the independent OASIS test dataset yields an accuracy of 96.02%, demonstrating the model's robust performance across different datasets.
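
As a worked reminder of how these classification metrics are computed (specificity, unlike the others, has no direct scikit-learn helper and is derived from the confusion matrix), here is a short, self-contained example on invented predictions; the numbers are illustrative only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])  # 1 = Alzheimer's Disease
y_score = np.array([0.10, 0.30, 0.20, 0.90, 0.80, 0.40, 0.05, 0.95])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", accuracy_score(y_true, y_pred))
print("specificity:", tn / (tn + fp))  # true-negative rate
print("F1 score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
```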

Conclusion: MHAGuideNet shows great promise as an effective tool for the computer-aided diagnosis of Alzheimer's Disease. Guided by information from the 3D pre-trained CNN, its ability to leverage multi-planar information and capture subtle brain changes, including the interrelations between different structural atrophies, underscores its potential for clinical application.

Enhanced Cross-stage-attention U-Net for esophageal target volume segmentation.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01515-x
Xiao Lou, Juan Zhu, Jian Yang, Youzhe Zhu, Huazhong Shu, Baosheng Li

Purpose: The segmentation of the target volume and organs at risk (OAR) is a significant part of radiotherapy. In particular, determining the location and extent of the esophagus in simulated computed tomography images is difficult and time-consuming, primarily due to its complex structure and low contrast with the surrounding tissues. In this study, an Enhanced Cross-stage-attention U-Net was proposed to solve the segmentation problem for the esophageal gross tumor volume (GTV) and clinical tumor volume (CTV) in CT images.

Methods: First, a module based on principal component analysis theory was constructed to pre-extract the features of the input image. Then, a cross-stage feature fusion model was designed to replace the skip concatenation of the original UNet, composed of a Wide Range Attention (WRA) unit, a Small-kernel Local Attention (SLA) unit, and an Inverted Bottleneck (IBN) unit. WRA was employed to capture global attention; its large convolution kernel was further decomposed to simplify the computation. SLA was used to complement WRA with local attention. IBN was constructed to fuse the extracted features, with a global frequency response layer built to redistribute the frequency response of the fused feature maps.
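
The abstract says WRA's large kernel was decomposed to simplify the calculation but does not give the factorization. One common decomposition, used for example in large-kernel-attention designs, splits a large depthwise convolution into a small depthwise convolution, a dilated depthwise convolution, and a pointwise convolution; the PyTorch sketch below illustrates that idea under those assumptions and is not the paper's exact module.

```python
import torch
import torch.nn as nn

class DecomposedLargeKernel(nn.Module):
    """Sketch: approximate a large (~21x21) depthwise kernel with a 5x5
    depthwise conv, a 7x7 depthwise conv at dilation 3, and a 1x1 conv,
    then use the result to gate the input, attention-style."""

    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, 5, padding=2, groups=ch)
        self.dw_dilated = nn.Conv2d(ch, ch, 7, padding=9, dilation=3, groups=ch)
        self.pw = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x

x = torch.randn(1, 32, 64, 64)
print(DecomposedLargeKernel(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```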

Results: The proposed method was compared with relevant published esophageal segmentation methods. The prediction of the proposed network achieved MSD = 2.83 (1.62, 4.76) mm, HD = 11.79 ± 6.02 mm, and DC = 72.45 ± 19.18% for the GTV, and MSD = 5.26 (2.18, 8.82) mm, HD = 16.22 ± 10.01 mm, and DC = 71.06 ± 17.72% for the CTV.
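
For reference, the reported DC and HD-style metrics can be computed from binary masks as follows. This is a standard surface-distance sketch built on SciPy with invented toy masks, not the authors' evaluation code; MSD falls out of the same distances as the mean rather than a percentile.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)  # boundary voxels only

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum())

def surface_distances(pred, gt, spacing=(1.0, 1.0)):
    sp, sg = surface(pred), surface(gt)
    d_to_gt = distance_transform_edt(~sg, sampling=spacing)
    d_to_pred = distance_transform_edt(~sp, sampling=spacing)
    return np.concatenate([d_to_gt[sp], d_to_pred[sg]])  # symmetric

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt = np.zeros((64, 64), bool); gt[22:42, 22:42] = True
d = surface_distances(pred, gt)
print(f"DC={dice(pred, gt):.3f}  MSD={d.mean():.2f}mm  HD95={np.percentile(d, 95):.2f}mm")
```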

Conclusion: Reconstructing the skip concatenation in UNet improved esophageal segmentation performance. The results showed that the proposed network performs better on esophageal GTV and CTV segmentation.

An ultrasound image segmentation method for thyroid nodules based on dual-path attention mechanism-enhanced UNet++.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01521-z
Peizhen Dong, Ronghua Zhang, Jun Li, Changzheng Liu, Wen Liu, Jiale Hu, Yongqiang Yang, Xiang Li

Purpose: This study aims to design an auxiliary segmentation model for thyroid nodules to increase diagnostic accuracy and efficiency, thereby reducing the workload of medical personnel.

Methods: This study proposes a Dual-Path Attention Mechanism (DPAM)-UNet++ model, which can automatically segment thyroid nodules in ultrasound images. Specifically, the model incorporates dual-path attention modules into the skip connections of the UNet++ network to capture global contextual information in feature maps. The model's performance was evaluated using Intersection over Union (IoU), F1 score, accuracy, and related metrics. Additionally, a new integrated loss function was designed for the DPAM-UNet++ network.
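
The dual-path attention module itself is not detailed in the abstract. A CBAM-style sketch with one channel-attention path and one spatial-attention path applied to a skip-connection tensor conveys the general idea; the class name, reduction ratio, and kernel size below are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class DualPathAttention(nn.Module):
    """Hypothetical skip-connection block: re-weight channels, then positions."""

    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(            # channel path: squeeze-excite style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(            # spatial path: per-pixel gate
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, skip):
        out = skip * self.channel(skip)
        return out * self.spatial(out)

x = torch.randn(1, 64, 56, 56)
print(DualPathAttention(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```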

Results: Comparative experiments with classical segmentation models revealed that the DPAM-UNet++ model achieved an IoU of 0.7451, an F1 score of 0.8310, an accuracy of 0.9718, a precision of 0.8443, a recall of 0.8702, an Area Under the Curve (AUC) of 0.9213, and an HD95 of 35.31. Except for the precision metric, the model outperformed the other models on all indicators and produced segmentations closer to the ground truth labels. Additionally, ablation experiments verified the effectiveness and necessity of the dual-path attention mechanism and the integrated loss function.

Conclusion: The segmentation model proposed in this study can effectively capture global contextual information in ultrasound images and accurately identify the locations of nodule areas. The model yields excellent segmentation results, especially for small and multiple nodules. Additionally, the integrated loss function improves the segmentation of nodule edges, enhancing the model's accuracy in segmenting edge details.

Deep superpixel generation and clustering for weakly supervised segmentation of brain tumors in MR images.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01523-x
Jay J Yoo, Khashayar Namdar, Farzad Khalvati

Purpose: Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. We aim to develop a pipeline that can be trained using readily accessible binary image-level classification labels, to effectively segment regions of interest without requiring ground truth annotations.

Methods: This work proposes a deep superpixel generation model and a deep superpixel clustering model trained simultaneously to output weakly supervised brain tumor segmentations. The superpixel generation model's output is selected and clustered by the superpixel clustering model. Additionally, we train a classifier using binary image-level labels (i.e., labels indicating whether an image contains a tumor), which guides training by localizing undersegmented seeds as a loss term. The simultaneous use of superpixel generation and clustering models, together with the guided localization approach, allows the output weakly supervised tumor segmentations to capture contextual information that is propagated to both models during training, resulting in superpixels that specifically contour the tumors. We evaluate the performance of the pipeline using the Dice coefficient and the 95% Hausdorff distance (HD95) and compare it to state-of-the-art baselines: the state-of-the-art weakly supervised segmentation method using both seeds and superpixels (CAM-S) and the Segment Anything Model (SAM).
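
To make the generation-plus-clustering idea concrete, the sketch below pools deep pixel features within each superpixel and softly assigns every superpixel to a cluster, mapping the result back to pixels. The prototype head, temperature, and tensor shapes are hypothetical stand-ins for the paper's learned clustering model, not its actual architecture.

```python
import torch
import torch.nn.functional as F

def cluster_superpixels(pixel_feats, sp_assign, prototypes, temp=0.1):
    """pixel_feats: (C, H, W) deep features; sp_assign: (H, W) superpixel ids;
    prototypes: (n_clusters, C) stand-in for a learned clustering head."""
    C, H, W = pixel_feats.shape
    n_sp = int(sp_assign.max()) + 1
    one_hot = F.one_hot(sp_assign.reshape(-1), n_sp).float()   # (H*W, n_sp)
    sp_feats = pixel_feats.reshape(C, -1) @ one_hot            # sum per superpixel
    sp_feats = sp_feats / one_hot.sum(0).clamp(min=1)          # mean features
    sp_probs = (prototypes @ sp_feats / temp).softmax(dim=0)   # soft clusters
    return (sp_probs @ one_hot.T).reshape(-1, H, W)            # back to pixels

feats = torch.randn(16, 32, 32)
sp = torch.randint(0, 50, (32, 32))
protos = torch.randn(2, 16)  # 2 clusters: tumor vs. background
print(cluster_superpixels(feats, sp, protos).shape)  # torch.Size([2, 32, 32])
```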

Results: We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset and labels indicating the presence of tumors to train and evaluate the pipeline. On an external test cohort from the BraTS 2023 dataset, our method achieved a mean Dice coefficient of 0.745 and a mean HD95 of 20.8, outperforming all baselines, including CAM-S and SAM, which resulted in mean Dice coefficients of 0.646 and 0.641, and mean HD95 of 21.2 and 27.3, respectively.

Conclusion: The proposed combination of deep superpixel generation, deep superpixel clustering, and the incorporation of undersegmented seeds as a loss term improves weakly supervised segmentation.

DKI and 1H-MRS in angiogenesis evaluation of soft tissue sarcomas: a prospective clinical study based on MRI-pathology control method.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01526-8
Wubing Han, Cheng Xin, Zeguo Wang, Fei Wang, Yu Cheng, Xingrong Yang, Yangyun Zhou, Juntong Liu, Wanjiang Yu, Shaowu Wang

Background: Vascular endothelial growth factor (VEGF) and microvessel density (MVD) have been widely employed as angiogenesis indicators in the diagnosis and treatment of soft tissue sarcomas. While diffusion kurtosis imaging (DKI) and proton magnetic resonance spectroscopy (1H-MRS) hold potential for assessing angiogenesis in other tumors, their reliability in correlating with angiogenesis in soft tissue sarcomas remains uncertain and is contingent upon accurately acquiring the region of interest (ROI).

Methods: Twenty-three patients with pathologically confirmed soft tissue sarcomas (STSs) were selected and underwent DKI and 1H-MRS on a 3.0T MRI scanner. The DKI parameters mean diffusivity (MD), mean kurtosis (MK), and kurtosis anisotropy (KA), and the 1H-MRS parameters choline (Cho) and lipid/lactate (LL), were measured by two radiologists. Two pathologists obtained pathological slices using a new sampling method called MRI-pathology control and evaluated VEGF and MVD in the selected regions. Correlations between MRI parameters and angiogenesis markers were assessed by Pearson or Spearman tests.
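
The Pearson and Spearman tests used here are one-liners in SciPy; the snippet below uses invented values purely to show the call pattern (Pearson for linear association, Spearman for rank association).

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative values only -- not the study's measurements.
cho = np.array([1.2, 2.5, 1.8, 3.1, 2.2, 2.9, 1.5])         # choline (Cho) level
mvd = np.array([18.0, 42.0, 30.0, 55.0, 35.0, 48.0, 25.0])  # microvessel density

r_p, p_p = pearsonr(cho, mvd)
r_s, p_s = spearmanr(cho, mvd)
print(f"Pearson r={r_p:.3f} (p={p_p:.3g}); Spearman rho={r_s:.3f} (p={p_s:.3g})")
```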

Results: The DKI parameters MD and KA and the 1H-MRS parameters Cho and LL showed varying degrees of correlation with the expression levels of VEGF and MVD. Among them, Cho exhibited the strongest correlations (r = 0.875, P < 0.001; r = 0.807, P < 0.001).

Conclusion: Based on this preliminary clinical study, DKI and 1H-MRS parameters correlate with angiogenesis markers obtained through the "MRI-pathology control" method.

Preoperative CT-based morphological heterogeneity for predicting survival in patients with colorectal cancer liver metastases after surgical resection: a retrospective study.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01524-w
Qian Xing, Yong Cui, Ming Liu, Xiao-Lei Gu, Xiao-Ting Li, Bao-Cai Xing, Ying-Shi Sun

Objective: To explore the value of preoperative CT-based morphological heterogeneity (MH) for predicting local tumor disease-free survival (LTDFS) and progression-free survival (PFS) in patients with colorectal cancer liver metastases (CRLM).

Methods: The latest CT data of 102 CRLM patients were retrospectively analyzed. The morphological score of each liver metastasis was obtained, and the morphological heterogeneity difference (MHD) was calculated. The receiver operating characteristic (ROC) curve was drawn, and the cutoff value was determined. The Kaplan-Meier method was used to draw survival curves of patients with or without MH, and Cox regression analysis was used to build a model combining MH and clinical characteristics for predicting PFS.
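
Both survival analyses described here map directly onto the lifelines library. The toy data frame below is invented to show the call pattern; the column names (MH, CEA_high) are illustrative stand-ins for the study's variables, and a real analysis needs far more patients.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({  # illustrative toy cohort, not the study's data
    "pfs_months": [9, 6, 12, 4, 15, 7, 5, 20],
    "progressed": [1, 1, 1, 1, 0, 1, 1, 0],  # 0 = censored
    "MH":         [0, 1, 0, 1, 1, 1, 1, 0],
    "CEA_high":   [0, 1, 1, 1, 0, 0, 1, 0],
})

km = KaplanMeierFitter()
for grp, sub in df.groupby("MH"):  # Kaplan-Meier curve per MH group
    km.fit(sub["pfs_months"], sub["progressed"], label=f"MH={grp}")
    print(f"MH={grp}: median PFS =", km.median_survival_time_)

cph = CoxPHFitter()  # multivariable Cox model for PFS
cph.fit(df, duration_col="pfs_months", event_col="progressed")
print(cph.summary[["coef", "exp(coef)", "p"]])
```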

Results: In 78 patients without MH, median PFS was 9.0 months (95% CI: 6.5-11.5), while in 24 patients with MH, median PFS was 6.0 months (95% CI: 4.0-8.1), indicating that MH significantly affected PFS (p = 0.001). MH affected PFS in both the chemotherapy group and the chemotherapy combined with targeted therapy group (p = 0.005, p = 0.043). MH, preoperative carcinoembryonic antigen (CEA), and postoperative chemotherapy were independent predictors of postoperative PFS in patients with CRLM.

Conclusion: Preoperative CT-based MH had good efficacy for predicting LTDFS and PFS of CRLM patients after surgical resection, regardless of preoperative treatment. MH is one of the independent predictors of PFS.

Shape-based disease grading via functional maps and graph convolutional networks with application to Alzheimer's disease.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01513-z
Julius Mayer, Daniel Baum, Felix Ambellan, Christoph von Tycowicz

Shape analysis provides methods for understanding anatomical structures extracted from medical images. However, the underlying notions of shape spaces that are frequently employed come with strict assumptions that prohibit the analysis of incomplete and/or topologically varying shapes. This work aims to alleviate these limitations by adapting the concept of functional maps. Further, we present a graph-based learning approach for morphometric classification of disease states that uses novel shape descriptors based on this concept. We demonstrate the performance of the derived classifier on the open-access ADNI database, differentiating normal controls and subjects with Alzheimer's disease. Notably, the experiments show that our approach can improve over the state of the art in geometric deep learning.
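
For readers new to functional maps: once corresponding descriptor functions are expressed in each shape's truncated Laplace-Beltrami eigenbasis, the functional map is simply the matrix C minimizing ||CA - B||_F, a plain least-squares problem. The NumPy toy below plants a ground-truth map and recovers it; it is a didactic sketch of the concept, not the authors' pipeline, which builds learned shape descriptors on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_desc = 20, 60  # eigenbasis size, number of descriptor functions

# A, B: spectral coefficients of corresponding descriptors on two shapes,
# i.e., descriptors projected onto each shape's first k eigenfunctions.
A = rng.standard_normal((k, n_desc))
C_true = rng.standard_normal((k, k))          # planted ground-truth map
B = C_true @ A + 0.01 * rng.standard_normal((k, n_desc))

# Least-squares functional map: C = argmin_C ||C A - B||_F^2
C = np.linalg.lstsq(A.T, B.T, rcond=None)[0].T
print(np.abs(C - C_true).max())  # small residual -> map recovered
```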

Novel neural network classification of maternal fetal ultrasound planes through optimized feature selection.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01453-8
S Rathika, K Mahendran, H Sudarsan, S Vijay Ananth

Ultrasound (US) imaging is an essential diagnostic technique in prenatal care, enabling enhanced surveillance of fetal growth and development. Standard fetal ultrasonography planes are crucial for evaluating fetal development parameters and detecting abnormalities. Real-time imaging, low cost, non-invasiveness, and accessibility make US imaging indispensable in clinical practice. However, acquiring fetal US planes with correct fetal anatomical features is a difficult and time-consuming task, even for experienced sonographers. Medical imaging using AI shows promise for addressing these challenges. In response, a Deep Learning (DL)-based automated categorization method for maternal-fetal US planes is introduced to enhance detection efficiency and diagnosis accuracy. This paper presents a hybrid optimization technique for feature selection and introduces a novel Radial Basis Function Neural Network (RBFNN) for reliable maternal-fetal US plane classification. A large dataset of maternal-fetal screening US images was collected from publicly available sources and categorized into six groups: the four fetal anatomical planes, the mother's cervix, and an additional category. Feature extraction is performed using the Gray-Level Co-occurrence Matrix (GLCM), and optimization methods, namely Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and a hybrid PSO-GWO (PSOGWO) approach, are utilized to select the most relevant features. The optimized features from each algorithm are then input into both conventional and proposed DL models. Experimental results indicate that the proposed approach surpasses conventional DL models in performance. Furthermore, the proposed model is evaluated against previously published models, showcasing its superior classification accuracy. In conclusion, our proposed approach provides a solid foundation for automating the classification of fetal US planes, leveraging optimization and DL techniques to enhance prenatal diagnosis and care.
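
The GLCM feature-extraction step maps directly onto scikit-image. The snippet below computes a small texture-feature vector from a toy patch; the distances, angles, and property list are illustrative choices, and the PSO/GWO selection stage that would follow is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy 8-bit patch

glcm = graycomatrix(patch, distances=[1, 2],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
features = np.concatenate([
    graycoprops(glcm, prop).ravel()
    for prop in ("contrast", "dissimilarity", "homogeneity", "energy", "correlation")
])
print(features.shape)  # (40,) = 5 properties x 2 distances x 4 angles
```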

Segmentation for mammography classification utilizing deep convolutional neural network.
IF 2.9 | CAS Tier 3 (Medicine) | Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING | Pub Date: 2024-12-18 | DOI: 10.1186/s12880-024-01510-2
Dip Kumar Saha, Tuhin Hossain, Mejdl Safran, Sultan Alfarhood, M F Mridha, Dunren Che

Background: Mammography for the diagnosis of early breast cancer (BC) relies heavily on the identification of breast masses. However, in the early stages, it might be challenging to ascertain whether a breast mass is benign or malignant. Consequently, many deep learning (DL)-based computer-aided diagnosis (CAD) approaches for BC classification have been developed.

Methods: Recently, the transformer model has emerged as a method for overcoming the constraints of convolutional neural networks (CNN). Thus, our primary goal was to determine how well an improved transformer model could distinguish between benign and malignant breast tissues. We drew on the Mendeley data repository's INbreast dataset, which includes benign and malignant breast types. Additionally, the Segment Anything Model (SAM) was used to generate the optimized cutoff for region of interest (ROI) extraction from all mammograms. We implemented a successful architecture modification at the bottom layer of a pyramid transformer (PTr) to identify BC from mammography images.
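
The abstract does not say how SAM's mask is turned into the ROI that feeds the classifier. One simple possibility, sketched below with a hypothetical helper, is cropping the mask's bounding box with a small margin; the margin value and all names are assumptions.

```python
import numpy as np

def roi_from_mask(image, mask, margin=8):
    """Crop the bounding box of a binary mask (e.g., a SAM-predicted
    breast-mass mask) with a margin -- one way to turn a segmentation
    into a classifier-ready ROI."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

img = np.zeros((256, 256), np.float32)
mask = np.zeros((256, 256), bool)
mask[100:140, 90:150] = True
print(roi_from_mask(img, mask).shape)  # (56, 76)
```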

Results: The proposed PTr model, using a transfer learning (TL) approach with a segmentation technique, achieved the best accuracy of 99.96% for binary classification, with an area under the curve (AUC) score of 99.98%. We also compared the performance of the proposed model with another transformer model, the Vision Transformer (ViT), and with the DL models MobileNetV3 and EfficientNetB7.

Conclusions: In this study, a modified transformer model is proposed for BC prediction and mammography image classification using segmentation approaches. Data segmentation techniques accurately identify the regions affected by BC. Finally, the proposed transformer model accurately classified benign and malignant breast tissues, which is vital for radiologists to guide future treatment.
