
Journal of Medical Imaging: Latest Publications

Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-11 DOI: 10.1117/1.JMI.12.S2.S22002
Grey Kuling, Jennifer D Brooks, Belinda Curpen, Ellen Warner, Anne L Martel

Purpose: Breast density (BD) and background parenchymal enhancement (BPE) are important imaging biomarkers for breast cancer (BC) risk. We aim to evaluate longitudinal changes in quantitative BD and BPE in high-risk women undergoing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), focusing on the effects of age and transition into menopause.

Approach: A retrospective cohort study analyzed 834 high-risk women undergoing breast DCE-MRI for screening between 2005 and 2020. Quantitative BD and BPE were derived using deep-learning segmentation. Linear mixed-effects models assessed longitudinal changes and the effects of age, menopausal status, weeks since the last menstrual period (LMP-wks), body mass index (BMI), and hormone replacement therapy (HRT) on these imaging biomarkers.
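The two quantitative biomarkers can be made concrete with a toy computation. Below is a minimal numpy sketch, assuming BD is the fibroglandular fraction of the segmented breast volume and BPE is the mean relative enhancement within the fibroglandular mask; the function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quantitative_bd_bpe(breast_mask, fgt_mask, pre, post):
    """Toy illustration of the two imaging biomarkers.

    breast_mask, fgt_mask : boolean arrays (whole breast, fibroglandular tissue)
    pre, post             : pre-/post-contrast DCE-MRI intensities
    The exact formulas are illustrative assumptions, not the paper's code.
    """
    # Breast density: fraction of breast volume occupied by FGT
    bd = fgt_mask.sum() / breast_mask.sum()
    # BPE: mean relative signal enhancement within the FGT
    rel_enh = (post[fgt_mask] - pre[fgt_mask]) / pre[fgt_mask]
    bpe = rel_enh.mean()
    return bd, bpe

# Synthetic 3-D example
breast = np.ones((4, 8, 8), dtype=bool)
fgt = np.zeros_like(breast)
fgt[:, :4, :] = True                     # half the breast volume is FGT
pre = np.full(breast.shape, 100.0)
post = np.where(fgt, 150.0, 105.0)       # FGT enhances by 50%
bd, bpe = quantitative_bd_bpe(breast, fgt, pre, post)
print(round(bd, 2), round(bpe, 2))       # 0.5 0.5
```

In the study these per-exam values would then enter the mixed-effects models as repeated measures per woman.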

Results: BD decreased with age across all menopausal stages, whereas BPE declined with age in postmenopausal women but remained stable in premenopausal women. HRT elevated BPE in postmenopausal women. Perimenopausal women exhibited decreases in both BD and BPE during the menopausal transition, though cross-sectional age at menopause had no significant effect on either measure. Fibroglandular tissue was positively associated with BPE in perimenopausal women.

Conclusions: These findings highlight the dynamic impact of menopause on BD and BPE and accord with the known relationship between BC risk and age at menopause. They advance the understanding of imaging biomarkers in high-risk populations and may contribute to improved risk assessment, leading to personalized chemoprevention and BC screening recommendations.

{"title":"Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.","authors":"Grey Kuling, Jennifer D Brooks, Belinda Curpen, Ellen Warner, Anne L Martel","doi":"10.1117/1.JMI.12.S2.S22002","DOIUrl":"10.1117/1.JMI.12.S2.S22002","url":null,"abstract":"<p><strong>Purpose: </strong>Breast density (BD) and background parenchymal enhancement (BPE) are important imaging biomarkers for breast cancer (BC) risk. We aim to evaluate longitudinal changes in quantitative BD and BPE in high-risk women undergoing dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), focusing on the effects of age and transition into menopause.</p><p><strong>Approach: </strong>A retrospective cohort study analyzed 834 high-risk women undergoing breast DCE-MRI for screening between 2005 and 2020. Quantitative BD and BPE were derived using deep-learning segmentation. Linear mixed-effects models assessed longitudinal changes and the effects of age, menopausal status, weeks since the last menstrual period (LMP-wks), body mass index (BMI), and hormone replacement therapy (HRT) on these imaging biomarkers.</p><p><strong>Results: </strong>BD decreased with age across all menopausal stages, whereas BPE declined with age in postmenopausal women but remained stable in premenopausal women. HRT elevated BPE in postmenopausal women. Perimenopausal women exhibited decreases in both BD and BPE during the menopausal transition, though cross-sectional age at menopause had no significant effect on either measure. Fibroglandular tissue was positively associated with BPE in perimenopausal women.</p><p><strong>Conclusions: </strong>We highlight the dynamic impact of menopause on BD and BPE and correlate well with the known relationship between risk and age at menopause. 
These findings advance the understanding of imaging biomarkers in high-risk populations and may contribute to the development of improved risk assessment leading to personalized chemoprevention and BC screening recommendations.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 2","pages":"S22002"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11894108/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143617600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-supervised semantic segmentation of cell nuclei with diffusion model and collaborative learning.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.6.061403
Zhuchen Shao, Sourya Sengupta, Mark A Anastasio, Hua Li

Purpose: Automated segmentation and classification of the cell nuclei in microscopic images is crucial for disease diagnosis and tissue microenvironment analysis. Given the difficulties in acquiring large labeled datasets for supervised learning, semi-supervised methods offer alternatives by utilizing unlabeled data alongside labeled data. Effective semi-supervised methods to address the challenges of extremely limited labeled data or diverse datasets with varying numbers and types of annotations remain under-explored.

Approach: Unlike other semi-supervised learning methods that iteratively use labeled and unlabeled data for model training, we introduce a semi-supervised learning framework that combines a latent diffusion model (LDM) with a transformer-based decoder, allowing independent usage of unlabeled data to optimize its contribution to model training. The model is trained with a sequential strategy: the LDM is first trained in an unsupervised manner on diverse datasets, independent of cell nuclei types, thereby expanding the training data and enhancing training performance. The pre-trained LDM then serves as a powerful feature extractor to support the transformer-based decoder's supervised training on limited labeled data and to improve final segmentation performance. In addition, the paper explores a collaborative learning strategy to enhance segmentation performance on out-of-distribution (OOD) data.
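One way to picture the collaborative learning step is as an agreement-gated exchange of pseudo-labels between two models on unlabeled images. The numpy sketch below is a hypothetical illustration of that idea; the fusion rule, the `agree_margin` parameter, and the function name are assumptions, not the paper's exact strategy.

```python
import numpy as np

def collaborate(p_dtseg, p_supervised, threshold=0.5, agree_margin=0.2):
    """Fuse two models' probability maps and keep pseudo-labels only where
    the models confidently agree (illustrative rule, not the paper's)."""
    fused = 0.5 * (p_dtseg + p_supervised)            # average the two predictions
    confident = np.abs(fused - threshold) > agree_margin
    agree = (p_dtseg > threshold) == (p_supervised > threshold)
    keep = confident & agree                          # mask of trustworthy pixels
    pseudo = (fused > threshold).astype(np.uint8)     # fused pseudo-label map
    return pseudo, keep

# Tiny 2x2 "probability maps" from the two models
p1 = np.array([[0.9, 0.1], [0.6, 0.2]])
p2 = np.array([[0.8, 0.2], [0.4, 0.1]])
pseudo, keep = collaborate(p1, p2)
print(pseudo)   # pseudo-labels where keep is True would supervise the next round
print(keep)
```

Pixels where the two models disagree, or where the fused score is near the decision boundary, are simply excluded from the pseudo-label loss.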

Results: Extensive experiments conducted on four diverse datasets demonstrated that the proposed framework significantly outperformed other semi-supervised and supervised methods for both in-distribution and OOD cases. Through collaborative learning with supervised methods, diffusion model and transformer decoder-based segmentation (DTSeg) achieved consistent performance across varying cell types and different amounts of labeled data.

Conclusions: The proposed DTSeg framework addresses cell nuclei segmentation under limited labeled data by integrating unsupervised LDM training on diverse unlabeled datasets. Collaborative learning demonstrated effectiveness in enhancing the generalization capability of DTSeg to achieve superior results across diverse datasets and cases. Furthermore, the method supports multi-channel inputs and demonstrates strong generalization to both in-distribution and OOD scenarios.

{"title":"Semi-supervised semantic segmentation of cell nuclei with diffusion model and collaborative learning.","authors":"Zhuchen Shao, Sourya Sengupta, Mark A Anastasio, Hua Li","doi":"10.1117/1.JMI.12.6.061403","DOIUrl":"10.1117/1.JMI.12.6.061403","url":null,"abstract":"<p><strong>Purpose: </strong>Automated segmentation and classification of the cell nuclei in microscopic images is crucial for disease diagnosis and tissue microenvironment analysis. Given the difficulties in acquiring large labeled datasets for supervised learning, semi-supervised methods offer alternatives by utilizing unlabeled data alongside labeled data. Effective semi-supervised methods to address the challenges of extremely limited labeled data or diverse datasets with varying numbers and types of annotations remain under-explored.</p><p><strong>Approach: </strong>Unlike other semi-supervised learning methods that iteratively use labeled and unlabeled data for model training, we introduce a semi-supervised learning framework that combines a latent diffusion model (LDM) with a transformer-based decoder, allowing for independent usage of unlabeled data to optimize their contribution to model training. The model is trained based on a sequential training strategy. LDM is trained in an unsupervised manner on diverse datasets, independent of cell nuclei types, thereby expanding the training data and enhancing training performance. The pre-trained LDM serves as a powerful feature extractor to support the transformer-based decoder's supervised training on limited labeled data and improve final segmentation performance. In addition, the paper explores a collaborative learning strategy to enhance segmentation performance on out-of-distribution (OOD) data.</p><p><strong>Results: </strong>Extensive experiments conducted on four diverse datasets demonstrated that the proposed framework significantly outperformed other semi-supervised and supervised methods for both in-distribution and OOD cases. 
Through collaborative learning with supervised methods, diffusion model and transformer decoder-based segmentation (DTSeg) achieved consistent performance across varying cell types and different amounts of labeled data.</p><p><strong>Conclusions: </strong>The proposed DTSeg framework addresses cell nuclei segmentation under limited labeled data by integrating unsupervised LDM training on diverse unlabeled datasets. Collaborative learning demonstrated effectiveness in enhancing the generalization capability of DTSeg to achieve superior results across diverse datasets and cases. Furthermore, the method supports multi-channel inputs and demonstrates strong generalization to both in-distribution and OOD scenarios.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061403"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11924957/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-12 DOI: 10.1117/1.JMI.12.6.061402
Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong

Purpose: Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends highly on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.

Approach: The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the "hidden class," and to remove them via iterative application of contrastive loss and label smoothing. Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).
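The "hidden class" idea can be illustrated with a small numpy sketch: score each patch by how much closer its feature vector lies to one subtype centroid than to the others, and drop the low-scoring, subtype-ambiguous patches before MIL training. The centroid-distance scoring used here is an assumed stand-in for the paper's contrastive-loss-and-label-smoothing procedure.

```python
import numpy as np

def remove_hidden_patches(feats, labels, keep_ratio=0.5):
    """Keep only patches that are discriminative for some subtype.

    Patches close to *every* subtype centroid carry no subtype information
    ("hidden class") and are dropped. This scoring rule is an illustrative
    assumption, not the paper's contrastive procedure.
    """
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # distance of each patch to each subtype centroid
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    # discriminative patches are much closer to one centroid than the others
    score = d.max(axis=1) - d.min(axis=1)
    order = np.argsort(-score)
    keep = order[: int(len(feats) * keep_ratio)]
    return np.sort(keep)

# Four subtype-specific patches (near x=0 and x=10) and two ambiguous ones (near x=5)
feats = np.array([[0.0, 0], [0.2, 0], [10, 0], [9.8, 0], [5, 0], [5.2, 0]])
labels = np.array([0, 0, 1, 1, 0, 1])
keep = remove_hidden_patches(feats, labels, keep_ratio=0.7)
print(keep)  # the two mid-point "hidden" patches (indices 4 and 5) are dropped
```

The surviving patches would then form the bags for the MIL classifier described above.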

Results: Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by ∼17%, 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.

Conclusions: The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.

{"title":"HID-CON: weakly supervised intrahepatic cholangiocarcinoma subtype classification of whole slide images using contrastive hidden class detection.","authors":"Jing Wei Tan, Kyoungbun Lee, Won-Ki Jeong","doi":"10.1117/1.JMI.12.6.061402","DOIUrl":"10.1117/1.JMI.12.6.061402","url":null,"abstract":"<p><strong>Purpose: </strong>Biliary tract cancer, also known as intrahepatic cholangiocarcinoma (IHCC), is a rare disease that shows no clear symptoms during its early stage, but its prognosis depends highly on the cancer subtype. Hence, an accurate cancer subtype classification model is necessary to provide better treatment plans to patients and to reduce mortality. However, annotating histopathology images at the pixel or patch level is time-consuming and labor-intensive for giga-pixel whole slide images. To address this problem, we propose a weakly supervised method for classifying IHCC subtypes using only image-level labels.</p><p><strong>Approach: </strong>The core idea of the proposed method is to detect regions (i.e., subimages or patches) commonly included in all subtypes, which we name the \"hidden class,\" and to remove them via iterative application of contrastive loss and label smoothing. 
Doing so will enable us to obtain only patches that faithfully represent each subtype, which are then used to train the image-level classification model by multiple instance learning (MIL).</p><p><strong>Results: </strong>Our method outperforms the state-of-the-art weakly supervised learning methods ABMIL, TransMIL, and DTFD-MIL by <math><mrow><mo>∼</mo> <mn>17</mn> <mo>%</mo></mrow> </math> , 18%, and 8%, respectively, and achieves performance comparable to that of supervised methods.</p><p><strong>Conclusions: </strong>The introduction of a hidden class to represent patches commonly found across all subtypes enhances the accuracy of IHCC classification and addresses the weak labeling problem in histopathology images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 6","pages":"061402"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11898109/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143626473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Frequency-based texture analysis of non-Gaussian properties of digital breast tomosynthesis images and comparison across two vendors.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-20 DOI: 10.1117/1.JMI.12.S2.S22004
Kai Yang, Craig K Abbey, Bruno Barufaldi, Xinhua Li, Theodore A Marschall, Bob Liu

Purpose: We aim to analyze higher-order textural components of digital breast tomosynthesis (DBT) images to quantify differences in the appearance of breast parenchyma produced by different vendors.

Approach: We included consecutive women who had normal screening DBT exams in January 2018 from a GE system and in adjacent years from Hologic systems. Laplacian fractional entropy (LFE), a measure of the non-Gaussian statistical properties of breast tissue texture, was calculated from for-presentation craniocaudal (CC) view DBT slices and synthetic mammograms (SMs) through frequency-based filtering with Gabor filters, which were considered mathematical models for the human visual response to image textures. The LFE values were compared within and across subjects and vendors, along with secondary parameters (laterality, year-to-year, modality, and breast density), via two-way analysis of variance (ANOVA) tests using frequency as one of the two independent variables; a P-value < 0.05 was considered statistically significant.
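For readers unfamiliar with the filtering step, the sketch below builds a standard even-symmetric Gabor kernel and measures the non-Gaussianity of synthetic filter responses. Note the hedge: excess kurtosis is used here only as an easy-to-verify stand-in for non-Gaussianity; the LFE statistic itself is a different, histogram-based quantity defined in the authors' prior work, and the kernel parameters are arbitrary.

```python
import numpy as np

def gabor_kernel(size=31, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even-symmetric Gabor kernel (the filter family used for the
    frequency-band decomposition; parameter values here are arbitrary)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def excess_kurtosis(responses):
    """Simple non-Gaussianity statistic of filter responses. The paper's
    LFE is a different (histogram-based) measure; kurtosis is a stand-in."""
    r = responses - responses.mean()
    return (r**4).mean() / (r**2).mean() ** 2 - 3.0

k = gabor_kernel()  # would be convolved with each DBT slice / SM image
rng = np.random.default_rng(1)
gauss = rng.normal(size=100_000)   # Gaussian-texture responses -> ~0
lap = rng.laplace(size=100_000)    # heavy-tailed (Laplacian) responses -> ~3
print(excess_kurtosis(gauss), excess_kurtosis(lap))
```

Heavier-than-Gaussian tails in the filtered responses are exactly the kind of higher-order structure the cross-vendor comparison is probing.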

Results: A total of 8529 CC view DBT slices and SM images from 73 screening exams in 25 women were analyzed. Significant differences in LFE were observed across frequencies (P < 0.001) and across vendors (GE versus Hologic DBT: P < 0.001; GE versus Hologic SM: P < 0.001).

Conclusion: Significant differences in the perceived texture of breast parenchyma between the two DBT vendors were demonstrated via higher-order non-Gaussian statistical properties. This finding extends previously observed differences in anatomical noise power spectra in DBT images, provides quantitative evidence supporting caution in cross-vendor comparative reading, and may facilitate future development of vendor-neutral artificial intelligence algorithms for breast cancer screening.

{"title":"Frequency-based texture analysis of non-Gaussian properties of digital breast tomosynthesis images and comparison across two vendors.","authors":"Kai Yang, Craig K Abbey, Bruno Barufaldi, Xinhua Li, Theodore A Marschall, Bob Liu","doi":"10.1117/1.JMI.12.S2.S22004","DOIUrl":"10.1117/1.JMI.12.S2.S22004","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to analyze higher-order textural components of digital breast tomosynthesis (DBT) images to quantify differences in the appearance of breast parenchyma produced by different vendors.</p><p><strong>Approach: </strong>We included consecutive women who had normal screening DBT exams in January 2018 from a GE system and in adjacent years from Hologic systems. Laplacian fractional entropy (LFE), as a measure of non-Gaussian statistical properties of breast tissue texture, was calculated from for-presentation Craniocaudal (CC) view DBT slices and synthetic mammograms (SMs) through frequency-based filtering with Gabor filters, which were considered mathematical models for human visual response to image textures. The LFE values were compared within and across subjects and vendors along with secondary parameters (laterality, year-to-year, modality, and breast density) via two-way analysis of variance (ANOVA) tests using frequency as one of the two independent variables, and a <math><mrow><mi>P</mi></mrow> </math> -value <math><mrow><mo><</mo> <mn>0.05</mn></mrow> </math> was considered statistically significant.</p><p><strong>Results: </strong>A total of 8529 CC view DBT slices and SM images from 73 screening exams in 25 women were analyzed. 
Significant differences in LFE were observed for different frequencies ( <math><mrow><mi>P</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ) and across vendors (GE versus Hologic DBT: <math><mrow><mi>P</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> , GE versus Hologic SM: <math><mrow><mi>P</mi> <mo><</mo> <mn>0.001</mn></mrow> </math> ).</p><p><strong>Conclusion: </strong>Significant differences in perception of breast parenchyma textures among two DBT vendors were demonstrated via higher-order non-Gaussian statistical properties. This finding extends previously observed differences in anatomical noise power spectra in DBT images and provides quantitative evidence to support caution in across-vendor comparative reading and will be beneficial to facilitate future development of vendor-neutral artificial intelligence algorithms for breast cancer screening.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 2","pages":"S22004"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11925074/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the effect of adding comparisons with prior mammograms to standalone digital breast tomosynthesis screening.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-11-01 Epub Date: 2025-03-24 DOI: 10.1117/1.JMI.12.S2.S22003
Pontus Timberg, Gustav Hellgren, Magnus Dustler, Anders Tingberg

Purpose: The purpose is to retrospectively investigate how the addition of prior and concurrent mammograms affects wide-angle digital breast tomosynthesis (DBT) screening false-positive recall rates, malignancy scoring, and recall agreement.

Approach: A total of 200 cases were selected from the Malmö Breast Tomosynthesis Screening Trial: 150 recalled cases [30 true positives (TPs) and 120 false positives (FPs)] and 50 healthy, non-recalled true-negative (TN) cases. The positive cases were categorized based on being recalled by DBT, digital mammography (DM), or both. Each case had DBT, synthetic mammography (SM), and DM (prior screening round) images. Five radiologists participated in a reading study in which detection, risk of malignancy, and recall were assessed. They read each case twice, once using only DBT and once using DBT together with SM and DM priors.

Results: The results showed a significant reduction in recall rates for all FP categories, as well as for the TN cases, when adding SM and prior DM to DBT. This also resulted in a significant increase in recall agreement for these categories, with more of the negative cases being recalled by few or no readers. These categories were overall rated as appearing more malignant in the DBT reading arm. For the TP categories, there was a significant decrease in recalls for DM-recalled cancers (p = 0.047) but no significant difference for DBT-recalled cancers (p = 0.063) or DBT/DM-recalled cancers (p = 0.208).

Conclusions: Similar to the documented effect of priors in DM screening, we suggest that added two-dimensional priors improve the specificity of DBT screening but may reduce the sensitivity.

{"title":"Investigating the effect of adding comparisons with prior mammograms to standalone digital breast tomosynthesis screening.","authors":"Pontus Timberg, Gustav Hellgren, Magnus Dustler, Anders Tingberg","doi":"10.1117/1.JMI.12.S2.S22003","DOIUrl":"10.1117/1.JMI.12.S2.S22003","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose is to retrospectively investigate how the addition of prior and concurrent mammograms affects wide-angle digital breast tomosynthesis (DBT) screening false-positive recall rates, malignancy scoring, and recall agreement.</p><p><strong>Approach: </strong>A total of 200 cases were selected from the Malmö Breast Tomosynthesis Screening Trial. They consist of 150 recalled cases [30 true positives (TPs), 120 false positives (FPs), and 50 healthy, non-recalled true-negative (TN) cases]. The positive cases were categorized based on being recalled by either DBT, digital mammography (DM), or both. Each case had DBT, synthetic mammography (SM), and DM (prior screening round) images. Five radiologists participated in a reading study where detection, risk of malignancy, and recall were assessed. They read each case twice, once using only DBT and once using DBT together with SM and DM priors.</p><p><strong>Results: </strong>The results showed a significant reduction in recall rates for all FP categories, as well as for the TN cases, when adding SM and prior DM to DBT. This resulted also in a significant increase in recall agreement for these categories, with more of the negative cases being recalled by few or no readers. These categories were overall rated as appearing more malignant in the DBT reading arm. 
For the TP categories, there was a significant decrease in recalls for DM-recalled cancers ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.047</mn></mrow> </math> ), but no significant difference for DBT-recalled cancers ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.063</mn></mrow> </math> ), or DBT/DM-recalled cancers ( <math><mrow><mi>p</mi> <mo>=</mo> <mn>0.208</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>Similar to the documented effect of priors in DM screening, we suggest that added two-dimensional priors improve the specificity of DBT screening but may reduce the sensitivity.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 Suppl 2","pages":"S22003"},"PeriodicalIF":1.9,"publicationDate":"2025-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11931293/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143711591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Breast cancer survivors' perceptual map of breast reconstruction appearance outcomes.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-09-01 Epub Date: 2025-03-19 DOI: 10.1117/1.JMI.12.5.051802
Haoqi Wang, Xiomara T Gonzalez, Gabriela A Renta-López, Mary Catherine Bordes, Michael C Hout, Seung W Choi, Gregory P Reece, Mia K Markey

Purpose: It is often hard for patients to articulate their expectations about breast reconstruction appearance outcomes to their providers. Our overarching goal is to develop a tool to help patients visually express what they expect to look like after reconstruction. We aim to comprehensively understand how breast cancer survivors perceive diverse breast appearance states by mapping them onto a low-dimensional Euclidean space, which simplifies the complex information about perceptual similarity relationships into a more interpretable form.

Approach: We recruited breast cancer survivors and conducted observer experiments to assess the visual similarities among clinical photographs depicting a range of appearances of the torso relevant to breast reconstruction. Then, we developed a perceptual map to illuminate how breast cancer survivors perceive and distinguish among these appearance states.
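A perceptual map of this kind is typically obtained by multidimensional scaling of pairwise similarity judgments. The numpy sketch below implements classical (Torgerson) MDS and verifies, on synthetic points, that the recovered 2-D configuration preserves the pairwise distances; whether the authors used this exact MDS variant is an assumption.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed n items in `dim` dimensions so that
    pairwise Euclidean distances approximate the dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D**2) @ J                    # double-centered Gram matrix
    w, v = np.linalg.eigh(B)                     # eigen-decomposition (ascending)
    idx = np.argsort(w)[::-1][:dim]              # keep the top eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Recover a 2-D configuration from the exact distances of 4 known points
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
X = classical_mds(D)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=2)
print(np.allclose(D, D_hat))  # True
```

With observer data, D would hold the aggregated dissimilarity ratings between photographs, and each row of X becomes one photograph's position on the perceptual map.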

Results: We sampled 100 photographs as stimuli and recruited 34 breast cancer survivors locally. The resulting perceptual map, constructed in two dimensions, offers valuable insights into factors influencing breast cancer survivors' perceptions of breast reconstruction outcomes. Our findings highlight specific aspects, such as the number of nipples, symmetry, ptosis, scars, and breast shape, that emerge as particularly noteworthy for breast cancer survivors.

Conclusions: Analysis of the perceptual map identified factors associated with breast cancer survivors' perceptions of breast appearance states that should be emphasized in the appearance consultation process. The perceptual map could be used to assist patients in visually expressing what they expect to look like. Our study lays the groundwork for evaluating interventions intended to help patients form realistic expectations.

{"title":"Breast cancer survivors' perceptual map of breast reconstruction appearance outcomes.","authors":"Haoqi Wang, Xiomara T Gonzalez, Gabriela A Renta-López, Mary Catherine Bordes, Michael C Hout, Seung W Choi, Gregory P Reece, Mia K Markey","doi":"10.1117/1.JMI.12.5.051802","DOIUrl":"10.1117/1.JMI.12.5.051802","url":null,"abstract":"<p><strong>Purpose: </strong>It is often hard for patients to articulate their expectations about breast reconstruction appearance outcomes to their providers. Our overarching goal is to develop a tool to help patients visually express what they expect to look like after reconstruction. We aim to comprehensively understand how breast cancer survivors perceive diverse breast appearance states by mapping them onto a low-dimensional Euclidean space, which simplifies the complex information about perceptual similarity relationships into a more interpretable form.</p><p><strong>Approach: </strong>We recruited breast cancer survivors and conducted observer experiments to assess the visual similarities among clinical photographs depicting a range of appearances of the torso relevant to breast reconstruction. Then, we developed a perceptual map to illuminate how breast cancer survivors perceive and distinguish among these appearance states.</p><p><strong>Results: </strong>We sampled 100 photographs as stimuli and recruited 34 breast cancer survivors locally. The resulting perceptual map, constructed in two dimensions, offers valuable insights into factors influencing breast cancer survivors' perceptions of breast reconstruction outcomes. 
Our findings highlight specific aspects, such as the number of nipples, symmetry, ptosis, scars, and breast shape, that emerge as particularly noteworthy for breast cancer survivors.</p><p><strong>Conclusions: </strong>Analysis of the perceptual map identified factors associated with breast cancer survivors' perceptions of breast appearance states that should be emphasized in the appearance consultation process. The perceptual map could be used to assist patients in visually expressing what they expect to look like. Our study lays the groundwork for evaluating interventions intended to help patients form realistic expectations.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"12 5","pages":"051802"},"PeriodicalIF":1.9,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11921042/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143671445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAM-MedUS: a foundational model for universal ultrasound image segmentation.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2025-03-01 Epub Date: 2025-02-27 DOI: 10.1117/1.JMI.12.2.027001
Feng Tian, Jintao Zhai, Jinru Gong, Weirui Lei, Shuai Chang, Fangfang Ju, Shengyou Qian, Xiao Zou

Purpose: Segmentation of ultrasound images is crucial for medical diagnosis, monitoring, and research, and although existing methods perform well, they are limited to specific organs, tumors, and imaging devices. Adaptations of the Segment Anything Model (SAM), such as SAM-med2d, are trained on large medical datasets in which ultrasound images make up only a small fraction.

Approach: In this work, we proposed a SAM-MedUS model for generic ultrasound image segmentation that utilizes the latest publicly available ultrasound image dataset to create a diverse dataset containing eight site categories for training and testing. We integrated ConvNext V2 and CM blocks in the encoder for better global context extraction. In addition, a boundary loss function is used to improve the segmentation of fuzzy boundaries and low-contrast ultrasound images.

Results: Experimental results show that SAM-MedUS outperforms recent methods on multiple ultrasound datasets. For easier datasets, such as adult kidney, it achieves 87.93% IoU and 93.58% Dice, whereas for more complex ones, such as infant vein, IoU and Dice reach 62.31% and 78.93%, respectively.
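The two overlap metrics quoted here are standard; for reference, a minimal numpy implementation on binary masks:

```python
import numpy as np

def iou_dice(pred, target):
    """IoU and Dice overlap between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_dice(pred, target)
print(round(iou, 2), round(dice, 2))  # 0.5 0.67
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the reported Dice scores always sit above the corresponding IoU.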

Conclusions: We collected and collated an ultrasound dataset spanning multiple site types to achieve uniform segmentation of ultrasound images. In addition, the auxiliary branches (ConvNext V2 and CM blocks) enhance the model's ability to extract global information, and the boundary loss yields robust performance and strong generalization.
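The IoU and Dice scores reported above can be computed directly from binary segmentation masks. The sketch below is a generic illustration of these two overlap metrics, not the authors' evaluation code:

```python
import numpy as np

def iou_and_dice(pred, gt):
    """Compute IoU and Dice for a pair of binary segmentation masks.

    pred, gt: numpy arrays of identical shape; nonzero voxels are foreground.
    Empty-vs-empty masks are scored as a perfect match (1.0).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is consistent with the Dice values above exceeding the IoU values.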

Citations: 0
Distributed and networked analysis of volumetric image data for remote collaboration of microscopy image analysis.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-03-01 Epub Date: 2025-03-11 DOI: 10.1117/1.JMI.12.2.024001
Alain Chen, Shuo Han, Soonam Lee, Chichen Fu, Changye Yang, Liming Wu, Seth Winfree, Kenneth W Dunn, Paul Salama, Edward J Delp

Purpose: The advancement of high-content optical microscopy has enabled the acquisition of very large three-dimensional (3D) image datasets. The analysis of these image volumes requires more computational resources than a biologist may have access to in typical desktop or laptop computers. This is especially true if machine learning tools are being used for image analysis. With the increased amount of data analysis and computational complexity, there is a need for a more accessible, easy-to-use, and efficient network-based 3D image processing system. The distributed and networked analysis of volumetric image data (DINAVID) system was developed to enable remote analysis of 3D microscopy images for biologists.

Approach: We present an overview of the DINAVID system and compare it to other tools currently available for microscopy image analysis. DINAVID is designed using open-source tools and has two main sub-systems, a computational system for 3D microscopy image processing and analysis and a 3D visualization system.

Results: DINAVID is a network-based system with a simple web interface that allows biologists to upload 3D volumes for analysis and visualization. It supports an access model in which a center hosts image volumes and remote users analyze them, without the remote users needing to manage any computational resources.

Conclusions: The DINAVID system, designed and developed using open-source tools, enables biologists to analyze and visualize 3D microscopy volumes remotely without the need to manage computational resources. DINAVID also provides several image analysis tools, including pre-processing and several segmentation models.

Citations: 0
Using a fully automated, quantitative fissure integrity score extracted from chest CT scans of emphysema patients to predict endobronchial valve response.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-03-01 Epub Date: 2025-03-13 DOI: 10.1117/1.JMI.12.2.024501
Dallas K Tada, Grace H Kim, Jonathan G Goldin, Pangyu Teng, Kalyani Vyapari, Ashley Banola, Fereidoun Abtin, Michael McNitt-Gray, Matthew S Brown

Purpose: We aim to develop and validate a prediction model using a previously developed fully automated quantitative fissure integrity score (FIS) extracted from pre-treatment CT images to identify suitable candidates for endobronchial valve (EBV) treatment.

Approach: We retrospectively collected 96 anonymized pre- and post-treatment chest computed tomography (CT) exams from patients with moderate to severe emphysema who underwent EBV treatment. We used a previously developed fully automated, deep learning-based approach to quantitatively assess the completeness of each fissure, obtaining the FIS for each fissure from each patient's pre-treatment CT exam. The response to EBV treatment was recorded as the amount of targeted lobe volume reduction (TLVR) relative to the target lobe volume prior to treatment, as assessed on the pre- and post-treatment CT scans. EBV placement was considered successful with a TLVR of ≥350 cc. The dataset was split into a training set (N = 58) and a test set (N = 38) to train and validate a logistic regression model using fivefold cross-validation; the extracted FIS of each patient's targeted treatment lobe was the primary CT predictor. Using the training set, receiver operating characteristic (ROC) curve analysis and predictive values were quantified over a range of FIS thresholds to determine an optimal cutoff value distinguishing complete from incomplete fissures, which was then used to evaluate predictive values on the test set cases.

Results: ROC analysis of the training set provided an AUC of 0.83, and the determined FIS threshold was 89.5%. Using this threshold on the test set achieved an accuracy of 81.6%, specificity (Sp) of 90.9%, sensitivity (Sn) of 77.8%, positive predictive value (PPV) of 62.5%, and negative predictive value of 95.5%.

Conclusions: A model using the quantified FIS shows potential as a predictive biomarker for whether a targeted lobe will achieve successful volume reduction from EBV treatment.
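The cutoff-selection step described above (sweeping FIS thresholds on the training set and quantifying sensitivity and specificity at each) can be sketched as follows. This illustration uses Youden's J statistic as the selection criterion and made-up scores; it is a hedged sketch, not the authors' analysis code:

```python
import numpy as np

def best_threshold(scores, labels):
    """Sweep candidate cutoffs on a score (e.g., FIS) and return the one
    maximizing Youden's J = sensitivity + specificity - 1.

    labels: 1 = responder (e.g., TLVR >= 350 cc), 0 = non-responder.
    A higher score is assumed to indicate a more complete fissure.
    """
    best_t, best_j = None, -1.0
    for t in np.unique(scores):          # each observed score is a candidate cutoff
        pred = scores >= t               # predict "responder" above the cutoff
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t, best_j
```

In practice one would compute this on the training folds only, then freeze the chosen cutoff (89.5% in the study) before touching the test set.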

Citations: 0
Bidirectional teaching between lightweight multi-view networks for intestine segmentation from CT volume.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-03-01 Epub Date: 2025-03-31 DOI: 10.1117/1.JMI.12.2.024003
Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Aitaro Takimoto, Akinari Hinoki, Hiroo Uchida, Kojiro Suzuki, Masahiro Oda, Kensaku Mori

Purpose: We present a semi-supervised method for intestine segmentation to assist clinicians in diagnosing intestinal diseases. Accurate segmentation is essential for planning treatments for conditions such as intestinal obstruction. Although fully supervised learning performs well with abundant labeled data, the complexity of the intestine's spatial structure makes labeling time-intensive, resulting in limited labeled data. We propose a 3D segmentation network with a bidirectional teaching strategy to enhance segmentation accuracy using this limited dataset.

Method: The proposed semi-supervised method segments the intestine from computed tomography (CT) volumes using bidirectional teaching, where two backbones with different initial weights are trained simultaneously to generate pseudo-labels and employ unlabeled data, mitigating the challenge of limited labeled data. Intestine segmentation is further complicated by complex spatial features. To address this, we propose a lightweight multi-view symmetric network, which uses small-sized convolutional kernels instead of large ones to reduce parameters and capture multi-scale features from diverse perceptual fields, enhancing learning ability.

Results: We evaluated the proposed method with 59 CT volumes and repeated all experiments five times. Experimental results showed that the average Dice of the proposed method was 80.45%, the average precision was 84.12%, and the average recall was 78.84%.

Conclusions: The proposed method can effectively utilize large-scale unlabeled data with pseudo-labels, which is crucial for reducing the effect of limited labeled data in medical image segmentation. Furthermore, we assign different weights to the pseudo-labels to improve their reliability. The results show that the method achieves competitive performance compared with previous methods.
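A minimal sketch of confidence-weighted pseudo-labeling in the cross-teaching spirit described above: one backbone's predictions supervise the other, with per-voxel weights taken from the teacher's confidence and low-confidence voxels masked out. The function, threshold, and data layout are hypothetical simplifications, not the paper's implementation:

```python
import numpy as np

def weighted_pseudo_label_loss(probs_teacher, probs_student, conf_thresh=0.8):
    """Cross-teaching sketch: the teacher's per-voxel argmax becomes a
    pseudo-label for the student, and each voxel's cross-entropy is weighted
    by the teacher's confidence; voxels below conf_thresh are ignored.

    probs_teacher, probs_student: softmax outputs, shape (num_voxels, num_classes).
    """
    pseudo = np.argmax(probs_teacher, axis=1)   # hard pseudo-labels
    conf = np.max(probs_teacher, axis=1)        # teacher confidence per voxel
    mask = conf >= conf_thresh                  # keep only reliable voxels
    eps = 1e-12
    ce = -np.log(probs_student[np.arange(len(pseudo)), pseudo] + eps)
    if mask.sum() == 0:
        return 0.0
    return float(np.sum(conf * ce * mask) / mask.sum())
```

In a bidirectional setup the same loss would be applied symmetrically, with each backbone acting in turn as teacher for the other on the unlabeled volumes.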

Citations: 0