
Latest publications in Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention

Hard Negative Sample Mining for Whole Slide Image Classification.
Wentao Huang, Xiaoling Hu, Shahira Abousamra, Prateek Prasanna, Chao Chen

Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but they mostly focus on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of the proposed ideas. Our code is available at https://github.com/winston52/HNM-WSI.
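As a rough illustration of the two ideas above, the sketch below (PyTorch; the top-k mining rule, the margin value, and all shapes are assumptions, not the authors' implementation) mines the highest-scoring patches from negative slides and penalizes positive patches that fail to outscore them by a margin.

```python
import torch
import torch.nn.functional as F

def mine_hard_negatives(neg_patch_scores: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Keep the k highest-scoring patches from negative slides (the 'hard' negatives)."""
    k = min(k, neg_patch_scores.numel())
    return torch.topk(neg_patch_scores, k).values

def patch_ranking_loss(pos_scores: torch.Tensor, hard_neg_scores: torch.Tensor,
                       margin: float = 0.5) -> torch.Tensor:
    """Hinge-style ranking: every positive patch should outscore every hard negative by `margin`."""
    # Pairwise score differences, shape (num_pos, num_neg)
    diff = hard_neg_scores.unsqueeze(0) - pos_scores.unsqueeze(1) + margin
    return F.relu(diff).mean()

# Toy usage with random patch scores
pos = torch.rand(16)    # scores of high-confidence positive patches
neg = torch.rand(512)   # scores of all patches from negative slides
loss = patch_ranking_loss(pos, mine_hard_negatives(neg))
```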

{"title":"Hard Negative Sample Mining for Whole Slide Image Classification.","authors":"Wentao Huang, Xiaoling Hu, Shahira Abousamra, Prateek Prasanna, Chao Chen","doi":"10.1007/978-3-031-72083-3_14","DOIUrl":"10.1007/978-3-031-72083-3_14","url":null,"abstract":"<p><p>Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but mostly focusing on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of these proposed ideas. Our codes are available at https://github.com/winston52/HNM-WSI.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"144-154"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12185924/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144487609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts.
Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz

In this paper, we present PRISM, a Promptable and Robust Interactive Segmentation Model, aiming for precise segmentation of 3D medical images. PRISM accepts various visual inputs, including points, boxes, and scribbles as sparse prompts, as well as masks as dense prompts. Specifically, PRISM is designed with four principles to achieve robustness: (1) Iterative learning. The model produces segmentations by using visual prompts from previous iterations to achieve progressive improvement. (2) Confidence learning. PRISM employs multiple segmentation heads per input image, each generating a continuous map and a confidence score to optimize predictions. (3) Corrective learning. Following each segmentation iteration, PRISM employs a shallow corrective refinement network to reassign mislabeled voxels. (4) Hybrid design. PRISM integrates hybrid encoders to better capture both the local and global information. Comprehensive validation of PRISM is conducted using four public datasets for tumor segmentation in the colon, pancreas, liver, and kidney, highlighting challenges caused by anatomical variations and ambiguous boundaries in accurate tumor identification. Compared to state-of-the-art methods, both with and without prompt engineering, PRISM significantly improves performance, achieving results that are close to human levels. The code is publicly available at https://github.com/MedICL-VU/PRISM.
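The iterative and confidence-learning principles lend themselves to a short loop. Below is a minimal PyTorch sketch in which the toy TwoHeadModel, the thresholding rule, and the omission of sparse point/box prompts are all simplifying assumptions, not the released PRISM code.

```python
import torch
import torch.nn as nn

class TwoHeadModel(nn.Module):
    """Toy stand-in: two segmentation heads, each producing a mask and a crude confidence."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList([nn.Conv3d(2, 1, 3, padding=1) for _ in range(2)])

    def forward(self, image, prev_mask):
        x = torch.cat([image, prev_mask], dim=1)                 # previous prediction as dense prompt
        masks = torch.stack([torch.sigmoid(h(x)) for h in self.heads])
        confidences = masks.flatten(1).max(dim=1).values         # one confidence score per head
        return masks, confidences

@torch.no_grad()
def iterative_segmentation(model, image, num_iters=3):
    prev_mask = torch.zeros_like(image)
    for _ in range(num_iters):
        masks, confidences = model(image, prev_mask)
        best = confidences.argmax()                 # confidence learning: keep the most confident head
        prev_mask = (masks[best] > 0.5).float()     # feeds back as the dense prompt next iteration
    return prev_mask

image = torch.randn(1, 1, 16, 16, 16)               # toy 3D volume
seg = iterative_segmentation(TwoHeadModel(), image)
```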

{"title":"PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts.","authors":"Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz","doi":"10.1007/978-3-031-72384-1_37","DOIUrl":"10.1007/978-3-031-72384-1_37","url":null,"abstract":"<p><p>In this paper, we present PRISM, a <b>P</b>romptable and <b>R</b>obust <b>I</b>nteractive <b>S</b>egmentation <b>M</b>odel, aiming for precise segmentation of 3D medical images. PRISM accepts various visual inputs, including points, boxes, and scribbles as sparse prompts, as well as masks as dense prompts. Specifically, PRISM is designed with four principles to achieve robustness: (1) Iterative learning. The model produces segmentations by using visual prompts from previous iterations to achieve progressive improvement. (2) Confidence learning. PRISM employs multiple segmentation heads per input image, each generating a continuous map and a confidence score to optimize predictions. (3) Corrective learning. Following each segmentation iteration, PRISM employs a shallow corrective refinement network to reassign mislabeled voxels. (4) Hybrid design. PRISM integrates hybrid encoders to better capture both the local and global information. Comprehensive validation of PRISM is conducted using four public datasets for tumor segmentation in the colon, pancreas, liver, and kidney, highlighting challenges caused by anatomical variations and ambiguous boundaries in accurate tumor identification. Compared to state-of-the-art methods, both with and without prompt engineering, PRISM significantly improves performance, achieving results that are close to human levels. The code is publicly available at https://github.com/MedICL-VU/PRISM.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15003 ","pages":"389-399"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12128912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144217993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SinoSynth: A Physics-Based Domain Randomization Approach for Generalizable CBCT Image Enhancement.
Yunkui Pang, Yilin Liu, Xu Chen, Pew-Thian Yap, Jun Lian

Cone Beam Computed Tomography (CBCT) finds diverse applications in medicine. Ensuring high image quality in CBCT scans is essential for accurate diagnosis and treatment delivery. Yet, the susceptibility of CBCT images to noise and artifacts undermines both their usefulness and reliability. Existing methods typically address CBCT artifacts through image-to-image translation approaches. These methods, however, are limited by the artifact types present in the training data, which may not cover the complete spectrum of CBCT degradations stemming from variations in imaging protocols. Gathering additional data to encompass all possible scenarios can often pose a challenge. To address this, we present SinoSynth, a physics-based degradation model that simulates various CBCT-specific artifacts to generate a diverse set of synthetic CBCT images from high-quality CT images, without requiring pre-aligned data. Through extensive experiments, we demonstrate that several different generative networks trained on our synthesized data achieve remarkable results on heterogeneous multi-institutional datasets, outperforming even the same networks trained on actual data. We further show that our degradation model conveniently provides an avenue to enforce anatomical constraints in conditional generative models, yielding high-quality and structure-preserving synthetic CT images (https://github.com/Pangyk/SinoSynth).
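A minimal NumPy sketch of the domain-randomization idea is given below; the specific artifact models (Poisson quantum noise, a smooth scatter-like bias field, a few gain-offset columns) and their parameter ranges are illustrative assumptions rather than the SinoSynth degradation pipeline.

```python
import numpy as np

def randomize_degradations(ct_slice: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply randomly parameterized CBCT-like degradations to a clean CT slice."""
    img = ct_slice.astype(np.float32)
    # Quantum noise: Poisson statistics on a randomly drawn pseudo-dose scale
    dose = rng.uniform(1e3, 1e4)
    img = rng.poisson(np.clip(img, 0, None) * dose) / dose
    # Scatter-like smooth, low-frequency bias field
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    bias = np.sin(yy / img.shape[0] * np.pi) * np.sin(xx / img.shape[1] * np.pi)
    img = img + rng.uniform(0.0, 0.1) * img.max() * bias
    # Detector-gain errors: a few columns with a constant offset (ring-artifact stand-in)
    cols = rng.choice(img.shape[1], size=rng.integers(1, 5), replace=False)
    img[:, cols] += rng.uniform(-0.05, 0.05) * img.max()
    return img

rng = np.random.default_rng(0)
clean = rng.random((128, 128), dtype=np.float32)   # stand-in for a clean CT slice
degraded = randomize_degradations(clean, rng)
```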

{"title":"SinoSynth: A Physics-Based Domain Randomization Approach for Generalizable CBCT Image Enhancement.","authors":"Yunkui Pang, Yilin Liu, Xu Chen, Pew-Thian Yap, Jun Lian","doi":"10.1007/978-3-031-72104-5_62","DOIUrl":"10.1007/978-3-031-72104-5_62","url":null,"abstract":"<p><p>Cone Beam Computed Tomography (CBCT) finds diverse applications in medicine. Ensuring high image quality in CBCT scans is essential for accurate diagnosis and treatment delivery. Yet, the susceptibility of CBCT images to noise and artifacts undermines both their usefulness and reliability. Existing methods typically address CBCT artifacts through image-to-image translation approaches. These methods, however, are limited by the artifact types present in the training data, which may not cover the complete spectrum of CBCT degradations stemming from variations in imaging protocols. Gathering additional data to encompass all possible scenarios can often pose a challenge. To address this, we present SinoSynth, a physics-based degradation model that simulates various CBCT-specific artifacts to generate a diverse set of synthetic CBCT images from high-quality CT images, <i>without</i> requiring pre-aligned data. Through extensive experiments, we demonstrate that several different generative networks trained on our synthesized data achieve remarkable results on heterogeneous multi-institutional datasets, outperforming even the same networks trained on actual data. We further show that our degradation model conveniently provides an avenue to enforce anatomical constraints in conditional generative models, yielding high-quality and structure-preserving synthetic CT images (https://github.com/Pangyk/SinoSynth).</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15007 ","pages":"646-656"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12711319/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145783989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Two Projections Suffice for Cerebral Vascular Reconstruction.
Alexandre Cafaro, Reuben Dorent, Nazim Haouchine, Vincent Lepetit, Nikos Paragios, William M Wells, Sarah Frisken

3D reconstruction of cerebral vasculature from 2D biplanar projections could significantly improve diagnosis and treatment planning. We introduce a novel approach to this challenging task: we initially backproject the two projections, a step that traditionally yields unsatisfactory results due to inherent ambiguities. To overcome this, we employ a U-Net trained to resolve these ambiguities, leading to a significant improvement in reconstruction quality. The result is further refined using a Maximum A Posteriori strategy with a prior that favors continuity, yielding enhanced 3D reconstructions. We evaluated our approach on a comprehensive dataset comprising segmentations from approximately 700 MR angiography scans, from which we generated paired realistic biplanar DRRs. On held-out data, our method achieved an 80% Dice similarity with respect to the ground truth, superior to existing methods. Our code and dataset are available at https://github.com/Wapity/3DBrainXVascular.
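To make the ambiguity concrete, here is a toy NumPy sketch of backprojecting two orthogonal binary projections into a candidate volume; the assumed AP/lateral parallel-ray geometry and the multiplicative combination are illustrative simplifications of the first stage, which the trained U-Net then disambiguates.

```python
import numpy as np

def naive_backproject(proj_ap: np.ndarray, proj_lat: np.ndarray) -> np.ndarray:
    """proj_ap: (Z, X) projection along Y; proj_lat: (Z, Y) projection along X.
    Returns a (Z, Y, X) candidate volume that is nonzero wherever both rays hit vessel."""
    vol_ap = np.repeat(proj_ap[:, None, :], proj_lat.shape[1], axis=1)   # smear along Y
    vol_lat = np.repeat(proj_lat[:, :, None], proj_ap.shape[1], axis=2)  # smear along X
    # Intersections of the two smears: true vessels plus ambiguous false positives
    return vol_ap * vol_lat

ap = (np.random.rand(64, 64) > 0.9).astype(np.float32)    # toy binary AP projection
lat = (np.random.rand(64, 64) > 0.9).astype(np.float32)   # toy binary lateral projection
candidates = naive_backproject(ap, lat)
```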

{"title":"Two Projections Suffice for Cerebral Vascular Reconstruction.","authors":"Alexandre Cafaro, Reuben Dorent, Nazim Haouchine, Vincent Lepetit, Nikos Paragios, William M Wells, Sarah Frisken","doi":"10.1007/978-3-031-72104-5_69","DOIUrl":"10.1007/978-3-031-72104-5_69","url":null,"abstract":"<p><p>3D reconstruction of cerebral vasculature from 2D biplanar projections could significantly improve diagnosis and treatment planning. We introduce a novel approach to tackle this challenging task by initially backprojecting the two projections, a process that traditionally results in unsatisfactory outcomes due to inherent ambiguities. To overcome this, we employ a U-Net approach trained to resolve these ambiguities, leading to significant improvement in reconstruction quality. The process is further refined using a Maximum A Posteriori strategy with a prior that favors continuity, leading to enhanced 3D reconstructions. We evaluated our approach using a comprehensive dataset comprising segmentations from approximately 700 MR angiography scans, from which we generated paired realistic biplanar DRRs. Upon testing with held-out data, our method achieved an 80% Dice similarity w.r.t the ground truth, superior to existing methods. Our code and dataset are available at https://github.com/Wapity/3DBrainXVascular.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15007 ","pages":"722-731"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12715530/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145807074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Insight: A Multi-Modal Diagnostic Pipeline using LLMs for Ocular Surface Disease Diagnosis.
Chun-Hsiao Yeh, Jiayun Wang, Andrew D Graham, Andrea J Liu, Bo Tan, Yubei Chen, Yi Ma, Meng C Lin

Accurate diagnosis of ocular surface diseases is critical in optometry and ophthalmology and hinges on integrating clinical data sources (e.g., meibography imaging and clinical metadata). Traditional human assessments lack precision in quantifying clinical observations, while current machine-based methods often treat diagnosis as a multi-class classification problem, limiting the output to a predefined closed set of curated answers without reasoning about the clinical relevance of each variable to the diagnosis. To tackle these challenges, we introduce an innovative multi-modal diagnostic pipeline (MDPipe) that employs large language models (LLMs) for ocular surface disease diagnosis. We first employ a visual translator to interpret meibography images by converting them into quantifiable morphology data, facilitating their integration with clinical metadata and enabling the communication of nuanced medical insight to LLMs. To further advance this communication, we introduce an LLM-based summarizer to contextualize the insight from the combined morphology and clinical metadata and to generate clinical report summaries. Finally, we refine the LLMs' reasoning ability with domain-specific insight from real-life clinician diagnoses. Our evaluation across diverse ocular surface disease diagnosis benchmarks demonstrates that MDPipe outperforms existing standards, including GPT-4, and provides clinically sound rationales for its diagnoses. The project is available at https://danielchyeh.github.io/MDPipe/.
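The pipeline structure can be summarized in a few lines of Python; every function body below is a toy stand-in (the real visual translator, summarizer, and LLM call are not shown), so treat this purely as a sketch of the data flow.

```python
import numpy as np

def extract_morphology(meibography_image):
    # Visual translator stand-in: reduce the image to a few quantified descriptors.
    return {"gland_density": float(meibography_image.mean()),
            "tortuosity": float(meibography_image.std())}

def summarize_case(morphology, metadata):
    # LLM-based summarizer stand-in: turn structured data into a short clinical narrative.
    return (f"Patient age {metadata['age']}; gland density {morphology['gland_density']:.2f}, "
            f"tortuosity {morphology['tortuosity']:.2f}; symptoms: {metadata['symptoms']}.")

def query_llm(prompt):
    # Placeholder for a call to a domain-refined LLM.
    return f"[LLM diagnosis and rationale for a {len(prompt)}-character case prompt]"

image = np.random.rand(256, 256)   # stand-in for a meibography image
case = summarize_case(extract_morphology(image), {"age": 42, "symptoms": "dryness, grittiness"})
print(query_llm("Diagnose this ocular surface case and justify the diagnosis:\n" + case))
```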

{"title":"Insight: A Multi-Modal Diagnostic Pipeline using LLMs for Ocular Surface Disease Diagnosis.","authors":"Chun-Hsiao Yeh, Jiayun Wang, Andrew D Graham, Andrea J Liu, Bo Tan, Yubei Chen, Yi Ma, Meng C Lin","doi":"10.1007/978-3-031-72378-0_66","DOIUrl":"10.1007/978-3-031-72378-0_66","url":null,"abstract":"<p><p>Accurate diagnosis of ocular surface diseases is critical in optometry and ophthalmology, which hinge on integrating clinical data sources (e.g., meibography imaging and clinical metadata). Traditional human assessments lack precision in quantifying clinical observations, while current machine-based methods often treat diagnoses as multi-class classification problems, limiting the diagnoses to a predefined closed-set of curated answers without reasoning the clinical relevance of each variable to the diagnosis. To tackle these challenges, we introduce an innovative multi-modal diagnostic pipeline (MDPipe) by employing large language models (LLMs) for ocular surface disease diagnosis. We first employ a visual translator to interpret meibography images by converting them into quantifiable morphology data, facilitating their integration with clinical metadata and enabling the communication of nuanced medical insight to LLMs. To further advance this communication, we introduce a LLM-based summarizer to contextualize the insight from the combined morphology and clinical metadata, and generate clinical report summaries. Finally, we refine the LLMs' reasoning ability with domain-specific insight from real-life clinician diagnoses. Our evaluation across diverse ocular surface disease diagnosis benchmarks demonstrates that MDPipe outperforms existing standards, including GPT-4, and provides clinically sound rationales for diagnoses. The project is available at https://danielchyeh.github.io/MDPipe/.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15001 ","pages":"711-721"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832216/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.
Yuexi Du, Brian Chang, Nicha C Dvornek

Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poorly suited to medical applications, where large datasets are not always available. Meanwhile, the language model prompts are mainly derived manually from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of extensively pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. The proposed parameter-efficient framework reduces the total trainable model size by 39% and shrinks the trainable language model to only 4% of the current BERT encoder.
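For context, the sketch below shows the standard symmetric CLIP-style contrastive objective together with a small trainable module (soft prompt tokens plus a projection) on top of frozen backbone features; the dimensions and the specific module are assumptions for illustration, not the CLEFT architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftPromptHead(nn.Module):
    """Only these parameters would be trained; the language/vision backbones stay frozen."""
    def __init__(self, dim=512, num_prompt_tokens=8):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)  # learnable context
        self.proj = nn.Linear(dim, dim)

    def forward(self, frozen_text_features):
        # Pool the prompt tokens into the text features (kept simple for the sketch) and project.
        return self.proj(frozen_text_features + self.prompt.mean(dim=0))

def clip_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    # Symmetric InfoNCE: match images to texts and texts to images
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

img_feats, txt_feats = torch.randn(8, 512), torch.randn(8, 512)   # frozen-backbone features (toy)
loss = clip_loss(img_feats, SoftPromptHead()(txt_feats))
```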

{"title":"CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning.","authors":"Yuexi Du, Brian Chang, Nicha C Dvornek","doi":"10.1007/978-3-031-72390-2_44","DOIUrl":"10.1007/978-3-031-72390-2_44","url":null,"abstract":"<p><p>Recent advancements in Contrastive Language-Image Pre-training (CLIP) [21] have demonstrated notable success in self-supervised representation learning across various tasks. However, the existing CLIP-like approaches often demand extensive GPU resources and prolonged training times due to the considerable size of the model and dataset, making them poor for medical applications, in which large datasets are not always common. Meanwhile, the language model prompts are mainly manually derived from labels tied to images, potentially overlooking the richness of information within training samples. We introduce a novel language-image Contrastive Learning method with an Efficient large language model and prompt Fine-Tuning (CLEFT) that harnesses the strengths of the extensive pre-trained language and visual models. Furthermore, we present an efficient strategy for learning context-based prompts that mitigates the gap between informative clinical diagnostic data and simple class labels. Our method demonstrates state-of-the-art performance on multiple chest X-ray and mammography datasets compared with various baselines. The proposed parameter efficient framework can reduce the total trainable model size by 39% and reduce the trainable language model to only 4% compared with the current BERT encoder.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15012 ","pages":"465-475"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11709740/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142960994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer.
Xiaofeng Liu, Fangxu Xing, Zhangxing Bian, Tomas Arias-Vergara, Paula Andrea Pérez-Toro, Andreas Maier, Maureen Stone, Jiachen Zhuo, Jerry L Prince, Jonghye Woo

Tagged magnetic resonance imaging (MRI) has been successfully used to track the motion of internal tissue points within moving organs. Typically, to analyze motion using tagged MRI, cine MRI data in the same coordinate system are acquired, incurring additional time and costs. Consequently, tagged-to-cine MR synthesis holds the potential to reduce the extra acquisition time and costs associated with cine MRI, without disrupting downstream motion analysis tasks. Previous approaches have processed each frame independently, thereby overlooking the fact that complementary information from occluded regions of the tag patterns could be present in neighboring frames exhibiting motion. Furthermore, the inconsistent visual appearance across frames, e.g., tag fading, can reduce synthesis performance. To address this, we propose an efficient framework for tagged-to-cine MR sequence synthesis, leveraging both spatial and temporal information with relatively limited data. Specifically, we follow a split-and-integral protocol to balance spatial-temporal modeling efficiency and consistency. The light spatial-temporal transformer (LiST²) is designed to exploit local and global attention in the motion sequence with relatively lightweight training parameters. A directional product relative position-time bias makes the model aware of spatial-temporal correlation, while a shifted window is used for motion alignment. Then, a recurrent sliding fine-tuning (ReST) scheme is applied to further enhance temporal consistency. Our framework is evaluated on paired tagged and cine MRI sequences, demonstrating superior performance over comparison methods.
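As a rough sketch of lightweight temporal attention with a relative position-time bias, the PyTorch module below attends over a short window of per-frame tokens and adds a learnable bias indexed by relative frame offset; the shapes, the pooled-token view, and the omission of the spatial shifted-window part are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, dim=256, num_frames=8, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Learnable bias for every relative frame offset in [-(T-1), T-1]
        self.rel_bias = nn.Parameter(torch.zeros(2 * num_frames - 1))
        self.num_frames = num_frames

    def forward(self, x):                                  # x: (batch, num_frames, dim) frame tokens
        idx = torch.arange(self.num_frames)
        bias = self.rel_bias[idx[None, :] - idx[:, None] + self.num_frames - 1]
        out, _ = self.attn(x, x, x, attn_mask=bias)        # a float mask is added to attention scores
        return x + out                                     # residual over the temporal dimension

frames = torch.randn(2, 8, 256)
refined = TemporalAttention()(frames)
```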

{"title":"Tagged-to-Cine MRI Sequence Synthesis via Light Spatial-Temporal Transformer.","authors":"Xiaofeng Liu, Fangxu Xing, Zhangxing Bian, Tomas Arias-Vergara, Paula Andrea Pérez-Toro, Andreas Maier, Maureen Stone, Jiachen Zhuo, Jerry L Prince, Jonghye Woo","doi":"10.1007/978-3-031-72104-5_67","DOIUrl":"10.1007/978-3-031-72104-5_67","url":null,"abstract":"<p><p>Tagged magnetic resonance imaging (MRI) has been successfully used to track the motion of internal tissue points within moving organs. Typically, to analyze motion using tagged MRI, cine MRI data in the same coordinate system are acquired, incurring additional time and costs. Consequently, tagged-to-cine MR synthesis holds the potential to reduce the extra acquisition time and costs associated with cine MRI, without disrupting downstream motion analysis tasks. Previous approaches have processed each frame independently, thereby overlooking the fact that complementary information from occluded regions of the tag patterns could be present in neighboring frames exhibiting motion. Furthermore, the inconsistent visual appearance, e.g., tag fading, across frames can reduce synthesis performance. To address this, we propose an efficient framework for tagged-to-cine MR sequence synthesis, leveraging both spatial and temporal information with relatively limited data. Specifically, we follow a split-and-integral protocol to balance spatialtemporal modeling efficiency and consistency. The light spatial-temporal transformer (LiST<sup>2</sup>) is designed to exploit the local and global attention in motion sequence with relatively lightweight training parameters. The directional product relative position-time bias is adapted to make the model aware of the spatial-temporal correlation, while the shifted window is used for motion alignment. Then, a recurrent sliding fine-tuning (ReST) scheme is applied to further enhance the temporal consistency. Our framework is evaluated on paired tagged and cine MRI sequences, demonstrating superior performance over comparison methods.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15007 ","pages":"701-711"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11517403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142524019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration.
Nian Wu, Jiarui Xing, Miaomiao Zhang

This paper presents a novel approach, termed Temporal Latent Residual Network (TLRN), to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness of consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and the current input accumulated from previous time frames. We validate the effectiveness of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results show that TLRN achieves substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN.
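A minimal sketch of the temporal residual idea is shown below: each block consumes the deformation state accumulated from earlier frames plus the current frame's features and predicts a residual update in a latent velocity space. The 3D conv block, shapes, and the absence of the velocity-field integration step are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class LatentResidualBlock(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1))

    def forward(self, accumulated, frame_feat):
        # Predict a residual update to the latent velocity field given the new frame.
        return accumulated + self.net(torch.cat([accumulated, frame_feat], dim=1))

blocks = nn.ModuleList([LatentResidualBlock() for _ in range(4)])
accumulated = torch.zeros(1, 16, 8, 8, 8)        # latent initial-velocity state
for block in blocks:                             # one block per time frame
    frame_feat = torch.randn(1, 16, 8, 8, 8)     # toy encoder features of the current frame
    accumulated = block(accumulated, frame_feat)
```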

{"title":"TLRN: Temporal Latent Residual Networks For Large Deformation Image Registration.","authors":"Nian Wu, Jiarui Xing, Miaomiao Zhang","doi":"10.1007/978-3-031-72069-7_68","DOIUrl":"10.1007/978-3-031-72069-7_68","url":null,"abstract":"<p><p>This paper presents a novel approach, termed <i>Temporal Latent Residual Network (TLRN)</i>, to predict a sequence of deformation fields in time-series image registration. The challenge of registering time-series images often lies in the occurrence of large motions, especially when images differ significantly from a reference (e.g., the start of a cardiac cycle compared to the peak stretching phase). To achieve accurate and robust registration results, we leverage the nature of motion continuity and exploit the temporal smoothness in consecutive image frames. Our proposed TLRN highlights a temporal residual network with residual blocks carefully designed in latent deformation spaces, which are parameterized by time-sequential initial velocity fields. We treat a sequence of residual blocks over time as a dynamic training system, where each block is designed to learn the residual function between desired deformation features and current input accumulated from previous time frames. We validate the effectivenss of TLRN on both synthetic data and real-world cine cardiac magnetic resonance (CMR) image videos. Our experimental results shows that TLRN is able to achieve substantially improved registration accuracy compared to the state-of-the-art. Our code is publicly available at https://github.com/nellie689/TLRN.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15002 ","pages":"728-738"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929566/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis.
Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo

Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy. For instance, the intricate organization in kidney pathology spans multiple layers, from regions like the cortex and medulla to functional units such as glomeruli, tubules, and vessels, down to various cell types. In this paper, we propose a novel Hierarchical Adaptive Taxonomy Segmentation (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights. Our approach entails (1) an innovative HATs technique that translates spatial relationships among 15 distinct object classes into a versatile "plug-and-play" loss function spanning regions, functional units, and cells; (2) the incorporation of anatomical hierarchies and scale considerations into a unified, simple matrix representation for all panoramic entities; and (3) the adoption of the latest AI foundation model (EfficientSAM) as a feature extraction tool to boost the model's adaptability while eliminating the need for the manual prompt generation required by the conventional segment anything model (SAM). Experimental findings demonstrate that the HATs method offers an efficient and effective strategy for integrating clinical insights and imaging precedents into a unified segmentation model across more than 15 categories. The official implementation is publicly available at https://github.com/hrlblab/HATs.
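To illustrate how class relationships can become a plug-and-play loss, the sketch below encodes a toy two-level taxonomy as a binary matrix and supervises both the fine classes and their parent regions by summing child probabilities; the matrix, class counts, and loss weighting are illustrative assumptions, not the official HATs loss.

```python
import torch
import torch.nn.functional as F

# Toy taxonomy: 4 fine classes grouped into 2 regions (rows: regions, cols: fine classes)
HIERARCHY = torch.tensor([[1., 1., 0., 0.],
                          [0., 0., 1., 1.]])

def hierarchical_loss(logits, fine_target, region_target):
    # logits: (B, 4, H, W); targets are per-pixel index maps at each taxonomy level
    fine_loss = F.cross_entropy(logits, fine_target)
    probs = logits.softmax(dim=1)
    region_probs = torch.einsum('rc,bchw->brhw', HIERARCHY, probs)   # aggregate child classes
    region_loss = F.nll_loss(torch.log(region_probs.clamp_min(1e-8)), region_target)
    return fine_loss + region_loss

logits = torch.randn(2, 4, 32, 32)
fine = torch.randint(0, 4, (2, 32, 32))
region = torch.randint(0, 2, (2, 32, 32))
loss = hierarchical_loss(logits, fine, region)
```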

{"title":"HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis.","authors":"Ruining Deng, Quan Liu, Can Cui, Tianyuan Yao, Juming Xiong, Shunxing Bao, Hao Li, Mengmeng Yin, Yu Wang, Shilin Zhao, Yucheng Tang, Haichun Yang, Yuankai Huo","doi":"10.1007/978-3-031-72083-3_15","DOIUrl":"10.1007/978-3-031-72083-3_15","url":null,"abstract":"<p><p>Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy. For instance, the intricate organization in kidney pathology spans multiple layers, from regions like the cortex and medulla to functional units such as glomeruli, tubules, and vessels, down to various cell types. In this paper, we propose a novel Hierarchical Adaptive Taxonomy Segmentation (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights. Our approach entails (1) the innovative HATs technique which translates spatial relationships among 15 distinct object classes into a versatile \"plug-and-play\" loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, (3) the adoption of the latest AI foundation model (EfficientSAM) as a feature extraction tool to boost the model's adaptability, yet eliminating the need for manual prompt generation in conventional segment anything model (SAM). Experimental findings demonstrate that the HATs method offers an efficient and effective strategy for integrating clinical insights and imaging precedents into a unified segmentation model across more than 15 categories. The official implementation is publicly available at https://github.com/hrlblab/HATs.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15004 ","pages":"155-166"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11927787/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143694985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation.
Wenhui Zhu, Xiwen Chen, Peijie Qiu, Mohammad Farazi, Aristeidis Sotiras, Abolfazl Razi, Yalin Wang

Since its introduction, UNet has been leading a variety of medical image segmentation tasks. Although numerous follow-up studies have been dedicated to improving the performance of the standard UNet, few have conducted in-depth analyses of the patterns the UNet actually learns in medical image segmentation. In this paper, we explore the patterns learned in a UNet and observe two important factors that potentially affect its performance: (i) irrelevant features learned because of asymmetric supervision; (ii) feature redundancy in the feature map. To this end, we propose to balance the supervision between encoder and decoder and to reduce the redundant information in the UNet. Specifically, we use the feature map that contains the most semantic information (i.e., the last layer of the decoder) to provide additional supervision to other blocks and to reduce feature redundancy by leveraging feature distillation. The proposed method can be easily integrated into existing UNet architectures in a plug-and-play fashion with negligible computational cost. The experimental results suggest that the proposed method consistently improves the performance of standard UNets on four medical image segmentation datasets. The code is available at https://github.com/ChongQingNoSubway/SelfReg-UNet.
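The decoder-to-earlier-block feature distillation can be sketched in a few lines; the shapes, the 1x1 projection, and the MSE matching below are assumptions chosen for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_distill_loss(intermediate, final, proj: nn.Conv2d):
    """Let the semantically richest decoder features supervise an earlier block."""
    # Resize the final decoder features to the intermediate resolution, project channels, and match.
    target = F.interpolate(final, size=intermediate.shape[-2:], mode='bilinear', align_corners=False)
    return F.mse_loss(proj(intermediate), target.detach())

inter = torch.randn(1, 128, 32, 32)   # features from an intermediate block (toy)
final = torch.randn(1, 64, 64, 64)    # last decoder layer features (toy)
proj = nn.Conv2d(128, 64, kernel_size=1)
loss = feature_distill_loss(inter, final, proj)
```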

{"title":"SelfReg-UNet: Self-Regularized UNet for Medical Image Segmentation.","authors":"Wenhui Zhu, Xiwen Chen, Peijie Qiu, Mohammad Farazi, Aristeidis Sotiras, Abolfazl Razi, Yalin Wang","doi":"10.1007/978-3-031-72111-3_56","DOIUrl":"10.1007/978-3-031-72111-3_56","url":null,"abstract":"<p><p>Since its introduction, UNet has been leading a variety of medical image segmentation tasks. Although numerous follow-up studies have also been dedicated to improving the performance of standard UNet, few have conducted in-depth analyses of the underlying interest pattern of UNet in medical image segmentation. In this paper, we explore the patterns learned in a UNet and observe two important factors that potentially affect its performance: (i) irrelative feature learned caused by asymmetric supervision; (ii) feature redundancy in the feature map. To this end, we propose to balance the supervision between encoder and decoder and reduce the redundant information in the UNet. Specifically, we use the feature map that contains the most semantic information (i.e., the last layer of the decoder) to provide additional supervision to other blocks to provide additional supervision and reduce feature redundancy by leveraging feature distillation. The proposed method can be easily integrated into existing UNet architecture in a plug-and-play fashion with negligible computational cost. The experimental results suggest that the proposed method consistently improves the performance of standard UNets on four medical image segmentation datasets. The code is available at https://github.com/ChongQingNoSubway/SelfReg-UNet.</p>","PeriodicalId":94280,"journal":{"name":"Medical image computing and computer-assisted intervention : MICCAI ... International Conference on Medical Image Computing and Computer-Assisted Intervention","volume":"15008 ","pages":"601-611"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12408486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145017053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0