
IEEE Winter Conference on Applications of Computer Vision — Latest Publications

ARD-VAE: A Statistical Formulation to Find the Relevant Latent Dimensions of Variational Autoencoders.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00096
Surojit Saha, Sarang Joshi, Ross Whitaker

The variational autoencoder (VAE) [19, 41] is a popular, deep, latent-variable model (DLVM) due to its simple yet effective formulation for modeling the data distribution. Moreover, optimizing the VAE objective function is more manageable than other DLVMs. The bottleneck dimension of the VAE is a crucial design choice, and it has strong ramifications for the model's performance, such as finding the hidden explanatory factors of a dataset using the representations learned by the VAE. However, the size of the latent dimension of the VAE is often treated as a hyperparameter estimated empirically through trial and error. To this end, we propose a statistical formulation to discover the relevant latent factors required for modeling a dataset. In this work, we use a hierarchical prior in the latent space that estimates the variance of the latent axes using the encoded data, which identifies the relevant latent dimensions. For this, we replace the fixed prior in the VAE objective function with a hierarchical prior, keeping the remainder of the formulation unchanged. We call the proposed method the automatic relevancy detection in the variational autoencoder (ARD-VAE). We demonstrate the efficacy of the ARD-VAE on multiple benchmark datasets in finding the relevant latent dimensions and their effect on different evaluation metrics, such as FID score and disentanglement analysis.
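As a hedged illustration of the idea in the abstract (not the authors' exact formulation), an ARD-style hierarchical prior replaces the fixed standard-normal prior with a zero-mean Gaussian whose per-axis variances α_d are estimated from the encoded data; the rest of the ELBO keeps its form, and axes whose estimated variance shrinks toward zero are read as irrelevant:

```latex
% Sketch of an ARD-style hierarchical prior inside the VAE objective (illustrative):
% p_alpha(z) is a factorized Gaussian whose per-axis variances alpha_d are estimated
% from the encoded data; dimensions with alpha_d -> 0 are deemed irrelevant.
\begin{aligned}
p_{\alpha}(z) &= \prod_{d=1}^{D} \mathcal{N}\!\left(z_d \mid 0,\, \alpha_d\right),\\[2pt]
\mathcal{L}(\theta,\phi,\alpha) &= \mathbb{E}_{q_{\phi}(z\mid x)}\!\left[\log p_{\theta}(x\mid z)\right]
  \;-\; \mathrm{KL}\!\left(q_{\phi}(z\mid x)\,\middle\|\,p_{\alpha}(z)\right).
\end{aligned}
```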

Vol. 2025, pp. 889-898. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747167/pdf/
Citations: 0
Multi-Aperture Transformers for 3D (MAT3D) Segmentation of Clinical and Microscopic Images.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00427
Muhammad Sohaib, Siyavash Shabani, Sahar A Mohammed, Garrett Winkelmaier, Bahram Parvin

3D segmentation of biological structures is critical in biomedical imaging, offering significant insights into structures and functions. This paper introduces a novel segmentation of biological images that couples Multi-Aperture representation with Transformers for 3D (MAT3D) segmentation. Our method integrates the global context-awareness of Transformer networks with the local feature extraction capabilities of Convolutional Neural Networks (CNNs), providing a comprehensive solution for accurately delineating complex biological structures. First, we evaluated the performance of the proposed technique on two public clinical datasets of ACDC and Synapse multi-organ segmentation, rendering superior Dice scores of 93.34±0.05 and 89.73±0.04, respectively, with fewer parameters compared to the published literature. Next, we assessed the performance of our technique on an organoid dataset comprising four breast cancer subtypes. The proposed method achieved a Dice score of 95.12±0.02 and a PQ score of 97.01±0.01. MAT3D also significantly reduces the parameter count to 40 million. The code is available on https://github.com/sohaibcs1/MAT3D.
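The abstract does not give architectural details; the following is only a generic sketch of the CNN-plus-Transformer pattern it describes, where a 3D convolutional stem extracts local features and a Transformer encoder relates them globally (all module names and sizes are illustrative assumptions, not the published MAT3D design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridCNNTransformer3D(nn.Module):
    """Illustrative 3D segmentation block: a small CNN stem for local features,
    a Transformer encoder for global context. Not the published MAT3D design."""
    def __init__(self, in_ch=1, feat=32, num_classes=4, depth=2, heads=4):
        super().__init__()
        self.stem = nn.Sequential(                      # local feature extraction
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.GELU(),
            nn.Conv3d(feat, feat, 3, stride=2, padding=1), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=feat, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)  # global context
        self.head = nn.Conv3d(feat, num_classes, 1)     # per-voxel class logits

    def forward(self, x):
        f = self.stem(x)                                # (B, C, D/2, H/2, W/2)
        B, C, D, H, W = f.shape
        tokens = f.flatten(2).transpose(1, 2)           # (B, D*H*W, C) token sequence
        tokens = self.encoder(tokens)
        f = tokens.transpose(1, 2).reshape(B, C, D, H, W)
        f = F.interpolate(f, scale_factor=2, mode="trilinear", align_corners=False)
        return self.head(f)                             # (B, num_classes, D, H, W)

# usage (illustrative): logits = HybridCNNTransformer3D()(torch.randn(1, 1, 16, 64, 64))
```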

Vol. 2025, pp. 4352-4361. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12040328/pdf/
Citations: 0
Generative Model-Based Fusion for Improved Few-Shot Semantic Segmentation of Infrared Images.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00535
Junno Yun, Mehmet Akçakaya

Infrared (IR) imaging is commonly used in various scenarios, including autonomous driving, fire safety and defense applications. Thus, semantic segmentation of such images is of great interest. However, this task faces several challenges, including data scarcity, differing contrast and input channel number compared to natural images, and emergence of classes not represented in databases in certain scenarios, such as defense applications. Few-shot segmentation (FSS) provides a framework to overcome these issues by segmenting query images using a few labeled support samples. However, existing FSS models for IR images require paired visible RGB images, which is a major limitation since acquiring such paired data is difficult or impossible in some applications. In this work, we develop new strategies for FSS of IR images by using generative modeling and fusion techniques. To this end, we propose to synthesize auxiliary data to provide additional channel information to complement the limited contrast in the IR images, as well as IR data synthesis for data augmentation. Here, the former helps the FSS model to better capture the relationship between the support and query sets, while the latter addresses the issue of data scarcity. Finally, to further improve the former aspect, we propose a novel fusion ensemble module for integrating the two different modalities. Our methods are evaluated on different IR datasets, and improve upon the state-of-the-art (SOTA) FSS models.
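As a hedged sketch of the fusion-ensemble idea (the paper's actual module is not described in the abstract), predictions from the raw IR input and from synthesized auxiliary channels can be combined with a learned per-pixel gate; the class count and gating scheme below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LogitFusionEnsemble(nn.Module):
    """Hedged sketch of a fusion ensemble: combine segmentation logits predicted
    from the raw IR input and from synthesized auxiliary channels with a learned
    per-pixel gate. Purely illustrative, not the module proposed in the paper."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.gate = nn.Conv2d(2 * num_classes, num_classes, kernel_size=1)

    def forward(self, logits_ir, logits_aux):
        g = torch.sigmoid(self.gate(torch.cat([logits_ir, logits_aux], dim=1)))
        return g * logits_ir + (1.0 - g) * logits_aux   # per-pixel convex combination

# usage with dummy branch outputs:
# fused = LogitFusionEnsemble(2)(torch.randn(1, 2, 64, 64), torch.randn(1, 2, 64, 64))
```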

Vol. 2025, pp. 5479-5488. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12790678/pdf/
Citations: 0
Sli2Vol+: Segmenting 3D Medical Images Based on an Object Estimation Guided Correspondence Flow Network.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00357
Delin An, Pengfei Gu, Milan Sonka, Chaoli Wang, Danny Z Chen

Deep learning (DL) methods have shown remarkable successes in medical image segmentation, often using large amounts of annotated data for model training. However, acquiring a large number of diverse labeled 3D medical image datasets is highly difficult and expensive. Recently, mask propagation DL methods were developed to reduce the annotation burden on 3D medical images. For example, Sli2Vol [59] proposed a self-supervised framework (SSF) to learn correspondences by matching neighboring slices via slice reconstruction in the training stage; the learned correspondences were then used to propagate a labeled slice to other slices in the test stage. But, these methods are still prone to error accumulation due to the inter-slice propagation of reconstruction errors. Also, they do not handle discontinuities well, which can occur between consecutive slices in 3D images, as they emphasize exploiting object continuity. To address these challenges, in this work, we propose a new SSF, called Sli2Vol+, for segmenting any anatomical structures in 3D medical images using only a single annotated slice per training and testing volume. Specifically, in the training stage, we first propagate an annotated 2D slice of a training volume to the other slices, generating pseudo-labels (PLs). Then, we develop a novel Object Estimation Guided Correspondence Flow Network to learn reliable correspondences between consecutive slices and corresponding PLs in a self-supervised manner. In the test stage, such correspondences are utilized to propagate a single annotated slice to the other slices of a test volume. We demonstrate the effectiveness of our method on various medical image segmentation tasks with different datasets, showing better generalizability across different organs, modalities, and modals. Code is available at https://github.com/adlsn/Sli2VolPlus.
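The mask-propagation step common to this family of methods can be sketched as follows (a minimal illustration of correspondence-based label propagation, not the exact Sli2Vol+ network): an affinity matrix between feature maps of adjacent slices carries the previous slice's labels forward.

```python
import torch
import torch.nn.functional as F

def propagate_label(feat_prev, feat_next, label_prev, temperature=0.07):
    """Hedged sketch of correspondence-based slice-to-slice label propagation.
    feat_*: (C, H, W) feature maps of two adjacent slices;
    label_prev: (K, H, W) one-hot (or soft) labels of the previous slice."""
    C, H, W = feat_prev.shape
    f_prev = F.normalize(feat_prev.reshape(C, -1), dim=0)               # (C, HW)
    f_next = F.normalize(feat_next.reshape(C, -1), dim=0)               # (C, HW)
    affinity = torch.softmax(f_next.t() @ f_prev / temperature, dim=1)  # (HW, HW)
    lab = label_prev.reshape(label_prev.shape[0], -1)                   # (K, HW)
    label_next = (affinity @ lab.t()).t().reshape(-1, H, W)             # (K, H, W)
    return label_next

# usage (illustrative):
# lp = propagate_label(torch.randn(16, 64, 64), torch.randn(16, 64, 64),
#                      F.one_hot(torch.randint(0, 2, (64, 64)), 2).permute(2, 0, 1).float())
```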

Vol. 2025, pp. 3624-3634. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12459605/pdf/
Citations: 0
CUNSB-RFIE: Context-aware Unpaired Neural Schrödinger Bridge in Retinal Fundus Image Enhancement.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00442
Xuanzhao Dong, Vamsi Krishna Vasa, Wenhui Zhu, Peijie Qiu, Xiwen Chen, Yi Su, Yujian Xiong, Zhangsihao Yang, Yanxi Chen, Yalin Wang

Retinal fundus photography is significant in diagnosing and monitoring retinal diseases. However, systemic imperfections and operator/patient-related factors can hinder the acquisition of high-quality retinal images. Previous efforts in retinal image enhancement primarily relied on GANs, which are limited by the trade-off between training stability and output diversity. In contrast, the Schrödinger Bridge (SB) offers a more stable solution by utilizing Optimal Transport (OT) theory to model a stochastic differential equation (SDE) between two arbitrary distributions. This allows SB to effectively transform low-quality retinal images into their high-quality counterparts. In this work, we leverage the SB framework to propose an image-to-image translation pipeline for retinal image enhancement. Additionally, previous methods often fail to capture fine structural details, such as blood vessels. To address this, we enhance our pipeline by introducing Dynamic Snake Convolution, whose tortuous receptive field can better preserve tubular structures. We name the resulting retinal fundus image enhancement framework the Context-aware Unpaired Neural Schrödinger Bridge (CUNSB-RFIE). To the best of our knowledge, this is the first endeavor to use the SB approach for retinal image enhancement. Experimental results on a large-scale dataset demonstrate the advantage of the proposed method compared to several state-of-the-art supervised and unsupervised methods in terms of image quality and performance on downstream tasks. The code is available at https://github.com/Retinal-Research/CUNSB-RFIE.
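For orientation only (the abstract does not give the exact training objective), the Schrödinger Bridge can be written as the entropy-regularized bridge problem between the low-quality and high-quality image distributions, whose solution is induced by a drift-corrected SDE:

```latex
% Generic Schrodinger Bridge formulation (illustrative, not the paper's exact loss):
% find the path measure Q closest to a reference diffusion W subject to the two
% endpoint marginals, i.e. the low-quality and high-quality image distributions;
% Psi is the potential from the SB optimality conditions.
\min_{\mathbb{Q}:\ \mathbb{Q}_0=\pi_{\mathrm{low}},\ \mathbb{Q}_1=\pi_{\mathrm{high}}}
  \mathrm{KL}\!\left(\mathbb{Q}\,\middle\|\,\mathbb{W}\right),
\qquad
\mathrm{d}X_t = \left[f(X_t,t) + g(t)^{2}\,\nabla_{x}\log\Psi(X_t,t)\right]\mathrm{d}t
  + g(t)\,\mathrm{d}W_t .
```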

Vol. 2025, pp. 4502-4511. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12408487/pdf/
Citations: 0
SODA: Spectral Orthogonal Decomposition Adaptation for Diffusion Models.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00458
Xinxi Zhang, Song Wen, Ligong Han, Felix Juefei-Xu, Akash Srivastava, Junzhou Huang, Vladimir Pavlovic, Hao Wang, Molei Tao, Dimitris Metaxas

Adapting large-scale pre-trained generative models in a parameter-efficient manner is gaining traction. Traditional methods like low rank adaptation achieve parameter efficiency by imposing constraints but may not be optimal for tasks requiring high representation capacity. We propose a novel spectrum-aware adaptation framework for generative models. Our method adjusts both singular values and their basis vectors of pretrained weights. Using the Kronecker product and efficient Stiefel optimizers, we achieve parameter-efficient adaptation of orthogonal matrices. Specifically, we introduce Spectral Orthogonal Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity. Extensive evaluations on text-to-image diffusion models demonstrate SODA's effectiveness, offering a spectrum-aware alternative to existing fine-tuning methods.
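As a hedged illustration of adjusting both singular values and their basis vectors (the published SODA parameterization, including its Kronecker-product and Stiefel-optimizer details, is not reproduced here), one can decompose a pretrained weight, learn a small spectrum update, and rotate the singular bases with orthogonal factors:

```python
import torch
import torch.nn as nn

class SpectralAdapter(nn.Module):
    """Hedged sketch of spectrum-aware adaptation: W' = (U Q_u) diag(s + ds) (Q_v V^T).
    Orthogonality of Q_u, Q_v is maintained via the matrix exponential of a
    skew-symmetric parameter. Illustrative only, not the published SODA scheme."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        r = S.numel()
        self.delta_s = nn.Parameter(torch.zeros(r))      # spectrum (singular value) update
        self.A_u = nn.Parameter(torch.zeros(r, r))       # generators of basis rotations
        self.A_v = nn.Parameter(torch.zeros(r, r))

    def forward(self) -> torch.Tensor:
        Qu = torch.matrix_exp(self.A_u - self.A_u.T)     # orthogonal rotation of left basis
        Qv = torch.matrix_exp(self.A_v - self.A_v.T)     # orthogonal rotation of right basis
        return (self.U @ Qu) @ torch.diag(self.S + self.delta_s) @ (Qv @ self.Vh)

# usage (illustrative): adapted_weight = SpectralAdapter(torch.randn(64, 32))()
```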

Vol. 2025, pp. 4665-4682. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12085162/pdf/
Citations: 0
Context-Aware Optimal Transport Learning for Retinal Fundus Image Enhancement.
Pub Date : 2025-02-01 Epub Date: 2025-04-08 DOI: 10.1109/wacv61041.2025.00395
Vamsi Krishna Vasa, Yujian Xiong, Peijie Qiu, Oana Dumitrascu, Wenhui Zhu, Yalin Wang

Retinal fundus photography offers a non-invasive way to diagnose and monitor a variety of retinal diseases, but is prone to inherent quality glitches arising from systemic imperfections or operator/patient-related factors. However, high-quality retinal images are crucial for carrying out accurate diagnoses and automated analyses. Fundus image enhancement is typically formulated as a distribution alignment problem: finding a one-to-one mapping between a low-quality image and its high-quality counterpart. This paper proposes a context-informed optimal transport (OT) learning framework for tackling unpaired fundus image enhancement. In contrast to standard generative image enhancement methods, which struggle with handling contextual information (e.g., over-tampered local structures and unwanted artifacts), the proposed context-aware OT learning paradigm better preserves local structures and minimizes unwanted artifacts. Leveraging deep contextual features, we derive the proposed context-aware OT using the earth mover's distance and show that the proposed context-OT has a solid theoretical guarantee. Experimental results on a large-scale dataset demonstrate the superiority of the proposed method over several state-of-the-art supervised and unsupervised methods in terms of signal-to-noise ratio, structural similarity index, as well as two downstream tasks. The code is available at https://github.com/Retinal-Research/Contextual-OT.
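For reference, a generic form of the earth mover's distance over deep contextual features is shown below; this is an illustrative formulation, not necessarily the exact objective derived in the paper:

```latex
% Earth mover's distance between the contextual-feature distributions of the
% low-quality image (mu) and its enhanced counterpart (nu); phi is the feature
% extractor, c a ground cost, and Pi(mu, nu) the set of couplings with those marginals.
\mathrm{EMD}(\mu,\nu) \;=\; \min_{\gamma \in \Pi(\mu,\nu)}
  \int c\!\left(\phi(x),\,\phi(y)\right)\,\mathrm{d}\gamma(x,y).
```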

Vol. 2025, pp. 4016-4025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12337797/pdf/
Citations: 0
AnyStar: Domain randomized universal star-convex 3D instance segmentation.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00742
Neel Dey, S Mazdak Abulnaga, Benjamin Billot, Esra Abaci Turk, P Ellen Grant, Adrian V Dalca, Polina Golland

Star-convex shapes arise across bio-microscopy and radiology in the form of nuclei, nodules, metastases, and other units. Existing instance segmentation networks for such structures train on densely labeled instances for each dataset, which requires substantial and often impractical manual annotation effort. Further, significant reengineering or finetuning is needed when presented with new datasets and imaging modalities due to changes in contrast, shape, orientation, resolution, and density. We present AnyStar, a domain-randomized generative model that simulates synthetic training data of blob-like objects with randomized appearance, environments, and imaging physics to train general-purpose star-convex instance segmentation networks. As a result, networks trained using our generative model do not require annotated images from unseen datasets. A single network trained on our synthesized data accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence microscopy, mouse cortical nuclei in μCT, zebrafish brain nuclei in EM, and placental cotyledons in human fetal MRI, all without any retraining, finetuning, transfer learning, or domain adaptation. Code is available at https://github.com/neel-dey/AnyStar.
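A minimal sketch of the domain-randomization idea (random blob instances with randomized appearance and imaging noise) is given below; the actual AnyStar generative model is considerably richer, and all parameter ranges here are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def synthesize_blob_volume(shape=(64, 64, 64), n_blobs=20, seed=0):
    """Hedged sketch of domain-randomized blob synthesis in the spirit of the
    abstract (random blob labels plus randomized intensity and noise); not the
    published AnyStar generative model."""
    rng = np.random.default_rng(seed)
    density = np.zeros(shape, dtype=np.float32)
    idx = rng.integers(0, np.array(shape), size=(n_blobs, 3))       # random blob centers
    density[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    density = gaussian_filter(density, sigma=rng.uniform(2.0, 4.0)) # spread into blobs
    labels, _ = label(density > np.percentile(density, 97))         # instance label map
    intensity = rng.uniform(0.3, 1.0) * (labels > 0) + rng.uniform(0.0, 0.2)
    image = gaussian_filter(intensity.astype(np.float32), sigma=rng.uniform(0.5, 1.5))
    image += rng.normal(0.0, rng.uniform(0.01, 0.1), size=shape)    # imaging noise
    return image, labels

# usage (illustrative): img, lab = synthesize_blob_volume()
```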

Vol. 2024, pp. 7578-7588. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12381811/pdf/
Citations: 0
Ordinal Classification with Distance Regularization for Robust Brain Age Prediction.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00770
Jay Shah, Md Mahfuzur Rahman Siddiquee, Yi Su, Teresa Wu, Baoxin Li

Age is one of the major known risk factors for Alzheimer's Disease (AD). Detecting AD early is crucial for effective treatment and preventing irreversible brain damage. Brain age, a measure derived from brain imaging reflecting structural changes due to aging, may have the potential to identify AD onset, assess disease risk, and plan targeted interventions. Deep learning-based regression techniques to predict brain age from magnetic resonance imaging (MRI) scans have shown great accuracy recently. However, these methods are subject to an inherent regression to the mean effect, which causes a systematic bias resulting in an overestimation of brain age in young subjects and underestimation in old subjects. This weakens the reliability of predicted brain age as a valid biomarker for downstream clinical applications. Here, we reformulate the brain age prediction task from regression to classification to address the issue of systematic bias. Recognizing the importance of preserving ordinal information from ages to understand aging trajectory and monitor aging longitudinally, we propose a novel ORdinal Distance Encoded Regularization (ORDER) loss that incorporates the order of age labels, enhancing the model's ability to capture age-related patterns. Extensive experiments and ablation studies demonstrate that this framework reduces systematic bias, outperforms state-of-art methods by statistically significant margins, and can better capture subtle differences between clinical groups in an independent AD dataset. Our implementation is publicly available at https://github.com/jaygshah/Robust-Brain-Age-Prediction.
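A hedged sketch of the general idea, not the exact ORDER loss: classify discretized age bins with cross-entropy and add a term that penalizes probability mass placed far from the true bin:

```python
import torch
import torch.nn.functional as F

def ordinal_distance_loss(logits, target_bins, lam=0.1):
    """Illustrative ordinal-aware loss: cross-entropy over age bins plus the
    expected absolute distance between predicted and true bins. Not the exact
    ORDER formulation from the paper."""
    ce = F.cross_entropy(logits, target_bins)
    probs = F.softmax(logits, dim=1)                                     # (B, K)
    bins = torch.arange(logits.shape[1], device=logits.device).float()   # bin indices
    dist = (probs * (bins.unsqueeze(0) - target_bins.unsqueeze(1).float()).abs()).sum(dim=1)
    return ce + lam * dist.mean()

# usage (illustrative): loss = ordinal_distance_loss(torch.randn(8, 100), torch.randint(0, 100, (8,)))
```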

Vol. 2024, pp. 7867-7876. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11008505/pdf/
Citations: 0
PathLDM: Text conditioned Latent Diffusion Model for Histopathology.
Pub Date : 2024-01-01 Epub Date: 2024-04-09 DOI: 10.1109/wacv57701.2024.00510
Srikar Yellapragada, Alexandros Graikos, Prateek Prasanna, Tahsin Kurc, Joel Saltz, Dimitris Samaras

To achieve high-quality results, diffusion models must be trained on large datasets. This can be notably prohibitive for models in specialized domains, such as computational pathology. Conditioning on labeled data is known to help in data-efficient model training. Therefore, histopathology reports, which are rich in valuable clinical information, are an ideal choice as guidance for a histopathology generative model. In this paper, we introduce PathLDM, the first text-conditioned Latent Diffusion Model tailored for generating high-quality histopathology images. Leveraging the rich contextual information provided by pathology text reports, our approach fuses image and textual data to enhance the generation process. By utilizing GPT's capabilities to distill and summarize complex text reports, we establish an effective conditioning mechanism. Through strategic conditioning and necessary architectural enhancements, we achieved a SoTA FID score of 7.64 for text-to-image generation on the TCGA-BRCA dataset, significantly outperforming the closest text-conditioned competitor with FID 30.1.
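The conditioning mechanism itself follows the standard cross-attention pattern of latent diffusion models; the sketch below is illustrative only (dimensions and module structure are assumptions, not PathLDM's code), with latent image tokens attending to embeddings of the GPT-summarized report:

```python
import torch
import torch.nn as nn

class TextCrossAttentionBlock(nn.Module):
    """Illustrative cross-attention conditioning: image latent tokens (queries)
    attend to text-report embeddings (keys/values). Not the actual PathLDM code."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, latent_tokens, text_emb):
        # latent_tokens: (B, N_img, dim); text_emb: (B, N_txt, dim)
        q = self.norm(latent_tokens)
        ctx, _ = self.attn(query=q, key=text_emb, value=text_emb)
        return latent_tokens + ctx                      # residual text-conditioned update

# usage (illustrative): out = TextCrossAttentionBlock()(torch.randn(1, 64, 256), torch.randn(1, 77, 256))
```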

Vol. 2024, pp. 5170-5179. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11131586/pdf/
Citations: 0